First, I create a bridged network interface for my DomUs. In this case, the Dom0 sits in a private LAN.
If this is not yet installed, install the bridging utilities:
# sudo aptitude install bridge-utils
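For reference, a minimal bridged /etc/network/interfaces might look like this (interface names and addresses are examples - adapt them to your LAN; the bridge name xenbr0 matches what we pass to xen-create-image later):

```text
# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

auto xenbr0
iface xenbr0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    gateway 10.0.0.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

The bridge_* options come with the bridge-utils package installed above.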
Restart your server, or stop and start networking if you are on a local console.
If you use bridging this way, nothing needs to be changed in the xend configuration.
I cannot help you out here - this usually goes too deep. However, I can give you some hints:
Networking can be really frustrating, especially if you have multiple bridges or public IP addresses. Hosters often have special networking setups that require special actions, like setting up routes on the Dom0 or asking the hoster's support to allow multiple MAC addresses on a switchport.
The most important thing here is: check whether your hoster supports Xen before ordering!
Ideally there is an FAQ or bulletin that describes the virtual machine networking setup.
If you still have heavy problems with networking, I'm available for rent ;)
Usually I am not setting up just one DomU, but a lot. As every sysadmin is a lazy bastard (at least I am), I try to keep my systems as homogeneous as possible: same distribution, same standard packages, same configuration, and so on. For example: you have a mail gateway in your LAN - why not pass it as relay server to every DomU's mail server at the moment of creation? Or what about granting remote access by auto-providing your SSH public key?
Note: this is a rather old-fashioned way of auto-provisioning virtual servers and services, but it works pretty well. If you prefer the hot stuff, have a look at Chef!
I prefer installation with xen-tools, a toolset for semi-automatic DomU creation:
# sudo aptitude install xen-tools
In order to install an Ubuntu release as DomU, the corresponding folder must exist in /usr/lib/xen-tools. Precise is not there yet, so we just copy the karmic folder:
# cp -a /usr/lib/xen-tools/karmic.d /usr/lib/xen-tools/precise.d
Why not symlink? Because you may want to add release-specific changes to the installation recipe, like changing the default postfix configuration as described above, or installing toolsets and monitoring agents like nagios-nrpe or munin-node.
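As a hedged sketch of such a release-specific change (the file name, relay host and key path are made up): xen-tools calls every hook script in the distribution folder with the DomU's mount point as its first argument, so a hook could preseed postfix and drop in your SSH key like this:

```shell
#!/bin/sh
# Hypothetical hook: /usr/lib/xen-tools/precise.d/90-local-tweaks
# xen-tools invokes each hook with the DomU's mount point as $1.
configure_chroot() {
  prefix=$1
  # preseed the relay host for the DomU's postfix (example hostname)
  echo "relayhost = mailgw.example.lan" >> "$prefix/etc/postfix/main.cf"
  # auto-provide our ssh public key for remote access, if we have one
  if [ -f /root/.ssh/id_rsa.pub ]; then
    mkdir -p "$prefix/root/.ssh"
    cat /root/.ssh/id_rsa.pub >> "$prefix/root/.ssh/authorized_keys"
  fi
}
if [ -n "${1:-}" ]; then
  configure_chroot "$1"
fi
```

Don't forget to make the hook executable (chmod +x), or xen-tools will skip it.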
You may also create a tar package and preinstall everything you need. Xen tools can handle tar-templates as well.
Now it’s time to create the DomU.
# xen-create-image --bridge=xenbr0 --lvm=vg0 --dist=precise --fs=xfs --netmask=255.255.255.0 --gateway=10.0.0.254 --size=10Gb --swap=2Gb --memory=512Mb --ip=10.0.0.2 --hostname=myfirstdomU
If you always use the same parameters for your machines, I recommend putting them as defaults in /etc/xen-tools/xen-tools.conf.
Now, rename the config file:
# mv /etc/xen/myfirstdomU.cfg /etc/xen/myfirstdomU
Reason: the config file name is now the same as the DomU name, so you can start/restart/stop with the same command.
Start it with
# xm create myfirstdomU
start a domU:
# xm create <name>
send a shutdown signal to the DomU:
# xm shutdown <name>
Sudden death for the DomU - the same as pulling the power, no proper shutdown. Use this only when the DomU is not responding on the console:
# xm destroy <name>
If you need HVM DomUs (for Linux systems, please use paravirtualized DomUs!), you can do the setup manually:
Create the volumes you want to use, e.g. a 60 GB disk:
# lvcreate -L 60G -n hvmdomu-disk /dev/vg0
and make the installation ISO image available on the Dom0; in my case this is /tmp/InstallImage.iso
Now, use the Ubuntu provided hvm configuration:
# zcat /usr/share/doc/xen-utils-common/examples/xmexample.hvm.gz > /etc/xen/hvmdomu
Edit your new DomU config file and enter what you just created (only changes listed, leave the rest as provided):
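As a sketch, the relevant entries look roughly like this (names, sizes and the bridge are my examples from above; everything else stays as shipped in xmexample.hvm):

```python
# sketch - only the changed entries, leave the rest as provided
builder = 'hvm'
memory  = 2048
name    = 'hvmdomu'
vif     = [ 'type=ioemu, bridge=xenbr0' ]
disk    = [ 'phy:/dev/vg0/hvmdomu-disk,hda,w',
            'file:/tmp/InstallImage.iso,hdc:cdrom,r' ]
boot    = 'd'   # 'd' boots from the CD image; change to 'c' once the installer is done
vnc     = 1
```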
Start the DomU:
# xm create hvmdomu
connect with a VNC viewer to your Dom0 on port 5900 (the next HVM DomU will bind to port 5901, 5902, and so on) and perform the installation.
After the HVM system has installed its own bootloader (usually when it requests the first reboot), change the boot sequence in your config file as commented above. You may also comment out the ISO image once everything is set up.
That’s it.
If increasing the memory does not have any effect, check whether the memory is present but not registered:
# grep offline /sys/devices/system/memory/*/state
note the numbers and activate them one by one:
# echo online > /sys/devices/system/memory/memory[number]/state
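The grep/echo steps above can be wrapped in a small loop (run as root; the optional argument exists only to test it against a fake sysfs tree):

```shell
#!/bin/sh
# Sketch: bring all offline memory sections online, one by one.
# Defaults to the real sysfs path; an alternative root can be passed.
online_all() {
  root=${1:-/sys/devices/system/memory}
  for state in "$root"/memory*/state; do
    [ -f "$state" ] || continue
    # only touch sections that are actually offline
    if grep -q offline "$state"; then
      echo online > "$state"
    fi
  done
}
```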
all DomUs that are present in /etc/xen/auto will be started directly after system startup:
# mkdir /etc/xen/auto
# cd /etc/xen/auto
# ln -s ../<name> .
Note that we have set
XENDOMAINS_RESTORE=false
in /etc/default/xendomains!
Perform these tasks on the Dom0:
This will start the DomU with an attached console and lets you watch the boot process. If this hangs, check the kernel messages. Exit the console with “ctrl + ]”
# xm create -c myfirstdomU
This will attach to the console of an already running DomU. When networking is not working, you can still act on the local console to perform some commands. Exit the console with “ctrl + ]”
# xm console myfirstdomU
There is no console on HVM DomUs, use VNC for diagnostics.
I don’t need to explain ping, do I? If this is not working, check the networking setup:
# ping <DomU IP>
DomU instant cloning and backup with LVM snapshots (THE perfect solution for test/staging systems)
First, install a standard Ubuntu Server System. Select no extras but ssh server.
If you’re going to set up your DomUs as described in my best-practice DomU setup on Ubuntu 12.04 (precise pangolin), please install with LVM and use 15G as root partition and about 5G as swap space. The rest of the volume group is reserved for DomUs. Remember to mount /boot outside of the LVM, usually a 512M ext4 partition at the very beginning of the disk.
# sudo aptitude install xen-hypervisor-amd64
I set the memory for the Dom0 to 512 MB. If you don’t plan additional services and use this host as Dom0 only, this is more than enough.
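The Dom0 memory limit is usually pinned on the hypervisor command line. Assuming GRUB 2, a setting along these lines in /etc/default/grub (picked up by the update-grub below) does it:

```shell
# /etc/default/grub (sketch): pin the Dom0 to a fixed 512 MB
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M"
```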
# sudo update-grub
You could also use ‘xl’; I will use xm in this case.
By default, the system will save the memory of running DomUs when shutting down or restarting the Dom0. This usually takes a very long time and can also cause the system to hang, so we deactivate this in /etc/default/xendomains:
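For reference, the relevant settings in /etc/default/xendomains end up like this (an empty XENDOMAINS_SAVE disables saving altogether):

```shell
# /etc/default/xendomains (sketch)
XENDOMAINS_SAVE=""
XENDOMAINS_RESTORE=false
```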
# xm list
this should give you output similar to this (only Domain-0 running so far, with the 512 MB we assigned to it):
Name                ID  Mem VCPUs  State  Time(s)
Domain-0             0  512     1  r-----    11.8
also, the xl info command gives you the right amount of memory you have on your system - check the total_memory and free_memory fields in its output.
and that’s it.
DomU setup (paravirtualized Linux Guests, HVM Windows Guests)
If you want a virtual host or a location to be jailed to certain GET parameters, use the rewrite module:
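A minimal sketch of such a jail (location, parameter and backend are examples; the trailing “?” keeps nginx from appending the client's original query string):

```nginx
location /list/ {
    # force list=true, discard whatever the client sent
    rewrite ^(.*)$ $1?list=true? break;
    proxy_pass http://127.0.0.1:8080;
}
```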
an even smarter solution is to transport existing GET parameters, too:
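The sketch then becomes (again with example names; $args carries the client's original query string, and the trailing “?” prevents nginx from appending it a second time):

```nginx
location /list/ {
    # prepend list=true, keep the client's parameters after it
    rewrite ^(.*)$ $1?list=true&$args? break;
    proxy_pass http://127.0.0.1:8080;
}
```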
this way, nginx forwards all other GET parameters. The jailed “list=true” should be safe, too: in my tests, the app behind it used the “first come, first served” method:
http://example.com/list/?list=false
rewrites to:
/list/?list=true&list=false
evaluates to:
list = true
With the above, accessing external APIs gives you more possibilities: to hide details of the API calls (credentials, keys, service name, etc.), just add parameters at the proxy level, keeping them away from your app and your visitors:
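A sketch with made-up names - the API host, path and apikey parameter are placeholders for whatever your provider expects:

```nginx
location /api/ {
    # the key is added here at the proxy; it never reaches app or visitors
    rewrite ^ /v1/service?apikey=MY_SECRET_KEY&$args? break;
    proxy_pass https://api.example.com;
}
```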
http basic auth should work, too (I didn’t test this - feedback appreciated!):
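Untested sketch: inject the Authorization header at the proxy level (host and credentials are placeholders):

```nginx
location /api/ {
    proxy_pass https://api.example.com;
    # "dXNlcjpwYXNzd29yZA==" is base64 of "user:password";
    # generate yours with: echo -n 'user:password' | base64
    proxy_set_header Authorization "Basic dXNlcjpwYXNzd29yZA==";
}
```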
The site needed to support both plain HTTP and encrypted HTTPS, so I decided to start slanger in standard mode (no SSL) and put an SSL-terminating proxy in front to handle the wss:// URIs.
There were rumors that pound was capable of proxying TCP requests. I have worked with pound for quite a long time and did not manage to get it working. However, stunnel offered a fast and solid solution:
The code snippets apply to Ubuntu 10.04, but this should work on other environments, too. I installed stunnel with
# aptitude install stunnel4
and ended up with this configuration:
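As a sketch of such a setup (certificate paths and ports are examples; slanger listens on plain port 8080 here, stunnel terminates SSL on 8443 for the wss:// connections):

```ini
; /etc/stunnel/stunnel.conf (sketch)
cert = /etc/ssl/certs/example.com.crt
key  = /etc/ssl/private/example.com.key

[websockets]
accept  = 8443
connect = 127.0.0.1:8080
```

On Ubuntu, also set ENABLED=1 in /etc/default/stunnel4, otherwise the init script will refuse to start.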
If you can spare an extra server or an additional IP address for your websocket server, it may be better to use the standard ports 80 and 443.
Possible pitfall: make sure the hostname (don’t use an IP address!) of your pusher clients matches the common name of the certificate provided to stunnel. Otherwise some browsers (Chrome at least) will fail silently when connecting to secure websocket URIs (wss://example.com/).
I presume installing apache2 is no problem for you. This short tutorial covers a very minimalistic Icinga installation: no idoutils, no distributed setup, no check_mk. So, let’s do it:
aptitude install icinga pnp4nagios
and follow debconf’s instructions
In /etc/icinga/icinga.cfg change the following variable
process_performance_data=1
and set this one:
broker_module=/usr/lib/pnp4nagios/npcdmod.o config_file=/etc/pnp4nagios/npcd.cfg
Now, edit /etc/default/npcd and set
RUN="yes"
finally, enable the views in Icinga’s standard templates: for hosts, edit /etc/icinga/objects/generic-host_icinga.cfg and add
action_url /pnp4nagios/graph?host=$HOSTNAME$
for services, edit /etc/icinga/objects/generic-service_icinga.cfg and add:
action_url /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$
One last thing: in /etc/apache2/conf.d/pnp4nagios.conf, change this line to point to the “icinga” directory instead of “nagios3”:
AuthUserFile /etc/icinga/htpasswd.users
finally, start npcd and restart icinga by executing
# service apache2 restart
# service npcd start
# service icinga restart
login to http://hostname/icinga with user icingaadmin and the password you specified. Enjoy Icinga with pnp4nagios!
One last hint: if you’re still using Nagios and thinking about switching to Icinga, just copy your nagios config files over to icinga and be much happier. In most cases this awesome fork works out of the box!