David Heidt's Blog

linux and webserver stuff

My Best-practice DomU Setup on Ubuntu 12.04 (Precise Pangolin)

| Comments

In this post I will demonstrate how I set up rather big infrastructures (more than 10 DomUs, more than 2 Dom0s).

Networking

First, I create a bridged network interface for my DomUs. In this case, it is a Dom0 in a private LAN.

If this is not yet installed, install the bridging utilities:

# sudo aptitude install bridge-utils
/etc/network/interfaces
# Local
auto lo
iface lo inet loopback

# LAN interface
auto eth0
iface eth0 inet manual
        post-up ifconfig eth0 0.0.0.0 up
        pre-down ifconfig eth0 0.0.0.0 down


# Bridge to LAN 
auto xenbr0
iface xenbr0 inet static
  address 10.0.0.1
  netmask 255.255.255.0
  gateway 10.0.0.254
  dns-nameservers 10.0.0.254

  # configure the bridge
  bridge_ports eth0
  bridge_stp no
  bridge_fd 2

Restart your server, or stop and start networking if you're on a local console.

If you use bridging this way, nothing needs to be changed in the xend config.

non-local networking

I cannot help you out here - this usually goes too deep. However, I can give you some hints:

Networking can be really frustrating, especially if you have multiple bridges or public IP addresses. Hosters often have special networking setups that require special actions, like setting up routes on the Dom0 or asking the hoster's support to allow multiple MAC addresses on a switch port.

The most important thing here is: Check if your hoster supports XEN before ordering!

Ideally, there is an FAQ or bulletin that describes the virtual machine networking setup.

If you still have heavy problems with networking, I'm available for hire ;)

Paravirtualized DomUs

Usually I am not setting up just one DomU, but a lot. As every sysadmin is a lazy bastard (at least I am), I try to keep my systems as homogeneous as possible: same distribution, same standard packages, same configuration, etc. For example: you have a mail gateway in your LAN. Why not pass it as the relay server to every DomU's mail server at the moment of creation? Or what about granting remote access by auto-providing your SSH public key?

Preparing for many DomUs

Note: this is a rather old-fashioned way of auto-provisioning virtual servers and services, but it works pretty well. If you prefer the hot stuff, have a look at Chef!

I prefer installation with xen-tools, a toolset for semi-automatic DomU creation:

# sudo aptitude install xen-tools

In order to install an Ubuntu release as DomU, the corresponding folder must exist in /usr/lib/xen-tools. Precise is not there, so we just copy the karmic folder:

# cp -a /usr/lib/xen-tools/karmic.d /usr/lib/xen-tools/precise.d

Why not a symlink? Because you may want to add release-specific changes to the installation recipe, like changing the default Postfix configuration as described above, or installing toolsets and monitoring agents like nagios-nrpe or munin-node.
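To make that concrete, here is a sketch of such a release-specific hook (the script name, gateway address, and key path are my assumptions, not part of the original setup). xen-tools runs each hook in the dist folder with the DomU's mount point as the first argument:

```shell
#!/bin/sh
# Hypothetical hook: /usr/lib/xen-tools/precise.d/40-local-defaults
# xen-tools calls each hook with the DomU's mount point as the first
# argument; the tmpdir fallback only exists so the sketch runs standalone.
prefix="${1:-$(mktemp -d)}"

# relay all outgoing mail through the LAN mail gateway (assumed address)
mkdir -p "$prefix/etc/postfix"
echo "relayhost = 10.0.0.254" >> "$prefix/etc/postfix/main.cf"

# grant remote access by pre-seeding our ssh public key, if we have one
mkdir -p "$prefix/root/.ssh"
chmod 700 "$prefix/root/.ssh"
cat /root/.ssh/id_rsa.pub >> "$prefix/root/.ssh/authorized_keys" 2>/dev/null || true
```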

You may also create a tar package with everything you need preinstalled; xen-tools can handle tar templates as well.

Now it’s time to create the DomU.

Create a DomU

# xen-create-image --bridge=xenbr0 --lvm=vg0 --dist=precise --fs=xfs --netmask=255.255.255.0 --gateway=10.0.0.254 --size=10Gb --swap=2Gb --memory=512Mb --ip=10.0.0.2 --hostname=myfirstdomU

If you always use the same parameters for your machines, I recommend putting them as defaults in /etc/xen-tools/xen-tools.conf
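For the setup above, the defaults could look like this (a sketch; the values mirror the command-line flags, so check the key names against the shipped file):

```
# /etc/xen-tools/xen-tools.conf (excerpt)
lvm     = vg0
dist    = precise
fs      = xfs
size    = 10Gb
swap    = 2Gb
memory  = 512Mb
netmask = 255.255.255.0
gateway = 10.0.0.254
bridge  = xenbr0
```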

Now, rename the config file:

# mv /etc/xen/myfirstdomU.cfg /etc/xen/myfirstdomU

Reason: the config file name is now the same as the DomU name, so you can start/restart/stop with the same command.

Start it with

# xm create myfirstdomU

DomU control commands

start a domU:

# xm create <name> 

send a shutdown signal to the DomU:

# xm shutdown <name> 

Sudden death for the DomU - same as pulling the power, no proper shutdown. Use this only when the DomU is not responding on the console:

# xm destroy <name> 

Fully virtualized DomUs (HVM) (tested with Windows 7 and Windows Server 2008 R2)

If you need HVM DomUs (for Linux systems, please use paravirtualized DomUs!), you can do the setup manually:

Prerequisites

Create the volumes you want to use, e.g. a 60GB disk:

# lvcreate -L 60G -n hvmdomu-disk /dev/vg0

and provide the installation ISO image on the Dom0; in my case this is /tmp/InstallImage.iso

Now, use the Ubuntu-provided HVM example configuration:

# zcat /usr/share/doc/xen-utils-common/examples/xmexample.hvm.gz > /etc/xen/hvmdomu

Edit your new DomU config file and enter what you just created (only the changes are listed; leave the rest as provided):

/etc/xen/hvmdomu
[...]
name = 'hvmdomu'

vif = [ 'type=ioemu, bridge=xenbr0' ]

disk =  [
                'phy:/dev/vg0/hvmdomu-disk,hda,w',
                'file:/tmp/InstallImage.iso,hdc:cdrom,r'
        ]


boot="dc"
#change this to "cd" after installation!

vnc=1
vnclisten=0.0.0.0
vncunused=1
vncpasswd='supersecret'
# you may use a different one ;)


[...]

start the DomU:

# xm create hvmdomu

connect with a VNC viewer to your Dom0 on port 5900 (the next HVM DomU will bind to port 5901, then 5902, and so on) and perform the installation.

After the HVM system has installed its own bootloader (usually when it requests its first reboot), change the boot sequence in your config file as commented above. You may also comment out the ISO image once everything is set up.

That’s it.

Best practice hints

  • instead of xm create/shutdown/console, use abbreviations: xm crea/shut/con
  • When using Ubuntu 12.04 both as Dom0 and DomU, hot-adding and removing memory works out of the box. Nice!
  • When using xfs as the filesystem, growing the disk size without downtime is possible, too!
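The online disk grow from the last bullet looks roughly like this (a sketch; the volume name, size, and mount point are assumptions):

```
# on the Dom0: grow the logical volume backing the DomU's disk
lvextend -L +5G /dev/vg0/myfirstdomU-disk

# inside the DomU: let xfs grow into the new space
xfs_growfs /
```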

If increasing the memory does not have any effect, check whether the memory is present but not registered:

# grep offline /sys/devices/system/memory/*/state

note the numbers and activate them one by one:

# echo online > /sys/devices/system/memory/memory[number]/state
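If there are many offline banks, the activation can be scripted. This sketch walks a sysfs memory tree and flips every offline bank; on a live Dom0 you would point it at the real path:

```shell
#!/bin/sh
# walk a sysfs memory tree and flip every offline bank to online
online_all() {
    for f in "$1"/memory*/state; do
        [ -e "$f" ] || continue
        grep -q offline "$f" && echo online > "$f"
    done
    return 0
}
# on a live Dom0 you would run:  online_all /sys/devices/system/memory
```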

starting DomUs on Dom0 startup

all DomUs that are present in /etc/xen/auto will be started directly after system startup:

# mkdir /etc/xen/auto
# cd /etc/xen/auto
# ln -s ../<name> .

Note that we have set

XENDOMAINS_RESTORE=false

in /etc/default/xendomains!

Troubleshooting DomUs

Perform these tasks on the Dom0:

This starts the DomU with an attached console and lets you watch the boot process. If it hangs, check the kernel messages. Exit the console with "Ctrl + ]":

# xm create -c myfirstdomU

This attaches to the console of an already running DomU. When networking is broken, you can still work on the local console to perform some commands. Exit the console with "Ctrl + ]":

# xm console myfirstdomU

There is no console for HVM DomUs; use VNC for diagnostics.

I don’t need to explain ping, do I? If this is not working, check the networking setup:

# ping <DomU IP> 

coming up next:

DomU instant cloning and backup with lvm snapshots (THE perfect solution for test/staging systems)

recipe based on:

Setting Up Ubuntu 12.04 (Precise Pangolin) as XEN Dom0

| Comments

Setting up a XEN Dom0 with a LTS release of Ubuntu Linux is easy again. Hooray!

First, install a standard Ubuntu server system. Select no extras except the SSH server.

If you're going to set up your DomUs as described in my best-practice DomU setup on Ubuntu 12.04 (precise pangolin), please install with LVM and use 15G as the root partition and about 5G as swap space. The rest of the volume group is reserved for DomUs. Remember to mount /boot outside of the LVM, usually a 512M ext4 partition at the very beginning of the disk.
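As an illustration, the resulting layout could look like this (the device names and the 500G disk size are assumptions on my part):

```
/dev/sda1   512M   ext4                 /boot   (outside the LVM)
/dev/sda2   rest   LVM physical volume  -> volume group vg0
    root    15G    /
    swap     5G    swap
    free   ~480G   reserved for DomU volumes
```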

Install the XEN packages

# sudo aptitude install xen-hypervisor-amd64

modify grub configuration in /etc/default/grub

/etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT="Xen 4.1-amd64"
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=3
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="rootdelay=180"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu.
GRUB_DISABLE_OS_PROBER=true

# Xen boot parameters for all Xen boots
#GRUB_CMDLINE_XEN=""

# Xen boot parameters for non-recovery Xen boots (in addition to GRUB_CMDLINE_XEN)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M"

I set the memory for the Dom0 to 512MB. If you don't run additional services and use this host as a Dom0 only, this is plenty.

update the bootloader:

# sudo update-grub

select default toolstack

/etc/default/xen
# Configuration for Xen system
# ----------------------------

# There exists several tool stacks to configure a Xen system.
# [...]
# Attention: You need to reboot after changing this!
TOOLSTACK="xm"

You could also use 'xl'; I will use xm in this case.

change default behaviour of XEN DomU management:

By default, the system will save the memory of running DomUs when shutting down or restarting the Dom0. This usually takes a very long time and can also cause the system to hang, so we deactivate it in /etc/default/xendomains:

/etc/default/xendomains
[...]
XENDOMAINS_SAVE=""
[...]
XENDOMAINS_RESTORE=false
[...]

reboot and run

# xm list 

this should give you an output like this:

Name                                        ID   Mem VCPUs   State   Time(s)
Domain-0                                     0   511     8     r-----       9.9

also, the xl info command shows the right amount of memory in your system:

output on a 64G machine
# xl info | grep memory
total_memory           : 65523
free_memory            : 64169

and that’s it.

coming up next:

DomU setup (paravirtualized Linux Guests, HVM Windows Guests)

recipe based on:

Edited on 2012-04-09:

  • changed xl to xm
  • added config chapter for /etc/default/xendomains

Playing With Nginx - Manipulating GET Parameters

| Comments

forced GET Parameters

If you want a virtual host or a location to be jailed to certain GET parameters, use the rewrite module:

force one GET parameter
location /list {
  rewrite ^(.*)$ $1?list=true;
}

an even smarter solution is to carry over existing GET parameters, too:

force one and preserve existing GET parameters
location /list {

    rewrite ^(.*)$ $1?list=true&$args break;

}

this way, nginx forwards all other GET parameters. The jailed "list=true" should be safe, too. In my tests, the app behind it used the "first come, first served" method:

http://example.com/list/?list=false

rewrites to:

/list/?list=true&list=false

evaluates to:

list = true

use the reverse proxy module for API calls

With the above, accessing external APIs gives you more possibilities: to hide details of the API calls (credentials, keys, service name, etc.), just add parameters at the proxy level, keeping them away from your app and your visitors:

hidden API call
location /example-api {
  # updated on Feb. 8th 2013
  rewrite ^/example-api/(.*)$ /$1?apikey=secretKey&userid=exampleuser&$args break;
  proxy_pass http://api.example.com;

}
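With the location above, a request maps like this (the paths and parameter values are hypothetical):

```
# what the visitor sends (no credentials visible)
GET http://yoursite.example/example-api/v1/items?limit=5

# what nginx passes to the upstream
GET http://api.example.com/v1/items?apikey=secretKey&userid=exampleuser&limit=5
```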

HTTP basic auth should work, too (I didn't test this, feedback appreciated!):

hidden API call with http basic auth
location /example-api {

  rewrite ^(.*)$ $1?apikey=secretKey&$args break;
  proxy_pass http://user:password@api.example.com;

}

SSL Websocket Proxy With Stunnel Howto

| Comments

Recently we built a new Rails webapp using the Pusher protocol in combination with Slanger as the websocket server.

The site needed to support both plain http and encrypted https, so I decided to start Slanger in standard mode (no SSL) and put an SSL-terminating proxy in front to handle the wss:// URIs.

There were rumors that pound was capable of proxying TCP requests. I have worked with pound for quite a long time and did not manage to get it working. However, stunnel offered a fast and solid solution:

The code snippets apply to Ubuntu 10.04, but this should work in other environments, too. I installed stunnel with

# aptitude install stunnel4

and ended up with this configuration:

/etc/stunnel/stunnel.conf
; Certificate/key is needed in server mode and optional in client mode
cert = /path/to/cert-or-cert-chain.pem
key = /path/to/private.key

; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = all
; no, we don't want SSLv2
options = NO_SSLv2
; Some extra strong ciphers
ciphers = ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM

; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/lib/stunnel4/
setuid = stunnel4
setgid = stunnel4
; PID is created inside the chroot jail
pid = /stunnel4.pid

; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
;compression = zlib


[https]
accept  = <your public IP>:8443

; slanger server listens on port 8080
connect = <public or local IP>:8080

If you can spare an extra server or an additional IP address for your websocket server, it may be better to use the standard ports 80 and 443.

Possible pitfall: make sure the hostname (don't use an IP address!) your Pusher clients connect to matches the common name of the certificate provided to stunnel. Otherwise some browsers (Chrome at least) will fail silently when connecting to secure websocket URIs (wss://example.com/).
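To check which common name stunnel actually presents, you can query it with openssl (hostname and port are examples matching the config above):

```
# show the subject of the certificate served on the wss port
openssl s_client -connect example.com:8443 < /dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```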

Installing Icinga and Pnp4nagios on Ubuntu 12.04 (Precise Pangolin)

| Comments

This is actually so easy and painless that I had to write it down:

I presume installing apache2 is no problem for you. This short tutorial covers a very minimalistic Icinga installation: no idoutils, no distribution, no check_mk. So, let's do it:

aptitude install icinga pnp4nagios

and follow debconf’s instructions

In /etc/icinga/icinga.cfg change the following variable

process_performance_data=1

and set this one:

broker_module=/usr/lib/pnp4nagios/npcdmod.o config_file=/etc/pnp4nagios/npcd.cfg

Now, edit /etc/default/npcd and set

RUN="yes"

next, enable the graph views in Icinga's standard templates. For hosts, edit /etc/icinga/objects/generic-host_icinga.cfg and add:

action_url  /pnp4nagios/graph?host=$HOSTNAME$

for services, edit /etc/icinga/objects/generic-service_icinga.cfg and add:

action_url  /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$
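In context, the service template then looks like this (excerpt; the surrounding options stay as shipped):

```
define service {
    name          generic-service
    [...]
    action_url    /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$
}
```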

One last thing: in /etc/apache2/conf.d/pnp4nagios.conf, change this line from the "nagios3" directory to "icinga":

AuthUserFile /etc/icinga/htpasswd.users

finally, start npcd and restart icinga by executing

# service apache2 restart
# service npcd start
# service icinga restart

Log in to http://hostname/icinga with the user icingaadmin and the password you specified. Enjoy Icinga with pnp4nagios!

One last hint: if you're still using Nagios and thinking about switching to Icinga: just copy your Nagios config files over and be much happier. In most cases this awesome fork works out of the box!