In this post I will demonstrate how to use Ubuntu 10.04 as a Linux Container.
This post assumes you have configured your host; see my previous post if you need assistance configuring your host node.
This post has been updated. Since the original post I have learned a bit about the init scripts and about LXC, and Ubuntu 10.04 was released. I found that the majority of the problems with using LXC are with the boot scripts.
In this example I am installing LAMP, openssh-server, and UFW, as I imagine these are popular options. You are free to install more or less, but these common tools serve as common examples and should allow you to adapt if you need additional or different services.
Basically, we need to take 3 steps:
1. Use debootstrap to install a minimal base system to be used as a container, i.e. make a root file system (rootfs) for an LXC container using Ubuntu Lucid (10.04).
2. Generate a set of configuration files on the host.
3. Clean up the boot scripts within the container.
Note: Commands in this tutorial are run as root, so to obtain a root shell use:
sudo -i
Note: The working directory for this tutorial is /lxc, so config.ubuntu, fstab.ubuntu, and rootfs.ubuntu are all located in /lxc.
Make a rootfs via debootstrap
sudo -i
cd /lxc
debootstrap --variant=minbase --arch i386 lucid rootfs.ubuntu
Change “--arch i386” to “--arch amd64” for a 64-bit container.
Configure the container
Fix devices in rootfs.ubuntu/dev
This step is OPTIONAL. Use this script if you wish to minimize the devices in rootfs.ubuntu/dev (the defaults created by debootstrap work without any adjustments).
#!/bin/bash
# bodhi.zazen's lxc-config
# Makes default devices needed in lxc containers
# modified from http://lxc.teegra.net/
ROOT=$(pwd)
DEV=${ROOT}/dev
if [ "$ROOT" = '/' ]; then
printf "\033[22;35m\nDO NOT RUN ON THE HOST NODE\n\n"
tput sgr0
exit 1
fi
if [ ! -d "$DEV" ]; then
printf "\033[01;33m\nRun this script in rootfs\n\n"
tput sgr0
exit 1
fi
rm -rf ${DEV}
mkdir ${DEV}
mknod -m 666 ${DEV}/null c 1 3
mknod -m 666 ${DEV}/zero c 1 5
mknod -m 666 ${DEV}/random c 1 8
mknod -m 666 ${DEV}/urandom c 1 9
mkdir -m 755 ${DEV}/pts
mkdir -m 1777 ${DEV}/shm
mknod -m 666 ${DEV}/tty c 5 0
mknod -m 666 ${DEV}/tty0 c 4 0
mknod -m 666 ${DEV}/tty1 c 4 1
mknod -m 666 ${DEV}/tty2 c 4 2
mknod -m 666 ${DEV}/tty3 c 4 3
mknod -m 666 ${DEV}/tty4 c 4 4
mknod -m 600 ${DEV}/console c 5 1
mknod -m 666 ${DEV}/full c 1 7
mknod -m 600 ${DEV}/initctl p
mknod -m 666 ${DEV}/ptmx c 5 2
exit 0
The script is very slightly modified from the page linked in its header (http://lxc.teegra.net/) and is saved as /usr/local/bin/lxc-config.
Make it executable:
chmod u+x /usr/local/bin/lxc-config
Run the script in rootfs.ubuntu
cd /lxc/rootfs.ubuntu
/usr/local/bin/lxc-config # fix /dev
Modify the rootfs
Edit the sources. Using any editor, open /lxc/rootfs.ubuntu/etc/apt/sources.list and edit the contents to look like this:
deb http://us.archive.ubuntu.com/ubuntu/ lucid main universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ lucid-security main universe multiverse
chroot into rootfs.ubuntu and configure the container:
chroot /lxc/rootfs.ubuntu /bin/bash
Run the following commands in the chroot.
apt-get install --force-yes -y gpgv
apt-get update
# set locales
apt-get -y install language-pack-en
locale-gen en_US.UTF-8
/usr/sbin/update-locale LANG="en_US.UTF-8" LANGUAGE="en_US.UTF-8" LC_ALL="en_US.UTF-8" LC_CTYPE="C"
# Add to the installed applications
apt-get install -y adduser apt-utils console-setup iproute iptables mysql-server nano netbase openssh-blacklist openssh-blacklist-extra openssh-server php5 php5-mysql iputils-ping rsyslog sudo ufw vim
#Set a root passwd
passwd
# As an alternate to setting a root password, you may of course add a new user and configure sudo.
# Configure the hostname of the container and /etc/hosts
# Change "host_name" to your desired host name
# Change "192.168.0.10" to the IP address you wish to assign to the container
# (it must match lxc.network.ipv4 in config.ubuntu on the host)
echo "host_name" > /etc/hostname
echo "127.0.0.1 localhost host_name" > /etc/hosts
echo "192.168.0.10 host_name" >> /etc/hosts
# Fix mtab
rm /etc/mtab
ln -s /proc/mounts /etc/mtab
Next edit /etc/environment and define your environmental variables:
LANG="en_US.UTF-8"
LANGUAGE="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
LC_CTYPE="C"
Exit the chroot
#exit chroot
exit
Generate the HOST LXC configuration files
This is done on the host node and will set the container resources, networking, and confinement.
I call it config.ubuntu. Make sure the following information is accurate:
container name (lxc.utsname)
network (lxc.network.ipv4)
rootfs (lxc.rootfs)
fstab (lxc.mount)
/lxc/config.ubuntu
lxc.utsname = ubuntu
lxc.tty = 4
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.ipv4 = 192.168.0.10/24
lxc.rootfs = /lxc/rootfs.ubuntu
lxc.mount = /lxc/fstab.ubuntu
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
# /dev/pts/* - pts namespaces are "coming soon"
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
The following lines are critical!
lxc.network.ipv4 = 192.168.0.10/24
lxc.rootfs = /lxc/rootfs.ubuntu
lxc.mount = /lxc/fstab.ubuntu
lxc.cgroup.*
lxc.network.ipv4 sets the container IP address (192.168.0.10) and netmask (/24).
lxc.rootfs instructs LXC to use pivot_root rather than chroot, which is important for containment.
lxc.mount is a replacement for rootfs.ubuntu/etc/fstab. Use this file to define mount points in your container. The sample configuration file is the minimum; you may use bind mounts to add shared directories.
The lxc.cgroup.* lines define the resources available to the container (via cgroups).
Make a fstab for lxc
Do not confuse this with the host fstab!
This file is used in place of rootfs.ubuntu/etc/fstab.
I call it fstab.ubuntu. Make sure the following information is accurate (use the full path):
/lxc/fstab.ubuntu
none /lxc/rootfs.ubuntu/dev/pts devpts defaults 0 0
none /lxc/rootfs.ubuntu/proc proc defaults 0 0
none /lxc/rootfs.ubuntu/sys sysfs defaults 0 0
none /lxc/rootfs.ubuntu/var/lock tmpfs defaults 0 0
none /lxc/rootfs.ubuntu/var/run tmpfs defaults 0 0
/etc/resolv.conf /lxc/rootfs.ubuntu/etc/resolv.conf none bind 0 0
Note: I am suggesting binding the host /etc/resolv.conf in the container for convenience. You may chattr +i /etc/resolv.conf if you wish.
Mount a shared directory: For example, to share /home between the host and your container add this line to fstab.ubuntu:
/home /lxc/rootfs.ubuntu/home none bind 0 0
Be sure to understand the security implications of sharing directories between the host and your containers.
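As a hedged sketch building on the bind line above (same paths as this tutorial), a shared directory can be exposed read-only to limit what the container can do to it. Note that on Lucid-era kernels the ro flag on a bind mount in fstab is often ignored at mount time:

```
# Read-only bind sketch in fstab.ubuntu (the ro flag may be ignored
# by older kernels; see the remount note below)
/home /lxc/rootfs.ubuntu/home none bind,ro 0 0
```

If the kernel ignores the ro flag, running `mount -o remount,ro,bind /lxc/rootfs.ubuntu/home` on the host after the bind is made is the usual way to enforce it.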
Modify the init scripts
Many of the init (upstart) scripts are not necessary in LXC containers and will either fail, cause delays in starting your container, or send error messages (or all of the above).
Replacement init script
This script is essential. Starting a container (lxc-start) is NOT the same process as booting a computer. We have defined the mount points and networking in config.ubuntu and fstab.ubuntu, so we need only minimal boot scripts in our container.
In the case of Ubuntu, the critical file we need to replace is rootfs.ubuntu/etc/init/rc-sysinit
rm -f /lxc/rootfs.ubuntu/etc/init/rc-sysinit
cat << 'EOF' > /lxc/rootfs.ubuntu/etc/init/rc-sysinit
#!/bin/bash
# Whatever is needed to clear out old daemon/service pids from your container
rm -f $(find /var/run -name '*pid')
rm -f /var/lock/apache/*
route add default gw 192.168.0.1
exit 0
EOF
chmod a+x /lxc/rootfs.ubuntu/etc/init/rc-sysinit
This script is modified from here.
If you wish to use DHCP in your containers you will need to get the networking init scripts working and configure rootfs.ubuntu/etc/network/interfaces. It can be done, but IMO it is more hassle than using a static IP, as the static IP is set in config.ubuntu as a “one liner” (see above).
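For reference, if you do go the DHCP route (untested in this post, and only after the networking init scripts are restored), the entry in rootfs.ubuntu/etc/network/interfaces would take the usual Debian/Ubuntu form:

```
# rootfs.ubuntu/etc/network/interfaces (DHCP sketch)
auto eth0
iface eth0 inet dhcp
```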
Remove as many init scripts as possible
MAKE SURE YOU RUN THESE COMMANDS IN THE LXC ROOTFS
cd /lxc/rootfs.ubuntu/etc/init
rm -f console* control* hwclock* module* mount* network-interface* plymouth* procps* tty{4,5,6}.conf udev* upstart*
We should have only the following files (init scripts) in /lxc/rootfs.ubuntu/etc/init (remove any init scripts I overlooked in my list above).
mysql.conf
rc-sysinit
rc.conf
ufw.conf
As you can see, we need only a few scripts. ufw.conf is the easiest way, IMO, to enable UFW; if you do not use UFW you may remove it.
We need to edit mysql.conf and ufw.conf and change the start line to read:
start on startup
All the other services should start automatically. If they do not, either edit the service's .conf file (“start on startup” usually works) or add a line to start them in rc-sysinit.
For services not covered in this tutorial, I suggest you move their init scripts to a backup location rather than delete them, then try starting your container. If a service does not start, restore the script and change the start line in its .conf file to “start on startup”.
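As a minimal sketch (the service name apache2 is illustrative; substitute whatever service fails to come up), a hand-written upstart job dropped into rootfs.ubuntu/etc/init might look like:

```
# /etc/init/apache2.conf -- illustrative upstart job for an LXC container
description "start apache2 at container startup"
start on startup
exec /etc/init.d/apache2 start
```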
Enable iptables / UFW
Iptables / UFW seem to work well, but the LXC container needs to be able to load the relevant kernel modules.
Here is what I did to enable iptables / UFW.
From the host, run (in /lxc):
mkdir -p rootfs.ubuntu/lib/modules/2.6.32-22-generic/kernel
cp /lib/modules/2.6.32-22-generic/modules.dep rootfs.ubuntu/lib/modules/2.6.32-22-generic/
cp -R /lib/modules/2.6.32-22-generic/kernel/net rootfs.ubuntu/lib/modules/2.6.32-22-generic/kernel/
You will need to change “2.6.32-22-generic” to your actual kernel version, and update the container copy whenever the host kernel is updated.
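Rather than typing the kernel version by hand, a small sketch like the following (assuming the rootfs path used in this tutorial) derives it from uname -r, so the copied module tree always matches the running host kernel:

```shell
#!/bin/sh
# Copy the running kernel's modules.dep and net modules into the
# container rootfs without hard-coding the version string.
KVER=$(uname -r)
ROOTFS=/lxc/rootfs.ubuntu   # assumed container path from this tutorial
if [ -d "$ROOTFS" ]; then
    mkdir -p "$ROOTFS/lib/modules/$KVER/kernel"
    cp "/lib/modules/$KVER/modules.dep" "$ROOTFS/lib/modules/$KVER/"
    cp -R "/lib/modules/$KVER/kernel/net" "$ROOTFS/lib/modules/$KVER/kernel/"
fi
```

Re-run it after a host kernel upgrade to refresh the container's copy.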
You should now be able to run iptables or UFW in the container (once it is started of course).
Create and manage the container
Create the container:
lxc-create -f /lxc/config.ubuntu -n ubuntu
Start the container
lxc-start -n ubuntu
Assuming you get no error messages, you can start the container with the -d option.
lxc-start -n ubuntu -d
You should now be able to access the container with either lxc-console or ssh
ssh root@192.168.0.10
lxc-console -n ubuntu
Exit lxc-console with Ctrl-a q (you do not need to log out).
Stop the container from the host node
lxc-stop -n ubuntu
Destroy the container
lxc-destroy -n ubuntu
Oddities
Just a few odds and ends …
1. rsyslog logs to the HOST NODE /var/log files, not the container.
2. If you edit the configuration files (config.ubuntu or fstab.ubuntu) you will need to stop, destroy, and recreate the container. This is fast and does not affect the rootfs.
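The stop/destroy/recreate cycle from point 2 can be wrapped in a small sketch (the names are the ones used in this tutorial; lxc-destroy here only removes the container definition, not /lxc/rootfs.ubuntu, because the rootfs lives outside /var/lib/lxc):

```shell
#!/bin/sh
# Reapply an edited config.ubuntu or fstab.ubuntu by recreating
# the container definition; the rootfs itself is untouched.
NAME=ubuntu
CONFIG=/lxc/config.ubuntu
if [ -f "$CONFIG" ] && command -v lxc-create >/dev/null 2>&1; then
    lxc-stop -n "$NAME"
    lxc-destroy -n "$NAME"
    lxc-create -f "$CONFIG" -n "$NAME"
else
    echo "lxc tools or $CONFIG not found; would recreate $NAME"
fi
```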
3. How much HD space / RAM does it use?
On my test system with 1 GB RAM, running an Xfce desktop (and browsing the web):
rootfs.ubuntu takes up 474 MB of HD space.
With the lucid container stopped I am using 402 MB RAM.
When I start the lucid container I jump up to a whopping 419 MB RAM.
4. X applications. IMO the easiest way to forward X (graphical) applications from the container to the host is with ssh -X.
For example, ssh -X user@container_ip xeyes works fine.
Alternatively you can use a VNC server, Xephyr, or configure one of your consoles (Ctrl-Alt-F3).
Based on your pages I have written a “shell-ready” HOWTO for Ubuntu lxc:
http://fex.rus.uni-stuttgart.de/lxc-ubuntu
It contains several enhancements, like the lxc meta-tool:
http://fex.rus.uni-stuttgart.de/lxc.html
@Ulli Horlacher
Glad to know some of the information I posted has helped. Thank you for the pingback; I will look at your links soon.
Hi,
root@vm1-dt:/lxc/rootfs.ubuntu# lxc-start -n ubuntu
lxc-start: no configuration file for ‘/sbin/init’ (may crash the host)
What could be the problem? I followed your description.
My LXC would run inside a Lucid KVM guest.
TIA,
I can go further:
root@vm1-dt:/home/rattila# lxc-start -n ubuntu
swapon: /dev/disk/by-uuid/8219733b-cc4d-4907-ac5e-f27da1fcf580: swapon failed: Device or resource busy
mountall: swapon /dev/disk/by-uuid/8219733b-cc4d-4907-ac5e-f27da1fcf580 [26] terminated with status 255
mountall: Problem activating swap: /dev/disk/by-uuid/8219733b-cc4d-4907-ac5e-f27da1fcf580
The previous problem was that I had destroyed the container and forgot to re-create it.
Did you write an init script? Did you make it executable?
No.
I followed the above recipe.
I found a Debian description which is working.
I’m not a script or init guru. :-(
I just want the same functionality as Debian: an SSH-only machine which can be expanded later.
Hi,
Thanks for the enormously useful information on LXC configurations. Great help.
Query: the debootstrap process takes a long time to download and install each time. Is there a way to reduce this delay? I'm looking to bring up a large number of containers at once to work with.
Any ideas or suggestions?
Thanks
Mys
@Mys: I would suggest you use a local repository.
Thanks, local repository works :-) .
Can you help in this query as well …?
Is there a way to add my own custom applications to boot along with the containers? I have a large 80 MB Java app which I need to compile in all containers after booting. The host machine does have the executable, which I can point to and execute, but is there a way to do this during container creation, like Windows startup programs?
Thanks & Regards
Mys
/lxc/rootfs.ubuntu# lxc-start -n ubuntu
lxc-start: no configuration file for ‘/sbin/init’ (may crash the host)
What could be the problem
Sounds as if you need a /sbin/init in the guest.
lxc-start: no configuration file for ‘/sbin/init’ (may crash the host)
Checked the /sbin/init and found to be binary both in the host and the container.
@Enrass – Sorry, but I have no idea.
My suggestion would be to post to the LXC mailing list, be sure to include host and guest OS, version of LXC (you will almost certainly be asked to be running the most current version from git), and possibly the init script from the guest.
You are probably starting the wrong container (a non-existent name). Check your container name using lxc-ls.
> Query: debootstrap process takes a long time to download & install each
> time. Is there a way to reduce this delay? Im looking to bring a large
> number of containers all at once to work around with.
See http://fex.rus.uni-stuttgart.de/lxc.html
“lxc -C newvm” needs only 2 seconds on my LXC-server.
Ulli Horlacher
Set up a local cache. One method is at the bottom of the page here:
http://www.debian-administration.org/article/Installing_new_Debian_systems_with_debootstrap/print
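Along the same lines, debootstrap itself can cache its package downloads with --make-tarball / --unpack-tarball. A hedged sketch (the tarball path is illustrative, and the first run still needs network access and root):

```shell
#!/bin/sh
# Build the package tarball once, then reuse it for every container
# instead of re-downloading each time.
TARBALL=/lxc/lucid-debs.tgz
if [ -d /lxc ] && [ "$(id -u)" -eq 0 ] && command -v debootstrap >/dev/null 2>&1; then
    # one-time download pass (fetches packages but installs nothing)
    debootstrap --make-tarball="$TARBALL" lucid /tmp/lucid-work
    # later runs unpack from the tarball instead of the network
    debootstrap --unpack-tarball="$TARBALL" --variant=minbase --arch i386 \
        lucid rootfs.ubuntu
fi
```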
Hi, I don’t understand this part regarding init scripts: “If they do not, either edit the service .conf file (start on startup usually works) or add a line to start them in rc-sysinit”.
For example, apache does not create an /etc/init/apache2.conf file to put “start on startup” in. Where should I add the line, and what should that line say? I don’t find an rc-sysinit file. :S
I’m using lxc-0.7.5 under ubuntu 10.04 with a lucid container. I’ve left the init scripts as they come by default after creating a container with lxc-lucid.
Thanks for your help
@Adrian Are you having a problem ?
If not, then you do not need to make any edits. If so, then you will need to edit the init scripts.
The problem is that apache and webmin don't start on container startup, so each time I have to ssh into the container and start both services manually. I'm not sure what to check to make them start on startup. Thanks for answering.
@Adrian
Then you will have to edit the init scripts. They are in /etc/init.
I am not familiar with them off the top of my head, but Ubuntu is in theory supporting LXC, so if you are not familiar with editing the init scripts then I suggest your next step is to file a bug report on Launchpad.
Along those lines, I find Debian Squeeze is a better option than Ubuntu, and it is what I use, at least in OpenVZ and LXC.
Thanks Bodhi, I’ll see what I can do, I need to get them to work for ubuntu as I’m converting existing lucid images to lxc
@Adrian
See:
http://upstart.ubuntu.com/getting-started.html
use
“start on startup”
usually works in LXC (and openvz) VPS
Converting images can be a bit of a hassle; it is sometimes (usually) easier to back up the data and do a fresh install.
Please consider expanding your tutorial for sandboxie refugees who now use ubuntu. (LXC firefox)
@bob marley – On Ubuntu I would use AppArmor (rather than LXC). You would (hard) link the firefox binary to, say, /usr/local/bin/sandfox, then copy and modify the AppArmor profile for firefox to cover sandfox.
Those who have a problem with “lxc-start: no configuration file for ‘/sbin/init’ (may crash the host)”: just add “-f” and the path to the configuration file to lxc-start. For example:
lxc-start -n vps101 -l DEBUG -f /var/lib/lxc/vps101/config
Hi, I followed your tutorial but got some problems with the ssh/console function. I can ping my container but I cannot ssh or lxc-console into it; it seems as if the sshd service does not start on startup.
any ideas? thanks
@Ryan – Hard to say. What version of Ubuntu? What host / kernel?
Probably best for you to file a bug report.
Personally, I use openvz, I find LXC is under rapid development and I got frustrated with the lack of documentation.
Hello everybody.
I used to have this problem : « lxc-start: no configuration file for ‘/sbin/init’ (may crash the host) ».
The problem was that I had another file in /var/lib/lxc/mylxccontainer/ other than “config” and “rootfs”.
Be sure that there is nothing else and « lxc-start -n mylxccontainer » will work without specifying the configuration file. \o/
Yours is the only article on the net that I found that mentions everything about the locale setup. Thank you!
@Matthias – You are most welcome. It is an old post and (minor) configuration detail, but it is nice, IMO, to set locale.