Kushal Das

FOSS and life. Kushal Das talks here.


Using rkt on my Fedora servers

Many of you already know that I moved all my web applications into containers running on Fedora Atomic based hosts. In the last few weeks, I moved a few of them from Docker to rkt on Fedora 25. I have previously written about trying out rkt in Fedora. Now I am going to talk about how we can build our own rkt based container images, and then use them in real life.

Installation of rkt

First I am going to install all the required dependencies. I added htop, tmux, and vim to the list because I love to use them :)

$ sudo dnf install systemd-container firewalld vim htop tmux gpg wget
$ sudo systemctl enable firewalld
$ sudo systemctl start firewalld
$ sudo firewall-cmd --add-source=172.16.28.0/24 --zone=trusted
$ sudo setenforce Permissive

As you can see from the above-mentioned commands, rkt still does not work well with SELinux on Fedora. We hope this problem will be solved soon.

Then install the rkt package as described in the upstream document.

$ sudo rkt run --interactive --dns=8.8.8.8 --insecure-options=image kushal.fedorapeople.org/rkt/fedora:25

The above-mentioned command downloads the Fedora 25 image I built and then executes it. This is the base image for all of my other work images. You may not have to provide the DNS value, but I prefer to do so. The --interactive flag provides you with an interactive prompt. If you forget to provide this flag on the command line, your container will just exit; I was confused for some time trying to find out what was going on.
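If you hit the same confusion, two commands are handy here (shown as a rough sketch; check the rkt documentation for the exact options in your version): rkt list shows the pods and their states, and rkt gc removes the exited ones.

$ sudo rkt list
$ sudo rkt gc --grace-period=0s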

Building our znc container image

Now the next step is to build our own container images for particular applications. In this example I am first going to build one for znc. To build the images we will need the acbuild tool. You can follow the instructions here to install it on the system.

I am assuming that you have your znc configuration handy. If you are installing for the first time, you can generate your configuration with the following command.

$ znc --makeconf

Now below is the znc.acb file for my znc container. We can use the acbuild-script tool to create the container image from this file.

#!/usr/bin/env acbuild-script

# Start the build with an empty ACI
begin

# Name the ACI
set-name kushal.fedorapeople.org/rkt/znc
dep add kushal.fedorapeople.org/rkt/fedora:25

run -- dnf update -y
run -- dnf install htop vim znc -y
run -- dnf clean all

mount add znchome /home/fedora/.znc
port add znc tcp 6667

run --  groupadd -r fedora -g 1000 
run -- useradd -u 1000 -d /home/fedora -r -g fedora fedora

set-user fedora

set-working-directory /home/fedora/
set-exec -- /usr/bin/znc --foreground 

# Write the result
write --overwrite znc-latest-linux-amd64.aci

If you look closely at both the mount and port adding commands, you will see that I have assigned a name to the mount point, and also to the port (along with the protocol). Remember that in the rkt world, all mount points and ports are referenced by these assigned names. So in one image the name http may be assigned to the standard port 80, while in another image the author can choose to use port 8080 with the same name. While running the image, we decide how to map those names to the host side or vice versa. Execute the following command to build our first image.

$ sudo acbuild-script znc.acb

If everything goes well, you will find an image named znc-latest-linux-amd64.aci in the current directory.

Running the container

$ sudo rkt --insecure-options=image --debug run --dns=8.8.8.8  --set-env=HOME=/home/fedora --volume znchome,kind=host,source=/home/kushal/znc,readOnly=false  --port znc:8010 znc-latest-linux-amd64.aci

Now let us dissect the above command. I am using the --insecure-options=image option as I am not verifying the image, and the --debug flag prints some more output on stdout, which helps to find any problems with a new image you are building. As I mentioned before, I passed a DNS entry to the container using --dns=8.8.8.8. Next, I am overriding the $HOME environment variable; I still have to dig more to find out why it was pointing to /root/, but for now remember that --set-env can help us set or override any environment variable inside the container.

Next, we mount the /home/kushal/znc directory (which has all the znc configuration) on the mount point named znchome, also specifying that it is not a read-only mount. In the same way, we map host port 8010 to the port named znc inside the container. As the very last argument, I pass the image itself.

The following is an example where I copy a binary (an ircbot application written in Go) into the image.

#!/usr/bin/env acbuild-script

# Start the build with an empty ACI
begin

# Name the ACI
set-name kushal.fedorapeople.org/rkt/ircbot
dep add kushal.fedorapeople.org/rkt/fedora:25

copy ./ircbot /usr/bin/ircbot

mount add mnt /mnt

set-working-directory /mnt
set-exec -- /usr/bin/ircbot

# Write the result
write --overwrite ircbot-latest-linux-amd64.aci
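
Running this image looks very similar to the znc one; something like the following should work (the file name ircbot.acb and the host directory /home/kushal/ircbot-data are just examples, and we map the mount named mnt to a directory on the host):

$ sudo acbuild-script ircbot.acb
$ sudo rkt --insecure-options=image run --dns=8.8.8.8 --volume mnt,kind=host,source=/home/kushal/ircbot-data,readOnly=false ircbot-latest-linux-amd64.aci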

In future posts, I will explain how you can run the containers as systemd services. To start with, you can use a tmux session to keep them running. If you have any doubts, remember to go through the rkt documentation; I found it very informative. You can also ask your questions in the #rkt channel on Freenode.
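
For example, with tmux you can start a named session, run the rkt command from the previous section inside it, detach with Ctrl-b d, and reattach later.

$ tmux new -s znc
$ tmux attach -t znc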

Now it is an exercise for the reader to find out the steps to create an SELinux module from the audit log, and then load it on the system. The last step should be putting SELinux back into Enforcing mode.
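
One possible way to do the first two steps (the module name rktlocal is just an example) is to generate a policy module from the AVC denials in the audit log with audit2allow, and then load it with semodule:

$ sudo ausearch -m AVC -ts boot | audit2allow -M rktlocal
$ sudo semodule -i rktlocal.pp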

$ sudo setenforce Enforcing

Fedora Atomic 24 is now available

Just in case you missed the news, Adam Miller has already announced the availability of the Fedora Atomic release based on Fedora 24. You can get it from the usual place. Dusty has already uploaded it to Atlas for Vagrant. You can try it out with the following commands.

    $ vagrant init fedora/24-atomic-host; vagrant up

As Adam mentioned in his mail, we are sorry for the delay, but we will keep improving the process. Thank you everyone for helping us with this release.

Event report: Fedora Cloud FAD 2016

Around a month back, the Fedora Cloud Working Group met in Raleigh for the two-day Cloud FAD. The goal of the meet was to agree on the future we want and to go through the major action items for the coming releases. I reached Raleigh one day before; Adam Miller was my roommate for this trip. I managed to meet Spot after a long time, and this was my first visit to the mothership :) I also managed to meet my new teammate Randy Barlow.

Adam took the lead in the event. We first went through the topics from the FAD wiki page, and then arranged them in the order we wanted to discuss them.

Documentation was the first item. Everyone in the room agreed that it is the most important part of the project, as we communicate with our users through the documentation. If a project can provide better documentation with clear examples, it will be able to attract more users. Last year I started a repo to make documents available in a faster manner, but that did not work out well. Jared volunteered, and also suggested having an open repo where people can submit documents without worrying much about the format; he will help to convert them to the right format as required. We also noted a few important examples/documents we want to see. Feel free to write about any of these and submit a pull request to the repo.

Automated testing was the next important point. We went through the current state of tests in the Autocloud project. Most of our tests there came from various bugs filed against the images/tools. Dusty was very efficient in creating the corresponding issues on GitHub from the Red Hat Bugzilla entries, which in turn were converted into proper Python3 unittest based test cases by the volunteers. Next we moved on to the automated testing of layered image builds; Tflink provided valuable input in this discussion. We talked about SPC(s) and how the maintainers of the images will be responsible for their tests. For the tests related to the Cloud/Atomic images, the Cloud Working Group will be responsible.

There were a few super long discussions related to the Fedora, Atomic, and OpenShift projects. Many members of the OpenShift development team also joined in the discussion. The results of these discussions will be seen on the mailing lists (too much content for this blog post). We also discussed Vagrant and the related tools people are using or creating. With help from Randy, I managed to package the vagrant-digitalocean plugin during the FAD. There will be a Fedora Magazine post with more details on it.

We also agreed upon having monthly updated base images. We still have to figure out a few missing points so that we can have a smooth updated release.

Public cloud provider outreach was one of the last points we discussed. We have to pick the different providers one by one, and make sure that we can provide updated releases to them for our users to consume. The point about more documentation came up in this discussion too.

Report: Fedora 24 Cloud/Atomic test day

Last Tuesday we had a Fedora 24 test day for the Fedora Cloud and Atomic images. With help from Adam Williamson, I managed to set up the test day. This was the first time I used the test day web app, where users can enter the results of their tests.

Sayan helped to get us the list of AMI(s) for the Cloud Base image. We also found our first bug of this test day there; it was not an issue in the images, but in fedimg. fedimg is the application which creates the AMI(s) in an automated way, and it was creating AMI(s) for the Atomic images. Today Sayan applied a hotfix for it; I hope this will take care of the issue.

While testing the Atomic image, I found that docker was not working in the image, though it worked in the Cloud Base image. I filed a bug for it, and it seems we had already found the root cause in another bug. The other major issue was the upgrade of the Atomic image failing, which was also a known issue.

In total, 13 people from the Fedora QA and Cloud SIG groups volunteered for the test day. It was a successful event as we found some major issues, though we would of course be happier to have no issues at all :)

Fedora 24 Cloud/Atomic test day tomorrow

Tomorrow we are going to have a test day for the Fedora 24 Cloud/Atomic images. The details of the event can be found on the wiki page. We have also set up an event in the testdays application; this is where you can see all the tests we want to try out, and you can also submit your results in the same place.

What do you need to participate in this test day?

A recent Fedora system where you can download the latest nightly builds for Fedora 24. If you have access to OpenStack or any other cloud IaaS system where you can upload the image and boot an instance, that will also do. In case you have Vagrant set up on Mac or Windows, you can download the corresponding vagrant-virtualbox .box image and boot an instance from it.
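
For example (the box name fedora-24-test and the file path below are just placeholders), after downloading the .box file you can add and boot it like this:

$ vagrant box add fedora-24-test /path/to/the-downloaded.vagrant-virtualbox.box
$ vagrant init fedora-24-test; vagrant up --provider=virtualbox
$ vagrant ssh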

Where can you find us or get help?

We will be online in the #fedora-test-day IRC channel on the Freenode server.

Come and join us to make Fedora 24 a great release.

Event report: rootconf 2016

Rootconf is the largest DevOps and cloud infrastructure related conference in India. This year's event happened on 14-15th April at the MLR Convention Centre, Bangalore. I traveled from Pune on day one of the event: woke up at 3 AM, took the first flight to Bangalore, and picked up Ramky on my way to the venue. We managed to skip most of the (in)famous Bangalore traffic thanks to a government holiday.

I carried the Fedora standee and the tablecloth from Pune, and we set up the Fedora table at the conference. Meanwhile the place was filling up with attendees. We found so many old foss.in friends: Premshree, Pankaj, Raj, Vasundhar, and many others I met again after years. There were 300+ attendees at the event.

I should also mention the Cat5 cable lanyard.

The day started with Zainab welcoming everyone in a super fast manner :) The first talk of the day was "Happiness through Crash-Only software" by Antoine Grondin. The whole idea that failure is part of life, and that we only make our software more complex by trying to avoid it, is something we all should keep in mind. He gave some ideas about how Digital Ocean works behind the scenes. Instead of a normal start/end, he explained how treating recovery from a failed state as the start, and assuming that the process will fail at the end, can help. Seeing hand-drawn slides was another exciting thing for me; I used to do that quite a lot in my previous talks.

The next talk was from Raj Shekhar, who gave an overview of Mesos. Even in the limited time, his talk was a very good fit and still described the use case well. This was the first time I listened to a talk about Mesos, and it was a very good one.

After this, during the tea break, we moved out to the booth/table. We had Ramky, Lalatendu, Aditya, /me, and rtnpro at the table talking to the attendees. The Fedora DVD(s) and badges went out like hot cakes :) Throughout the two days we had many questions related to Project Atomic; the great-looking stickers actually helped to get more attention. In between, I went through the details of my talk with Raj, and he provided some valuable input.

Food is always great at hasgeek events, but having so many food startups from Bangalore in one place seriously added new flavor to the conference. hasgeek is not only growing itself, but also making sure that there is a great community all across the conference, from talks to food, everything.

We spent the rest of day 1 talking to people in and around the booth area. hasgeek gets some great recordings done of their talks, so we can view them in the future :) In the evening we had dinner at the venue, once again very tasty food :) I went to rtnpro's house by 10pm as I still wanted to work on my slides.

Day 2

(Photo by Lalatendu)

I woke up by 6:30 AM and went through my slides once again. The title of my talk was "Failure at Cloud and rescued by Python". The agenda of the talk was to encourage DevOps/sysadmins to write Python scripts rather than shell scripts. It was the first talk of the day. I hope it went well; the slides are available here. As I mentioned Ansible in my talk, we had many Ansible questions asked at the Fedora booth.

Next was Premshree, who talked about "Continuous deployment at Scale" with a detailed example of how they handle things at Etsy. I was glad to see that they use an IRC bot to do the deployments :)

After lunch I attended "Working in and with Open Source Communities" by Bernd Erk. This talk was full of tips about communities and how to treat everyone as a member. One major point was that a community leader should strike a balance between the people who talk the most and the silent ones of the community. He also emphasized getting new members into the community, "because you will die someday" :)

We spent the rest of the day in and around the Fedora booth. I met Tarun Dua after a long time, and came back to Pune on a late-night flight. I will post the links to the talk videos when I get them.

Quick way to get throw away VMs using Tunir

The latest Tunir package has a --debug option which can help us get some quick VMs up, where we can do some destructive work and then just remove them. Below is an example quickjob.cfg file to fire up two VMs using the Fedora Cloud Base image.

[general]
cpu = 1
ram = 1024

[vm1]
user = fedora
image = /home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

[vm2]
user = fedora
image = /home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

In the quickjob.txt file we keep just one command as a sanity check :)

vm1 free -m

After we execute Tunir, we will see something like the following as output.

# tunir --multi quickjob
... lots of output ...

Non gating tests status:
Total:0
Passed:0
Failed:0
DEBUG MODE ON. Destroy from /tmp/tmpiNumV2/destroy.sh

The above-mentioned directory also has the temporary private key used to log in to the instances. The output also contains the IP addresses of the VM(s). We can log in like this:

# ssh fedora@192.168.122.28 -i /tmp/tmpiNumV2/private.pem -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no

The last two options of the ssh command make sure that we do not store the host keys of the throwaway guests in the known_hosts file. To clean up afterwards, we can do the following.

# sh /tmp/tmpiNumV2/destroy.sh

How to mount a raw guest image?

While testing the latest Fedora 24 cloud images, I had to mount one locally so that I could inspect the files inside. We can do this by using the offset value calculated from the output of the fdisk command.

# fdisk -l /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body
Disk /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfeecffb4

Device                                                                   Boot Start     End Sectors Size Id Type
/var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body1 *     2048 6291455 6289408   3G 83 Linux

In this case the start value is 2048, and each sector is 512 bytes, so our offset value is 2048 * 512 = 1048576.

# mount -o offset=1048576 /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body /mnt
[root@fedora-build f23]# ls /mnt/
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
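
When you are done inspecting the files, unmount it again:

# umount /mnt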

State of tests for Fedora Cloud and Atomic in March 2016

Until the Fedora 22 release, we tested our Cloud images only manually. The amazing Fedora QA team organized test days, and also published detailed documentation on the wiki about how to test the images. People tried to help as and when possible, as not having access to a cloud was a problem for many. The images are also big (compared to any random RPM), so only people with enough bandwidth could help.

During the Fedora 23 release cycle, we worked on a change for the Automated Two Week Atomic release. A part of it was about having automated testing, which we enabled using the Autocloud project. This automatically tests every Cloud Base and Atomic qcow2 image, as well as the libvirt and VirtualBox based Vagrant images.

This post explains the state of the currently activated tests. These tests are written as Python3 unittest cases.

At first we run 5 non-gating tests; failing any of these will not block a release, but at the end of each run we get a summary of all these non-gating tests. They include bzip2, cpio, diffutils, and Audit.

Then we go ahead with the basic test cases: things like journald logging, package install (only on the Base image), SELinux being in enforcing mode, and no service failing during startup of the image.

The next test checks that /tmp is world writable. We then test the mount status of the /tmp filesystem; even if it is a link, it should behave properly. Our next test checks that for Vagrant images we are using a predictable naming convention for network devices, basically checking for the existence of eth0.
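
If you want to repeat a few of these checks by hand on a running instance, the rough shell equivalents look something like the following: getenforce should print Enforcing, the /tmp permissions should be 1777, and eth0 should be listed.

$ getenforce
$ stat -c %a /tmp
$ ip link show eth0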

We then disable the crond service; after a reboot we make sure that the service is still in the disabled state, and do the rest of the service manipulations. We also check that the user journald log file exists after that reboot; this comes from a regression test. Then we reboot the instance again.

After this second reboot, we test the status of the crond service once again, and then we move on to our special test cases related to the Atomic host. This means these tests will run only on Atomic hosts (both the cloud and the Vagrant ones).

In the first test we check whether the docker package is installed at all. Yes, if you guessed that this is a regression test, then you are correct: we once had an image without the docker package inside :) Next, we test the docker storage setup, which should be up and in a running state. We then run the busybox image and see that it runs properly. The atomic command is used next to start the same container. Pulling in the latest Fedora image and running it is the next test. After that, we try to mount / as /host in a container. The next test checks that /bin, /sbin, and /usr are mounted read-only on the Atomic host. In our final test, we check that a privileged container is able to talk to the host docker daemon.
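
The actual tests are Python3 unittest cases running through Autocloud, but roughly, the manual equivalents of a few of these checks look like this: rpm confirms that docker is installed, docker run confirms that a container actually starts, and findmnt shows whether /usr is mounted read-only.

$ rpm -q docker
$ sudo docker run --rm busybox echo hello
$ findmnt -no OPTIONS /usr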

We also have a GitHub wiki page explaining how to write a new test case. Feel free to ping us if you want to contribute and help Fedora fly high in the cloud.