Kushal Das

FOSS and life. Kushal Das talks here.


Fedora Cloud WG during last week of 2016-02

Fedora Cloud Working Group meets every Wednesday at 17:00 UTC in the #fedora-meeting-1 IRC channel on the Freenode server. This week we had 15 people attending the meeting, which is about the usual attendance. The points to be discussed in the meeting are tracked on the fedorahosted Trac as tickets with a special keyword, meeting. This basically means that if you want something to be discussed in the next cloud meeting, add a ticket there with the meeting keyword.

After the initial roll call, and discussions related to the action items from last week, we moved into the tickets for this week's meeting. I had continued my action item on investigating adding a CDROM device to the embedded Vagrantfile in the Vagrant images we generate. But as that seems to be hardcoded inside Imagefactory, I am feeling less motivated to add any such thing there.

Then we moved on to discuss the FAD proposal, which still needs a lot of work. The next major discussion was related to the Fedora 24 changes from the Cloud WG. I have updated the ticket with the status of each change. It was also decided that nzwulfin will update the cloud list with more information related to the Atomic Storage Clients change.

The next big discussion was related to Container "Packager" Guidelines and Naming Conventions. One of the major upcoming changes is being able to build layered images on Fedora Infrastructure, just as we build RPMs today. Adam Miller wrote the initial version of the documents for the packagers of those layered images. I have posted my open questions in the trac ticket, and others have started doing so as well. Please take your time and go through both the docs pointed to in that ticket. As containers are taking a major part in all the cloud discussions, this will be a very valuable guide for future container packagers.

I still have some open work left from this week. During Open Floor, Steve Gordon brought up the issues Magnum developers are facing while using Fedora Atomic images. I will be digging more into that this week. The log of the meeting can be found here. Feel free to join the #fedora-cloud channel if you have any open queries.

Updates from CentOS Cloud SIG

Back in 2014 we started working on the CentOS Cloud SIG. This SIG started with a focus on packaging and maintaining different FOSS IaaS platforms. We are not a vendor-specific group, which means anyone who wants to keep their cloud project maintained in the CentOS ecosystem is very welcome to join and work in this SIG.

We have our regular IRC meeting every Thursday at 15:00 UTC in the #centos-devel IRC channel on Freenode. The regular meeting agenda is kept updated on an Etherpad. The major points from this week's meeting are:

  • We have an upcoming RDO test day on 10th March. More details are in this blog post.
  • There will be a live demo about TripleO; you can view it on 9th March on YouTube.
  • OpenNebula folks were missing from the meeting, but we remember that they need more help in porting the required packages. So, if you want to start contributing to OpenNebula or CentOS, this can be your chance :)

The next meeting is next week at the same time. See you all there.

Cloud work in the last week (20150921)

Last week Dusty started looking into a few bugs related to the cloud base image. His first work was on the locale issues coming up in the base image. If you log in to the system, you will see many warnings like:

-bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
-bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
-bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
-bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
-bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory

Last Monday I tested the patch locally first, and then pushed it to spin-kickstarts. His next work was related to the cloud-base-vagrant image, where we removed extlinux and used grub for booting; you can view the commit here.

I also worked on the broken i386 cloud image, which had not been getting built for the last few weeks. While trying to build the last known broken kickstart locally, I found that dracut was stuck in a loop; it never went on to the next stage. With the help of Ian McLeod, the amazing upstream author of Imagefactory, we chased down the issue in #anaconda, and Kevin (nirik) confirmed that Oz requires more RAM to build the i386 image. We tested that locally, and Dennis will update the koji builders to get the issue fixed. If you are trying to build the i386 version of the Cloud image locally, remember to increase the libvirt memory in /etc/oz/oz.cfg.

The rest of my time mostly went into the tunir and autocloud projects. There will be a few more blog posts on them in the coming days.
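For reference, the relevant setting lives under the [libvirt] section of /etc/oz/oz.cfg; the 2048 value below is my assumption for illustration, not the value used on the builders:

```ini
# /etc/oz/oz.cfg -- bump the guest RAM Oz hands to libvirt
[libvirt]
memory = 2048
```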

tunir 0.7 is out

Today I have released Tunir 0.7. Tunir is a simple CI tool which developers can even use on their laptops. There are a few major changes in this release. The first one is the removal of database support. Tunir itself will not save any data anywhere, which also means the --stateless command line argument is now unnecessary. Even if you do not pass that option, tunir will print the output of the tests on STDOUT.

Second, and the biggest change, is the ability to test on Vagrant boxes using the vagrant-libvirt plugin. On a Fedora 22 system, you can install vagrant with the following command.

$ sudo dnf install vagrant-libvirt

The following is an example job configuration for Vagrant. This of course assumes that you have already downloaded that box file somewhere on the local filesystem. In case you have not noticed, we are now generating Vagrant images for both the Cloud Base and Atomic images in the Fedora Project. You can download them from here.

{
  "name": "fedora",
  "type": "vagrant",
  "image": "/home/Fedora-Cloud-Atomic-Vagrant-22-20150521.x86_64.vagrant-libvirt.box",
  "ram": 2048,
  "user": "vagrant",
  "port": "22"
}
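The job file is only half of the setup: tunir also reads a plain-text file listing the commands to run inside the box, one per line. The file names and the --job invocation below follow the tunir documentation as I remember it, so treat them as assumptions:

```shell
# Companion test file for a job named "fedora": one command per line,
# each executed inside the Vagrant box over ssh.
cat > fedora.txt <<'EOF'
free -m
ls /
EOF
# With fedora.json and fedora.txt in the current directory you would run:
#   $ sudo tunir --job fedora
wc -l fedora.txt
```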

I have already built the RPMs for Fedora; they are right now in the testing repo.

Testing systemd-networkd based Fedora 22 AMI(s)

A few days back I wrote about a locally built Fedora 22 image which has systemd-networkd handling the network configuration. You can test that image locally on your system, or on an OpenStack cloud. In case you want to test the same on AWS, we now have two AMI(s), one in us-west-1, and the other in ap-southeast-1. Details about the AMI(s) are below:

Region           AMI Name            AMI ID         Virtualization
ap-southeast-1   fedora22-networkd   ami-e89895ba   HVM

Start an instance from these images, and log in. In case you want to use different DNS settings, feel free to remove the /etc/resolv.conf symlink, and put a normal /etc/resolv.conf file in place with the content you want.
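A minimal sketch of that swap, shown against a scratch path so it can be tried anywhere; on the instance the target is /etc/resolv.conf and you would run it as root, and the 192.168.1.1 nameserver is only a placeholder:

```shell
# Replace the resolv.conf symlink with a plain static file.
target=./resolv.conf              # use /etc/resolv.conf on the instance
rm -f "$target"                   # drops the existing symlink, if any
printf 'nameserver 192.168.1.1\n' > "$target"
cat "$target"
```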

Testing Fedora Cloud image with systemd-networkd

One of the change proposals I have submitted for Fedora 23 is about having systemd-networkd for network configuration. You can find the change page here. Instead of carrying the old network-scripts, we wanted to move to networkd, which is a part of systemd. A couple of notable benefits are how it will help us keep the image size sane by not bringing in any external dependencies, and the similarity between many different distributions' cloud images from the users' point of view. You can look into the discussions on the Talk page, and the trac ticket.

In last week's cloud meeting we decided to do a build of the Fedora 22 cloud image with systemd-networkd on it. I made the required changes, and did the local build. You can download the qcow2 image; remember it is 218MB. You can use it in any cloud environment in the normal way. If you want to learn, and play around with the configurations, you may want to read this page. Please try the image and tell us what you think in the comments section of this post.
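If you want a starting point for experimenting, a DHCP-only .network unit dropped into /etc/systemd/network/ is enough; the eth0 interface name and file name are assumptions on my part, and the systemd.network man page covers the full syntax:

```ini
# /etc/systemd/network/eth0.network (hypothetical file name)
[Match]
Name=eth0

[Network]
DHCP=yes
```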

CentOS Cloud SIG update

For the last few months we have been working on the Cloud Special Interest Group in the CentOS project. The goal of this SIG is to provide the basic guidelines and infrastructure required by FOSS cloud infrastructure projects so that we can build and maintain the packages inside the official CentOS repositories.

We have regular meetings at 15:00 UTC every Thursday in the #centos-devel IRC channel. You can find last week's meeting log here. RDO (OpenStack), OpenNebula and Eucalyptus were the first few projects to come forward and participate in forming the SIG. We also have a good amount of overlap with the Fedora Cloud SIG.

RDO is almost ready to do a formal release of Kilo on CentOS 7. The packages are in the testing phase. The OpenNebula team has started the process of getting the required packages built on CBS.

If you want to help, feel free to join the #centos-devel channel, and give us a shout. We do need more helping hands to package and maintain various FOSS cloud platforms.

There are also two GSoC projects under CentOS which are related to the Cloud SIG. The first one is "Cloud in a box", and the second one is "Lightweight Cloud Instance Contextualization Tool". Rich Bowen, and Haikel Guemar are the respective mentors for the projects.

My worknotes related to Fedora Cloud

I am keeping all my worknotes in a git repo, and the same is being rendered on readthedocs. I will try to explain the processes and tools involved in the Fedora Cloud SIG. The notes start with the imagefactory project, as we use it to build the cloud base images in koji.

There is also a FAQ section where I am putting up all the random questions that come to my mind.

If you have any questions related to the notes, feel free to ask me on IRC, or you can create an issue on GitHub; as usual, patches are most welcome :D

s3cmd and Walrus for your private object storage

What is s3cmd?

s3cmd is a command line tool to access files in the Amazon S3 object storage. It is written in Python.

What is Walrus?

Walrus is the object storage of Eucalyptus.

Few more terms

Objects : All files which are stored in the object storage are known as objects.

Key : The filename of an object is known as its key.

Bucket : The named place where people store files; it behaves like a directory, with some limitations. Buckets must have unique names.
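These terms map directly onto the s3:// URLs used below; splitting one with plain shell parameter expansion makes the bucket/key structure visible:

```shell
# s3://<bucket>/<key> -- the first path component is the bucket,
# the rest is the key.
url="s3://testbucket/sample-configuration.txt"
path="${url#s3://}"          # strip the scheme
bucket="${path%%/*}"         # up to the first slash
key="${path#*/}"             # everything after it
echo "bucket=$bucket key=$key"
```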

Install the latest s3cmd from the Eucalyptus fork

Install the latest version of s3cmd from git in a virtual environment.

$ virtualenv s3
$ source s3/bin/activate
$ git clone https://github.com/eucalyptus/s3cmd
$ cd s3cmd
$ python setup.py install

Configuration file

The default configuration filename is .s3cfg

The following is a minimal example.

access_key = <your access_key>
secret_key = 3Jy1VKDZmVpwdsffdf8d2PsmfhVojfAW8RmFO2FD
host_base =
host_bucket =
service_path = /services/Walrus

access_key and secret_key come from your eucarc file. You also know the IP of the host, which goes into host_base and host_bucket.

To list buckets

$ s3cmd ls
2013-12-24 16:27  s3://foolto
2013-10-19 13:56  s3://mybucket
2013-10-24 15:35  s3://official
2013-12-03 14:47  s3://snapset-5599d8c1-296f-480f-b2be-1e2b03933e42

To list contents of a bucket

$ s3cmd ls s3://foolto
                       DIR   s3://foolto//t2/
2013-12-24 16:51     52680   s3://foolto/lscpu

To create a new bucket

$ s3cmd mb s3://testbucket
Bucket 's3://testbucket/' created

Uploading a file to the bucket

$ s3cmd put .s3cfg s3://testbucket/sample-configuration.txt
.s3cfg -> s3://testbucket/sample-configuration.txt  [1 of 1]
213 of 213   100% in    0s   668.66 B/s  done

In this example we uploaded the configuration file to the testbucket. We can list the bucket content once more.

$ s3cmd ls s3://testbucket/
2013-12-26 04:57       213   s3://testbucket/sample-configuration.txt

Downloading an object from the bucket

$ s3cmd get s3://testbucket/sample-configuration.txt
s3://testbucket/sample-configuration.txt -> ./sample-configuration.txt  [1 of 1]
213 of 213   100% in    0s  1315.45 B/s  done

Delete an object in the bucket

$ s3cmd del s3://testbucket/sample-configuration.txt
File s3://testbucket/sample-configuration.txt deleted

You can learn more about s3cmd here. If you want to learn about more of the available configuration values, you can check this link.

Sessions on Eucalyptus

Yesterday we had a session on Eucalyptus at my house. In total 7 people attended the session, including me. We started with an all-in-one cloud installation on the Intel NUC(s). After the cloud was up and running, I installed the Fedora 20 cloud image on it.

During the installation we had some nice discussions around the different technology choices and features of Eucalyptus. A few people also noted the key differences and similarities between OpenStack and Eucalyptus.

Today evening we had a session on "Open Cloud" using the Eucalyptus Community Cloud in the #dgplug channel on Freenode. 15 people attended the session. We went through the different parts of the user console. People created security groups and key pairs. Everyone started their own instances (with a few hiccups) and sshed into them. The UI is simple enough for the students to get the idea.

In the future I will be doing more sessions on IRC, starting from installing your own private cloud to learning different technologies using the cloud. I will also put the notes on my blog. If you think I should cover any particular piece of technology, please leave a comment on this post.