
Report: Fedora 24 Cloud/Atomic test day

Last Tuesday we had a Fedora 24 test day about the Fedora Cloud and Atomic images. With help from Adam Williamson I managed to set up the test day. This was the first time for me to use the test day web app, where users can enter the results from their tests.

Sayan helped get us the list of AMI(s) for the Cloud base image. We also found our first bug of this test day here; it was not an issue in the images, but in fedimg. fedimg is the application which creates the AMI(s) in an automated way, and it was creating AMI(s) for the Atomic images. Today Sayan applied a hotfix for the same, and I hope this will take care of the issue.

While testing the Atomic image, I found docker was not working in the image, though it worked in the Cloud base image. I filed a bug for the same; it seems we had already found the root cause in another bug. The other major issue was the upgrade of the Atomic image failing, which was also a known issue.

In total 13 people from the Fedora QA and Cloud SIG groups volunteered in the test day. It was a successful event as we found some major issues, though we would be happier to have no issues at all :)

Fedora 24 Cloud/Atomic test day tomorrow

Tomorrow we are going to have a test day for the Fedora 24 Cloud/Atomic images. The details of the event can be found on the wiki page. We have also set up an event in the testdays application; this is where you can see all the tests we want to try out, and you can also submit your results in the same place.

What do you need to participate in this test day?

A recent Fedora system, where you can download the latest nightly builds of Fedora 24. If you have access to an OpenStack or other cloud IaaS system where you can upload the image and boot an instance, that will also do. In case you have Vagrant set up on Mac or Windows, you can download the corresponding vagrant-virtualbox.box image and boot an instance.

Where to find us, or where to get help?

We will be online in the #fedora-test-day IRC channel on the Freenode server.

Come and join us to make Fedora 24 a great release.

Fedora Pune meetup April 2016

I actually never even announced the April meetup, but we still had 13 people show up. We moved the meet to my office from our usual space as I wanted to use the whiteboard. At the beginning I showed some example code on how to write unittests, and how we are using Python3 unittests to test our Fedora Cloud/Atomic images automatically. Anwesha arranged soft drinks and snacks for everyone.
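Here is a minimal sketch of the kind of test people wrote at the meetup. It is illustrative only; the real tests in the Tunirtests repository may use different helpers and names.

import subprocess
import unittest


class TestFreeCommand(unittest.TestCase):
    """Sanity check: `free -m` runs and reports memory."""

    def test_free_runs(self):
        out = subprocess.check_output(["free", "-m"]).decode("utf-8")
        self.assertIn("Mem:", out)


if __name__ == "__main__":
    unittest.main()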

Praveen Kumar, Chandan Kumar, and /me were helping people out. Everyone tried to take one command, or one use case in Fedora, and then tried to implement a test case for it. I still have 6 pull requests open in the Tunirtests repository. I will go through them later, and make sure we merge them after the required cleanups. We always wanted such meetups to be something people get value out of; I hope last month's Fedora meetup was helpful. There are 5 reports on the event page of the wiki.

Btw, we are meeting again on May 14th.

dgplug.org is now using Lektor

A couple of years back we moved dgplug to a static website. But there are still times when we want to update parts of the site in a timely manner, and we also wanted to track the changes. We tried to maintain Sphinx-based docs, but somehow we never managed to do that well.

This Sunday Suraj, Chandan, and /me were having a casual discussion, and site management came up. I knew Armin had written something new on the static sites side, but had never managed to look into it. From the Lektor website:

A flexible and powerful static content management system for building
complex and beautiful websites out of flat files — for people who do not
want to make a compromise between a CMS and a static blog engine.

So Lektor is a static CMS, and that is what I was looking for. The dependency chain also looked reasonable, and as usual the documentation is in great shape. Yesterday I spent some time setting up a Fedora container which has Lektor inside, along with a small Flask based webapp. Sayan helped to fix the template/CSS issues. The web application listens for events from GitHub webhooks, and rebuilds the site. You can find the source for our site here, and the container Dockerfile and other details are here. Note: the web application is actually very crude in nature :)
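The webapp itself only needs to do one thing. A minimal sketch along these lines would rebuild the site on every push event; this is not the actual dgplug.org code, and the REPO_DIR path is just a placeholder.

import subprocess

from flask import Flask, request

app = Flask(__name__)
REPO_DIR = "/srv/dgplug.org"   # hypothetical checkout of the Lektor site source


@app.route("/webhook", methods=["POST"])
def webhook():
    # GitHub sends a "push" event whenever commits land in the repository.
    if request.headers.get("X-GitHub-Event") == "push":
        subprocess.check_call(["git", "pull"], cwd=REPO_DIR)
        subprocess.check_call(["lektor", "build"], cwd=REPO_DIR)
    return "OK"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)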

Event report: rootconf 2016

Rootconf is the largest DevOps and Cloud infrastructure related conference in India. This year's event happened on 14-15th April at the MLR Convention Center, Bangalore. I traveled from Pune on day one of the event: woke up by 3AM, and then took the first flight to Bangalore. I picked up Ramky on my way to the venue, and managed to skip most of the (in)famous Bangalore traffic thanks to a government holiday.

I carried the Fedora standee and the tablecloth from Pune, and we set up the Fedora table at the conference. Meanwhile the place was filling up with attendees. We found so many old foss.in friends: Premshree, Pankaj, Raj, Vasundhar, and many others I met after years. There were 300+ attendees at the event.

I should also mention the Cat5 cable lanyard.

The day started with Zainab welcoming everyone in a super fast manner :) The first talk of the day was "Happiness through Crash-Only software" by Antoine Grondin. The whole idea that failure is part of life, and that we make our software even more complex by trying to avoid it, is something we all should keep in mind. He gave some ideas about how DigitalOcean works behind the scenes. Instead of a normal start/end, he explained how treating recovery from a failed state as the start, and assuming that the process will fail at the end, can help. Seeing hand drawn slides was another exciting thing for me; I used to do that quite a lot in my previous talks.

The next talk was from Raj Shekhar, who gave an overview of Mesos. Even in the limited time, his talk was a very good fit and still described the use case well. This was the first time I listened to a talk about Mesos, and it was a very good one.

After this, during the tea break, we moved out to the booth/table. We had Ramky, Lalatendu, Aditya, /me, and rtnpro at the table talking to the attendees. The Fedora DVD(s) and badges went like hot cakes :) Throughout the two days we had many questions related to Project Atomic; the great looking stickers actually helped to get more attention. In between, I went through the details of my talk with Raj, and he provided some valuable input.

Food is always great at hasgeek events, but having so many food startups from Bangalore in one place seriously added a new flavor to the conference. hasgeek is not only growing itself, but also making sure they have a great community across the whole conference, from talks to food, everything.

The rest of day 1 we spent talking to people in and around the booth area. hasgeek gets great recordings done of their talks, so we can view them in the future :) In the evening we had dinner at the venue, once again very tasty food :) I went to rtnpro's house by 10pm as I still wanted to work on my slides.

Day 2

(Photo by Lalatendu)

I woke up by 6:30AM and went through my slides once again. The title of my talk was "Failure at Cloud and rescued by Python". The agenda of the talk was to encourage DevOps folks/sysadmins to write Python scripts rather than shell scripts. It was the first talk of the day. I hope it went well; the slides are available here. As I mentioned Ansible in my talk, we had many Ansible questions asked at the Fedora booth.

Next was Premshree, who talked about "Continuous deployment at Scale" with a detailed example of how they handle things at Etsy. Glad to see that they use an IRC bot to do the deployment :)

After lunch I attended "Working in and with Open Source Communities" by Bernd Erk. This talk was full of tips about communities, and how to treat everyone as a member. One major point was about how a community leader should strike a balance between the people who talk the most and the silent ones of the community. He also emphasized getting new members into the community, "Because you will die someday" :)

The rest of the day we again spent in and around the Fedora booth. I met Tarun Dua after a long time, and came back to Pune on a late night flight. I will post the links to the talk videos when I get them.

Quick way to get throwaway VMs using Tunir

The latest Tunir package has a --debug option which can help us get some quick VMs up, where we can do some destructive work and then just remove them. Below is an example of firing up two VMs from the Fedora Cloud base image using a quickjob.cfg file.

[general]
cpu = 1
ram = 1024

[vm1]
user = fedora
image = /home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

[vm2]
user = fedora
image = /home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

In the quickjob.txt file we just keep one command as a sanity check :)

vm1 free -m

After we execute Tunir, we will see something like the output below.

# tunir --multi quickjob
... lots of output ...

Non gating tests status:
Total:0
Passed:0
Failed:0
DEBUG MODE ON. Destroy from /tmp/tmpiNumV2/destroy.sh

The above mentioned directory also has the temporary private key to log in to the instances. The output also contains the IP addresses of the VM(s). We can log in like this:

# ssh fedora@192.168.122.28 -i /tmp/tmpiNumV2/private.pem -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no

The last two options of the ssh command make sure that we do not store the host keys of the throwaway guests in the known_hosts file. To clean up afterwards we can do the following.

# sh /tmp/tmpiNumV2/destroy.sh

How to mount a raw guest image?

While testing the latest Fedora 24 cloud images, I had to mount one locally so that I could inspect the files inside. We can do this by using the offset value calculated from the fdisk output.

# fdisk -l /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body
Disk /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfeecffb4

Device                                                                   Boot Start     End Sectors Size Id Type
/var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body1 *     2048 6291455 6289408   3G 83 Linux

In this case the start value is 2048, and each sector is 512 bytes, so our offset value is 2048 * 512 = 1048576.

# mount -o offset=1048576 /var/lib/imagefactory/storage/7c34e40b-b27c-4cd9-ae05-459c93c98005.body /mnt
[root@fedora-build f23]# ls /mnt/
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
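If you end up doing this often, the offset calculation is trivial to script. A minimal sketch (the helper name is mine, not from any existing tool):

def partition_offset(start_sector, sector_size=512):
    """Return the byte offset to pass to `mount -o offset=` for a partition."""
    return start_sector * sector_size


# Values from the fdisk output above: start sector 2048, 512-byte sectors.
print(partition_offset(2048))   # 1048576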

dgplug summer training student Dhriti Shikhar

  • Your name (blog/twitter) and what do you do

Name: Dhriti Shikhar

IRC nick: dhritishikhar

Twitter: https://twitter.com/DhritiShikhar

Blog: https://dhrish20.wordpress.com/

Github profile: https://github.com/DhritiShikhar/

Pagure profile: https://pagure.io/user/dhrish20

I am a final year Engineering student in Information Technology. I am a regular contributor to FOSS and a regular participant of the Pune Python meetup.

  • How did you learn about the training?

I learnt about the dgplug training from Chandan Kumar.

  • How this training changed (if at all) your life?

The dgplug training was completely life changing and the best decision I have ever made. I can't imagine a more rewarding way to spend my free time. It helped me learn the developer tools: I learnt how to use IRC, git, vim, and Python. As a result, I started contributing to OpenStack and Fedora Infrastructure. I hope to continue learning from dgplug.

  • Have you contributed to any upstream project(s)? If yes, then details.

I have contributed to:

  • the documentation of various modules in OpenStack
  • Fedora-Hubs
  • Pagure
  • FAS

  • Any tips for the next batch of participants.

DGPLUG is a great place to learn. If you want to contribute to Open Source, attend the training attentively, follow the mentor and ask questions. Show people your passion. Open Source contributions show your willingness to learn. Make sure you blog regularly about new technologies you learn.

  • Feel free to add anything else you want to talk about.

I am really grateful to Kushal Das for inspiring and guiding me. Also, to really learn and understand a new technology, you should try to implement it. This learn-by-doing spirit is the most valuable thing you will learn in this training. 

dgplug summer training student Suraj Deshmukh

  • Your name (blog/twitter) and what do you do

Name: Suraj Deshmukh

Twitter: @surajd_

Blog: https://deshmukhsuraj.wordpress.com/

What I do: I work as an Associate Software Engineer at Red Hat

  • How did you learn about the training?

From Chandan Kumar, when he was attending an event at my college.

  • How this training changed (if at all) your life?

It has entirely changed the course of my life. It paved a way into open source for me, and helped me get a job at Red Hat. I found people with similar interests, people I can talk geeky stuff with ;). I learnt things that you otherwise learn the hard way, which wouldn't have been possible without the training. I learnt how to get into a community and start doing things, which in a way made me confident enough to ask questions on mailing lists. It has shown me how you can help others by writing code. Before the training I just knew things by their names and never got a chance, or say a push, to start doing things, but being in the training helped me do that. And most of all, I made friends for life.

  • Have you contributed to any upstream project(s)? If yes, then details.

Before joining Red Hat, I was contributing to Project Scapy and OpenStack. As part of my job I contribute to a project called AtomicApp, which is a sub-project of Project Atomic.

  • Any tips for the next batch of participants.

What I would like to say to new participants is that #dgplug is a ladder into the open source world, where you can hack on stuff that really interests you and are free to do the things you like. And believe me, when you do things that you like without bothering about whether you are getting paid, just for the sake of your passion, someday you will be paid to live your passion. Final word: keep up with the training, learn things, ask questions, and start contributing to open source projects.

Testing containers using Kubernetes on Tunir version 0.15

Today I released Tunir 0.15. This release got a major rewrite of the code, and has many new features. One of them is setting up multiple VM(s) from Tunir itself. We now also have the ability to use Ansible (2.x) from within Tunir. Using these, we are going to deploy Kubernetes on Fedora 23 Atomic images, and then deploy an example atomicapp which follows the Nulecule specification.

I am running this on a Fedora 23 system; you can grab the latest Tunir from koji. You will also need Ansible 2.x from the testing repository.

Getting Kubernetes contrib repo

First we will get the latest Kubernetes contrib repo.

$ git clone https://github.com/kubernetes/contrib.git

Inside, we will make a change to the group_vars file at contrib/ansible/group_vars/all.yml:

diff --git a/ansible/group_vars/all.yml b/ansible/group_vars/all.yml
index 276ded1..ead74fd 100644
--- a/ansible/group_vars/all.yml
+++ b/ansible/group_vars/all.yml
@@ -14,7 +14,7 @@ cluster_name: cluster.local
# Account name of remote user. Ansible will use this user account to ssh into
# the managed machines. The user must be able to use sudo without asking
# for password unless ansible_sudo_pass is set
-#ansible_ssh_user: root
+ansible_ssh_user: fedora

# password for the ansible_ssh_user. If this is unset you will need to set up
# ssh keys so a password is not needed.

Setting up the Tunir job configuration

The new multivm setup requires a jobname.cfg file as the configuration. In this case I have already downloaded a Fedora Atomic cloud .qcow2 file under /mnt. I am going to use that.

[general]
cpu = 1
ram = 1024
ansible_dir = /home/user/contrib/ansible

[vm1]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-master.example.com

[vm2]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-node-01.example.com

[vm3]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-node-02.example.com

The above configuration file is mostly self explanatory. All VM(s) will have 1 virtual CPU and 1024 MB of RAM. I also point to the directory which contains the Ansible source. Next we have 3 VM definitions. I also have hostnames set up for each; these are the same hostnames which are mentioned in the inventory file. The inventory file should exist in the same directory with the name inventory. If you do not want to use such long names, you can simply use vm1, vm2, vm3 in the inventory file.
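For reference, the inventory could look something like the sketch below. The group names ([masters], [etcd], [nodes]) are my assumption based on the sample inventory the contrib repository shipped around that time, so double check against the sample inventory in the repository.

[masters]
kube-master.example.com

[etcd]
kube-master.example.com

[nodes]
kube-node-01.example.com
kube-node-02.example.com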

Now if we remember, we need a jobname.txt file containing the actual commands for testing. The following is from our file.

PLAYBOOK cluster.yml
vm1 sudo atomic run projectatomic/guestbookgo-atomicapp

In the first line we ask Tunir to run the cluster playbook. In the second line we put in the actual atomic command to deploy the guestbook app on our newly set up Kubernetes. As you can see, the first part of each line mentions which VM to execute the command on. If no VM is marked, Tunir assumes that the command has to run on vm1.

Now if we just execute Tunir, we will be able to see Kubernetes being set up, and then the guestbook app being deployed. You can add a few more commands to the above mentioned file to see how many pods are running, or even to get the details of the pods; see the example after the command below.

$ sudo tunir --multi jobname
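For example, appending a couple of lines like the following to jobname.txt would print the node and pod status from the master after the deployment (assuming kubectl is available on the master, which the contrib playbooks normally take care of):

vm1 sudo kubectl get nodes
vm1 sudo kubectl get pods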

Debug mode

For the multivm setup, Tunir now has a debug mode which can be turned on by passing --debug in the command line. This will not destroy the VM(s) at the end of the test. It will also create a destroy.sh script for you, which you can run to destroy the VM(s), and remove all temporary directories. The path of the file will be given at the end of the Tunir run.

DEBUG MODE ON. Destroy from /tmp/tmp8KtIPO/destroy.sh

Please try these new features, and comment for any improvements you want.