Kushal Das

FOSS and life. Kushal Das talks here.

kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion

Fedora Cloud WG during last week of 2016-02

The Fedora Cloud Working Group meets every Wednesday at 17:00 UTC in the #fedora-meeting-1 IRC channel on the Freenode server. This week we had 15 people attending the meeting, which is in the regular range of attendees. The points to be discussed in the meeting are generally tracked on the fedorahosted Trac as tickets with a special keyword, meeting. This basically means that if you want something to be discussed in the next Cloud meeting, add a ticket there with the meeting keyword.

After the initial roll call and discussions related to the action items from last week, we moved on to the tickets for this week's meeting. I had continued my action item of investigating adding a CDROM device to the embedded Vagrantfile in the Vagrant images we generate. But it seems that part is hardcoded inside ImageFactory, so I am feeling less motivated to add any such thing there.

Then we moved on to discuss the FAD proposal, which still needs a lot of work. The next major discussion was related to the Fedora 24 changes from the Cloud WG. I have updated the ticket with the status of each change. It was also decided that nzwulfin will update the cloud list with more information related to the Atomic Storage Clients change.

The next big discussion was related to the Container "Packager" Guidelines and Naming Conventions. One of the major upcoming changes is being able to build layered images on Fedora Infrastructure the same way we build RPMs today. Adam Miller wrote the initial version of the documents for the packagers of those layered images. I have posted my open questions in the Trac ticket, and others have started doing so as well. Please take your time and go through both of the docs pointed to in that ticket. As containers are taking a major part in all the cloud discussions, this will be a very valuable guide for future container packagers.

I still have some open work left from this week. During Open Floor, Steve Gordon brought up the issues the Magnum developers are facing while using Fedora Atomic images. I will be digging more into that this week. The log of the meeting can be found here. Feel free to join the #fedora-cloud channel if you have any open queries.

Updates from CentOS Cloud SIG

Back in 2014 we started working on the CentOS Cloud SIG. The SIG started with a focus on packaging and maintaining different FOSS IaaS platforms. We are not a vendor-specific group, which means anyone who wants to keep their cloud project maintained in the CentOS ecosystem is very welcome to join and work in this SIG.

We have our regular IRC meeting every Thursday at 15:00 UTC in the #centos-devel IRC channel on Freenode. The regular meeting agenda is kept updated on an Etherpad. The major points from this week's meeting are:

  • We have an upcoming RDO test day on 10th March. More details are in this blog post.
  • There will be a live demo about TripleO; you can view it on 9th March on YouTube.
  • The OpenNebula folks were missing from the meeting, but we remember that they need more help in porting the required packages. So, if you want to start contributing to OpenNebula or CentOS, this can be your chance :)

The next meeting is next week at the same time. See you all there.

retask 1.0 is out

Retask is a super simple task queue written in Python. It uses Redis as the backend, and works with both Python 2 and Python 3. The last official release was 0.4, back in 2013. The code base is very stable, and we received only a few queries about adding new features.

So, after more than 2 years, I have made a new release, 1.0, marking it as super stable for use in production. Currently it is being used in various projects within Fedora Infrastructure, and I have also heard about internal usage in different companies. I originally started writing this module because I was looking for something super simple to distribute jobs across different computers/processes.

You can install it using pip (updated rpm packages are coming soon).

$ pip install retask

Below is an example of queueing in some data using Python dictionaries.

from retask import Task
from retask import Queue

# A Task wraps any JSON-serializable payload, here plain dictionaries.
queue = Queue('example')
info1 = {'user': 'kushal', 'url': 'http://kushaldas.in'}
info2 = {'user': 'fedora planet', 'url': 'http://fedoraplanet.org/'}
task1 = Task(info1)
task2 = Task(info2)

# connect() must be called before enqueueing anything.
queue.connect()
queue.enqueue(task1)
queue.enqueue(task2)

Go ahead and have a look at it. If you have any queries, feel free to ping me on IRC, or leave a comment here.

Tunir 0.13 is released and one year of development

The Tunir 0.13 release is out. I already have a Koji build in place for F23. There are two major feature updates in this release.

AWS support

Yes, we now support testing on AWS EC2. You will have to provide your access tokens in the job configuration, along with the required AMI details. Tunir will boot up the instance, run the tests, and then destroy the instance. There is documentation explaining the configuration details.
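To give an idea of the shape of such a job configuration, here is a hypothetical sketch. Every key name and value below is an illustrative assumption only; please check the Tunir documentation for the actual names.

```
{
    "name": "fedora-aws",
    "type": "aws",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "image": "ami-xxxxxxxx",
    "region": "us-west-2",
    "user": "fedora"
}
```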

CentOS images

We now also support CentOS cloud images and Vagrant images in Tunir. You can run the same tests on the CentOS images, based on your needs.

One year of development

I started Tunir on Jan 12 2015, which means it now has more than one year of development history. In the beginning it was just a project to help me out with Fedora Cloud image testing, but it grew to the point where it is being used as the Autocloud backend to test Fedora Cloud and Vagrant images. We will soon start testing the Fedora AMI(s) too using the same. Within this one year there were a total of 7 contributors to the project, and in total we are at around 1k lines of Python code. I am personally using Tunir for various other projects too. One funny thing from the commit timings: no commits on Sundays :)

No discriminatory tariffs for data services in India

Finally we have won. The Telecom Regulatory Authority of India issued a press release some time ago saying that no one can charge different prices for different services on the Internet. The fight was on an epic scale; one side spent more than 100 million on advertisements and lobbying. The community fought back in the form of #SaveTheInternet, and it worked.

In case you are wondering what this is about, you can start by reading the first post from Mahesh Murthy, and the reply to the response from Facebook. Trust me, it will be an amazing read. There is also this excellent report from Lauren Smiley, and a few other amazing replies from TRAI to Facebook about the same.

Second Fedora Pune meetup in January

On the evening of the 22nd of January we had the second Fedora meetup in Pune. There were 12 participants in this meetup. We started with a discussion about what happens when someone compiles a program written in C. Siddhesh began by asking various questions about what we thought happens, and then went into the details of each step in the compiler and assembler. We discussed ELF files and went through the various sections. After looking into the __init, __fini, and __main functions, one of the participants said "my whole life was a lie!" :D No one had thought about constructors and destructors in a C program.

We also discussed for loops in C and in Python, and list comprehensions. One more point that was new to me was that there are 6 registers available for function calls in x86. At the end, the participants decided to find bugs/features in the different projects they use regularly (we also suggested the GNOME apps); first everyone will try to fix those at home, and if they cannot fix them by the next meeting, we will help them during the meeting.
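To illustrate the for loop versus list comprehension point from that discussion, here is a small Python snippet:

```python
# Building a list of squares with an explicit for loop.
squares_loop = []
for n in range(5):
    squares_loop.append(n * n)

# The same list built with a list comprehension, in one expression.
squares_comp = [n * n for n in range(5)]

print(squares_loop)  # [0, 1, 4, 9, 16]
print(squares_comp)  # [0, 1, 4, 9, 16]
```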

First January Fedora meetup in Pune

Last Friday we had the first Fedora meetup of January here in Pune. This was the first of many upcoming meetups/workshops. The venue for this meetup was moved to Sayan's apartment, as we never found a free meeting room in the local Red Hat office, and it seems that we will continue to use the same venue for future meetups.

I thought we would have around 8 people at the meetup, but we ended up having 18. This is pretty good, as we spread the news of the event in only a few places. There were 4 Fedora Ambassadors present at this meetup: /me, Sayan, Praveen Kumar, and Siddhesh. The day started with an introduction of all the participants; we found many first timers in the group. Next, /me and Siddhesh explained programming in FOSS in general, and why it is important to stick with a project while contributing.

As our first technical talk, Sayan explained his bugyou project. This is a new service in the Fedora Infrastructure which listens for failed test runs in Autocloud and files appropriate bugs in the system. He is coming up with a blog post with more details. Next, I explained the automated testing of the cloud/atomic images in Fedora. We went through a few of the basic tests we have. Most people agreed that this can be a good starting point for any new contributor who is willing to write some Python 3 code :).

Because we did not have a projector, a couple of us went up to my apartment on the floor above and brought down the TV. Praveen Kumar then started his hands-on session about Ansible. He demoed how to install and configure Apache on any system. After that, all the participants wrote their own playbook to copy a file to a predefined location on their systems.
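A minimal playbook along the lines of that exercise might look like the following; the file paths here are just placeholders.

```yaml
# copyfile.yml: copy a file to a predefined location on the local machine.
- hosts: localhost
  tasks:
    - name: Copy a file to a predefined location
      copy:
        src: /home/user/hello.txt
        dest: /tmp/hello.txt
```

You can run it locally with something like ansible-playbook -i localhost, -c local copyfile.yml.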

At the end, we discussed what kind of hands-on sessions people want. In general, we will have 1 or 2 talks of 1 hour at most, and the following 2-3 hours will be a hands-on session where everyone works on what they want. Our next meetup is on 22nd January, from 5pm onwards. We have requests to discuss RPM packaging and Python 2 to 3 migration, and a special session on C programming from our in-house glibc hacker.

Fedora 23 on Tegra K1 Chromebook

Last year during Flock I got myself an Acer CB5-311 Chromebook with an Nvidia Tegra K1 ARM board and 2GB of RAM. It is a very nice machine for running ChromeOS, but my goal in getting the hardware was all about running Fedora on it. With great help from Jon Disnard (IRC: masta) on the #fedora-arm channel, I finally managed to do that this morning.

We started doing this yesterday, following the excellent guide on the Fedora wiki. I made a few changes, though; the major one was having only two kernel partitions in the partition table. So the command to create the partition table using sgdisk becomes something like:

$ sgdisk -a 8192 -n 1:0:+30M -t 1:7F00 -c 1:"KERN-A" -A 1:=:0x011F000000000000 \
             -n 2:0:+30M   -t 2:7F00 -c 2:"KERN-B" -A 2:=:0x011A000000000000 \
             -n 3:0:+600M  -t 3:8300 -c 3:"BOOT"                             \
             -n 4:0:+600M  -t 4:8200 -c 4:"SWAP"                             \
             -n 5:0:0      -t 5:8300 -c 5:"ROOTFS" chromebook-xfce.raw

This change shifts the partition numbers in all the subsequent commands given in the wiki. Remember to double-check that you have the right /etc/fstab file in the image so that it can mount /boot; you can get the UUID using the blkid command. By the way, the SD card came up as a different device on my laptop, so a few of the corresponding changes are given below.

$ sudo sgdisk -a8192 -e -d5 -n5:0:0 -t5:8300 -c5:'ROOTFS' -p /dev/mmcblk0
$ sudo e2fsck -f /dev/mmcblk0p5
$ sudo resize2fs /dev/mmcblk0p5

During my first boot of the Chromebook with the Fedora SD card, it dropped into the emergency shell; I had to mount /boot manually, and then pressed Ctrl+d to continue the default boot flow. I made the change to the /etc/fstab file to make sure that I don't have to mount /boot manually every time. I got XFCE working out of the box, with glxgears showing a frame rate of 230+. But I could not get the wireless to work. I decided to take a break, slept for 4 hours, and was back in front of the computer at 6AM.
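For reference, the /boot entry in /etc/fstab is a single line like the one below; the UUID here is a placeholder, replace it with the actual value reported by blkid.

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   /boot   ext4   defaults   1 2
```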

To fix the wireless issue I had to build a mainline kernel on that system. I mounted the card on a Banana Pi running F23, and then chrooted into it. I installed the following packages:

# mkdir /tmp/t2
# mount /dev/sdb5 /tmp/t2
# mount -o bind /dev /tmp/t2/dev
# mount -o bind /proc /tmp/t2/proc
# mount -o bind /sys /tmp/t2/sys
# mount -t tmpfs tmpfs /tmp/t2/tmp
# chroot /tmp/t2
# dnf install @development-tools vboot-utils htop pss vim
# exit
# umount --recursive /tmp/t2

After I copied the mainline kernel source to the SD card, I first added the .config file (I got this from masta), and then executed the following command to build it:

# make -j8

It takes a lot of time, so you can take a break, drink coffee, or even finish parts of a big book :). After the build finished, I used the following script to install the newly built kernel; remember to change the PARTUUID as required. After a reboot, I had the system ready, with working wireless. I tested it with a few of my Python applications, and it seems to be a perfect box for development while travelling, or at conferences, as it has a long battery life :). There are still a few glitches (for example, the touchpad does not work), but we will get those fixed in the coming days. This whole experiment made me the second person to run Fedora on a Tegra K1 Chromebook :D. In case you want to join this exclusive club, come down to the #fedora-arm channel on Freenode, and we will help you set up Fedora on your Chromebook.

I am also looking forward to working with ARM64 boards in the future, maybe a Mustang board. If someone has a spare one, feel free to send it my way :)

Home storage cluster with Fedora and Gluster

In a previous post I wrote about the encrypted storage I am using at home, built with a Banana Pi and Fedora. But for my photos I had to look for something else, as I need redundancy. The off-the-shelf NAS boxes which can do networking and RAID are super costly by my standards.

So, I decided to build one at home. The setup is simple and can be done in a few hours. I have 2 Banana Pi(s) running Gluster, replicating 2TB hard drives over a local Gigabit network. My old Banana Pi is where I mounted the Gluster volume.

First, set up Fedora on the Banana Pi(s).

I am using the minimal Fedora 23 images.

$ sudo fedora-arm-image-installer --image=/home/kdas/Fedora-Minimal-armhfp-23-10-sda.raw.xz --target=Bananapi --media=/dev/mmcblk0 --selinux=ON

Due to a bug in the F23 images, I had to remove the initial-setup service from the installations.

$ rm /run/media/kdas/__/etc/systemd/system/multi-user.target.wants/initial-setup-text.service

Then I set up my ssh key on the cards.

$ USER=YOURUSERNAME
$ sudo mkdir /run/media/$USER/__/root/.ssh/
$ sudo sh -c "cat /home/$USER/.ssh/id_rsa.pub >> /run/media/$USER/__/root/.ssh/authorized_keys"
$ sudo chmod -R u=rwX,o=,g= /run/media/$USER/__/root/.ssh/

Installing and enabling ntp

# dnf clean all
# dnf install ntp
# systemctl enable ntpd

Setting up the hostname

I just set the hostnames on all 3 systems as gluster01, gluster02, and storage.

# hostnamectl set-hostname --static "gluster01"

Setting up static IP using networkd

I prefer to use networkd on headless systems, so I used it to set up static networking on all the systems.

# systemctl disable NetworkManager
# systemctl disable network
# systemctl enable systemd-networkd
# systemctl enable systemd-resolved
# systemctl start systemd-resolved
# systemctl start systemd-networkd
# rm -f /etc/resolv.conf
# ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
# vi /etc/systemd/network/eth0.network

The configuration of the network file is given below. This is much easier for me to maintain than ifcfg files.

[Match]
Name=eth0
[Network]
Address=192.168.1.20/24
Gateway=192.168.1.1
# These are optional but worth mentioning
DNS=8.8.8.8
DNS=8.8.4.4
NTP=pool.ntp.org

Remember to set up all 3 systems in a similar way, replacing the IP/Gateway addresses as required. I also updated the /etc/hosts file on all 3 systems so that they can talk to each other using hostnames rather than IP addresses.
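For example, the /etc/hosts entries on each box look along these lines; the .21 and .22 addresses are placeholders, use whatever static addresses you assigned.

```
192.168.1.20    gluster01
192.168.1.21    gluster02
192.168.1.22    storage
```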

Setting up the new hard drives

First we create a new partition, and then format it as ext4. I also added the corresponding entry to the fstab file so that the drive gets mounted automatically on /mnt.

# fdisk -c -u /dev/sda
# mkfs.ext4 /dev/sda1
# vim /etc/fstab
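The fstab entry in question is a minimal single line like this:

```
/dev/sda1   /mnt   ext4   defaults   0 0
```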

Setting up Gluster on the systems

The next big step is setting up Gluster on both gluster01 and gluster02.

# dnf install glusterfs-server.armv7hl -y
# mkdir -p /mnt/brick/glusv0
# systemctl start glusterd

Next, I had to enable the required ports in firewalld. For now, I have added eth0 to the public zone.

# firewall-cmd --zone=public --add-interface=eth0
# firewall-cmd --zone=public --add-service=glusterfs

Remember to run the above commands on both of the Gluster systems. Then, from gluster01, I probed the peer. Finally, we create the volume and start it.

# gluster peer probe gluster02
# gluster peer status
# gluster volume create glusv0 replica 2 gluster01:/mnt/brick/glusv0 gluster02:/mnt/brick/glusv0
# gluster volume start glusv0
# gluster volume info

Mount the Gluster volume on the third box.

# dnf install glusterfs-fuse -y
# mount -t glusterfs gluster01:/glusv0 /gluster -o backupvolfile-server=gluster02
# chown USERNAME -R /gluster/

Now you can use the mounted volume any way you want. I also had a problem keeping the systems tidy, so I used an old plastic rack to host the whole solution. Less than $5 in cost :)

Meeting agenda

The above is a real-life experience from one of the companies I worked for. You can imagine what all the engineers were thinking. Those meetings were a couple of hours long, and really boring.

In community-based IRC meetings, it is generally the opposite. Many meetings I attend already use a ticketing system (like Trac) to track the meeting agenda, or even use something like Gobby to keep the agenda, so others can join the session and edit the agenda as required.

We now have a defined meeting agenda for the CentOS Cloud SIG. Feel free to add items to the list to be discussed in the next meeting. The CentOS Cloud SIG (CCS) is a group of people coming together to focus on packaging and maintaining different FOSS-based private cloud infrastructure applications that one can install and run natively on CentOS.