Kushal Das

FOSS and life. Kushal Das talks here.

kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion

New features in Gotun

Gotun is a golang port of Tunir, written from scratch. Gotun can execute tests on remote systems, or it can run tests on OpenStack and AWS.

Installation from git

If you have a working golang setup, then you can use the standard go get command to install the latest version of gotun.

$ go get github.com/kushaldas/gotun

Configure based on YAML files

Gotun expects job configuration in a YAML file. The following is an example of a job for OpenStack.

---
BACKEND: "openstack"
NUMBER: 3

OS_AUTH_URL: "URL"
OS_TENANT_ID: "Your tenant id"
OS_USERNAME: "USERNAME"
OS_PASSWORD: "PASSWORD"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-24-20161031.0.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "group1"
    - "default"
OS_NETWORK: "NETWORK_POOL_ID"
OS_FLOATING_POOL: "POOL_NAME"
OS_KEYPAIR: "KEYPAIR NAME"
key: "Full path to the private key (.pem file)"

You can also point OS_IMAGE to a local qcow2 image, which will then be uploaded to the cloud and used. After the job is done, the image will be removed.

Multiple VM cluster on OpenStack

The OpenStack based jobs also support multiple VMs. In the above example, we are actually creating three instances from the image mentioned.

Job file syntax

Gotun supports the same syntax for the actual tests as Tunir. Any line starting with ## marks a non-gating test: even if it fails, the job will continue. For a cluster based job, prefix commands with vm1, vm2, and similar numbers to mark which VM should run the command.
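
For instance, a small cluster job file might look like the following. The commands here are illustrative, not from a real job; they just show the vm prefix and the ## non-gating marker described above:

```shell
vm1 sudo systemctl is-active docker
vm2 free -m
## curl -sf https://example.com/flaky-endpoint
```

The first two lines are gating tests pinned to specific VMs; the last line is non-gating, so the job continues even if it fails.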

Rebuild of the instances on OpenStack

For OpenStack based jobs, gotun adds a new directive, REBUILD_SERVERS, which will rebuild all the instances. In case one of your tests does something destructive to any instance, this new directive lets you rebuild all the instances and start from scratch. The following is the tests file and the output from one such job.

echo "hello asd" > ./hello.txt
vm1 sudo cat /etc/machine-id
mkdir {push,pull}
ls -l ./
pwd
REBUILD_SERVERS
sudo cat /etc/machine-id
ls -l ./
pwd
$ gotun --job fedora
Starts a new Tunir Job.

Server ID: e0d7b55a-f066-4ff8-923c-582f3c9be29b
Let us wait for the server to be in running state.
Time to assign a floating IP.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Server ID: a0b810e6-0d7f-4c9e-bc4d-1e62b082673d
Let us wait for the server to be in running state.
Time to assign a floating IP.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  echo "hello asd" > ./hello.txt
Executing:  vm1 sudo cat /etc/machine-id
Executing:  mkdir {push,pull}
Executing:  ls -l ./
Executing:  pwd
Going to rebuild: 209.132.184.241
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Going to rebuild: 209.132.184.242
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  sudo cat /etc/machine-id
Executing:  ls -l ./
Executing:  pwd

Result file at: /tmp/tunirresult_180507156


Job status: true


command: echo "hello asd" > ./hello.txt
status:true



command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: mkdir {push,pull}
status:true



command: ls -l ./
status:true

total 4
-rw-rw-r--. 1 fedora fedora 10 Jan 25 13:58 hello.txt
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 pull
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 push


command: pwd
status:true

/var/home/fedora


command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: ls -l ./
status:true

total 0


command: pwd
status:true

/var/home/fedora


Total Number of Tests:8
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.

Using Ansible inside a job is now easier

Before running any actual test command, gotun creates a file called current_run_info.json in the job directory; we can use that to create an inventory file for Ansible. Then we can mark any Ansible playbook as a proper test in the job description.

#!/usr/bin/env python3
import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
host2 = data['vm2']
key = data['keyfile']

result = """{0} ansible_ssh_host={1} ansible_ssh_user={2} ansible_ssh_private_key_file={3}
{4} ansible_ssh_host={5} ansible_ssh_user={6} ansible_ssh_private_key_file={7}""".format(
    host1, host1, user, key, host2, host2, user, key)
with open("inventory", "w") as fobj:
    fobj.write(result)

The above script is just an example: it reads the JSON file created by gotun, and writes out a new inventory file to be used by an Ansible playbook. The documentation has an example of running atomic-host-tests inside gotun.
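
To make the inventory format concrete, here is the same kind of format string with hypothetical values standing in for what gotun writes into current_run_info.json:

```python
# Hypothetical values; in a real run these come from current_run_info.json
user = "fedora"
host = "209.132.184.241"
key = "/home/user/.ssh/gotun.pem"

# One inventory line per VM, in the form Ansible expects
line = ("{0} ansible_ssh_host={0} ansible_ssh_user={1} "
        "ansible_ssh_private_key_file={2}").format(host, user, key)
print(line)
```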

If you have any questions, come and ask in the #fedora-cloud channel. You can contact me over Twitter too.

My highest used Python function

I heard the above-mentioned lines many times in the past 10 years. Meanwhile, I used Python to develop various kinds of applications: from web applications to system tools, from small command line tools to complex applications dealing with the operating system ABI. Using Python as a glue language has helped me many times in those cases. There is one custom function which I think is the most used function in the applications I wrote.

system function call

The following is the Python3 version.

import subprocess

def system(cmd):
    """
    Invoke a shell command.
    :returns: A tuple of output, err message and return code
    """
    ret = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           close_fds=True)
    out, err = ret.communicate()
    return out, err, ret.returncode

The function uses the subprocess module. It takes any shell command as input, executes it, and returns the output, error text, and exit code. Remember that in Python 3, out and err are bytes, not strings, so we have to decode them to standard strings before use.
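
A quick sketch of that decode step in practice (the echo command is just an illustrative input):

```python
import subprocess

def system(cmd):
    """Invoke a shell command; return (stdout, stderr, returncode)."""
    ret = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           close_fds=True)
    out, err = ret.communicate()
    return out, err, ret.returncode

out, err, code = system("echo hello")
text = out.decode("utf-8").strip()  # bytes -> str before any string work
print(text, code)
```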

This small function enabled me to reuse any other already existing tool, and build on top of it. Some developers will advise doing everything through a more Pythonic API, but in many cases no API is available. I found that using this function and then parsing the text output is much simpler than writing a library first. There is an overhead in creating and running a new process, and there are times when that is a big overhead. But for simple tools or backend processes (which do not have to provide real-time output) this is helpful. You can use it to create a PoC application, and then profile the application to find the pain points.

Developing Command Line Interpreters using python-cmd2

Many of you already know that I love command line applications. Let it be a simple command line tool, or something more complex with a full command line interface/interpreter (CLI) attached to it. Back in college days, I tried to write a few small applications in Java with broken implementations of a CLI. Later, when I started working with Python, I wanted to implement CLIs for various projects. Python already has a few great modules in the standard library, but I am going to talk about one external library which I prefer to use a lot. Sometimes even for fun :)

Welcome to python-cmd2

python-cmd2 is a Python module written on top of the cmd module in the standard library. It can be used as a drop-in replacement. Throughout this tutorial, we will learn how to use it for simple applications.

Installation

You can install it using pip, or standard package managers.

$ pip install cmd2
$ sudo dnf install python3-cmd2

First application

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

We created a class called REPL, and then called the cmdloop method on an instance of that class. This gives us a minimal CLI. We can type ! followed by any bash command to execute it. Below, I called the ls command. You can also start the Python interpreter by using the py command.

$ python3 mycli.py 
(Cmd) 
(Cmd) !ls
a_test.png  badge.png  main.py	mycli.py
(Cmd) py
Python 3.5.2 (default, Sep 14 2016, 11:28:32) 
[GCC 6.2.1 20160901 (Red Hat 6.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(REPL)

        py <command>: Executes a Python command.
        py: Enters interactive Python mode.
        End with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``.
        Non-python commands can be issued with ``cmd("your command")``.
        Run python code from external files with ``run("filename.py")``
        
>>> 
(Cmd) 

You can press Ctrl+d to quit, or use the quit/exit commands.

Let us add some commands

But before that, we should add a better prompt. We can have a different prompt by changing the prompt variable of the Cmd class. We can also add a banner by setting the intro variable.

#!/usr/bin/env python3
from cmd2 import Cmd

class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

$ python3 mycli.py 
Welcome to the real world!
life> 

Any method inside our REPL class whose name starts with do_ becomes a command in our tool. For example, we will add a loadaverage command to show the load average of our system. We will read the /proc/loadavg file on our Linux computers to find this value.

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

The output looks like:

$ python3 mycli.py 
Welcome to the real world!
life> loadaverage
0.42 0.23 0.24 1/1024 16516

life> loadaverage
0.39 0.23 0.24 1/1025 16517

life> loadaverage
0.39 0.23 0.24 1/1025 16517

If you do not know about the values in this file: the first three values are the system load averaged over the last one, five, and fifteen minutes. Then we have the number of currently runnable processes and the total number of processes, and the final column shows the last process ID used. You can also see that TAB will autocomplete the command in our shell. We can go back to past commands by pressing the arrow keys, and we can press Ctrl+r to do a reverse search just like in the standard bash shell. This feature comes from the readline module. We can use it further, and add a history file to our tool.

import os
import atexit
import readline
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()
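
Going back to the loadaverage command for a moment: the fields in /proc/loadavg split cleanly into named values. A standalone sketch, using a sample line instead of reading the real file:

```python
# Sample /proc/loadavg content; on a real system, read this from the file
line = "0.42 0.23 0.24 1/1024 16516"

one, five, fifteen, procs, last_pid = line.split()
running, total = procs.split("/")
print(one, five, fifteen)   # load averages over 1, 5 and 15 minutes
print(running, total)       # runnable processes / total processes
print(last_pid)             # last process ID used
```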

Taking input in the commands

We can use the positional argument in our do_ methods to accept arguments in our commands. Whatever input you pass to the command comes in through the line variable in our example, and we can use it however we like. For example, we can take any URL as input and then check its status. We will use the requests module for this example. We also use the Cmd.colorize method to add colors to our output text. I have added one extra command to make the tool more useful.

#!/usr/bin/env python3

import os
import atexit
import readline
import requests
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

    def do_status(self, line):
        if line:
            resp = requests.get(line)
            if resp.status_code == 200:
                print(self.colorize("200", "green"))
            else:
                print(self.colorize(str(resp.status_code), "red"))

    def do_alternativefacts(self, line):
        print(self.colorize("Lies! Pure lies, and more lies.", "red"))

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

Building these little shells can be a lot of fun. The documentation has all the details, but you should start by reading the standard library cmd documentation. There is also a video from PyCon 2010.

University Connect at SICSR Pune

Last Thursday, I visited SICSR, Pune campus as part of University Connect program from Red Hat Pune. This was the second event in the University Connect program. Rupali, the backbone of all community events in the local Red Hat office, is also the primary driving force of this program. Priyanka, Varsha, Prathamesh and I reached office early morning and later went to the college in Rupali’s car. Sayak came to the venue directly from his house.

The event started with a brief inauguration ceremony. After that Rupali took the stage and talked about the University Connect program, and how Red Hat Pune is taking part in the local communities. I will let her write a blog post to explain in details :)

Next, I went up to talk about the journey of Red Hat, the various upstream projects we take part in, and the various product lines Red Hat has. We discussed https://pagure.io and how many Indian students are contributing to that project with guidance from Pierre-Yves Chibon. Because Python is already a known language among the students, I used many Python projects as examples.

Priyanka took the stage after me; she and Sayak are alumni of SICSR, so it was nice for the students to see them speaking. She talked about how contributing to open source projects can change one’s career, told stories from her own life, and covered various points which can help a student make their resume better.

Sayak then took the stage to talk about how to contribute to various open source projects, showing examples from Mozilla, Fedora, and KDE. He also pointed to the answer Sayan wrote to a Quora question.

At the end of the day, we had around 20 minutes of open QA session.

Desktop environments in my computer

I started my Linux journey with Gnome, as it was the default desktop environment in RHL. It took me some time to find out about KDE; I guess I discovered it accidentally during a re-installation. It used to be fun to have a desktop that looked different and behaved differently from the norm. During my earlier years in college, while I was trying to find out more about Linux, using KDE marked me as a Linux expert. I was armed with the right syntax of the mount command to mount the Windows partitions, and with the xmms-mp3 rpm. I spent most of my time in the terminal.

Initial KDE days for me

I started my FOSS contribution as a KDE translator, and it was also my primary desktop environment. Though I have to admit, I had never heard the term “DE” or “desktop environment” till 2005. Slowly, I started learning about the various differences, and also the history behind KDE and Gnome. I also felt that the KDE UI looked more polished. But I had one major issue: sometimes, by mistake, I would change something in the UI with a wrong click or a wrong drag and drop, and I never managed to recover from those mistakes. There was no way for me to go back to the default look and feel without deleting the whole configuration. You may find this really stupid, but my desktop usage knowledge was limited (and still is), due to my reliance on terminal based applications. I am not sure about the exact date, but sometime during 2010 I became a full-time Gnome user. Not being able to mess around with my settings actually helped me in this case.

The days of Gnome

There isn’t much to write about my usage of Gnome. I kept using whatever came through as the default Fedora Gnome theme. As I spend a lot of time in terminals, it was never a big deal. I was not sure if I liked Gnome Shell, but I kept using it. Meanwhile, I tried LXDE/XFCE for a few days, but went back to the default Fedora Gnome UI every time. This was the story till the beginning of June 2016.

Introduction of i3wm

After PyCon 2016, I had another two-day meet in Raleigh, the Fedora Cloud FAD. Adam Miller was my roommate during the four-day stay there. As he sat beside me in the meeting, I saw that his desktop looked different. When asked, Adam gave a small demo of i3wm. Later that night, he pointed me to his dotfiles, and I started my journey with a tiling window manager for the first time. I have made a few minor changes to the configuration over time. I also use a .Xmodmap file to make sure that my configuration stays sane even with my Kinesis Advantage keyboard.

The power of using the keyboard for most tasks is what pulled me towards i3wm. It is always faster than moving my hand to the trackball mouse I use. I currently use a few different applications on different workspaces, and I open the same application in the same workspace every time. It has hence become muscle memory to switch to any application as required. Till now, except for a few conference projectors, I never had to move back to Gnome for anything. The RAM usage is also very low, as expected.

Though a few of my friends told me i3wm is difficult, I had a completely different reaction when I demoed it to Anwesha. She liked it immediately and started using it as her primary desktop. She finds it much easier to move between workspaces while working. I know she has already demoed it to many others at conferences. :)

The thing which has stayed the same over the years is my usage of the terminal. Learning about many more command line tools means my terminal has more tabs, and more tmux sessions on the servers.

Fedora Atomic Working Group update from 2017-01-17

This is an update from the Fedora Atomic Working Group based on the IRC meeting on 2017-01-17. 14 people participated in the meeting; the full log of the meeting can be found here.

OverlayFS partition

We decided to have a docker partition in Fedora 26. The root partition sizing will also need to be fixed. You can read the full discussion in the Pagure issue.

We also need help in writing the documentation for the migration path Devicemapper -> Overlay -> back.

How to compose your own Atomic tree?

Jason Brooks will update his document located at Project Atomic docs.

docker-storage-setup patches require more testing

There are pending patches which will require more testing before merging.

Goals and PRD of the working group

Josh Berkus is updating the goals and PRD documentation for the working group. Both short term and long term goals can be seen in this etherpad. The previous Cloud Working Group’s PRD is much longer than most of the other groups’ PRDs, so we also discussed trimming the Atomic WG PRD.

Open floor discussion + other items

I updated the working group about a recent failure of the QCOW2 image on Autocloud. It appears that if we boot the images with only one VCPU, and then reboot after disabling the chronyd service, there is no defined time for the ssh service to come up.

Misc talked about the hardware plan for FOSP, and later he sent a detailed mail to the list on the same.

Antonio Murdaca (runcom) brought up the discussion about testing the latest Docker (1.13) and pushing it to F25. We decided to spend more time testing it first, and only then push it to Fedora 25; otherwise it may break Kubernetes/OpenShift. We will schedule a 1.13 testing week in the coming days.

Setting up a retro gaming console at home

The Commodore 64 was the first computer I ever saw, in 1989. Twice a year I used to visit my grandparents' house in Kolkata, where I would get one or two hours to play with it. I remember how, a few years later, I tried to read a book on Basic with the help of an English-to-Bengali dictionary. In 1993, my mother went on a year-long course for her job. I somehow managed to convince my father to buy me an Indian clone of the NES (Little Master) in the same year. That was also a life event for me. I had only one game cartridge; it was only after 1996 that the Chinese NES clones entered our village market.

Bringing back the fun

During 2014, I noticed how people were using Raspberry Pis as NES consoles. I decided to configure my own on a Pi 2. Last night, I re-installed the system.

Introducing RetroPie

RetroPie turns your Raspberry Pi into a retro-gaming console. You can either download the pre-installed image from the site, or install it on top of Raspbian Lite. I followed the latter path.

As a first step I downloaded Raspbian Lite. It was around 200MB in size.

# dcfldd bs=4M if=2017-01-11-raspbian-jessie-lite.img of=/dev/mmcblk0

I used the dcfldd command; you can use the dd command too. Detailed instructions are here.

After booting the newly installed Raspberry Pi, I just followed the manual installation instructions from the RetroPie wiki. I chose the basic install option at the top of the main installation screen. Note that the screenshot in the wiki is old. It took a few hours for the installation to finish. I have USB gamepads bought from Amazon, which got configured on the first boot screen. For the full instruction set, read the wiki page.

Happy retro gaming everyone :)

Updates from PyCon Pune, 12th January

This is a small post about PyCon Pune 2017. We had our weekly volunteers meet on 12th January in the hackerspace. You can view all the open items in the GitHub issue tracker. I am writing down the major updates below:

Registration

We have already reached the ideal number of registrations for the conference. Registration will close tomorrow, the 15th of January. This will help us know the exact number of people attending, enabling us to provide better facilities.

Hotel and travel updates

All international speakers have booked their tickets; the visa application process is going on.

Child care

Nisha got in touch with the Angels paradise academy about providing childcare.

T-shirts for the conference

T-shirts will be ordered by the coming Tuesday. We want to have a final look at the material from a different vendor this Sunday.

Speakers’ dinner

Anwesha is working on identifying the possible places.

Food for the conference

Nisha and Siddhesh identified Rajdhani as a possible vendor. They also tried out the Elite Meal box, and it was sufficient for one person. Providing box lunches will make the long lunch queues faster and easier to manage.

Big TODO items

  • Design of the badge.
  • Detailed instruction for the devsprint attendees.

Next meeting time

From now on we will be having two volunteers meets every week. The next meet is tomorrow at 3pm, at the hackerspace. The address is reserved-bit, 337, Amanora Chambers (Above Amanora Mall), Hadapsar, Pune.

Hackerspace in Pune

More than 10 years back, I met some genius minds from CCC Berlin at foss.in: Harald Welte, Milosch Meriac, and Tim Pritlove. I was in complete awe seeing the knowledge they had and how helpful they were. Milosch became my mentor, and I learned a lot of things from him, starting from the first steps of soldering to how to compile firmware on my laptop.

In 2007, I visited Berlin for LinuxTag. I was staying with Milosch & Brita. I managed to spend 2 days in the CCC Berlin, and that was an unbelievable experience for me. All the knowledge on display and all the approachable people there made me ask Milosch how we could have something similar. The answer was simple: if there is no such club/organization, then create one with friends. That stayed in my mind forever.

In 2014, I gave a presentation at Hackerspace Santa Barbara, thanks to Gholms. I also visited Hackerspace SG during FOSSASIA 2015 and made many new friends. I went back to Hackerspace SG again in 2016, right after dropping our bags at the hotel; I just fell in love with the people and the place.

Meanwhile, over the years, we tried to have a small setup downstairs in the flat where Sayan and Chandan used to live. We had our regular Fedora meetups there, and some coding sessions for new projects. But it was not a full-time place for everyone to meet and have fun building things. We could not afford to rent space anywhere nearby, even with 4-5 friends joining in together. I discussed the dream of having a local hackerspace with our friends Siddhesh and Nisha Poyarekar, and that made a difference.

Birth of Hackerspace Pune

Nisha went ahead with the idea and founded https://reserved-bit.com, a hackerspace/makerspace in Pune. Last month I had to visit Kolkata urgently, which meant I missed the initial days of setting up the space. But as soon as we came back to Pune, I visited it.

It is located at 337, Amanora Chambers, just on top of the East Block at Amanora Mall. The location is great for another reason: if you are tired of hacking on things, you can go down and roam around the mall. That also means we have Starbucks, KFC, and other food places just downstairs :)

There are lockers available for annual members. There is no place for cooking, but there is a microwave. You can find many other details on the website. For now, I am working from there in the afternoons and evenings. So, if you have free time, drop by to this new space, and say hi :)

Note1: Today we are meeting there at 3pm for the PyCon Pune volunteers meet.

January 2017 PyLadies Pune meetup

Like many of the previous PyLadies Pune meetups, I took a session in this month’s meetup too; system programming basics was the topic. We ran the session for around an hour, but as this month’s meetup also had a guest session over Hangouts, we could not go longer. We will do a full-day workshop on the same topic in the future.

Guest session from Paul Everitt

In his session, Paul wrote a very simple game using the Arcade module in PyCharm. I relayed doubts from the participants to him over chat, and he responded in the live session. You can watch the recording of the full session on YouTube.

Paul had to wake up at 6:30 AM to take this session for us, so all of us were very happy and grateful that he did that for us.

This was the first time Anwesha could not attend the meetup, due to Py’s health. We had around 28 people at the meetup, the highest till date.