Kushal Das

FOSS and life. Kushal Das talks here.

Conference travel for speakers

In the Free and Open Source Software culture, conferences have become an important part of the community. Most projects and communities do their work over this beautiful thing known as the Internet, with people taking part from the warmth of their homes. Conferences are the only time when we all get a chance to meet, discuss new ideas, and share knowledge among ourselves. Conference speakers are generally volunteers who agree to spend a lot of time preparing a talk, giving it, and then doing the Q&A session. This also involves a lot of travel; for any mid-sized to big conference, you will always find at least a couple of speakers traveling halfway around the world to give those talks.

We, the organizers of many of these conferences, can do a few things which help the speakers have a trouble-free mind.

Inform the talk selection result as soon as possible (aka visas take time)

We should inform the speakers about the talk selection result as soon as possible. International travel still requires a visa for citizens of many countries, and visas are generally difficult to obtain on short notice. For example, I am an Indian, and to get a visa for most countries I have to submit my last three years of income tax documents, the last six months of bank statements, office leave letters, and many other documents. Obtaining these documents takes time, and we should make sure that the speakers have plenty of time to get this done.

Help with the travel timings

As the organizers are local to the conference host city, it is better if you provide some insight about travel timings to your conference speakers. They may not want to take a red-eye flight, or maybe taking that early morning flight will provide a much better experience by skipping all the city traffic.

Be in touch during local travel

For all the conferences we organized in Pune (FUDCon 2011, 2015, PyCon Pune 2017), many of the international speakers landed in Mumbai, and we organized cabs to pick them up, mostly between 12 and 4 AM. They then had to spend the next 4 hours in the cab, so we made sure to club at least 2 speakers in each cab, and also made sure that the drivers could speak English. Remember the language problem: English is not spoken fluently everywhere, and that goes for the speakers too. If the speakers are coming on their own from the airport, make sure that they have all the details, and please try to have someone waiting for them at the airport. It is better if you introduce your volunteer to the speakers before the conference. That way, when the speakers come out of the airport, they will see a familiar face waiting for them.

One of our tactics was to coordinate all of the speaker travel from the same hotel the speakers were staying in. I was awake for the 2 nights and started talking to the speakers as soon as they entered their cabs. We also talked to them during the trip to make sure that everything was okay and that they were not uncomfortable. The only sad part was that, due to the lack of light, many missed the beautiful view of the Mumbai-Pune expressway.

During the conference days, before, and after, we make sure that we have enough local volunteers to provide any help the speakers require. Many times the speakers prefer to visit some of the tourist locations before or after the conference. As hosts, this also goes on our list of responsibilities: helping with cab bookings or providing suggestions for sightseeing.

I remember the long list of speakers, and the checkboxes beside their names to mark that each speaker had safely boarded the flight for their return journey. Our job is not done until they reach back home, and we make sure to keep an eye out for any emergency. Many of these ideas also apply to speakers who are coming to the city for the first time from other parts of the country. In a country like India, where we have 22 official languages, it is difficult for most Indians to understand or speak the local language in any state other than their own.

Previously, we have had experiences where someone felt sick during the conference, and we had to take them to visit doctors and make sure that they were okay. Saying that we have a large conference and cannot take care of everyone is easy, but remember the amount of effort these speakers are putting in to make your conference their own and to provide a great experience to the conference attendees.

Event report: FOSSASIA 2017

FOSSASIA 2017 reminded me of foss.in. After a long time, finally, a conference with similar aspects: a similarly tight organizing team, and the presence of upstream communities from different locations. The participation of the local Singapore tech community along with Hackerspace Singapore is a serious boost. This was my 4th FOSSASIA conference, and also my 3rd time in Singapore. I should thank Mario, Hong, and the rest of the organizers for making this event a very pleasant experience.

This time Sayan booked an Airbnb for Anwesha and me. Saptak, Medo, Siddhesh, Praveen Patil, Pooja Yadav, and Pravin Kumar were also staying in the same Airbnb in Chinatown. The conference venue was the Singapore Science Center, just like last year. Having the conference in the same place helps, as the MRT route makes it very easy to reach there on time.

The day before the conference we had a speakers meetup in the Singapore Microsoft office. We also received a tour of the office, where the person in charge explained how they are managing an office without permanent seating positions.

Day one

The conference started at 9:24 AM (as Hong asked us to remember the time). I attended the talks from Harish Pillay and Chan Cheow Hoe. Having the CIO of the country come to the conference and give a talk on Open Source is a great feeling. In 2015 we had the Minister for Foreign Affairs, Mr. Vivian Balakrishnan, giving a keynote (and talking about the NodeJS code he wrote). The way the government takes part in local community events is something other countries should try to learn from. Of course, Singapore has the benefit of being small in size.

Though the day was full of talks related to AI and machine learning, there were two talks I was waiting to attend. After lunch, the first one was from Bunnie Huang, where he spoke about making technology more inclusive. He talked about Chibitronics. Before I traveled to Singapore, I had actually asked him for a copy of his new book, The Hardware Hacker. I got my copy signed by him after his talk :) (I already finished the book while coming back to India, more on that later in a separate blog post). I also met Xobs and found a Chibitronics Love-to-Code board in his pocket :)

Later in the day, Frank Karlitschek gave his keynote titled Protecting privacy with free software. He brought up the original idea of the Internet being decentralized. The last talk of the day was a panel discussion on Artificial Intelligence.

Day 2 & Day 3

I spent most of my time in the Python track, in between jumping around different floors of the venue meeting people. I personally ran a two-hour workshop on MicroPython and NodeMCU. Anwesha was busy at the PyLadies table along with Pooja. I forgot to show the poster of PyCon APAC in the Python track :( But you can still submit talks and attend the conference. Sadly, it will clash with another conference for me.

Anwesha had her talk on day 3, and her laptop’s display decided to crash just before the talk. But finally the slides came back :) I also attended the SELinux workshop from Jason Zaman. He and a few BSD friends convinced me to try out ZFS, and then build a new home storage setup with FreeNAS.

Now I have to wait for the next edition of FOSSASIA. It is a great place where I can meet my friends from different parts of the world, and share ideas :)

Having emojis in your Ubiquiti SSID

The Ubiquiti management interface does not allow emojis in the network SSID. After asking about it over Twitter, Donald Stufft pointed me to a hack on their forum.

First, create a second wifi network, and then you will have to copy the /tmp/system.cfg file from the access point.

scp yourusername@192.168.1.IP:/tmp/system.cfg .

After this, open up system.cfg in your favorite editor and replace the newly created network name with the emoji text you want. Copy the file back to the access point. Then ssh into the access point; it will give you a busybox prompt. Type the save command in the prompt, and then reboot the access point from the web-console. This will give you the SSID you were looking for :)
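
Putting the copy-back and save steps together, the whole sequence looks roughly like this (the username and IP are placeholders, the same as in the scp command above):

$ scp system.cfg yourusername@192.168.1.IP:/tmp/system.cfg
$ ssh yourusername@192.168.1.IP
save    # typed at the busybox prompt on the access point, then reboot from the web-console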

Testing Fedora MariaDB layered image using gotun

Yesterday Adam Miller announced the availability of the latest Fedora Layered Image release. The following container images are available in the Fedora registry:

  • registry.fedoraproject.org/f25/cockpit:130-1.3.f25docker
  • registry.fedoraproject.org/f25/cockpit:130
  • registry.fedoraproject.org/f25/cockpit
  • registry.fedoraproject.org/f25/kubernetes-node:0.1-3.f25docker
  • registry.fedoraproject.org/f25/kubernetes-node:0.1
  • registry.fedoraproject.org/f25/kubernetes-node
  • registry.fedoraproject.org/f25/mariadb:10.1-2.f25docker
  • registry.fedoraproject.org/f25/mariadb:10.1
  • registry.fedoraproject.org/f25/mariadb
  • registry.fedoraproject.org/f25/kubernetes-apiserver:0.1-3.f25docker
  • registry.fedoraproject.org/f25/kubernetes-apiserver:0.1
  • registry.fedoraproject.org/f25/kubernetes-apiserver
  • registry.fedoraproject.org/f25/kubernetes-master:0.1-5.f25docker
  • registry.fedoraproject.org/f25/kubernetes-master:0.1
  • registry.fedoraproject.org/f25/kubernetes-master
  • registry.fedoraproject.org/f25/flannel:0.1-3.f25docker
  • registry.fedoraproject.org/f25/flannel:0.1
  • registry.fedoraproject.org/f25/flannel
  • registry.fedoraproject.org/f25/kubernetes-proxy:0.1-3.f25docker
  • registry.fedoraproject.org/f25/kubernetes-proxy:0.1
  • registry.fedoraproject.org/f25/kubernetes-proxy
  • registry.fedoraproject.org/f25/etcd:0.1-5.f25docker
  • registry.fedoraproject.org/f25/etcd:0.1
  • registry.fedoraproject.org/f25/etcd
  • registry.fedoraproject.org/f25/toolchain:1-2.f25docker
  • registry.fedoraproject.org/f25/toolchain:1
  • registry.fedoraproject.org/f25/toolchain

Now, I am going to show how we can add a set of tests for the MariaDB container under gotun on a Fedora Atomic host. I will be firing up the job in the Fedora Infra Cloud.

Because of the nature of the Atomic host, I decided to write a set of tests in golang and build a static binary which can be executed inside the Atomic host. This way we do not have to worry about any dependencies on the host.

$ ldd tunirtests.test
	not a dynamic executable
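
The binary itself can be produced with go test's compile-only mode; I am assuming a build step like the following (with cgo disabled so that the resulting binary stays static):

$ go get github.com/go-sql-driver/mysql
$ CGO_ENABLED=0 go test -c -o tunirtests.test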

Source code of the test

The following is the source for mysql_test.go.

package main

import (
	"testing"
	"database/sql"
	 _ "github.com/go-sql-driver/mysql"
)

func TestMariadb(t *testing.T) {
	db, err := sql.Open("mysql", "user:password@/dbname")
	if err != nil {
		t.Fatal("Can not connect to mariadb", err.Error())
	}
	defer db.Close()
	stmt := "CREATE TABLE IF NOT EXISTS funny (id int(5) NOT NULL AUTO_INCREMENT, name varchar(250), PRIMARY KEY(id))"
	_, err = db.Exec(stmt)
	if err != nil {
		t.Fatal("Can not create table", err.Error())
	}

	_, err = db.Exec("INSERT INTO funny VALUES (?,?)",1,"kushal")
	if err != nil {
		t.Fatal("Can not insert data", err.Error())
	}
	_, err = db.Exec("INSERT INTO funny VALUES (?,?)",2,"Python")
	if err != nil {
		t.Fatal("Can not insert data", err.Error())
	}
	rows, err := db.Query("SELECT * from funny")
	if err != nil {
		t.Fatal("Can not select data", err.Error())
	}
	defer rows.Close()
	for rows.Next() {
		var uid int
		var name string
		err := rows.Scan(&uid, &name)
		if err != nil {
			t.Fatal("Error in selecting data", err.Error())
		}
		if uid == 1 {
			if name != "kushal" {
				t.Fatal("Oops, data mismatch", name)
			}
		}
	}

}

As you can see above, I have hardcoded the username, password, and db name in the source. This is because while starting the MariaDB container, I can pass those values to the container using environment variables.

I am creating a table, inserting 2 rows in it, and then selecting those rows back.

The gotun job test file

The following is the content of the mariadb.txt file.

curl -O https://kushal.fedorapeople.org/tunirtests.test
sudo docker run -d --name mariadb_database -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=dbname -p 3306:3306 registry.fedoraproject.org/f25/mariadb
SLEEP 30
chmod +x tunirtests.test
./tunirtests.test -test.run TestMariadb -test.v

At first, I am downloading the binary test file from my Fedora people space. We can also push it to the VM from the host using the COPY directive. Next, I am running the docker command to fire up the container with all the required environment variables. After that, I am sleeping for 30 seconds, as it takes time for the container to become ready for use. In the last line, I am executing the binary to test the container. Now, we can add more fancy command line arguments to the docker run command (like a data volume), and then test those too. But I am going to keep those as an exercise for the reader :)
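
For example, instead of the curl line, the binary could be pushed from the host with the COPY directive (the same syntax used in the redis example later on this page; the local path here is only illustrative):

COPY: ./tunirtests.test vm1:./

The rest of the job file stays the same.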

If you have any comments, feel free to drop me a mail or tweet to @kushaldas.

The final output from the test is also given below.

./gotun --job mariadb
Starts a new Tunir Job.

Server ID: 9c4168ae-d8c4-4534-a53c-c2a291d4f6c5
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  curl -O https://kushal.fedorapeople.org/tunirtests.test
Executing:  sudo docker run -d --name mariadb_database -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=dbname -p 3306:3306 registry.fedoraproject.org/f25/mariadb
Sleeping for  30
Executing:  chmod +x tunirtests.test
Executing:  ./tunirtests.test -test.run TestMariadb -test.v
---------------


Result file at: /tmp/tunirresult_586858839


Job status: true


command: curl -O https://kushal.fedorapeople.org/tunirtests.test
status:true

  %!T(MISSING)otal    %!R(MISSING)eceived %!X(MISSING)ferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 4347k  100 4347k    0     0  3722k      0  0:00:01  0:00:01 --:--:-- 3725k


command: sudo docker run -d --name mariadb_database -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=dbname -p 3306:3306 registry.fedoraproject.org/f25/mariadb
status:true

Unable to find image 'registry.fedoraproject.org/f25/mariadb:latest' locally
Trying to pull repository registry.fedoraproject.org/f25/mariadb ... 
sha256:9a2c3bc162b6b1a1c286302c3e635d77c6e31cbca5d354ed0a839c659e1ecfdc: Pulling from registry.fedoraproject.org/f25/mariadb
be44cf43edd1: Pull complete 
a87762b3425a: Pull complete 
Digest: sha256:9a2c3bc162b6b1a1c286302c3e635d77c6e31cbca5d354ed0a839c659e1ecfdc
Status: Downloaded newer image for registry.fedoraproject.org/f25/mariadb:latest
e9c72f1f41b1ec763418f969359c3319bdc4c95d8123fc3b5e6617768b90d00f


command: chmod +x tunirtests.test
status:true



command: ./tunirtests.test -test.run TestMariadb -test.v
status:true

=== RUN   TestMariadbConnect
--- PASS: TestMariadbConnect (0.20s)
PASS


Total Number of Tests:4
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.

Running gotun inside Taskotron

A few weeks ago I wrote about how we can run gotun inside Jenkins. Following the same path, today I am writing about how we can run gotun inside Taskotron. Tim Flink & Mike Ruckman (roshi) helped me with the initial setup.

Setting up the system

On my laptop, I created a job.yml following an example Tim posted on IRC.

name: runs gotun inside taskotron

environment:
  rpm:
    - docker

actions:
  - name: run the job
    shell:
      - gotun --job commands

I also had the real definition of the gotun job in the commands.yml file, and the tests in the commands.txt file. In this job, I am firing up an instance in the Fedora Infra Cloud. After this, I just ran the runtask command to start the job.

$ sudo runtask -i foo-1.2.3 -t koji_build job.yml
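
The commands.txt file itself is just the usual gotun list of commands, one per line. A short sketch, based on the Fedora Atomic test commands I use in the other gotun examples on this page:

curl -O http://infrastructure.fedoraproject.org/infra/autocloud/tunirtests.tar.gz
tar -xzvf tunirtests.tar.gz
...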

This way we can put gotun inside Taskotron, and then execute the same set of tests we currently run for testing Fedora Atomic images.

Running OpenShift using Minishift

You may have already heard about Kubernetes, or you may be using it right now. OpenShift Origin is a distribution of Kubernetes which is optimized for continuous development and multi-tenant deployment. It also powers Red Hat OpenShift.

Minishift is an upcoming tool which enables you to run OpenShift locally on your computer, as a single-node OpenShift cluster inside a VM. I am using it on a Fedora 25 laptop, with the help of KVM. It can also be used on Windows or OSX. For KVM, I first had to install docker-machine-driver-kvm. Then I downloaded the latest minishift from the releases page, unzipped it, and put the binary in my path.
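
On Linux, that boils down to something like the following (the archive name is a placeholder here; use whatever the releases page gives you):

$ unzip minishift-linux-amd64.zip    # placeholder archive name from the releases page
$ chmod +x minishift                 # then move it to any directory in your $PATH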

$ ./minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
E0209 20:42:29.927281    4638 start.go:135] Error starting the VM: Error creating the VM. Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.42.243:2376": tls: DialWithDialer timed out
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
. Retrying.
Provisioning OpenShift via '/home/kdas/.minishift/cache/oc/v1.4.1/oc [cluster up --use-existing-config --host-config-dir /var/lib/minishift/openshift.local.config --host-data-dir /var/lib/minishift/hostdata]'
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.4.1 image ... 
   Pulling image openshift/origin:v1.4.1
   Pulled 0/3 layers, 3% complete
   Pulled 0/3 layers, 24% complete
   Pulled 0/3 layers, 45% complete
   Pulled 1/3 layers, 63% complete
   Pulled 2/3 layers, 81% complete
   Pulled 2/3 layers, 92% complete
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using 192.168.42.243 as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://192.168.42.243:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

The oc binary is in the ~/.minishift/cache/oc/v1.4.1/ directory, so you can add that to your PATH. If you open up the above-mentioned URL in your browser, you will find your OpenShift cluster up and running.
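
For example (the oc login line is the administrator login command printed in the output above):

$ export PATH=$PATH:~/.minishift/cache/oc/v1.4.1/
$ oc login -u system:admin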

Now you can start reading Using Minishift to start using your brand new OpenShift cluster.

Running gotun inside Jenkins

By design, gotun is a command line tool which can be called from other scripts, or from any larger system. In the world of CI, Jenkins is the biggest name. So, one of the goals was to be able to execute gotun within Jenkins for tests.

Setting up a Jenkins instance for test

If you don’t have a Jenkins setup already, you can just create a new one for staging using the official container. For my example setup, I am using one at http://status.kushaldas.in

Setting up the first job

My only concern was how to set up the secrets for authentication information in Jenkins (remember, I am a newbie in Jenkins). This blog post helped me get it done. In the first job, I am creating the configuration (in case we add something dynamic there in the future, like the image name). The secrets come from ENV variables, as described in the gotun docs. In the job, I am running the Fedora Atomic tests on the image. Here is one example console output.

Running the upstream Atomic host tests in gotun inside Jenkins

My next task was to run the upstream Project Atomic host tests using a similar setup. All the configuration files for the tests are available in this git repo. As explained in a previous post, onevm.py creates the inventory file for Ansible, and then runsetup.sh executes the playbook. You can view the job output here.

For both of the jobs, I am executing a Python script to create the job YAML files.

Keeping the tools simple

When I talk about programming or teach in a workshop, I keep repeating one motto: try to keep things simple. Whenever I look at the modern complex systems which are popular among users, they generally turn out to be many simple tools working together to solve a complex problem.

The Unix philosophy

Back in my college days, I read about the Unix philosophy. It is a set of ideas and philosophical approaches to software development. From the Wikipedia page, we can find four points:

  • Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  • Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.
  • Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.
  • Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

Do one thing

The first point is something many people decide to skip. The idea of a perfect system which can do everything leads to complex, over-engineered software which takes many years to land in production. The first casualty is the user of the system, then the team who has to keep the system up & running. A simple system with proper documentation also attracts a larger number of users. These first users become your community; they try out the new features and provide valuable feedback. If you are working on an Open Source project, creating that community around your project is always important for the sustainability of the project.

Pipe and redirection between tools

Piping and redirection in Linux shells were another piece of simple magic I learned during my early days in college. How a tool like grep can take an input stream and produce an output, which in turn can be used by the next tool, was one of the best things I found in the terminal. As a developer, I spend a lot of time in terminals, and we use piping and redirection many times in daily life.
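
A tiny example of that composability: count the running sshd processes with a pipe, or keep the full grep output in a file with redirection.

$ ps aux | grep sshd | wc -l
$ ps aux | grep sshd > sshd_processes.txt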

Build and try and repeat

I think all of the modern agile followers know this point very well. Unless you allow your users to try out your tool and provide feedback, your tool will not be welcomed by them. We, the programmers, have this attitude that every problem can be solved by code, and that our ideas are always correct. No, that is not the case at all. Go out in the world, show your tool to as many people as possible, and take feedback. Rewrite and rebuild your tool as required. If you wait for 3 years and hope that someone will force your tool on the users, that will not go well in the long run.

Do One Thing and Do It Well

The whole idea of Do One Thing and Do It Well has been discussed many times. Search the term in your favorite search engine, and you will surely find many documents explaining the idea in detail. Following this idea while designing tools or systems has helped me till date. Tunir and gotun try to follow the same ideas as much as possible. They are built to execute some commands on a remote system and act according to the exit codes. I think that is the one line description of both tools. To verify whether a tool is simple or not, I keep handing it to new users and going through their feedback.

Last night we received a mail from Dusty Mabe on the Fedora Cloud list, asking us to test the updates-testing tree for Fedora Atomic. At the end of the email, he also gave the command to execute to rebase to the updates-testing tree.

# rpm-ostree rebase fedora-atomic/25/x86_64/testing/docker-host 

With that as input from upstream, it was just a matter of adding that command as one line on top of the current official Fedora Atomic tests, followed by a reboot command and a wait for the machine to come back online.

sudo rpm-ostree rebase fedora-atomic/25/x86_64/testing/docker-host
@@ sudo reboot
SLEEP 120
curl -O http://infrastructure.fedoraproject.org/infra/autocloud/tunirtests.tar.gz
tar -xzvf tunirtests.tar.gz
...

This helped me find a regression in the atomic command within the next few minutes, while I was working on something else. I reported the issue to the upstream, and they are already working on a solution (some discussion here). The simplicity of the tool helped me get things done faster in this case.

Please let me know in the comments below what you think about this particular idea of designing software.

Testing a redis container using gotun

Testing containers is one of the major reasons for the existence of gotun. In this blog post, I am going to show you how easy it becomes to test a container. But easy is still a relative term: if setting up your container requires very complex steps, then you will have to do those first, and only then can you test the application. You can use Ansible to do the hard part of setting up the application.

Fedora Redis container

In our example, we will test the Fedora Redis container. Redis is an in-memory data structure store. We use it as a key-value store and as a database. It can also hold more complex data structures.

Let me post the test commands first.

sudo docker run -d --name redis fedora/redis
sudo docker run  -d --link redis:rd --name client fedora/redis
COPY: ./redistests.py vm1:./
sudo python3 -m unittest redistests.RedisTest.test_set -v
sudo python3 -m unittest redistests.RedisTest.test_append -v
sudo docker rm -f client
sudo python3 -m unittest redistests.RedisTest.test_newclient -v

In the first line, I am starting a new redis container. In the second line, I am running a container named “client”, which I will use in the first two tests. Then I delete that container, and as part of the last test, we create a new container and connect to the redis server container to get the value of the key name.

In the third line, I am copying a Python 3 unittest file over to the VM; the next two lines then execute the tests one by one. I wanted to check the output of the redis-cli command instead of the exit code, which is why I decided to use the Python unittest module. It makes parsing the output text much easier, and we can do complex analysis of the output. We follow the same style in the Fedora Atomic tests.

redistests.py

import unittest
import subprocess

def system(cmd):
    """
    Invoke a shell command.
    :returns: A tuple of output, err message, and return code
    """
    ret = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    out, err = ret.communicate()
    return out.decode("utf-8"), err.decode("utf-8"), ret.returncode


class RedisTest(unittest.TestCase):

    def test_set(self):
        out, err, eid = system("docker exec client redis-cli -h rd SET name kushal")
        out, err, eid = system("docker exec client redis-cli -h rd GET name")
        self.assertEqual(out, "kushal\n")

    def test_append(self):
        system("docker exec client redis-cli -h rd DEL clients")
        out, err, eid = system("docker exec client redis-cli -h rd APPEND clients ABC")
        self.assertEqual(out, "3\n")
        out, err, eid = system("docker exec client redis-cli -h rd APPEND clients XYZ")
        self.assertEqual(out, "6\n")
        out, err, eid = system("docker exec client redis-cli -h rd GET clients")
        self.assertEqual(out, "ABCXYZ\n")

    def test_newclient(self):
        out, err, eid = system("docker run --rm --link redis:r2d2 --name newclient fedora/redis redis-cli -h r2d2 GET name")
        self.assertEqual(out, "kushal\n")

The output from the job.

$ ./gotun --job commands
Starts a new Tunir Job.

Server ID: cb0f491d-a921-46dd-b6e1-3c4353884d6c
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  sudo docker run -d --name redis fedora/redis
Executing:  sudo docker run  -d --link redis:rd --name client fedora/redis
Executing COPY:  scp -o StrictHostKeyChecking=no -r -i /home/kdas/Downloads/kushal-testday.pem ./redistests.py fedora@209.132.184.210:./
Executing:  sudo python3 -m unittest redistests.RedisTest.test_set -v
Executing:  sudo python3 -m unittest redistests.RedisTest.test_append -v
Executing:  sudo docker rm -f client
Executing:  sudo python3 -m unittest redistests.RedisTest.test_newclient -v
---------------


Result file at: /tmp/tunirresult_210751937


Job status: true


command: sudo docker run -d --name redis fedora/redis
status:true

Unable to find image 'fedora/redis:latest' locally
Trying to pull repository docker.io/fedora/redis ... 
sha256:1bb7b6c03575dd43db6e8e3fc0b06c0b3df0178af9ea382b6f9df577d053bd22: Pulling from docker.io/fedora/redis
a3ed95caeb02: Pull complete 
cadab8e68e03: Pull complete 
9db1d534feda: Pull complete 
a4d5e4eefba6: Pull complete 
Digest: sha256:1bb7b6c03575dd43db6e8e3fc0b06c0b3df0178af9ea382b6f9df577d053bd22
Status: Downloaded newer image for docker.io/fedora/redis:latest
b83c3f13fa698627447575651e585f3308e795f579ea772c66d02ae0f1c72939


command: sudo docker run  -d --link redis:rd --name client fedora/redis
status:true

6096e414e2f7c314c9601ae6eaedbb3faf9fb265b70a4f77e5ba7b9a93b19209


command: sudo python3 -m unittest redistests.RedisTest.test_set -v
status:true

test_set (redistests.RedisTest) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.199s

OK


command: sudo python3 -m unittest redistests.RedisTest.test_append -v
status:true

test_append (redistests.RedisTest) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.289s

OK


command: sudo docker rm -f client
status:true

client


command: sudo python3 -m unittest redistests.RedisTest.test_newclient -v
status:true

test_newclient (redistests.RedisTest) ... ok

----------------------------------------------------------------------
Ran 1 test in 1.298s

OK


Total Number of Tests:6
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.