Kushal Das

FOSS and life. Kushal Das talks here.

Keeping the tools simple

When I talk about programming or teach in a workshop, I keep repeating one motto: try to keep things simple. Whenever I look at the modern complex systems which are popular among users, they generally turn out to be many simple tools working together to solve a complex problem.

The Unix philosophy

Back in my college days, I read about the Unix philosophy. It is a set of ideas and philosophical approaches to software development. From the Wikipedia page, we can find the following four points.

  • Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  • Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.
  • Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.
  • Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

Do one thing

The first point is something many people decide to skip. The idea of a perfect system which can do everything leads to complex, over-engineered software which takes many years to land in production. The first casualty is the user of the system, followed by the team who has to keep the system up and running. A simple system with proper documentation also attracts a good number of users. These first users become your community. They try out the new features and provide valuable feedback. If you are working on an Open Source project, creating that community around your project is always important for its sustainability.

Pipe and redirection between tools

Piping and redirection in Linux shells were another bit of simple magic I learned during my early days in college. How a tool like grep can take an input stream and produce an output, which in turn can be consumed by the next tool, was one of the best things I found in the terminal. As a developer, I spend a lot of time in terminals, and piping and redirection are part of daily life.
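The same composition can be driven from a script as well. The sketch below wires the stdout of one process into the stdin of another, just like `printf ... | grep error` in a shell; it assumes the `printf` and `grep` binaries are available on the system.

```python
import subprocess

# First tool in the pipeline: produce a few lines of text.
producer = subprocess.Popen(
    ["printf", "one\nerror: two\nthree\nerror: four\n"],
    stdout=subprocess.PIPE,
)
# Second tool: filter the stream, consuming the producer's stdout.
consumer = subprocess.Popen(
    ["grep", "error"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)
producer.stdout.close()  # let the producer receive SIGPIPE if grep exits early
output = consumer.communicate()[0].decode()
print(output)
```

Each tool stays simple; the pipeline does the complex work.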

Build and try and repeat

I think all of the modern agile followers know this point very well. Unless you allow your users to try out your tool and provide feedback, your tool will not be welcomed by them. We, the programmers, have this attitude that every problem can be solved by code, and that our ideas are always correct. No, that is not the case at all. Go out into the world, show your tool to as many people as possible, and take feedback. Rewrite and rebuild your tool as required. If you wait for 3 years and hope that someone will force your tool onto the users, that will not go well in the long run.

Do One Thing and Do It Well

The whole idea of Do One Thing and Do It Well has been discussed many times. Search the term in your favorite search engine, and you will surely find many documents explaining the idea in detail. Following this idea while designing tools or systems has helped me to date. Tunir and gotun try to follow the same ideas as much as possible. They are built to execute some commands on a remote system and act according to the exit codes; I think that is the one-line description of both tools. To verify that a tool is simple, I keep handing it to new users and going through their feedback.
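That one-line description can itself be sketched in a few lines. The following is only my illustration of the idea (running commands locally via subprocess, not Tunir's actual code, which works over ssh): run each command in order, record its exit code, and stop the job at the first failure.

```python
import subprocess

def run_tests(commands):
    """Run each shell command in order; stop at the first non-zero exit.

    Returns a list of (command, exit_code) pairs for the commands that ran.
    """
    results = []
    for cmd in commands:
        status = subprocess.run(cmd, shell=True).returncode
        results.append((cmd, status))
        if status != 0:  # a gating test failed, stop the job
            break
    return results

results = run_tests(["true", "echo hello", "false", "echo never-reached"])
```

Everything else in such a tool (provisioning, reporting) is just bookkeeping around this loop.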

Last night we received a mail from Dusty Mabe in the Fedora Cloud list, to test the updates-testing tree for Fedora Atomic. At the end of the email, he also gave the command to execute to rebase to the updates-testing tree.

# rpm-ostree rebase fedora-atomic/25/x86_64/testing/docker-host 

With that as input from upstream, it was just a matter of adding that command as one line on top of the current official Fedora Atomic tests, followed by a reboot command and a wait for the machine to come back online.

sudo rpm-ostree rebase fedora-atomic/25/x86_64/testing/docker-host
@@ sudo reboot
SLEEP 120
curl -O http://infrastructure.fedoraproject.org/infra/autocloud/tunirtests.tar.gz
tar -xzvf tunirtests.tar.gz
...

This helped me find a regression in the atomic command within the next few minutes, while I was working on something else. As I reported the issue upstream, they are already working on a solution (some discussion here). The simplicity of the tool helped me get things done faster in this case.

Please let me know what you think about this particular idea about designing software in the comments below.

Working over ssh in Python

Working with remote servers is a common scenario for most of us. Sometimes we do our actual work on those remote computers, sometimes our code does something for us on the remote systems. Even Vagrant instances on your laptop are remote systems: you still have to ssh into them to get things done.

Setting up the remote systems

Ansible is the tool we use in Fedora Infrastructure. All of our servers are configured using Ansible, and all of those playbooks/roles are in a public git repository. This means you can also set up your remote systems or servers in the exact same way the Fedora Project does.

I also have many friends who manage their laptops or personal servers using Ansible. Setting up a new development system means just running a playbook for them. If you want to start doing the same, I suggest you have a look at the lightsaber built by Ralph Bean.

Working on the remote systems using Python

There will always be special cases where you have to do something on a remote system directly from your application rather than calling an external tool (read my previous blog post on the same topic). Python has an excellent module called Paramiko to help us out. It is a Python implementation of the SSHv2 protocol.

import paramiko


def run(host='127.0.0.1', port=22, user='root',
        command='/bin/true', bufsize=-1, key_filename='',
        timeout=120, pkey=None):
    """
    Executes a command using paramiko and returns the result.

    :param host: Host to connect to
    :param port: The port number
    :param user: The username of the system
    :param command: The command to run
    :param key_filename: SSH private key file
    :param pkey: RSAKey if we want to log in with an in-memory key
    :return: (stdout text, exit status)
    """
    client = paramiko.SSHClient()
    # Do not verify or store the remote host key.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    client.connect(hostname=host, port=port,
                   username=user, key_filename=key_filename,
                   banner_timeout=10)
    chan = client.get_transport().open_session()
    chan.settimeout(timeout)
    chan.set_combine_stderr(True)
    chan.get_pty()
    chan.exec_command(command)
    stdout = chan.makefile('r', bufsize)
    stdout_text = stdout.read()
    status = int(chan.recv_exit_status())
    client.close()
    return stdout_text, status

The above function is a modified version of the run function from the Tunir codebase. We create a new client, and then connect to the remote system. If you have an in-memory implementation of the RSA key, then you can use the pkey parameter of the connect method; otherwise, you can provide the full path to the private key file as shown in the above example. I also don't want to verify or store the host key, so the second line of the function adds a policy to make sure of that.

Working on the remote systems using Golang

Now, if you want to do the same in golang, it is not much different. We will use golang's crypto/ssh package. The following is taken from the gotun project. Remember to fill in the proper error handling as required by your code; I am just copy-pasting the important parts from my code as an example.

func (t TunirVM) FromKeyFile() ssh.AuthMethod {
	file := t.KeyFile
	buffer, err := ioutil.ReadFile(file)
	if err != nil {
		return nil
	}

	key, err := ssh.ParsePrivateKey(buffer)
	if err != nil {
		return nil
	}
	return ssh.PublicKeys(key)
}

sshConfig := &ssh.ClientConfig{
	User: viper.GetString("USER"),
	Auth: []ssh.AuthMethod{
		vm.FromKeyFile(),
	},
}

connection, err := ssh.Dial("tcp", fmt.Sprintf("%s:%s", ip, port), sshConfig)
session, err = connection.NewSession()
defer session.Close()
output, err = session.CombinedOutput(actualcommand)

Creating an ssh connection to a remote system using either Python or Golang is not that difficult. Based on the use case, choose either to have that power in your code or to reuse an existing powerful tool like Ansible.

New features in Gotun

Gotun is a from-scratch golang port of Tunir. Gotun can execute tests on remote systems, or it can run tests on OpenStack and AWS.

Installation from git

If you have a working golang setup, then you can use the standard go get command to install the latest version of gotun.

$ go get github.com/kushaldas/gotun

Configure based on YAML files

Gotun expects job configuration in a YAML file. The following is an example of a job for OpenStack.

---
BACKEND: "openstack"
NUMBER: 3

OS_AUTH_URL: "URL"
OS_TENANT_ID: "Your tenant id"
OS_USERNAME: "USERNAME"
OS_PASSWORD: "PASSWORD"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-24-20161031.0.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "group1"
    - "default"
OS_NETWORK: "NETWORK_POOL_ID"
OS_FLOATING_POOL: "POOL_NAME"
OS_KEYPAIR: "KEYPAIR NAME"
key: "Full path to the private key (.pem file)"

You can also point OS_IMAGE to a local qcow2 image, which will then be uploaded to the cloud and used. After the job is done, the image will be removed.

Multiple VM cluster on OpenStack

The OpenStack based jobs also support multiple VMs. In the above example, we are actually creating three instances from the image mentioned.

Job file syntax

Gotun supports the same syntax as the actual tests of Tunir. Any line starting with ## marks a non-gating test: even if it fails, the job will continue. For a cluster based job, use vm1, vm2 and similar prefixes to mark which VM should run the command.
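As an illustration of that syntax (my own sketch, not gotun's actual parser), each test line can be reduced to three pieces of information: is it gating, which VM runs it, and what command to execute. Commands without a vm prefix are assumed here to default to vm1.

```python
import re

def parse_line(line):
    """Classify one test line as (gating, vm, command)."""
    gating = not line.startswith("##")  # ## marks a non-gating test
    line = line.lstrip("#").strip()
    # A leading vm1/vm2/... selects the target VM in a cluster job.
    match = re.match(r"^(vm\d+)\s+(.*)$", line)
    if match:
        return gating, match.group(1), match.group(2)
    return gating, "vm1", line
```

So `vm2 sudo reboot` is a gating test on the second VM, while `## ls -l ./` is a non-gating test on the default VM.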

Rebuild of the instances on OpenStack

For OpenStack based jobs, gotun adds a new directive, REBUILD_SERVERS, which will rebuild all the instances. In case one of your tests does something destructive to any instance, using this new directive you can rebuild all the instances and start from scratch. The following is the tests file and the output from one such job.

echo "hello asd" > ./hello.txt
vm1 sudo cat /etc/machine-id
mkdir {push,pull}
ls -l ./
pwd
REBUILD_SERVERS
sudo cat /etc/machine-id
ls -l ./
pwd
$ gotun --job fedora
Starts a new Tunir Job.

Server ID: e0d7b55a-f066-4ff8-923c-582f3c9be29b
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Server ID: a0b810e6-0d7f-4c9e-bc4d-1e62b082673d
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  echo "hello asd" > ./hello.txt
Executing:  vm1 sudo cat /etc/machine-id
Executing:  mkdir {push,pull}
Executing:  ls -l ./
Executing:  pwd
Going to rebuild: 209.132.184.241
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Going to rebuild: 209.132.184.242
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  sudo cat /etc/machine-id
Executing:  ls -l ./
Executing:  pwd

Result file at: /tmp/tunirresult_180507156


Job status: true


command: echo "hello asd" > ./hello.txt
status:true



command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: mkdir {push,pull}
status:true



command: ls -l ./
status:true

total 4
-rw-rw-r--. 1 fedora fedora 10 Jan 25 13:58 hello.txt
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 pull
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 push


command: pwd
status:true

/var/home/fedora


command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: ls -l ./
status:true

total 0


command: pwd
status:true

/var/home/fedora


Total Number of Tests:8
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.

Using Ansible inside a job is now easier

Before running any actual test command, gotun creates a file called current_run_info.json in the job directory; we can now use that to create an inventory file for Ansible. Then we can mark any Ansible playbook as a proper test in the job description.

#!/usr/bin/env python3
import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
host2 = data['vm2']
key = data['keyfile']

result = """{0} ansible_ssh_host={1} ansible_ssh_user={2} ansible_ssh_private_key_file={3}
{4} ansible_ssh_host={5} ansible_ssh_user={6} ansible_ssh_private_key_file={7}""".format(host1,host1,user,key,host2,host2,user,key)
with open("inventory", "w") as fobj:
    fobj.write(result)

The above-mentioned script is an example: we read the JSON file created by gotun, and then write a new inventory file to be used by an Ansible playbook. The documentation has one example of running atomic-host-tests inside gotun.

If you have any questions, come and ask in the #fedora-cloud channel. You can contact me on Twitter too.

Shonku, a static blogging tool

I moved my blog to a static blog more than a year back, using Nikola. It was a good move; writing was simple again. I spend more time writing my blogs than thinking about formatting or themes. But somewhere in the back of my mind I was still looking for something even simpler. Maybe a minimalistic Nikola.

Just before PyCon US I started writing a new tool with this approach. I named it after Professor Shonku, my childhood hero. The logs from his diary were the source of all the stories :)

Shonku is written in golang; many issues still need to be fixed, but it can be used. I moved my blog to shonku a few weeks back and things seem to be normal.

  • Posts in Markdown
  • Can have themes

The above two are the initial features I kept in mind while developing it. You can see a different theme here; the source of the theme can be found here.

The documentation will require more love; I am working on it slowly. The building/installation instructions work nicely. If you find any issue or want to make any feature request, please file an issue on github.

To try it out you can download a binary or build it from the source.

$ sha256sum shonku.bin

82476d8e4006da88bf09e1333597f8c0c1a31b3ddf2281aae54ee51e4eb43469 shonku.bin
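If sha256sum is not at hand, the same check can be done from Python with the standard hashlib module. This is a generic sketch; the demo file and its contents are just for illustration, and for the real check you would point it at the downloaded shonku.bin and compare with the published checksum.

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fobj:
        for chunk in iter(lambda: fobj.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demonstrate on a small file with known contents.
with open("demo.txt", "wb") as fobj:
    fobj.write(b"hello")
checksum = sha256_of("demo.txt")
```

Reading in chunks keeps memory usage flat even for large binaries.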

Using golang with Eucalyptus

Here is an example code to start a new instance in your Eucalyptus cloud using golang. We are using the goamz library.

package main

import (
    "fmt"
    "github.com/mitchellh/goamz/aws"
    "github.com/mitchellh/goamz/ec2"
    "net/http"
)

func main() {
    DefaultClient := &http.Client{
        Transport: &http.Transport{
            Proxy: http.ProxyFromEnvironment,
        },
    }

    auth := aws.Auth{"AKI98", "5KZb4GbQ", ""}
    mec2 := ec2.NewWithClient(
        auth,
        aws.Region{EC2Endpoint: "http://euca-ip:8773/services/Eucalyptus"},
        DefaultClient,
    )

    options := ec2.RunInstances{
        ImageId:      "emi-F5433303",
        InstanceType: "m1.xlarge",
        KeyName:      "foo",
    }
    resp, _ := mec2.RunInstances(&options) // error handling omitted for brevity

    fmt.Println(resp)

}

Searching my blog

More than a month back, I added a small scale search engine which indexes only my blog entries. You can access it either using the search box on the page or from search.

The service is still at a very early stage; the indexer is in Python and the web service is written in golang (less than 200 lines, including a few lines of HTML too).

I still have to work on the ranking algorithm; right now it is very generic.
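A generic ranking like that can be as small as an inverted index scored by term frequency. The sketch below is my own illustration of the idea (the sample posts are made up), not the actual indexer behind the site.

```python
from collections import defaultdict

def build_index(posts):
    """Map each word to {post_id: occurrence count}."""
    index = defaultdict(lambda: defaultdict(int))
    for post_id, text in posts.items():
        for word in text.lower().split():
            index[word][post_id] += 1
    return index

def search(index, query):
    """Score posts by total query-term frequency, best first."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for post_id, count in index[word].items():
            scores[post_id] += count
    return sorted(scores, key=scores.get, reverse=True)

posts = {
    "p1": "keeping the tools simple simple simple",
    "p2": "working over ssh in python",
}
index = build_index(posts)
```

Better ranking (tf-idf, phrase matching) can be layered on the same index later without changing the interface.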

FAS OpenID authentication with golang

The Fedora accounts system provides OpenID authentication. If you are using python-fedora to do authentication in your webapp, you already know what I am talking about. I wanted to do the same, but from a webservice written in golang. I found a few working openid solutions in golang, but none of them provides the sreg and other extensions which FAS supports.

I pushed a patched openid.go to my github which by default supports all the extensions for FAS. We have an example which shows how to call the Verify function and the output. Now you can have FAS authentication in your app too :)

id, err := openid.Verify(
    fullUrl,
    discoveryCache, nonceStore)