
PyLadies Pune meetup, February 2017

TL;DR - It was a lot of fun.

This month’s PyLadies Pune meetup was held at reserved-bit, the new hackerspace in Pune. The microbits were sent by Ntoll; without his help this workshop would not have been possible.

The morning

Anwesha left home early so that she could join in from the beginning. I came in late, as I had to get Py (our two-year-old daughter) ready for the day. By the time I reached the hackerspace, the rest of the participants were discussing PyCon Pune, and how they are going to represent PyLadies at the conference.

After a large round of coffee, I started setting up the laptops for Microbit development. That involved getting the latest Mu editor. I had precached the binaries on my laptop and shared them over the local network for faster access. We also had three people with Windows on their laptops, so we downloaded the device driver as explained in the Mu documentation. By this time we had 10 participants in the meetup.

Just when I started handing over the devices to each participant, I realized that I had left the main pack of devices back at home. Sayan ran back to our house and brought us the packet of Microbits. Meanwhile, all the participants wrote a script to find out the groups of the current user on the Linux systems. We shared a group file for the Windows users.
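
The following is a minimal sketch of the kind of script the participants wrote; the exact code varied from laptop to laptop, so treat this as an assumption of the general idea rather than the actual workshop solution.

import getpass
import grp
import os

user = getpass.getuser()
# Groups that list the user as a member.
groups = {g.gr_name for g in grp.getgrall() if user in g.gr_mem}
# The primary group is usually not listed in gr_mem, so add it explicitly.
groups.add(grp.getgrgid(os.getgid()).gr_name)
print(sorted(groups))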

Programming with Microbit

I spoke about the hardware and the backstory for a few minutes. Then we dived into the world of MicroPython. Everyone started scrolling their favorite message across the display. People also opened up the official documentation of the microbit-micropython project and started exploring the API on their own. After a few fun trials and errors, we moved into the world of music and speech. I had never tried these two modules before. Everyone plugged their earphones into the microbits using the alligator-clip cables. We also learned about handling button presses, and people were experimenting with all of these things together.
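
Here is a small MicroPython sketch along the lines of what we tried, assuming earphones connected to pin 0 and GND; the message and tune are just placeholders.

from microbit import display
import music
import speech

display.scroll("PyLadies Pune")   # scroll a message across the LED matrix
music.play(music.NYAN)            # play one of the built-in tunes
speech.say("Hello Pune")          # synthesized speech over the earphones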

In the last part of the workshop, I demoed the radio module. I think that was the most fun part of the whole day. People started sending out various messages and seeing them live on each other's devices. Siddhesh and Nisha went outside the hackerspace to find out how far away they could still receive the messages. It seems these small devices can cover a large area. People had enough time to experiment on their own. Looking at the enjoyment on their faces, we could tell how much fun they were having. We are going to see more of this during the PyCon Pune devsprints.
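
A minimal sketch of the radio demo could look like the following, assuming a microbit on each end; pressing button A sends a message, and anything received is scrolled on the display.

from microbit import button_a, display, sleep
import radio

radio.on()
while True:
    if button_a.was_pressed():
        radio.send("hello from PyLadies Pune")
    incoming = radio.receive()
    if incoming:
        display.scroll(incoming)
    sleep(100)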

Testing Fedora Atomic Images using upstream Atomic Host tests

Project Atomic has a group of tests written in Ansible. In this blog post, I am going to show how to use those along with gotun. I will be running the improved-sanity-test as suggested in the #atomic channel on Freenode.
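
The job below expects the playbooks to be reachable as tests/improved-sanity-test/main.yml from the job directory. One way to get them is to clone the upstream atomic-host-tests repository next to the job files; the exact directory layout is my assumption, so adjust the checkout so that the path above resolves.

$ git clone https://github.com/projectatomic/atomic-host-tests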

createinventory.py

The following is the content of the createinventory.py file. We use this file to generate the inventory file for Ansible.

#!/usr/bin/env python

import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
key = data['keyfile']

result = """{0} ansible_ssh_host={1} ansible_ssh_user={2} ansible_ssh_private_key_file={3}
""".format(host1, host1,user,key)
with open("./inventory", "w") as fobj:
    fobj.write(result)

gotun job configuration and the actual job file

---
BACKEND: "openstack"
NUMBER: 1

OS_AUTH_URL: "https://fedorainfracloud.org:5000/v2.0"
USERNAME: "username"
TENANT_ID: "ID"
PASSWORD: "PASSWORD"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-25-20170124.1.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "ssh-anywhere-cloudsig"
    - "default"
OS_NETWORK: "NETWORK ID"
OS_FLOATING_POOL: "external"
OS_KEYPAIR: "kushal-test"
key: "/home/kdas/kushal-test.pem"

The above fedora.yml configuration boots up the VM for us in the OpenStack environment. Then we have the actual test file, called fedora.txt.

HOSTCOMMAND: ./createinventory.py
HOSTTEST: ansible-playbook -i inventory tests/improved-sanity-test/main.yml

Now we run the job (remember, it takes a lot of time to run):

$ gotun --job fedora
Starts a new Tunir Job.

Server ID: 3e8cd0c7-bc79-435e-9cf9-169c5bc66b3a
Let us wait for the server to be in running state.
Time to assign a floating point IP.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.


Executing:  HOSTTEST: ansible-playbook -i inventory tests/improved-sanity-test/main.yml
---------------


Result file at: /tmp/tunirresult_326996419


Job status: true


command: ansible-playbook -i inventory tests/improved-sanity-test/main.yml
status:true


PLAY [Improved Sanity Test - Pre-Upgrade] **************************************

TASK [setup] *******************************************************************
ok: [209.132.184.162]

TASK [ansible_version_check : Fail if avc_major is not defined] ****************
skipping: [209.132.184.162]

TASK [ansible_version_check : Fail if avc_minor is not defined] ****************
skipping: [209.132.184.162]

TASK [ansible_version_check : Check if Ansible is the correct version (or newer)] ***
ok: [209.132.184.162] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [atomic_host_check : Determine if Atomic Host] ****************************
ok: [209.132.184.162]

<SKIPPING ALL THE OTHER OUTPUT>

TASK [var_files_present : Check for correct SELinux type] **********************
changed: [209.132.184.162]

PLAY RECAP *********************************************************************
209.132.184.162            : ok=260  changed=170  unreachable=0    failed=0   

[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..

This feature will be removed in version 2.4. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.
 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm
 [WARNING]: Consider using get_url or uri module rather than running curl
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..

This feature will be removed in version 2.4. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..

This feature will be removed in version 2.4. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.


Total Number of Tests:1
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.

Previously, I blogged about how to use the upstream Kubernetes Ansible repo to test Kubernetes on Atomic images. Using this style, you can use any Ansible playbook to do the real setup, and then use Ansible to test it for you, or have your own set of test cases. Do let me know what you think in the comments.

Writing Python Extensions in Rust

In December I spent a few days with Rust. I wrote a few lines of code and tried to get a feel for the syntax of the language. One of the major things on my TODO list was figuring out how to write Python extensions in Rust. Armin Ronacher wrote this excellent post on the Sentry blog back in October 2016. I decided to learn from the same code base. It is always much easier to make small changes and then see what actually changes as a result. This is also my first use of the CFFI module; before this, I always wrote Python C extensions from scratch. In this post I will assume that you already have a working Rust installation on your system, and we will go ahead from there.

Creating the initial Rust project

I am already in my new project directory, which is empty.

$ cargo init
Created library project
$ ls
Cargo.toml src

Now, I am going to update the Cargo.toml file with the following content. Feel free to adjust based on your requirements.

[package]
name = "liblearn"
version = "0.1.0"
authors = ["Kushal Das <mail@kushaldas.in>"]

[lib]
name = "liblearn"
crate-type = ["cdylib"]

Using the crate-type attribute we tell the Rust compiler what kind of artifact to generate. We will create a dynamic system library for our example. On my Linux computer it will create a *.so file. You can read more about the crate-types here.

Next we update our src/lib.rs file. Here we declare that we also have a src/ksum.rs file.

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
    }
}

pub mod ksum;
use std::ffi::CStr;
use std::os::raw::{c_uint, c_char};


#[no_mangle]
pub unsafe extern "C" fn sum(a: c_uint, b: c_uint) -> c_uint {
	println!("{}, {}", a, b);
	a + b
}


#[no_mangle]
pub unsafe extern "C" fn onbytes(bytes: *const c_char) {
	let b = CStr::from_ptr(bytes);
	println!("{}", b.to_str().unwrap())
}

We have various types which help us handle the data coming from the C code. We also have two unsafe functions: the first is sum, which accepts two integers and returns their sum. We are also printing the integers, just for learning purposes.

We also have an onbytes function, which takes a Python bytes input and just prints it on STDOUT. Remember, this is just an example, so feel free to make changes and learn more :). The CStr::from_ptr function helps us convert a raw C string into a safe C string wrapper in Rust. Read its documentation to know more.

All of the functions also have the no_mangle attribute, so that the Rust compiler does not mangle their names; this is what lets us call them from C code. Marking the functions extern "C" exposes them through the C ABI, which is how Rust FFI works. At this point you should be able to build the Rust project with the cargo build command.

Writing the Python code

Next we create a build.py file in the top directory; this will help us with CFFI. We will also need our C header file with the proper definitions in it, include/liblearn.h.

#ifndef LIBLEARN_H_INCLUDED
#define LIBLEARN_H_INCLUDED

unsigned int sum(unsigned int a, unsigned int b);
void onbytes(const char *bytes);
#endif

The build.py

import sys
import subprocess
from cffi import FFI


def _to_source(x):
    if sys.version_info >= (3, 0) and isinstance(x, bytes):
        x = x.decode('utf-8')
    return x


ffi = FFI()
ffi.cdef(_to_source(subprocess.Popen(
    ['cc', '-E', 'include/liblearn.h'],
    stdout=subprocess.PIPE).communicate()[0]))
ffi.set_source('liblearn._sumnative', None)

Feel free to consult the CFFI documentation to learn things in depth. If you want to convert Rust strings to Python and return them, I would suggest you have a look at the unpack function.

The actual Python module source

We have the liblearn/__init__.py file, which holds the actual code for the Python extension module we are writing.

import os
from ._sumnative import ffi as _ffi

_lib = _ffi.dlopen(os.path.join(os.path.dirname(__file__), '_liblearn.so'))

def sum(a, b):
    return _lib.sum(a,b)

def onbytes(word):
    return _lib.onbytes(word)

setup.py file

I am copy-pasting the whole setup.py below. Most of it is self-explanatory. I have also kept the original comments, which explain various points.

import os
import sys
import shutil
import subprocess

try:
    from wheel.bdist_wheel import bdist_wheel
except ImportError:
    bdist_wheel = None

from setuptools import setup, find_packages
from distutils.command.build_py import build_py
from distutils.command.build_ext import build_ext
from setuptools.dist import Distribution


# Build with clang if not otherwise specified.
if os.environ.get('LIBLEARN_MANYLINUX') == '1':
    os.environ.setdefault('CC', 'gcc')
    os.environ.setdefault('CXX', 'g++')
else:
    os.environ.setdefault('CC', 'clang')
    os.environ.setdefault('CXX', 'clang++')


PACKAGE = 'liblearn'
EXT_EXT = sys.platform == 'darwin' and '.dylib' or '.so'


def build_liblearn(base_path):
    lib_path = os.path.join(base_path, '_liblearn.so')
    here = os.path.abspath(os.path.dirname(__file__))
    cmdline = ['cargo', 'build', '--release']
    if not sys.stdout.isatty():
        cmdline.append('--color=always')
    rv = subprocess.Popen(cmdline, cwd=here).wait()
    if rv != 0:
        sys.exit(rv)
    src_path = os.path.join(here, 'target', 'release',
                            'libliblearn' + EXT_EXT)
    if os.path.isfile(src_path):
        shutil.copy2(src_path, lib_path)


class CustomBuildPy(build_py):
    def run(self):
        build_py.run(self)
        build_liblearn(os.path.join(self.build_lib, *PACKAGE.split('.')))


class CustomBuildExt(build_ext):
    def run(self):
        build_ext.run(self)
        if self.inplace:
            build_py = self.get_finalized_command('build_py')
            build_liblearn(build_py.get_package_dir(PACKAGE))


class BinaryDistribution(Distribution):
    """This is necessary because otherwise the wheel does not know that
    we have non pure information.
    """
    def has_ext_modules(foo):
        return True


cmdclass = {
    'build_ext': CustomBuildExt,
    'build_py': CustomBuildPy,
}


# The wheel generated carries a python unicode ABI tag.  We want to remove
# this since our wheel is actually universal as far as this goes since we
# never actually link against libpython.  Since there does not appear to
# be an API to do that, we just patch the internal function that wheel uses.
if bdist_wheel is not None:
    class CustomBdistWheel(bdist_wheel):
        def get_tag(self):
            rv = bdist_wheel.get_tag(self)
            return ('py2.py3', 'none') + rv[2:]
    cmdclass['bdist_wheel'] = CustomBdistWheel


setup(
    name='liblearn',
    version='0.1.0',
    url='http://github.com/kushaldas/liblearn',
    description='Module to learn writing Python extensions in rust',
    license='BSD',
    author='Kushal Das',
    author_email='kushaldas@gmail.com',
    packages=find_packages(),
    cffi_modules=['build.py:ffi'],
    cmdclass=cmdclass,
    include_package_data=True,
    zip_safe=False,
    platforms='any',
    install_requires=[
        'cffi>=1.6.0',
    ],
    setup_requires=[
        'cffi>=1.6.0'
    ],
    classifiers=[
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Topic :: Software Development :: Libraries :: Python Modules'
    ],
    ext_modules=[],
    distclass=BinaryDistribution
)

Building the Python extension

$ python3 setup.py build
running build
running build_py
creating build/lib
creating build/lib/liblearn
copying liblearn/__init__.py -> build/lib/liblearn
Finished release [optimized] target(s) in 0.0 secs
generating cffi module 'build/lib/liblearn/_sumnative.py'
running build_ext

Now we have a build directory. We go inside the build/lib directory and try out the following.

$ python3
Python 3.5.2 (default, Sep 14 2016, 11:28:32)
[GCC 6.2.1 20160901 (Red Hat 6.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import liblearn
>>> liblearn.sum(12,30)
12, 30
42
>>> b = "Kushal in bengali কুূশল".encode("utf-8")
>>> liblearn.onbytes(b)
Kushal in bengali কুূশল

This post is only about how to start writing a new extension. My knowledge of Rust is still very minimal. In the future I will write more as I learn. You can find all the source files in the github repo.

Thank you Siddhesh and Armin for reviewing this post.

Testing Kubernetes on Fedora Atomic using gotun

Kubernetes is one of the major components of modern container technologies. Previously, using Tunir, I worked on ideas to test it automatically on Fedora Atomic images. The new gotun provides a better way to set up and test a complex system like Kubernetes.

In the following example I have used the Fedora Infra OpenStack to set up 3 instances, and then used the upstream contrib/ansible repo to do the real setup of Kubernetes. I have two scripts in my local directory, where the ansible directory also exists. First, createinventory.py, which creates the Ansible inventory file, and also a hosts file with the right IP addresses. We push this hosts file to every VM and copy it to /etc/ using sudo. You could easily do this over Ansible, but I did not want to change or add anything to the git repo; that is why I am doing it like this.

#!/usr/bin/env python

import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
host2 = data['vm2']
host3 = data['vm3']
key = data['keyfile']

result = """kube-master.example.com ansible_ssh_host={0} ansible_ssh_user={1} ansible_ssh_private_key_file={2}
kube-node-01.example.com ansible_ssh_host={3} ansible_ssh_user={4} ansible_ssh_private_key_file={5}
kube-node-02.example.com ansible_ssh_host={6} ansible_ssh_user={7} ansible_ssh_private_key_file={8}

[masters]
kube-master.example.com
 
[etcd:children]
masters
 
[nodes]
kube-node-01.example.com
kube-node-02.example.com
""".format(host1,user,key,host2,user,key,host3,user,key)
with open("ansible/inventory/inventory", "w") as fobj:
    fobj.write(result)

hosts = """127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{0} kube-master.example.com
{1} kube-node-01.example.com
{2} kube-node-02.example.com
""".format(host1,host2,host3)

with open("./hosts","w") as fobj:
    fobj.write(hosts)

Then, I also have a runsetup.sh script, which runs the actual script inside the ansible directory.

#!/usr/bin/sh
cd ./ansible/scripts/
export ANSIBLE_HOST_KEY_CHECKING=False
./deploy-cluster.sh

The following job definition creates the 3 VM(s) on the cloud.

---
BACKEND: "openstack"
NUMBER: 3

OS_AUTH_URL: "https://fedorainfracloud.org:5000/v2.0"
TENANT_ID: "ID of the tenant"
USERNAME: "user"
PASSWORD: "password"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-25-20170124.1.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "ssh-anywhere-cloudsig"
    - "default"
    - "wide-open-cloudsig"
OS_NETWORK: "NETWORKID"
OS_FLOATING_POOL: "external"
OS_KEYPAIR: "kushal-testkey"
key: "/home/kdas/kushal-testkey.pem"

Then comes the final text file which contains all the actual test commands.

HOSTCOMMAND: ./createinventory.py
COPY: ./hosts vm1:./hosts
vm1 sudo mv ./hosts /etc/hosts
COPY: ./hosts vm2:./hosts
vm2 sudo mv ./hosts /etc/hosts
COPY: ./hosts vm3:./hosts
vm3 sudo mv ./hosts /etc/hosts
HOSTTEST: ./runsetup.sh
vm1 sudo kubectl get nodes
vm1 sudo atomic run projectatomic/guestbookgo-atomicapp
SLEEP 60
vm1 sudo kubectl get pods

Here I am using the old guestbook application, but you can choose to deploy any application on this fresh Kubernetes cluster and then test whether it is working fine or not. Please let me know in the comments what you think about this idea. Btw, remember that the Ansible playbook will take a long time to run.

Working over ssh in Python

Working with remote servers is a common scenario for most of us. Sometimes we do our actual work on those remote computers, and sometimes our code does something for us on the remote systems. Even Vagrant instances on your laptop are remote systems; you still have to ssh into them to get things done.

Setting up the remote systems

Ansible is the tool we use in Fedora Infrastructure. All of our servers are configured using Ansible, and all of those playbooks/roles are in a public git repository. This means you can also set up your remote systems or servers in the exact same way the Fedora Project does.

I also have many friends who manage their laptops or personal servers using Ansible. Setting up a new development system is just one playbook run for them. If you want to start doing the same, I suggest you have a look at lightsaber, built by Ralph Bean.

Working on the remote systems using Python

There will always be special cases where you have to do something on a remote system directly from your application, rather than calling an external tool (read my previous blog post on the same topic). Python has an excellent module called Paramiko to help us out. It is a Python implementation of the SSHv2 protocol.

import paramiko


def run(host='127.0.0.1', port=22, user='root',
        command='/bin/true', bufsize=-1, key_filename='',
        timeout=120, pkey=None):
    """
Executes a command using paramiko and returns the result.
    :param host: Host to connect
    :param port: The port number
    :param user: The username of the system
    :param command: The command to run
    :param key_filename: SSH private key file.
    :param pkey: RSAKey if we want to login with a in-memory key
    :return:
    """
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    client.connect(hostname=host, port=port, username=user,
                   key_filename=key_filename, pkey=pkey, banner_timeout=10)
    chan = client.get_transport().open_session()
    chan.settimeout(timeout)
    chan.set_combine_stderr(True)
    chan.get_pty()
    chan.exec_command(command)
    stdout = chan.makefile('r', bufsize)
    stdout_text = stdout.read()
    status = int(chan.recv_exit_status())
    client.close()
    return stdout_text, status

The above function is a modified version of the run function from the Tunir codebase. We create a new client and then connect to the remote system. If you have an in-memory implementation of the RSA key, then you can use the pkey parameter of the connect method; otherwise, you can provide the full path to the private key file as shown in the above example. I also don’t want to verify or store the host key, so the second line of the function adds a policy to make sure of that.
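
A hypothetical call to the run() function above could look like the following; the host, user, and key path are placeholders, so replace them with values for your own systems.

out, status = run(host='192.168.122.50', user='fedora',
                  command='cat /etc/os-release',
                  key_filename='/home/kdas/.ssh/id_rsa')
print(status)
print(out.decode('utf-8'))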

Working on the remote systems using Golang

Now, if you want to do the same in golang, it will not be much different. We will use golang’s crypto/ssh package. The following is taken from the gotun project. Remember to fill in the proper error handling as required by your code; I am just copy-pasting the important parts from my code as an example.

func (t TunirVM) FromKeyFile() ssh.AuthMethod {
	file := t.KeyFile
	buffer, err := ioutil.ReadFile(file)
	if err != nil {
		return nil
	}

	key, err := ssh.ParsePrivateKey(buffer)
	if err != nil {
		return nil
	}
	return ssh.PublicKeys(key)
}

sshConfig := &ssh.ClientConfig{
	User: viper.GetString("USER"),
	Auth: []ssh.AuthMethod{
		vm.FromKeyFile(),
	},
}

connection, err := ssh.Dial("tcp", fmt.Sprintf("%s:%s", ip, port), sshConfig)
session, err = connection.NewSession()
defer session.Close()
output, err = session.CombinedOutput(actualcommand)

Creating an ssh connection to a remote system using either Python or Golang is not that difficult. Based on the use case, choose either to have that power in your code or to reuse an existing powerful tool like Ansible.

New features in Gotun

Gotun is a golang port of Tunir, written from scratch. Gotun can execute tests on remote systems, or it can run tests on OpenStack and AWS.

Installation from git

If you have a working golang setup, then you can use the standard go get command to install the latest version of gotun.

$ go get github.com/kushaldas/gotun

Configure based on YAML files

Gotun expects job configuration in a YAML file. The following is an example of a job for OpenStack.

---
BACKEND: "openstack"
NUMBER: 3

OS_AUTH_URL: "URL"
OS_TENANT_ID: "Your tenant id"
OS_USERNAME: "USERNAME"
OS_PASSWORD: "PASSWORD"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-24-20161031.0.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "group1"
    - "default"
OS_NETWORK: "NETWORK_POOL_ID"
OS_FLOATING_POOL: "POOL_NAME"
OS_KEYPAIR: "KEYPAIR NAME"
key: "Full path to the private key (.pem file)"

You can also point OS_IMAGE to a local qcow2 image, which will then be uploaded to the cloud and used. After the job is done, the uploaded image will be removed.
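
For example, pointing at a local file would look like the line below; the path is a made-up one on my laptop, so adjust it to wherever your image lives.

OS_IMAGE: "/home/kdas/images/Fedora-Atomic-25-20170124.1.x86_64.qcow2"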

Multiple VM cluster on OpenStack

OpenStack-based jobs also support multiple VM(s). In the above example, we are actually creating three instances from the image mentioned.

Job file syntax

Gotun supports the same test syntax as Tunir. Any line starting with ## is a non-gating test; even if it fails, the job will continue. For a cluster-based job, prefix the command with vm1, vm2, and so on to mark which VM should run it.
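
The following is a hypothetical snippet showing both ideas together; the commands themselves are just placeholders.

vm1 sudo docker info
vm2 sudo systemctl is-active docker
## curl -o /tmp/optional.txt https://example.com/optional.txt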

Rebuild of the instances on OpenStack

For OpenStack-based jobs, gotun adds a new directive, REBUILD_SERVERS, which will rebuild all the instances. In case one of your tests does something destructive to any instance, using this new directive you can rebuild all the instances and start from scratch. The following is the tests file, and the output from one such job.

echo "hello asd" > ./hello.txt
vm1 sudo cat /etc/machine-id
mkdir {push,pull}
ls -l ./
pwd
REBUILD_SERVERS
sudo cat /etc/machine-id
ls -l ./
pwd
$ gotun --job fedora
Starts a new Tunir Job.

Server ID: e0d7b55a-f066-4ff8-923c-582f3c9be29b
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Server ID: a0b810e6-0d7f-4c9e-bc4d-1e62b082673d
Let us wait for the server to be in running state.
Time to assign a floating pointip.
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  echo "hello asd" > ./hello.txt
Executing:  vm1 sudo cat /etc/machine-id
Executing:  mkdir {push,pull}
Executing:  ls -l ./
Executing:  pwd
Going to rebuild: 209.132.184.241
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Going to rebuild: 209.132.184.242
Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Polling for a successful ssh connection.

Executing:  sudo cat /etc/machine-id
Executing:  ls -l ./
Executing:  pwd

Result file at: /tmp/tunirresult_180507156


Job status: true


command: echo "hello asd" > ./hello.txt
status:true



command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: mkdir {push,pull}
status:true



command: ls -l ./
status:true

total 4
-rw-rw-r--. 1 fedora fedora 10 Jan 25 13:58 hello.txt
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 pull
drwxrwxr-x. 2 fedora fedora  6 Jan 25 13:58 push


command: pwd
status:true

/var/home/fedora


command: sudo cat /etc/machine-id
status:true

e0d7b55af0664ff8923c582f3c9be29b


command: ls -l ./
status:true

total 0


command: pwd
status:true

/var/home/fedora


Total Number of Tests:8
Total NonGating Tests:0
Total Failed Non Gating Tests:0

Success.

Using Ansible inside a job is now easier

Before running any actual test command, gotun creates a file called current_run_info.json in the job directory; we can now use that to create an inventory file for Ansible. Then we can mark any Ansible playbook as a proper test in the job description.

#!/usr/bin/env python3
import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
host2 = data['vm2']
key = data['keyfile']

result = """{0} ansible_ssh_host={1} ansible_ssh_user={2} ansible_ssh_private_key_file={3}
{4} ansible_ssh_host={5} ansible_ssh_user={6} ansible_ssh_private_key_file={7}""".format(host1,host1,user,key,host2,host2,user,key)
with open("inventory", "w") as fobj:
    fobj.write(result)

The above-mentioned script is an example: we read the JSON file created by gotun, and then write a new inventory file to be used by an Ansible playbook. The documentation has an example of running atomic-host-tests inside gotun.

If you have any questions, come and ask in the #fedora-cloud channel. You can contact me over twitter too.

My highest used Python function

I have heard the above-mentioned lines many times in the past 10 years. Meanwhile, I have used Python to develop various kinds of applications: from web applications to system tools, small command line tools, and complex applications dealing with the Operating System ABI. Using Python as a glue language has helped me many times in those cases. There is one custom function which I think is the most used function in the applications I wrote.

system function call

The following is the Python3 version.

import subprocess

def system(cmd):
    """
    Invoke a shell command.
    :returns: A tuple of output, err message and return code
    """
    ret = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    out, err = ret.communicate()
    return out, err, ret.returncode

The function uses the subprocess module. It takes any shell command as input, executes the command, and returns the output, the error text, and the exit code. Remember that in Python 3, out and err are bytes, not strings, so we have to decode them before use.
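
A hypothetical call could look like the following; the command is just a placeholder.

out, err, code = system("ls -l /tmp")
if code == 0:
    print(out.decode("utf-8"))
else:
    print("Command failed:", err.decode("utf-8"))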

This small function enables me to reuse other already existing tools and build on top of them. Some developers will advise doing everything through a more Pythonic API, but in many cases no API is available. I find that using this function and then parsing the text output is much simpler than writing a library first. There is an overhead in creating and running a new process, and there are times when that overhead is significant. But again, for simple tools or backend processes (which do not have to provide real-time output) this is helpful. You can use it to create a PoC application, and then profile the application to find all the pain points.

Developing Command Line Interpreters using python-cmd2

Many of you already know that I love command line applications, whether it is a simple command line tool or something more complex with a full command line interface/interpreter (CLI) attached to it. Back in my college days, I tried to write a few small applications in Java with broken implementations of a CLI. Later, when I started working with Python, I wanted to implement CLI(s) for various projects. Python already has a few great modules in the standard library, but I am going to talk about one external library which I prefer to use a lot. Sometimes even just for fun :)

Welcome to python-cmd2

python-cmd2 is a Python module written on top of the cmd module of the standard library, and it can be used as a drop-in replacement for it. Throughout this tutorial, we will learn how to use it for simple applications.

Installation

You can install it using pip, or standard package managers.

$ pip install cmd2
$ sudo dnf install python3-cmd2

First application

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

We created a class called REPL, and later called the cmdloop method on an object of that class. This gives us a minimal CLI. We can type ! and then any bash command to execute it. Below, I called the ls command. You can also start the Python interpreter by using the py command.

$ python3 mycli.py 
(Cmd) 
(Cmd) !ls
a_test.png  badge.png  main.py	mycli.py
(Cmd) py
Python 3.5.2 (default, Sep 14 2016, 11:28:32) 
[GCC 6.2.1 20160901 (Red Hat 6.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(REPL)

        py <command>: Executes a Python command.
        py: Enters interactive Python mode.
        End with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, '`exit()``.
        Non-python commands can be issued with ``cmd("your command")``.
        Run python code from external files with ``run("filename.py")``
        
>>> 
(Cmd) 

You can press Ctrl+d to quit or use quit/exit commands.

Let us add some commands

But before that, we should add a better prompt. We can have a different prompt by changing the prompt variable of the Cmd class. We can also add a banner by adding text to the intro variable.

#!/usr/bin/env python3
from cmd2 import Cmd

class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

$ python3 mycli.py 
Welcome to the real world!
life> 

Any method inside our REPL class whose name starts with do_ becomes a command in our tool. For example, we will add a loadaverage command to show the load average of our system. We will read the /proc/loadavg file on our Linux computers to find this value.

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

The output looks like:

$ python3 mycli.py 
Welcome to the real world!
life> loadaverage
0.42 0.23 0.24 1/1024 16516

life> loadaverage
0.39 0.23 0.24 1/1025 16517

life> loadaverage
0.39 0.23 0.24 1/1025 16517

If you do not know about the values in this file: the first three values are the system load averages over the last one, five, and fifteen minutes. Then we have the number of currently runnable processes and the total number of processes, and the final column shows the last process ID used. You can also see that TAB will autocomplete the command in our shell. We can go back to past commands by pressing the arrow keys, and we can press Ctrl+r to do a reverse search like in the standard bash shell. This feature comes from the readline module. We can use that module a bit more, and add a history file to our tool.

import os
import atexit
import readline
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

Taking input in the commands

We can use the positional argument in our do_ methods to accept arguments in our commands. Whatever input you pass to the command comes in as the line variable in our example; we can use it to do anything. For example, we can take any URL as input and then check its status. We will use the requests module for this example. We also use the Cmd.colorize method to add colors to our output text. I have added one extra command to make the tool more useful.

#!/usr/bin/env python3

import os
import atexit
import readline
import requests
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

    def do_status(self, line):
        if line:
            resp = requests.get(line)
            if resp.status_code == 200:
                print(self.colorize("200", "green"))
            else:
                print(self.colorize(str(resp.status_code), "red"))

    def do_alternativefacts(self, line):
        print(self.colorize("Lies! Pure lies, and more lies.", "red"))

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

Building these little shells can be a lot of fun. The documentation has all the details, but you should start reading from the standard library cmd documentation. There is also the video from PyCon 2010.

University Connect at SICSR Pune

Last Thursday, I visited the SICSR Pune campus as part of the University Connect program from Red Hat Pune. This was the second event in the University Connect program. Rupali, the backbone of all community events in the local Red Hat office, is also the primary driving force behind this program. Priyanka, Varsha, Prathamesh and I reached the office early in the morning and later went to the college in Rupali’s car. Sayak came to the venue directly from his house.

The event started with a brief inauguration ceremony. After that Rupali took the stage and talked about the University Connect program, and how Red Hat Pune is taking part in the local communities. I will let her write a blog post to explain in detail :)

Next, I went up to talk about the journey of Red Hat, the various upstream projects we take part in, and the various product lines Red Hat has. We discussed https://pagure.io and how many Indian students are contributing to that project with guidance from Pierre-Yves Chibon. Because Python is already a known language among the students, I used many Python projects as examples.

Priyanka took the stage after me; she and Sayak are alumni of SICSR, so it was nice for the students to see them speaking. She talked about how contributing to Open Source projects can change one’s career. She told stories from her own life and talked about various points which can help a student make their resume better.

Sayak then took the stage to talk about how to contribute to various open source projects. He talked about and showed examples from Mozilla, Fedora, and KDE. He also pointed to the answer Sayan wrote to a Quora question.

At the end of the day, we had around 20 minutes of open QA session.

Fedora Atomic Working Group update from 2017-01-17

This is an update from the Fedora Atomic Working Group based on the IRC meeting on 2017-01-17. 14 people participated in the meeting; the full log of the meeting can be found here.

OverlayFS partition

We decided to have a docker partition in Fedora 26. The root partition sizing will also need to be fixed. You can read all the discussion about this in the Pagure issue.

We also require help in writing the documentation for migration from Devicemapper -> Overlay -> Back.

How to compose your own Atomic tree?

Jason Brooks will update his document located at Project Atomic docs.

docker-storage-setup patches require more testing

There are pending patches which will require more testing before merging.

Goals and PRD of the working group

Josh Berkus is updating the goals and PRD documentation for the working group. Both short-term and long-term goals can be seen on this etherpad. The previous Cloud Working Group’s PRD is much longer than most of the other groups’ PRDs, so we also discussed trimming the Atomic WG PRD.

Open floor discussion + other items

I updated the working group about a recent failure of the QCOW2 image on Autocloud. It appears that if we boot the images with only one VCPU and reboot after disabling the chronyd service, there is no defined time for the ssh service to be up and running.

Misc talked about the hardware plan for FOSP, and later he sent a detailed mail to the list on the same.

Antonio Murdaca (runcom) brought up the discussion about testing the latest Docker (1.13) and pushing it to F25. We decided to spend more time testing it, and only then push it to Fedora 25; otherwise it may break Kubernetes/OpenShift. We will schedule a 1.13 testing week in the coming days.