Kushal Das


Using Ansible to maintain your Qubes system

Ever since I started using Qubes OS, one question has remained open for me: how do I create and set up new AppVMs efficiently? I was mostly using the command line tools to create new AppVMs and then setting all the properties manually after creation. I also did the package installation and other setup inside the VMs by hand.

If you have never heard of Qubes before, you should check it out. Qubes takes a different approach to security: security by compartmentalization, where different applications are isolated in separate qubes (VMs). The base system (dom0) runs Fedora, and all the other VMs run on top of the Xen hypervisor. It also provides very tight integration of the tools to give a pleasant experience.

When I asked how people maintain their different VMs or TemplateVMs (from which the normal VMs are spawned), the answer was mostly bash scripts; the command line tools provided by the Qubes team are friendly to scripting. The officially supported way of managing VMs, though, is the Salt project.
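
For illustration, such a bash-based setup might look roughly like the following (the VM name and property values are just examples, not taken from any real script):

# Hypothetical example: create an AppVM and set a few properties by hand.
qvm-create --class AppVM --template debian-9 --label blue development2
qvm-prefs development2 memory 1200
qvm-prefs development2 maxmem 1400
qvm-prefs development2 netvm sys-firewall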

As we (at the Freedom of the Press Foundation) are working towards a Qubes-based desktop client for SecureDrop, we also started using Salt to maintain the state of the VMs. I personally found Salt to be very confusing and a bit difficult to learn.

From the mailing list I also found out about https://github.com/Rudd-O/ansible-qubes, but as I started reading the README, I realized that Salt is being used there too in the background. That made me reconsider Ansible as a choice for maintaining my Qubes system.

Last weekend I pinged Trishna for some pointers on writing new plugins for Ansible, and later at night I also talked with Toshio about the Ansible plugins + modules.

Introducing Qubes Ansible

The result of those chats is Qubes Ansible. It has a qubesos module and a qubes connection plugin for Ansible.

I already have a PR opened to add the connection plugin into Ansible.

The actual module will still require a lot of work to become feature complete with the existing command line tools and with the Salt-based setup. This project is under active development.

The good thing is that I am getting feedback and patches from the #qubes IRC channel (on Freenode). From the Qubes development team, marmarek provided some really valuable input to make the plugin easier to use.

Example playbook

---
- hosts: localhost
  connection: local

  tasks:
    - name: Make sure the development VM is present
      qubesos:
        guest: development2
        state: present
        properties:
          memory: 1200
          maxmem: 1400
          netvm: 'sys-firewall'
          template: 'debian-9'
          label: "blue"

    - name: Run the VM
      qubesos:
        guest: development2
        state: running

You can use the above playbook to create a development2 AppVM with the exact properties you want. The examples page has all the available options documented.
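
To apply it, you can run the playbook from dom0 with the standard ansible-playbook command; assuming you saved it as qubes.yaml (the filename is just an example), that would be something like:

$ ansible-playbook qubes.yaml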

If you are using Qubes, please give it a try, and tell us how we can improve your experience of maintaining the system with Ansible. You can provide feedback in a GitHub issue or talk to us directly in the #qubes IRC channel.

vcrpy for web related tests

A couple of weeks ago, Jen pointed me to vcrpy. It is a Python implementation of the Ruby library of the same name.

What is vcrpy?

It is a Python module which helps you write faster and simpler tests involving HTTP requests. It records all the HTTP interactions in plain text files (by default in YAML format). This helps you write deterministic tests and also run them offline.

It works well with the following Python modules.

  • requests
  • aiohttp
  • urllib3
  • tornado
  • urllib2
  • boto3

Usage example

Let us take a very simple test case.

import unittest
import requests

class TestExample(unittest.TestCase):

    def test_httpget(self):
        r = requests.get("https://httpbin.org/get?name=vcrpy&lang=Python")
        self.assertEqual(r.status_code, 200)
        data = r.json()
        self.assertEqual(data["args"]["name"], "vcrpy")
        self.assertEqual(data["args"]["lang"], "Python")


if __name__ == "__main__":
    unittest.main()

In the above code, we are making an HTTP GET request to the https://httpbin.org site and examining the returned JSON data. Running the test takes around 1.75 seconds on my computer.

$ python test_all.py
.
------------------------------------------------------------------
Ran 1 test in 1.752s

OK

Now, we can add vcrpy to this project.

import unittest
import vcr
import requests

class TestExample(unittest.TestCase):

    @vcr.use_cassette("test-httpget.yml")
    def test_httpget(self):
        r = requests.get("https://httpbin.org/get?name=vcrpy&lang=Python")
        self.assertEqual(r.status_code, 200)
        data = r.json()
        self.assertEqual(data["args"]["name"], "vcrpy")
        self.assertEqual(data["args"]["lang"], "Python")

if __name__ == "__main__":
    unittest.main()

We imported the vcr module and added the vcr.use_cassette decorator to our test function. Now, when we execute the test, vcrpy will record the HTTP call details in the mentioned YAML file on the first run and reuse them for future test runs.

$ python test_all.py
.
------------------------------------------------------------------
Ran 1 test in 0.016s

OK

You can also notice the time taken to run the test now, around 0.02 seconds.

Read the project documentation for all the available options.
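
For example, here is a minimal sketch (with an illustrative cassette directory and header name, not settings taken from this post) of how some of those options can be set on a custom VCR instance:

import requests
import vcr

# A customized VCR instance: where to store cassettes, when to record,
# and which request headers to scrub out of the recordings.
my_vcr = vcr.VCR(
    cassette_library_dir="cassettes",   # example directory name
    record_mode="once",                 # record only when no cassette exists yet
    filter_headers=["authorization"],   # keep credentials out of the YAML files
)

@my_vcr.use_cassette("test-httpget.yml")
def fetch():
    return requests.get("https://httpbin.org/get?name=vcrpy&lang=Python")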

Job alert: Associate Site Reliability Engineer at FPF

We (at Freedom of the Press Foundation) are looking for an Associate Site Reliability Engineer.

This position is open to junior and entry-level applicants, and we recognize the need to provide on-the-job mentoring and support to help you familiarize yourself with the technology stack we use. In addition to the possibility of working in our New York or San Francisco offices, this position is open to remote work within American time zones.

Skills and Experience

  • Familiarity with remote systems administration of bare-metal or virtualized Linux servers.
  • Comfortable with shell and programming languages commonly used in an SRE context (e.g., Python, Go, Bash, Ruby).
  • Strong interest in honing skills required to empower a distributed software development and operations team through automation and systems maintenance.

For more details, please visit the job posting.

Are you wondering whether you should apply or not?

YES, APPLY! You are ready to apply for this position. You don’t have to ask anyone to confirm whether you are ready. Unless you apply, you don’t have any chance of getting the job.

So, the first step is to apply for the position; you can think about impostor syndrome afterwards. We all have it. Some people will admit it in public, some will not.

Using podman for containers

Podman is one of the newer tools in the container world; it can help you run OCI containers in pods. It uses Buildah to build container images, and runc or any other OCI-compliant runtime to run them. Podman is under active development.

I have moved the two major bots we use for the dgplug summer training (named batul and tenida) under podman, and they have been running well for the last few days.

Installation

I am using a Fedora 28 system; installing podman is as simple as installing any other standard Fedora package.

$ sudo dnf install podman

While I was trying out podman, I found it worked perfectly on my DigitalOcean instance, but not so much on the production VM: I was not able to attach to the container's stdout.

When I tried to get help in the #podman IRC channel, many responded, but none of the suggestions helped. Later, I gave Matthew Heon, one of the developers of the tool, access to the box. He identified that the Indian timezone offset (+5:30) was too large for the timestamp buffer, and that was causing the trouble.

The fix was pushed fast, and a Fedora build was also pushed to the testing repo.
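
If you do not want to wait for such a build to reach the stable repositories, it can be pulled from updates-testing with a standard dnf option, for example:

$ sudo dnf upgrade --enablerepo=updates-testing podman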

Usage

To learn about the different available commands, visit this page.

The first step was to build the container images, which was as simple as:

$ sudo podman build -t kdas/imagename .

I reused my old Dockerfiles for this. After that, it was just a matter of simple run commands to start the containers.
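
For example (the container name and flags below are only illustrative, not the exact commands I used for the bots), running and checking such a container can look like:

$ sudo podman run -d --name batul kdas/imagename
$ sudo podman logs -f batul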