
Running SecureDrop inside of podman containers on Fedora 33

Last week, while setting up a Fedora 33 system, I thought of running the SecureDrop development container there, but using podman instead of the Docker setup we have.

I tried to make minimal changes to our existing scripts. I added a ~/bin/docker file containing podman $@ (plus the shebang line).
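
Roughly, the wrapper looks like this (quoting "$@" is the safer form; remember to make the file executable and keep ~/bin on your $PATH):

#!/bin/bash
# pass every docker invocation straight through to podman
podman "$@"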

Next, I provided the proper label for SELinux:

sudo chcon -Rt container_file_t securedrop

The SecureDrop application runs as the normal user inside of the Docker container. I could not do the same here, as the filesystem gets mounted as root and I cannot write to it. So I had to modify one line in the bash script, and also disable another function call which deletes the /dev/random file inside of the container.

diff --git a/securedrop/bin/dev-shell b/securedrop/bin/dev-shell
index ef424bc01..37215b551 100755
--- a/securedrop/bin/dev-shell
+++ b/securedrop/bin/dev-shell
@@ -72,7 +72,7 @@ function docker_run() {
            -e LANG=C.UTF-8 \
            -e PAGE_LAYOUT_LOCALES \
            -e PATH \
-           --user "${USER:-root}" \
+           --user root \
            --volume "${TOPLEVEL}:${TOPLEVEL}" \
            --workdir "${TOPLEVEL}/securedrop" \
            --name "${SD_CONTAINER}" \
diff --git a/securedrop/bin/run b/securedrop/bin/run
index e82cc6320..0c11aa8db 100755
--- a/securedrop/bin/run
+++ b/securedrop/bin/run
@@ -9,7 +9,7 @@ cd "${REPOROOT}/securedrop"
 source "${BASH_SOURCE%/*}/dev-deps"
 
 run_redis &
-urandom
+#urandom
 run_sass --watch &
 maybe_create_config_py
 reset_demo

This time I felt that the build time for verifying each cached layer is much longer with podman than it used to be, but maybe I am just mistaken. The SecureDrop web application is working fine inside.

Package build containers

We also use containers to build Debian packages, and those molecule scenarios were failing as the ansible.posix.synchronize module could not sync to a podman container. I asked if there is any way to do that, and by the time I woke up, Adam Miller had a branch that fixed the issue. I used that branch directly in my virtual environment, and the package build was successful. Then the testinfra tests failed, as they could not create the temporary directory inside of the container. I already opened an issue for that.

Podman on Debian Buster

I use podman on all of my production servers, and also inside of the Qubes system in Fedora VMs. A few days ago I saw this post and thought of trying out the steps on my Debian Buster system.

But, it seems it requires one more installation step, so I am adding the updated installation steps for Debian Buster here.

Install all build dependencies

sudo apt -y install \
  gcc \
  make \
  cmake \
  git \
  btrfs-progs \
  golang-go \
  go-md2man \
  iptables \
  libassuan-dev \
  libc6-dev \
  libdevmapper-dev \
  libglib2.0-dev \
  libgpgme-dev \
  libgpg-error-dev \
  libostree-dev \
  libprotobuf-dev \
  libprotobuf-c-dev \
  libseccomp-dev \
  libselinux1-dev \
  libsystemd-dev \
  pkg-config \
  runc \
  uidmap \
  libapparmor-dev \
  libglib2.0-dev \
  libcap-dev \
  libseccomp-dev

Install latest Golang

Download and install the latest Go release and also make sure that you have a proper $GOPATH variable set. You can read more here.
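
For example, something along these lines works, assuming the amd64 tarball and a $HOME/go workspace; pick whatever version is current from the download page:

GO_VERSION=1.13.8   # example version, use the latest release
curl -LO "https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz"
sudo tar -C /usr/local -xzf "go${GO_VERSION}.linux-amd64.tar.gz"
# add these two lines to ~/.bashrc as well, so they persist
export GOPATH="$HOME/go"
export PATH="$PATH:/usr/local/go/bin:$GOPATH/bin"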

Install conmon

conmon is the OCI container runtime monitor. Install it via the following steps:

git clone https://github.com/containers/conmon
cd conmon
make
sudo make podman
sudo cp /usr/local/libexec/podman/conmon  /usr/local/bin/

Install CNI plugins

git clone https://github.com/containernetworking/plugins.git $GOPATH/src/github.com/containernetworking/plugins
cd $GOPATH/src/github.com/containernetworking/plugins
./build_linux.sh
sudo mkdir -p /usr/libexec/cni
sudo cp bin/* /usr/libexec/cni

Setup the bridge

sudo mkdir -p /etc/cni/net.d
curl -qsSL https://raw.githubusercontent.com/containers/libpod/master/cni/87-podman-bridge.conflist | sudo tee /etc/cni/net.d/99-loopback.conf

Create the configuration files

Next, we need configuration files for the registries and also the policy file.

sudo mkdir -p /etc/containers
sudo curl https://raw.githubusercontent.com/projectatomic/registries/master/registries.fedora -o /etc/containers/registries.conf
sudo curl https://raw.githubusercontent.com/containers/skopeo/master/default-policy.json -o /etc/containers/policy.json

Installing slirp4netns

slirp4netns is used for user-mode networking for unprivileged network namespaces. At the time of writing this blog post, the latest release is 0.4.2.

git clone https://github.com/rootless-containers/slirp4netns
cd slirp4netns
./autogen.sh
./configure --prefix=/usr
make
sudo make install

Installing podman

Finally, we are going to install podman itself.

git clone https://github.com/containers/libpod/ $GOPATH/src/github.com/containers/libpod
cd $GOPATH/src/github.com/containers/libpod
make
sudo make install

Testing podman

Now you can test podman on your Debian system.

podman pull fedora:latest
podman run -it --rm fedora:latest /usr/bin/bash

Flatpak application shortcuts on Qubes OS

In my last blog post, I wrote about Flatpak applications on Qubes OS AppVMs. Later, Alexander Larsson pointed out that running the actual application from the command line is still not user friendly, and that Flatpak already solves this by providing proper desktop files for each of the applications it installs.

How to enable the Flatpak application shortcut in Qubes OS?

The Qubes documentation has detailed steps on how to add a shortcut only for a given AppVM or make it available from the template to all VMs. I decided to add it from the template, so that I can open the Qubes Settings menu and add it for the exact AppVM I want. I did not want to modify the required files in dom0 by hand. The reason: just being lazy.

From my AppVM (where I have the Flatpak application installed), I copied the desktop file and also the icon to the template (Fedora 29 in this case).

qvm-copy /var/lib/flatpak/app/io.github.Hexchat/current/active/export/share/applications/io.github.Hexchat.desktop
qvm-copy /var/lib/flatpak/app/io.github.Hexchat/current/active/export/share/icons/hicolor/48x48/apps/io.github.Hexchat.png

Then in the template, I moved the files to their correct locations. I also modified the desktop file to mark that this is a Flatpak application.

sudo cp ~/QubesIncoming/xchat/io.github.Hexchat.desktop /usr/share/applications/io.github.Hexchat.desktop
sudo cp ~/QubesIncoming/xchat/io.github.Hexchat.png /usr/share/icons/hicolor/48x48/
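
For reference, the essential fields of the exported desktop file look roughly like this; one simple way to mark it as a Flatpak application is to adjust the Name field (an illustrative sketch, the real file has more entries):

[Desktop Entry]
Type=Application
Name=Hexchat (Flatpak)
Exec=flatpak run io.github.Hexchat
Icon=io.github.Hexchat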

After this, I refreshed the list of available applications, added the entry from the Qubes Settings, and then the application was available in the menu.

Using podman for containers

Podman is one of the newer tools in the container world; it can help you to run OCI containers in pods. It uses Buildah to build containers, and runc or any other OCI-compliant runtime. Podman is being actively developed.

I have moved the two major bots we use for dgplug summer training (named batul and tenida) under podman, and they have been running well for the last few days.

Installation

I am using a Fedora 28 system, where installing podman is as simple as installing any other standard Fedora package.

$ sudo dnf install podman

While trying out podman, I found it was working perfectly in my DigitalOcean instance, but not so well on the production VM: I was not able to attach to stdout.

When I tried to get help in the #podman IRC channel, many responded, but none of the suggestions helped. Later, I gave access to the box to Matthew Heon, one of the developers of the tool. He identified that the Indian timezone offset (+5:30) was too large for the timestamp buffer, which was causing this trouble.

The fix was pushed fast, and a Fedora build was also pushed to the testing repo.

Usage

To learn about different available commands, visit this page.

The first step was to build the container images, which was as simple as:

$ sudo podman build -t kdas/imagename .

I reused my old Dockerfiles for this. After that, starting the containers was just a matter of simple run commands.
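
Starting one of them looked roughly like this (the container and image names here are only illustrative):

sudo podman run -d --name batul kdas/imagename
sudo podman ps            # check that the container is up
sudo podman logs batul    # see what the bot has printed so far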

Testing Kubernetes on Fedora Atomic using gotun

Kubernetes is one of the major components of modern container technologies. Previously, using Tunir, I worked on ideas to test it automatically on Fedora Atomic images. The new gotun provides a better way to set up and test a complex system like Kubernetes.

In the following example I have used the Fedora Infra OpenStack to set up 3 instances, and then used the upstream contrib/ansible repo to do the real setup of Kubernetes. I have two scripts in my local directory, where the ansible directory also exists. The first, createinventory.py, creates the ansible inventory file, and also a hosts file with the right IP addresses. We push this hosts file to every VM and copy it to /etc/ using sudo. You could easily do this over ansible, but I did not want to change or add anything to the git repo; that is why I am doing it like this.

#!/usr/bin/env python

import json

data = None
with open("current_run_info.json") as fobj:
    data = json.loads(fobj.read())

user = data['user']
host1 = data['vm1']
host2 = data['vm2']
host3 = data['vm3']
key = data['keyfile']

result = """kube-master.example.com ansible_ssh_host={0} ansible_ssh_user={1} ansible_ssh_private_key_file={2}
kube-node-01.example.com ansible_ssh_host={3} ansible_ssh_user={4} ansible_ssh_private_key_file={5}
kube-node-02.example.com ansible_ssh_host={6} ansible_ssh_user={7} ansible_ssh_private_key_file={8}

[masters]
kube-master.example.com
 
[etcd:children]
masters
 
[nodes]
kube-node-01.example.com
kube-node-02.example.com
""".format(host1,user,key,host2,user,key,host3,user,key)
with open("ansible/inventory/inventory", "w") as fobj:
    fobj.write(result)

hosts = """127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{0} kube-master.example.com
{1} kube-node-01.example.com
{2} kube-node-02.example.com
""".format(host1,host2,host3)

with open("./hosts","w") as fobj:
    fobj.write(hosts)

Then, I also have a runsetup.sh script, which runs the actual script inside the ansible directory.

#!/usr/bin/sh
cd ./ansible/scripts/
export ANSIBLE_HOST_KEY_CHECKING=False
./deploy-cluster.sh

The following job definition creates the 3 VM(s) on the cloud.

---
BACKEND: "openstack"
NUMBER: 3

OS_AUTH_URL: "https://fedorainfracloud.org:5000/v2.0"
TENANT_ID: "ID of the tenant"
USERNAME: "user"
PASSWORD: "password"
OS_REGION_NAME: "RegionOne"
OS_IMAGE: "Fedora-Atomic-25-20170124.1.x86_64.qcow2"
OS_FLAVOR: "m1.medium"
OS_SECURITY_GROUPS:
    - "ssh-anywhere-cloudsig"
    - "default"
    - "wide-open-cloudsig"
OS_NETWORK: "NETWORKID"
OS_FLOATING_POOL: "external"
OS_KEYPAIR: "kushal-testkey"
key: "/home/kdas/kushal-testkey.pem"

Then comes the final text file which contains all the actual test commands.

HOSTCOMMAND: ./createinventory.py
COPY: ./hosts vm1:./hosts
vm1 sudo mv ./hosts /etc/hosts
COPY: ./hosts vm2:./hosts
vm2 sudo mv ./hosts /etc/hosts
COPY: ./hosts vm3:./hosts
vm3 sudo mv ./hosts /etc/hosts
HOSTTEST: ./runsetup.sh
vm1 sudo kubectl get nodes
vm1 sudo atomic run projectatomic/guestbookgo-atomicapp
SLEEP 60
vm1 sudo kubectl get pods

Here I am using the old guestbook application, but you can choose to deploy any application on this fresh Kubernetes cluster, and then test whether it is working fine or not. Please let me know in the comments what you think about this idea. By the way, remember that the ansible playbook will take a long time to run.

One week with Fedora Atomic in production

I have been using containers on my personal servers for over a year, running a few services in them. Over the last week, I moved all my personal servers to Fedora Atomic, and I am now running more services in them.

Server hardware & OS

These are actually VM(s) with a couple of GBs of RAM and a few CPUs. I installed them using the Fedora Atomic ISO image (get it from here) over virt-manager.

The containers inside

You can find all the Dockerfiles etc in the repo. Note: I still have to clean up a few of those.

Issues faced

On day zero, the first box I installed stopped printing anything on STDOUT; after a reboot, I upgraded with the atomic host upgrade command. I have not had any other problem since. So, try to stay updated.

Building my own rpm-ostree repo

My next target was to compose my own rpm-ostree repo. I used Patrick's workstation repo files for this. In my fork I added a couple of files for my own tree, along with the build script. The whole work is done in a Fedora 24 container. You can view the repo here. The whole thing is exposed via another apache container. I will explain more about the steps in a future blog post.
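
The heart of such a build script is typically an rpm-ostree compose run against a treefile; a minimal sketch (the repo path and treefile name here are placeholders, not the exact ones from my repo) looks like:

# one-time initialization of the OSTree repository
ostree --repo=/srv/repo init --mode=archive-z2
# compose the tree described in the treefile into that repository
rpm-ostree compose tree --repo=/srv/repo my-tree.json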

What is next?

The first step is to clean up my old Dockerfiles. I will add any future services as containers on those boxes. Even though we are automatically testing our images using Autocloud, using Atomic in my production environment will help me find bugs in a more convenient manner.

Testing containers using Kubernetes on Tunir version 0.15

Today I have released Tunir 0.15. This release got a major rewrite of the code and has many new features. One of them is setting up multiple VM(s) from Tunir itself. We now also have the ability to use Ansible (2.x) from within Tunir. Using these, we are going to deploy Kubernetes on Fedora 23 Atomic images, and then we will deploy an example atomicapp which follows the Nulecule specification.

I am running this on a Fedora 23 system; you can grab the latest Tunir from koji. You will also need Ansible 2.x from the testing repository.
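
For example, installing Ansible from updates-testing and the downloaded Tunir rpm looks something like this (the exact rpm filename depends on the koji build you grab):

sudo dnf --enablerepo=updates-testing install ansible
sudo dnf install ./tunir-0.15-*.noarch.rpm   # the rpm downloaded from koji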

Getting Kubernetes contrib repo

First we will get the latest Kubernetes contrib repo.

$ git clone https://github.com/kubernetes/contrib.git

Inside we will make changes to a group_vars file at contrib/ansible/group_vars/all.yml

diff --git a/ansible/group_vars/all.yml b/ansible/group_vars/all.yml
index 276ded1..ead74fd 100644
--- a/ansible/group_vars/all.yml
+++ b/ansible/group_vars/all.yml
@@ -14,7 +14,7 @@ cluster_name: cluster.local
# Account name of remote user. Ansible will use this user account to ssh into
# the managed machines. The user must be able to use sudo without asking
# for password unless ansible_sudo_pass is set
-#ansible_ssh_user: root
+ansible_ssh_user: fedora

# password for the ansible_ssh_user. If this is unset you will need to set up
# ssh keys so a password is not needed.

Setting up the Tunir job configuration

The new multivm setup requires a jobname.cfg file as the configuration. In this case I have already downloaded a Fedora Atomic cloud .qcow2 file under /mnt. I am going to use that.

[general]
cpu = 1
ram = 1024
ansible_dir = /home/user/contrib/ansible

[vm1]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-master.example.com

[vm2]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-node-01.example.com

[vm3]
user = fedora
image = /mnt/Fedora-Cloud-Atomic-23-20160308.x86_64.qcow2
hostname = kube-node-02.example.com

The above configuration file is mostly self-explanatory. All VM(s) will have 1 virtual CPU and 1024 MB of RAM. I also point ansible_dir at the directory which contains the ansible source. Next we have 3 VM definitions. I also have a hostname set up for each; these are the same hostnames mentioned in the inventory file. The inventory file should exist in the same directory with the name inventory. If you do not want to use such long names, you can simply use vm1, vm2, vm3 in the inventory file.
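
A minimal sketch of such an inventory, following the same group layout the contrib/ansible playbooks expect (masters, etcd, nodes), could look like this:

[masters]
kube-master.example.com

[etcd:children]
masters

[nodes]
kube-node-01.example.com
kube-node-02.example.com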

Now if we remember, we need a jobname.txt file containing the actual commands for testing. The following is from our file.

PLAYBOOK cluster.yml
vm1 sudo atomic run projectatomic/guestbookgo-atomicapp

The first line tells Tunir to run the cluster playbook. The second line contains the actual atomic command to deploy the guestbook app on our newly set up Kubernetes. As you can see, the first part of each line specifies which VM the command should execute on. If no VM is marked, Tunir assumes that the command has to run on vm1.

Now if we just execute Tunir, we will be able to see Kubernetes being set up, and then the guestbook app being deployed. You can add a few more commands to the above-mentioned file to see how many pods are running, or even the details of the pods (see the example lines after the run command below).

$ sudo tunir --multi jobname
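
For example, a couple of extra check lines in the jobname.txt file could look like this (any kubectl query works here):

vm1 sudo kubectl get nodes
vm1 sudo kubectl get pods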

Debug mode

For the multivm setup, Tunir now has a debug mode, which can be turned on by passing --debug on the command line. In debug mode the VM(s) will not be destroyed at the end of the test. Tunir will also create a destroy.sh script for you, which you can run to destroy the VM(s) and remove all temporary directories. The path of the script is printed at the end of the Tunir run.

DEBUG MODE ON. Destroy from /tmp/tmp8KtIPO/destroy.sh

Please try these new features, and comment for any improvements you want.