
Defending against side channel attacks via dependencies

Yesterday Alex Birsan posted a blog explaining how he made a supply chain attack on various companies via dependencies. I had been waiting for this post since last August, when we noticed the mentioned packages on PyPI (and removed them). I reached out to Alex to learn more about the packages, and he said he would write a blog post.

Around the same time, @rsobers tweeted about how a similar attack can exfiltrate data via DNS.

dns data exfiltration

At the SecureDrop project, we spent a lot of time figuring out how to defend against similar attacks. My PyCon US 2019 talk explained the full process. In simple words, we do the following:

  • Verify the source of every Python dependency before adding/updating it. This is a manual step done by human eyes.
  • Build wheels from the verified source and store them in a package repository along with OpenPGP-signed sha256sums.
  • Before building the Debian package, verify the OpenPGP signature, and install only from the known package repository using the verified (again via OpenPGP signature) hashes of the wheels we built ourselves.

In the last few months, we modified these steps and made them even easier. All of our wheels are now built reproducibly from the same known sources, and the wheels are stored in git (via LFS). The final Debian packages are again reproducible. On top of the OpenPGP signature and package sha256sum verification via pip, we now also pass the --no-index argument to pip, to make sure that all the dependencies are picked up from the local directory.
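For illustration, the resulting pip invocation looks something like the following; --no-index is the flag mentioned above, while the wheel directory, requirements file name, and the --find-links/--require-hashes combination are my reconstruction of the pattern, not copied from the SecureDrop scripts:

pip install --no-index --find-links ./localwheels/ --require-hashes -r requirements.txt

With --require-hashes, pip refuses any dependency whose sha256sum does not match the pinned hash in the requirements file, and --no-index guarantees nothing is fetched from PyPI.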

Oh, and I also forgot to mention that all of the project source tarballs used in SecureDrop workstation package building are reproducible too. If you are interested in doing the same in your Python project (or actually for any project's source tarball), feel free to have a look at this function.
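The core trick for a reproducible tarball is to pin every source of nondeterminism: member order, ownership, and timestamps. Here is a minimal sketch of that idea in Python (my illustration, not the actual SecureDrop helper):

import gzip
import os
import tarfile

def reproducible_tarball(source_dir, output_path):
    # Normalize ownership and timestamps on every member.
    def reset(tarinfo):
        tarinfo.uid = tarinfo.gid = 0
        tarinfo.uname = tarinfo.gname = "root"
        tarinfo.mtime = 0
        return tarinfo

    # Add members in sorted order so the layout is deterministic.
    paths = sorted(
        os.path.join(root, name)
        for root, dirs, files in os.walk(source_dir)
        for name in dirs + files
    )
    # mtime=0 also pins the timestamp inside the gzip header.
    with gzip.GzipFile(output_path, "wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for path in paths:
                tar.add(path, filter=reset, recursive=False)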

There is also a diff-review mailing list (very low volume), where we post a signed message for any source package update we review.

Introducing Tumpa, to make OpenPGP simple with smartcards

Generating OpenPGP keys on an offline, air-gapped system and then moving them to a smart card has always been a difficult task for me. Remembering the steps and command-line options of gpg2 correctly, and then following them in the right order, is hard, and I have had trouble doing it enough times myself. When I think about someone who is not that comfortable with the command line, I can only imagine how difficult these steps are for them.

While having a chat with Saptak a few weeks ago, we came up with the idea of writing a small desktop tool to help. I started adding more features to my Johnnycanencrypt library for this. The OpenPGP operations are possible thanks to the amazing Sequoia project.

Introducing Tumpa

The work on the main application started during the holiday break, and today I am happy to release version 0.1.0 of Tumpa, which makes specific OpenPGP operations simple to use. It uses Johnnycanencrypt inside and does not depend on gpg.

Here is a small demo of the application running in a Tails (VM) environment. I create a new OpenPGP key with encryption and signing subkeys, and then move them to a Yubikey. We also set the card holder's name via the tool.

Tumpa demo

We can also reset any Yubikey with just a click.

Reset Yubikey

You can download the Debian Buster package for Tails from the release page on GitHub. You can also run it from source on macOS or Fedora. But if you are doing any real key generation, you should do it on an air-gapped system.

You can install the package with dpkg -i ./tumpa_0.1.0+buster+nmu1_all.deb inside of Tails.

What are the current available features?

  • We can create a new OpenPGP key along with selected subkeys using Curve25519. By default, the tool sets a three-year expiration on the subkeys.
  • We can move the subkeys to a smart card. We tested only against Yubikeys as that is what we have.
  • We can set the name and public key URL on the card.
  • We can set the user PIN and the admin PIN of the smart card.
  • We can reset a Yubikey.
  • We can export the public key for a selected key.

What is next?

A lot of work :) This is just the beginning. There are a ton of features we have planned, and we will slowly add them. The UI also requires a lot of work and a touch from a real UX person.

The default application will be very simple to use, and we will also have many advanced features, say, changing subkey expiration dates or creating new subkeys, for advanced users.

We are also conducting user interviews (which take around 20 minutes). If you have some time to spare to talk to us and provide feedback, please feel free to ping us via Twitter/Mastodon/IRC.

We are available in the #tumpa channel on Freenode. Come over and say hi :)

There are a lot of people I should thank for this release. Here is a quick list in no particular order. I may be missing many names, but you know that we could not do this without your help and guidance.

  • The Sequoia team, for all the guidance on OpenPGP.
  • Milosch Meriac, for providing guidance (and a ton of hardware).
  • Vincent Breitmoser, for patiently explaining the OpenKeychain codebase to me so I could understand smart card operations.
  • Anwesha Das, for fixing the CI failures for Johnnycanencrypt, and for documentation PRs.
  • Harlo and Micah, for all the amazing input over the months.
  • Saptak Sengupta, for being the amazing co-maintainer.

How to get a TLS certificate for a domain in your local network?

How do you get a TLS certificate for a domain inside your local network? This was a question of mine for a long time. I thought of creating a real subdomain, getting the certificate, copying the files over locally, and then enforcing the local domain name via DNS or /etc/hosts. But, during the TLS training from Scott Helme, I learned about getting certificates via the DNS challenge using acme.sh.

I use DreamHost nameservers for most of my domains. I got an API key from them scoped to DNS manipulation only.

Next, I just had to execute a single command with the API key to fetch a fresh and hot certificate from Let's Encrypt.

The following command fetches a certificate for the fire.das.community subdomain.

DH_API_KEY=MYAPIKEY acme.sh --issue --dns dns_dreamhost -d fire.das.community
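After issuance, the certificate files land under ~/.acme.sh/. acme.sh can also copy them to wherever your web server expects them; the paths and reload command below are only examples:

acme.sh --install-cert -d fire.das.community --key-file /etc/ssl/private/fire.das.community.key --fullchain-file /etc/ssl/certs/fire.das.community.pem --reloadcmd "systemctl reload nginx"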

There is a wiki page listing how to use the acme.sh tool with various DNS providers.

Open Source Project Criticality Score 2020 for python projects

I just now found out about the Open Source Project Criticality Score, under the Open Source Security Foundation (OpenSSF), from Daniel Stenberg's blog post.

He wrote about the critical C projects (all calculations are done only for GitHub-based projects), so I decided to look at the list of the Python projects.

It is a score between 0 (least critical) and 1 (most critical), and the algorithm and details are explained in the repository.
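If I read the algorithm description in the repository correctly, each project signal $S_i$ (contributor count, commit frequency, issue activity, and so on) has a weight $\alpha_i$ and a threshold $T_i$, and the aggregate score is roughly:

$$C_{\text{project}} = \frac{1}{\sum_i \alpha_i} \sum_i \alpha_i \cdot \frac{\log(1 + S_i)}{\log(1 + \max(S_i, T_i))}$$

Each signal saturates once it reaches its threshold, so no single signal can dominate the score.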

The list of the top 10 Python projects in their result set

It is interesting to see that CPython is at number 8 on the list, and that the top two projects are configuration management systems.

You can see all the different language details here.

PrivChat with Tor: 2020-12-11 Tor advancing human rights

Tomorrow, on 11th December 2020 at 18:00 UTC, Tor will host the third edition of the PrivChat event, a discussion with some real-life Tor users about how Tor helps them defend human rights. You can watch it live on YouTube.

image of snowden

The event will be hosted by Edward Snowden, President of the Freedom of the Press Foundation. We have Alison Macrina (Founder, Library Freedom Project), Berhan Taye (Africa Policy Manager and Global Internet Shutdowns Lead, Access Now), and Ramy Raoof (Security Labs Technologist, Amnesty International) as participants in the discussion.

Story of debugging exit 0

For more than a month, my primary task in SecureDrop land has been to make the project ready for a distribution update. The current system runs on Ubuntu Xenial, and the goal is to upgrade to Ubuntu Focal. The deadline is around February 2021, and we will also disable Onion service v2 in the same release.

Tracking issue

There is a tracking issue related to all the changes for Focal. After the initial Python language-related updates, I had to fix the Debian packages for the same. Then came the Ansible roles for the whole system, and setting up the rebase + full integration tests for the system in CI. A lot of new issues were found when we managed to get Focal-based staging instances in the CI.

OSSEC test failure

This particular test is failing on Focal staging. It checks that OSSEC is listening on UDP port 1514 on the monitor server. The first thing we noticed is that the output of the command is different on Xenial and on Focal. While looking at the details, we also noticed that the OSSEC service was failing on Focal. It starts via a sysv script in the /etc/init.d/ directory. My initial try was to follow the solution mentioned here, but the maild service was still failing most of the time. Later, I decided to move to a more straightforward forking-based service file, which works well for the OSSEC monitoring service. So, I decided to use the same kind of systemd service file for the agent on the app server.

But, the installation step via Ansible failed before any configuration happened. When we install the .deb packages, our configuration provider package tries to restart the service via the post-installation script, and that fails with a complaint that /var/ossec/etc/client.keys is not found. If I try to start the service manually, I get the same error with exit code 1.

At this moment, I was trying to identify how it was working before: we use the same codebase + sysv script on Xenial, and there the package installation works. systemd generates the following service file from the sysv script:

# Automatically generated by systemd-sysv-generator

[Unit]
Documentation=man:systemd-sysv-generator(8)
SourcePath=/etc/init.d/ossec
Description=LSB: Start daemon at boot time
After=remote-fs.target

[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
ExecStart=/etc/init.d/ossec start
ExecStop=/etc/init.d/ossec stop

After digging for hours, I noticed the exit 0 at the end of the old sysv script. Because of it, the script always exits successfully, so systemd never notices that the daemon failed to start, which is why the package installation worked on Xenial. This file was last modified by the late James Dolan around seven years ago, and we never had to touch it since.

Now I am planning to add 1 to the SuccessExitStatus= line of the service file for the agent. This will keep the behavior the same as with the old sysv file.

[Unit]
Description=OSSEC service

[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop
RemainAfterExit=True
SuccessExitStatus=1

[Install]
WantedBy=multi-user.target

Johnnycanencrypt 0.4.0 released

Last night I released version 0.4.0 of the johnnycanencrypt module for OpenPGP in Python. This release has one update to the key creation API: we can now pass a single UID as a string, multiple UIDs in a list, or even None to the key creation method. This means we can have User ID-less certificates, which sequoia-pgp allows.
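For example, the three call styles might look like this in a Python session; the keystore path and passphrase are illustrative, and the exact create_newkey signature (for instance, an optional ciphersuite argument) may differ slightly from this sketch:

import johnnycanencrypt as jce

ks = jce.KeyStore("/tmp/demo-keystore")  # illustrative path

# A single UID as a string
key1 = ks.create_newkey("passphrase", "Test User <test@example.com>")

# Multiple UIDs in a list
key2 = ks.create_newkey("passphrase", ["one@example.com", "two@example.com"])

# None gives a User ID-less certificate
key3 = ks.create_newkey("passphrase", None)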

I also managed to fix the bug so that users can use pip to install the latest release from https://pypi.org.

You will need the Rust toolchain; I generally install it from https://rustup.rs.

For Fedora

sudo dnf install nettle clang clang-devel nettle-devel python3-devel

For Debian/Ubuntu

sudo apt install -y python3-dev libnettle6 nettle-dev libhogweed4 python3-pip python3-venv clang

Remember to upgrade your pip version inside of the virtual environment if you are on Buster.

For macOS

Install nettle via brew.

Installing the package

❯ python3 -m pip install johnnycanencrypt
Collecting johnnycanencrypt
  Downloading https://files.pythonhosted.org/packages/50/98/53ae56eb208ebcc6288397a66cf8ac9af5de53b8bbae5fd27be7cd8bb9d7/johnnycanencrypt-0.4.0.tar.gz (128kB)
     |████████████████████████████████| 133kB 6.4MB/s
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Building wheels for collected packages: johnnycanencrypt
  Building wheel for johnnycanencrypt (PEP 517) ... done
  Created wheel for johnnycanencrypt: filename=johnnycanencrypt-0.4.0-cp37-cp37m-macosx_10_7_x86_64.whl size=1586569 sha256=41ab04d3758479a063a6c42d07a15684beb21b1f305d2f8b02e820cb15853ae1
  Stored in directory: /Users/kdas/Library/Caches/pip/wheels/3f/63/03/8afa8176c89b9afefc11f48c3b3867cd6dcc82e865c310c90d
Successfully built johnnycanencrypt
Installing collected packages: johnnycanencrypt
Successfully installed johnnycanencrypt-0.4.0
WARNING: You are using pip version 19.2.3, however version 20.2.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Now, you can import the module inside of your virtual environment :)

Note: In the future, I may change the name of the module to something more meaningful :)

High load average while package building on Fedora 33

Enabling Link Time Optimization (LTO) with rpmbuild is one of the new features of Fedora 33. I read the changeset page once, and went back to it only after I did the Tor package builds locally.

While building the package, I noticed that suddenly there were many processes running /usr/libexec/gcc/x86_64-redhat-linux/10/lto1, and my load average reached 55+. Here is a screenshot I managed to take in between.

high load average
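If I read the changeset page correctly, a package that cannot afford this (or fails to build with LTO) can opt out by redefining one macro at the top of its spec file:

%define _lto_cflags %{nil}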

Alembic migration errors on SQLite

We use SQLite3 as the database in SecureDrop. We use SQLAlchemy to talk to the database and Alembic for migrations. Some of those migrations are written by hand.

Most of my work time in the last month went into getting things ready for Ubuntu Focal 20.04; we currently use Ubuntu Xenial 16.04. During this, I noticed 17 Alembic-related test failures on Focal that pass fine on Xenial. After digging a bit more, I found they are due to missing references to the temporary tables we used during migrations. With some more digging, I found this entry on the SQLite website:

Compatibility Note: The behavior of ALTER TABLE when renaming a table was enhanced in versions 3.25.0 (2018-09-15) and 3.26.0 (2018-12-01) in order to carry the rename operation forward into triggers and views that reference the renamed table. This is considered an improvement. Applications that depend on the older (and arguably buggy) behavior can use the PRAGMA legacy_alter_table=ON statement or the SQLITE_DBCONFIG_LEGACY_ALTER_TABLE configuration parameter on sqlite3_db_config() interface to make ALTER TABLE RENAME behave as it did prior to version 3.25.0.

This is what is causing the test failures: SQLite was upgraded from 3.11.0 on Xenial to 3.31.1 on Focal.

According to the docs, we can fix the error by adding the following to env.py:

diff --git a/securedrop/alembic/env.py b/securedrop/alembic/env.py
index c16d34a5a..d6bce65b5 100644
--- a/securedrop/alembic/env.py
+++ b/securedrop/alembic/env.py
@@ -5,6 +5,8 @@ import sys
 
 from alembic import context
 from sqlalchemy import engine_from_config, pool
+from sqlalchemy.engine import Engine
+from sqlalchemy import event
 from logging.config import fileConfig
 from os import path
 
@@ -16,6 +18,12 @@ fileConfig(config.config_file_name)
 sys.path.insert(0, path.realpath(path.join(path.dirname(__file__), '..')))
 from db import db  # noqa
 
+@event.listens_for(Engine, "connect")
+def set_sqlite_pragma(dbapi_connection, connection_record):
+    cursor = dbapi_connection.cursor()
+    cursor.execute("PRAGMA legacy_alter_table=ON")
+    cursor.close()
+
 try:
     # These imports are only needed for offline generation of automigrations.
     # Importing them in a prod-like environment breaks things.

Later, John found an even simpler way to do the same thing for only the affected migrations.

Running SecureDrop inside of podman containers on Fedora 33

Last week, while setting up a Fedora 33 system, I thought of running the SecureDrop development container there, using podman instead of the Docker setup we have.

I tried to make minimal changes to our existing scripts. I added a ~/bin/docker file with podman $@ inside (and the sha-bang line), as shown below.
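Reconstructed from that description, the whole shim is just the following (quoting "$@" so that arguments with spaces survive):

#!/bin/sh
# Pass every docker invocation straight through to podman.
podman "$@"

Remember to make it executable with chmod +x ~/bin/docker.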

Next, I provided the proper label for SELinux:

sudo chcon -Rt container_file_t securedrop

The SecureDrop container runs as a normal user inside of the Docker container. I could not do the same here, as the filesystem gets mounted as root and I cannot write to it. So, I had to modify one line in the bash script, and also disable another function call which deletes the /dev/random file inside of the container.

diff --git a/securedrop/bin/dev-shell b/securedrop/bin/dev-shell
index ef424bc01..37215b551 100755
--- a/securedrop/bin/dev-shell
+++ b/securedrop/bin/dev-shell
@@ -72,7 +72,7 @@ function docker_run() {
            -e LANG=C.UTF-8 \
            -e PAGE_LAYOUT_LOCALES \
            -e PATH \
-           --user "${USER:-root}" \
+           --user root \
            --volume "${TOPLEVEL}:${TOPLEVEL}" \
            --workdir "${TOPLEVEL}/securedrop" \
            --name "${SD_CONTAINER}" \
diff --git a/securedrop/bin/run b/securedrop/bin/run
index e82cc6320..0c11aa8db 100755
--- a/securedrop/bin/run
+++ b/securedrop/bin/run
@@ -9,7 +9,7 @@ cd "${REPOROOT}/securedrop"
 source "${BASH_SOURCE%/*}/dev-deps"
 
 run_redis &
-urandom
+#urandom
 run_sass --watch &
 maybe_create_config_py
 reset_demo

This time I felt that the build time for verifying each cached layer is much longer under podman than it used to be under Docker. Maybe I am just mistaken. The SecureDrop web application is working just fine inside.

Package build containers

We also use containers to build the Debian packages, and those molecule scenarios were failing because the ansible.posix.synchronize module could not sync into a podman container. I asked if there was any way to do that, and by the time I woke up, Adam Miller had a branch that fixed the issue. I used it directly in my virtual environment, and the package build was successful. Then the testinfra tests failed, as they could not create a temporary directory inside of the container. I have already opened an issue for the same.