Kushal Das

FOSS and life. Kushal Das talks here.

kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion

Get a TLS certificate for your onion service

For a long time, I wanted to have a certificate for the onion address of my blog. DigiCert was the only CA providing those certificates, and only with Extended Validation. EV certificates are costly and make sense for an organization, but not for me personally, especially due to the cost.

TLS certificate working

A few days ago, on IRC, I found out that Harica provides Domain Validation certificates for onion sites for around €30 per year. I jumped in to get one. At the same time, ahf was also getting his certificate, and he helped me with the nginx configuration.

How to get your own certificate?

  • Make sure you have your site running as a Tor v3 onion service.
  • Create an account at https://cm.harica.gr/
  • Go to Server Certificates in the left bar, make a new request for your domain, and provide the onion address as requested in the form.
  • It will give you the option to upload a CSR (Certificate Signing Request). You can generate one with openssl req -newkey rsa:4096 -keyout kushaldas.in.onion.key -out csr.csr (both openssl commands are collected after this list). For the Common Name, provide the same onion address.
  • After you submit it on the website, it will ask you to download a file and put it inside the .well-known/pki-validation/ directory in your web root. Make sure that you can access the file over Tor Browser.
  • When you click the final submission button, the system will take some time to verify the domain. After payment, you should be able to download the certificate with the full chain (the file ending with .p7b). There are three options on the webpage, so please remember to download the correct file :)
  • You will have to convert it into PEM format; I used the command ahf showed me: openssl pkcs7 -inform pem -in kushaldas.in.p7b -print_certs -out kushaldas.in.onion.chain.pem -outform pem
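Putting both openssl invocations together (the file names follow my domain, adjust them for your own):

# Generate the private key and the CSR; use the onion address as the Common Name
openssl req -newkey rsa:4096 -keyout kushaldas.in.onion.key -out csr.csr

# Convert the downloaded .p7b bundle from Harica into a PEM chain for nginx
openssl pkcs7 -inform pem -in kushaldas.in.p7b -print_certs -out kushaldas.in.onion.chain.pem -outform pem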

Setting up nginx

This part will be the same as any other standard nginx configuration. The following is what I use. Please uncomment the Strict-Transport-Security header line only after you are sure everything is working fine.

# Port 80 on the onion service: redirect everything to HTTPS
server {
    listen unix:/var/run/tor-hs-kushal.sock;

    server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
    access_log /var/log/nginx/kushal_onion-access.log;

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS on the onion service, using the Harica certificate
server {
    listen unix:/var/run/tor-hs-kushal-https.sock ssl http2;

    server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
    access_log /var/log/nginx/kushal_onion-access.log;

    ssl_certificate /etc/pki/kushaldas.in.onion.chain.pem;
    ssl_certificate_key /etc/pki/kushaldas.in.onion.open.key;

    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    index index.html;
    root /var/www/kushaldas.in;

    location / {
        try_files $uri $uri/ =404;
    }
}

I also have the following configuration in the /etc/tor/torrc file to use the unix socket files.

HiddenServiceDir /var/lib/tor/hs-kushal/
HiddenServiceVersion 3
HiddenServicePort 80 unix:/var/run/tor-hs-kushal.sock
HiddenServicePort 443 unix:/var/run/tor-hs-kushal-https.sock

In case you want to know more about why you need a certificate for your onion address, the Tor Project has a very nice explanation.

Defending against side channel attacks via dependencies

Yesterday Alex Birsan posted a blog explaining how he pulled off a supply chain attack on various companies via dependencies. I had been waiting for this post since last August, when we noticed the packages in question on PyPI (and had them removed). I reached out to Alex to find out more about the packages, and he said he would write a blog post.

Around the same time, @rsobers also tweeted about how a similar attack works via DNS.

dns data exfiltration

At the SecureDrop project, we spent a lot of time figuring out how to defend against similar attacks. My PyCon US 2019 talk explained the full process. In simple words, we do the following:

  • Verify the source of every Python dependency before adding or updating it. This is a manual step, done by human eyes.
  • Build wheels from the verified sources and store them in a package repository along with OpenPGP-signed sha256sums.
  • Before building the Debian package, verify the OpenPGP signature, and install only from the known package repository, using the verified (again via OpenPGP signature) hashes of the wheels we built ourselves.

In the last few months, we modified these steps and made them even easier. All of our wheels are now built from the same known sources, built reproducibly, and stored in git (via LFS). The final Debian packages are again reproducible. On top of the OpenPGP signature and package sha256sums verification via pip mentioned above, we now also pass the --no-index argument to pip, to make sure that all of the dependencies are picked up from the local directory.
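As a rough sketch, the install step then looks something like the following; the wheel directory and requirements file names here are only illustrative (the real paths live in the SecureDrop build scripts), and the requirements file carries the expected sha256 hashes:

python3 -m pip install --no-index --require-hashes \
    --find-links ./localwheels/ \
    --requirement requirements.txt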

Oh, and I also forgot to mention that all of the project source tarballs used in SecureDrop Workstation package building are also reproducible. If you are interested in doing the same in your Python project (or actually for any project's source tarball), feel free to have a look at this function.
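The helper in our repository is a Python function, but the idea is tool-agnostic: pin everything that normally varies between builds, i.e. file ordering, timestamps, and ownership. A rough equivalent with GNU tar (the mtime value is just an arbitrary pinned date, and the project name is illustrative) looks like this:

# Fixed ordering, timestamps and ownership make the archive byte-for-byte stable
tar --sort=name \
    --mtime="2020-01-01 00:00Z" \
    --owner=0 --group=0 --numeric-owner \
    -cf myproject.tar myproject/
# --no-name keeps gzip from embedding the original file name and timestamp
gzip --no-name -9 myproject.tar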

There is also a diff-review mailing list (very low volume), where we post a signed message for any source package update we review.

PrivChat with Tor: 2020-12-11 Tor advancing human rights

Tomorrow, on 11th December 2020 at 18:00 UTC, Tor will host the third edition of the PrivChat event, a discussion with some real-life Tor users about how Tor helps them defend human rights. You can watch it live on YouTube.

image of snowden

The event will be hosted by Edward Snowden, President, Freedom of the Press Foundation. We have Alison Macrina (Founder, Library Freedom Project), Berhan Taye (Africa Policy Manager and Global Internet Shutdowns Lead, Access Now), and Ramy Raoof (Security Labs Technologist, Amnesty International) as participants in the discussion.

Johnnycanencrypt 0.4.0 released

Last night I released version 0.4.0 of johnnycanencrypt, an OpenPGP module for Python. This release has one update in the key creation API: we can now pass a single UID as a string, multiple UIDs in a list, or even None to the key creation method. This means we can have User ID-less certificates, which sequoia-pgp allows.

I also managed to fix the bug so that users can use pip to install the latest release from https://pypi.org.

You will need the Rust toolchain; I generally install it from https://rustup.rs.
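If you do not have it already, the standard installer one-liner from that site is:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh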

For Fedora

sudo dnf install nettle clang clang-devel nettle-devel python3-devel

For Debian/Ubuntu

sudo apt install -y python3-dev libnettle6 nettle-dev libhogweed4 python3-pip python3-venv clang

Remember to upgrade your pip version inside of the virtual environment if you are in Buster.

For macOS

Install nettle via brew.
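Assuming Homebrew is already set up, that is:

brew install nettle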

Installing the package

❯ python3 -m pip install johnnycanencrypt
Collecting johnnycanencrypt
  Downloading https://files.pythonhosted.org/packages/50/98/53ae56eb208ebcc6288397a66cf8ac9af5de53b8bbae5fd27be7cd8bb9d7/johnnycanencrypt-0.4.0.tar.gz (128kB)
     |████████████████████████████████| 133kB 6.4MB/s
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Building wheels for collected packages: johnnycanencrypt
  Building wheel for johnnycanencrypt (PEP 517) ... done
  Created wheel for johnnycanencrypt: filename=johnnycanencrypt-0.4.0-cp37-cp37m-macosx_10_7_x86_64.whl size=1586569 sha256=41ab04d3758479a063a6c42d07a15684beb21b1f305d2f8b02e820cb15853ae1
  Stored in directory: /Users/kdas/Library/Caches/pip/wheels/3f/63/03/8afa8176c89b9afefc11f48c3b3867cd6dcc82e865c310c90d
Successfully built johnnycanencrypt
Installing collected packages: johnnycanencrypt
Successfully installed johnnycanencrypt-0.4.0
WARNING: You are using pip version 19.2.3, however version 20.2.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Now, you can import the module inside of your virtual environment :)

Note: In the future, I may change the name of the module to something more meaningful :)

High load average while package building on Fedora 33

Enabling link-time optimization (LTO) in rpmbuild is one of the new features of Fedora 33. I had read the change page once, and only went back to it after I did the Tor package builds locally.

While building the package, I noticed that suddenly there were many /usr/libexec/gcc/x86_64-redhat-linux/10/lto1 processes, and my load average reached 55+. Here is a screenshot I managed to take in the middle of the build.

high load average

Running SecureDrop inside of podman containers on Fedora 33

Last week, while setting up a Fedora 33 system, I thought of running the SecureDrop development container there, but using podman instead of the Docker setup we have.

I tried to make minimal changes to our existing scripts. I added a ~/bin/docker file with podman $@ inside (and the shebang line).
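The whole shim looks roughly like this (I would quote "$@" so that arguments containing spaces survive); it needs to be executable (chmod +x ~/bin/docker) and ~/bin must come before the real docker binary in your PATH:

#!/bin/bash
# Hand every "docker ..." invocation over to podman unchanged
exec podman "$@"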

Next, I provided the proper label for SELinux:

sudo chcon -Rt container_file_t securedrop

SecureDrop runs as a normal user inside of the Docker container. I could not do the same here, as the filesystem gets mounted as root and I cannot write to it. So I had to modify one line in the bash script, and I also disabled another function call which deletes the /dev/random file inside of the container.

diff --git a/securedrop/bin/dev-shell b/securedrop/bin/dev-shell
index ef424bc01..37215b551 100755
--- a/securedrop/bin/dev-shell
+++ b/securedrop/bin/dev-shell
@@ -72,7 +72,7 @@ function docker_run() {
            -e LANG=C.UTF-8 \
            -e PAGE_LAYOUT_LOCALES \
            -e PATH \
-           --user "${USER:-root}" \
+           --user root \
            --volume "${TOPLEVEL}:${TOPLEVEL}" \
            --workdir "${TOPLEVEL}/securedrop" \
            --name "${SD_CONTAINER}" \
diff --git a/securedrop/bin/run b/securedrop/bin/run
index e82cc6320..0c11aa8db 100755
--- a/securedrop/bin/run
+++ b/securedrop/bin/run
@@ -9,7 +9,7 @@ cd "${REPOROOT}/securedrop"
 source "${BASH_SOURCE%/*}/dev-deps"
 
 run_redis &
-urandom
+#urandom
 run_sass --watch &
 maybe_create_config_py
 reset_demo

This time I felt that the build time for verifying each cached layer was much longer with podman than it used to be, but maybe I am just mistaken. The SecureDrop web application is working just fine inside.

Package build containers

We also use containers to build Debian packages, and those molecule scenarios were failing because the ansible.posix.synchronize module could not sync files into a podman container. I asked if there is any way to do that, and by the time I woke up, Adam Miller had a branch that fixed the issue. I used it directly in my virtual environment, and the package build was successful. Then the testinfra tests failed because they could not create the temporary directory inside of the container. I have already opened an issue for that.

Fixing errors on my blog's feed

For the last few weeks, my blog feed was not showing up on the Fedora Planet. While trying to figure out what was wrong, Nirik pointed me to the 4 errors in the feed reported by the W3C validator. In case you don't know, I use a self-developed Rust application called khata for my static blog, which means I had to fix these errors myself.

  • Missing guid: just adding a guid to the feed items solved this.
  • Relative URLs: this had to be fixed via the pulldown_cmark parser.
  • Datetime error: the validator said the value was "not RFC822". I am using the chrono library and was using the to_rfc2822 call; now I am creating the RFC 822 value by hand with a format string.
  • There is still one open issue that depends on an upstream fix.

The changes are in git, and I am using a build from there. I will make a release after the final remaining issue is fixed.

Oh, I also noticed how bad the code looks now that I can understand Rust better :)

Also, the other Planets, like Python and Tor, are still picking up my feed fine.