Kushal Das

FOSS and life. Kushal Das talks here.

kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion

One week with Fedora Atomic in production

I have been using containers on my personal servers for over a year, running a few services in them. One week ago, I moved all of my personal servers to Fedora Atomic, and I am now running even more services in containers on them.

Server hardware & OS

These are actually VMs with a couple of GBs of RAM, and a few CPUs each. I installed them using the Fedora Atomic ISO image (get it from here) over virt-manager.
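I did the installs through the virt-manager GUI; a roughly equivalent virt-install command line would look like the sketch below. The VM name, sizes, and ISO path are placeholders, not my actual values.

```shell
# Placeholder values throughout; adjust the name, sizes, and ISO path.
sudo virt-install \
    --name atomic01 \
    --memory 2048 \
    --vcpus 2 \
    --disk size=20 \
    --cdrom /var/lib/libvirt/images/Fedora-Atomic.iso \
    --os-variant fedora24
```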

The containers inside

You can find all the Dockerfiles etc. in the repo. Note: I still have to clean up a few of those.

Issues faced

On day zero, the first box I installed stopped printing anything on STDOUT. After a reboot, I upgraded with the atomic host upgrade command, and I have not faced any other problem since. So, try to stay updated.
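The upgrade workflow on an Atomic host is short; a sketch of the commands involved (all need root):

```shell
# Show the currently booted deployment and any rollback deployment:
atomic host status

# Fetch and deploy the latest tree; a reboot boots into it:
atomic host upgrade
systemctl reboot

# If the new deployment misbehaves, switch back to the previous one:
atomic host rollback
```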

Building my own rpm-ostree repo

My next target was to compose my own rpm-ostree repo. I used Patrick's workstation repo files for this. In my fork I added a couple of files for my own tree, and the build script. The whole work is done in a Fedora 24 container. You can view the repo here. The whole thing is exposed via another Apache container. I will explain the steps in more detail in a future blog post.
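The core of such a build script is the compose step. A minimal sketch, where the repo path /srv/repo and the treefile name my-tree.json are illustrative placeholders, not the actual files from my fork:

```shell
# Illustrative paths; /srv/repo and my-tree.json are placeholders.
# Create the OSTree repo once, in archive mode so it can be served over HTTP:
ostree --repo=/srv/repo init --mode=archive-z2

# Compose the tree described by the treefile into the repo:
rpm-ostree compose tree --repo=/srv/repo my-tree.json

# Regenerate the summary file so that clients can list the refs:
ostree --repo=/srv/repo summary -u
```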

What is next?

The first step is to clean up my old Dockerfiles. I will add any future services as containers on these boxes. Even though we automatically test our images using Autocloud, using them in my production environment will help me find bugs in a more convenient manner.

Trying out rkt v1.14.0 on Fedora 24

A few days back CoreOS released rkt v1.14.0. You can read the details about the release in their official blog post. I decided to give it a try on a Fedora 24 box, following the official documentation. The first step was to download the rkt, and acbuild tools.

To download and install the acbuild tool, I did the following (btw, as it was a cloud instance, I just moved the binaries to my sbin path):

$ wget https://github.com/containers/build/releases/download/v0.4.0/acbuild-v0.4.0.tar.gz
$ tar xzvf acbuild-v0.4.0.tar.gz
$ sudo mv acbuild-v0.4.0/* /usr/sbin/

Now for rkt, do the following.

$ wget https://github.com/coreos/rkt/releases/download/v1.14.0/rkt-v1.14.0.tar.gz
$ tar xzvf rkt-v1.14.0.tar.gz
$ cd rkt-v1.14.0
$ ./rkt help
$ sudo cp -r init/systemd/* /usr/lib/systemd/

Now I had to modify a path inside the ./scripts/setup-data-dir.sh file; at line 58 I wrote the following.

systemd-tmpfiles --create /usr/lib/systemd/tmpfiles.d/rkt.conf

The next step is to execute the script, which will create the required directories, and fix the permission issues. Before that, I also created a group, and added my current user to it. Remember to log out, and log in again for the group change to take effect.

$ sudo groupadd rkt
$ export WHOAMI=$(whoami); sudo gpasswd -a $WHOAMI rkt
$ sudo ./scripts/setup-data-dir.sh

The rkt documentation suggests disabling SELinux for trying it out. Instead, I ran it with SELinux in Enforcing mode, and then created a local policy based on the errors. I have also opened a bug for the rawhide package.

# ausearch -c 'systemd' --raw | audit2allow -M localrktrawhide
# semodule -i localrktrawhide.pp

After all this, we are finally ready to start using rkt on the system.

The Try out document says to trust the signing key of etcd. I am going to do that, and then test by fetching the image.

$ sudo ./rkt trust --prefix coreos.com/etcd
$ ./rkt fetch coreos.com/etcd:v2.3.7
image: searching for app image coreos.com/etcd
image: remote fetching from URL "https://github.com/coreos/etcd/releases/download/v2.3.7/etcd-v2.3.7-linux-amd64.aci"
image: keys already exist for prefix "coreos.com/etcd", not fetching again
image: downloading signature from https://github.com/coreos/etcd/releases/download/v2.3.7/etcd-v2.3.7-linux-amd64.aci.asc
Downloading signature: [=======================================] 490 B/490 B
Downloading ACI: [=============================================] 8.52 MB/8.52 MB
image: signature verified:
  CoreOS Application Signing Key <security@coreos.com>
  sha512-7d28419b27d5ae56cca97f4c6ccdd309

You can view the images with the image list subcommand.

$ ./rkt image list
ID                      NAME                                    SIZE    IMPORT TIME     LAST USED
sha512-5f362df82594     coreos.com/rkt/stage1-coreos:1.14.0     162MiB  1 day ago       1 day ago
sha512-86450bda7ae9     example.com/hello:0.0.1                 7.2MiB  15 hours ago    15 hours ago
sha512-7d28419b27d5     coreos.com/etcd:v2.3.7                  31MiB   48 seconds ago  48 seconds ago

From here, you can just follow the getting started guide. I used the --debug flag to see what is going on.

$ sudo ./rkt --insecure-options=image --debug run ../hello/hello-0.0.1-linux-amd64.aci                                
image: using image from local store for image name coreos.com/rkt/stage1-coreos:1.14.0
image: using image from file ../hello/hello-0.0.1-linux-amd64.aci
stage0: Preparing stage1
stage0: Writing image manifest
stage0: Loading image sha512-86450bda7ae972c9507007bd7dc19a386011a8d865698547f31caba4898d1ebe
stage0: Writing image manifest
stage0: Writing pod manifest
stage0: Setting up stage1
stage0: Wrote filesystem to /var/lib/rkt/pods/run/d738b5e3-3fe9-4beb-ae5c-3e8f4153ee57
stage0: Pivoting to filesystem /var/lib/rkt/pods/run/d738b5e3-3fe9-4beb-ae5c-3e8f4153ee57
stage0: Execing /init
networking: loading networks from /etc/rkt/net.d
networking: loading network default with type ptp
Spawning container rkt-d738b5e3-3fe9-4beb-ae5c-3e8f4153ee57 on /var/lib/rkt/pods/run/d738b5e3-3fe9-4beb-ae5c-3e8f4153ee57/stage1/rootfs.
Press ^] three times within 1s to kill container.
systemd 231 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL +XZ -LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN)
Detected virtualization rkt.
Detected architecture x86-64.

Welcome to Linux!

Set hostname to <rkt-d738b5e3-3fe9-4beb-ae5c-3e8f4153ee57>.
[  OK  ] Listening on Journal Socket.
[  OK  ] Created slice system.slice.
         Starting Create /etc/passwd and /etc/group...
[  OK  ] Created slice system-prepare\x2dapp.slice.
[  OK  ] Started Pod shutdown.
[  OK  ] Started hello Reaper.
[  OK  ] Listening on Journal Socket (/dev/log).
         Starting Journal Service...
[  OK  ] Started Create /etc/passwd and /etc/group.
[  OK  ] Started Journal Service.
         Starting Prepare minimum environment for chrooted applications...
[  OK  ] Started Prepare minimum environment for chrooted applications.
[  OK  ] Started Application=hello Image=example.com/hello.
[  OK  ] Reached target rkt apps target.
[111534.724440] hello[5]: 2016/09/10 14:48:59 request from 172.16.28.1:35438

While the above container was running, I tested it from another terminal, and then stopped it.

$ ./rkt list
UUID            APP     IMAGE NAME              STATE   CREATED         STARTED         NETWORKS
865b862e        hello   example.com/hello:0.0.1 running 8 seconds ago   8 seconds ago   default:ip4=172.16.28.2
$ curl 172.16.28.2:5000
hello
$ sudo ./rkt stop 865b862e
"865b862e-21f5-43e0-a280-3b4520dad97c"

I hope this post will help you try out rkt on a Fedora system. Feel free to comment if you have questions, or ask over Twitter.

Setting up a home music system with Raspberry Pi3 and MPD

I had a Raspberry Pi3 in my home office (actually it went missing for a few months). I found it two nights back, and decided to put it to use. The best part is the on-board WiFi, which means I can put it anywhere in the house, and still access it. I generally use Raspbian on my Pi(s), so I did the same this time too. After booting from a fresh SD card, I did the following to install Music Player Daemon:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install mpd
$ sudo systemctl enable mpd
$ sudo systemctl start mpd

This got MPD running on the system. The default location of the songs directory is /var/lib/mpd/music; you can set the location in the /etc/mpd.conf file. But this time, whenever I changed a song, the service stopped, and I had to restart it. After poking around for some time, I found that I had to uncomment the following line in the mpd.conf file.

device "hw:0,0"

I also changed the value of mixer_type to software, which enables volume control from the client software. After a restart, everything worked as planned. I have MPD clients on my phone (and also on Anwesha's phone, and on my mother-in-law's tablet), and on my laptop.
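For reference, the relevant parts of my /etc/mpd.conf ended up looking roughly like this (the output name is just a label):

```
music_directory         "/var/lib/mpd/music"

audio_output {
    type            "alsa"
    name            "My ALSA Device"
    device          "hw:0,0"        # uncommenting this fixed the crash on song change
    mixer_type      "software"      # software volume control for the clients
}
```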

The above is a screenshot of the client on my phone. On my Fedora laptop I installed a new client called cantata.

$ sudo dnf install cantata

If you have any tips on MPD, or on similar setups, feel free to comment, or drop a note over Twitter/mail. Happy music, everyone.

Event report: Flock 2016

This year's Flock was held in Krakow, Poland, from the 2nd to the 5th of August. /me and Sayan started our journey on the 30th from Pune, and we finally reached Krakow on the 31st in the afternoon. Ratnadeep joined us from Frankfurt. Patrick was my roommate this time; he reached a few hours after we did.

Day -1

Woke up early, and started welcoming people near the hotel entrance. Slowly the whole hotel filled up with Flock attendees. Around noon, a few of us decided to visit Oskar Schindler's Enamel Factory. This place had been on my visit list for a long time (most of that list is historical places), and I finally managed to check it off. Then we walked back to the city center, and finally back to the hotel.

Started meeting a lot more people in the hotel lobby. The usual staying up till late night continued at this conference too. The only issue was getting up early; somehow I could not wake up early and write down the daily reports as I did last year.

Day 0

Managed to reach breakfast with enough time to eat before the keynote started. Not being able to find good espresso was an issue, but Amanda later pointed me to the right place. I don't know how she manages to do this magic every time; she can really remove/fix any blocker for any project :)

Received the conference badge, and other event swag from the registration desk. This is till date the most beautiful badge I have seen. Mathew gave his keynote on "The state of Fedora". Among the many important stats he shared, one point was really noticeable to me: for every single Red Hat employee who contributes to Fedora, there are at least two contributors from the community. This is a nice sign of a healthy community.

After the keynote I started attending the hallway tracks as usual. I went to this conference with a long list of topics I needed to discuss with various people, and I managed to have all of those conversations over the 4 days of the event. Got tons of input about my work, and about project ideas. Now is the time to turn those suggestions into solid contributions.

Later I went to the "The state of Fedora-infra" talk. This was important to me personally, as it gives an easy way to revisit all the infrastructure work going on. Later in the day I attended the Fedora Magazine, and university outreach talks.

In the evening there was a "Tour of Krakow", but the Fedora Engineering team had a team dinner instead, as this is the only time when all of us meet physically. The food was once again superb.

Day 1

As I mentioned before, it was really difficult to wake up, but somehow I managed to do so, and reached downstairs before the keynote started. Scratch was mentioned in the keynote as a tool they use. Next came the usual hallway talks; in the second half I attended the diversity panel talk, and then moved to the Pagure talk. I knew that there was a huge list of new cool features in Pagure, but learning about them directly from the upstream author is always a different thing. Pingou's slides also contained many friends' names, which is always a big happy thing :)

My talk on Testing containers using Tunir was one of the last talks of the day. You can go through the whole presentation, and if you want to see any of the demos, click on those slides. That will open a new tab with a shell console; type as you normally would in any shell (you can type any char), and press Enter as required. I use Tunir to test my personal containers, which I run in production. In this talk I tried to cover various such use cases.

At night we went out for river cruising. Before coming back, a few of us visited the famous Wawel Dragon. I also met Scot Collier for the first time. It is always nice to meet people with whom you work regularly over the internet.

Day 2

It started with lightning talks. I spoke for a few minutes about the dgplug summer training. You can find the list of talks here. After this, in the same room, we had the "Meet the FAmSCo" session. I managed to meet Gerold, Kanarip, and Fabian after 9 years at this Flock. Christoph Wickert took notes, and discussed the major action items in last week's FAmSCo IRC meeting too. Next I attended the "Infrastructure workshop", and after that, as usual, hallway tracks for me. I was looking forward to having a chat with Dodji Seketeli about his, and Sinny's, work related to ABI stability. Later at night a few of us managed to stay up till almost 5AM working :)

Day 3

It was the last day of Flock 2016. Even after the early morning bedtime, I somehow managed to pull myself out of bed, and came down to the lobby. I spent the rest of the day just talking to people: various project ideas, demos of ongoing work, working on future goals.

Conclusion

Personally, I had a long list of items I wanted to discuss with various people, and I think I managed to cross off all of them. I got enough feedback to work on. In the coming days I will blog about those action items. Meanwhile, you can view the photos from the event.

Fedora Atomic 24 is now available

Just in case you missed the news, Adam Miller has already announced the availability of the Fedora Atomic release based on Fedora 24. You can get it from the usual place. Dusty has already uploaded the same to Atlas for Vagrant. You can try it out with the following:

    $ vagrant init fedora/24-atomic-host; vagrant up

As Adam mentioned in his mail, we are sorry for the delay, but we will keep improving the process. Thank you everyone for helping us with this release.

Fedora mirror at home with improved hardware

It was always a dream to have a fully functional Fedora mirror on the local network which I could use. I tried many times before, mostly by copying rpms from the office, carrying them around on a hard drive, etc., but I never managed to set up a working mirror which would just work (even though setting one up is not that difficult). My house currently has 3 different networks (from 3 different providers), and at any point of time 1 of them stays down 😔

Hardware

If you remember my post on home storage, I was using Banana Pi(s). They are still very nice, and Fedora runs on them properly, but they are not very powerful; things like rsync were crawling on them. This PyCon, I received Minnowboard Turbots from John Hawley (thanks a lot once again). It took time to get them set up (as I don't have a monitor with HDMI, I had to steal the TV from the front room), but they are finally up in my own production environment. Installation of Fedora was super easy: I just used the latest Fedora 24 from a live USB stick, and I was ready to go.

In the picture above you can see two of those running, you can also see a Banana Pi in the back.

Syncing up the mirror

Now for my work, I mostly need x86_64, and nothing else (I update my ARM boards, but not regularly). So, following the tips from smooge, and puiterwijk in the #fedora-noc channel, and some tips from this wiki page, I started rsyncing the 24 GA release. This was around 55GB, and took me some days to pull in. Meanwhile, Chandan helped me by syncing the updates repo. Right now I have a cron job which syncs the updates repo every night.
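The nightly sync boils down to a single rsync invocation. A sketch, assuming the local mirror lives under /srv/mirror (a placeholder path; the upstream URL is the public Fedora rsync endpoint):

```shell
# Assumed local path; adjust MIRROR_ROOT for your own mirror layout.
UPSTREAM="rsync://dl.fedoraproject.org/fedora-enchilada/linux"
MIRROR_ROOT="/srv/mirror/fedora/linux"

# -a keeps permissions and timestamps, -S handles sparse files,
# -H preserves hard links, --delete drops packages removed upstream.
rsync -aSH --delete \
    "$UPSTREAM/updates/24/x86_64/" \
    "$MIRROR_ROOT/updates/24/x86_64/"
```

The same command, on one line, goes into the nightly crontab entry.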

Remember to add the following to your Apache virtualhost configuration:

  AddType application/octet-stream .iso
  AddType application/octet-stream .rpm

Event report: Fedora 24 release party Pune

Last Saturday we had the Fedora 24 release party in Pune. It was actually held along with our regular Fedora meetup, in the same location. We had a few new faces this time, but most of the regular attendees were there.

Chandan Kumar started the day with a presentation about the new features in Fedora 24. We tried out a few of those on our laptops. During the discussions, Parag and Siddhesh pointed out how important self-learning is for us. Siddhesh also talked about a few project ideas around glibc. In one of the previous meetups, many of the attendees sent PRs to the Fedora Autocloud test cases. We talked about those for a few minutes; since yesterday, I have actually been merging them into master.

As a group we decided to work on making a modern version of the glibc documentation. There is no git repo yet, but I will provide the links when we have something to show. As a group, our goal is to do more upstream contribution. One thing I noticed is that most of the attendees were dgplug summer training participants.

Thank you Patrick

Patrick

This post is to say thank you to Patrick Uiterwijk, who is part of the Fedora Infrastructure sysadmin team (along with nirik, and smooge). He is one of our silent warriors who keeps the Fedora Infrastructure running. Many may not have heard of him, unless you came down to one of our admin or development IRC channels, and asked for help. I personally ask him for help in many different areas, from basic sysadmin questions, to complex deployment issues, to programming ideas. He is one of the hackers I know who can code from assembly to the web frontend as required. He is our main force behind the Fedora Infrastructure Cloud, and spam fighting :) He is also our new Fedora Infrastructure Security Officer. If any of our new and young contributors thinks that she/he is too young to do things, you may want to look at Patrick's work. He started contributing back in 2012, and he is right now 24 :)

So, thank you once again for all the help.

Event report: Fedora Cloud FAD 2016

Around a month back, the Fedora Cloud Working Group met in Raleigh for the two-day Cloud FAD. The goal of the meet was to agree on the future we want, and to go through the major action items for the coming releases. I reached Raleigh one day early; Adam Miller was my roommate for this trip. Managed to meet Spot after a long time, as this was my first visit to the mothership :) I also managed to meet my new teammate Randy Barlow.

Adam took the lead in the event. We first went through the topics from the FAD wiki page, and then arranged them in the order we wanted to discuss them.

Documentation was the first item. Everyone in the room agreed that it is the most important part of the project: we communicate with our users through the documentation. If a project can provide better documentation with clear examples, it will be able to attract more users. Last year I started a repo to make documents available faster, but that did not work out well. Jared volunteered, and suggested having an open repo where people can submit documents without worrying much about format; he will help convert them to the right format as required. We also noted a few important examples/documents we want to see. Feel free to write about any of these and submit a pull request to the repo.

Automated testing was the next important point. We went through the current state of the tests in the Autocloud project. Most of our tests there came from various bugs filed against the images/tools. Dusty was very efficient in creating the corresponding issues on GitHub from the RH Bugzilla bugs, which in turn were converted into proper Python3 unittest-based test cases by the volunteers. Next we moved to the automated testing of layered image builds. Tflink provided valuable input in this discussion. We talked about SPC(s), and how the maintainers of the images will be responsible for the tests. For the tests related to the Cloud/Atomic images, the Cloud Working Group will be responsible.

There were a few super long discussions related to the Fedora, Atomic, and OpenShift projects. Many members from the OpenShift development team also joined in the discussion. The results of these discussions will be seen on the mailing lists (too much content for this blog post). We also discussed Vagrant, and the related tools people are using or creating. With help from Randy, I managed to package the vagrant-digitalocean plugin during the FAD. There will be a Fedora Magazine post with more details on the same.

We also agreed on having monthly updated base images. We still have to figure out a few missing pieces so that we can have a smooth updated release.

Public cloud provider outreach was one of the last points we discussed. We have to pick the different providers one by one, and make sure that we can provide updated releases to them for the users' consumption. The point of more documentation came up in this discussion too.

Summer training 2016 is on

The 9th edition of the dgplug summer training started a few weeks back. This year in the IRC channel (#dgplug on freenode) we saw around 186+ nicks participating in the sessions. Till now we have gone through communication guidelines, IRC, mailing list how-tos, a text editor (Vim in this case), blogging, basic bash commands, and a few more advanced bash commands. We also learned about reStructuredText, and Sphinx. We also managed to give live demos to all the students from the mentor's terminal.

Even though most of the participants are new to Linux, we can see a better quality of questions coming from them, and better still, most of them are not afraid of asking questions, which is a very good sign in the training. A few days back, Trishna Guha took the first guest session of this year; she talked about her experience in the summer training last year, and how she has become an upstream contributor. If you read the logs, you can see people asked all sorts of questions to her.

We also have a feed aggregator up where you can read the blog posts of the participants. I still have to add a few more blogs, as many people sent in their feed URLs much later.

If you are an upstream contributor, and you want to encourage this group of new participants, please drop me a note. We would love to see more upstream folks talking to these new/upcoming contributors. The sessions run on IRC, so they can be at a time when you are available. Just ping me if you are interested in taking a session :)