
pump.io 5.1.1, Docker images, and sunsetting Node 4 support

It's been a (relatively) long time since we've put anything on this blog, and I think it's high time for an update - especially since there are so many exciting things afoot! Not only is pump.io 5.1.1 now on npm, but we have new experimental Docker images! With upstream having already dropped security support, we're also planning to drop support for Node 4 soon.

Let's take these one at a time.

pump.io 5.1.1

Several months ago I landed a patch from contributor Camilo QS fixing a bug in pump.io's session handling in a route serving uploads. This bug made it so that non-public uploads would always return HTTP 403 Forbidden, even if the user actually was authorized. Clearly, this makes uploads unusable for people who don't like to post everything publicly. Evan suggested that we should backport this bugfix since it's so high-impact, and I agree. So that's what pump.io 5.1.1 contains: a bugfix for uploads. Since it's a patch release, 5.1.1 is a drop-in replacement for any 5.x pump.io release, so I'd highly encourage administrators to upgrade as soon as it's convenient. We'd also love it if you filed any bugs you find, and feel free to get in touch with the community if you need help or have questions. As a reminder, you can subscribe to our low-volume announce mailing list to get email when we put out new releases like this. Also, I would be remiss if I didn't mention that my signing key setup has changed temporarily - see here if you want to cryptographically verify the 5.1.1 release.

If you're on an npm-based install, you can upgrade with npm install -g pump.io@5.1.1. If you're on a source-based install, you can upgrade by integrating the latest commits in the 5.1.x branch. See here for the changelog.
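
If it helps, here's roughly what those two upgrade paths look like in practice. This is only a sketch - paths, remotes, and process managers vary by install, and the /opt/pump.io path and the systemctl restart at the end are placeholders:

    # npm-based install
    npm install -g pump.io@5.1.1

    # source-based install: pull in the latest 5.1.x commits and reinstall
    cd /opt/pump.io                    # wherever your checkout lives
    git fetch origin
    git checkout 5.1.x
    git merge --ff-only origin/5.1.x
    npm install
    # ...then restart the daemon however you normally run it, e.g.:
    # systemctl restart pump.io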

But that's not all. pump.io 5.1.1 also includes another exciting change: with this release, we've integrated automation to release pump.io Docker images too.

Docker images

We've wanted to release pump.io Docker images for a long time. But Docker has a well-known problem: security vulnerabilities in Docker Hub images are rampant. Even though we've had a Dockerfile in the repository for a while thanks to contributor thunfisch, we didn't want to release official Docker images if we weren't sure we could always provide security support for them.

Unfortunately, Docker the company has done very little to address this problem. Most of their solutions are aimed at image consumers, not authors. Docker Hub has some capacity for automatically rebuilding images, but unfortunately, it's not enough and you end up having to roll everything yourself anyway. Pretty disappointing - so we had to get creative.

Our solution to this problem is to utilize Travis CI's cron functionality. Every day, Travis will automatically trigger jobs that do nothing but build pump.io Docker images. These images are then pushed to Docker Hub. If nothing has changed, Docker Hub recognizes that the "new" images are actually identical to what's already there, and nothing happens. But if there has been a change, like a native dependency receiving a security update, then the image ID will change and Docker Hub will accept the updated image. This cronjob is enabled for the 5.1.x branch and master (which, as a side effect, means that alpha Docker images are published within 24 hours of a git push), and in the future it will be enabled on all branches that we actively support. Thus, Docker users can easily set up automation to ensure that they're never running an insecure image for more than 24 hours.
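
On the consuming side, that automation can be as simple as a daily cron job that re-pulls the image and recreates the container when the image ID changes. A minimal sketch - the image and container names here are placeholders, not our documented setup:

    #!/bin/sh
    # e.g. dropped into /etc/cron.daily/
    IMAGE=pumpio/pump.io:5.1.1      # placeholder image name
    before=$(docker image inspect --format '{{.Id}}' "$IMAGE" 2>/dev/null)
    docker pull "$IMAGE"
    after=$(docker image inspect --format '{{.Id}}' "$IMAGE")
    if [ "$before" != "$after" ]; then
        # the image was rebuilt upstream; recreate the container so it's used
        docker stop pumpio && docker rm pumpio
        docker run -d --name pumpio -p 443:443 "$IMAGE"
    fi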

If you're interested in trying out the Docker images, we'd love to know how it goes. They should still be treated as experimental at the moment, and early feedback would be super useful. You can read more details in our ReadTheDocs documentation.

Note that there are still more changes that we'd like to make to the Docker images. These changes didn't make it into the 5.1.1 packaging since they felt too invasive for a patch release. Instead we plan to make them in the next release, which is planned to be semver-major. Which brings me neatly to the last topic...

Sunsetting Node 4, 5, and 7 support

We had a good run, but it's time to say goodbye: Node.js upstream has marked Node 4.x as end-of-life, and in accordance with our version policy, we're doing the same. Since this is a semver-major change, we're also taking the opportunity to drop support for Node 5.x and Node 7.x. These changes have been made as of commit 32ad78, and soon we'll be ripping out old code used to support these versions, as well as upgrading dependencies that have recently started requiring newer Nodes.

Anyone still on these versions is encouraged to upgrade as soon as possible, as Node.js upstream is no longer providing security support for them. Administrators can use the NodeSource packages, or they can try out our new Docker images, which use a modern Node version internally.
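
Checking whether you're affected takes a second, and on Debian-ish systems the NodeSource repositories make the jump pretty painless. Something along these lines - the 8.x line here is just an example, pick whichever supported release you prefer:

    node --version      # anything reporting v4.x, v5.x, or v7.x is affected

    # Debian/Ubuntu via NodeSource, using the 8.x LTS line as an example
    curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
    sudo apt-get install -y nodejs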

Please reach out to the community if you need any help making the transition. And good luck!


New temporary signing keys

So, unfortunately, I recently lost my Nitrokey, which I liked very much. In addition to this being fairly upsetting, I am now left with the sticky situation of not being able to sign code - while I have a master key (from which I can generate new subkeys), I'm currently at college and my copy of the key is sitting 3,000 miles away at home.

To get around this situation, I've generated a temporary signing keypair. This keypair is set to expire after 3 months (and I don't intend to renew it). When I have access to my master keypair, I will revoke the old subkeys, generate new subkeys (it was probably time, anyway) and revoke the temporary keypair.
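
For the curious, a short-lived signing key like this is basically a one-liner with GnuPG 2.1+. A sketch of the general shape - the user ID and algorithm below are placeholders, not my actual parameters:

    # generate a signing-only keypair that expires in 3 months
    gpg --quick-generate-key "Alex Example <alex@example.com>" rsa4096 sign 3m

    # later, with the master key available again: revoke/replace its subkeys...
    gpg --edit-key MASTER-KEY-ID        # then: key N, revkey, addkey, save
    # ...and revoke the temporary keypair
    gpg --gen-revoke alex@example.com > temp-key-revocation.asc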

The new fingerprint is D825FD54D9B940FF0FFFB31AA4FDB7BE12F63EC3. I have uploaded the key to GitHub as well as the Ubuntu keyserver and keys.gnupg.net (just as my original was). The key is also incorporated into my Keybase account so that you can bootstrap trust in it, if you want to verify software signatures or whatever.
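
If you want to use it to verify a release, the dance is the usual one: fetch the key, check the fingerprint against the one above, then verify the signature. Roughly (the tarball and signature filenames are placeholders):

    # fetch the temporary key from the Ubuntu keyserver and check its fingerprint
    gpg --keyserver keyserver.ubuntu.com \
        --recv-keys D825FD54D9B940FF0FFFB31AA4FDB7BE12F63EC3
    gpg --fingerprint D825FD54D9B940FF0FFFB31AA4FDB7BE12F63EC3

    # verify a detached signature against the corresponding file
    gpg --verify pump.io-5.1.1.tar.gz.asc pump.io-5.1.1.tar.gz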


How to evaluate domain registrars: a DNS story

Adriel: Any recommendations of where I should buy domain names from?

Me: I've heard gandi.net is good and they support free software projects and organizations - I'm on Namecheap at the moment but it's janky and sometimes unreliable, and basically the only reason I'm still with them is inertia

Adriel: What makes the difference? Why does it matter where it comes from? I guess a better question is, what am I actually buying when I buy a domain?

Narrator: DNS is fraught with peril and complexity, and many hours would go by before Adriel got his answer...

This is a cleaned-up version of an explanation of how the DNS works and what owning a domain name really means. The original was much worse, because I typed it on my phone into a WhatsApp group chat and didn't edit at all. But you, dear reader, get the new-and-improved version! I'll start with Adriel's original question - why does it matter where my domain name comes from - and then transition into talking about how the DNS works and how the domain registrar plays into that picture.

Here is, in my highly-opinionated opinion, the golden rule of registrar shopping - the #1 thing you have to know: all registrars are sketchy. Therefore you are looking for the least terrible option.

Once you understand this, you're looking at four things registrars have to provide:

  1. DNS services
  2. Whois data
  3. The web UI for managing everything
  4. Support

These are the most important, but depending on your use case you might also want to examine value-add services registrars provide. Most registrars will also host your email and provide shared hosting, or sometimes a virtual private server, or VPS. A VPS is a box where you get root and can do whatever you want, including hosting a web server (but you have to do it yourself); shared hosting is where you get a managed web server installation that's shared (get it?) with other people. You get write access to a particular directory on the box that the web server is configured to serve from. (Registrars often also provide TLS/HTTPS certificates, but now that Let's Encrypt exists, why you'd pay for one unless you need an EV cert is beyond me.)

The third and fourth responsibilities are pretty easy to understand: is the web UI butt-ugly or buggy, and is the support staff friendly and responsive? So I want to focus on the first responsibility, DNS, because that can be super confusing (I don't really understand the second, Whois, and anyway this post is long enough already). Here's the tl;dr: the registrar provides the servers that are responsible for answering DNS queries. Even if you use a third-party provider your registrar is involved, because they have to serve NS records that basically say "hey look over at this other provider for the REAL records."

Let's break down exactly what that means. Before I start I should note that if you've ever heard people say something along the lines of, "it'll take up to 24 hours for DNS to propagate," you should forget that. It's a convenient lie that people (including myself sometimes!) use to explain DNS' caching behavior more easily.

DNS works by recursively resolving each component in a domain name. It's probably easiest to demonstrate how this works by walking through the steps clients like browsers take when they're trying to resolve a domain name into a numeric IP address. So say we're trying to resolve example.com, with no help from anybody else.

Our first step is to look up the authoritative nameservers for the com component. In other words, we're looking up the servers that have the absolute final word as to what DNS records com has. (More on exactly how this first step works later.) Once we've found the com nameservers, we issue a DNS query asking them where we can find the nameservers for example.com. The answer we get back will point to example.com's registrar. Always. Even if they're using a different DNS service - the registrar just points to the other service with NS records.
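
You can actually watch that referral happen with dig: ask one of the com TLD servers about example.com and you get back NS records pointing at the next hop, not a final answer. (a.gtld-servers.net really is one of the com nameservers; any of them will do.)

    # ask a com TLD server about example.com - the response is a referral
    # (NS records in the AUTHORITY section), not the records we ultimately want
    dig @a.gtld-servers.net example.com NS +norecurse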

Let's pause in our journey to resolve example.com for a second to consider the implications of this. This means that the registrar is always involved in DNS lookups for a domain, which is important for two reasons:

  1. If the registrar's DNS goes down so does your website
  2. If you want to use DNSSEC on your domain (which of course I have Opinions™ on but whatever) your registrar has to support it, because the way DNSSEC works is that EVERY link in the lookup chain MUST be signed, or the entire chain of trust falls apart[1]

Just things to bear in mind.

Anyway, we're almost done resolving example.com. We've found its nameserver, so all we have to do is issue our original query. Maybe we need an IPv4 address, in which case we'd ask the nameserver for all A records for example.com; maybe we want IPv6 addresses instead, in which case we'd ask for AAAA records. (If you want to know more about DNS record types, Wikipedia has a nice list!) If the registrar is the authoritative nameserver for example.com it'll respond normally; if example.com uses a 3rd-party DNS host, the registrar will respond with NS records. In the former case, we've gotten the data we originally set out to get; in the latter, we simply issue our query again to the nameserver the registrar pointed us at - which we now know to be authoritative - and if all goes well, we'll get a regular response and be done. As a reminder, the "authoritative nameserver" for a given domain is whatever nameserver contains the authoritative data for that domain. So we say e.g. "such-and-such a registrar's nameservers are authoritative for example.com." For example.net the authoritative nameservers could be completely different.
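
Put together, the tail end of the walk looks something like this with dig (ns1.registrar-dns.example is a made-up stand-in for whatever nameserver the referral handed us):

    # ask the authoritative nameserver our original question
    dig @ns1.registrar-dns.example example.com A       # IPv4 addresses
    dig @ns1.registrar-dns.example example.com AAAA    # IPv6 addresses

    # or make dig do the entire recursive walk itself and show its work
    dig +trace example.com A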

The overarching goal of the recursive resolution procedure I just laid out is simply to find that veeery last nameserver in the chain which can speak authoritatively for the domain we're interested in. Your registrar's job, as far as DNS is concerned, is to put the domain in the nameservers for the top-level domain (or TLD - com in example.com's case) and to either serve regular DNS records or point recursive DNS resolvers at the authoritative nameserver that will. There's also some other paperwork involved I think, but I wouldn't know much about that.

As an aside, I should note that normally, your computer doesn't do all this recursive stuff. There will be some DNS server upstream - perhaps running on your router or run by your ISP - that does it for you. This spares your computer from having to know how to do it, and it also makes stuff like caching work better.

Speaking of caching, let's talk about the real explanation behind the admittedly super-convenient "24 hour propagation" lie. Now that we know how DNS resolution works, the idea of "DNS propagation" is pretty simple to understand - it comes from caching. All that recursion stuff is expensive in terms of time (and nameserver load), so we want to avoid doing it whenever possible by generating responses from a local cache. This is accomplished by a Time To Live (TTL) associated with every DNS record response. The TTL basically says how long the record is valid for, and therefore how long it can be cached. When you change your DNS records and wait for them to "propagate", really you're just waiting for caches to expire. That means, though, that if you plan ahead you can lower your TTL to a few seconds, wait until everybody's cache expires (so everyone sees the new TTL), and then make your change[2]. This would keep downtime to a minimum. To be polite you'd then raise your TTL again. If you want to know more about this, here is an excellent Server Fault question that rehashes this explanation and then describes an exponential backoff strategy you might use to make such a change as efficiently as possible.
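
Concretely, that plan looks something like this - the values are placeholders, and the zone edits themselves happen in whatever panel or API your DNS provider gives you:

    # see what TTL your records are currently served with
    dig example.com A +noall +answer    # the second field is the remaining TTL

    # 1. in your provider's panel/API, lower the TTL (e.g. 3600 -> 60)
    # 2. wait at least one *old* TTL so every cache has picked up the low TTL
    # 3. change the A/AAAA records to point at the new address
    # 4. confirm everyone resolves the new address, then raise the TTL back up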

And with that, we've covered most of what you need to know about how DNS works, except for one thing - that mysterious first step. How did we get the nameservers for com?

The answer is the DNS root. The DNS root servers are at the absolute top of the DNS hierarchy, and they're responsible for resolving TLDs. In fact, if I were to nitpick, I'd point out that I lied earlier when I said we were trying to resolve example.com. In reality, we're trying to resolve example.com. (note the trailing .). If we read this new domain name, which is now fully qualified, from right to left, we get the chain we used to look up the final authoritative nameserver. It goes: empty string, indicating the root "zone" (to use DNS parlance) -> com -> example. DNS is technically distributed but organizationally highly centralized around the DNS root and trust in it, and people who don't like that tend to run alternative DNS roots (your friendly author is sympathetic to this position). But that's a story for another time.
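
That first hop is easy to poke at too - every resolver ships with a "root hints" file listing the root servers, and asking one of them for com gets you, once again, just a referral:

    # ask a root server who is authoritative for com
    dig @a.root-servers.net com NS +norecurse

    # and the name we're really resolving, trailing dot and all:
    dig example.com. A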

Hopefully you now have a much better idea of how the DNS works and why your registrar has to be involved in it. I'm 95% sure this post is accurate, but I could always be wrong so if you spot a mistake, please feel free to contact me either by the usual ways or by Zulip if you're a Recurser. Good luck!

Footnotes:

[1]: FWIW this is how Namecheap screwed up my domain - I turned on their DNSSEC option, but they had messed up their signing so that suddenly any resolver which validated DNSSEC would reject the legitimate responses as invalid. This was made extremely difficult to debug by the fact that you don't really know if a resolver is doing this for you, and often if DNSSEC validation fails that failure won't be passed directly on to the client (the client instead will just get a generic failure like SERVFAIL, which tells you nothing about why resolution fell over). So resolution would fail on half the networks I tested from, with no clear difference. To make matters worse, when I went to turn off Namecheap's DNSSEC support because my website was down for a basically semi-random sampling of my visitors, the change didn't actually propagate to production! Like, I flipped the switch in the admin panel and it literally did nothing. So I had to escalate a support ticket to get them to purge all the DNSSEC stuff from production. Kinda ridiculous!
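
For what it's worth, the least painful way I've found to check whether DNSSEC is what's biting you is to compare a validating query against one with validation switched off - if the first fails and the second works, the signatures are the problem. Roughly (8.8.8.8 is just a convenient validating resolver):

    # through a validating resolver: fails (typically SERVFAIL) if signatures are broken
    dig @8.8.8.8 example.com A +dnssec

    # same query with the "checking disabled" bit set, skipping validation
    dig @8.8.8.8 example.com A +cd

    # delv (ships with newer BIND) will actually explain why validation failed
    delv example.com A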

[2]: do note, however, that people sometimes don't follow the standard and set a lower bound (i.e. minimum) on TTLs. So if you make a change make sure stuff running on your old IP address won't wreak havoc if it gets production traffic.


Winter break retrospective & spring semester goals

Tonight I'll have been back at college for a full week, and I wanted to write up a little retrospective of winter break to see what I accomplished (and in particular, which goals I completed or missed).

You may wish to skip directly to the executive summary.

Resolved goal: Node.js manpage PR

I didn't complete this goal per se, but I did at least resolve it by closing the Pull Request. I felt pretty bad about it (especially because I kept saying I intended to finish it) but honestly, it became clear to me that I'd just lost the motivation to keep going with it. I would love it if this was included in Node.js core, but I just consistently have higher priorities. So rather than leave it hanging and cluttering up the Pull Requests view, I just closed it to reflect reality. I made sure not to delete the branch though, in case someone (distant future me?) wants to pick it up again.

Failed goal: deal with GPG keysigning

I did nothing to push this goal forward. While I made numerous improvements to my GPG setup, I did not actually sign anyone's key, which was what this goal was about. This feels unfortunate. (I also do not have access to the private key material in college, and am certainly not about to ask that it be shipped to me.)

Partially completed goal: push Debian packaging work forward

There were two components to this: Profanity packaging upgrades and getting the new filter-other-days packaging into Debian. I made no progress on the Profanity packaging. However, I did fix a misconfiguration in my .reportbugrc (which annoyingly had previously sent my incredibly detailed email about Profanity packaging to /dev/null) and then submitted an ITP (Intent To Package bug, which is a Debian thing) for filter-other-days. I then used that ITP bug number to fix the last .deb Lintian warning (although see below). Then I paired with Anja who is, as always, an angel, and we figured out the weirdness that is dput and mentors.debian.net. Finally I was able to upload filter-other-days(!) BUT I was in for a rude awakening - apparently Lintian checks for .debs and .dscs are different. So while my binary package was Lintian-clean, my source package unfortunately wasn't. This is something I will need to work on in the weeks to come. That being said, I'm still pretty proud of what I've accomplished here! I've made significant progress on this front.

Completed goal: lazymention v1

One of the first things I did was ship lazymention 1.0.0 - and I wrote an announcement blog post to accompany it! (In fact, I syndicated that blog post to IndieNews using lazymention, which felt pretty damn awesome.) I got some great feedback on it from the IndieWeb community, and my lobste.rs submission - which also got some great engagement - even made the front page, which was pretty unreal! I still have a lot more work to do with lazymention - in particular, it doesn't actually respect edits (so it'll resend Webmentions with every job submission) - but for now it works well. I'm super pleased with it, and have integrated it into my site deployment workflow. I even wrote a module so other people can do this, too!

Failed goals: ActivityPub in pump.io core, pump.io PR review

No progress on this front. I did start hacking on a telemetry server which will eventually be helpful for our ActivityPub rollout, but that did not in any way directly help fulfill these goals. I also released 5.1 stable, but that's pretty routine by this point.

Partially completed goal: two blog posts per week

I stuck with this goal all the way up until the final week, where I didn't write any. (Although I wrote about my GPG keys around the time I actually flew back to college.) The first week, I wrote about my thoughts on shell script and about lazymention; the second, I wrote about the pump.io 5.1 stable release and about talking to Pull Request reviewers if you think they're wrong.

Failed stretch goal: paper editing

I did absolutely no editing on the paper I intend to get published (which I originally wrote for a writing class). This was a stretch goal though, so that's totally fine.

Additional activity: steevie maintenance

After I finally found the cable I needed, I swapped out the cable that connects steevie's motherboard with the drives' SATA ports. This seemed to significantly improve disk performance, although there are still noticeable performance problems. I'm very, very happy to have finally gotten this done.

Additional activity: Tor relay migration from Amazon EC2 to DigitalOcean

After getting some advice on tor-relays, I finally sat down and looked into moving my relay away from Amazon Web Services. This is because AWS bills by usage, which for a Tor relay ends up being incredibly inefficient. It turned out to actually be way easier than I thought, which only served to make me mad that I hadn't done it sooner. In any case, I now save approximately $240/year AND I can push 1000GB/month as opposed to the 10GB/month I pushed before. In the words of the commit where I made this change: "this change made possible by the fact that I'm no longer getting billed up the wazoo by Amazon Web Services." Here's a Tor Metrics graph (captured today) that shows the jump:

Tor Metrics graph

Anyway, I'm super happy I can contribute more to the Tor network and save lots of money in the process. That being said I am pretty damn salty I didn't realize this in the four years I've been running a Tor relay.

Additional activity: offandonagain.org maintenance

After turning on NodeSecurity monitoring for offandonagain.org, I found out that the module that underlies it, node-serve-random, had some vulnerable dependencies. Not only did I fix this, I wrote a large test suite for the module and found (and fixed!) several bugs in the process. Writing a test suite also allowed me to turn on Greenkeeper for the module, which will be a huge help to me in keeping the module up-to-date.

Additional activity: Stratic work

First off, I released beta 7 of generator-stratic! Nothing major, just some polishing stuff. Stratic is getting very close to the point where I might want to start promoting it more aggressively, or declaring it stable, and (with a lot of super-helpful feedback from my family) I worked on something that's super important before we get there: a logo!

Here are two of my favorites so far:

Yellow background with a centered black file icon, an asteroid coming up from earth in the middle, and a pipe to the right. Yellow background with a centered black file icon, a rocket coming up from earth in the middle, and a pipe to the right.

These are based off the JS logo, in case you hadn't seen it before:

Black JS text in the bottom-right corner of a yellow background

Anyway, I have to post another iteration in the GitHub issue based on some feedback from Saul (who I had a lovely lunch with) - he thinks I should reverse it so the pipe is on the left, so it looks like the file is coming out of the pipe. But anyway you should comment there if you have feedback!

Additional activity: IndieWeb stuff

I attended Homebrew Website Club in San Francisco, which was incredible. I got to meet a bunch of new people, as well as say hi to Ryan and Tantek again, which was so nice - it's always just better to talk in real life. Tantek said (at least if I recall correctly) that my laptop was one of the best-stickered laptops he'd ever seen, which made me feel just unbelievably special. He also gave me a microformats sticker (and helped me figure out where to put it), which I had on my old laptop and had been really missing, as well as a Let's Encrypt sticker. The latter was so special I elected to put it on the inside of my laptop, which I reserve only for really special things (basically a Recurse Center refucktoring sticker and a sticker of Noah in this video, which he handed to me like a business card the first time we met). Anyway, every time I look at the Let's Encrypt sticker I just feel so happy. I love Let's Encrypt so damn much.

Homebrew Website Club was super inspiring, so when I got back to where I was staying (at my mom's boyfriend's house) I started implementing an IndieWeb-style social stream for strugee.net. It still needs some polishing but is actually pretty close to being done. Who knows when I'll have time to finish it, but it's getting there! I'm so freaking excited, honestly. Also, I added proper timestamp mf2 metadata to my site, as well as a visual display for post edits, and I expanded what type of Webmentions the display code recognizes too!

Executive summary

I resolved or completed 2 goals, partially completed 2 goals, failed 3 goals, and failed 1 stretch goal. Additionally I did significant work in 5 other areas. Out of the goals I set for myself, I completed 51% (Debian packaging work is ~2/5 complete; blog posts were written 2/3 of the time); not counting the stretch goal, I completed 61.2%. I'm pretty happy with what I got done during this period; however, while I was productive, the numbers show that I did a mediocre job sticking to my goals. In the future I should focus on making more realistic goals and then sticking to them (though not too much - it is a break, after all!).

Speaking of which, partway through break I felt like I was on the edge of burnout, which to me was a very clear sign that I was pushing myself way too hard during a time I should have been unwinding. Because of that I cut back a lot on what I was doing, which helped pretty dramatically. In fact, I think without that I wouldn't have been able to do some of the later stuff, like all the IndieWeb work. So that's another reason I have to find a way to balance sticking to goals and just relaxing (which doesn't necessarily mean not coding, just doing whatever coding I feel like in the moment) - I feel like I was pushing myself too hard to meet my goals (and then getting distracted and not meeting them), and that's what led to the feeling. Obviously there are different constraints for e.g. schoolwork; here I'm referring only to free time like breaks.

Spring semester goals

With that in mind, I want to set some very broad goals for spring semester. I may update this list as time goes on, but for now I have four overarching goals I want to accomplish (besides the usual day-to-day code maintenance stuff):

  • Finish editing the paper I wrote last semester on freedom-respecting software and intersectionality, and get it published
  • Make some measurable progress on my Push-based Two-factor Authentication IETF draft
  • Get access to the University of Rochester darkroom and start developing/printing photos again
  • Start pushing the University of Rochester library (and maybe the journalism department?) to start adopting Tor technologies

I'm excited to see how it goes!


Improving GPG security

Recently I've been putting some effort into improving the security of my GPG key setup, and I thought I would take a moment to document it since I'm really excited!

Nitrokey

First, and most importantly, I have recently acquired a Nitrokey Storage. After I initialized the internal storage keys (which took a really long time), I used gpg --edit-key to edit my local keyring. I selected my first subkey, since in my day-to-day keyring the master key's private component is stripped, and issued keytocard to move the subkey to the Nitrokey. Then I repeated the process for the other subkey.
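
For anyone wanting to do the same, the session looks roughly like this (KEYID is a placeholder - and note that keytocard moves the private subkey off your disk, so make sure you have a backup first):

    gpg --edit-key KEYID
    # at the gpg> prompt:
    #   key 1         select the first subkey (an asterisk marks the selection)
    #   keytocard     move it onto the Nitrokey, choosing the appropriate slot
    #   key 1         deselect it again
    #   key 2         select the second subkey
    #   keytocard
    #   save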

In the middle of this I did run into an annoying issue: GPG was giving me errors about not having a pinentry, even though the pinentry-curses and pinentry-gnome3 packages were clearly installed. I had been dealing with this issue pretty much since I set up the system, and I had been working around it by issuing echo "test" | gpg2 --pinentry-mode loopback --clearsign > /dev/null every time I wanted to perform a key operation. This worked because I was forcing GPG to not use the system pinentry program and instead just prompt directly on the local TTY; since I had put in the password, gpg-agent would then have the password cached for a while so I could do the key operation without GPG needing to prompt for a password (and thus without the pinentry error). However, this didn't seem to work for --edit-key, which I found supremely annoying.

However this turned out to be a good thing because it forced me to finally deal with the issue. I tried lots of things in an effort to figure out what was going on: I ran dpkg-reconfigure pinentry-gnome3, dpkg-reconfigure gnupg2, and I even manually ran /usr/bin/pinentry to make sure it was working. Turns out that, like many helpful protocols, the pinentry protocol lets you send HELP, and if you do so you'll get back a really nice list of possible commands. I played around with this and was able to get GNOME Shell to prompt me for a password, which was then echoed back to me in the terminal!
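
If you want to reproduce the party trick: pinentry talks the line-based Assuan protocol over stdin/stdout, so you can just run it and type commands at it. From memory, the session went something like this (the command names are standard Assuan/pinentry commands; the exact replies vary by pinentry flavor):

    /usr/bin/pinentry
    # then type commands one per line; replies come back as OK/D/ERR lines:
    #   HELP                             list the commands pinentry understands
    #   SETDESC Testing pinentry by hand
    #   GETPIN                           pops the prompt; the PIN comes back as a "D" line
    #   BYE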

Despite feeling cool because of that, I still had the pinentry problem. So finally I just started searching all the GPG manpages for mentions of "pinentry". I looked at gpg(1) first, which was unhelpful, and then I looked at gpgconf(1). That one was also mostly unhelpful, but the "SEE ALSO" section did make me think to look at gpg-agent(1), where I hit upon the solution. Turns out gpg-agent(1) has a note about pinentry programs right at the very top, in the "DESCRIPTION" section:

Please make sure that a proper pinentry program has been installed under the default filename (which is system dependent) or use the option pinentry-program to specify the full name of that program.

The mention of the pinentry-program option led me pretty immediately to my solution. I had originally copied my .gnupg directory from my old MacBook Pro, and apparently GPGTools - a Mac package that integrated GPG nicely with the environment (as well as providing a GUI I never used) - had added its own pinentry-program line to gpg-agent.conf. That line pointed at a path installed by GPGTools, which of course didn't exist on my new Linux system. As soon as I removed the line, --edit-key worked like a charm. (I've also just added gpg-agent.conf to my dotfiles so I notice this kind of thing in the future.)
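
In other words, the entire fix was one line in ~/.gnupg/gpg-agent.conf. Schematically (the GPGTools path shown is from memory and probably not exact):

    # ~/.gnupg/gpg-agent.conf
    # stale line copied over from macOS/GPGTools, pointing at a binary that
    # doesn't exist on Linux - delete it, or point it somewhere real:
    #pinentry-program /usr/local/MacGPG2/libexec/pinentry-mac.app/Contents/MacOS/pinentry-mac
    pinentry-program /usr/bin/pinentry

    # then restart the agent so it rereads the config:
    #   gpgconf --kill gpg-agent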

So far, I'm really enjoying my Nitrokey. It works really well and the app is pretty good, although the menu can be pretty glitchy sometimes. I've used the password manager for a couple high-security passwords (mostly bank passwords) which is great, and I've switched my two-factor authentication for GitHub from FreeOTP on my phone to the Nitrokey since GitHub is a super important account and I really want to make sure people can't push code as me.

There are only two problems I've had with the Nitrokey so far. The first is that it's slow. I notice a significant pause when I do any crypto operation, probably somewhere between half a second and a second. This hits me quite often since I sign all my Git commits; however I suspect I'll get used to this, and the security benefits are well worth the wait anyway. The other problem is that the Nitrokey doesn't support FIDO U2F authentication. This wasn't a surprise (I knew it wouldn't when I was shopping for models) but is nevertheless a problem I would like to deal with (which means getting a second device). The basic reason is just that U2F is newer than the Nitrokey I have. Other than those, though, I would highly recommend Nitrokey. The device is durable, too - I just carry it around in my pocket. (I briefly considered putting it on my keychain - for those of you who haven't met me in person, I have my keychain on an easily-detachable connector attached to a belt loop - but I decided against it because my keychain is kinda hefty.)

Keybase

In addition to the Nitrokey, I've also finally started using Keybase!

For a long time I wasn't too sure about Keybase. I felt like people should really be meeting in person and doing keysigning parties, and I didn't like that they encourage you to upload a private key to them, even if it's password-protected. Eventually I softened my position a little bit and got an invite from Christopher Sheats (back then you needed an invite) but I only made it halfway through the install process before getting distracted and forgetting about it for, you know, several years.

This time, though, I decided to finally get my act together. Do I still kinda think it's a bummer that Keybase encourages private key uploads? Sure. Are real-life keysignings better? Absolutely. But even though they're better, a lot of experience trying to do them and teach them has thoroughly convinced me that they're just too impractical. There are lots of people who might need to at least have some trust in my key - for example, to verify software signatures - and this is a pretty decent solution for them. Not to mention a novel and interesting solution. Plus, it's possible to use Keybase in such a way that you're not compromising security in any way, which is the way I do it.

So tl;dr: I'm on the Keybase bandwagon now. My profile is also now linked to from my GPG keys page.

Safe for master key

Finally, my dad's wife's safe has recently been moved into our house and is conveniently sitting next to my computer. Currently, I keep my master key in a file on a flash drive with an encrypted LUKS container. When I need to access my master key, this file gets unlocked with cryptsetup and then mounted somewhere on my laptop, and I pass the --homedir option to gpg to point it at the mount location. This is better than just keeping the master lying around day-to-day, but still pretty unideal as I'm exposing it to a potentially compromised, non-airgapped computer. Therefore I plan to get a Raspberry Pi (or something similar) and put it in the safe so I can use it as a fully trusted computer that's never been connected to the internet (and is therefore very hard to compromise). I'll keep the Pi in the safe to provide greater assurance that it hasn't been tampered with, as well as to provide a physical level of redundancy for the key material's security. This will hopefully happen Real Soon Now™ - I can't wait!
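
For reference, the current flash-drive workflow looks more or less like this (the container path, mount point, and key ID are all placeholders):

    # unlock the LUKS container file on the flash drive and mount it
    sudo cryptsetup luksOpen /media/usb/masterkey.luks masterkey
    sudo mount /dev/mapper/masterkey /mnt/masterkey

    # point GPG at the keyring that still contains the master key
    gpg --homedir /mnt/masterkey/gnupg --edit-key MYKEYID
    # ...do whatever needed the master key (sign keys, mint subkeys, etc.)...

    sudo umount /mnt/masterkey
    sudo cryptsetup luksClose masterkey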

