strugee.net

Posts categorized as "security"

pump.io DOMPurify security fixes available

Recently DOMPurify, the cross-site scripting sanitization library that pump.io uses, published several security advisories for mXSS vulnerabilities affecting browsers based on the Blink rendering engine - you can find the latest one, for example, here. As we've done in the past, the pump.io project is publishing security releases to ensure that everyone is using the latest version of DOMPurify. Per our security support policy, we are providing patches for the current stable release and the previous stable release:

  1. pump.io 5.1.2 has been updated to pump.io 5.1.3
  2. pump.io 5.0.2 has been updated to pump.io 5.0.3

As these are security releases we encourage administrators to upgrade as soon as possible. Both 5.1.3 and 5.0.3 are drop-in replacements for their predecessors. If you have pump.io 5.1 installed via npm - our recommended configuration - you can upgrade with:

$ npm install -g pump.io@5

If you're on pump.io 5.0, we recommend that you also run the above command to upgrade to 5.1 - it's a drop-in replacement for 5.0. However, if you want to stick with 5.0 for the time being, you can install a patched version with:

$ npm install -g pump.io@5.0

Note that if you have a source-based install, the above commands won't work and you will need to upgrade however you usually do - this will depend on how exactly you have pump.io set up.
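
For a git-based install, the upgrade will typically look something like the following - the tag name here is just a guess on my part, so check the repository for the actual release tags:

$ cd /path/to/pump.io      # wherever your checkout lives
$ git fetch origin
$ git checkout v5.1.3      # assumed tag name for the 5.1.3 release
$ npm install              # reinstall dependencies, including the patched DOMPurify

Then restart the pump.io daemon however your init system normally does that.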

If you need help, or if you have questions about these security releases, get in touch with the community.


New temporary signing keys

So unfortunately, I recently lost my Nitrokey, which I liked very much. Besides being fairly upsetting, this leaves me in the sticky situation of not being able to sign code - while I have a master key (from which I can generate new subkeys), I'm currently at college and my copy of that key is sitting 3,000 miles away at home.

To get around this situation, I've generated a temporary signing keypair. This keypair is set to expire after 3 months (and I don't intend to renew it). When I regain access to my master keypair, I will revoke the old subkeys, generate new subkeys (it was probably time anyway), and revoke the temporary keypair.
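
For the record, modern GnuPG can generate a keypair with a three-month expiry in a single command - something like this (the user ID and algorithm are illustrative, not necessarily what I used):

$ gpg --quick-generate-key "Example Name <user@example.com>" rsa4096 sign 3m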

The new fingerprint is D825FD54D9B940FF0FFFB31AA4FDB7BE12F63EC3. I have uploaded the key to GitHub as well as the Ubuntu keyserver and keys.gnupg.net (just as my original was). The key is also incorporated into my Keybase account so that you can bootstrap trust in it, if you want to verify software signatures or whatever.
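
If you want to grab the key yourself, fetching it by fingerprint from the Ubuntu keyserver should do the trick:

$ gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys D825FD54D9B940FF0FFFB31AA4FDB7BE12F63EC3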


Improving GPG security

Recently I've been putting some effort into improving the security of my GPG key setup, and I thought I would take a moment to document it since I'm really excited!

Nitrokey

First, and most importantly, I have recently acquired a Nitrokey Storage. After I initialized the internal storage keys (which took a really long time), I used gpg --edit-key to edit my local keyring. I selected my first subkey, since in my day-to-day keyring the master key's private component is stripped, and issued keytocard to move the subkey to the Nitrokey. Then I repeated the process for the other subkey.
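
Roughly, the session looks like this (the key ID is a placeholder, and keytocard will ask which card slot to use):

$ gpg --edit-key 0xDEADBEEF   # placeholder key ID
gpg> key 1                    # select the first subkey
gpg> keytocard                # move it to the Nitrokey
gpg> key 1                    # deselect it again
gpg> key 2                    # select the second subkey
gpg> keytocard
gpg> save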

In the middle of this I did run into an annoying issue: GPG was giving me errors about not having a pinentry, even though the pinentry-curses and pinentry-gnome3 packages were clearly installed. I had been dealing with this issue pretty much since I set up the system, and I had been working around it by issuing echo "test" | gpg2 --pinentry-mode loopback --clearsign > /dev/null every time I wanted to perform a key operation. This worked because I was forcing GPG to not use the system pinentry program and instead just prompt directly on the local TTY; since I had put in the password, gpg-agent would then have the password cached for a while so I could do the key operation without GPG needing to prompt for a password (and thus without the pinentry error). However, this didn't seem to work for --edit-key, which I found supremely annoying.

However, this turned out to be a good thing, because it forced me to finally deal with the issue. I tried lots of things in an effort to figure out what was going on: I ran dpkg-reconfigure pinentry-gnome3 and dpkg-reconfigure gnupg2, and I even manually ran /usr/bin/pinentry to make sure it was working. It turns out that, like many helpful protocols, the pinentry protocol lets you send HELP, and if you do so you'll get back a really nice list of possible commands. I played around with this and was able to get GNOME Shell to prompt me for a password, which was then echoed back to me in the terminal!

Despite feeling cool because of that, I still had the pinentry problem. So finally I just started searching all the GPG manpages for mentions of "pinentry". I looked at gpg(1) first, which was unhelpful, and then I looked at gpgconf(1). That one was also mostly unhelpful, but the "SEE ALSO" section did make me think to look at gpg-agent(1), where I hit upon the solution. Turns out gpg-agent(1) has a note about pinentry programs right at the very top, in the "DESCRIPTION" section:

Please make sure that a proper pinentry program has been installed under the default filename (which is system dependent) or use the option pinentry-program to specify the full name of that program.

The mention of the pinentry-program option led me pretty immediately to my solution. I had originally copied my .gnupg directory from my old MacBook Pro, and apparently GPGTools - a Mac package that integrated GPG nicely with the environment (as well as providing a GUI I never used) - had added its own pinentry-program line to gpg-agent.conf. That line pointed at a path installed by GPGTools, which of course didn't exist on my new Linux system. As soon as I removed the line, --edit-key worked like a charm. (I've also just added gpg-agent.conf to my dotfiles so I notice this kind of thing in the future.)
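
If you run into the same thing, checking for (and removing) a stale pinentry-program line is quick - these commands assume the default GnuPG home directory:

$ grep pinentry-program ~/.gnupg/gpg-agent.conf    # find the offending line
$ sed -i '/^pinentry-program/d' ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent                         # restart the agent so it re-reads its config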

So far, I'm really enjoying my Nitrokey. It works really well and the app is pretty good, although the menu can be pretty glitchy sometimes. I've used the password manager for a couple high-security passwords (mostly bank passwords) which is great, and I've switched my two-factor authentication for GitHub from FreeOTP on my phone to the Nitrokey since GitHub is a super important account and I really want to make sure people can't push code as me.

There are only two problems I've had with the Nitrokey so far. The first is that it's slow. I notice a significant pause when I do any crypto operation, probably somewhere between half a second and a second. This hits me quite often since I sign all my Git commits; however, I suspect I'll get used to it, and the security benefits are well worth the wait anyway. The other problem is that the Nitrokey doesn't support FIDO U2F authentication. This wasn't a surprise (I knew it didn't when I was shopping for models) but it's nevertheless a problem I would like to deal with (which means getting a second device). The basic reason is just that U2F is newer than the Nitrokey I have. Other than those, though, I would highly recommend the Nitrokey. The device is durable, too - I just carry it around in my pocket. (I briefly considered putting it on my keychain - for those of you who haven't met me in person, I have my keychain on an easily-detachable connector attached to a belt loop - but I decided against it because my keychain is kinda hefty.)
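
For anyone who wants the same commit-signing setup, it's just a couple of git config settings (the key ID below is a placeholder):

$ git config --global user.signingkey 0xDEADBEEF   # placeholder key ID
$ git config --global commit.gpgsign true          # sign every commit by default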

Keybase

In addition to the Nitrokey, I've also finally started using Keybase!

For a long time I wasn't too sure about Keybase. I felt like people should really be meeting in person and doing keysigning parties, and I didn't like that they encourage you to upload a private key to them, even if it's password-protected. Eventually I softened my position a little bit and got an invite from Christopher Sheats (back then you needed an invite) but I only made it halfway through the install process before getting distracted and forgetting about it for, you know, several years.

This time, though, I decided to finally get my act together. Do I still kinda think it's a bummer that Keybase encourages private key uploads? Sure. Are real-life keysignings better? Absolutely. But even though they're better, a lot of experience trying to do them and teach them has thoroughly convinced me that they're just too impractical. There are lots of people who might need to at least have some trust in my key - for example, to verify software signatures - and this is a pretty decent solution for them. Not to mention a novel and interesting solution. Plus, it's possible to use Keybase in such a way that you're not compromising security in any way, which is the way I do it.
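
Concretely, the way I avoid the private key upload is to only prove ownership of a key I already hold locally - with the command-line client that's roughly keybase pgp select, which (as I understand it) publishes the public key and does the signing with your local GPG rather than shipping the private key anywhere:

$ keybase pgp select    # choose an existing key from the local GnuPG keyring to attach to the account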

So tl;dr: I'm on the Keybase bandwagon now. My profile is also now linked to from my GPG keys page.

Safe for master key

Finally, my dad's wife's safe has recently been moved into our house and is conveniently sitting next to my computer. Currently, I keep my master key in a LUKS-encrypted container file on a flash drive. When I need to access the master key, the container gets unlocked with cryptsetup and mounted somewhere on my laptop, and I pass the --homedir option to gpg to point it at the mount location. This is better than just keeping the master key lying around day-to-day, but it's still far from ideal, since I'm exposing it to a potentially compromised, non-airgapped computer. Therefore I plan to get a Raspberry Pi (or something similar) and put it in the safe so I can use it as a fully trusted computer that's never been connected to the internet (and is therefore very hard to compromise). I'll keep the Pi in the safe to provide greater assurance that it hasn't been tampered with, as well as to provide a physical level of redundancy for the key material's security. This will hopefully happen Real Soon Now™ - I can't wait!
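
For the curious, the day-to-day dance looks roughly like this (the device names, paths, and key ID are made up for the example):

$ sudo cryptsetup luksOpen /media/usb/master-key.img masterkey   # unlock the LUKS container file
$ sudo mount /dev/mapper/masterkey /mnt/masterkey
$ gpg --homedir /mnt/masterkey/gnupg --edit-key 0xDEADBEEF       # operate on the full keyring, master key included
$ sudo umount /mnt/masterkey && sudo cryptsetup luksClose masterkey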


filter-other-days: Artificial Ignorance-compatible logfile date filtering

I've just published version 1.0 of my latest project, filter-other-days - a shell script to filter logfiles for today's date in an Artificial Ignorance-compatible way.

If you haven't heard of Artificial Ignorance, it's something you should look into because it's pretty awesome. Here's the tl;dr: it doesn't make sense to look for all the "interesting" things in logfiles, because it's not actually possible to enumerate all the failure conditions of a system. So instead, we throw away the entries that we're sure are just routine. Since we've gotten rid of all the uninteresting entries, whatever is left has to be interesting.

I find this pretty compelling, and decided to start implementing it on my Tor relay. I quickly realized that my ideal workflow would be to have cron send me an email with a daily report of interesting log entries. However, this presented a problem: how do I get just today's log entries? I wanted to handle all logfiles at once instead of receiving different reports for different logs, so I had to be able to parse every logfile in the same way. My relay runs on FreeBSD, so the logs are unstructured text files; even worse, several daemons (like Tor itself) write timestamps in a different format. That makes parsing all logfiles at once super difficult: I couldn't just trivially grep for today's date, since that would end up dropping legitimate entries from logfiles that format their timestamps differently.
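
To make that concrete, the workflow I was aiming for is a crontab entry along these lines - the log paths and the ignore-patterns file are illustrative, and it assumes filter-other-days reads log lines on stdin (cron then mails whatever survives):

0 6 * * * cat /var/log/messages /var/log/maillog | filter-other-days | grep -v -f /usr/local/etc/ignore-patterns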

I briefly considered trying to write a regex to match all sorts of different timestamp formats, but quickly rejected this idea as too fragile. There are a lot of moving parts in a modern operating system - what if, for example, a daemon changed its defaults about how to format timestamps? Or, more likely, what if I simply missed a particular format present in my logs? Then I'd be accidentally throwing away an entire logfile. To solve this problem, I decided to apply the same idea behind Artificial Ignorance - if I couldn't reliably, 100% match log entries from today's date, I could do the next best thing and attempt to discard all entries from other dates. In this case the worst that could happen is me receiving irrelevant information, and I'd be basically guaranteed to never miss a legitimate entry from today.

filter-other-days is a working implementation of this design. Originally I put it with the other random scripts I keep with my dotfiles, but it quickly became obvious that it was useful as a standalone project. So I extracted it into its own repository, which now lives on GitHub. From there I continued to improve the script while adding a test suite and writing extensive documentation (including a Unix manpage - I always feel like a wizardly hacker when writing those things). This took, by my estimation, somewhere between 10 and 15 hours - partly because this is actually a shockingly non-trivial problem, but mostly because regexes are hard.

But today I finally finished! So I'm super excited to announce that version 1.0 of filter-other-days is now available. You can either clone it from GitHub or download a tarball (and the accompanying signature, if you want). It works pretty well already, but I have some ideas for future directions the project could go:

  1. Logic allowing you to actually specify the date you want to filter for, instead of assuming it's today (though you can actually already get this behavior using faketime - that's what the test suite does; see the sketch after this list)
  2. Removal of the dependency on GNU seq - this is, to my knowledge, the only non-POSIX requirement of filter-other-days
  3. Debian package, maybe?
  4. More log formats (please report bugs if you have formats filter-other-days doesn't recognize - which you probably do!)
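
For the curious, the faketime trick from item 1 looks something like this (the date and log path are illustrative, and again it assumes input on stdin):

$ faketime '2017-11-05 00:00:00' filter-other-days < /var/log/messages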

If you find this project useful, let me know! I'd love to hear about how people are using it. Or if it breaks (or doesn't fill your usecases), please report bugs or send patches - I love those, too! Either way, may the logs be with you!


pump.io denial-of-service security fixes now available

Recently some denial-of-service vulnerabilities were discovered in various modules that pump.io indirectly depends on. I've bumped Express and send to pull in patched versions, and I've updated our fork of connect-auth to require a patched version of Connect, too. The remaining vulnerabilities I've confirmed don't affect us.

Because of these version bumps, I've just put out security releases, which all administrators are encouraged to upgrade to as soon as possible. A semver-major release (5.0.0) came out within the past 6 months, so per our security support policy there are three new releases:

  1. pump.io 5.0.2 replaces 5.0.0 and is available now on npm
  2. pump.io 4.1.3 replaces 4.1.2 and is available now on npm
  3. pump.io 4.0.2 will replace 4.0.1 and is currently undergoing automated testing (it'll be on npm shortly). Update: pump.io 4.0.2 is now on npm

As these are security releases we encourage admins to upgrade as soon as possible. If you're on 5.0.0 installed via npm - our recommended configuration - you can upgrade by issuing:

$ npm install -g pump.io@5

If you're on 4.1.2, you can upgrade by issuing:

$ npm install -g pump.io@4

And when 4.0.2 is out, if you're on 4.0.1 you can upgrade by issuing:

$ npm install -g pump.io@4.0

Note though that 4.1.3 is a drop-in replacement for 4.0.2, so you should consider just upgrading to that instead. Or even better, upgrade to 5.x!
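
If you're not sure which release you're currently running, npm can tell you (assuming a global npm-based install):

$ npm ls -g pump.io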

If you don't have an npm-based install, you'll have to upgrade however you normally do. How to do this will depend on your particular setup.

As always, if you need help, you should get in touch with the community. I'd also like to specifically thank Jason Self, who generously deployed a 24-hour private beta of these fixes on Datamost. One of the version bumps was ever-so-slightly risky, and being able to test things in production before rolling out patches for the entire network was invaluable. I wouldn't be as confident as I am in these releases without his help. So thanks, Jason - I really appreciate it!

