strugee.net

Blog


Stratic part one is done!

Whooooooooooo!

I am so, so, so thrilled to announce that the first part of Stratic is complete! And you can see the result right here on strugee.net, since this blog post was generated with Stratic!

tl;dr:

var gulp = require('gulp');
var rename = require('gulp-rename');
var markdown = require('gulp-markdown');
var parse = require('stratic-parse-header');
var straticToJson = require('stratic-post-to-json-data');
var jadeTemplate = require('gulp-jade-template');
var dateInPath = require('stratic-date-in-path');

gulp.task('posts', function() {
    return gulp.src('src/blog/*.md')
               .pipe(parse())
               .pipe(markdown())
               .pipe(dateInPath())
               .pipe(straticToJson())
               .pipe(jadeTemplate('src/blog/post.jade'))
               .pipe(rename({ extname: '.html' }))
               .pipe(gulp.dest('dist/blog'));
});

How gorgeous is that?? Let me explain how it works. (I'll assume the reader is familiar with Gulp and Node.js.)

So the gulp.src() call is pretty obvious. We just read all the blog posts into the stream. Note, however, that gulp.src() doesn't stream text, per se - it streams Vinyl file objects. This will become important later.
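
If you've never run into Vinyl before, here's a minimal sketch of what one of these objects looks like (the values are made up for illustration):

var Vinyl = require('vinyl');

var file = new Vinyl({
    cwd: '/',
    base: '/src/blog',
    path: '/src/blog/stratic-part-one.md',
    contents: new Buffer('post text goes here')
});

// Plugins can read and rewrite these properties as the file flows by
console.log(file.relative); // 'stratic-part-one.md'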

Now, the first piece of custom Stratic code that we use is the stratic-parse-header module. This module takes a Markdown file with a standard Stratic header (see my original announcement for details), parses the header, strips it out, then passes along the new, headerless Markdown. However, the Vinyl file object now has a few new properties from the parsing phase - specifically, file.title, file.author, file.time, and file.categories. This is why it's important that Vinyl is used: any Gulp plugin downstream from where parse() is run can use all of these values in whatever way it wants. (See the README for more details.)
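
For example, here's a hypothetical little plugin, written with through2 (as many Gulp plugins are), that just logs each post's title as it flows by - nothing like this ships with Stratic; it's only to show that the metadata is there for the taking:

var through = require('through2');

function logTitles() {
    return through.obj(function(file, enc, callback) {
        // These properties were attached by stratic-parse-header upstream
        console.log(file.title + ', by ' + file.author);
        callback(null, file);
    });
}

// Usage: gulp.src('src/blog/*.md').pipe(parse()).pipe(logTitles())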

Now our Vinyl file object is only the content of the post, and it has additional Stratic metadata attached to it. Awesome! The next thing that we do is render the Markdown, just using a standard Gulp plugin for this. Easy breezy. After that, we pipe to the stratic-date-in-path module, which adds the year and month to paths. For example, without stratic-date-in-path, this blog post would be at https://strugee.net/blog/stratic-part-one. However, since I do use stratic-date-in-path, the post lives at https://strugee.net/blog/2016/05/stratic-part-one instead. Nice, right? Eventually I'll write code to generate pretty indexes for each year and month - that's what Stratic part 2 is for.
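
Under the hood this is just a path rewrite on the Vinyl object. Here's a simplified sketch of the idea - note that I'm assuming file.time is something the Date constructor accepts, so check the module's README for what it really does:

var through = require('through2');
var path = require('path');

function sketchDateInPath() {
    return through.obj(function(file, enc, callback) {
        var date = new Date(file.time); // file.time comes from stratic-parse-header
        var month = ('0' + (date.getMonth() + 1)).slice(-2); // zero-padded
        file.path = path.join(path.dirname(file.path),
                              String(date.getFullYear()),
                              month,
                              path.basename(file.path));
        callback(null, file);
    });
}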

The next thing we do is pipe to the stratic-post-to-json-data module. This module is specifically designed to work with the gulp-jade-template module, which expects the file contents to be some JSON that will be given as data to a Jade template, whose rendered HTML becomes the new file contents. What sets up that JSON? You guessed it - stratic-post-to-json-data. That's all it does. It just creates an object that contains the metadata and the actual post text, runs it through JSON.stringify(), and sets the file contents equal to the result. Just how gulp-jade-template likes it.
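
So by the time the file reaches gulp-jade-template, its contents are just a JSON string, and the template can pull out whatever keys it wants. A sketch of what src/blog/post.jade might look like (the key names here are illustrative - see the stratic-post-to-json-data README for the real ones):

doctype html
html
  head
    title= title
  body
    h1= title
    //- 'text' stands in for whichever key holds the rendered post HTML
    != text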

And with that, we've successfully rendered a blog post. Whooooooooooo! I'm so pumped about this software. The call to rename() is just a little housekeeping, and then we write the whole thing back to disk with gulp.dest(). Awesome.

It's worth noting that the real beauty in this code isn't what the code actually does, but the extreme modularity of the whole thing. Unlike Jekyll or even Wintersmith, this isn't a giant, monolithic framework. It's all standard Node and Gulp. Note how (for example) we didn't need a custom plugin for Markdown - we just used the standard gulp-markdown. Don't like Markdown? No problem. Write something to extract post metadata from your preferred format, replace parse() with that and markdown() with a different renderer, and you're golden. Everything else - adding dates to paths, rendering the template, etc. - will continue to work exactly the same, because everything's decoupled from everything else. Each component can be trivially swapped out for something new and better, and the rest of the system keeps working. Gorgeous.
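
To make the swap concrete, here's what the same task might look like for, say, reStructuredText posts. parseRstHeader() and rst() are hypothetical plugins (you'd write the former and find or write the latter, making sure the parser attaches the same file.title/file.time/etc. properties); everything else is untouched:

gulp.task('posts', function() {
    return gulp.src('src/blog/*.rst')
               .pipe(parseRstHeader()) // hypothetical header parser
               .pipe(rst())            // hypothetical reST renderer
               .pipe(dateInPath())
               .pipe(straticToJson())
               .pipe(jadeTemplate('src/blog/post.jade'))
               .pipe(rename({ extname: '.html' }))
               .pipe(gulp.dest('dist/blog'));
});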

I've got to go now, but I'm not done blogging. I'll be back soon to talk about the work going on in pump.io, and I'll be back (much?) later to talk about Stratic part two (aka, pretty indexes).

Whooooooooooooooooooooo!


Re: Bitcoin, Magical Thinking, and Political Ideology

Editorial note: I published this almost three years ago on my Tumblr, which I keep semi-private and so don't want to link to. This is a verbatim repost from there, despite the fact that I disagree with some portions of this text nowadays.

Bitcoin, Magical Thinking, and Political Ideology

edwardspoonhands:

I get asked all the time what I think of BitCon…this guy says it better than I could.

+1 for linking to something by Alex Payne. I love him.

About the actual content, at one point Alex says this:

We’re told that Bitcoin “fixes serious problems with existing payment systems that depend on centralized services to verify the validity of transactions.” If by “fixes” you mean “ignores”, then yes: a Bitcoin transaction, like cash, comes with the certainty that a definite quantity of a store of value has changed hands, and little else. How this verifies any “validity” or cuts down on fraud I’m not sure; stolen Bitcoins are spent as easily as stolen cash, which is why theft of Bitcoins has been rampant.

I think the concern isn’t with fraud or validity. The problem that Bitcoin solves is with the centralized banking model. The fundamental idea behind Bitcoin is that it cannot be centrally controlled or taken down, like the internet. It is impossible to flip a switch and “turn off” the Bitcoin network. It is possible to do that with a centralized bank: in that case, “flipping the switch” ends up being “shut down the bank”. Or, “drive the bank out of business”.

If Bitcoin’s strength comes from decentralization, why pour millions into a single company? Ah, because Coinbase provides an “accessible interface to the Bitcoin protocol”, we’re told. We must centralize to decentralize, you see; such is the perverse logic of capital co-opting power. In order for Bitcoin to grow a thriving ecosystem, it apparently needs a US-based, VC-backed company that has “worked closely with banks and regulators to ensure that the service is safe and compliant”.

Maybe the problem isn’t with Bitcoin itself, but with what Coinbase is doing with the Bitcoin protocol. Now, to be clear, I think the Bitcoin to USD bridge aspect of Coinbase is OK. But I think that this paragraph is very, very true: it is perverse that we have centralized a decentralized protocol. Decentralized protocols tend to be very dangerous, IMHO, because of the tendency of users to just go with the most popular provider because it’s the easiest solution, and then effectively centralizing the network in the process.

Anyone remember XMPP? XMPP was supposed to be great. It was supposed to be the future of communications on the internet. But in practice, XMPP servers are unreliable. It's hard to find one that works well. I don't actually use XMPP a lot myself, but there are a lot of problems with connections, chat requests being undone (so you have to add a contact for a second time), etc. So what happened because of these problems (and the fact that everyone uses Gmail)? The most-used XMPP server was talk.google.com. At least, it was until Google replaced Google Talk with Google Hangouts, which uses a proprietary protocol with no XMPP bridge - and so the network got screwed over, because suddenly a lot of people upgraded to Hangouts and cut themselves off from the XMPP network. The sad truth, though, is that it almost doesn't matter. Take a survey of any random Google Talk user. I will bet you $100 that fewer than 1 out of 50 people you talk to will know that Google Talk is based on XMPP, much less what XMPP is. And there's almost zero chance that they understand why XMPP matters, or why federated protocols and networks matter.

I’m getting off track, though. So back to Bitcoin and Coinbase. I think what Coinbase is doing by hosting people’s Bitcoin wallets and transactions is fundamentally wrong, because I truly believe that it damages the Bitcoin ecosystem. The centralization in Coinbase is, IMHO, a major problem.

I wonder if this will be solved with a project like arkOS. I mean, maybe the solution that will ultimately happen is for people to spin up their own instances of a Coinbase-like Bitcoin wallet. I think it's pretty clear that people, in general, like cloud apps better than desktop apps. Access from any computer is a really nice feature to have. Maybe projects like arkOS will help decentralized protocols like Bitcoin remain decentralized in practice.

Or maybe Bitcoin will effectively die, just like XMPP did. I mean, sure, XMPP is still a network. But no one really uses it consciously. Almost everyone who uses XMPP nowadays does so accidentally, through a service that just happens to have an XMPP bridge. Google Talk was a prime example of this, but it’s dead. Now, I’ll bet money that the most-used XMPP provider is Facebook. Never knew that Facebook Chat had an XMPP bridge? That’s because Facebook doesn’t advertise it; the only time it’s mentioned is in the developer docs. The mainstream does not care about the XMPP protocol, because we centralized it. What the mainstream does care about is the services that we centralized it on: Google and Facebook. And that’s a real problem.


Programming as an art form

The other day I described programming to someone. I pointed out that it's actually pretty easy to teach yourself programming languages, especially since after a while you start to carry over concepts from other languages. But what surprised me most about my own explanation(!) was when I compared programming to art: it's the kind of thing where you can just try stuff out and see what works and what doesn't, with no real consequences.

Since I said that, I've actually been thinking about it quite a bit. Programming is traditionally described as an activity closely related to mathematics, and to a certain extent, this makes a lot of sense, because of the logical skills that go into programming. You have to be able to reason your way through situations in order to effectively debug a program, which means logically eliminating possible points of failure. This is where math skills become very important.

But coding isn't just about logic. At OSBRIDGE this year, I attended a session about the beauty of code - it's hard to describe to someone who doesn't live and breathe code, but we all know it when we see it. We as a community value elegance in code, clever algorithms, and thinking outside the box; as I said in my Just Do It slides, the mere existence of Ruby proves this. So when I described programming as being like art, part of where I was getting that was the analogy I actually made (being able to easily mess around), but part of it was coming from my appreciation of the beauty of code. Part of it was coming from my sense of the aesthetic properties of programming.

I want us, as a community, in both our regular coding and our educational outreach, to stop pretending that programming is so logical that it is math. Yes, there are elements of mathematics in coding. Lots of them, even. But treating programming as a branch of mathematics does the practice a disservice. So in addition to treating programming as a form of math, I want us to start treating programming as a form of art. There is such a thing as ugly code. The entire concept of refactoring would barely exist if that weren't true. So let's start truly appreciating the aesthetic beauty of code, and let's start teaching that. To be honest, I'm not sure how you would teach it. But it couldn't hurt to try.

But even if we can and should treat it as both of those things, that doesn't mean that we should make that the be-all and end-all of how we describe programming. I truly believe that programming is not a branch of mathematics. And it's not an art form, either. Programming is neither of those things and both of those things; it is something entirely new, and we should treat it as such. If this isn't true, why do people swear by certain software? Why do people (including myself) aggressively sticker their laptops to showcase what software they use? And if this isn't true, how is it possible that people love their code?


Revisiting my Tor relay

(Okay, so I miserably failed my blog-every-day thing. Shut up. Maybe next time I'll try every week or something... anyway.)

A couple of days ago I logged into the Tor relay I run to show someone the ARM graphs. I had a fair amount of traffic, so the graphs were fairly impressive, but I'm also in the habit of running apt-get update; apt-get upgrade every time I log into a server, so I did that too. To my surprise, I got a message telling me that there was a dependency problem with my kernel! So like the great sysadmin I am, I looked at such a fundamental system problem, shrugged my shoulders, and said, "oh, I should probably fix that". And then logged out.

Well, I did end up fixing it today. And boy, was it an adventure. My first step was to ignore the APT problems and edit my torrc, to reflect a) the fact that I'm not eligible for the AWS Free Tier anymore (so I needed to throttle bandwidth), b) my new email, and c) my new GPG key.
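
The relevant lines look roughly like this (illustrative values and placeholders, not my actual config):

# Throttle, since the AWS Free Tier days are over
RelayBandwidthRate 250 KBytes
RelayBandwidthBurst 500 KBytes
# New contact info; the address and fingerprint here are placeholders
ContactInfo Example Admin <tor-admin AT example DOT com> - GPG 0x0123456789ABCDEF

With that done, I knew that I could easily have the system fix the dependency problems by doing a simple apt-get install -f. Easy!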

Well, no. That tried to install some Linux kernel headers, which seemed all well and good, until I got this:

Unpacking linux-headers-3.2.0-90 (from .../linux-headers-3.2.0-90_3.2.0-90.128_all.deb) ...
dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-90_3.2.0-90.128_all.deb (--unpack):
unable to create `/usr/src/linux-headers-3.2.0-90/arch/arm/plat-pxa/include/plat/dma.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-90/arch/arm/plat-pxa/include/plat/dma.h'): No space left on device
No apport report written because the error message indicates a disk full error
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)

Um, what? How am I out of free space? Okay, whatever. I knew that there were probably a lot of packages cached in /var/cache/apt/, including old, vulnerable packages that had been replaced by the unattended upgrades system. I did an ls, and found only about five .deb files - something must have been automatically cleaning that directory. I was getting a little worried now, but I nuked the files anyway and reran apt-get install -f. Same thing. Well, okay, maybe I didn't get rid of enough stuff? How much did I need?

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      4.0G  2.2G  1.6G  59% /

At this point I'm in full-on "something-is-seriously-wrong-and-I-need-to-recover" mode. How was it possible that I had only used 59% of the filesystem, but dpkg was saying my disk was full? A little internet searching later, I found the culprit:

$ df -i
Filesystem     Inodes  IUsed IFree IUse% Mounted on
/dev/xvda1     262144 257479  4665   99% /
udev            74758    377 74381    1% /dev
tmpfs           76179    259 75920    1% /run
none            76179      3 76176    1% /run/lock
none            76179      1 76178    1% /run/shm

I hadn't run out of disk space. But I had run out of inodes. (Isn't this supposed to happen to other people?)
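
For future reference (mostly my own): one way to find out where all the inodes went is to count files per directory on the affected filesystem, something like:

$ find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head

That prints the ten directories containing the most files, which are usually your prime suspects.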

I tried removing some stuff via APT, but it refused to do anything due to the dependency problems. My next thought was that there were probably a bunch of old processes running that were essentially holding a bunch of inodes hostage. I couldn't install debian-goodies, so I couldn't use checkrestart, but I improvised by looping over all running services and restarting them.
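
From memory, it was something along these lines (reconstructed after the fact, so treat it as a sketch rather than a transcript):

$ for svc in $(service --status-all 2>&1 | grep '\[ + \]' | awk '{print $4}'); do service "$svc" restart; done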

Still nothing.

I'm not proud of what I did next. But I was backed into a corner, so I did something only dpkg is supposed to do. I ran rm -r on a couple directories in /usr/src. And boy, it was like magic. Suddenly apt-get install -f worked like a charm. It started to upgrade a couple packages, rebuilding some GRUB configuration files... and then came to a screeching halt.

Setting up linux-headers-3.2.0-90-virtual (3.2.0-90.128) ...
dpkg: dependency problems prevent configuration of linux-headers-virtual:
linux-headers-virtual depends on linux-headers-3.2.0-68-virtual; however:
Package linux-headers-3.2.0-68-virtual is not installed.
dpkg: error processing linux-headers-virtual (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
dpkg: dependency problems prevent configuration of linux-virtual:
linux-virtual depends on linux-headers-virtual (= 3.2.0.68.81); however:
Package linux-headers-virtual is not configured yet.
dpkg: error processing linux-virtual (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
linux-headers-virtual
linux-virtual
E: Sub-process /usr/bin/dpkg returned an error code (1)

Are you kidding?? More errors?

Turns out that APT is essentially the only thing on this system that makes large changes to the filesystem. So the probability that APT would be the program to trigger the inode limit was pretty high. It started an upgrade run, then got interrupted in the middle by the "no space left on device" error, leaving the dependency tree in a state that we in the tech community call "100% totally screwed". (This is the technical term.)

I'll spare you the gory details, but I ended up chasing down packages in the Ubuntu archive, running ubuntu-support-status (because I wondered whether the packages I was looking for were actually missing from the archive because they were unsupported), using aptitude instead of apt-get (because aptitude's dependency resolver tends to be better), etc. The solution finally turned out to be running dpkg --install on the exact right .debs in the exact right order, which at last satisfied APT's dependency woes, allowed apt-get install -f to fix the configuration problems, and let the hundreds of packages which had been waiting for an upgrade finally install. Whew!

Anyway, I need to upgrade the version of Ubuntu the system is on (currently it's 12.04.5 LTS), because Tor is out of date (among other reasons). However, since that will involve taking the system down for a reboot, I wanted to memorialize the following:

$ uptime
00:01:47 up 392 days, 17:15,  1 user,  load average: 0.05, 0.04, 0.05

Holy moly. This system is bordering on 400 days of uptime. That's over a year of continuous run time! Astonishing.

Wish me luck with this upgrade...

tl;dr: inode limits are killer.


2 lbs of stickers

First things first

This year I am again attempting to blog every day of summer. I've done this sporadically in the past (and never very well), but I'm trying again this year. Maybe this time will be different. (This is actually my fourth day of summer, but I wanted some time off. Whatever.)

2 lbs of stickers??

So recently (i.e. within the past year) I've become more active within the local Seattle community. I'm now on the organizing committee for the Seattle chapter of TA3M (Techno Activism, 3rd Mondays), and am actually running another CryptoParty this coming Monday. Now, within the wider Seattle activism/tech scene, there's this bag of stickers. It used to be owned by a friend of mine named Elcaset, who apparently just took home some leftovers one day and then kept receiving more and more, because people associated him with stickers. Sadly, Elcaset had to move away from the Seattle area. And since we wanted to keep the stickers, someone needed to keep the bag. And so I ended up with a giant, giant bag of stickers.

Me being me, I decided to sort them and separate them, so that they were easier to lay out on a table. Then I separated them by category. And then, just for fun (and since there were so many of them), I decided to weigh them.

The heavyweight champ by far is the Free Software Foundation bag, with a whopping 8.25 oz of stickers. Yeah, when I said "heavy" "weight", I meant it. Next we have local usergroups (FreeGeek Seattle, TA3M Seattle, Seattle Privacy, and one other whose logo I can tell is local but don't recognize) tied with distributions, both at 5.225 oz. Next up are advocacy groups, namely the EFF and the ACLU of Washington. I'll admit to cheating on this one, since the EFF has some assorted trinkets that aren't actually stickers, but whatever. This bag was 4.5 oz. After advocacy groups is the miscellaneous bag, which has software projects not associated with anything else, activism films, conferences (aka LinuxFest Northwest) and political parties (aka the Pirate Party). As we approach the bottom, the security/anonymity bag had 2.75 oz, followed closely by the DuckDuckGo bag at 2.725 oz. Finally, the LibreOffice/document freedom bag had 2 oz of stickers.

All of which is to say: I now own a lot of stickers. Specifically, I own an astonishing 2 lbs, 1.625 oz of stickers. Seriously. 2 pounds of what is essentially paper. That's almost unbelievable. (I should point out that I got that number by weighing all of them, not by adding up the above figures, so there may be a slight difference.)

Anyway. I guess I'll have plenty of material to hand out at my CryptoParty.

Other things

Other things that have happened recently-ish: I'm going to be in Advanced Photography next year, which I'm super excited about! Also, I joined the SAAS dance program, which should be really fun. At first it was a bit inconvenient, because even though I really wanted to be in dance, with Advanced Photography and Intermediate Dance, I didn't have room in the fall for a Study Hall (which I'm going to need because of college apps). So I ended up deciding to only do robotics on the weekends. Not only that, but the dance show is inevitably the same weekend as LFNW, so I was going to miss Game Night and the Saturday sessions and the afterparty. But things worked out, because apparently they had so many people auditioning this year that they split Intermediate Dance into two sections, and I'm in the one that has class in the winter and spring. That means that not only can I take a study hall in the fall, but our dance show will be at a different time of year, so I can go to LinuxFest Northwest, too. Whoo!

