strugee.net

Posts categorized as "musings"

'Free software' phrasing considered harmful

For a while now I've been avoiding using the term "free software."

Why? It's just plain confusing to people. I know Richard Stallman will tell you that it means freedom, not gratis. It doesn't matter. It's still ambiguous and needlessly conflates two different concepts.

Instead of "free software," I propose "freedom-respecting software" as a replacement. This phrasing is not only unambiguous, it also does a much more effective job of communicating the general meaning of the term without further explanation. (Of course you'll probably still need to explain it, but you'll spend a lot less time doing so.) The one problem with this phrasing is that it's longer, but even that doesn't hold water: because of the aforementioned problems with "free software," people don't actually say "free software" all that much; instead, they say "free (as in freedom) software," which is unambiguous but awkward on multiple levels. Not only is it a less eloquent way of describing the concept, but grammatically it's terrible, since it wedges a parenthetical qualifier between an adjective and a noun, which sounds stilted and unnatural. Seriously, say both of them out loud. "Freedom-respecting software" and "free (as in freedom) software" - which one sounds like less of a mouthful?

Hence, I think "free software" as a term should be considered harmful, and replaced with "freedom-respecting software" instead.

Edit 0:58 10/10/16:

Another advantage of "freedom-respecting software" is that it's still closely related to the old term, allowing for a much easier pivot. Consider "libre software" which AFAICT had the same goals as this proposal but never really took off - in part, I think, because it sounds very different from an already-established term. (Another way of putting this is that it's conceptually an improvement to an existing term instead of being something brand-new, and therefore all existing associations will carry over with far more ease.)

I'd also point out that the problem of ambiguity is more serious than I've said above. First of all, generally speaking I'm suspicious of any proposal or argument that begins or ends with "we just need to educate people more." Education is an important part of the freedom-respecting software movement - remember, that movement is by and for the people - but I think that argument is too frequently just an excuse for a poor initial design. (Security, I'm looking at you.) Second, the ambiguity also muddles our search results. When people search for "free software" they do get our stuff (a fact that pleasantly surprised me!), but they also get loads and loads of pages for gratis Windows crapware. That's not ideal, and it's unlikely to ever change. Even if people were able to readily grasp the distinction between freedom and gratis that we're pitching, we will never, ever have enough influence on the language people use to get them to stop using "free" to mean gratis - which means that Google will continue showing gratis crapware as "free software."

Finally, as pointed out by some excellent PRISM Break contributors, me writing this blog post and then talking about it occasionally is a far dumber idea than directly contacting the FSF, which I now intend to do Real Soon Now™.


GitHub's 'squash and merge' default considered harmful

Recently GitHub launched two new ways to merge Pull Requests: "rebase and merge" and "squash and merge". That means that projects now have three ways to merge PRs:

  • Merge - creates a merge commit merging the branch (even if it's fast-forwardable)
  • Rebase and merge - rebases on top of the target branch and fast-forwards
  • Squash and merge - rebases on top of the target branch, squashes all commits into a single commit, and fast-forwards
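For the curious, here's roughly how each button maps onto plain git operations. This is a sketch: the branch name, commit messages, and exact flag choices are my own, not anything GitHub documents.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "base"
main=$(git rev-parse --abbrev-ref HEAD)

# A feature branch with two commits.
git checkout -q -b feature
echo a > a.txt && git add a.txt && git commit -q -m "feature: step 1"
echo b > b.txt && git add b.txt && git commit -q -m "feature: step 2"
git checkout -q "$main"

# "Merge": always records a merge commit, even though this branch is
# fast-forwardable.
git merge -q --no-ff -m "Merge branch 'feature'" feature
git reset -q --hard HEAD~1    # undo, so we can demo the next mode

# "Rebase and merge" would be:
#   git rebase "$main" feature && git checkout "$main" && git merge --ff-only feature
# (a no-op rebase here, since feature is already based on "$main").

# "Squash and merge": collapse the whole branch into one new commit,
# then fast-forward.
git merge -q --squash feature
git commit -q -m "feature (squashed)"
git log --oneline    # only two commits survive: base + the squashed one
```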

Now, the default is "squash and merge", because apparently that's what people find to be the "prettiest" history. This bothers me for one simple reason: a squash and merge default means a history destruction default[1].

The whole point of commit squashing is to destroy history. Sometimes that's fine - for example, I might squash a typo fix into an earlier commit, because who cares that I typo'd vare instead of var? However, anything less trivial than typo fixes is valuable information about how the project evolved. Even if all of the commits just add stuff, and don't change what's happened earlier in the branch (i.e. even if the direction the implementation's taking doesn't change part-way through) the history contained in the branch is still valuable, because the branch's shortlog will give you a nice overview of exactly what changes happened in the branch. Now, you could of course make the argument that commit squashing shows that same information because by default, in both Git and GitHub, the commit messages being squashed are included in the suggested final commit message. I prefer keeping the individual commits, but that's a valid argument.

However, that doesn't change the fact that in cases where the implementation direction does change part-way through, GitHub's default is actively promoting the irrevocable[2] destruction of valuable history. Lots and lots of people use the GitHub Merge Button, especially those who are new to Git. This default is causing those people to unwittingly destroy valuable information. Sure, it looks nicer in the commit log, and I totally advocate using squash and merge when it makes sense. But those cases are few and far between - basically just small changes, plus a couple of typo fixes or additions. And besides, I think it's far better to have a default of an ugly history than a default of an incomplete history. The former may not be the prettiest to look at, but the latter has the potential to actively stop people from doing their jobs[3].

For those curious, here's when I use each mode of the GitHub Merge Button:

  • Merge - when I have a long-running branch that made significant changes and/or diverged significantly from the target branch. In this case, it's valuable to clearly distinguish what's part of the project and what isn't. Rebase and merge is no good because then it's not clear in the history when the branch started and ended. This is particularly evident when looking at git log --graph.
  • Rebase and merge - what I use most of the time. I use this when there were a couple of small commits, each interesting enough to preserve individually, but the overall change wasn't so huge that it needs to be clearly distinguished in the history. This provides a nice, pretty commit graph.
  • Squash and merge - I rarely use this. When I do, it's because all of the commits on some branch are so trivial, they really don't matter. Mostly this means that the overall change is tiny, and the only additional commits that are added are small additions to the first.

So there you have it. How I use GitHub's Merge Button, and why I think the "squash and merge" default should be considered harmful.

Footnotes:

[1]: I'd like to point out that this is only a problem in Git. Mercurial has (or will have shortly) Changeset Evolution, which keeps track of how changesets evolve over time. That is, when you rewrite history, you don't lose any information.

[2]: I'm sure some of you are about to excitedly tell me about a fantastic tool called the reflog, and I really should read Pro Git because it's a fantastic book and has an entire chapter on data recovery. I know. The reflog is not the right answer for this; not only is it local to (likely) a single developer's machine, but it only stretches back a couple months and only works if the old, dangling commits aren't garbage-collected. By the time someone might be interested in looking at the history that was lost, it's probably far, far too late.

[3]: Another rarely-encountered but very serious problem with both "squash and merge" and "rebase and merge" is when people merge upstream changes in a PR. This is a perfectly legitimate workflow - PRs are great for discussing changes, etc. (although a lot of people think they're the only way to merge things, so they open PRs and then immediately merge them - this, IMHO, is very much not a legitimate workflow) - but if you do anything but merge (which includes both GitHub's "Merge" option and fast-forwarding locally on the CLI), you may have a Very Bad Time the next time you go to merge upstream changes into your fork. Why? Because in Git's view, the changes you merged the first time haven't actually been merged. After all, the original commit SHAs are nowhere to be found in the upstream history, since when you rewrote history you changed those IDs! Git has no way of knowing that your rewritten commits and the supposedly "unmerged" commits are basically equivalent. (Mercurial, on the other hand, would've kept track of this information and would have no problem at all.)
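To make footnote [3] concrete, here's a small demonstration (branch and commit names are invented) that after a squash merge, git still considers the branch unmerged:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "base"
main=$(git rev-parse --abbrev-ref HEAD)

# A feature branch with two commits.
git checkout -q -b feature
echo a > a.txt && git add a.txt && git commit -q -m "change 1"
echo b > b.txt && git add b.txt && git commit -q -m "change 2"
git checkout -q "$main"

# Squash-merge it, the way GitHub's default button does.
git merge -q --squash feature
git commit -q -m "feature (squashed)"

# The squashed-away commits are still "unmerged" as far as git is concerned:
git rev-list --count "$main"..feature   # prints 2, not 0
git branch --no-merged "$main"          # still lists feature
```

This is exactly why the next merge involving that branch (or a fork of it) can go badly: git sees two commits it thinks it has never merged.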


Android freedom

Recently I backed up and restored both my Android phone and my Android tablet. There were a couple of reasons for each. The tablet had been borked for quite a long time (any time I tried to upgrade it from the Android 5.0 build it was running, the upgrade failed - also, the thing just froze randomly). The phone was on the CyanogenMod nightly channel and I wanted to switch to the snapshot channel; plus, within the past couple days both WiFi and cellular data had straight up stopped working, so it was pretty unusable. At first I wanted to switch to CopperheadOS on both devices. CopperheadOS doesn't support GApps and probably never will (for very good reasons), but I thought maybe I could make it work. Sadly, I couldn't - I still regrettably need stuff from the Google Play Store. (The tablet ended up back on stock because I want fast upgrades, and the phone ended up on CyanogenMod because they have the fastest upgrades while still offering root.)

The whole experience made me think, though - what would it take to create something that functioned like GApps, but respected your freedom? I'm sure some people reading are already scrambling to link me to their favorite Google Play Services reimplementation, but this isn't the only thing that's in GApps. You gotta think about the user experience, too. Such a system should be able to:

  • Provide the nice APIs that Google Play Services does
  • Store your photos in the cloud, like Google Photos
  • Related to the above, automatically back up and restore apps and their data
  • Transfer from other devices, similar to the above item
  • Support functionality like Google Now
  • Ditto for Google Assistant
  • Integrate into the initial device setup to configure all this stuff

I'm sure there are more that I've missed.

Honestly, we're actually quite close to this. The first bullet can be mostly accomplished with something like microG. Automatic backup and photo storage needs a UI, but fundamentally can be accomplished with any generic WebDAV implementation. F-Droid can be used as the app store. Imagine this: you take a bunch of photos and install a bunch of freedom-respecting apps on your phone. Then, you get a new one. When you set it up, the phone prompts you to sign in to your WebDAV account (which could be e.g. ownCloud, or a WebDAV implementation on Sandstorm) and then automatically reinstalls all your apps from F-Droid, as well as retrieving their data from ownCloud. When you open the built-in gallery app, all your photos are already there because they're seamlessly backed up to the cloud. Your cloud. Transfer can be accomplished in a lot of ways, but I can easily see it building on the above.

Supporting something like Google Now is non-trivial, but I've even already proposed a feature for Huginn that would make this possible. Google Assistant would be very, very difficult, but even without that, we'd have come a long way.

This reality is not that far off. What's missing is some UI pieces and a nice ZIP that can be flashed on top of ROMs, similar to how GApps are flashed today. So who's going to put it all together?

(I suppose I've just volunteered myself - oh well... I'll just add it to my endless list of projects.)


Re: Bitcoin, Magical Thinking, and Political Ideology

Editorial note: I published this almost three years ago on my Tumblr, which I keep semi-private and so don't want to link to. This is a verbatim repost from there, despite the fact that I disagree with some portions of this text nowadays.

Bitcoin, Magical Thinking, and Political Ideology

edwardspoonhands:

I get asked all the time what I think of BitCon…this guy says it better than I could.

+1 for linking to something by Alex Payne. I love him.

About the actual content, at one point Alex says this:

We’re told that Bitcoin “fixes serious problems with existing payment systems that depend on centralized services to verify the validity of transactions.” If by “fixes” you mean “ignores”, then yes: a Bitcoin transaction, like cash, comes with the certainty that a definite quantity of a store of value has changed hands, and little else. How this verifies any “validity” or cuts down on fraud I’m not sure; stolen Bitcoins are spent as easily as stolen cash, which is why theft of Bitcoins has been rampant.

I think the concern isn’t with fraud or validity. The problem that Bitcoin solves is with the centralized banking model. The fundamental idea behind Bitcoin is that it cannot be centrally controlled or taken down, like the internet. It is impossible to flip a switch and “turn off” the Bitcoin network. It is possible to do that with a centralized bank: in that case, “flipping the switch” ends up being “shut down the bank”. Or, “drive the bank out of business”.

If Bitcoin’s strength comes from decentralization, why pour millions into a single company? Ah, because Coinbase provides an “accessible interface to the Bitcoin protocol”, we’re told. We must centralize to decentralize, you see; such is the perverse logic of capital co-opting power. In order for Bitcoin to grow a thriving ecosystem, it apparently needs a US-based, VC-backed company that has “worked closely with banks and regulators to ensure that the service is safe and compliant”.

Maybe the problem isn't with Bitcoin itself, but with what Coinbase is doing with the Bitcoin protocol. Now, to be clear, I think the Bitcoin-to-USD bridge aspect of Coinbase is OK. But I think that this paragraph is very, very true: it is perverse that we have centralized a decentralized protocol. Decentralized protocols tend to be very fragile, IMHO, because users tend to just go with the most popular provider, since it's the easiest solution - effectively centralizing the network in the process.

Anyone remember XMPP? XMPP was supposed to be great. It was supposed to be the future of communications on the internet. But in practice, XMPP servers are unreliable. It’s hard to find one that works well. I don’t actually use XMPP a lot myself, but there are a lot of problems with connections, chat requests being undone (so you have to add a contact for a second time), etc. So what happened because of these problems (and the fact that everyone uses Gmail)? The most-used XMPP server is talk.google.com. At least it was, until Google replaced Google Talk with Google Hangouts, which uses a proprietary protocol with no XMPP bridge - and so the network got screwed over, because suddenly, a lot of people upgraded to Hangouts and cut themselves off from the XMPP network. The sad truth, though, is that it almost doesn’t matter. Take a survey of any random Google Talk user. I will bet you $100 that fewer than 1 out of 50 people you talk to will know that Google Talk is based on XMPP, much less what XMPP is. And there’s almost zero chance that they understand why XMPP matters, or why federated protocols and networks matter.

I’m getting off track, though. So back to Bitcoin and Coinbase. I think what Coinbase is doing by hosting people’s Bitcoin wallets and transactions is fundamentally wrong, because I truly believe that it damages the Bitcoin ecosystem. The centralization in Coinbase is, IMHO, a major problem.

I wonder if this will be solved with a project like arkOS. I mean, maybe the solution that will ultimately happen is for people to spin up their own instances of a Coinbase-like Bitcoin wallet. I think it’s pretty clear that people, in general, like cloud apps better than desktop apps. Access from any computer is a really nice feature to have. Maybe projects like arkOS will help decentralized protocols like Bitcoin remain decentralized in practice.

Or maybe Bitcoin will effectively die, just like XMPP did. I mean, sure, XMPP is still a network. But no one really uses it consciously. Almost everyone who uses XMPP nowadays does so accidentally, through a service that just happens to have an XMPP bridge. Google Talk was a prime example of this, but it’s dead. Now, I’ll bet money that the most-used XMPP provider is Facebook. Never knew that Facebook Chat had an XMPP bridge? That’s because Facebook doesn’t advertise it; the only time it’s mentioned is in the developer docs. The mainstream does not care about the XMPP protocol, because we centralized it. What the mainstream does care about is the services that we centralized it on: Google and Facebook. And that’s a real problem.


Programming as an art form

The other day I described programming to someone. I pointed out that it's actually pretty easy to teach yourself programming languages, especially since after a while you start to carry over concepts from other languages. But what surprised me most about my own explanation(!) was when I compared programming to art: it's the kind of thing where you can just try stuff out and see what works and what doesn't, with no real consequences.

Since I said that, I've actually been thinking about it quite a bit. Programming is traditionally described as an activity closely related to mathematics, and to a certain extent, this makes a lot of sense, because of the logical skills that go into programming. You have to be able to reason your way through situations in order to effectively debug a program, which means logically eliminating possible points of failure. This is where math skills become very important.

But coding isn't just about logic. At OSBRIDGE this year, I attended a session about the beauty of code - it's hard to describe to someone who doesn't live and breathe code, but we all know it when we see it. We as a community value elegance in code, clever algorithms, and thinking outside the box - and as I said in my Just Do It slides, the mere existence of Ruby proves this. So when I described programming as being like art, part of where I was getting that was the analogy I actually stated (being able to easily mess around), but part of it was coming from my appreciation of the beauty of code. Part of it was coming from my sense of the aesthetic properties of programming.

I want us, as a community, in both our regular coding but also our educational outreach, to stop pretending that programming is so logical that it is math. Yes, there are elements of mathematics in coding. Lots of them, even. But to treat programming as a branch of mathematics is doing a disservice to the practice. So in addition to treating programming as a form of math, I want us to start treating programming as a form of art. There is such a thing as ugly code. The entire concept of refactoring would barely exist if that weren't true. So let's start truly appreciating the aesthetic beauty of code, and let's start teaching that. To be honest, I'm not sure how you would teach that. But it couldn't hurt to try.

But even if we can and should treat it as both of those things, that doesn't mean we should make that the be-all and end-all of how we describe programming. I truly believe that programming is not a branch of mathematics. And it's not an art form, either. Programming is neither of those things and both of those things; it is something entirely new, and we should treat it as such. If this isn't true, why do people swear by certain software? Why do people (including myself) aggressively sticker their laptops to showcase what software they use? And if this isn't true, how is it possible that people love their code?


~