Present Perfect

Getting Things Done with CouchDB, part 2: Security

Filed under: couchdb,Hacking,mushin,Python — Thomas @ 16:26

2012-09-15

(... wherein I disappoint the readers who were planning to drop my database)

So, I assume you now know what mushin is.

The goal for my nine/eleven hack day was simply to add authentication everywhere to mushin, making it as user-friendly as possible, and as secure as couchdb was going to let me.

Now, CouchDB's security story has always been a little confusing to me. It's gotten better over the years, though, so it was time to revisit it and see how far we could get.

By default, CouchDB listens only on localhost, uses plaintext HTTP, and is in Admin Party mode. Which means anyone is an admin, and anyone who can make a request on localhost can create and delete databases or documents. This is really useful for playing around with CouchDB and learning its REST API using curl - so easy, in fact, that it's hard to move away from that simplicity (I would not be surprised to find that there are companies out there running couchdb on localhost, unprotected).
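
To make that concrete, here is a minimal sketch of talking to an Admin Party CouchDB with nothing but the Python standard library (the database name 'sandbox' is made up for illustration; any name works, which is exactly the point):

import json
import urllib.request

BASE = "http://localhost:5984"

def couch(method, path, body=None):
    # Send a JSON request to CouchDB and decode the JSON response.
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(couch("PUT", "/sandbox"))                    # anyone can create a database...
print(couch("POST", "/sandbox", {"done": False}))  # ...add documents to it...
print(couch("DELETE", "/sandbox"))                 # ...and drop it again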

Authorization

What can users do? What types of users are there?

In a nutshell, couchdb has three levels of permissions (a sketch of the corresponding _security object follows the list):

  • server admin: can do anything: create and delete databases, replicate, ... This is server-wide. Think of it as root for couchdb.
  • database admin: can do anything to a database; including changing design documents
  • database reader: can read documents from the database, and (confusingly) write normal documents, but not design documents
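
To make the last two levels concrete, here is roughly what the per-database _security object looked like in the CouchDB versions of that era (server admins are not in this object; they live in the server configuration instead). The name is just an example:

# Illustrative _security object for one database, CouchDB 1.0/1.1 style
# (later releases renamed "readers" to "members"). A server or database
# admin PUTs this to /<db>/_security.
security = {
    "admins":  {"names": ["thomas"], "roles": []},  # database admins
    "readers": {"names": ["thomas"], "roles": []},  # can read, and write normal docs
}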

CouchDB: The Definitive Guide sadly only mentions the server admin and admin party, and is not as definitive as its title suggests. There are slightly better references out there (although I still haven't cleared up what roles are meant to be used for, besides the internal _admin role). By far the clearest explanation of security-related concepts in CouchDB is in Jan Lehnardt's Couchbase blog post.

I'll come back to how these different objects/permissions get configured in a later post.

Authentication

How do you tell CouchDB who you are, so it can decide what it lets you do?

By default, CouchDB has the following authentication handlers:

  • OAuth
  • cookie authentication
  • HTTP basic authentication, RFC 2617

But wait a minute... To tell the database who I am, I have a choice between OAuth (which isn't documented anywhere, and there doesn't seem to be an actual working example of it, but I assume this was contributed by desktopcouch), cookie authentication (which creates a session and a cookie for later use, but to create the session you need to use a different authentication mechanism in the first place), or basic authentication (which is easy to sniff).
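
As a sketch of what the cookie dance looks like in practice (standard library only; host, database, and credentials are made up):

import http.cookiejar
import urllib.parse
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Step 1: POST the credentials to /_session. Note that name and password
# do cross the wire here; the response sets an AuthSession cookie.
creds = urllib.parse.urlencode({"name": "thomas", "password": "secret"}).encode()
opener.open(urllib.request.Request(
    "http://localhost:5984/_session", data=creds,
    headers={"Content-Type": "application/x-www-form-urlencoded"}))

# Step 2: later requests ride on the cookie, so no password is sent again;
# the jar transparently picks up each refreshed cookie.
print(opener.open("http://localhost:5984/mushin/_all_docs").read())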

So, in practice, at some point the password is going to be sent in plaintext, and your choice is basically between once, a few times (every time you let your cookie time out, which happens after ten minutes, although you get a new cookie on every request), or every single time. I'm not a security expert, but that doesn't sound good enough.

Typically, my solution would be to switch to https:// and SSL to solve that part. This is included since CouchDB 1.1, although I haven't tried it yet (I'm still on couchdb 1.0.3, because that's what I got working on my phone).

Now, my use case is a command-line application. This adds some additional security and usability concerns:

  • When invoking gtd (the mushin command-line client) to add a task, it would be great if I didn't have to specify my username and password every single time. Luckily, gtd can be started as a command line interpreter, so that helps.
  • It would be great if I didn't have to specify a password on the command line either, whether as part of a URL (for example, when replicating) or as an option. I really hate to see passwords in the process list, in shell history, or in a config file; I typically use my lowest-quality passwords for apps that force me to do this, and I want to avoid writing software that leaves no other option.

The Plan

In the end, the plan of attack started to clear up:

  • Get away from Admin Party in CouchDB, add an admin user
  • Create a new user in the _users database in CouchDB, with the same name as my system username
  • Create a _security object on the mushin database, and allow my username as a reader (a sketch of these first three steps follows the list).
  • To connect to couchdb from gtd, use the current OS user, and ask for the password on the terminal (also sketched after the list).
  • Use Paisley's username/password and basic auth support. This means auth details still go over the network in plaintext. Add an Authenticator class that integrates with Paisley such that, when CouchDB refuses the operation, the authenticator can be asked to provide username and password to repeat the same request with. Together with a simple implementation that asks for the password on the terminal, this handles the security problem of passing the password to the application.
  • Use the cookie mechanism to avoid sending username/password every time. Create a session using the name and password, then store the cookie, and use that instead for the next request. Anytime you get a new cookie, use that from then on. This was relatively easy to do, since paisley has changed to use the new twisted.web.agent.Agent, and so it was easy to add a cookie-handling Agent together with the cookielib module.

  • A tricky bit was replication. When you replicate, you tell one CouchDB server to replicate to or from a database on another CouchDB server - from the point of view of the first one. On the one hand, CouchDB sometimes gives confusing response codes; for example, a 404 in the case where the remote database refuses access to the db, but a 401 in the case where the local database refuses access. On the other hand, we have to give our couchdb database the authentication information for the other one - again, we have to pass username and password, in plaintext, as part of the POST body for replication. I doubt there is a way to do this with cookies or oauth, although I don't know. And in any case, you're not even guaranteed that you can get an oauth token or cookie from the other database, since that database might not even be reachable by you (although this wouldn't be a common case). The best I could do here is, again, ask for the password on the terminal if username is given but password is not.
  • Don't log the password anywhere visibly; replace it with asterisks wherever it makes sense. (Incidentally, I later found out that couchdb does exactly the same in its console logging. Excellent.)
  • Upgrade to use CouchDB 1.1 everywhere, and do everything over SSL
  • Figure out OAuth, possibly stealing techniques from desktopcouch. For a command-line client, it would make sense that my os user is allowed to authenticate to a local couchdb instance only once per, say, X session, and a simple 'gtd add U:5 @home p:mushin add oauth support' would not ask for a password.
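
As promised in the list above, here is a rough sketch of the first three steps against a CouchDB 1.0/1.1 server. Host, database, names, and passwords are all illustrative; pre-1.2 user documents store a salt and password_sha = sha1(password + salt):

import base64
import hashlib
import json
import os
import urllib.request

BASE = "http://localhost:5984"

def put(path, body, user=None, password=None):
    # PUT a JSON body, optionally with HTTP basic authentication.
    req = urllib.request.Request(BASE + path, data=json.dumps(body).encode(),
                                 method="PUT",
                                 headers={"Content-Type": "application/json"})
    if user is not None:
        token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
        req.add_header("Authorization", "Basic " + token)
    return json.loads(urllib.request.urlopen(req).read())

# 1. End Admin Party: define a server admin (couchdb hashes the password itself).
put("/_config/admins/admin", "adminsecret")

# 2. Create a regular user matching my system username; from now on we have to
#    authenticate as the admin we just created.
salt = os.urandom(16).hex()
put("/_users/org.couchdb.user:thomas",
    {"type": "user", "name": "thomas", "roles": [], "salt": salt,
     "password_sha": hashlib.sha1(("usersecret" + salt).encode()).hexdigest()},
    user="admin", password="adminsecret")

# 3. Lock the mushin database down to that user.
put("/mushin/_security",
    {"admins": {"names": [], "roles": []},
     "readers": {"names": ["thomas"], "roles": []}},
    user="admin", password="adminsecret")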
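
And here is a hypothetical sketch of the terminal side of the plan. The class and method names are made up for illustration; the real code wires this into Paisley/Twisted rather than using it standalone:

import getpass

class TerminalAuthenticator:
    """Provide credentials from the terminal, at most once per session."""

    def __init__(self):
        self.username = getpass.getuser()  # default to the current OS user
        self.password = None

    def get_credentials(self):
        # Only prompt the first time; getpass keeps the password out of the
        # command line, the process list, and shell history.
        if self.password is None:
            self.password = getpass.getpass(
                "CouchDB password for %s: " % self.username)
        return self.username, self.password

A client would only call get_credentials() after a request comes back with a 401, then retry the same request with basic auth (or trade the credentials for a session cookie).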

I made it pretty far down the list, stopping just short of upgrading to CouchDB 1.1.

But at least tomorrow at work, people will not be able to get at my tasks on their first attempt. (Not that there's anything secret in there anyway, but I digress.)

Getting Things Done with CouchDB, part 1: how did I get here?

Filed under: couchdb,Hacking,mushin — Thomas @ 20:47

2012-09-11

(... where I spend a whole post setting the scene)

Today is a day off in Barcelona - to some people it's a national holiday. To me, it's a big old opportunity to spend a whole day hacking on something I want to work on for myself.

And today, I wanted to go back to hacking on mushin. What is mushin, you ask? I thought you'd never ask. It's going to take all of this post to explain, and I won't even get to today's hack part.

mushin is an application to implement the Getting Things Done approach. I follow this approach with varying degrees of success, but it's what has worked best for me so far.

I was never really happy with any of the tools available that claimed to implement it. For me, the basic requirements are:

  • available off-line (no online-only solutions)
  • data available across devices. I use my laptop, home desktop, and work desktop regularly; and when I'm in the store and only have my phone with me, I want to be able to see what I was supposed to buy whenever I'm in the @shopping context
  • easy to add new tasks; preferably a command-line. I add tasks during meetings, when I'm on the phone, when I'm talking to someone, or simply typing them on my phone, so it has to be easy and quick.

This excluded many solutions at the time I first started looking for one. I recall RememberTheMilk was popular at the time, but since I was spending at least four hours a week on plane trips back then, and planes were excellent places to do GTD reviewing, it was simply not an option for me.

I don't know if Getting Things GNOME already existed back then. When I first looked at it, it was basically a local-only application, but since then it's evolved to let you synchronize tasks online, although that still looks like an added-on feature instead of an integral design choice. I should try it again someday.

Anyway, I ended up using yagtd, which is a command-line application operating on a text file. I put the text file in subversion, and then proceeded to use it across three computers (back then I did not have a real smartphone yet), and cursing every time I forgot to update from subversion or commit to subversion. At least the conflicts were usually easy to manage since yagtd basically stores one line per 'thing'.

And then I discovered CouchDB, and I did what they told me to - I relaxed. I created a personal project called 'things' that took most of yagtd's functionality but put all the data in CouchDB. CouchDB solved two of the three requirements on my list above: it promised to make it possible to have my data available locally, even when offline, and to be able to synchronize it across devices. (Of course, I later figured out that it's nice in theory but not that simple in practice - but the basics are there.)

I really liked the name 'things', because, you know, if you're writing a GTD application, the things you are doing are actually 'things'. But I realized it was a stupid, ungoogleable name, so I ended up going for something Japanesey that was close enough to 'mind like water' and stumbled on the name 'mushin' (bonus: it starts with an m, and for some reason I'm accumulating personal projects that start with m).

So I happily hacked away on mushin, making it have the same featureset as yagtd, but with couchdb as the backend. Originally I used python-couchdb, since for a command-line application it's not strictly necessary to write non-blocking code. This was almost three years ago, and I've been using this application pretty much every day since then. (In fact, I have 2153 open things to do, and a well-rested mind that typically isn't too concerned about forgetting stuff, because everything I need to worry about is *somewhere* in those 2153 open things. And some days that statement is truer than others!)

I wonder how many people by now think I'm a classic case of NIH - surely lots of people are happily using tools for GTD already. The way I convinced myself that it made sense to create this tool is that I was incredibly excited about the promise of CouchDB (and I still am, although I'm confused about what's going on in CouchDB land, but more on that in another post).

Maybe I was a rarity back then, with my work desktop in Barcelona, my laptop, and my home desktop in Belgium, and wanting my data in all three, all the time. In the back of my mind I was still planning to write the Ultimate Music Application, with automatic synchronization of tracks and ratings synchronized across devices, and I thought that a simple GTD application would be an excellent testing ground to see if CouchDB really could deliver on the promises it was making to my future self.

Over time, I adapted mushin. At some point I got an N900, and I wanted mushin to work on it, where a command-line client didn't make that much sense. So I wrote a GUI frontend for it, and it was time to port mushin over to Twisted, so that all calls could be done asynchronously and integrate nicely with the GUI. I switched from python-couchdb to Paisley, a Twisted-based CouchDB client. (At one time I was even tricked into thinking I was the maintainer, appointed by the previous maintainer according to the last-touched rule, but then someone else happily forked Paisley from under me, and it now lives on github.) I copied over the python-couchdb Document/Schema/Mapping code, because I liked how it worked.

And there I had a Maemo client, using most of the same code, and with access to the same data as all my desktops and laptop. I had reached my requirements, in a fashion.

It wasn't ideal: replication had to be triggered manually. I had a command to do so, but any restart of a couchdb server, or being offline too long (a few hours of my laptop not being online or even on, for example), would break the replication. In practice, you still somehow need to initiate the replication, or write code to do that for you. Especially with my phone, which I usually don't have online, it's easy to forget, and to find yourself at the store without having synced first, not remembering exactly what it was you were supposed to buy. But it was good enough.
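
For reference, triggering a replication manually is a single POST to /_replicate. A sketch (host names are made up, and in Admin Party mode no credentials are needed, which is exactly the problem part 2 deals with):

import json
import urllib.request

body = json.dumps({
    "source": "http://desktop.example.com:5984/mushin",  # pull from the desktop...
    "target": "mushin",                                  # ...into the local database
}).encode()
req = urllib.request.Request("http://localhost:5984/_replicate", data=body,
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read())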

But I never really mentioned anything about this project, and the reason was simple. I was using CouchDB in Admin Party mode (anyone can do anything), and to be able to replicate (without fiddling with tunnels and the like) I had couchdb listening on 0.0.0.0. So if anyone had known I was using this tool, it would have been very easy to take a look at all my tasks (bad), assign me some new tasks (even worse), or, you know, drop my whole database of things (I used to think that was bad, but today I'm not so convinced anymore?)

So I decided to rely on security by obscurity, and never write about my GTD application.

But now I did, so if you're still with me, you may be excited about the prospect of getting onto a network of mine and dropping my database, to release me from the mental pressure of 2153 things to be done?

Ah, well... That's what part 2 is going to be about. Stay tuned.

In the meantime, I converted my SVN repository to git and threw it on github. I only run it uninstalled at the moment; it's not ready to be packaged yet, but hey, if you're brave... have a go.

Evolution backup recovery

Filed under: Open Source — Thomas @ 15:36

2012-04-01

I pretty much never drink and hack, and last Friday evening is a good reason why. I was having a rare beer and managed to spill part of it on my keyboard and desk. So I turned the keyboard around and started cleaning it as fast as I could, forgetting to actually unplug it first. I called it a night, because nothing good was going to come from that night anymore.

And on Saturday morning I noticed that my INBOX was gone. Hm, is it really gone? Yep, gone from my laptop too. Crap, must have deleted it on the server by accident while cleaning my keyboard...

And because my NAS is a little full lately, I haven't been as diligent with backups as I normally have been. Hm, and the modest cache on my N900 isn't very useful either...

Luckily, evolution on my work machine was shut down for some reason, so yay, it has a reasonably fresh cache of my INBOX!

Except that it's not all that straightforward to actually get this cache back into Evolution. Just copying its contents to an existing or new folder doesn't do anything. The files themselves are split-up versions of the actual email, presumably because the evo guys thought it would be faster to search header and body by splitting them off from the attachments and saving them separately, inventing their own caching format. Which is fine, but makes it impossible to actually restore a backup from...

After lots of Googling, I stumbled upon this tool that did the trick for me. A lot of hours wasted over a bunch of emails... But what would happen if I really lost my IMAP server mail? Run this script by hand on all the folders? Shudder...

git bash prompt

Filed under: General — Thomas @ 20:39

2012-03-29

I've been having fun recently on a new project where I put myself through all sorts of pain by nesting git submodules into team submodules into platform submodules and so on. The goal is to be able to tag a root repository and thus identify the exact commit hashes of all the submodules, to any level. This was an idea Andoni had when he was working on live transcoding, in response to a request of mine: I want to be able to use a single 'tag' to identify a complete deployment.

That's been working better than I expected, and I even hacked git-submodule-tools so that I can do git rlog and get a recursive git log between two root version tags: a list of every commit in the master repository and all its submodules. That's pretty neat for writing release notes.

However, the way I embedded submodules causes a bit of pain when going back and forth. One of my hackers once gave me a PS1 bash prompt that shows which git branch you're on in your shell prompt. So today I decided to extend that a little, and I now have this:


(b:release-0.2.x d:deploy-pro-2012-03-29) [thomas@otto platform]$ ls
Makefile platform puppet RELEASE-0.2.1
(b:release-0.2.x d:deploy-pro-2012-03-29) [thomas@otto platform]$ cd puppet/pro/
(s:puppet/pro b:release-0.2.x d:v0.2.1) [thomas@otto pro]$

This is showing me submodule name, branch, and description of the current commit.

If you want this for your prompting fun too, here's the github repo.

In the near future, simple portknocking for fun and profit with bash!

Puppet pains

Filed under: sysadmin — Thomas @ 14:53

2012-03-27

The jury is still out on puppet as far as I'm concerned.

On the one hand, of course, I relish that feeling of ultimate power you are promised over all those machines... I appreciate the incremental improvements it lets you make, and the way it gives you the feeling that anything will be possible.

But sometimes it is just so painful to deal with. Agent runs are incredibly slow: it really shouldn't take over a minute for a simple configuration with four machines. Also, does it really need to eat 400 MB of RAM while it does so? And when running with the default included web server (is that WEBrick?), I have to restart my puppetmaster for every single run, because there is this one multiple definition that I can't figure out, which simply goes away when you restart but comes back after an agent run:

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Class[Firewall::Drop] is already defined; cannot redefine at /etc/puppet/environments/testing/modules/manifests/firewall/drop.pp:19 on node esp

And sometimes it's just painfully silly. I just spent two hours trying to figure out why my production machine couldn't complete its puppet run.

All it was telling me was

Could not evaluate: 'test' is not executable

After a lot of googling, I stumbled on this ticket. And indeed, I had a file called 'test' in my /root directory.

I couldn't agree with the reporter more:

I find it incredibly un-pragmatic to have policies fail to run whenever someone creates a file in root which matches the name of an executable I am running.
