Present Perfect

git bash prompt

Filed under: General — Thomas @ 20:39

2012-03-29

I've been having fun recently on a new project where I put myself through all sorts of pain by nesting git submodules into team submodules into platform submodules and so on. The goal is to be able to tag a root repository and thus identify the exact commit hashes of all the submodules, to any level. This was an idea Andoni had when he was working on live transcoding, in response to a request of mine: I want to be able to use a single 'tag' to identify a complete deployment.
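
In practice that looks something like this (a sketch only; the tag name is one you'll see in the prompt further down, and puppet/pro is one of the nested submodules):

# tag the root repository for a deployment...
git tag -a deploy-pro-2012-03-29 -m "pro deployment"
# ...and read back exactly which submodule commit that tag pins
git ls-tree deploy-pro-2012-03-29 puppet/pro
# prints a line like: 160000 commit <sha>  puppet/pro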

That's been working better than I expected, and I even hacked git-submodule-tools so that I can do git rlog and get a recursive git log between two root version tags: a list of every commit in the master repository and all its submodules. That's pretty neat for writing release notes.
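
For a single submodule, that recursive log roughly boils down to this plain-git sketch (not the actual git-submodule-tools implementation; v0.2.0 and v0.2.1 stand in for two root tags):

# the submodule commits recorded by the root repository at each tag
old=$(git rev-parse v0.2.0:puppet/pro)
new=$(git rev-parse v0.2.1:puppet/pro)
# the submodule commits between those two points
(cd puppet/pro && git log --oneline "$old..$new")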

However, the way I embedded submodules causes a bit of pain when going back and forth. One of my hackers once gave me a bash PS1 snippet that shows which git branch you're on in your shell prompt. So today I decided to extend that a little, and I now have this:


(b:release-0.2.x d:deploy-pro-2012-03-29) [thomas@otto platform]$ ls
Makefile platform puppet RELEASE-0.2.1
(b:release-0.2.x d:deploy-pro-2012-03-29) [thomas@otto platform]$ cd puppet/pro/
(s:puppet/pro b:release-0.2.x d:v0.2.1) [thomas@otto pro]$

This is showing me submodule name, branch, and description of the current commit.
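
If you just want the rough idea, the branch-and-describe part of such a prompt looks something like this minimal sketch (the real prompt in the repo linked below also works out the submodule path):

# assemble "(b:<branch> d:<describe>) " when inside a git checkout
__git_ps1_bits() {
    local branch desc
    branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null) || return
    desc=$(git describe --tags --always 2>/dev/null)
    echo "(b:${branch} d:${desc}) "
}
# single quotes so the function is re-evaluated for every prompt
PS1='$(__git_ps1_bits)[\u@\h \W]\$ '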

If you want this for your prompting fun too, here's the github repo.

In the near future: simple port knocking for fun and profit with bash!

Puppet pains

Filed under: sysadmin — Thomas @ 14:53

2012-03-27

The jury is still out on puppet as far as I'm concerned.

On the one hand, of course I relish that feeling of ultimate power over all those machines that it promises... I appreciate the incremental improvements it lets you make, and the feeling it gives you that anything will be possible.

But sometimes it is just so painful to deal with. Agent runs are incredibly slow - it really shouldn't take over a minute for a simple configuration with four machines. And does it really need to be eating 400 MB of RAM while it does so? On top of that, when running with the default included web server (is that WEBrick?), I have to restart my puppetmaster for every single run, because of one multiple definition that I can't figure out: it simply goes away when you restart, but comes back after an agent run:

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Class[Firewall::Drop] is already defined; cannot redefine at /etc/puppet/environments/testing/modules/manifests/firewall/drop.pp:19 on node esp
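
In practice that means something like this before every agent run (a workaround, not a fix; the exact service name and agent invocation will depend on your setup):

service puppetmaster restart && puppet agent --test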

And sometimes it's just painfully silly. I just spent two hours trying to figure out why my production machine couldn't complete its puppet run.

All it was telling me was

Could not evaluate: 'test' is not executable

After a lot of googling, I stumbled on this ticket. And indeed, I had a file called 'test' in my /root directory.
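
The fix, for the record - and, if I read the ticket right, fully qualifying such helper commands in your manifests (e.g. /usr/bin/test instead of a bare test) avoids the problem altogether:

ls -l /root/test    # an innocent, non-executable file picked up instead of /usr/bin/test
rm /root/test       # and the embarrassingly simple fix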

I couldn't agree with the reporter more:

I find it incredibly un-pragmatic to have policies fail to run whenever someone creates a file in root which matches the name of an executable I am running.

DAD hacking

Filed under: DAD — Thomas @ 00:06

2012-03-18
00:06

On the bad side of life, I was planning to go to an awesome Calçotada in Lleida today, but I spent last night awake until 4:30 with an upset stomach, so I had to cancel and stay home feeling like shit.

On the good side of life, I really had no excuse left to not do a little long overdue hacking on Digital Audio Database.

I still use it regularly to listen to music, but the GNonLin-based player is just really not very stable. I should really just rewrite it simply using adder, like I did roughly ten years ago, but my brain isn't up to that. So instead I decided to clean up the web-based, WebSockets-using player I prototyped at OVC last year.

I started with some refactoring, clearly defining model/view/controller base classes and adapting the player and playerview classes to them.

WebSocket code seems to need an update every few months - I pulled the latest revision of txWebsocket into my fork, so my recent browsers actually play music again.

Since the last time I hacked on this, I actually added my 1500+ freshly ripped CDs, in FLAC format - which browsers don't actually support.

So, first off, I added an option for the scheduler - responsible for picking tracks, and picking audio files to represent them - to filter by extension. It's not ideal, but it will do for now, and I punched that filter through the levels of abstraction in DAD. I now start it filtering on .mp3 and .oga, and so Chrome can play back all the tracks the scheduler throws at it.

The web-based player just loads tracks and timing info from the scheduler relative to page load time. I've been wanting to make that absolute for a while, so I did just that - the player server schedules tracks for epoch seconds now through websockets.

I had an entertaining half hour listening to the awesome echo effects obtained by having three chrome pages simultaneously playing the jukebox schedule - each page being slightly out of sync with the others.

As I'll be wanting to use a smallish computer for music playback using a browser, I adapted the code to not use localhost any longer, but do everything with relative URLs. Voilà - the laptop now plays music too, a little bit more out of sync, and of course through its own speakers, adding to the eerie effect.

As an encore, I wanted to stumble my way through some jquery code, at which I'm a certified newbie. I want a nice background slideshow related to the current artist, so I pulled together echonest and bgstretcher-2 as an experiment.

That seems to work relatively well, except that the slideshow plugin doesn't let you reload a new set of images to cycle through. And the other ones I tried after that seemed to have the same problem.

Oh well, it's a start. If anyone knows of a good jquery background slideshow plugin that lets me update the list of URLs for images at any time, let me know!

More adventures in puppet

Filed under: General,Hacking,sysadmin — Thomas @ 23:32

2012-03-04

After last week's Linode incident I was getting a bit more worried about security than usual. That coincided with the fact that I found I couldn't run puppet on one of my linodes, and some digging turned up that it was because /tmp was owned by uid:gid 1000:1000. Since I didn't know the details of the break-in (and I hadn't slept more than 4 hours for two nights, one of which involved a Flumotion DVB problem), I had no choice but to be paranoid about it. It took me a good half hour to realize that I had inflicted this problem on myself - a botched rsync command (rsync -arv . root@somehost:/tmp).

So I wasn't hacked, but I still felt I needed to tighten security a bit. So I thought I'd go with something simple to deploy using puppet - port knocking.

Now, that would be pretty easy to do if I just deployed firewall rules as a single set. But I started deploying firewall rules using the puppetlabs firewall module, which allows me to group rules per service. So that's the direction I wanted to head in.

On Saturday, I worked on remembering enough iptables to actually understand how port knocking works in a firewall. Among other things, I realized that our current port knocking is not ideal - it uses only two ports. They're in descending order, so usually they would not be triggered by a normal port scan, but they would be triggered by one in reverse order. That is probably why most sources recommend using three ports, where the third port is between the first two, so they're out of order.

So I wanted to start by getting the rules right, and understanding them. I started with this post, and found a few problems in it that I managed to work out. The fixed version is this:

UPLINK="p21p1"
#
# Comma-separated list of ports to protect, with no spaces.
SERVICES="22,3306"
#
# Location of iptables command
IPTABLES='/sbin/iptables'

# in stage1, connects on 3456 get added to knock2 list
${IPTABLES} -N stage1
${IPTABLES} -A stage1 -m recent --remove --name knock
${IPTABLES} -A stage1 -p tcp --dport 3456 -m recent --set --name knock2

# in stage2, connects on 2345 get added to heaven list
${IPTABLES} -N stage2
${IPTABLES} -A stage2 -m recent --remove --name knock2
${IPTABLES} -A stage2 -p tcp --dport 2345 -m recent --set --name heaven

# at the door:
# - jump to stage2 with a shot at heaven if you're on list knock2
# - jump to stage1 with a shot at knock2 if you're on list knock
# - get on knock list if connecting to 1234
${IPTABLES} -N door
${IPTABLES} -A door -m recent --rcheck --seconds 5 --name knock2 -j stage2
${IPTABLES} -A door -m recent --rcheck --seconds 5 --name knock -j stage1
${IPTABLES} -A door -p tcp --dport 1234 -m recent --set --name knock

${IPTABLES} -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
${IPTABLES} -A INPUT -p tcp --match multiport --dport ${SERVICES} -i ${UPLINK} -m recent --rcheck --seconds 5 --name heaven -j ACCEPT
${IPTABLES} -A INPUT -p tcp --syn -j door

# close everything else
${IPTABLES} -A INPUT -j REJECT --reject-with icmp-port-unreachable

And it gives me this iptables state:
[screenshot: the resulting iptables state]
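
If you want to look at the same state on your own machine, these two commands show the generated rules and the live packet counters:

iptables --list-rules
watch iptables -nvL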

So the next step was to reproduce these rules using puppet firewall rules.

Immediately I ran into the first problem - we need to add new chains, and there doesn't seem to be a way to do that in the firewall resource. At the same time, my rules use the iptables 'recent' match module, and none of that is implemented either. I spent a bunch of hours trying to add this, but since I don't really know Ruby and I've only started using Puppet for real in the last two weeks, that wasn't working out well. So then I thought, why not look in the bug tracker and see if anyone else has tried to do this? I ask my chains question on IRC while I find a ticket about recent support. A minute later danblack replies on IRC with a link to a branch that supports creating chains - the same person that made the recent branch.

This must be a sign - the same person helping me with my problem in two different ways, with two branches? Today will be a git-merging, to-the-death hacking session, fueled by the leftovers of yesterday's mexicaganza.

I start with the branch that lets you create chains, which works well enough, bar some documentation issues. I create a new branch and merge this one on, ending up in a clean rebase.

Next is the recent branch. I merge that one on. I choose to merge in this case, because I hope it will be easier to make the fixes needed in both branches, but still pull everything together on my portknock branch, and merge in updates every time.

This branch has more issues - rake test doesn't even pass. So I start digging through the failing testcases, adding print debugs and learning just enough ruby to be dangerous.

I slowly get better at fixing bugs. I create minimal .pp files in my /etc/puppet/manifests so I can test just one rule with e.g. puppet apply manifests/recent.pp
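
The test loop looks roughly like this (a sketch, run from /etc/puppet; the recent_* attribute names come from the branches under test):

# a one-rule manifest, so a failure is easy to bisect
cat > manifests/recent.pp <<'EOF'
firewall { '100 test recent rcheck':
  chain          => 'INPUT',
  proto          => 'tcp',
  dport          => '1234',
  recent_command => 'rcheck',
  recent_name    => 'knock',
  recent_seconds => '5',
}
EOF
puppet apply manifests/recent.pp
puppet apply manifests/recent.pp   # a second run should be silent
iptables --list-rules INPUT        # and the rule should appear exactly once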

The firewall module hinges on being able to convert a rule to a hash as expressed in puppet, and back again, so that puppet can know that a rule is already present and does not need to be executed. I add a conversion unit test for each of the features that exercises these basic operations, but I end up actually fixing the bugs by sprinkling prints and testing with a single apply.

I learn to do service iptables restart; service iptables stop to reset my firewall and start cleanly. It takes me a while to realize when I've botched the firewall so badly that I can't even google (in my case, by forgetting to have -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT) - not helped by the fact that for the last two weeks the network on my home desktop has been really flaky, simply stopping after some activity and forcing me to restart NetworkManager and reload network modules.

I start getting an intuition for how puppet's basic resource model works. For example, if a second puppet run produces output, something's wrong. I end up fixing lots of parsing bugs because of that - once I notice that a run tells me something like

notice: /Firewall[999 drop all other requests]/chain: chain changed '-p' to 'INPUT'
notice: Firewall[999 drop all other requests](provider=iptables): Properties changed - updating rule

I know that, even though the result seems to work, I have some parsing bug, and I can attack that bug by adding another unit test and adding more prints for a simple rule.

I learn that, even though the run may seem clean, if the module didn't figure out that it already had a rule (again, because of bogus parsing), it just adds the same rule again - another thing we don't want. That gets fixed on a few branches too.
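
A cheap way to catch that case: after a couple of runs, any line this prints is a rule the module failed to recognize as already present:

iptables --list-rules | sort | uniq -d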

And then I get to the point where my puppet apply brings all the rules together - except it still does not work. And I notice one little missing rule: ${IPTABLES} -A INPUT -p tcp --syn -j door

And I learn about --syn, and --tcp-flags, and to my dismay, there is no support for tcp-flags anywhere. There is a ticket for TCP flags matching support, but nobody worked on it.

So I think, how hard can it be, with everything I've learned today? And I get onto it. It turns out it's harder than expected. Before today, all firewall resource properties swallowed exactly one argument - for example, -p (proto). In the recent module, some properties are flags, and don't have an argument, so I had to support that with some hacks.

The rule_to_hash function works by taking an iptables rule line, and stripping off the parameters from the back in reverse order one by one, but leaving the arguments there. At the end, it has a list of keys it saw, and hopefully, a string of arguments that match the keys, but in reverse order. (I would have done this by stripping the line of both parameter and argument(s) and putting those on a list, but that's just me)

But the --tcp-flags parameter takes two arguments - a mask of flags, and a list of flags that needs to be set. So I hack it in by adding double quotes around it, so it looks the same way a --comment does (except --comment is always quoted in iptables --list-rules output), and handle it specially. But after some fidgeting, that works too!
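
For reference, these two rules are equivalent - --syn is just shorthand for the flag match, and the second form is what comes back out of iptables --list-rules, so it's the one the parsing code has to cope with:

iptables -A INPUT -p tcp --syn -j door
iptables -A INPUT -p tcp --tcp-flags FIN,SYN,RST,ACK SYN -j door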

And my final screenshot for the day:
[screenshot: the final iptables state, port knocking chains included]

So, today's result: I now have a working node that implements port knocking:

node 'ana' {

  $port1 = '1234'
  $port2 = '3456'
  $port3 = '2345'

  $dports = [22, 3306]

  $seconds = 5

  firewall { "000 accept all icmp requests":
    proto  => "icmp",
    action => "accept",
  }

  firewall { "001 accept all established connections":
    proto  => "all",
    state  => ["RELATED", "ESTABLISHED"],
    action => "accept",
  }

  firewall { "999 drop all other requests":
    chain  => "INPUT",
    proto  => "tcp",
    action => "reject",
  }

  firewallchain { [':stage1:', ':stage2:', ':door:']:
  }

  # door
  firewall { "098 knock2 goes to stage2":
    chain          => "door",
    recent_command => "rcheck",
    recent_name    => "knock2",
    recent_seconds => $seconds,
    jump           => "stage2",
    require        => [
      Firewallchain[':door:'],
      Firewallchain[':stage2:'],
    ]
  }

  firewall { "099 knock goes to stage1":
    chain          => "door",
    recent_command => "rcheck",
    recent_name    => "knock",
    recent_seconds => $seconds,
    jump           => "stage1",
    require        => [
      Firewallchain[':door:'],
      Firewallchain[':stage1:'],
    ]
  }

  firewall { "100 knock on port $port1 sets knock":
    chain          => "door",
    proto          => 'tcp',
    recent_name    => 'knock',
    recent_command => 'set',
    dport          => $port1,
    require        => [
      Firewallchain[':door:'],
    ]
  }

  # stage 1
  firewall { "101 stage1 remove knock":
    chain          => "stage1",
    recent_name    => "knock",
    recent_command => "remove",
    require        => Firewallchain[':stage1:'],
  }

  firewall { "102 stage1 set knock2 on $port2":
    chain          => "stage1",
    recent_name    => "knock2",
    recent_command => "set",
    proto          => "tcp",
    dport          => $port2,
    require        => Firewallchain[':stage1:'],
  }

  # stage 2
  firewall { "103 stage2 remove knock":
    chain          => "stage2",
    recent_name    => "knock",
    recent_command => "remove",
    require        => Firewallchain[':stage2:'],
  }

  firewall { "104 stage2 set heaven on $port3":
    chain          => "stage2",
    recent_name    => "heaven",
    recent_command => "set",
    proto          => "tcp",
    dport          => $port3,
    require        => Firewallchain[':stage2:'],
  }

  # let people in heaven
  firewall { "105 heaven let connections through":
    chain          => "INPUT",
    proto          => "tcp",
    recent_command => "rcheck",
    recent_name    => "heaven",
    recent_seconds => $seconds,
    dport          => $dports,
    action         => accept,
    require        => Firewallchain[':stage2:'],
  }

  firewall { "106 connection initiation to door":
    # FIXME: specifying chain explicitly breaks insert_order !
    chain     => "INPUT",
    proto     => "tcp",
    tcp_flags => "FIN,SYN,RST,ACK SYN",
    jump      => "door",
    require   => [
      Firewallchain[':door:'],
    ]
  }
}

and I can log in with

nc -w 1 ana 1234; nc -w 1 ana 3456; nc -w 1 ana 2345; ssh -A ana
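
Or, wrapped in a tiny helper function (hypothetical, using the ports configured on 'ana' above; the knocks themselves get rejected, which is expected, hence the explicit return 0):

knock() {
    local host=$1; shift
    local port
    for port in "$@"; do
        nc -w 1 "$host" "$port"
    done
    return 0
}
knock ana 1234 3456 2345; ssh -A ana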

Lessons learned today:

  • watch iptables -nvL is an absolutely excellent way of learning more about your firewall - you see your rules and the traffic on them in real time. It made it really easy to see for example the first nc command triggering the knock.
  • Puppet is reasonably hackable - I was learning quickly as I progressed through test and bug after test and bug.
  • I still don't like ruby, and we may never be friends, but at least it's something I'm capable of learning. Puppet might just end up being the trigger.

Tomorrow, I need to clean up the firewall rules into something reusable, and deploy it on the platform.
