Present Perfect

morituri 0.1.1 ‘Dead’ released!

Filed under: Hacking,morituri,Releases — Thomas @ 23:30

2010-04-16

For some unfathomable reason it's been a rather productive two weeks of short ##morituri hacking sessions.

I was wondering why I wasn't getting any feedback or Trac tickets, until I found out that a) I had five-month-old patches lying around in my Trac and b) I had forgotten to configure the ticket mails properly.

That, spurred on by actual bug reports from blizzard, who seems to have ripped 600 CDs with this piece of code already (more than me, in any case), kept me going towards a new release.

So, this new release adds, among other things:

  • 'rip image encode' to encode a lossless image to a lossy one (vorbis, mp3, ...)
  • tagging tracks with MusicBrainz IDs
  • 'rip image retag' to apply up-to-date MusicBrainz info (including IDs) to existing rips. I did this one specifically for Chris when he found out none of his rips had the MusicBrainz IDs, and I felt guilty.
  • an auto-generated man page
  • 'rip offset find' now tries a complete list of known drive offsets generated from the AccurateRip database
  • improved the basic Task code I wrote for abstracting asynchronous operations that can be hooked into a GLib main loop or Twisted reactor; exception information is now more useful (a sketch of the idea follows this list)
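
Since I keep mentioning it, here is a minimal sketch of the shape of that Task abstraction. This is hypothetical illustration code, not the actual morituri source; the real thing does a lot more (progress reporting, exception capture, chaining):

class Task(object):
    # start() is passed a runner that knows how to schedule
    # callbacks, so the same task can run under GLib or Twisted
    def start(self, runner):
        self.runner = runner

    def schedule(self, delta, func, *args, **kwargs):
        self.runner.schedule(delta, func, *args, **kwargs)

class TwistedRunner(object):
    # schedules task callbacks with the Twisted reactor
    def schedule(self, delta, func, *args, **kwargs):
        from twisted.internet import reactor
        reactor.callLater(delta, func, *args, **kwargs)

class GLibRunner(object):
    # schedules task callbacks with the GLib main loop
    def schedule(self, delta, func, *args, **kwargs):
        import gobject

        def fire():
            func(*args, **kwargs)
            return False  # one-shot: don't repeat the timeout
        gobject.timeout_add(int(delta * 1000), fire)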

A bunch of bugs were fixed too, and I especially want to thank Peter Oliver, who provided me with three patches that I sadly overlooked. I hope he comes back.

In any case, enjoy the code and start ripping!

As for the next release, I've already started on ripping the data track (which ended up being easier than I thought: using dd and wrapping it in a Task that parses the output). However, I haven't yet been able to write a full image back to a CD, for various reasons. First, the .cue files I generate have multiple FILE statements, which wodim doesn't seem to support and which was only recently added to cdrecord. Second, actually writing the data track has so far given me only errors.
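
By way of illustration only (the names here are hypothetical, not the actual morituri code), wrapping dd in a Task using Twisted's process support could look roughly like this:

from twisted.internet import protocol, reactor

class DDProtocol(protocol.ProcessProtocol):
    # dd prints its 'N bytes ... copied' summary on stderr when
    # it finishes (or when it receives SIGUSR1 on Linux)
    def __init__(self, task):
        self.task = task
        self.output = []

    def errReceived(self, data):
        self.output.append(data)

    def processEnded(self, reason):
        # hand the collected output to the task for parsing
        self.task.parseOutput(''.join(self.output))

def readDataTrack(task, device, path):
    reactor.spawnProcess(DDProtocol(task), 'dd',
        ['dd', 'if=%s' % device, 'of=%s' % path, 'bs=2048'])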

I have been able to rewrite the .iso file into one that can be mounted, so I might have to do the actual disc writing by first decoding everything to a single file and then writing from that. We'll see.

On the other hand, with tag reading and writing tasks now written, I might start using those in a separate application to finally start managing my music across my different machines.

National in London

Filed under: Music — Thomas @ 16:11


Yeah baby!

Here was my script for the day:


while true; do urlwatch | grep NEW && gst-launch playbin uri=file:///home/thomas/The\ National\ -\ Afraid\ of\ Everyone.mp3; TZ=EST date; sleep 15; done

The script was watching a page on the website of the Royal Albert Hall in London, where some additional tickets for The National on May 6th were supposed to go on sale. If it detected changes, it was supposed to play the latest new track from the album they're about to release...

For some strange reason, urlwatch actually failed to notify me, but luckily I was doing some regular refreshes as well, and now I have two tickets. After the two earlier sales were completely botched, this time the tickets went on sale more or less at the right time, with few people knowing about it, which increased my chances of actually getting them.

Finally I have something to look forward to... Now to find someone who wants to go too.

Twisted PB to JSON-RPC bridge

Filed under: Hacking,Python,Twisted — Thomas @ 11:58


Day two of the internal platform training sessions. Today is hacking and bugfixing day.

I wanted to take a stab at the task of creating an RPC interface to expose our Perspective Broker interface. I have very little experience with RPC systems (apart from PB, and moap's use of Trac's RPC) so this was a good opportunity to get my feet wet.

Smart hacking is lazy hacking, so I started by Googling. After some false positives, I found a Twisted JSON-RPC project, and since it was maintained by Duncan McGreggor it gave me hope that it would work.

And so it did. I wrote a simple adapter object that takes an instance of a PB root and proxies jsonrpc_* calls to remote_* calls.
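
The core really is small. Here's a simplified sketch of the idea (the real code is in the repository linked below), assuming txjsonrpc resolves methods by looking up attributes with a jsonrpc_ prefix:

from txjsonrpc.web import jsonrpc

class JSONRPCBridge(jsonrpc.JSONRPC):
    # proxy jsonrpc_* attribute lookups to remote_* methods
    # on the wrapped PB root
    def __init__(self, root):
        jsonrpc.JSONRPC.__init__(self)
        self.root = root

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        if name.startswith('jsonrpc_'):
            return getattr(self.root, 'remote_' + name[len('jsonrpc_'):])
        raise AttributeError(name)

Wrap an instance in a twisted.web server.Site, hand that to reactor.listenTCP, and you have a JSON-RPC view on the PB root.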

This is just a proof of concept; obviously there are many caveats. The main one is that currently you can only use it for remote_ calls involving simple objects that txjsonrpc supports (although it looks like it supports deferreds, for example, so sweet).

Obviously, one of the attractions of PB is that you can transfer objects. At the least this bridge could be extended to support getting references to objects and then invoking methods on them or passing them as arguments.
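
One rough, untested shape for that (all names hypothetical): keep a registry of handed-out references, return an opaque handle over JSON-RPC, and expose a generic call method that dispatches on the handle:

class ReferenceRegistry(object):
    # hand out integer handles for object references so a
    # JSON-RPC client can invoke methods on them later
    def __init__(self):
        self.refs = {}
        self.serial = 0

    def register(self, obj):
        self.serial += 1
        self.refs[self.serial] = obj
        return self.serial

    def call(self, handle, name, *args, **kwargs):
        # dispatch to the remote_ method on the referenced object
        return getattr(self.refs[handle], 'remote_' + name)(*args, **kwargs)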

In any case, good enough for a one hour hack.

The code is in my tests repository and you can check it out with

svn co https://thomas.apestaart.org/thomas/svn/tests/twisted/pb2jr

As for actually using it: after writing it, Jan and I discussed how it should be used, and he told me he doesn't actually want to run this in the same process as our PB interface. Instead he wants it to act as a proxy, which would at best mean doing code inspection or importing the PB Root in the proxy to provide the JSON-RPC interface dynamically, and you couldn't be sure that the running code's PB Root is the same as whatever you have on disk in your proxy.

I'm not very convinced about the coupling argument, because this is just a thin layer on top of the PB server, and everything will end up going through the PB server anyway, so I don't see any gain from decoupling here.

I lost interest while discussing, so I'm going to leave it in its current state for now, although I would have preferred to just plow ahead, do some of the cooler introspection bits, provide a web UI to look at methods and invoke them, and so on.

NetworkManager for server-type machines?

Filed under: General — Thomas @ 22:47

2010-04-07

Fedora comes with NetworkManager enabled out of the box. AFAICT it starts the network as part of a service script in init.d, and then, after login, it reconnects as the user. At least, that's what it looks like to me.

The problem is, I just came home, and for some reason the media server was down. The media server also has the primary of my new DRBD setup, and bringing the machine back up didn't bring back the DRBD sync.

Digging deeper into /var/log/messages, it turned out the network wasn't active when drbd got started, and so it failed to connect to the peer.

Obviously, if this machine reboots I want drbd to Just Work.

NetworkManager's init script is S27 and drbd's is S70, so I would have expected the network to be up by the time drbd kicks in; it looks like that wasn't the case. And I can foresee situations where NetworkManager reconnecting after my (automatic) user login doesn't help either, if anything in the background is trying to connect to the network.

What should I be doing instead on a machine like this? Remove NetworkManager entirely? Is that even still possible today? How do you set it up?

Home storage strategy

Filed under: Hacking — Thomas @ 23:54

2010-04-05

Drives fail.

Somehow they fail more often around me.

I used to swear by software RAID-1. I loved it because if either disk fails, the other disk still contains all the data and you can get to it and copy it off and so on.

Except that in reality, other things get in the way when something fails. Drives may fail at the same time (because of a bad power supply, a power surge, a computer failure, ...). A particular low point was the time my home server's RAID had a failing disk (which had all my backups on it). I took out the drive and added a new one to start copying, with both drives hanging a little outside the computer case; at precisely that point, a motherboard box dropped out of the shelves two meters up and landed with one of its corners right on top of the good RAID drive, breaking it. I lost all my backups.

So I learned the hard way that most problems with RAID happen precisely when you need to physically manipulate the system because one of the drives failed.

Ever since, I've been wondering how to do my storage and backup strategy better. I have some other ideas on how I want to back up my stuff from four computers in two different locations (three if you count 'laptop' as one of them), depending on what types of files they are (I need to make sure I don't lose my pictures, while most music I can re-rip if I really have to); that should go in a different post (and if you know of any good descriptions of home backup approaches for all these files we seem to collect, please share!).

But it's clear that part of the solution should involve storing files across more than one computer, preferably in a transparent way. I thought about setting up one drive in one machine and then doing a nightly rsync or dirvish snapshot of that drive, but with lots of big media files moving around it might not be the best solution.

So I was excited when I came across drbd, which implements a distributed replicated block device that mirrors writes to a secondary disk over the network.

I got two new 2TB drives last week, installed them in my media server and home server (which took quite some fiddling, including blowing out a power supply that exploded in a green flash, sigh), read through the pretty clear and decent docs, and after 50 hours of syncing empty disks, I now have this:

[root@davedina thomas]# cat /proc/drbd
version: 8.3.6 (api:88/proto:86-91)
GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by bachbuilder@, 2010-02-12 18:23:20

1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
ns:1905605948 nr:0 dw:4 dr:1905606717 al:0 bm:116305 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@davedina thomas]# df --si /mnt/split/
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1 2.0T 206M 1.9T 1% /mnt/split

Sweet! Now to actually copy some files and run some failover tests before trusting my important files to it...
