A few weeks ago my root hard drive died on the media machine at home. Time to do the upgrade dance on a new drive. I jumped from Fedora 11 to Fedora 14.
Boy was I in for a surprise display-wise – it felt like it was 2001 all over again.
I connect the media machine with an NVidia GeForce 6200 card and its VGA output to the big screen TV. That worked fine before, albeit with the proprietary NVidia drivers. I don’t use the DVI output because I don’t have a cable.
So, the monitor preferences only showed resolutions up to 1024×768, where Fedora 11 had no problem doing 1920×1080. I fiddled with xrandr, adding modelines, but didn’t find anything that worked well. It was a bit of a pain too; you’re supposed to be able to delete modes you added, but I just got
$ xrandr --rmmode "ATSC-1080-60p"
X Error of failed request: BadAccess (attempt to access private resource denied)
Major opcode of failed request: 149 (RANDR)
Minor opcode of failed request: 17 (RRDestroyMode)
Serial number of failed request: 27
Current serial number in output stream: 28
whenever I tried.
In the end I created a little script that made testing and adjusting modelines easier for me, like so:
#!/bin/sh
# pass the mode name as the first argument
MODE=${1:-ATSC-1080-60p}
xrandr --newmode "$MODE" 148.5 1920 2000 2056 2200 1080 1082 1088 1125
xrandr --addmode VGA-1 "$MODE"
xrandr --output VGA-1 --mode "$MODE"
I tried to install the nvidia drivers from rpmforge. Sadly the latest kernel oopses on this machine (not sure yet why), and there were no built modules for the original Fedora 14 kernel release. After I realized that all older kernels are removed from updates but can still be gotten from Koji, the build system, I was on my way to rebooting into a working kernel with nvidia drivers installed.
Except that those only found 640×480 and 320×240 resolutions. And adding modelines using xrandr doesn’t even work there.
Remove all nvidia drivers, reboot with the nouveau driver enabled, and tinker some more. None of the lines in this MythTV modeline database for Sonys actually worked. The ones I generated with cvt or gtf were displaced way off to the right.
Eventually I stumbled upon this HTPC howto with an ATSC-1080-60p modeline that almost worked – the image was just slightly to the right. So, re-reading ESR’s XFree86 modeline howto (after ten years or so ?) helped me do the final adjustments. Now just to make the settings permanent.
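For the record, one way to make such a mode permanent (assuming a classic xorg.conf setup; the timings below are the ones from my test script, not necessarily the final adjusted values, and the Identifier is illustrative) is a Monitor section along these lines:

```
Section "Monitor"
    Identifier "TV"
    # the ATSC-1080-60p modeline, timings as in the test script
    Modeline   "ATSC-1080-60p" 148.5 1920 2000 2056 2200 1080 1082 1088 1125
    Option     "PreferredMode" "ATSC-1080-60p"
EndSection
```

The Identifier has to match whatever the Screen section references as its Monitor, of course.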
Of course, the proper fix would just have been to plug in a DVI to HDMI cable, and rely on EDID (which I assume works). Haven’t bought the cable yet though. Neither my Sony TV nor my Sony amplifier have a DVI input, and I don’t know of a way to pull in digital sound through a DVI to HDMI converter.
But I do wonder why the system was able to automatically detect and go to 1920×1080 in my previous (but broken) Fedora 11 setup…
Six years ago, before I moved to Barcelona, my digital music collection was well-organized and simple.
I had my CD’s. And I had a copy of all of them, in Ogg, on my Dave/Dina box. I had very few downloaded tracks, and I didn’t really listen to those much. All my music on Dave/Dina was tracked in DAD, a project I did with my former housemate Kristof.
Life was great. Every new CD was ripped directly in Dave/Dina, imported in DAD, and from there it could be rated. So each track was immediately available for the dynamic playlists DAD generated. Those playlists were then played directly on the Dave/Dina box attached to the living room stereo, my desktop, or the kitchen or bathroom computers (small Compaq IA1 machines from the Golden Bubble days).
Today, it’s all a mess.
I have music (ripped, bought online, downloaded, or copied) on the following devices:
- My elisa machine (which holds all the old Dave/Dina content)
- My home desktop
- My laptop
- My work desktop
- My Nokia N800
- My Cowon A3 media player
- Kristien’s iPod
I haven’t ripped a single CD in the last 4 years since I stopped working on Dave/Dina, so these days I also listen to CD’s on either our small portable stereo or the PS3. I have 200+ CD’s still waiting to be ripped.
So my music listening has become erratic: I listen either to the old ‘good’ playlist from Dave/Dina, which hasn’t changed in the last six years and, while good, is getting stale; or to whatever I recently downloaded for a specific album. But random play is terrible when going through those directories, and of course each album is on some different machine or device.
The last two years, I’ve grown more and more annoyed at this situation. So one of my goals for 2009 was to finally *do* something about it. I realize that music is one of the things I love most in life, and my life would be better with the music I buy and find in it as soon as I have it. So what little hacking time I have left before real life begins (you know, kids and stuff) is going to go into code that will make my music experience better.
Having goals is a good way to direct your hacking. I’ve come up with five major projects I need to work on to get my music where I want it to be. All of these are projects I’ve had thoughts on in the past, but never really got to. Over the last few years, however, a lot of new ideas and technologies have arrived that would help a lot now.
1. Re-rip all my CD’s in a lossless format, with perfect quality, according to a certain website’s standard
2. Find a replacement for DAD, or make one. It should be able to track rips, tracks, different encodings of the same recording, different versions of tracks, parts of tracks (hidden tracks for example), and different collections across devices that it should be able to synchronize. Think ‘put 10 GB of the best songs on my N800’, and each time I’d connect my N800 it would automatically add new ones and remove old ones.
3. Improve on the rating system DAD used to have, reusing ideas from a project at the radio stations all those years ago. Have a bunch of fuzzy parameters on each track which would allow much richer controls for song selection. Possibly turn it into a collaborative website if it makes sense. This hinges on uniquely identifying each track, for which acoustic fingerprinting would be a good solution. Basically, I want to use the power of the web and the music lovers to improve song selection. last.fm and pandora are going in the right direction, but don’t really satisfy me.
4. Write a player that does the automatic mixing the way Dave/Dina used to, or better.
5. Make a LEGO Mindstorms robot to automatically rip all my CD’s again
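The synchronization idea in (2), ‘put 10 GB of the best songs on my N800’, boils down to picking tracks under a size budget. Here is a minimal Python sketch of that idea, with made-up track names, ratings and sizes; a real version would of course use the fuzzy parameters from (3) instead of a single rating:

```python
def pick_tracks(tracks, budget_bytes):
    """Greedily pick the highest-rated tracks that fit the budget.

    tracks: list of (name, rating, size_bytes) tuples.
    Returns the names of the selected tracks.
    """
    selected = []
    used = 0
    # consider the best-rated tracks first
    for name, rating, size in sorted(tracks, key=lambda t: -t[1]):
        if used + size <= budget_bytes:
            selected.append(name)
            used += size
    return selected

# hypothetical library: (name, rating, size in bytes)
library = [
    ("mediocre.ogg", 2, 4000000),
    ("great.ogg", 5, 6000000),
    ("good.ogg", 4, 5000000),
]
print(pick_tracks(library, 10000000))
```

On each connect you would diff the selection against what is already on the device, copy the new picks and remove the stale ones.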
I’ve been tackling each of these separately, which turned out to actually be a good thing. Each time I’m stuck on one of them, I can work on any of the others. For example, I’ve been stuck on (4) for a long time, waiting for Edward to fix some bugs in gnonlin, so I switched to (1), writing code to parse .CUE files, implement CDDB disc id calculation, and AccurateRip verification of ripped images.
I also waited on a friend who I worked with at said radio station to confirm that he doesn’t have any backups either of the database for (3).
Since this is what I’ll be hacking on in my spare time for the foreseeable future, I’ll probably blog about the different pieces as well. I’ll start with some more technical information on (1), the ripping part, which I’ve been working on over the last month, in a separate post.
But man, I look forward to ripping my CD’s from the last four years and actually listening to those songs regularly, rather than once in a while.
Isn’t it amazing how our parents actually had to get up from their desk, go over to the turntable, and actually flip an LP over if they wanted to hear something else for 20 minutes ?
(For the lazy or impatient, eyecandy links at the bottom)
Weeks like this are hard. For some reason, I can’t get to bed when I want to and end up getting up a few times and trying to go back to sleep. For some reason my head keeps spinning and I keep thinking about stuff that needs doing. This week was supposed to be a week with a little more focus on Elisa, and while I assisted in some discussions, I have not been able to do much yet. Every day it seemed there was some problem with something someone was doing that needed some assistance, or a bunch of meetings to be in.
Days like those that sap my energy and consume my time without really allowing me to make progress on my TODO list are black karma. So I try to offset them by increasing my white karma, trying to help someone else do something. I was feeling guilty about never being able to sit down with Philippe and figure out his needs for the new Elisa website, so when I finally got back to my desk after meeting X I saw the following on IRC:
<philn> MikeS: is it easy to do screencasting with flumotion? because i need to make a new one for elisa
I have been wondering the same thing and thinking it should be easy, but never got round to trying it. And lately I’ve been wanting to do screencasts of some of my projects, like Flumotion and moap. While I hope Istanbul will work for me sometime in the future, I am not entirely sure it is a good approach for screencasting Elisa, because Istanbul encodes to Theora on the same machine, so it would not leave much CPU to show off the fancy Elisa effects.
So I decided to give it my best shot and stay at work until I got this working.
So, the first step was simple enough. Remember how GStreamer has this crack gst-launch command that we advocate as being useful for prototyping, but that people keep treating as a real player application and spawn it from other programs ? Havoc used to poke fun at blog posts saying “See how easy it is to do stuff with GStreamer ? This pipeline walks your dog and fills up your fridge” and then some random hodgepodge of characters, slashes, numbers and exclamation marks.
Well, good news – flumotion has the same insanity ! There is flumotion-inspect to tell you what components there are, and there is flumotion-launch to launch flumotion flows for rapid prototyping. (And yes, on our side, we still really mean “rapid prototyping”.)
So, first stab:
flumotion-launch -d 4 pipeline-producer pipeline="ximagesrc" ! theora-encoder ! ogg-muxer ! disk-consumer directory=`pwd`
Well, that starts up, does stuff, then quickly consumes a lot of CPU. When I stop it, I have a file in my local directory that plays back my desktop. Most players crash on it because XVideo doesn’t like 1600×1050 videos. So let’s limit the size a little.
flumotion-launch -d 4 pipeline-producer pipeline="ximagesrc endx=800 endy=600" ! theora-encoder ! ogg-muxer ! disk-consumer directory=`pwd`
OK, that gives me a video of the size I want, and works reasonably well. At this point we are still running on only one machine though. That means that our machine is doing the theora encoding, which is a heavy load to carry. flumotion-launch is only for prototyping, and doesn’t have a way of distributing components across machines.
So, next step. I take a spare machine in the office which will do theora encoding for me. I start a manager on it, and a worker.
I connect to the spare machine from my own machine using flumotion-admin, and the wizard pops up. I go through the wizard, choosing the videotest producer for video, and the soundcard for audio. I pick Theora bitrate settings, and choose to save the stream to disk instead of streaming it. At the end, I export the configuration to disk, and I start editing it.
There are three things I need to do. For the components that I want to run on my own desktop, I set the worker name to “producer”. For the others, I leave the worker name as it was. The second thing I need to do is replace the videotest-producer section with the pipeline producer component. And the third is to start a worker on my local desktop with the name set to “producer”, and make it connect to the remote manager.
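For illustration, the edit looks roughly like this in the exported planet XML, as far as I remember the format (component names, types and properties here are a sketch, not the exact file the wizard writes):

```xml
<planet>
  <flow name="default">
    <!-- runs on my desktop, on the worker I named "producer" -->
    <component name="video-producer" type="pipeline-producer"
               worker="producer">
      <property name="pipeline">ximagesrc endx=800 endy=600</property>
    </component>
    <!-- the encoder keeps running on the spare machine's worker -->
    <component name="video-encoder" type="theora-encoder"
               worker="default">
      <!-- bitrate etc. as written out by the wizard -->
    </component>
  </flow>
</planet>
```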
After this, I stop the flow in the admin client, clean out all components, and load the new configuration. The flow starts up, and all components turn happy. My memory usage is increasing steadily and then dropping again. Something is probably up, and in the latest version of Flumotion I have some additional things I can look at to help me find problems. I can look at the video-producer component, which is producing a raw video feed, and see if the next component (the encoder) connecting to it over the network is losing any buffers. And yes, as expected, every so often it drops 450 buffers, which is what happens in Flumotion if a component further down the flow doesn’t read fast enough.
So, how do I speed things up ? There are three things I can do. I can reduce the framerate, but I would like to show the fluid animations. I can reduce the size a little, so I change to 640×480. But a good reduction in bandwidth can be achieved by already converting the image format in the producer component from raw RGB to I420, a different colorspace that Theora will use anyway and that uses fewer bytes per frame.
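The colorspace argument is simple arithmetic: raw RGB is 3 bytes per pixel, while I420 stores a full-resolution luma plane plus two quarter-resolution chroma planes, i.e. 1.5 bytes per pixel, so converting in the producer halves the raw feed going over the network:

```python
def bytes_per_frame(width, height, bytes_per_pixel):
    """Size of one uncompressed video frame."""
    return int(width * height * bytes_per_pixel)

# 640x480: raw RGB vs I420 (4:2:0 planar YUV)
rgb = bytes_per_frame(640, 480, 3)     # 24 bits per pixel
i420 = bytes_per_frame(640, 480, 1.5)  # 12 bits per pixel
print(rgb, i420, rgb // i420)
```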
Making these changes allowed me to do a reasonably good capture without dropping too many frames. The only problem is, I’m still doing a screencast of the top left part of my desktop. I just want to screencast the Elisa application and nothing else. Here’s where I cheated a little – I started offsetting the ximagesrc coordinates to make sure the capture fell inside the Elisa application window. This makes it look like it runs fullscreen.
So, reload config, restart flow, start Elisa, and do some stuff in it. This is my first screencast, and I was focusing on how to make a screencast, not what to show off in a screencast. I’m sure Philippe will do a better job tomorrow if he wants to give this a try.
So without further ado, here is a link to the Elisa screencast (using Cortado to watch it), or a link to a playlist to open in a player, or a link to the Ogg file directly.
And here is the Flumotion config file.
Bug I noticed in Flumotion when trying this: having the manager on the producer machine confused the worker on the encoder machine.
Bugs I noticed in Elisa: it doesn’t think my Ogg video files are in fact video; and playing a folder of images while playing music makes Elisa stop accepting input events.
Todo for Flumotion: add screencasting to the scenario.
By the way, Philippe will be talking about Elisa at FOSDEM in Brussels, Belgium, this weekend, so if you are there, be sure to catch your dose of eyecandy. I will be there as well, as will Edward, Andy and Florian.
Mission accomplished, 23:07, time to go home.
So, after waiting for months and months on a DirectFB backend for Elisa so I could use my home Dave/Dina machine with it (it has a venerable Matrox G550, which is the best type of card to use with DirectFB), I caved in last week and joined the dark side by buying an NVidia GeForce FX 5200 card and using the binary-only drivers. It took some tweaking and digging, but I managed to get the card to a) output over TV-out, b) look good enough to be comparable to the Matrox video output, and c) avoid any tearing effects by enabling all the VSYNC options the tools provided me.
Loic recommended a higher-numbered card, but all cards in the series he recommended have fans, except for one particular model that you have to mail-order from Asia. I want as few fans as possible in my media box.
I hate saying this, but the NVidia configuration tools actually work pretty damn well once you get the hang of them. Most changes are done on the fly (without restarting X), which is a welcome change from the usual hackery. And when I read the README for these drivers and look at the huge amount of tweaking you can do with these cards (genlocking 4 cards together ? Are they serious ? Is there any open driver out there that can come close to offering something like this ?) I have some amount of respect for the NVidia Linux engineers. I hope the nouveau guys are going to hang in there and deliver on their mission goals, because they have their work cut out for them.
I’m not advocating closed drivers at all, but for the time being I have decided to be practical about this and start hacking on Elisa and actually use it at home, and the NVidia card I got achieves that for now. I could get an LCD TV to avoid having to mess with TV/out, but I feel that there is still a large group of users that want to use something like Elisa with their existing analog TV.
Ironically enough Julien started hacking on a DirectFB renderer for Elisa a few days after I got the NVidia card :)
After that, I upgraded my base distro from FC4 i386 to FC6 x86_64. Again I reacquainted myself with the painful process of getting LIRC to work. Every time I put some serious effort into getting the media box up to date, I am forced to deal with LIRC, and every time I wonder why this has to be so painful. Here are some of the things about lirc that bother me to no end since I started using it five years ago:
- Why is this stuff not in the kernel ?
- Why is it so incredibly hard to set up ?
- Why do all the configuration files for remotes use different namings for the same keys ? This should be a matter of policy dictated by the project. Why does one config file use “FastForward”, another “FWD”, and yet another “>>|” to mean the same thing, forcing someone to change the configuration of LIRC to work with their applications when they change the remote they use ?
- Why is there only a terrible command-line application to “train” your remote ? Elisa should be able to make a simple graphical remote trainer application that helps you set this up from scratch in a user-friendly way.
- The worst part for me is the fact that the iMon device used in my Silverstone media box comes with a dinky remote that has some sort of jog wheel able to simulate a mouse. Except the driver in lirc ignores that completely, so where every other remote has at the very least UP/DOWN/LEFT/RIGHT as part of the things it can generate, this one can’t. And then you get this forum where some guy posted an original patch to make the jog wheel generate up/down/left/right, and then you have a bunch of people modifying this patch, customizing it, and ripping out bits that fail to compile with later kernels. But this stuff never gets upstream.
- A year ago I took one of those patches and fixed the kernel module to be a lot more consistent about the events it generates. The kernel driver actually receives fine-grained x/y coordinates from the jog wheel, and then synthesizes the four directions from them. It was doing so with a pretty bad algorithm, making it really hard to consistently go in a given direction. I had a patch for that. Except it needed updating for the newer 2.6.19 kernel, and apparently some things have changed in the input device layer. Sometimes I don’t understand why it’s OK for something as basic as the kernel to change its internal API all the time, while something like GStreamer, and pretty much everything in the GNOME stack, is held up to such high API standards.
Anyway, after all the hardware hacking and distro updating, I have an elisa running at home, with the remote working, and able to show me Veronica Mars, Heroes and Battlestar Galactica.
Next tasks: push Fedora Extras packages, look at why subtitle rendering is so slow that a 2GHz Athlon64 cannot play the video fast enough from within Elisa, and take a look at the music parts.
I was dead tired this weekend so I didn’t feel like doing anything intellectually challenging. So I spent some time working some more on DAD. I actually quite enjoy working in PHP. The reason, I think, is that it’s nice to take a bunch of code that actually works, even though it might not be structured correctly yet, and restructure it. PHP allows you to do stuff very quickly, but also very ugly, and still have it work. Someone on IRC today said PHP is the BASIC of the web. Makes sense – lots of people learn PHP as their first language, and sometimes it shows.
Anyway, in the case of DAD, the code is quite good, but sometimes hackish in places. Sometimes Kristof just wanted to move quickly because I nagged for features. And when I tried something myself I didn’t know enough about the advanced concepts to do it correctly.
So this weekend I focused on writing a class for the concept of having a popup detailing progress while some background action is taking a bit of time. I ended up learning about sessions to do it nicely (a previous hack used two temp files to track progress and errors) and it was a lot easier than I had expected. I worked on the class in a test directory using a bunch of sleep()’s, forcing myself to get it exactly right first before integrating it.
And when that was done, integration was dead easy, the code looks nice, is well-documented, and can now be used to delete a few hundred lines of code. I love it when a plan comes together.
Meanwhile, I’ve started to think again about my plans for a world-wide audio database. Lots of projects already exist, and all of them have fundamental flaws in either design, setup or community. Each time I think about it, I seem to solve a few more conceptual problems, and actually start believing that one day my ideas might actually make sense. There are a bunch of tricky bits to get right, and the hard problem will be finding people in the beginning that a) love music enough to see the value; b) have technical skills and c) have the tenacity to work on it for some time before it starts to be usable.
In other words, the plan will probably involve me having money to give other people work after finding a way to make this be sensible from a business point of view…
But I still have time, and I’m not quite happy yet with what I have so far.