Present Perfect

website update

Filed under: General — Thomas @ 00:52

2007-02-28

Every once in a while I come back to thinking about what the ideal setup would be to manage and develop web sites. My wish list, as always, is:

  • have a separate development and live copy online
  • be able to keep a local development copy (on my host)
  • track these versions in a version control system
  • make it as easy as possible to update
  • make it as easy as possible to downgrade the live version back to a previously working version
  • ideally, all updates can be made without touching the server (if it's just code)

In the past, I mostly used a custom shell script to release from the dev version to the live version. Basically, it would copy over the whole dev tree, then it would copy over specific files in a specific subpath to make up the live tree.
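
The script itself was nothing fancy; roughly something like this (the paths and the "overrides" directory name here are made up for illustration):

  #!/bin/sh
  # copy the whole dev tree over the live tree...
  rsync -av ~/websites/dev/ ~/websites/live/
  # ...then overlay the handful of live-specific files kept in a separate subpath
  cp -a ~/websites/live-overrides/. ~/websites/live/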

This has served me well in the past, but in the end it is too much hassle and I rarely updated my website because of it. Doing a "release" of a website does not reflect the real-world need of continuously updating websites.

I finally decided to do the most obvious thing, but with a twist:

  • my subversion repository has a dev/ and www/ subdirectory
  • www/ is branched from dev/
  • post-commit hooks on the server automatically update either dev/ or www/ depending on which got committed to (using svnlook; a minimal sketch of such a hook follows below)
  • I use svnmerge.py - and only svnmerge.py ! - to merge groups of changes from dev to www. I only pull a complete set of patches from dev so that every commit to www results in a working site.
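
To give an idea, the post-commit hook can be as small as this sketch (the checkout paths are made up; svnlook dirs-changed does the detection):

  #!/bin/sh
  # hooks/post-commit - update the matching working copy after each commit
  # (the checkout paths below are invented for this sketch)
  REPOS="$1"
  REV="$2"

  if svnlook dirs-changed -r "$REV" "$REPOS" | grep -q '^dev/'; then
    svn update -q /var/www/dev
  fi
  if svnlook dirs-changed -r "$REV" "$REPOS" | grep -q '^www/'; then
    svn update -q /var/www/www
  fi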

So, the workflow is now much simpler.

  • Hack as much as I want on a local copy, in a local checkout, on a local machine, inside the dev/ tree
  • When I want to test on the live machine, commit to the dev/ tree and check the auto-updated online copy
  • Repeat until the online copy works
  • svnmerge.py merge on the www/ branch to pull changes from the dev/ branch
  • svn commit -F svnmerge-commit-message.txt to update the live copy
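
In terms of actual commands, those last two steps look roughly like this, assuming svnmerge tracking was initialized once in the www/ checkout with svnmerge.py init:

  # inside a checkout of www/
  svnmerge.py avail      # list revisions on dev/ that have not been merged yet
  svnmerge.py merge      # pull everything that is available from dev/
  svn commit -F svnmerge-commit-message.txt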

Update: just used this complete system to hack in my first actual change to the website.  For the longest time, my lists had no markers: I had removed them using CSS because I disliked having them on the right where the categories list is shown.  My lack of CSS skills combined with the arduous hacking cycle stopped me from getting it right.  So yes, this system is much easier.

Elisa

Filed under: Dave/Dina,Fluendo,Hacking — Thomas @ 23:15

2007-02-21

(For the lazy or impatient, eyecandy links at the bottom)

Weeks like this are hard. For some reason, I can't get to bed when I want to and end up getting up a few times and trying to go back to sleep. For some reason my head keeps spinning and I keep thinking about stuff that needs doing. This week was supposed to be a week with a little more focus on Elisa, and while I assisted in some discussions, I have not been able to do much yet. Every day it seemed there was some problem with something someone was doing that needed some assistance, or a bunch of meetings to be in.

Days like those that sap my energy and consume my time without really allowing me to make progress on my TODO list are black karma. So I try to offset them by increasing my white karma, trying to help someone else out with something. I was feeling guilty about never being able to sit down with Philippe and figure out his needs for the new Elisa website, so when I finally got back to my desk after meeting X I saw the following on IRC:
<philn> MikeS: is it easy to do screencasting with flumotion? because i need to make a new one for elisa

I have been wondering the same thing and thinking it should be easy, but never got round to trying it. And lately I've been wanting to do screencasts of some of my projects, like Flumotion and moap. While I hope Istanbul will work for me sometime in the future, I am not entirely sure it is a good approach for screencasting Elisa, because Istanbul encodes to Theora on the same machine, so it would not leave much CPU to show off the fancy Elisa effects.

So I decided to give it my best shot and stay at work until I got this working.

So, the first step was simple enough. Remember how GStreamer has this crack gst-launch command that we advocate as being useful for prototyping, but that people keep treating as a real player application and spawn it from other programs ? Havoc used to poke fun at blog posts saying "See how easy it is to do stuff with GStreamer ? This pipeline walks your dog and fills up your fridge" and then some random hodgepodge of characters, slashes, numbers and exclamation marks.
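
For comparison, a single-machine screencast in plain GStreamer would be something along the lines of this (0.10-era element names; the caps may need tweaking):

  gst-launch ximagesrc ! video/x-raw-rgb,framerate=10/1 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=screencast.ogg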

Well, good news - flumotion has the same insanity ! There is flumotion-inspect to tell you what components there are, and there is flumotion-launch to launch flumotion flows for rapid prototyping. (And yes, on our side, we still really mean "rapid prototyping".)

So, first stab:

flumotion-launch -d 4 pipeline-producer pipeline="ximagesrc" ! theora-encoder ! ogg-muxer ! disk-consumer directory=`pwd`

Well, that starts up, does stuff, then quickly consumes a lot of CPU. When I stop it, I have a file in my local directory that plays back my desktop. Most players crash on it because XVideo doesn't like 1600x1050 videos. So let's limit the size a little.


flumotion-launch -d 4 pipeline-producer pipeline="ximagesrc endx=800 endy=600" ! theora-encoder ! ogg-muxer ! disk-consumer directory=`pwd`

OK, that gives me a video of the size I want, and works reasonably well. At this point we are still running on only one machine though. That means that our machine is doing the theora encoding, which is a heavy load to carry. flumotion-launch is only for prototyping, and doesn't have a way of distributing components across machines.

So, next step. I take a spare machine in the office which will do theora encoding for me. I start a manager on it, and a worker.
I connect to the spare machine from my own machine using flumotion-admin, and the wizard pops up. I go through the wizard, choosing the videotest producer for video, and the soundcard for audio. I pick Theora bitrate settings, and choose to save the stream to disk instead of streaming it. At the end, I export the configuration to disk, and I start editing it.

There are three things I need to do. For the components that I want to run on my own desktop, I set the worker name to "producer". For the others, I leave the worker name to what it was. The second thing I need to do is replace the videotest-producer section with the pipeline producer component. And the third thing I need to do is to start a worker on my local desktop with the name set to "producer", and make it connect to the remote manager.

After this, I stop the flow in the admin client, clean out all components, and load the new configuration. The flow starts up, and all components turn happy. My memory usage is increasing steadily and then dropping again. Something is probably up, and in the latest version of Flumotion I have some additional things I can look at to help me find problems. I can look at the video-producer component, which is producing a raw video feed, and see if the next component (the encoder) connecting to it over the network is losing any buffers. And yes, as expected, every so often it drops 450 buffers, which is what happens in Flumotion if a component further down the flow doesn't read fast enough.

So, how do I speed things up ? There are three things I can do. I can reduce the framerate, but I would like to show the fluid animations. I can reduce the size a little, so I change to 640x480. But a good reduction in bandwidth can be achieved by already converting the image format in the producer component from raw RGB to I420, a different colorspace that Theora will use anyway and that uses fewer bytes per frame.
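
Concretely, the pipeline property on the producer component ends up as something like this (from memory, so the exact caps syntax may be slightly off):

  ximagesrc endx=640 endy=480 ! ffmpegcolorspace ! video/x-raw-yuv,format=(fourcc)I420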

Making these changes allowed me to do a reasonably good capture without dropping too many frames. The only problem is, I'm still doing a screencast of the top left part of my desktop. I just want to screencast the Elisa application and nothing else. Here's where I cheated a little - I started offsetting the ximagesrc coordinates to make sure the capture fell inside the Elisa application window. This makes it look like it runs fullscreen.
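
With the offsets, that same pipeline string just grows a startx/starty (the numbers here are invented; whatever rectangle falls inside the Elisa window works):

  ximagesrc startx=400 starty=200 endx=1040 endy=680 ! ffmpegcolorspace ! video/x-raw-yuv,format=(fourcc)I420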

So, reload config, restart flow, start Elisa, and do some stuff in it. This is my first screencast, and I was focusing on how to make a screencast, not what to show off in a screencast. I'm sure Philippe will do a better job tomorrow if he wants to give this a try.

So without further ado, here is a link to the Elisa screencast (using Cortado to watch it), or a link to a playlist to open in a player, or a link to the Ogg file directly.

And here is the Flumotion config file.

Bug I noticed in Flumotion when trying this: having the manager on the producer machine confused the worker on the encoder machine.

Bugs in Elisa I noticed: Elisa doesn't think my Ogg video files are in fact video; playing a folder of images while playing music makes Elisa stop accepting input events.

Todo for Flumotion: add screencasting to the scenario.

By the way, Philippe will be talking about Elisa at FOSDEM in Brussels, Belgium, this weekend, so if you are there, be sure to catch your dose of eyecandy. I will be there as well, as will Edward, Andy and Florian.

Mission accomplished, 23:07, time to go home.

elisa and lirc hacking

Filed under: Dave/Dina,Fluendo,Hacking — Thomas @ 00:21

2007-02-20

So, after waiting for months and months on a DirectFB backend for Elisa so I could use my home Dave/Dina machine with it (it has a venerable Matrox G550, which is the best type of card to use with DirectFB), I caved in last week and joined the dark side by buying an NVidia GeForce FX 5200 card and using the binary-only drivers. It took some tweaking and digging, but I managed to get the card to a) output over TV/out, b) look good enough to be comparable to the Matrox video output, and c) avoid any tearing effects by enabling all the VSYNC options the tools provided me.

Loic recommended a higher-numbered card, but all cards in the series he recommended have fans, except for one particular model that you have to mail-order from Asia. I want as few fans as possible in my media box.

I hate saying this, but the NVidia configuration tools actually work pretty damn well once you get the hang of them. Most changes are done on the fly (without restarting X), which is a welcome change from the usual hackery. And when I read the README for these drivers and look at the huge amounts of tweaking you can do with these cards (genlocking 4 cards together ? Are they serious ? Is there any open driver out there that can come close to offering something like this ?) I have some amount of respect for the NVidia Linux engineers. I hope the nouveau guys are going to hang in there and deliver on their mission goals, because they have their work cut out for them.

I'm not advocating closed drivers at all, but for the time being I have decided to be practical about this and start hacking on Elisa and actually use it at home, and the NVidia card I got achieves that for now. I could get an LCD TV to avoid having to mess with TV/out, but I feel that there is still a large group of users that want to use something like Elisa with their existing analog TV.

Ironically enough Julien started hacking on a DirectFB renderer for Elisa a few days after I got the NVidia card :)

After that, I upgraded my base distro from FC4 i386 to FC6 x86_64. Again I reacquainted myself with the painful process of getting LIRC to work. Every time I put some serious effort into getting the media box up to date, I am forced to deal with LIRC, and every time I wonder why this has to be so painful. Here are some of the things about lirc that have bothered me to no end since I started using it five years ago:

  • Why is this stuff not in the kernel ?
  • Why is it so incredibly hard to set up ?
  • Why do all the configuration files for remotes use different names for the same keys ? This should be a matter of policy dictated by the project. Why does one config file use "FastForward", another "FWD", and yet another ">>|" to mean the same thing, forcing someone to change the configuration of LIRC to work with their applications when they change the remote they use ?
  • Why is there only a terrible command-line application to "train" your remote ? Elisa should be able to make a simple graphical remote trainer application that helps you set this up from scratch in a user-friendly way.
  • The worst part for me is the fact that this iMon device being used on my Silverstone media box comes with this dinky remote that has some sort of jogwheel that is able to simulate a mouse. Except the driver in lirc ignores that completely, so where every other remote has at the very least UP/DOWN/LEFT/RIGHT as part of the things it can generate, this one can't. And then you get this forum where some guy put an original patch to make the jog wheel be able to generate up/down/left/right, and then you have a bunch of people modifying this patch, customizing it, ripping out bits that fail to compile with later kernels, ... But this stuff never gets upstream.
  • A year ago I took one of those patches and fixed the kernel module to be a lot more consistent about the events it generates. The kernel driver actually receives fine-grained x/y coordinates from the jog wheel, and then synthesizes the four directions from this. It was doing so with a pretty bad algorithm, making it really hard to consistently go in a given direction. I had a patch for that. Except it needed updating for the newer 2.6.19 kernel, and apparently some things have changed in the input device layer. Sometimes I don't understand why it's ok for something as basic as the kernel to change its internal API all the time, while something like GStreamer, and pretty much everything in the GNOME stack, is forced to be held up to such high API standards.

Anyway, after all the hardware hacking and distro updating, I have Elisa running at home, with the remote working, and able to show me Veronica Mars, Heroes and Battlestar Galactica.

Next tasks: push Fedora Extras packages, look at why subtitle rendering is so slow that a 2GHz Athlon64 cannot play the video fast enough from within Elisa, and take a look at the music parts.

Things I don’t like to see when getting back to work

Filed under: Hacking — Thomas @ 10:59

2007-02-19

My screen not responding to the keyboard. Having to log in to it from another machine. Top telling me X is using 75% of my memory: 2 GB resident and 4 GB virtual. Having to kill firefox and instantly seeing the X memory use drop from 75% to 3.6%.

How long are we going to have to put up with Firefox's blatant abuse of memory ? Is it so hard to free a few pixmaps once in a while so that just moving my mouse pointer doesn't cause swap storms ? Shouldn't my packager override whatever insane default is causing firefox to ALWAYS HANG ON TO EVERY SINGLE BIT IT EVER ALLOCATES ? I think I'd gladly switch to whatever distro manages to do the right thing with firefox.

Update: I've been getting links from people to pages that explain "how to get Firefox's memory use under control." None of these pages address the issue that I seem to have - Firefox is causing X to hold on to way too much memory. Is it storing pixmaps on the X server ? I have no idea. But it is not simply a matter of having a cache of pages.

Regardless - pages that explain how to fix Firefox's memory are not the right solution. Firefox should be *sensible by default*. Whether they want to provide people options to tweak the size of the bullet, the acceleration of the bullet, and the metal alloy used by the bullet they shoot themselves in the foot with, is entirely up to them.

more Spain

Filed under: Spain — Thomas @ 11:10

2007-02-18

If your suit's no good - who you gonna call ?

Last Saturday I was picking up Kristien from the airport. I was driving down the long stretch of Marina, before going onto Litoral. I stopped for a traffic light, and while I was waiting for green, all the street lights were turning off and on in rapid succession, as if someone in the city's control room was rebooting the whole light system. This went on for as long as I was driving down Marina, and possibly longer. An eerie experience...
