Had a migraine attack this weekend so I took it relatively easy on the hacking side. I finally finished setting up some new buildbots, and as part of that I rebuilt some packages for CentOS/RHEL 5.
I also updated my package page, which now has direct download links for release packages from my repository, to make it easier for you to install things from there if you're interested.
I use these repositories for prereleases and releases of the projects I work on, as well as packages that I have submitted to Fedora or otherwise needed and haven't had time to submit properly yet. You can still browse the repositories to take a peek.
This is the first release of hopefully many.
Morituri is a CD ripper that aims for quality over speed, offering features similar to Exact Audio Copy on Windows to produce rips that are as close to perfect as possible.
For more information, see the trac page.
The tao Fedora 11 repository has the release and dependencies.
Enjoy!
FEATURES
--------
- support for MusicBrainz for metadata lookup
- support for AccurateRip verification
- detects sample read offset of drives
- performs test and copy rip
- detects and rips Hidden Track One Audio
- templates for file and directory naming
- support for lossless encoding only, for now
- tagging using GStreamer
- for now, only a command line client (rip) is shipped
REQUIREMENTS
------------
- cdparanoia, for the actual ripping
- cdrdao, for session, TOC, pregap, and ISRC extraction
- GStreamer and its python bindings, for encoding
- python-musicbrainz2, for metadata lookup
- pycdio, for drive identification (optional)
For the last few months, news about streaming to iPhone 3.0 has been making the rounds. I held off commenting on it for a while since I hadn't actually looked into it much and didn't want to base anything on hearsay. And I don't even have - or want - an iPhone!
Last week I took some time to read the IETF draft and the Apple developer introduction.
On my next plane ride I quickly hacked together a simple segmenter in Python, and tried it out at work the next day; it sort of worked for about a minute.
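The segmenter itself isn't included in this post, but the core idea can be sketched. This is an illustrative reconstruction, not the actual code: an MPEG transport stream is a sequence of fixed-size 188-byte packets, so the most naive segmenter simply cuts the stream at packet boundaries every N packets. Cutting blindly like this ignores keyframes, which goes some way to explaining "sort of worked".

```python
# Naive MPEG-TS segmenter sketch (illustrative, not the code from the post).
# An MPEG transport stream is a series of fixed-size 188-byte packets, so
# the simplest segmenter just slices the byte stream at packet boundaries.

TS_PACKET_SIZE = 188


def segment_ts(data, packets_per_segment):
    """Split a packet-aligned transport stream into segments of at most
    packets_per_segment TS packets each."""
    if len(data) % TS_PACKET_SIZE != 0:
        raise ValueError("transport stream is not packet-aligned")
    step = TS_PACKET_SIZE * packets_per_segment
    return [data[i:i + step] for i in range(0, len(data), step)]
```

A real segmenter would additionally cut on time (roughly every 10 seconds) and, as described further down, only start a new segment on a non-delta unit so each segment begins with a keyframe.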
And yesterday evening, during Nerd Night, I changed my original plans (since Wiebe cancelled, I wasn't going to work on the Spykee robot yet) and decided to go back to the iPhone streaming hacking.
After tweaking mpegtsmux to do something useful with GStreamer's GST_BUFFER_FLAG_DELTA_UNIT, and teaching the segmenter to always start a new segment on a non-delta unit, I switched to a black videotestsrc with a timeoverlay (the normal pattern seems to trigger a weird bug in our H.264 encoder; I'll need some help from our Fluendo codec gurus for that). With that in place, I started a simple stream last night:
[screenshot]
I left it running for the night.
And this morning when I got up it was still going strong, so I let it pass the 10-hour mark:
[screenshot]
So, a good first step.
I hope to finish up some loose ends over the course of the week to make this work inside Flumotion.
I'll leave you with my first impressions on this Apple creation:
- Naming a draft 'HTTP Live Streaming', pretending this is something new after years of Shoutcast - Icecast - Flumotion, is either plain ignorance or typical Apple hubris. At least qualify the name with something like 'segmented', 'TS', or 'high-latency', Apple. Come on, play nice for once.
- The streaming system is very different from your typical streaming system. Effectively, this approach creates a live stream by segmenting a live feed into a sequence of MPEG Transport Stream segments at a regular interval. This has some benefits and drawbacks.
- The key concept is now the playlist file, an extension of .m3u called .m3u8. This playlist file is the entry point into the stream, as it lists the segments that make up the stream.
- This playlist file can reference other playlist files. This is what enables adaptive bandwidth streaming.
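To make the playlist concept concrete, here is what a minimal live playlist could look like according to the draft (the URIs, sequence number, and durations below are made up for the example):

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:7794
#EXTINF:10,
http://example.com/stream/segment7794.ts
#EXTINF:10,
http://example.com/stream/segment7795.ts
#EXTINF:10,
http://example.com/stream/segment7796.ts
```

For a live stream, the server keeps rewriting this file, appending new segments and dropping old ones; a variant playlist for adaptive bandwidth instead lists other playlists at different bitrates via #EXT-X-STREAM-INF entries.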
- One clear benefit that Apple was aiming for is that they effectively managed to separate the preparation part from the streaming part - the actual streaming can be handled by any old web server that can serve up files. I'm sure this is the main benefit they had in mind. The benefit is two-fold: first of all, it's easy and cheap to install web servers, and second, you get all the benefits of using a bog-standard protocol like HTTP: firewall acceptance, proxy and caching support, edge caching, ... Take for example the fact that a company like Akamai charges more for some streaming protocols because they have to deploy specific servers and can't use all their edge infrastructure for it.
- Another benefit is that you are generating the data for your live and ondemand streaming at the same time. The transport segments can be reused as is for ondemand .m3u8 streams. This blending of live and ondemand is something we started thinking about with the developers at Flumotion too.
- A third benefit is how easy this system would make it to do load balancing on a platform. In most streaming services, a connection is long-lived, and hard to migrate between servers. Since in Apple's live HTTP streaming the stream consists of several short files, you can switch servers by updating the playlists, effectively migrating the streaming sessions to another machine within a minute.
- As for drawbacks, the biggest drawback I see is the latency. In this system, the latency is at least the segmentation interval times three. This is because the playlist should only contain finished segments, and the spec mandates that the player have at least three segments loaded (one playing, two preloaded) to work. So, the recommended interval of 10 seconds gives you at best a 30 second latency. I don't really understand why they didn't work around this limitation somehow (for example, by allowing a growing transport stream in the playlist, marked as such, or referencing future files, marked as such), because this is where live iPhone streaming is going to catch the biggest amount of flak, if our customers' opinion about latency in general is anything to go by.
- Another possible drawback is the typical problem with most HTTP streaming systems - no synchronization of server and client clocks. Computer clocks typically don't match in speed, so in practice this usually means that the client's buffer will eventually underrun (causing pauses) or overrun (usually causing players to stop). In practice this is not that big of a deal, and I doubt on the iPhone sessions will be long enough to really make this a problem.
Whether this will become a general-purpose streaming protocol remains to be seen. I would assume that Apple is at least going to make this work in a future update of OS X. For us, though, it is an exciting development, allowing us to showcase the flexibility of our design against this new protocol. And while I saw some fellow GStreamer developers griping about this new way of streaming, there too it should be seen as an advantage: in theory at least, GStreamer's flexible design should make it possible to write a source element for this protocol that abstracts away the streaming implementation and just feeds out the re-assembled transport stream, much like a dvb or firewire element would.
This morning Carl (who just became a father for the second time) called to ask if I was interested in going to see Leonard Cohen tonight. It took me all of 10 seconds to decide. I don't own much of Leonard Cohen's music, but I've always wanted to see him live, and I never tried to get tickets because it's the kind of four-hour-queue nightmare that has stopped me from seeing, say, U2 or Bruce Springsteen.
I was a bit worried that knowing only a few of the songs would detract from my enjoyment, but the opposite was true. He played all the songs I was hoping to hear, and the rest ranged from pretty good to stellar as well. So I need to dive deeper into his catalogue.
It probably helped the atmosphere that today was his 75th birthday, and it also didn't hurt that his Spanish guitar player was playing a home match. He put on a show of over three hours, with stellar renditions of personal favourites like "Take This Waltz" (with a perfect female voice too), "I'm Your Man", "Cure For Love", "Dance Me to the End of Love", and "First We Take Manhattan". An awesome show, and well worth the 60-euro impulse decision.
In sharp contrast to last Saturday's Archive show in Belgium. That one was planned well in advance, but started on the wrong foot: I had forgotten that I'd bought a ticket for my sister, who couldn't make it after all, and I couldn't find anyone to replace her. I ended up driving around for an hour just trying to find a parking space (I should have taken my bike, it would have been faster), and then didn't manage to sell the extra ticket.
As for the show, it seems Archive are reconnecting with their original hip-hop-influenced roots, even welcoming their original MC back into the collective. I personally prefer their more progressive/rock side. I didn't know the new album very well yet, and not many songs from my personal favourite album, 'You All Look the Same to Me', made the set list - except for the awesome encore of 'Again', the opener from that album, played for its full 15 minutes. They didn't play 'Fuck You', which was puzzling since it's a fan favourite. I thought the show was good, but it really hurt that I didn't know most of the songs.
In the end I paid the same price for Leonard Cohen, he played double the time, and I enjoyed it a lot more. Hope he comes around again!
Usually when I run out of space on my laptop I run du --max-depth=1 on my home directory, redirect the output to a file, wait, look at it, then drill down into the biggest directory and repeat. I find some stuff, delete some stuff, free up some space, then continue.
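That manual workflow can be compressed into a single pipeline; this is just a sketch of the approach described above, with the sort added so the biggest offender ends up at the bottom:

```shell
# Summarize the immediate subdirectories of the home directory by size
# (in kilobytes), sorted ascending, and show only the five largest.
du --max-depth=1 -k "$HOME" 2>/dev/null | sort -n | tail -n 5
```

Then rerun it on whichever directory tops the list, which is exactly the drill-down that baobab automates graphically.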
Each time I think 'there's got to be a better way to do this', and today I remembered the name 'baobab'. I was surprised to find out that a) this was already included by default in GNOME and b) already installed on my system. I didn't find it where the online docs said I would (on my F-11 system it's under System Tools instead of Accessories - I'm guessing that's a Fedora decision).
It took a while to run on my home directory (about 10 minutes, I think), but I only used it to drill down a few levels, and freed up 2.5 GB of space in under 5 minutes. With my manual system I'm lucky if I delete half a gigabyte in 15 minutes!
Excellent, excellent tool, and flawlessly executed. Apart from launching it I didn't need to learn anything, it just worked as expected, and it told me what I needed to know in less time than before. Fabio Marzocca, you're my hero of the week!