
Present Perfect


iPhone 3.0 live HTTP streaming

Filed under: Belgium, Flumotion, Hacking, Nerd Night — Thomas @ 11:52 am

2009-9-26

The last few months news about streaming to iPhone 3.0 has been making the rounds. I’ve been holding off commenting on it for a while since I didn’t actually look into it much and didn’t want to base anything on hearsay. And I don’t even have – or want – an iPhone!

Last week I took some time to read the IETF draft and the Apple developer introduction.

On my next plane ride I quickly hacked together a simple segmenter in Python, and tried it the next day at work to see that it sort-of-worked for about a minute.

And yesterday evening, during Nerd Night, I changed my original plans (since Wiebe cancelled, I wasn’t going to work on the Spykee robot yet) and decided to go back to the iPhone streaming hacking.

After tweaking mpegtsmux to do something useful with GStreamer’s GST_BUFFER_FLAG_DELTA_UNIT, and after teaching the segmenter to always start a new segment on a non-delta unit, I also switched to a black videotestsrc with a timeoverlay (the normal test pattern seems to trigger a weird bug in our H264 encoder; I’ll need some help from our Fluendo codec gurus for that). With that in place, I started a simple stream last night:

[photo]

I left it running for the night.
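The exact pipeline isn’t in the post, but the setup described above would look something along these lines. This is a sketch only: x264enc stands in for the Fluendo H264 encoder mentioned above, and the plain filesink stands in for where the real pipeline feeds the segmenter:

```
gst-launch videotestsrc pattern=black ! timeoverlay \
    ! x264enc ! mpegtsmux \
    ! filesink location=stream.ts
```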

And this morning when I got up, it was still going strong, and I let it pass the 10-hour mark:

[photo]

So, a good first step.

I hope to finish up some loose ends over the course of the week to make this work inside Flumotion.

I’ll leave you with my first impressions on this Apple creation:

  • Naming a draft ‘HTTP Live Streaming’ and pretending this is something new after years of Shoutcast, Icecast, and Flumotion is either plain ignorance or typical Apple hubris. At least qualify the name with something like ‘segmented’, ‘TS’, or ‘high-latency’, Apple. Come on, play nice for once.
  • The system is very different from your typical streaming setup. Effectively, this approach creates a live stream by segmenting a live feed into a sequence of MPEG Transport Stream segments at a regular interval. This has some benefits and drawbacks.
  • The key concept is now the playlist file, an extension of .m3u called .m3u8. This playlist file is the entry point into the stream, as it lists the segments that make up the stream.
  • This playlist file can reference other playlist files. This is what enables adaptive bandwidth streaming.
  • One clear benefit that Apple was aiming for is that they effectively managed to separate the preparation part from the streaming part – the actual streaming can be handled by any old web server that can serve up files. I’m sure this is the main benefit they had in mind. The benefit is two-fold: first of all, it’s easy and cheap to install web servers, and second, you get all the benefits of using a bog-standard protocol like HTTP: firewall acceptance, proxy and caching support, edge caching, … Take for example the fact that a company like Akamai charges more for some streaming protocols because they have to deploy specific servers and can’t use all their edge infrastructure for it.
  • Another benefit is that you are generating the data for your live and ondemand streaming at the same time. The transport segments can be reused as is for ondemand .m3u8 streams. This blending of live and ondemand is something we started thinking about with the developers at Flumotion too.
  • A third benefit is how easy this system would make it to do load balancing on a platform. In most streaming services, a connection is long-lived, and hard to migrate between servers. Since in Apple’s live HTTP streaming the stream consists of several short files, you can switch servers by updating the playlists, effectively migrating the streaming sessions to another machine within a minute.
  • As for drawbacks, the biggest drawback I see is the latency. In this system, the latency is at least the segmentation interval times three. This is because the playlist should only contain finished segments, and the spec mandates that the player have at least three segments loaded (one playing, two preloaded) to work. So, the recommended interval of 10 seconds gives you at best a 30 second latency. I don’t really understand why they didn’t work around this limitation somehow (for example, by allowing a growing transport stream in the playlist, marked as such, or referencing future files, marked as such), because this is where live iPhone streaming is going to catch the biggest amount of flak, if our customers’ opinion about latency in general is anything to go by.
  • Another possible drawback is the typical problem with most HTTP streaming systems – no synchronization of server and client clocks. Computer clocks typically don’t match in speed, so in practice this usually means that the client’s buffer will eventually underrun (causing pauses) or overrun (usually causing players to stop). In practice this is not that big of a deal, and I doubt sessions on the iPhone will be long enough to really make this a problem.
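To make the playlist mechanics above concrete, here is roughly what a live media playlist looks like under the draft (file names, sequence numbers, and bandwidth figures below are invented for illustration):

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:2680
#EXTINF:10,
fileSequence2680.ts
#EXTINF:10,
fileSequence2681.ts
#EXTINF:10,
fileSequence2682.ts
```

A variant playlist simply points at alternative media playlists, which is what makes the adaptive-bandwidth switching possible:

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
low/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
high/stream.m3u8
```

Note the absence of #EXT-X-ENDLIST in the live playlist: the client keeps re-fetching it to pick up new segments.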

Whether this will become a general-purpose streaming protocol remains to be seen. I would assume that Apple is at least going to make this work in a future update of OS X. For us though it is an exciting development, allowing us to showcase the flexibility of our design by adapting it to this new protocol. And while I saw some fellow GStreamer developers griping about this new way of streaming, there too it should be seen as an advantage, since (in theory at least) the flexible GStreamer design should make it possible to write a source element for this protocol that abstracts the streaming implementation and just feeds the re-assembled transport stream, much like a dvb or firewire element would.

17 Comments »

  1. … Soooo, in your bullet list, you first say “Oh come on Apple, don’t pretend like you invented something”, and in the next one, you say they did..? I am puzzled.

    Comment by LE — 2009-9-26 @ 1:03 pm

  2. [...] the original post here: Thomas Vander Stichele: iPhone 3.0 live HTTP streaming Share and [...]

    Pingback by Thomas Vander Stichele: iPhone 3.0 live HTTP streaming | TuxWire : The Linux Blog — 2009-9-26 @ 1:35 pm

  3. LE, they definitely invented something. They just didn’t invent ‘HTTP Live Streaming’, because it existed before they did it (in other forms).

    It’s like saying you discovered electricity when all you did was invent a solar panel.

    Comment by Thomas — 2009-9-26 @ 3:17 pm

  4. I haven’t fully reviewed the draft, and this may be my biased square peg into a round hole, but Metalink might be a community alternative.

    It’s an XML format for describing downloads, and could be easily extended.

    Right now it’s for listing multiple URIs and checksums for load balancing/failover and download repair. (It’s mostly used for large files).

    We originally wanted to specify alternate encoding/bitrates as well, but that was cut to slim the spec down.

    http://tools.ietf.org/html/draft-bryan-metalink

    Comment by Ant Bryan — 2009-9-26 @ 7:52 pm

  5. Usually Apple creates new stuff because of a feature that the others don’t provide (or don’t provide well), or because of licensing, etc…

    Yeah, there was Icecast and many others, but adoption of those protocols in the commercial field isn’t very large, so there has to be a good reason why this was created.

    Comment by Hugh Isaacs II — 2009-9-27 @ 12:38 am

  6. Hugh: I doubt that commercial adoption of Shoutcast is as small as you suggest.

    Comment by Thomas — 2009-9-27 @ 7:02 am

  7. Awesome work, Thomas – I was curious what was so special about the new Apple HTTP live streaming protocol and hearing it from you and seeing it work in Flumotion is just totally awesome.

    Comment by Silvia Pfeiffer — 2009-9-28 @ 8:24 am

  8. the playlist makes it possible to skip forward or backwards in an mp3 ‘stream’. afaik this was not possible with other mp3 streaming solutions. true?

    Comment by Andy — 2009-10-14 @ 8:05 pm

  9. [...] Because the service uses playlists with the .m3u8 extension, you can tell that it’s using HTTP Live Streaming, hence iPhone [...]

    Pingback by Watch live UK TV on your iPhone | All About iPhone.net — 2009-10-14 @ 11:28 pm

  10. @Andy: not really, there’s always a way. I have a flumotion patch lying around for example that implements a server-side buffer, and the client can ask for how the stream was ‘in the past’. IIRC RTSP also has ways to do this.

    It does come more naturally in Apple’s idea of HTTP streaming though.

    Comment by Thomas — 2009-10-15 @ 10:49 am

  11. Hi
    I would like to know which open-source segmenter was used for iPhone 2.0 to chop videos.
    I am in a hurry. Looking forward to your quick reply.

    Comment by Saibi Rocker — 2010-8-27 @ 12:57 am

  12. No open source one was used, we made our own.

    Comment by Thomas — 2010-8-30 @ 2:53 pm

  13. I have read somewhere that the server keeps on updating the playlist file in the case of live video in the ‘HTTP Live Streaming’ protocol. But the question is: does the server append the URLs of the new set of videos to the old ones, or replace the old URLs with new ones? I’m a bit confused about the mechanism of the playlist file for live video!

    Comment by Pradeep — 2010-10-21 @ 11:30 am

  14. @Pradeep: indeed, in the case of Apple’s HTTP Live Streaming, the playlist being updated every X seconds is an integral part. The server usually does a sliding window; i.e. each time a new fragment is added at the bottom, one is removed from the top. The server could however decide not to remove from the top, and effectively create a larger buffer for timeshifting.

    Comment by Thomas — 2010-10-22 @ 11:46 am

  15. @Thomas: yeah, it makes sense that the server should decide whether to append or not. But I have two analyses: one for why it should be a sliding window, and one for why it should append to the existing playlist.

    1. Why should it be a sliding window?
    Say the playlist file holds 5 videos, each 10 sec long (50 sec total). At time 0, user A requests the playlist file and starts watching videos {1,2,3,4,5}. Now if user B requests the same playlist file after, say, 55 sec, he should still be able to watch the latest videos {6,7,8,9,10}, since it is live. If it is not a sliding window and instead appends, the playlist file would now contain all the videos {1,2,…,10}, so user B may end up watching the video from the beginning.

    2. Why should it append?
    Say the playlist file holds 5 videos, each 10 sec long (50 sec total). At time 0, user A requests the playlist file and starts watching videos {1,2,3,4,5}. By the time user A downloads all of the first 5 videos and requests the playlist again, the playlist may have been updated to videos {8,9,10,11,12}, as there is no sync between the two playlist fetches. In this case, some video chunks {6,7} may be lost. So it should append the videos all the time, so that no video chunk is lost.

    Am I missing something, or is there anything wrong in my analysis here? I’m confused!

    Any thoughts on this?

    Comment by Pradeep — 2010-10-27 @ 8:12 am

  16. Great thread… so I am looking at this a little differently. Hear me out before shooting down this strange need:

    What if the streaming were used with known files which could be pre-compiled on a list… or perhaps dynamically compiled in a sense like streaming, except it might be that static titles were not queued in advance of the runtime…

    I want the first use- devised to thread together known videos using the Streaming format to queue them seamlessly. Why not target a playlist? I will undoubtedly USE a playlist in the implementation, however, I need this for the player popping up from a WEBPAGE. Go figure that.

    So- by extension- and hopefully within a couple weeks, I can create an implementation which might do something really interesting- like pipe in dynamically added listee videos to a steady stream in the same instance of the web-”embedded” video player on iPhone.

    Comment by bellasys — 2011-2-12 @ 7:17 pm

  17. Hey there. Thanks a lot for the good article. It sure makes some issues clearer.

    Can you tell me whether GStreamer supports the HLS server side? Can you point me to a pipeline/code that can be embedded on the server side in order to transmit the TS chunks?

    I know how to create a TS stream/file, but I do not know how to segment that stream/file into 10-second chunks. I also don’t know how to manage the playlist while the segmentation is done.

    Thanks in advance

    Comment by aviv — 2012-1-24 @ 8:54 am
