GStreamer Plugin Writer's Guide (0.8.3.2)
Scheduling is, in short, a method for making sure that every element gets called once in a while to process data and prepare data for the next element. Likewise, a kernel has a scheduler for processes, and in a way your brain is a very complex scheduler too. Randomly calling elements' chain functions won't get us far, however, so you'll understand that the schedulers in GStreamer are a bit more complex than this. Still, as a start, it's a nice picture. GStreamer currently provides two schedulers: a basic scheduler and an optimal scheduler. As its name says, the basic scheduler ("basic") is an unoptimized, but very complete and simple scheduler. The optimal scheduler ("opt"), on the other hand, is optimized for media processing, but therefore also more complex.
Note that a scheduler only operates on one thread. If your pipeline contains multiple threads, each thread will run with a separate scheduler. That is the reason why two elements running in different threads need a queue-like element (a DECOUPLED element) in between them.
The basic scheduler assumes that each element is its own process. We don't use UNIX processes or POSIX threads for this, however; instead, we use so-called co-threads. Co-threads are threads that run alongside each other, but of which only one is active at a time. The advantage of co-threads over normal threads is that they're lightweight. The disadvantage is that neither UNIX nor POSIX provides such a thing, so we need to include our own co-thread stack for this to run.
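GStreamer's co-threads are implemented in C with a custom stack-switching mechanism; the following is only a conceptual sketch, using Python generators to model the essential property that several "threads" exist side by side but only one is active at a time, and a switch happens only at explicit hand-over points. All names here (`cothread`, `run`) are illustrative, not part of the GStreamer API.

```python
def cothread(name, steps, log):
    """A lightweight 'thread': runs until it explicitly yields control."""
    for i in range(steps):
        log.append((name, i))  # do one unit of work
        yield                  # hand control back to the switcher

def run(cothreads):
    """Round-robin switcher: resume each live co-thread in turn."""
    while cothreads:
        ct = cothreads.pop(0)
        try:
            next(ct)               # resume; runs until the next yield
            cothreads.append(ct)   # still alive, reschedule it
        except StopIteration:
            pass                   # co-thread finished

log = []
run([cothread("A", 2, log), cothread("B", 2, log)])
# log now records A and B strictly alternating: only one was ever active.
```

Note that, unlike preemptive OS threads, nothing here can interrupt a co-thread: control moves only where the code explicitly yields, which is what makes co-threads so cheap.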
The task of the scheduler here is to control which co-thread runs at what time. A well-written scheduler based on co-threads will let an element run until it outputs one piece of data. Upon pushing one piece of data to the next element, it will let the next element run, and so on. Whenever a running element requires data from the previous element, the scheduler will switch to that previous element and run that element until it has provided data for use in the next element.
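The demand-driven switching described above can also be sketched with Python generators (again only as an analogy for the C implementation): the downstream element runs until it needs input, at which point control switches to the upstream element, which runs just long enough to produce one piece of data. The element names and buffer strings are made up for illustration.

```python
def source():
    """Upstream element: each yield produces one buffer, then control
    switches back to whoever asked for it."""
    for n in range(3):
        yield f"buffer-{n}"

def sink(upstream, log):
    """Downstream element: pulling from the iterator resumes the source
    co-thread until it has provided the next buffer."""
    for buf in upstream:
        log.append(f"processed {buf}")

log = []
sink(source(), log)
# The source never runs ahead: it is resumed exactly once per buffer the
# sink consumes.
```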
This method of running elements as needed has the disadvantage that a lot of data will often be queued up between two elements, because one element has provided data that the other element hasn't actually consumed yet. These stores of in-between data are called bufpens, and they can be thought of as lightweight queues.
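Conceptually, a bufpen behaves like a small FIFO sitting on the link between two elements. The class below is a minimal sketch of that idea in Python, not GStreamer's actual bufpen structure; the name `Bufpen` and its methods are assumptions made for illustration.

```python
from collections import deque

class Bufpen:
    """Sketch of a bufpen: buffers the upstream element has produced but
    the downstream element hasn't consumed yet, in FIFO order."""
    def __init__(self):
        self._q = deque()

    def push(self, buf):
        self._q.append(buf)       # upstream deposits a buffer

    def pop(self):
        return self._q.popleft()  # downstream takes the oldest buffer

    def __len__(self):
        return len(self._q)

pen = Bufpen()
pen.push("buffer-0")
pen.push("buffer-1")   # the upstream element ran ahead: two buffers pending
first = pen.pop()      # the downstream element consumes in FIFO order
```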
Note that since every element runs in its own (co-)thread, this scheduler is rather heavy on your system for large pipelines.