docs/random/ensonic/distributed.txt: add some ideas about doing distributed processing

Original commit message from CVS:
* docs/random/ensonic/distributed.txt:
add some ideas about doing distributed processing
* docs/random/ensonic/profiling.txt:
get_rusage looks promising
Author: Stefan Kost <ensonic@users.sf.net>
Date:   2006-10-20 11:36:56 +00:00
Commit: dc159be1b2
Parent: 14da85cf94

3 changed files with 41 additions and 0 deletions

ChangeLog

@@ -1,3 +1,11 @@
2006-10-20  Stefan Kost  <ensonic@users.sf.net>

	* docs/random/ensonic/distributed.txt:
	  add some ideas about doing distributed processing
	* docs/random/ensonic/profiling.txt:
	  get_rusage looks promising

2006-10-18  Stefan Kost  <ensonic@users.sf.net>

	* docs/manual/basics-helloworld.xml:

docs/random/ensonic/distributed.txt

@@ -0,0 +1,32 @@
$Id$
= distributed gstreamer pipelines =
The idea is to have a proxy element for remote elements so that you can treat
the whole pipeline as a local one. The proxy element creates the real instance
by talking to GOD (GStreamer Object Daemon, GObject Daemon, ...) on the
respective machine.
At runtime, when the proxy element receives data, it sends it to the remote
element, gets the processed data back and forwards it to the next element.
The challenge is to optimize links when multiple connected elements are on the
same remote machine, so that the data is passed directly between them instead
of round-tripping through the local host.
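
As a rough sketch of that data path, the proxy's chain function could look
like the following (GstRemoteProxy, remote_send_buffer() and
remote_receive_buffer() are hypothetical names for the proxy type and its
wire transport, not existing API):

  static GstFlowReturn
  gst_remote_proxy_chain (GstPad *pad, GstBuffer *buf)
  {
    GstRemoteProxy *proxy = GST_REMOTE_PROXY (gst_pad_get_parent (pad));
    GstBuffer *processed;
    GstFlowReturn ret;

    /* hand the buffer to the real element running on the remote machine */
    remote_send_buffer (proxy->connection, buf);
    /* block until the processed buffer comes back ... */
    processed = remote_receive_buffer (proxy->connection);
    /* ... and forward it to the next local element */
    ret = gst_pad_push (proxy->srcpad, processed);

    gst_object_unref (proxy);
    return ret;
  }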
== proxy creation ==
In addition to
  GstElement* gst_element_factory_make        (const gchar *factoryname,
                                               const gchar *name);

we need:

  GstElement* gst_element_factory_make_remote (const gchar *factoryname,
                                               const gchar *name,
                                               GstRemoteFactory *remote);
and some API to get a remote factory handle via hostname lookup, IP address
lookup or even zeroconf (Avahi).
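
From the application side this could then look as follows; this is only a
sketch of the proposed API above, and gst_remote_factory_new_from_host() is a
placeholder name for the lookup call:

  GstElement *pipeline, *src, *effect, *sink;
  GstRemoteFactory *remote;

  /* placeholder lookup API: get a factory handle for a host running GOD */
  remote = gst_remote_factory_new_from_host ("media-box.local");

  pipeline = gst_pipeline_new ("distributed-example");
  src    = gst_element_factory_make ("audiotestsrc", "src");
  /* looks like a local element, but runs on media-box.local */
  effect = gst_element_factory_make_remote ("audioconvert", "effect", remote);
  sink   = gst_element_factory_make ("alsasink", "sink");

  gst_bin_add_many (GST_BIN (pipeline), src, effect, sink, NULL);
  gst_element_link_many (src, effect, sink, NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);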
== issues / thoughts ==
* we need to distribute the clock
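
  For the clock, the network clock in libgstnet could be a starting point.
  A minimal sketch (the port number is an arbitrary choice):

    #include <gst/gst.h>
    #include <gst/net/gstnet.h>

    /* on the machine that owns the master clock: export it */
    static void
    export_pipeline_clock (GstPipeline *pipeline)
    {
      /* the provider must stay alive for as long as the clock is served;
       * we keep it in a static here for simplicity */
      static GstNetTimeProvider *provider = NULL;
      GstClock *clock = gst_pipeline_get_clock (pipeline);

      provider = gst_net_time_provider_new (clock, NULL, 5637);
      gst_object_unref (clock);
    }

    /* on every remote machine: slave the local pipeline to that clock */
    static void
    use_exported_clock (GstPipeline *pipeline, const gchar *host)
    {
      GstClock *clock = gst_net_client_clock_new ("god-clock", host, 5637, 0);

      gst_pipeline_use_clock (pipeline, clock);
      gst_object_unref (clock);
    }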

docs/random/ensonic/profiling.txt

@@ -74,4 +74,5 @@ $Id$
to the whole pipeline
* check get_rusage() based cpu usage detection in buzztard; this together
  with pad_probes could give us decent application-level profiles (see the
  sketch below)
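
A minimal sketch of the measurement itself, using plain POSIX getrusage()
(not tied to buzztard):

  #include <sys/time.h>
  #include <sys/resource.h>

  /* CPU time (user + system) consumed by the calling process, in seconds */
  static double
  get_cpu_usage (void)
  {
    struct rusage ru;

    if (getrusage (RUSAGE_SELF, &ru) != 0)
      return -1.0;
    return (double) (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) +
        (double) (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1000000.0;
  }

Sampling this before and after buffers pass a pad probe would give a rough
per-element estimate of CPU cost.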