mirror of
https://gitlab.freedesktop.org/gstreamer/gstreamer.git
synced 2025-04-26 04:36:20 +00:00
docs/random/ensonic/: more thinking

Original commit message from CVS:

    * docs/random/ensonic/embedded.txt:
    * docs/random/ensonic/profiling.txt:
    * docs/random/ensonic/receipies.txt:
      more thinking

parent 47976eb0c2
commit 170662e2c1

4 changed files with 70 additions and 1 deletion
ChangeLog

@@ -1,3 +1,10 @@
+2006-11-15  Stefan Kost  <ensonic@users.sf.net>
+
+	* docs/random/ensonic/embedded.txt:
+	* docs/random/ensonic/profiling.txt:
+	* docs/random/ensonic/receipies.txt:
+	  more thinking
+
 2006-11-13  Wim Taymans  <wim@fluendo.com>
 
 	Patch by: Mark Nauwelaerts <manauw at skynet dot be>
docs/random/ensonic/embedded.txt (new file, 20 lines)
@@ -0,0 +1,20 @@
$Id$

= embedded =

== index handling ==

For avidemux I currently have a big patch doing memory optimized index handling.
It basically thins out the index to save memory. Right now it only keeps index
entries marked with the avi keyframe flag.
In gstreamer core we have some indexing objects. They are currently used nowhere.
The idea is to use them and to make the index strategy pluggable or configurable
at run time.

The challenge is then to rewrite muxers and demuxers to use them instead of the
built-in index logic.

This way the different requirements of desktop and embedded platforms could be
encapsulated in the indexer strategy.
docs/random/ensonic/profiling.txt

@@ -76,7 +76,12 @@ $Id$
 == rusage + pad-probes ==
 * check get_rusage() based cpu usage detection in buzztard
   this together with pad_probes could give us decent application level profiles
-* 1:1 elements are easy to handle, n:1, 1:m and n:m type elements are tricky
+* different elements
+  * 1:1 elements are easy to handle
+  * 0:1 elements need a start timer
+  * 1:0 elements need an end timer
+  * n:1, 1:m and n:m type elements are tricky
+    adapter based elements might have a fluctuating usage in addition
 
 // result data
 struct {
docs/random/ensonic/receipies.txt (new file, 37 lines)
@ -0,0 +1,37 @@
|
||||||
|
$Id$
|
||||||
|
|
||||||
|
= receipies =
|
||||||
|
|
||||||
|
The idea is to collect some recommendations for common, but not so trivial
|
||||||
|
tasks. docs/design/part-block.txt has something like that already. Ideally these
|
||||||
|
would go to the application developer manual and there would be sample code.
|
||||||
|
|
||||||
|
== initial seeking ==

=== question ===
How do I configure the initial playback segment?

=== idea ===
1) set pipeline to PAUSED
2) send seek event
3) set pipeline to PLAYING

=== problems ===
1) would preroll the pipeline only to flush it when the seek comes
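The three steps above can be sketched with the GStreamer API roughly as follows. This is a hedged illustration, not prescribed code: element setup and error handling are omitted, and blocking in gst_element_get_state() is just one way to wait for preroll before seeking.

```c
#include <gst/gst.h>

/* Sketch: configure the initial playback segment before going to PLAYING. */
static void
play_from (GstElement *pipeline, gint64 start)
{
  /* 1) preroll */
  gst_element_set_state (pipeline, GST_STATE_PAUSED);
  gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);

  /* 2) flushing seek to the desired start position */
  gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

  /* 3) start playback from the new segment */
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
}
```

The problem noted above is visible here: the first preroll fills the pipeline with data that the flushing seek immediately discards.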
== async state changes ==

=== question ===
What to do when gst_element_set_state() returns ASYNC?

=== idea ===
1) listen to the STATE_CHANGED message on the bus
2) trigger next action

=== problems ===
This scatters logic over multiple functions (callbacks).
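The bus-listening idea can be sketched like this (a hedged fragment: it assumes a running GLib main loop, and the callback name and "next action" are placeholders):

```c
#include <gst/gst.h>

/* Sketch: react to the pipeline reaching its target state via the bus
 * instead of blocking in gst_element_get_state(). */
static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer pipeline)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STATE_CHANGED &&
      GST_MESSAGE_SRC (msg) == GST_OBJECT (pipeline)) {
    GstState old, new, pending;

    gst_message_parse_state_changed (msg, &old, &new, &pending);
    if (new == GST_STATE_PLAYING) {
      /* 2) trigger the next action here */
    }
  }
  return TRUE;
}

/* setup:
 *   gst_bus_add_watch (gst_element_get_bus (pipeline), bus_cb, pipeline);
 */
```

This also illustrates the stated problem: the decision "what to do next" no longer lives next to the gst_element_set_state() call but in a separate callback.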
== topic ==

=== question ===

=== idea ===

=== problems ===