$Id$

= profiling =

== what information is interesting? ==

* pipeline throughput
  if we know the cpu-load for a given datastream, we could extrapolate what the
  system can handle
  -> qos profiling
* load distribution
  which element causes which cpu load/memory usage


= qos profiling =

* what data is needed?
  * (streamtime,proportion) pairs from sinks
    draw a graph with gnuplot or similar
  * number of frames in total
  * number of audio/video frames dropped from each element that supports QOS
    * could be expressed as a percentage of the total number of frames
* query data (e.g. via gst-launch)
  * add a -r, --report option to gst-launch
  * during playback we capture QOS events to record 'streamtime,proportion' pairs
    gst_pad_add_event_probe(video_sink->sink_pad,handler,data)
    (see the probe sketch after this list)
  * during playback we would like to know when an element drops frames
    what about elements sending a qos_action message?
  * after EOS, send qos-queries to each element in the pipeline
    * the qos-query will return:
      number of frames rendered
      number of frames dropped
  * print a nice table with the results
    * QOS stats first
  * write a gnuplot data file
    * list of 'streamtime,proportion,<drop>' tuples
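A minimal sketch of the capture step, assuming the 0.10-era probe API already
named above (gst_pad_add_event_probe); qos_probe and install_qos_probe are
made-up names, and the pairs are simply printed in gnuplot-friendly form
rather than collected for a report:

#include <gst/gst.h>

/* record one 'streamtime proportion' pair per QOS event */
static gboolean
qos_probe (GstPad * pad, GstEvent * event, gpointer user_data)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_QOS) {
    gdouble proportion;
    GstClockTimeDiff diff;
    GstClockTime timestamp;

    gst_event_parse_qos (event, &proportion, &diff, &timestamp);
    g_print ("%" G_GUINT64_FORMAT " %f\n", (guint64) timestamp, proportion);
  }
  return TRUE;                  /* keep the event */
}

static void
install_qos_probe (GstElement * video_sink)
{
  GstPad *pad = gst_element_get_static_pad (video_sink, "sink");

  gst_pad_add_event_probe (pad, G_CALLBACK (qos_probe), NULL);
  gst_object_unref (pad);
}

With the proposed -r, --report option the pairs would go to a data file
instead of stdout, so gnuplot can read them directly.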

= core profiling =

* the scheduler keeps a list of usecs the process function of each element was
  running
* process functions are: loop, chain, get; they are driven by gst_pad_push() and
  gst_pad_pull_range()
* the scheduler keeps a sum of all times
* each gst-element has a profile_percentage field

* when going to PLAYING
  * the scheduler sets the sum and all usecs in the list to 0
* when handling an element
  * remember the old usecs t_old
  * take time t1
  * call the element's processing function
  * take time t2
  * t_new=t2-t1
  * sum+=(t_new-t_old)
  * profile_percentage=t_new/sum;
  * should the percentage be averaged?
    * profile_percentage=(profile_percentage+(t_new/sum))/2.0;

* the profile_percentage shows how much CPU time the element uses in relation
  to the whole pipeline
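A sketch of that bookkeeping; profile_percentage is the field named above,
everything else (type, function and variable names, GLib time calls) is made
up for illustration:

#include <glib.h>

typedef struct {
  gint64 last_usecs;            /* t_old: usecs of the previous run */
  gdouble profile_percentage;   /* share of the whole pipeline */
} ElementProfile;

static gint64 profile_sum = 0;  /* reset to 0 when going to PLAYING */

static void
profile_element_process (ElementProfile * prof,
    void (*process) (gpointer), gpointer element)
{
  GTimeVal t1, t2;
  gint64 t_new;

  g_get_current_time (&t1);
  process (element);            /* the element's loop/chain/get function */
  g_get_current_time (&t2);

  t_new = ((gint64) t2.tv_sec - t1.tv_sec) * G_USEC_PER_SEC
      + (t2.tv_usec - t1.tv_usec);
  profile_sum += t_new - prof->last_usecs;
  prof->last_usecs = t_new;

  if (profile_sum > 0)          /* averaged variant from above */
    prof->profile_percentage =
        (prof->profile_percentage + (gdouble) t_new / profile_sum) / 2.0;
}

Whether the running average or the plain t_new/sum ratio is more useful is
the open question noted above.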

== rusage + pad-probes ==

* check the getrusage() based cpu usage detection in buzztard
  together with pad probes this could give us decent application-level profiles
  (see the getrusage() sketch after this list)
* different elements
  * 1:1 elements are easy to handle
  * 0:1 elements need a start timer
  * 1:0 elements need an end timer
  * n:1, 1:m and n:m type elements are tricky
    adapter based elements might have a fluctuating usage in addition
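A minimal sketch of the getrusage() part, assuming simple process-wide
sampling; it does not reproduce the buzztard code and the function name is
made up. The returned value is the fraction of wall-clock time spent on the
CPU since the previous call:

#include <sys/time.h>
#include <sys/resource.h>
#include <glib.h>

static gdouble
sample_cpu_load (void)
{
  static gint64 last_cpu = -1, last_wall = -1;
  struct rusage ru;
  GTimeVal now;
  gint64 cpu, wall;
  gdouble load = 0.0;

  getrusage (RUSAGE_SELF, &ru);
  g_get_current_time (&now);

  /* user + system time of the whole process, in usecs */
  cpu = ((gint64) ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) * G_USEC_PER_SEC
      + ru.ru_utime.tv_usec + ru.ru_stime.tv_usec;
  wall = (gint64) now.tv_sec * G_USEC_PER_SEC + now.tv_usec;

  if (last_cpu >= 0 && wall > last_wall)
    load = (gdouble) (cpu - last_cpu) / (gdouble) (wall - last_wall);

  last_cpu = cpu;
  last_wall = wall;
  return load;
}

Combined with the pad probes sketched below, such samples could be used to
attribute the measured load to the element between the beg and end probes.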

// result data
struct {
  GstClockTime beg_min, beg_max;   /* timestamps taken in the beg_timer probe */
  GstClockTime end_min, end_max;   /* timestamps taken in the end_timer probe */
} profile_data;

// install probes
gst_bin_iterate_elements(pipeline)
  gst_element_iterate_pads(element)
    if (gst_pad_get_direction(pad)==GST_PAD_SRC)
      gst_pad_add_buffer_probe(pad,end_timer,profile_data)
    else
      gst_pad_add_buffer_probe(pad,beg_timer,profile_data)

// listen to bus state-change messages to
// * reset counters on NULL_TO_READY
// * print results on READY_TO_NULL
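The bus handling from the comments above could look roughly like this
(0.10 API; reset_counters() and print_results() are placeholders for the
real bookkeeping):

#include <gst/gst.h>

static void reset_counters (void);      /* placeholder: zero profile_data */
static void print_results (void);       /* placeholder: print the table */

static gboolean
bus_watch (GstBus * bus, GstMessage * message, gpointer user_data)
{
  /* only react to state changes of the top-level pipeline */
  if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STATE_CHANGED &&
      GST_MESSAGE_SRC (message) == GST_OBJECT (user_data)) {
    GstState old_state, new_state;

    gst_message_parse_state_changed (message, &old_state, &new_state, NULL);
    if (old_state == GST_STATE_NULL && new_state == GST_STATE_READY)
      reset_counters ();
    else if (old_state == GST_STATE_READY && new_state == GST_STATE_NULL)
      print_results ();
  }
  return TRUE;
}

// e.g. gst_bus_add_watch (gst_pipeline_get_bus (GST_PIPELINE (pipeline)),
//                         bus_watch, pipeline);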