# Add a Python API for tracer analyzers

The Python framework will parse the tracer log and aggregate information.
The tool writer will subclass the Analyzer class and override these methods:

    handle_tracer_class(self, entry)
    handle_tracer_entry(self, entry)

Each of those is optional. The entry field is the parsed log line. In most
cases the tools will parse the structure contained in event[Parser.F_MESSAGE].
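
A minimal sketch of what such a subclass might look like. Only Analyzer, the
two handler names, and Parser.F_MESSAGE come from the description above; the
module paths and the assumption that the message parses into a
GstStructure-like object are hypothetical:

    # Hypothetical sketch; module paths are assumptions.
    from tracer.analyzer import Analyzer
    from tracer.parser import Parser

    class EntryCounter(Analyzer):
        """Counts tracer entries per structure name."""

        def __init__(self):
            super().__init__()
            self.counts = {}

        def handle_tracer_entry(self, entry):
            # 'entry' is the parsed log line; the payload is the structure
            # in entry[Parser.F_MESSAGE] (assumed to expose get_name()).
            s = entry[Parser.F_MESSAGE]
            self.counts[s.get_name()] = self.counts.get(s.get_name(), 0) + 1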

A tool will use an AnalysisRunner to chain one or more analyzers and iterate
the log. A tool can also replay the log multiple times. If it does, it won't
work in 'streaming' mode though (streaming mode can offer live stats).
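
A hypothetical usage sketch; AnalysisRunner is named above, but the
constructor signature and method names here are assumptions:

    from tracer.analysis_runner import AnalysisRunner  # assumed module path

    analyzer = EntryCounter()
    runner = AnalysisRunner('tracer.log')  # assumed: takes the log to iterate
    runner.add_analyzer(analyzer)          # chain one or more analyzers
    runner.run()                           # single pass over the log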

## TODO

### gst shadow types

Do we want to provide classes like GstBin, GstElement, GstPad, ... to
aggregate info? One way to get them would be to have a GstLogAnalyzer that
knows about data from the log tracer and populates the classes. Tools could
then do e.g.:

    pad.name()           # pad name
    pad.parent().name()  # element name
    pad.peer().parent()  # peer element
    pad.parent().state() # element state
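
One possible shape for such shadow types; these class layouts are purely
illustrative, and a GstLogAnalyzer would create and link the instances while
replaying the log:

    # Illustrative sketch only; nothing here exists yet.
    class Element:
        def __init__(self, name):
            self._name = name
            self._state = None  # updated from element state-change entries

        def name(self):
            return self._name

        def state(self):
            return self._state

    class Pad:
        def __init__(self, name, parent):
            self._name = name
            self._parent = parent  # the owning Element
            self._peer = None      # set when a pad_link entry is seen

        def name(self):
            return self._name

        def parent(self):
            return self._parent

        def peer(self):
            return self._peer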

### improve class handling

We already parse the tracer classes. Add helpers that extract the numeric
values and aggregate min/max/avg. Consider other statistical information
(std. deviation) and provide a rolling average for live view.
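
A sketch of what such a helper could look like (the class name and the
exponential rolling average are illustrative choices; the std. deviation
uses Welford's online algorithm):

    import math

    class NumericStats:
        """Aggregates min/max/avg, std. deviation and a rolling average."""

        def __init__(self, alpha=0.1):
            self.count = 0
            self.min = None
            self.max = None
            self.mean = 0.0
            self._m2 = 0.0       # sum of squared deviations (Welford)
            self.rolling = None  # exponential moving average for live view
            self.alpha = alpha

        def update(self, value):
            self.count += 1
            self.min = value if self.min is None else min(self.min, value)
            self.max = value if self.max is None else max(self.max, value)
            delta = value - self.mean
            self.mean += delta / self.count
            self._m2 += delta * (value - self.mean)
            self.rolling = (value if self.rolling is None else
                            self.alpha * value +
                            (1.0 - self.alpha) * self.rolling)

        def stddev(self):
            return math.sqrt(self._m2 / self.count) if self.count else 0.0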

## Examples

### Sequence chart generator (mscgen)

1.) Write the file header.

2.) Collect the element order:
Replay the log and use pad_link_pre to collect the pad->peer_pad
relationships. Build a sequence of element names and write it to the msc
file.

3.) Collect the event processing:
Replay the log and use pad_push_event_pre to output message lines to the msc
file.

4.) Write the footer and run the tool (see the sketch below).
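
A rough sketch of such an analyzer; the two-pass flow and the hook names
follow the steps above, while the structure field names ('element',
'peer-element', 'name') and the Analyzer API details are assumptions:

    # Illustrative sketch only; module paths assumed as above.
    from tracer.analyzer import Analyzer
    from tracer.parser import Parser

    class MscGenerator(Analyzer):
        def __init__(self, out):
            super().__init__()
            self.out = out
            self.elements = []  # element order, collected in pass 1
            self.pass_no = 1    # the tool sets this to 2 before the replay

        def handle_tracer_entry(self, entry):
            s = entry[Parser.F_MESSAGE]
            if self.pass_no == 1 and s.get_name() == 'pad_link_pre':
                # remember the element order from the pad->peer_pad link
                for name in (s['element'], s['peer-element']):
                    if name not in self.elements:
                        self.elements.append(name)
            elif self.pass_no == 2 and s.get_name() == 'pad_push_event_pre':
                self.out.write('  %s => %s [ label="%s" ];\n' %
                               (s['element'], s['peer-element'], s['name']))

After replaying the log twice (header, entity list from pass 1, message lines
from pass 2, footer), the result can be rendered with e.g.
'mscgen -T png -i chart.msc'.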

### Latency stats

1.) Collect per-sink latencies and, for each sink, per-source latencies:
Calculate min, max, avg. Consider a streaming interface, where we update the
stats e.g. once a second.

2.) In non-streaming mode, write the final statistics.
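
A sketch using the NumericStats helper from above; the structure name
'latency' and its field names are assumptions:

    # Illustrative sketch only; module paths assumed as above.
    from tracer.analyzer import Analyzer
    from tracer.parser import Parser

    class LatencyStats(Analyzer):
        def __init__(self):
            super().__init__()
            self.per_sink = {}
            self.per_sink_source = {}

        def handle_tracer_entry(self, entry):
            s = entry[Parser.F_MESSAGE]
            if s.get_name() != 'latency':
                return
            sink, src, t = s['sink'], s['src'], s['time']
            self.per_sink.setdefault(sink, NumericStats()).update(t)
            self.per_sink_source.setdefault((sink, src),
                                            NumericStats()).update(t)

        def report(self):
            # called periodically in streaming mode, once at the end otherwise
            for sink, stats in self.per_sink.items():
                print('%s: min=%s max=%s avg=%s' %
                      (sink, stats.min, stats.max, stats.mean))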

### cpu load stats

Like latency stats, but for cpu load: process cpu load plus per-thread cpu
load.

### top

Combine the various stats tools into one.

# Improve tracers

## log

* the log tracer logs args and results into misc categories
* issues
  * not easy/reliable to detect its output among other trace output
  * not easy to match pre/post lines
  * uses its own do_log method, instead of gst_tracer_record_log
    * if we also log structures, we need to log the 'function' as the
      structure-name; fields would also be key=(type)val, instead of
      key=value (see the example below)
    * if we switch to gst_tracer_record_log, we'd need to register 27
      formats :/
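
For illustration, a structure-logged pre hook could then serialize in the
usual GstStructure text format (the hook fields and values here are made up):

    pad_push_event_pre, pad=(string)src, event=(string)segment, ts=(guint64)190384;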