# Add a Python API for tracer analyzers

The Python framework will parse the tracer log and aggregate information.
The tool writer will subclass the Analyzer class and override these methods:

  'handle_tracer_class(self, entry)'
  'handle_tracer_entry(self, entry)'

Each of these is optional. The entry argument is the parsed log line. In most
cases the tools will parse the structure contained in entry[Parser.F_MESSAGE].

A tool will use an AnalysisRunner to chain one or more analyzers and iterate
the log. A tool can also replay the log multiple times; if it does, it won't
work in 'streaming' mode though (streaming mode can offer live stats).
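
A minimal sketch of how such a tool could look. The module paths, the Parser
context-manager use and the AnalysisRunner/add_analyzer signatures are guesses
based on the class names above and may differ from the actual code:

  from tracer.analysis_runner import AnalysisRunner
  from tracer.analyzer import Analyzer
  from tracer.parser import Parser

  class EntryCounter(Analyzer):
      """Count all tracer entries (illustration only)."""

      def __init__(self):
          super().__init__()
          self.num_entries = 0

      def handle_tracer_entry(self, entry):
          # 'entry' is one parsed log line; the structure is in
          # entry[Parser.F_MESSAGE]
          self.num_entries += 1

  with Parser('debug.log') as log:
      counter = EntryCounter()
      runner = AnalysisRunner(log)
      runner.add_analyzer(counter)
      runner.run()
  print('%d tracer entries' % counter.num_entries)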

## TODO
### gst shadow types
Do we want to provide classes like GstBin, GstElement, GstPad, ... to aggregate
info? One way to get them would be to have a GstLogAnalyzer that knows about
the data from the log tracer and populates the classes. Tools can then do e.g.:

  pad.name()             # pad name
  pad.parent().name()    # element name
  pad.peer().parent()    # peer element
  pad.parent().state()   # element state
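
A rough sketch of such shadow classes (all names are hypothetical; a
GstLogAnalyzer would create and wire these objects from pad-link and
state-change entries):

  class Element:
      """Hypothetical shadow object for a GstElement."""

      def __init__(self, name):
          self._name = name
          self._state = 'NULL'

      def name(self):
          return self._name

      def state(self):
          return self._state

  class Pad:
      """Hypothetical shadow object for a GstPad."""

      def __init__(self, name, parent):
          self._name = name
          self._parent = parent   # the Element owning this pad
          self._peer = None       # set when a pad-link entry is seen

      def name(self):
          return self._name

      def parent(self):
          return self._parent

      def peer(self):
          return self._peer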

### improve class handling
We already parse the tracer classes. Add helpers that extract the numeric
values and aggregate min/max/avg. Consider other statistical information
(std. deviation) and provide a rolling average for the live view.
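
A possible shape for such a helper (the class name and the smoothing factor
are made up; an exponential moving average stands in for the rolling average):

  class NumericValue:
      """Hypothetical aggregator for one numeric tracer-class value."""

      def __init__(self, alpha=0.1):
          self.min = float('inf')
          self.max = float('-inf')
          self.sum = 0
          self.num = 0
          self.alpha = alpha      # smoothing factor for the rolling average
          self.rolling = None

      def update(self, value):
          self.min = min(self.min, value)
          self.max = max(self.max, value)
          self.sum += value
          self.num += 1
          # exponential moving average for a live view
          if self.rolling is None:
              self.rolling = value
          else:
              self.rolling = self.alpha * value + (1 - self.alpha) * self.rolling

      def avg(self):
          return self.sum / self.num if self.num else 0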

## Examples
### Sequence chart generator (mscgen)

1.) Write the file header.

2.) Collect the element order:
Replay the log and use pad_link_pre to collect the pad->peer_pad relationships.
Build a sequence of element names and write it to the msc file.

3.) Collect event processing:
Replay the log and use pad_push_event_pre to output a message line per event
to the msc file.

4.) Write the footer and run the tool.
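
A skeleton of this flow; collect_element_order and write_event_lines are
hypothetical helpers that would replay the log as described in steps 2 and 3,
and the mscgen invocation assumes its usual -T/-i/-o flags:

  import subprocess

  def collect_element_order(log_path):
      # 2.) replay the log; pad_link_pre entries give pad->peer_pad pairs,
      # from which the ordered element-name list is built (details omitted)
      return []

  def write_event_lines(log_path, out):
      # 3.) replay the log again; each pad_push_event_pre entry becomes a
      # message line like '  src=>sink [ label="flush-start" ];'
      pass

  def generate_msc(log_path, msc_path='chart.msc'):
      with open(msc_path, 'w') as out:
          out.write('msc {\n')                        # 1.) file header
          elements = collect_element_order(log_path)
          out.write('  %s;\n' % ', '.join(elements))
          write_event_lines(log_path, out)
          out.write('}\n')                            # 4.) footer
      subprocess.check_call(['mscgen', '-T', 'png',   # run the tool
                             '-i', msc_path, '-o', 'chart.png'])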

## Latency stats

1.) Collect per-sink latencies and, for each sink, per-source latencies.
Calculate min, max, avg. Consider a streaming interface where we update the
stats e.g. once a second.

2.) In non-streaming mode, write the final statistics.
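
A sketch of the aggregation, reusing the NumericValue helper and the Analyzer
and Parser names from above; the (sink, source) keying and the structure field
names are assumptions about what the latency tracer emits:

  import time
  from collections import defaultdict

  class LatencyStats(Analyzer):
      """Hypothetical per-sink/per-source latency aggregation."""

      def __init__(self, interval=1.0):
          super().__init__()
          self.stats = defaultdict(NumericValue)  # keyed by (sink, source)
          self.interval = interval                # streaming: report period
          self.last_report = time.time()

      def handle_tracer_entry(self, entry):
          s = entry[Parser.F_MESSAGE]
          # 'sink', 'src' and 'time' are assumed field names
          self.stats[(s['sink'], s['src'])].update(s['time'])
          now = time.time()
          if now - self.last_report >= self.interval:
              self.report()                       # live stats once a second
              self.last_report = now

      def report(self):
          for (sink, src), v in self.stats.items():
              print('%s <- %s: min=%d max=%d avg=%.1f' %
                    (sink, src, v.min, v.max, v.avg()))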

## CPU load stats

Like the latency stats, but for CPU load: overall process CPU load plus
per-thread CPU load.

## top

Combine various stats tools into one.

# Improve tracers
## log
* the log tracer logs args and results into misc categories
* issues
  * not easy/reliable to detect its output among other trace output
  * not easy to match pre/post lines
  * uses its own do_log method instead of gst_tracer_record_log
    * if we also log structures, we need to log the 'function' as the
      structure name; fields would also be key=(type)val instead of key=value
    * if we switch to gst_tracer_record_log, we'd need to register 27 formats :/
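
To illustrate the difference: a structure field looks like key=(type)value
rather than key=value. A rough parsing sketch (real GstStructure serialization
has more cases, e.g. quoted strings and nested structures):

  import re

  # matches 'key=(type)value' fields of a serialized structure
  FIELD_RE = re.compile(r'([a-z0-9_-]+)=\(([A-Za-z0-9]+)\)([^,;]+)', re.I)

  def parse_fields(s):
      return {key: (gtype, value) for key, gtype, value in FIELD_RE.findall(s)}

  parse_fields('latency, src=(string)src_0, time=(guint64)47684;')
  # -> {'src': ('string', 'src_0'), 'time': ('guint64', '47684')}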