validateflow: Add a way to configure when to generate expectations

By default, generate them whenever the file is missing, but add a way to
override that: `validateflow,generate-expectations=true` forces regenerating
them, and `validateflow,generate-expectations=false` disallows generating
them (on CI servers, for example).

Also update the validateflow documentation to take that into account
and remove references to the `pipelines.json` file, which is now gone!

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-devtools/-/merge_requests/200>
This commit is contained in:
Thibault Saunier 2020-05-05 18:09:08 -04:00
parent 3264de6751
commit b1cf1ffebd
9 changed files with 227 additions and 75 deletions


@@ -0,0 +1,8 @@
meta,
args = {
"fakesrc num-buffers=1 ! fakesink name=sink",
},
configs = {
"$(validateflow), pad=sink:sink, buffers-checksum=true",
}
# The validate tool will simply play the pipeline until EOS is reached.


@@ -0,0 +1 @@
fakesrc.simple.validatetest


@@ -0,0 +1,4 @@
event stream-start: GstEventStreamStart, flags=(GstStreamFlags)GST_STREAM_FLAG_NONE, group-id=(uint)1;
event segment: format=BYTES, start=0, offset=0, stop=18446744073709551615, time=0, base=0, position=0
buffer: checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709, dts=0:00:00.000000000, flags=discont
event eos: (no structure)


@@ -1,62 +1,99 @@
# Validate Flow plugin
Validate Flow plugin — GstValidate plugin to record a log of buffers and
events and compare them to an expectation file.
## Description
This plugin exists for the purpose of testing non-regular-playback use cases
where the test author specifies the full pipeline, a series of actions and needs
to check whether the generated buffers and events make sense.
The testing procedure goes like this:
1. The test author writes a [.validatetest](gst-validate-test-file.md) test
   where validateflow is used. A pad where monitoring will occur is specified,
   and possibly a list of [actions](gst-validate-action-types.md) to run can
   also be specified.
2. The test author runs the test with the desired pipeline, the configuration
   and the actions. Since an expectation file does not exist at this point,
   validateflow will create one. The author should check its contents for any
   missing or unwanted events. No actual checking is done by validateflow in
   this step, since there is nothing to compare to yet.
3. Further executions of the test will also record the produced buffers and
   events, but now they will be compared to the previous log (expectation
   file). Any difference will be reported as a test failure. The original
   expectation file is never modified by validateflow. Any desired changes
   can be made by editing the file manually or deleting it and running the
   test again.
## Example
### Simplest example
The following is an example of a `fakesrc.simple.validatetest` file using
validateflow.
{{ plugins/fakesrc.simple.validatetest.yaml }}
Then generate the expectation file with:
``` bash
gst-validate-1.0 --set-test-file /path/to/fakesrc.simple.validatetest
```
This will generate the
`/path/to/fakesrc.simple/flow-expectations/log-sink-sink-expected` file
containing:

{{ plugins/fakesrc.simple/flow-expectations/log-sink-sink-expected.log }}

Note that the test will be marked as "SKIPPED" when we generate expectation
files.
The test can now be run with:
``` bash
gst-validate-1.0 --set-test-file /path/to/fakesrc.simple.validatetest
```
### Example controlling the source
The following is an example of the `qtdemux_change_edit_list.validatetest` file using validateflow.
``` yaml
set-globals, media_dir="$(test_dir)/../../../medias/"
meta,
seek=false,
handles-states=false,
args = {
"appsrc ! qtdemux ! fakesink async=false",
},
configs = {
"$(validateflow), pad=fakesink0:sink, record-buffers=false",
}
# Scenario action types
appsrc-push, target-element-name=appsrc0, file-name="$(media_dir)/fragments/car-20120827-85.mp4/init.mp4"
appsrc-push, target-element-name=appsrc0, file-name="$(media_dir)/fragments/car-20120827-85.mp4/media1.mp4"
checkpoint, text="A moov with a different edit list is now pushed"
appsrc-push, target-element-name=appsrc0, file-name="$(media_dir)/fragments/car-20120827-86.mp4/init.mp4"
appsrc-push, target-element-name=appsrc0, file-name="$(media_dir)/fragments/car-20120827-86.mp4/media2.mp4"
stop
```
This example shows the elements of a typical validate flow test (a pipeline, a
config and a scenario). Some actions typically used together with validateflow
can also be seen. Notice variable interpolation is used to fill absolute paths
for media files in the scenario (`$(test_dir)`). In the configuration,
`$(validateflow)` is expanded to something like this, containing proper paths
for expectations and actual results (these values are interpolated from the
`.validatetest` file location):
``` yaml
validateflow, expectations-dir="/validate/test/file/path/validateqtdemux_change_edit_list/flow-expectations/", actual-results-dir="$(GST_VALIDATE_LOGSDIR)/logs/validate/launch_pipeline/qtdemux_change_edit_list"
```
The resulting log looks like this:
@@ -80,26 +117,63 @@ event caps: video/x-h264, stream-format=(string)avc, alignment=(string)au, level
## Configuration
In order to use the plugin, a validate configuration file must be provided,
containing a line starting with `validateflow` followed by a number of
settings. Every `validateflow` line creates a `ValidateFlowOverride`, which
listens to a given pad. A test may have several `validateflow` lines,
therefore having several overrides listening to different pads with different
settings.
* `pad`: Required. Name of the pad that will be monitored.
* `record-buffers`: Default: false. Whether buffers will be logged. By default
  only events are logged.
* `buffers-checksum`: Default: false. Whether a checksum of the buffer data is
  logged. Implies `record-buffers`.
* `ignored-fields`: Default: `"stream-start={ stream-id }"` (as they are often
  not reproducible). Key with a serialized GstValueList(str) of fields not to
  record.
* `logged-fields`: Default: `NULL`. Key with a serialized GstValueList(str) of
  fields to record, e.g. `logged-event-fields="stream-start={flags},
  caps={width, height, framerate}, buffer={pts}"`. Overrides
  `ignored-event-fields` for the specified event types.
* `ignored-event-types`: Default: `{ }`. List of event type names not to
  record.
* `logged-event-types`: Default: `NULL`. List of event type names to record.
  If not provided, all events are logged except those listed in
  `ignored-event-types`.
* `expectations-dir`: Path to the directory where the expectations will be
  written if they don't exist, relative to the current working directory. By
  default the current working directory is used, but this setting is usually
  set automatically as part of the `%(validateflow)s` expansion to a correct
  path like `~/gst-validate/gst-integration-testsuites/flow-expectations/<test
  name>`.
* `actual-results-dir`: Path to the directory where the events will be
  recorded. The expectation file will be compared to this. By default the
  current working directory is used, but this setting is usually set
  automatically as part of the `%(validateflow)s` expansion to the test log
  directory, i.e. `~/gst-validate/logs/validate/launch_pipeline/<test name>`.
* `generate-expectations`: Default: unset. When set to `true`, the expectation
  file will be written and no testing will be done; when set to `false`, the
  expectation file is required. If a `validateflow` config entry sets this
  field without specifying any other parameters, all validateflow overrides
  will use that value as their default.
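The global default described for `generate-expectations` can be sketched like
this (a hypothetical test configuration; the pad name is illustrative):

``` yaml
# Standalone entry: no "pad" field, so it only sets the default for
# all validateflow overrides in this test
validateflow, generate-expectations=false
# Regular override, monitored as usual
validateflow, pad=sink:sink, buffers-checksum=true
```

With `generate-expectations=false`, a missing expectation file becomes a hard
error instead of being silently generated, which is typically the behaviour
wanted on CI servers.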
## Scenario actions
Scenarios with validateflow work in the same way as other tests. Often
validatetests will use appsrc in order to control the flow of data precisely,
possibly interleaving events in between. The following is a list of useful
actions.
* `appsrc-push`: Pushes a buffer from an appsrc element and waits for the
  chain operation to finish. A path to a file is provided, optionally with an
  offset and/or size.
* `appsrc-eos`: Queues an EOS event from the appsrc. The action finishes
  immediately at this point.
* `stop`: Tears down the pipeline and stops the test.
* `checkpoint`: Records a "checkpoint" message in all validateflow overrides,
  with an optional explanation message. This is useful to check that certain
  events or buffers are sent at a specific moment in the scenario, and can
  also aid comprehension of the scenario.
More details on these actions can be queried from the command line, like this:


@@ -1245,7 +1245,10 @@ gst_validate_replace_variables_in_string (gpointer source,
if (!var_value) {
gst_validate_error_structure (source,
"Trying to use undefined variable `%s`.\n"
" Available vars:\n"
" - locals%s\n"
" - globals%s\n",
varname, gst_structure_to_string (local_vars),
gst_structure_to_string (global_vars));


@@ -334,13 +334,13 @@ gst_validate_get_config (const gchar * structname)
const gchar *config;
GStrv tmp;
guint i;
GList *testfile_configs = NULL, *configs = NULL;
testfile_configs = gst_validate_get_testfile_configs (structname);
config = g_getenv ("GST_VALIDATE_CONFIG");
if (!config) {
return testfile_configs;
}
tmp = g_strsplit (config, G_SEARCHPATH_SEPARATOR_S, -1);
@@ -355,6 +355,7 @@ gst_validate_get_config (const gchar * structname)
configs = g_list_concat (configs, l);
}
g_strfreev (tmp);
configs = g_list_concat (configs, testfile_configs);
return configs;
}


@@ -1002,6 +1002,12 @@ not been tested and explicitly activated if you set use --wanted-tests ALL""")
group.add_argument("--validate-enable-iqa-tests", dest="validate_enable_iqa_tests",
help="Enable Image Quality Assessment validation tests.",
default=False, action='store_true')
group.add_argument("--validate-generate-expectations", dest="validate_generate_expectations",
choices=['auto', 'enabled', 'disabled'],
help="Force generating expectations (when set to `enabled`),"
" force failure on missing expectations when set to `disabled`,"
" and create them if needed when set to `auto`.",
default='auto')
group.add_argument("--validate-generate-ssim-reference-files",
help="(re)generate ssim reference image files.",
default=False, action='store_true')
@@ -1199,6 +1205,13 @@ not been tested and explicitly activated if you set use --wanted-tests ALL""")
self._run_defaults = False
options.wanted_tests[
i] = options.wanted_tests[i].replace("ALL", "")
options.validate_default_config = None
if options.validate_generate_expectations != 'auto':
options.validate_default_config = os.path.join(options.logsdir, "__validate_default.config")
with open(options.validate_default_config, 'w') as f:
val = "true" if options.validate_generate_expectations == "enabled" else "false"
print("validateflow,generate-expectations=%s" % val, file=f)
try:
options.wanted_tests.remove("")
except ValueError:
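The default-config logic above can be modelled as a small standalone sketch
(the helper name and the use of a temporary directory are illustrative, not
part of gst-validate-launcher):

```python
import os
import tempfile

def write_default_config(logsdir, generate_expectations):
    # Mirrors the launcher logic: only 'enabled' and 'disabled' produce a
    # default config file; 'auto' leaves the default config unset.
    if generate_expectations == 'auto':
        return None
    path = os.path.join(logsdir, "__validate_default.config")
    with open(path, 'w') as f:
        val = "true" if generate_expectations == "enabled" else "false"
        print("validateflow,generate-expectations=%s" % val, file=f)
    return path

logsdir = tempfile.mkdtemp()
path = write_default_config(logsdir, "disabled")
with open(path) as f:
    print(f.read().strip())  # validateflow,generate-expectations=false
```

The generated file is then injected into every test through
`GST_VALIDATE_CONFIG`, as the next hunk shows.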


@@ -598,11 +598,9 @@ class Test(Loggable):
if not subenv:
subenv = self.extra_env_variables
cconf = subenv.get('GST_VALIDATE_CONFIG', "")
paths = [c for c in cconf.split(os.pathsep) if c] + [config]
subenv['GST_VALIDATE_CONFIG'] = os.pathsep.join(paths)
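The rewritten helper appends a config to the `GST_VALIDATE_CONFIG` search
path whether or not the variable is already set. A minimal standalone model of
that behaviour (the function name is taken from the hunk above, but the code
is a self-contained sketch):

```python
import os

def add_validate_config(subenv, config):
    # Split the existing search path, drop empty entries (covers both a
    # missing and an empty variable), then append the new config path.
    cconf = subenv.get('GST_VALIDATE_CONFIG', "")
    paths = [c for c in cconf.split(os.pathsep) if c] + [config]
    subenv['GST_VALIDATE_CONFIG'] = os.pathsep.join(paths)

env = {}
add_validate_config(env, "/tmp/a.config")
add_validate_config(env, "/tmp/b.config")
print(env['GST_VALIDATE_CONFIG'])  # both paths, joined by os.pathsep
```

Collapsing the old if/else into one path avoids the bug-prone duplication of
the join logic for the "already set" and "not set" cases.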
def launch_server(self):
return None
@@ -905,6 +903,10 @@ class GstValidateTest(Test):
def get_subproc_env(self):
subproc_env = os.environ.copy()
if self.options.validate_default_config:
self.add_validate_config(self.options.validate_default_config,
subproc_env, )
subproc_env["GST_VALIDATE_UUID"] = self.get_uuid()
subproc_env["GST_VALIDATE_LOGSDIR"] = self.options.logsdir


@@ -69,6 +69,7 @@ struct _ValidateFlowOverride
gchar *actual_results_file_path;
ValidateFlowMode mode;
gboolean was_attached;
GstStructure *config;
/* output_file will refer to the expectations file if it did not exist,
* or to the actual results file otherwise. */
@@ -223,6 +224,8 @@ validate_flow_override_new (GstStructure * config)
const GValue *tmpval;
flow = g_object_new (VALIDATE_TYPE_FLOW_OVERRIDE, NULL);
flow->config = config;
GST_OBJECT_FLAG_SET (flow, GST_OBJECT_FLAG_MAY_BE_LEAKED);
override = GST_VALIDATE_OVERRIDE (flow);
@@ -326,7 +329,41 @@ validate_flow_override_new (GstStructure * config)
g_free (pad_name_safe);
}
flow->was_attached = FALSE;
gst_validate_override_register_by_name (flow->pad_name, override);
override->buffer_handler = validate_flow_override_buffer_handler;
override->buffer_probe_handler = validate_flow_override_buffer_handler;
override->event_handler = validate_flow_override_event_handler;
g_signal_connect (flow, "notify::validate-runner",
G_CALLBACK (_runner_set), NULL);
return flow;
}
static void
validate_flow_setup_files (ValidateFlowOverride * flow, gint default_generate)
{
gint local_generate_expectations = -1;
gboolean generate_if_doesn_exit = default_generate == -1;
gboolean exists =
g_file_test (flow->expectations_file_path, G_FILE_TEST_EXISTS);
if (generate_if_doesn_exit) {
gst_structure_get_boolean (flow->config, "generate-expectations",
&local_generate_expectations);
generate_if_doesn_exit = local_generate_expectations == -1;
}
if ((!default_generate || !local_generate_expectations) && !exists) {
gst_validate_error_structure (flow->config, "Not writing expectations and"
" configured expectation file %s doesn't exist in config:\n > %"
GST_PTR_FORMAT, flow->expectations_file_path, flow->config);
}
if (exists && local_generate_expectations != 1 && default_generate != 1) {
flow->mode = VALIDATE_FLOW_MODE_WRITING_ACTUAL_RESULTS;
flow->output_file_path = g_strdup (flow->actual_results_file_path);
gst_validate_printf (NULL, "**-> Checking expectations file: '%s'**\n",
@@ -352,18 +389,6 @@ validate_flow_override_new (GstStructure * config)
gst_validate_abort ("Could not open for writing: %s",
flow->output_file_path);
flow->was_attached = FALSE;
gst_validate_override_register_by_name (flow->pad_name, override);
override->buffer_handler = validate_flow_override_buffer_handler;
override->buffer_probe_handler = validate_flow_override_buffer_handler;
override->event_handler = validate_flow_override_event_handler;
g_signal_connect (flow, "notify::validate-runner",
G_CALLBACK (_runner_set), NULL);
return flow;
}
static void
@@ -484,8 +509,12 @@ runner_stopping (GstValidateRunner * runner, ValidateFlowOverride * flow)
return;
}
if (flow->mode == VALIDATE_FLOW_MODE_WRITING_EXPECTATIONS) {
gst_validate_skip_test ("wrote expectation files for %s.\n",
flow->pad_name);
return;
}
{
gchar *contents;
@@ -582,6 +611,7 @@ static gboolean
gst_validate_flow_init (GstPlugin * plugin)
{
GList *tmp;
gint default_generate = -1;
GList *config_list = gst_validate_plugin_get_config (plugin);
if (!config_list)
@@ -589,10 +619,26 @@ gst_validate_flow_init (GstPlugin * plugin)
for (tmp = config_list; tmp; tmp = tmp->next) {
GstStructure *config = tmp->data;
ValidateFlowOverride *flow;
if (gst_structure_has_field (config, "generate-expectations") &&
!gst_structure_has_field (config, "pad")) {
if (!gst_structure_get_boolean (config, "generate-expectations",
&default_generate)) {
gst_validate_error_structure (config,
"Field 'generate-expectations' should be a boolean");
}
continue;
}
flow = validate_flow_override_new (config);
all_overrides = g_list_append (all_overrides, flow);
}
for (tmp = all_overrides; tmp; tmp = tmp->next)
validate_flow_setup_files (tmp->data, default_generate);
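The mode selection in `validate_flow_setup_files` can be summarised with a
simplified model. This sketch uses `None` for "unset" and gives a per-override
setting precedence over the global default; the C code combines the two flags
slightly differently, so treat this as an approximation of the behaviour, not
a transcription:

```python
def expectation_mode(exists, default_generate=None, local_generate=None):
    # Effective setting: the per-override value wins over the global default.
    generate = local_generate if local_generate is not None else default_generate
    if generate is False and not exists:
        # generate-expectations=false forbids creating the missing file.
        raise RuntimeError(
            "expectation file missing and generate-expectations=false")
    if exists and generate is not True:
        return "check"               # compare against the existing file
    return "write-expectations"      # (re)generate the expectation file

print(expectation_mode(exists=False))                        # write-expectations
print(expectation_mode(exists=True))                         # check
print(expectation_mode(exists=True, default_generate=True))  # write-expectations
```

This matches the documented behaviour: missing files are generated by
default, `true` forces regeneration even when the file exists, and `false`
turns a missing file into a test error.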
/* *INDENT-OFF* */
gst_validate_register_action_type_dynamic (plugin, "checkpoint",
GST_RANK_PRIMARY, _execute_checkpoint, ((GstValidateActionParameter [])