When we have nested timelines, we need to make sure the underlying
formatted file is reloaded when committing the main composition, so that
the new timeline is taken into account.
In order to keep the implementation as simple as possible, we make
sure that whenever the toplevel composition is committed, the decodebin
holding the gesdemux is torn down so that a new demuxer is created
with the new content of the timeline.
To do that, we send a NleCompositionQueryNeedsTearDown query, which
gesdemux answers, leading to a full nlecomposition stack
deactivation/activation cycle.
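A minimal sketch of how such a custom query can be sent and answered; the
structure layout, the "needs-teardown" field, the query direction and the
function names are illustrative assumptions, not the exact ones used in the
code:

```c
#include <gst/gst.h>

/* Composition side: ask whether the current stack needs a teardown. */
static gboolean
composition_needs_teardown (GstPad *pad)
{
  gboolean needs_teardown = FALSE;
  GstQuery *query = gst_query_new_custom (GST_QUERY_CUSTOM,
      gst_structure_new_empty ("NleCompositionQueryNeedsTearDown"));

  if (gst_pad_query (pad, query)) {
    const GstStructure *s = gst_query_get_structure (query);

    gst_structure_get_boolean (s, "needs-teardown", &needs_teardown);
  }

  gst_query_unref (query);
  return needs_teardown;
}

/* gesdemux side: answer the query in its pad query handler. */
static gboolean
gesdemux_src_query (GstPad *pad, GstObject *parent, GstQuery *query)
{
  if (GST_QUERY_TYPE (query) == GST_QUERY_CUSTOM) {
    GstStructure *s = gst_query_writable_structure (query);

    if (gst_structure_has_name (s, "NleCompositionQueryNeedsTearDown")) {
      /* A commit changed the timeline: request a full
       * deactivation/activation cycle of the nlecomposition stack. */
      gst_structure_set (s, "needs-teardown", G_TYPE_BOOLEAN, TRUE, NULL);
      return TRUE;
    }
  }

  return gst_pad_query_default (pad, parent, query);
}
```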
Make sure that an event resulting from the seek is seen before removing
the pad probe, dropping everything until that is the case.
This guarantees that the seek happens before `nlesource` outputs
anything. This was not necessary before: with decodebin or a usual
source, flushing seeks lead to synchronous flush_start/flush_stop and we
could safely assume that once the seek was sent, it was happening.
With a nested `nlecomposition` this assumption is simply not true, as
inside the composition seeks are basically cached and happen later in
the composition updating thread.
This fixes races where we ended up removing the blocking probe before
the seek actually started to be executed in the nlecomposition
nested inside an nlesource, which led to data from *before* the seek
being output, meaning we could display wrong frames, and it also led
to interesting deadlocks.
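As a rough sketch, such a probe could look like the following, assuming the
seek's seqnum was recorded when the seek was sent; the names are
illustrative, not the actual nlesource code:

```c
#include <gst/gst.h>

static GstPadProbeReturn
drop_until_seek_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  guint32 seek_seqnum = GPOINTER_TO_UINT (user_data);

  if (info->type & GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM) {
    GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

    if (gst_event_get_seqnum (event) == seek_seqnum) {
      /* The seek has actually been executed: data flow may resume. */
      gst_pad_remove_probe (pad, info->id);
      return GST_PAD_PROBE_OK;
    }
  }

  /* Drop buffers and stale events produced before the seek happened. */
  return GST_PAD_PROBE_DROP;
}
```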
Seeks that lead to a stack change lead to deactivating the current
stack. At that point we explicitly flush downstream as a reaction to
the flushing seek. Until now those flushes had a random seqnum; this
fails if we are a nested composition, as the parent composition will end
up dropping that flush, which in turn might lead to deadlocks. For
example, the flush goes through a `compositor` which wants to flush
downstream to stop its srcpad task, but that flush won't have
"released" its srcpad thread if the composition srcpad drops it, meaning
it will never be able to stop the task.
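A minimal sketch of the fix, assuming a helper like `flush_downstream` is
called from the seek handling path with the seek's seqnum (the names are
illustrative):

```c
#include <gst/gst.h>

/* Reuse the seek's seqnum on the flushes we emit ourselves, so that a
 * parent composition can match them against the seek it forwarded
 * instead of dropping them as unrelated. */
static void
flush_downstream (GstPad *srcpad, guint32 seek_seqnum)
{
  GstEvent *flush_start = gst_event_new_flush_start ();
  GstEvent *flush_stop = gst_event_new_flush_stop (TRUE);

  gst_event_set_seqnum (flush_start, seek_seqnum);
  gst_event_set_seqnum (flush_stop, seek_seqnum);

  gst_pad_push_event (srcpad, flush_start);
  gst_pad_push_event (srcpad, flush_stop);
}
```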
Otherwise, if we shut down a composition with a nested composition
(inside a source in the test) and leak it, we end up with the nested
composition task still running (in READY), which is bad.
Add a test for that which leaks the pipeline on purpose.
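A rough sketch of the idea, assuming the composition keeps its update task
in a `task` member (the real member and function names differ):

```c
#include <gst/gst.h>

/* Illustrative stand-in for the real NleComposition instance struct. */
typedef struct {
  GstBin parent;
  GstTask *task;   /* the composition's update task (assumed member) */
} NleComposition;

static GstElementClass *parent_class;  /* set by the usual GObject boilerplate */

static GstStateChangeReturn
nle_composition_change_state (GstElement *element, GstStateChange transition)
{
  NleComposition *comp = (NleComposition *) element;

  if (transition == GST_STATE_CHANGE_PAUSED_TO_READY) {
    /* Stop *and* join the update task: once we reach READY no thread
     * may survive, even if the application leaks the pipeline. */
    gst_task_stop (comp->task);
    gst_task_join (comp->task);
  }

  return parent_class->change_state (element, transition);
}
```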
First, marshalling it to the main thread is dangerous, as it is a
blocking operation and should never happen there.
The asset cache is MT-safe now, so it is possible to load the timeline
from that thread directly, making sure to have one GstDiscoverer per
thread.
Use that new feature in gesdemux by loading the timeline directly from
the streaming thread. Modifying the timeline is not supported anyway.
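A minimal sketch of the per-thread discoverer, kept in thread-local
storage; the timeout value and the names are assumptions:

```c
#include <gst/pbutils/pbutils.h>

static GPrivate discoverer_key = G_PRIVATE_INIT ((GDestroyNotify) g_object_unref);

/* Lazily create one GstDiscoverer per calling thread, so streaming
 * threads never share or contend on a single discoverer instance. */
static GstDiscoverer *
get_thread_discoverer (void)
{
  GstDiscoverer *discoverer = g_private_get (&discoverer_key);

  if (!discoverer) {
    discoverer = gst_discoverer_new (15 * GST_SECOND, NULL);
    gst_discoverer_start (discoverer);
    g_private_set (&discoverer_key, discoverer);
  }

  return discoverer;
}
```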
Otherwise there is a race where we trigger the seek at the exact
same time the composition is being torn down, potentially leading
to basesrc restarting its srcpad task, which then ends up being leaked.
Fixes ges.playback.scrub_backward_seeking.test_title.audio_video.vorbis_theora_ogg
and probably all its friends timing out with the following stack trace:
(gdb) t a a bt
Thread 4 (Thread 0x7f5962acd700 (LWP 19997)):
#0 0x00007f5976713efd in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x00007f5976a9d3f3 in g_cond_wait (cond=cond@entry=0x7f5938125410, mutex=mutex@entry=0x7f59381253c8) at gthread-posix.c:1402
#2 0x00007f5976c9e26b in gst_task_func (task=0x7f59381253b0 [GstTask]) at ../subprojects/gstreamer/gst/gsttask.c:313
#3 0x00007f5976a7ecb3 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#4 0x00007f5976a7e2aa in g_thread_proxy (data=0x7f5954071d40) at gthread.c:784
#5 0x00007f59767ea58e in start_thread (arg=<optimized out>) at pthread_create.c:486
#6 0x00007f59767196a3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 3 (Thread 0x7f5963fff700 (LWP 19995)):
#0 0x00007f597670e421 in __GI___poll (fds=0xe32da0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007f5976a553a6 in g_main_context_poll (priority=<optimized out>, n_fds=2, fds=0xe32da0, timeout=<optimized out>, context=0xe31ff0) at gmain.c:4221
#2 0x00007f5976a553a6 in g_main_context_iterate (context=0xe31ff0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3915
#3 0x00007f5976a55762 in g_main_loop_run (loop=0xe32130) at gmain.c:4116
#4 0x00007f59768db10a in gdbus_shared_thread_func (user_data=0xe31fc0) at gdbusprivate.c:275
#5 0x00007f5976a7e2aa in g_thread_proxy (data=0xe1b8a0) at gthread.c:784
#6 0x00007f59767ea58e in start_thread (arg=<optimized out>) at pthread_create.c:486
#7 0x00007f59767196a3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 2 (Thread 0x7f5968dcc700 (LWP 19994)):
#0 0x00007f597670e421 in __GI___poll (fds=0xe1bcc0, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007f5976a553a6 in g_main_context_poll (priority=<optimized out>, n_fds=1, fds=0xe1bcc0, timeout=<optimized out>, context=0xe1b350) at gmain.c:4221
#2 0x00007f5976a553a6 in g_main_context_iterate (context=context@entry=0xe1b350, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3915
#3 0x00007f5976a554d0 in g_main_context_iteration (context=0xe1b350, may_block=may_block@entry=1) at gmain.c:3981
#4 0x00007f5976a55521 in glib_worker_main (data=<optimized out>) at gmain.c:5861
#5 0x00007f5976a7e2aa in g_thread_proxy (data=0xe1b800) at gthread.c:784
#6 0x00007f59767ea58e in start_thread (arg=<optimized out>) at pthread_create.c:486
#7 0x00007f59767196a3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 1 (Thread 0x7f5975df4fc0 (LWP 19993)):
#0 0x00007f5976713efd in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x00007f5976a9d3f3 in g_cond_wait (cond=cond@entry=0xe34020, mutex=0xe39b80) at gthread-posix.c:1402
#2 0x00007f5976a7f41c in g_thread_pool_free (pool=0xe34000, immediate=0, wait_=<optimized out>) at gthreadpool.c:776
#3 0x00007f5976c9f1ca in default_cleanup (pool=0xe256b0 [GstTaskPool]) at ../subprojects/gstreamer/gst/gsttaskpool.c:89
#4 0x00007f5976c9e32d in init_klass_pool (klass=<optimized out>) at ../subprojects/gstreamer/gst/gsttask.c:161
#5 0x00007f5976c9e502 in gst_task_cleanup_all () at ../subprojects/gstreamer/gst/gsttask.c:381
#6 0x00007f5976c214f4 in gst_deinit () at ../subprojects/gstreamer/gst/gst.c:1095
#7 0x000000000040394f in main (argc=6, argv=<optimized out>) at ../subprojects/gst-editing-services/tools/ges-launch.c:94
This commit ensures that internal elements are cleaned up on state
change failure. nlecomposition posts its own error message after
cleaning up its children. If we don't suppress the child's error in the
meantime, an application-triggered downward state change (in reaction to
the child's error message) might reach nlecomposition before the
internal children are cleaned up, which eventually results in a downward
state change failure.
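A sketch of the suppression, assuming a flag that is set while the internal
children are being cleaned up (the flag and member names are illustrative):

```c
#include <gst/gst.h>

/* Illustrative stand-in for the real NleComposition instance struct. */
typedef struct {
  GstBin parent;
  gboolean cleaning_up_children;  /* assumed flag, set during cleanup */
} NleComposition;

static GstBinClass *parent_class;  /* set by the usual GObject boilerplate */

static void
nle_composition_handle_message (GstBin *bin, GstMessage *message)
{
  NleComposition *comp = (NleComposition *) bin;

  if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ERROR &&
      comp->cleaning_up_children) {
    /* Swallow the child's error: nlecomposition posts its own error once
     * the internal elements are cleaned up, so the application cannot
     * trigger a downward state change in between. */
    gst_message_unref (message);
    return;
  }

  parent_class->handle_message (bin, message);
}
```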
/usr/bin/ld: .libs/libgstges_la-gesdemux.o: in function `ges_timeline_new_from_uri_from_main_thread':
./plugins/ges/gesdemux.c:279: undefined reference to `gst_discoverer_new'
/usr/bin/ld: ./plugins/ges/gesdemux.c:288: undefined reference to `gst_discoverer_start'
We have an optimisation to avoid double seeks when a seek is past
the end of the current stack. The problem is that we no longer flush
the pipeline when this code is reached. This patch comments out the
optimization, adding a FIXME. As mentioned there, flushing the stack
instead of seeking would work, but does not seem trivial considering all
the mechanics in place to forward (or not) the events.
https://bugzilla.gnome.org/show_bug.cgi?id=787405