Make buffer timestamps more accurate and, more importantly, actually
representative of the timing of the MIDI events.
Previously, buffers were only sent with timestamps aligned to a 10ms
boundary, which was just wrong; now the buffer timestamp represents
the real time of the MIDI event.
Conveniently, the ALSA sequencer API supports scheduling events in the
future so the sequencer infrastructure can be used to have the tick
delivered at the right time, avoiding any custom scheduling mechanism.
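Roughly, scheduling a tick through the sequencer can look like this
(queue/port setup, the event type and the timing variables are
assumptions, not necessarily the element's actual code):

  /* seq, port, queue and tick_time are set up elsewhere */
  snd_seq_event_t ev;

  snd_seq_ev_clear (&ev);
  ev.type = SND_SEQ_EVENT_TICK;
  snd_seq_ev_set_source (&ev, port);
  snd_seq_ev_set_dest (&ev, snd_seq_client_id (seq), port);
  /* schedule on our queue at an absolute real time in the future;
   * the sequencer delivers the event back to us at that time */
  snd_seq_ev_schedule_real (&ev, queue, 0 /* absolute */, &tick_time);
  snd_seq_event_output (seq, &ev);
  snd_seq_drain_output (seq);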
Tick scheduling starts on the first transition to PLAYING, and the
delay is also calculated when the pipeline goes to PLAYING.
https://bugzilla.gnome.org/show_bug.cgi?id=787683
When setting the "ports" property the value is duplicated but it's not
freed when the elements stops.
Reported by Valgrind (example run with "alsamidisrc ports=128:0"):
6 bytes in 1 blocks are definitely lost in loss record 30 of 1,911
at 0x4C2BBEF: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x5411528: g_malloc (gmem.c:94)
by 0x542A9FE: g_strdup (gstrfuncs.c:363)
by 0x775211E: gst_alsa_midi_src_set_property (gstalsamidisrc.c:284)
by 0x5184A4D: object_set_property (gobject.c:1439)
by 0x5184A4D: g_object_setv (gobject.c:2245)
by 0x51859DD: g_object_set_property (gobject.c:2529)
by 0x4F0474C: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1203.0)
by 0x4F065C8: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1203.0)
by 0x4F07557: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1203.0)
by 0x4EFE3EE: gst_parse_launch_full (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1203.0)
by 0x4EFE673: gst_parse_launchv_full (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1203.0)
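A minimal sketch of the corresponding fix, assuming the duplicated
string lives in a "ports" field and is released in the basesrc stop
vfunc (it could equally go in finalize):

  static gboolean
  gst_alsa_midi_src_stop (GstBaseSrc * basesrc)
  {
    GstAlsaMidiSrc *self = GST_ALSA_MIDI_SRC (basesrc);

    /* release the string duplicated in set_property */
    g_free (self->ports);
    self->ports = NULL;

    return TRUE;
  }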
https://bugzilla.gnome.org/show_bug.cgi?id=787683
The while loop should end when all buffers "and" the preroll buffer
are consumed, but this means it must continue waiting if there are
still some pending buffers "or" a pending preroll buffer.
The unit test was correct but racy because of this mistake: because
of the wrong "and", the while loop could finish too early.
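Sketch of the corrected wait condition (field names hypothetical):

  /* keep waiting while there are pending buffers "or" a pending
   * preroll buffer; the buggy "&&" let the loop end too early */
  while (priv->num_buffers > 0 || priv->preroll != NULL)
    g_cond_wait (&priv->cond, &priv->mutex);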
cd tests/check && GST_CHECKS=test_query_drain make elements/appsink.forever
https://bugzilla.gnome.org/show_bug.cgi?id=789763
n_frames could end up being quite big (potentially up to G_MAXINT64),
which would result in overflowing 64 bits when multiplying it by
GST_SECOND. Instead, move GST_SECOND to the num argument.
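A before/after sketch with gst_util_uint64_scale (variable names
assumed):

  /* before: n_frames * GST_SECOND is computed in plain 64-bit
   * arithmetic and can wrap before the helper ever sees it */
  duration = gst_util_uint64_scale (n_frames * GST_SECOND, 1, rate);

  /* after: GST_SECOND is passed as the num argument, so the helper
   * performs val * num with 128-bit intermediate precision */
  duration = gst_util_uint64_scale (n_frames, GST_SECOND, rate);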
If we are shutting down, don't spawn a cleanup thread to clean up old
groups; instead, queue them to be cleaned up in the state change
thread.
This avoids (hopefully for good) having a race between the state change
thread and other threads trying to deactivate elements/pads.
Deactivating pads from two threads isn't 100% MT-safe. There is a
slim chance that the GstPadActivateFunc might be called twice with
the same values (in this case from the cleanup thread *and* from
the GstElement change_state function when going from PAUSED to READY).
In order to avoid that, call any existing cleanup function *before*
calling the parent change_state implementation on downwards state
changes.
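A sketch of that ordering (element and helper names hypothetical):

  static GstStateChangeReturn
  gst_parse_bin_change_state (GstElement * element, GstStateChange transition)
  {
    GstParseBin *pbin = GST_PARSE_BIN (element);

    switch (transition) {
      case GST_STATE_CHANGE_PAUSED_TO_READY:
        /* run (and wait for) any pending cleanup *before* chaining
         * up, so the parent's pad deactivation cannot race with it */
        gst_parse_bin_flush_cleanup (pbin);
        break;
      default:
        break;
    }

    return GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
  }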
There is a race going on somewhere when we attempt to remove elements
*while* the parent container is switching to PLAYING.
In order to avoid this issue with discoverer, make sure we never
remove elements while switching to PLAYING.
We only need to initialize the mutex/cond once when creating the
element and then release them when we are done with the element.
Avoids weird "mutex_clear called when still locked" issues
When deactivating pads, we need to ensure that the streaming threads
going through the pads we wish to deactivate can cleanly return.
Failure to do that would result in the streaming locks of those
pads never being released. The end result would be a deadlock
when stopping decodebin2.
In order to avoid that situation, release the "dyn" lock around
the deactivation code. And refactor the code to cope with the
list of blocked pads having potentially changed when re-acquiring
the lock.
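The pattern, roughly (assuming decodebin2's DYN_LOCK/DYN_UNLOCK
macros; the list handling is simplified):

  DYN_LOCK (dbin);
  while (dbin->blocked_pads) {
    GstPad *pad = gst_object_ref (dbin->blocked_pads->data);

    dbin->blocked_pads =
        g_list_delete_link (dbin->blocked_pads, dbin->blocked_pads);

    /* drop the lock so streaming threads in this pad can return and
     * release the stream lock, then deactivate */
    DYN_UNLOCK (dbin);
    gst_pad_set_active (pad, FALSE);
    gst_object_unref (pad);

    /* re-acquire; the list may have changed meanwhile, so always
     * restart from its current head */
    DYN_LOCK (dbin);
  }
  DYN_UNLOCK (dbin);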
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the parsebin element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (READY=>NULL) make sure the thread is finished
https://bugzilla.gnome.org/show_bug.cgi?id=790007
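Both rules, sketched against parsebin (the same pattern applies to
decodebin2 below; field and helper names assumed):

  static void
  start_cleanup_thread (GstParseBin * pbin, GstParseGroup * group)
  {
    /* only one cleanup thread at a time: wait for the previous one */
    if (pbin->cleanup_thread)
      g_thread_join (pbin->cleanup_thread);
    pbin->cleanup_thread =
        g_thread_new ("cleanup", (GThreadFunc) free_group, group);
  }

  static void
  join_cleanup_thread (GstParseBin * pbin)
  {
    /* called on shutdown so the thread never outlives the element */
    if (pbin->cleanup_thread) {
      g_thread_join (pbin->cleanup_thread);
      pbin->cleanup_thread = NULL;
    }
  }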
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the decodebin2 element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (READY=>NULL) make sure the thread is finished
https://bugzilla.gnome.org/show_bug.cgi?id=790007
Instead of emitting 'drained' whenever every single chain is drained
(which would result in plenty of signal emissions, and would also
occur when switching groups), only emit it when the top-level chain
is drained.
Furthermore, mark unknown (and therefore unexposed) pads as drained
since we'll never get EOS on them.
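The emission condition, roughly (names assumed):

  static void
  chain_drained (GstDecodeBin * dbin, GstDecodeChain * chain)
  {
    /* sub-chains drain all the time (e.g. on group switches); only
     * the top-level chain draining means the whole element is done */
    if (chain != dbin->decode_chain)
      return;

    g_signal_emit (dbin, gst_decode_bin_signals[SIGNAL_DRAINED], 0);
  }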
https://bugzilla.gnome.org/show_bug.cgi?id=787367
There were still some races going on where seeking events wouldn't
be properly intercepted/executed by this thread.
* Instead of always waiting for the GCond to be signalled, first just
check if there is an event available
* Take ownership of the event *while* the lock is taken and not
after releasing/reacquiring it
* Finally, acquire the lock at the very top and release it at the end
to make the code a bit more streamlined
This removes the remaining issues with seeks not being executed.
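The resulting loop shape, approximately (field names assumed):

  static gpointer
  gst_ogg_demux_push_loop (GstOggDemux * ogg)
  {
    /* lock once at the very top ... */
    g_mutex_lock (&ogg->seek_event_mutex);
    while (!ogg->seek_thread_stop) {
      GstEvent *event;

      /* first check if an event is already available; only wait on
       * the GCond if there is none */
      if (ogg->seek_event == NULL) {
        g_cond_wait (&ogg->seek_event_cond, &ogg->seek_event_mutex);
        if (ogg->seek_event == NULL)
          continue;
      }

      /* take ownership *while* the lock is held */
      event = ogg->seek_event;
      ogg->seek_event = NULL;

      g_mutex_unlock (&ogg->seek_event_mutex);
      gst_pad_push_event (ogg->sinkpad, event);
      g_mutex_lock (&ogg->seek_event_mutex);
    }
    /* ... and release it at the end */
    g_mutex_unlock (&ogg->seek_event_mutex);

    return NULL;
  }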
The previous branch will release the lock in the call to
gst_ogg_demux_seek_back_after_push_duration_check_unlock().
Only unlock it if we didn't call that function.
When calculating the duration in push mode, we seek to a certain
position and discard any data until we get data from that requested
position.
The problem is that basing ourselves solely on the offset to determine
whether we reached the target offset is wrong, since the source might
be fast enough to send us that target position *before* it has
processed the requested seek.
This would end up in a situation where:
* We think we're done with duration estimate
* We fire a seek back to "0" in the loop thread
* We resume normal processing
* ... except that we're still getting data from too far ahead which
we decide to process.
* And we start doing totally wrong granule/time/duration calculation
and pushing wrong data.
Instead of this confusion, wait until we receive data from the requested
seek. We do that by using the fact that the seqnum in
seek_event_drop_til will be non-zero until the SEGMENT corresponding
to the requested SEEK has been received.
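The gating, sketched (only seek_event_drop_til comes from this patch;
the rest is an assumption):

  /* in the sink event handler, on GST_EVENT_SEGMENT: */
  if (ogg->seek_event_drop_til != 0 &&
      gst_event_get_seqnum (event) == ogg->seek_event_drop_til) {
    /* this SEGMENT answers our duration-check seek: from here on
     * the incoming data really is from the requested position */
    ogg->seek_event_drop_til = 0;
  }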
Bonus: makes startup slightly faster
Code using the push_loop_thread (used for sending seeks) assumes
that the thread was properly started, except that this isn't always
true and the thread might not have completely started.
Instead, wait for the thread to properly start before doing anything
else.
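A minimal start-up handshake doing that (field names hypothetical):

  ogg->thread_started = FALSE;
  g_mutex_lock (&ogg->thread_lock);
  ogg->push_loop_thread =
      g_thread_new ("ogg-push", (GThreadFunc) gst_ogg_demux_push_loop, ogg);
  while (!ogg->thread_started)
    g_cond_wait (&ogg->thread_cond, &ogg->thread_lock);
  g_mutex_unlock (&ogg->thread_lock);

  /* ... and at the top of gst_ogg_demux_push_loop(): */
  g_mutex_lock (&ogg->thread_lock);
  ogg->thread_started = TRUE;
  g_cond_signal (&ogg->thread_cond);
  g_mutex_unlock (&ogg->thread_lock);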
If we are going to return a (potentially) 64-bit integer, don't use
a 32-bit one for the calculation; otherwise we could end up exceeding
the maximum size of a 32-bit int.
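Illustration of the pitfall (made-up values):

  gint width = 100000, height = 100000;

  gint64 bad  = width * height;           /* multiply done in 32 bits: wraps */
  gint64 good = (gint64) width * height;  /* promoted to 64 bits: correct */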