I found a rather reproducible race in a WebKit LayoutTest when a player
was instantiated and a VP8/VP9 video was loaded, then torn down after
getting the video dimensions from the caps.
The crash occurs during the handling of the first frame by gstvpxdec.
The following actions happen sequentially, leading to the crash.
(MT=Main Thread, ST=Streaming Thread)
MT: Sets pipeline state to NULL, which deactivates vpxdec's srcpad,
which in turn sets its FLUSHING flag.
ST: gst_vpx_dec_handle_frame() -- which is still running -- calls
gst_video_decoder_allocate_output_frame(); this in turn calls
gst_video_decoder_negotiate_unlocked(), which fails because the
srcpad is FLUSHING. As a direct consequence of the negotiation
failure, no pool is set.
gst_video_decoder_allocate_output_frame() nevertheless assumes there
is a pool and crashes with a critical in
gst_buffer_pool_acquire_buffer() a couple of statements later.
This patch fixes the bug by returning != GST_FLOW_OK when the
negotiation fails. If the srcpad is FLUSHING, GST_FLOW_FLUSHING is
returned, otherwise GST_FLOW_ERROR is used.
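A minimal sketch of the shape of the fix (the exact code in gstvideodecoder.c differs; the negotiate call is the internal unlocked helper mentioned above):

  if (!gst_video_decoder_negotiate_unlocked (decoder)) {
    /* No pool was set up; bail out instead of falling through to
     * gst_buffer_pool_acquire_buffer() on a NULL pool. */
    if (GST_PAD_IS_FLUSHING (decoder->srcpad))
      return GST_FLOW_FLUSHING;
    return GST_FLOW_ERROR;
  }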
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1031>
GTK-Doc specifies that, by default, the caller owns returned objects and should free them when done. However, in the case of this function, the returned GstAudioInfo is owned by the decoder, so this default is incorrect. This causes double-free problems when using the GStreamer Rust bindings, because they are generated from the information contained in the docs.
Fix this by correctly specifying that the caller does not own the returned object.
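For illustration, the kind of annotation this amounts to in the GTK-Doc comment (docs abbreviated, not the exact hunk):

  /**
   * Returns: (transfer none): a #GstAudioInfo owned by the decoder;
   *     the caller must not free it.
   */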
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1032>
Users often want to set encoder properties on encoding profiles, so this
introduces a way to easily 'preset' properties when defining the
profile. It uses a GstStructure to define those properties, the same
way it is done in `splitmux` for example, as that makes them simple to
handle.
It also defines a more complex structure type where a set of properties
can be mapped to the muxer/encoder factory that EncodeBin ends up
picking, so it is quite flexible.
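As an illustration, a rough sketch of how such a 'preset' could look from application code (assuming the new API ends up as gst_encoding_profile_set_element_properties() taking a GstStructure; the property values are just examples):

  GstCaps *caps = gst_caps_from_string ("video/x-theora");
  GstEncodingVideoProfile *profile =
      gst_encoding_video_profile_new (caps, NULL, NULL, 0);

  /* Properties in this structure get applied to whatever encoder
   * encodebin ends up instantiating for this profile. */
  gst_encoding_profile_set_element_properties (GST_ENCODING_PROFILE (profile),
      gst_structure_new ("element-properties",
          "bitrate", G_TYPE_INT, 500, NULL));
  gst_caps_unref (caps);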
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1002>
When glimagesink is started with the option
`render-rectangle="<0,0,1920,1080>"`, the pipeline reaches a deadlock.
The deadlock occurs because `gst_gl_window_set_render_rectangle` takes
the window lock and then calls `window_class->set_render_rectangle(...)`,
which runs the `_on_resize` function. Since `_on_resize` also takes the
window lock, the thread deadlocks.
By scheduling the adjustment of the render rectangle through an async
message in `gst_gl_window_dispmanx_set_render_rectangle`, the actual
resize happens in another context and therefore doesn't suffer from the
lock taken in `gst_gl_window_set_render_rectangle`.
This solution follows the same approach as gl/wayland. The problem was
introduced by b887db1. For the full discussion check #849.
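A rough sketch of the approach (mirroring what gl/wayland does; the actual dispmanx code differs in the details):

  static void
  _queue_set_render_rectangle (gpointer data)
  {
    GstGLWindowDispmanxEGL *window_egl = data;

    /* runs on the window's own thread, so it may take the window lock
     * and perform the actual dispmanx resize */
  }

  /* in gst_gl_window_dispmanx_set_render_rectangle(): store the new
   * rectangle on the window, then defer the resize: */
  gst_gl_window_send_message_async (window,
      (GstGLWindowCB) _queue_set_render_rectangle, gst_object_ref (window),
      (GDestroyNotify) gst_object_unref);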
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1030>
Using RTP header extensions is currently not convenient. Users have to
handle signals from the RTP payloader and instantiate the extension
element themselves, making it impossible to use with gst-launch.
Add a property allowing the payloader to automatically try creating
extensions. This should help simple use cases and testing with
gst-launch.
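A hypothetical gst-launch example (the property name "auto-header-extension" and the idea that extmap fields in downstream caps trigger the automatic creation are assumptions here):

gst-launch-1.0 audiotestsrc ! opusenc ! rtpopuspay auto-header-extension=true ! \
  "application/x-rtp, extmap-1=(string)urn:ietf:params:rtp-hdrext:ssrc-audio-level" ! \
  fakesink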
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1022>
If there are 3 or more known atoms in a row, it's likely that this is
actually MOV/MP4 even if we don't find any other known atoms. If 5 or
more are found, then this is almost certainly MOV/MP4 and we can return.
Also, if both a moov and an mdat atom are found, this is definitely a
MOV/MP4 file and can be used as such, independent of anything else
following the mdat.
Fixes typefinding of various MOV files that have no `ftyp` atom but
otherwise a valid file structure followed by some garbage.
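A self-contained sketch of the heuristic only (the real code lives in the MOV/MP4 typefinder in gsttypefindfunctions.c and is structured differently):

  #include <gst/gst.h>

  typedef enum { SCORE_NONE, SCORE_LIKELY, SCORE_CERTAIN } Score;

  static Score
  score_atoms (const guint32 * fourccs, guint n, gboolean (*is_known) (guint32))
  {
    guint in_a_row = 0;
    gboolean moov = FALSE, mdat = FALSE;
    guint i;

    for (i = 0; i < n && is_known (fourccs[i]); i++) {
      in_a_row++;
      moov |= fourccs[i] == GST_MAKE_FOURCC ('m', 'o', 'o', 'v');
      mdat |= fourccs[i] == GST_MAKE_FOURCC ('m', 'd', 'a', 't');

      /* 5 known atoms in a row, or both moov and mdat: definitely MOV/MP4 */
      if (in_a_row >= 5 || (moov && mdat))
        return SCORE_CERTAIN;
    }
    /* 3 or more known atoms in a row: likely MOV/MP4 even if garbage follows */
    return in_a_row >= 3 ? SCORE_LIKELY : SCORE_NONE;
  }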
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1013>
These parameters are incorrectly regarded as mutable in G-IR, making
them "incompatible" with languages that are explicit about mutability,
like Rust. In order to clean up the code and the expected API there,
update the signatures here, right at the source (instead of overriding
them in Gir.toml and hoping for the best).
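As an illustration of the kind of change (the function name here is hypothetical; the MR touches various audio/video getters):

  /* before: G-IR regards the info as mutable */
  gboolean gst_video_example_is_valid (GstVideoInfo * info);

  /* after: const at the source, so bindings see an immutable parameter */
  gboolean gst_video_example_is_valid (const GstVideoInfo * info);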
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1005>
This will only make use of the framerate if the subclass chains up to
BaseSink::set_caps(). Otherwise it will have the same behaviour as the
basesink default.
Doing so is useful if video buffers don't carry a duration, so that a
default duration can be calculated from the framerate; various video
sinks already implement a custom version of this.
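A minimal sketch of a sink subclass chaining up so the framerate-based default duration applies (type and function names are hypothetical):

  static gboolean
  my_video_sink_set_caps (GstBaseSink * bsink, GstCaps * caps)
  {
    /* ... subclass-specific caps handling ... */

    /* chain up so the base class can pick up the framerate from the caps */
    return GST_BASE_SINK_CLASS (parent_class)->set_caps (bsink, caps);
  }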
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/986>
Elements operating in pull mode may optionally pass a buffer to
pull_range that should be filled with the data. The only element
that currently does that is oggdemux operating in pull mode.
tagdemux currently creates a sub-buffer whenever a buffer pulled
from upstream (usually filesrc) needs to be trimmed. That yields a
new buffer, however, and thus disregards any buffer passed in by a
downstream oggdemux.
This would cause assertion failures and playback problems for
ogg files that contain ID3 tags at the end.
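A sketch of the general pull_range contract being fixed here (not the actual tagdemux patch; pull_and_trim() is a hypothetical helper):

  static GstFlowReturn
  my_get_range (GstPad * pad, GstObject * parent, guint64 offset, guint length,
      GstBuffer ** buffer)
  {
    GstBuffer *pulled;
    GstMapInfo map;

    /* pull and trim the data for this request (details elided) */
    pulled = pull_and_trim (parent, offset, length);
    if (pulled == NULL)
      return GST_FLOW_ERROR;

    if (*buffer != NULL) {
      /* the caller (e.g. oggdemux) provided a buffer: fill it instead of
       * returning a different one */
      gst_buffer_map (pulled, &map, GST_MAP_READ);
      gst_buffer_fill (*buffer, 0, map.data, map.size);
      gst_buffer_unmap (pulled, &map);
      gst_buffer_unref (pulled);
    } else {
      *buffer = pulled;
    }
    return GST_FLOW_OK;
  }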
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/848
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/994>
This happens when an input stream has a frame duration exceeding the
timeout latency (ex: 1fps with a latency of 500ms) and another input
stream with a framerate greater than the output framerate (ex: 60fps in,
30fps out).
The problem was that we would time out, but then buffers from the fast
input stream would only be popped out one by one... until a buffer
arrived on the low-framerate input stream, at which point they would
quickly be popped out/used. The resulting output of that fast input
stream would be "slow ... fast ... slow ... fast".
In order to avoid this situation, whenever we detect a late buffer,
check if there's a next one and re-check with that one.
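A sketch of the idea (the actual change is in videoaggregator's buffer preparation; is_too_late() stands in for the existing lateness check):

  GstBuffer *buf = gst_aggregator_pad_peek_buffer (pad);

  while (buf != NULL && is_too_late (vagg, pad, buf)) {
    GstBuffer *next;

    /* this buffer is late: drop it and immediately re-check with the next
     * queued buffer instead of waiting for another timeout */
    gst_aggregator_pad_drop_buffer (pad);
    next = gst_aggregator_pad_peek_buffer (pad);
    gst_buffer_unref (buf);
    buf = next;
  }
  /* ... use buf as before (and unref it when done) ... */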
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/990>
When filling the checker pattern from multiple threads, y_start
needs to be taken into account to determine the shade of the
current pixel.
Example pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw, width=1920, height=1080, format=I420 ! \
queue ! compositor sink_0::xpos=200 ! video/x-raw, format=I420 ! videoconvert ! \
xvimagesink
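A sketch of the idea only (the real code is in compositor's fill_checker helpers in blend.c and the shade formula differs): when a thread fills rows [y_start, y_end), the absolute row index must drive the checker shade.

  guint i, j;

  for (i = y_start; i < y_end; i++) {
    for (j = 0; j < width; j++) {
      /* use the absolute row i (which already includes y_start),
       * not a row index relative to the slice */
      dest[i * stride + j] = (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) ? 0x80 : 0x30;
    }
  }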
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/988>
The initial byte offset should be calculated from the stride,
not from the dest_add variable.
Example pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw, width=1920, height=1080, format=YUY2 ! \
queue ! compositor sink_0::xpos=200 ! video/x-raw, format=YUY2 ! xvimagesink
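A sketch of the correction (variable names taken from the description; the actual blend.c code differs):

  /* start of the slice for this thread: derive it from the stride,
   * not from dest_add */
  dest += y_start * dest_stride;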
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/988>