The legacy emulation in DRM/KMS drivers interacts badly with GStreamer and
may cause the framerate to be halved. With this property, users can disable
vsync (which is handled internally by the emulation) in order to regain the
full framerate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3303>
The event type for instant-rate-change events was poorly chosen,
leading to them being re-sent too late and even after EOS.
Add a mechanism in GstPad for the sticky event order to be
different from the value of the event type to fix that up.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3387>
The original BUNDLE support commit placed a queue after the
rtpfunnel that combines streams, but I don't see a good reason for
it. It has default settings, so if network output is slow it might
accidentally store up to 1 second of pending data, increasing
latency.
Remove it in favour of doing any necessary buffering before
webrtcbin. If it turns out that there is a reason for it to
exist, the limits should probably be configurable and small.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3437>
Currently, when the rtspsrc property add-reference-timestamp-metadata is set
to true, a downstream rtph264depay element will attach multiple copies of the
same GstReferenceTimestampMeta to the depayloaded media buffers. This
can have significant performance impacts further downstream in a pipeline
like the following:
rtspsrc add-reference-timestamp-metadata=true ! rtph264depay ! h264parse ! ... ! rtph264pay ! ...
For example, if there are 10 packet buffers for a frame of RTP H.264
video, each of those packet buffers will contain the same reference
timestamp meta. The rtph264depay element will then attach all 10
metadata to the depayloaded frame. And then later when we payload the
frame buffer again for proxying, we now have 10 more buffers each with
10 instances of the same metadata. Allocating/deallocating 100+ instances
of metadata @ 30fps for multiple streams has a pretty large performance
impact.
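For context, GstReferenceTimestampMeta can be inspected per buffer; a minimal
sketch (probe placement and function name are illustrative) that counts how
many such metas ended up on a depayloaded buffer:
```
#include <gst/gst.h>

/* Count the GstReferenceTimestampMeta instances attached to a buffer,
 * e.g. from a pad probe placed on rtph264depay's src pad. */
static guint
count_reference_timestamp_metas (GstBuffer * buffer)
{
  gpointer state = NULL;
  GstMeta *meta;
  guint count = 0;

  while ((meta = gst_buffer_iterate_meta_filtered (buffer, &state,
              GST_REFERENCE_TIMESTAMP_META_API_TYPE)) != NULL)
    count++;

  return count;
}
```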
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1578
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3431>
The tile width in pixels is not always available. Notably for the
8L128 10-bit format, the tile is 8x128 bytes and the pixel
format is fully packed. That means that a tile contains at
least 6 pixels per line, but it also holds some bits of a
pixel from the same line on the previous or next tile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
In the current tile representation, only tiles with power-of-two
width and height in bytes are supported. This limitation
prevents adding more complex tile formats.
In this patch, we deprecate tile_ws and tile_hs from GstVideoFormatInfo and
replace them with an array of GstVideoTileInfo. The tiles of each plane are
then described with their width/height in pixels, line stride and total size.
The helper gst_video_format_info_get_tile_sizes() that depends on the
deprecated API is also removed. It can simply be dropped as it hasn't been
in any stable release yet.
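As a rough sketch of the shape of such a per-plane description (field names
are taken from the summary above and are illustrative; the actual
GstVideoTileInfo definition in the headers is authoritative):
```
/* Illustrative only: a per-plane tile description along the lines
 * described above, not the verbatim GstVideoTileInfo from the headers. */
typedef struct {
  guint width;   /* tile width in pixels */
  guint height;  /* tile height in pixels */
  guint stride;  /* stride of one tile line in bytes */
  guint size;    /* total size of one tile in bytes */
} ExampleVideoTileInfo;
```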
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
Setting force_live lets an aggregator behave as if it had at least one of
its sinks connected to a live source, which should let us get rid of the
fake live test source hack that is probably present in dozens of
applications by now.
+ Expose API for subclasses to set and get force_live
+ Expose force-live properties in GstVideoAggregator and GstAudioAggregator
+ Add a simple test
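A minimal sketch of how an application could use this, assuming compositor (a
GstVideoAggregator subclass) exposes the new property; it is set before
starting the pipeline:
```
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *mixer;

  gst_init (&argc, &argv);

  mixer = gst_element_factory_make ("compositor", NULL);
  if (mixer == NULL)
    return 1;

  /* Behave as if at least one sink pad were connected to a live source,
   * instead of adding a dummy live test source to the pipeline. */
  g_object_set (mixer, "force-live", TRUE, NULL);

  gst_object_unref (mixer);
  return 0;
}
```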
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3008>
The only case where we definitely need to write a new trun is when the
data_offset value does not match the end of the list of entries.
Multiple trun atoms are only required when interleaving multiple
streams together.
All other cases can be covered by adding more entries to the existing
trun atom.
Fixes playback of fragmented mp4 in ffplay and Chrome.
Using e.g. mp4mux fragment-duration=1000 fragment-mode=dash-or-mss
and
mp4mux fragment-duration=1000 fragment-mode=first-moov-then-finalise
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3426>
The attribute's value should use the value returned by get_attribute for
VAConfigAttribRTFormat, since VAProfileHEVCMain10, in AMD's Mesa Gallium
driver, can process either VA_RT_FORMAT_420 or VA_RT_FORMAT_420_10. This isn't
considered in the gstreamer-vaapi design, where the encoder's src pads only
expose 4:2:0 color formats but not 4:2:0 10-bit. So this is a workaround
for the issue until the new vah265enc is released.
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3397>
This was the intention from the start, just took me a few years *cough* to
actually implement it properly.
Gapless playback is handled by re-using the same decoders and sinks as much as
possible if present, and only pre-rolling the switch at the source level (with
buffering if/when needed).
In order to enable "gapless" playback, the "next" uri should be set at any time
between the moment the `about-to-finish` signal is emitted and the moment the
current play item is done. Previously this could only be done during the signal
emission.
This new implementation also allows "instantaneous URI switching", a much
faster way of switching playback entries while re-using as many elements as
possible. To enable it, set the `instant-uri` property to TRUE (the default is
FALSE).
API: instant-uri property
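A minimal sketch of both modes from application code, assuming a playbin3
instance; the placeholder URI and helper names are illustrative, while the
signal and property names are the ones mentioned above:
```
#include <gst/gst.h>

/* Queue the next entry for gapless playback. With this change it can be set
 * at any time between the about-to-finish emission and the end of the
 * current play item, not only during the signal emission itself. */
static void
on_about_to_finish (GstElement * playbin, gpointer user_data)
{
  g_object_set (playbin, "uri", "file:///path/to/next-item.ogg", NULL);
}

static GstElement *
create_player (gboolean instant_switching)
{
  GstElement *playbin = gst_element_factory_make ("playbin3", NULL);

  g_signal_connect (playbin, "about-to-finish",
      G_CALLBACK (on_about_to_finish), NULL);

  /* Optional: switch entries immediately instead of gaplessly. */
  g_object_set (playbin, "instant-uri", instant_switching, NULL);

  return playbin;
}
```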
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
When dealing with gapless input (i.e. streams with changing group-id in
GST_EVENT_STREAM_START), we need to take into account the elapsed
running-time (if applicable) in order to properly calculate levels and output
time. Without doing this all incoming data from future groups would be
considered as being "late" and would be consumed immediately.
This does **NOT** modify the actual segment and buffer times, and is only used
internally.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
DecodebinInput (and its backing parsebin or identity) is no longer released
when the corresponding sinkpad is unlinked, but when it is released.
The parsebin element will be reset:
* If incoming caps are incompatible (as was the case before)
* Or when unlinking and it was previously pull-based
This opens the way to use decodebin3 with changing inputs (i.e. gapless)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
Introduce the option to have the streams be parsed with `parsebin` for
compatible sources (i.e. those which are eligible for buffering in the same way
as before this commit).
By parsing the inputs directly, this allows more accurate buffering control:
* instead of relying on potential bitrate information coming from somewhere,
* and *without* needing to be linked downstream.
If `parse-streams` is activated and the stream is eligible for buffering, then a
`multiqueue` will be used on the output of `parsebin` in order to handle the
buffering.
API: `parse-streams`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
If the incoming streams are already parsed, there is no need to add yet-another
parsebin to process it *IF* that stream is compatible with a decoder or the
decodebin3 output caps.
This only applies if all the following conditions are met:
* The incoming stream can *NOT* do pull-based scheduling
* The incoming stream provides a `GstStream` and `GstStreamCollection`
* The caps are compatible with either the decodebin3 output caps or a decoder
input
If all those conditions are met, an identity element is used instead of a
parsebin element and the same code paths are taken.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
* Instead of creating temporary `PendingPad` structures, always create a
DecodebinInputStream for every pad of parsebin
* Remove never used `pending_stream` field from DecodebinInputStream
* When unblocking a given DecodebinInput (i.e. wrapping a parsebin), also make
sure that other parsebins from the same GstStreamCollection are unblocked
since they come from the same source
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
Make an explicit topology/tree of structures:
* ChildSrcPadInfo is created for each source element source pad
* ChildSrcPadInfo contains the chain of optional elements specific to that
pad (ex: typefind)
* A ChildSrcPadInfo links to one or more OutputSlot, which contain what is
specific to the output (i.e. optional buffering and ghostpad)
* No longer use GObject {set|get}_data() functions to store those structures and
instead make them explicit
* Pass those structures around explicitly in each function/callback
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
The following problem could happen:
* Thread 1 : urisourcebin gets activated from READY->PAUSED
* Thread 2 : some element causes a pad to be added to urisourcebin, which gets
linked downstream, which decides to activate upstream to pull-based.
* That requires "activating" the pads from PUSH to NONE, and then from NONE to PULL
* Thread 1 : the base class state change handler checks if all pads are
activated
The issue is that since going from PUSH to PULL requires going through NONE,
there is a window during which:
* Thread 1 : The pad was set to NONE (before being set to PULL)
* Thread 2 : The base class activates that pad (to PUSH)
* Thread 1 : The attempt to "activate" to PULL fails (silently or not)
This is very racy, so in order to avoid that, we make sure that we only add pads
once the transition from READY->PAUSED in the parent classes is done.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
This change allows output caps to be updated even though we stay in
streaming state. This is needed so that any upstream updates to fields
like framerate, HDR data, etc. can result in a downstream caps event
being pushed.
Previously, any of these changes were ignored and the downstream
caps would not reflect them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3328>
In theory, input caps can be updated at any time at a non-keyframe or
sequence boundary, for fields such as HDR10 metadata, framerate or
aspect-ratio. Those information updates might not trigger ::new_sequence(),
or the subclass may ignore the changes.
With this commit, input state changes will be tracked by the base class
and the subclass will be able to detect non-decoding-essential
updates by checking the codec-specific picture struct
in ::output_picture()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3328>
This reverts commit fcad4cc646.
This was wrong in so many ways.
* The memcmp was badly used (it should use == 0 to check the data is identical,
and not != 0)
* There were no boundary checks on the existing stream section_data when passing
it to memcmp.
* The return value should have been TRUE (i.e. we have done all checks, none of
them failed, therefore the section has been seen before)
* stream->section_data would *always* be NULL if the section had already been
processed
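For reference, a correct "have we already seen this section" check along the
lines of the first two points might look like this (a sketch with made-up
names, not the actual mpegts code):
```
#include <string.h>
#include <glib.h>

/* Returns TRUE only if a section payload is already stored and the candidate
 * payload is byte-for-byte identical, comparing lengths before the data. */
static gboolean
section_seen_before (const guint8 * stored, gsize stored_len,
    const guint8 * candidate, gsize candidate_len)
{
  if (stored == NULL || stored_len != candidate_len)
    return FALSE;

  return memcmp (stored, candidate, candidate_len) == 0;
}
```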
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1559
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3421>
There are cases where upstream will not provide a framerate, or it won't be
fixed. But if there is latency introduced by the decoder we do want to report
it.
Therefore use the framerate stored in the actual decoder, which will have a
default.
Fixes hangs when playing back such streams with decodebin3 (where the multiqueue
will not have been informed of that downstream latency and will not grow
accordingly).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3391>
Postpone the cleanup of any consecutive sequence of lost frames
which starts at frame 0, until frame 100 is dequeued from driver.
This allows the fluster tests JVT/CVWP2_TOSHIBA_E, JVT/CVWP3_TOSHIBA_E
and HEVC/POC_A_Bossen_3, which send out-of-order frames, to successfully
complete (e.g., when testing the Amphion vpu driver).
Fixes #1569
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3398>
In order to figure out if the "raw" audio contained within the wav
container is actually DTS, wavparse calls the typefind helper, but
that means it runs all typefinders.
Since it only cares about checking for DTS, we should only run the
audio/x-dts typefinder (if present). Commit 858e516 did not really
fix things.
Use the new type helper with the caps to fix this.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3417>
Set seqnums on all events sent to the udpsrcs, and before
forwarding events out of rtspsrc set the latest seek seqnum on them if
any.
Also produce a consistent seqnum in rtspsrc from the very beginning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3409>
Rewrite the GstCudaConverter object, since the old implementation was not
well organized and it was hard to add new features.
Moreover, the conversion operations were not very optimized.
Major changes in this implementation:
* Remove redundant intermediate conversion operations such as
any RGB -> ARGB(64) conversion or any YUV -> Y444 (or 16-bit Y444).
That's not required in most cases. The only required case is
converting 24-bit packed formats (such as RGB/BGR) to a 32-bit format,
because CUDA texture objects do not support sampling 24-bit formats
* Use normalized sample fetching (i.e., [0, 1] range float values)
and a normalized coordinate system for CUDA textures.
This is consistent with other graphics APIs such as Direct3D
and OpenGL, which makes sampling operations much easier.
* Support a kind of viewport, and adopt the colorspace conversion math
from the GstD3D11 implementation
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3389>
The GstCudaConverter object can do colorspace conversion and scaling at once.
Add a new element "cudaconvertscale" to do that; this can
save unnecessary GPU operations if both colorspace conversion and
rescaling are required for a given input stream format.
Most of the code is taken from the d3d11convert element
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3389>
Introduce a new API that can return a GstTypeFind * with helper functions
and data set around buffer data.
While at it, drop the factory field from GstTypeFindBufHelper. While it was
useful for logging, it was not passed through function arguments, and keeping
it for logging would require an additional API, increasing the API surface
and making it harder to use.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3296>
Instead of returning a "const gchar" or a "gchar" that should not be freed, always
return a duplicated string as those functions were used together with g_strdup anyway.
This is needed to prepare support for returning modified strings in the next
commit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1147>
This is a small regression from commit f7abd81a.
When calling `gst_element_query()` no pad is associated with the query, but the
current code always forwards the query to the associated pad, which is NULL in
that case. This patch checks for the pad before forwarding the query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3404>
If we don't receive any data from usrsctp, then there will be no src pad
for the stream id and the stream reset will fail to remove the relevant
src pad. Work around this by first attempting to add the relevant src pad,
then almost immediately removing it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3381>
Replace video_copy with memcpy to fix the issue when the sizes of the
src frame and dst frame don't match. Moreover, on Windows, the copy has to
be done first and gst_msdk_import_to_msdk_surface called afterwards.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3231>
Currently the MSDK context does not support d3d11va. Introduce a d3d11va
device to the MSDK context, making it able to create an msdk session with a
d3d11 device and to easily share it with upstream and downstream.
Add an environment variable to let the user choose the GPU device in a
multi-GPU environment. This variable is only valid when there's no context
returned by upstream or downstream. Otherwise the device created by upstream
or downstream will be used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3231>
Add support for more formats so as to run the libvpx high bit depth test
suite, i.e. the files under CONFIG_VP9_HIGHBITDEPTH.
This also allows running the yuv444p 8-bit file in the regular 8-bit vp9 suite.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3356>
If an input is malformed (only produces cea608 field 1 cc_data) then
in passthrough mode we would effectively drop every second cea608 on
output, as we would not store any unused cea608 data.
Fix by having all code paths go through the framerate conversion code
which will store and retrieve any relevant data across buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3211>
When a tile format is padded and imported as DMABuf, the stride
contains the information about the actual width and height in
number of tiles. This information is needed by the detiling shader
in order to accurately calculate the location of pixels. To fix that,
we also copy the offsets and strides into the output format, and
the converter will ensure that the shader is recompiled whenever
the stride changes.
This fixes video corruption observed when decoding on MT8195
with videos whose width isn't aligned to 64 bytes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3365>
... otherwise PAR can be wrongly signalled during the negotiation
Fixes the below pipeline when the desktop resolution is not 640x480:
gst-launch-1.0.exe \
d3d11screencapturesrc ! videoscale ! \
video/x-raw,width=640,height=480,pixel-aspect-ratio=1/1 ! d3d11videosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3360>
1. Remove the verification that the internal encoder is not opened
yet, in order to allow the property setting.
2. Toggle on the base class' reconf flag for each property
variable that can be modified at run time.
3. Mark those modifiable properties as mutable while playing.
Currently the run-time modifiable properties are:
qpi, qpp, qpb, bitrate, target percentage, target usage and rate control.
Other properties can be enabled too, but they need testing.
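As an illustration (element name and value are hypothetical), changing one of
these properties while the pipeline is PLAYING now triggers a reconfiguration
instead of requiring a restart:
```
#include <gst/gst.h>

/* Sketch: lower the bitrate of a running VA encoder, e.g. vah264enc,
 * without stopping the pipeline; the new value is picked up through the
 * base class' reconf flag. Assumes the bitrate property is in kbps. */
static void
reduce_bitrate (GstElement * va_encoder)
{
  g_object_set (va_encoder, "bitrate", 2048, NULL);
}
```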
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
Add an internal function reset() which drains the internal queues and
calls the reconfig() vmethod.
This reset() method is called unconditionally in set_format() and in
handle_frame() if the instance's reconf flag is enabled.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
If parameters remain similar enough to avoid either encoder reopening
or downstream renegotiation, avoid them.
This is going to be useful for dynamic parameter setting.
Two steps are required to check whether the stream parameters changed such
that the internal encoder has to be closed and opened again:
1. Check if input caps, profile, chroma or rate control mode have changed.
2. Check if any of the calculated variables and element properties have
changed.
Later on, the pipeline is renegotiated only if the output caps also changed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
This method will return the caps configured in the reconstruct buffer
pool, and its maximum number of buffers to allocate.
The caps are needed later to know if the internal encoder has to be
reopened if the stream properties change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
This adds a new boolean property `auto-reconnect`, defaulting to `true`.
Setting it to `false` makes the elements (in caller mode) immediately
report an error to the application instead of trying to reconnect.
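A short sketch of the application side (element name and URI handling are
illustrative): with auto-reconnect disabled, a connection failure in caller
mode surfaces as an error message on the bus instead of silent retries.
```
#include <gst/gst.h>

static GstElement *
make_srt_source (const gchar * uri)
{
  GstElement *src = gst_element_factory_make ("srtsrc", NULL);

  g_object_set (src, "uri", uri, NULL);
  /* Report connection failures immediately instead of reconnecting. */
  g_object_set (src, "auto-reconnect", FALSE, NULL);

  return src;
}
```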
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3326>
Add a DirectShow video capture filter mode, in addition
to the existing MediaFoundation and WinRT (UWP) modes, to support
DirectShow-only filters (not KS driver compatible)
such as custom virtual camera filters.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3350>
Use gst_debug_set_threshold_from_string's new reset behavior to undo
GST_DEBUG and ensure the logging tests have a known configuration.
`gst_debug_set_threshold_from_string ("LOG", TRUE)` has the same effect
as `gst_debug_set_threshold_from_string ("", TRUE)` followed by
`gst_debug_set_default_threshold (GST_LEVEL_LOG)`.
Don't bother remembering the default log level set when the test
started. It will get reset by the next test, anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/605>
TLDR: Make `gst_debug_set_threshold_from_string ("", TRUE)` reset *all*
threshold settings, including those set by previous invocations of
`gst_debug_set_threshold_from_string`.
The docs say:
@reset: %TRUE to clear all previously-set debug levels before setting
new thresholds
What actually happens is it sets the default threshold to `ERROR`,
leaves the patterns in place and calls
`gst_debug_category_reset_threshold` on each category.
In effect, any category that is matched by a pattern gets reset to that
threshold if the app changed it by directly invoking
`gst_debug_category_set_threshold`. All other categories are reset to
`ERROR`.
In my opinion this parameter currently has little value, as the same
effect can be achieved by including `ERROR` (without a pattern) in the
string, as in `"foo*:WARNING,*bar:INFO,ERROR"`.
What I actually expect it to do is reset *all* threshold settings,
including those set by previous invocations of
`gst_debug_set_threshold_from_string`, starting off with a clean slate
for the patterns provided with the call.
Otherwise there is no API to do this, besides:
- Painfully removing patterns one-by-one via
`gst_debug_unset_threshold_for_name` *if* you know what the patterns
are.
- Adding a `*:FOO` pattern to affect all categories, which makes the
default threshold useless and practically leaks all the old
patterns.
In my opinion this also makes it fit better into the layers of threshold
config, which is:
1. Temporary:
- `gst_debug_category_set_threshold`
- `gst_debug_category_reset_threshold`
2. Patterns:
- `gst_debug_set_threshold_for_name`
- `gst_debug_unset_threshold_for_name`
- `gst_debug_set_threshold_from_string`
- `GST_DEBUG`
3. Default:
- `gst_debug_set_default_threshold`
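With the behaviour described above, a clean slate can be obtained directly
(a sketch; the pattern string is the example from earlier in this message):
```
#include <gst/gst.h>

/* Start from a clean slate: no patterns, only the default threshold. */
static void
clear_all_debug_thresholds (void)
{
  gst_debug_set_threshold_from_string ("", TRUE);
}

/* Replace whatever was configured before (GST_DEBUG, earlier calls, ...)
 * with exactly these patterns and default threshold. */
static void
set_debug_thresholds (void)
{
  gst_debug_set_threshold_from_string ("foo*:WARNING,*bar:INFO,ERROR", TRUE);
}
```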
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/605>
The scenario is the following:
* Thread 1 is pushing an EOS event on a sinkpad
* Thread 2 is pushing a STREAM_START event on the same sinkpad before Thread 1
returns. Note: it starts pushing the event after Thread 1 took the object lock.
There is a potential race between:
* The moment Thread 1 sets the EOS flag once it has finished sending the
event (via store_sticky_event). When it does that it has both the STREAM and
OBJECT lock
* The moment Thread 2 sends the STREAM_START event (which should clear that
EOS status). But removing the EOS flag is only done while holding the OBJECT
lock and not the STREAM lock, which means it could be re-set by Thread 1 before
it then checks the EOS flag again (without the STREAM lock taken).
The EOS flag unsetting by STREAM_START should be done with the STREAM lock
taken, otherwise it will be racy.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1452
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3320>
In a few cases throughout qtdemux, the results of QT_UINT32 were being
stored in a signed integer, which could cause subtle bugs in the case of
an integer overflow, even allowing the result to be interpreted as a negative
number!
This patch prevents this by simply storing the results of this function
call properly in an unsigned integer type. Additionally, we fix up the
length checking in stsd parsing to prevent cases of child atoms
exceeding their parent atom sizes.
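To illustrate the first point (illustrative values, not actual qtdemux code):
storing a 32-bit atom size in a signed int can flip large sizes negative and
defeat subsequent size checks.
```
#include <glib.h>
#include <stdio.h>

int
main (void)
{
  guint32 atom_size = 0xfffffff0u;  /* a (bogus) size read from the file */

  gint wrong = atom_size;   /* implementation-defined; typically negative */
  guint right = atom_size;  /* keeps the intended unsigned value */

  /* A check such as "wrong > remaining_bytes" is now trivially false
   * because wrong is negative, letting a child atom claim more space
   * than its parent provides. */
  printf ("signed: %d, unsigned: %u\n", wrong, right);
  return 0;
}
```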
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3344>
Subtracting a gint from another (or a guint from another) is not guaranteed
to result in a valid gint.
Therefore do the actual comparison instead.
Also use the *actual* type when comparing flags (the field value types are
different).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3319>
Unlike the legacy elements, GstAdaptiveDemuxStream is a GObject now,
so a bunch of things that were actually stream methods on the
parent demux object can directly become stream methods now.
Move the stream class out to a header of its own.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3314>
Sometimes g_input_stream_read_all_finish() can return
0 bytes, but still succeed (return TRUE) and have more
data available later. Only finish the transfer
if it returns 0 bytes *and* FALSE with no error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3314>
The cancelled flag was only set in the stream finalize()
method, after all activity on the stream has stopped anyway.
Replace uses of cancelled with checks on the stream state.
Remove the replaced flag, which was checked but never set
to TRUE anywhere any more.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3314>
With non-serialized sticky events, such as GST_EVENT_INSTANT_RATE_CHANGE, we
both want to store the event (for later re-linking) *AND* push the event in a
non-blocking way.
We therefore must *not* propagate pending sticky events if the event is "sticky
or serialized" but only if it is "serialized".
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3254>
- Make the srt_epoll_wait loops more uniform.
- Error only via GError when possible; let the element send the error
message. Avoids a second error message.
- Return 0 when cancelled. Avoids an error message from the element.
- Don't send an error message from send_headers when we're a server
sink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3156>
When using the child proxy notation (child::property=value) it may
happen that the target child does not exist at the time of parsing
(e.g. decodebin creates the decoder according to the contents of the
stream). In these cases, we want to delay the setting of the property
until later, when new elements are added. The previous logic performed a
delayed set even if the target child was found but the property
was not found in it. This should be treated as a failure because,
unlike missing elements, properties should not appear dynamically.
By not failing, typos in property names may go unnoticed by the end
user.
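For context, a sketch of the notation involved (the pipeline is illustrative):
the property on the right of `::` is set on a named child of the element, and
may have to be deferred until that child exists.
```
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GError *error = NULL;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* sink_0::alpha uses the child proxy notation: the alpha property is set
   * on the compositor's sink_0 child once it exists. With this change, a
   * typo in the property name is reported as a failure once the child is
   * there, instead of being silently deferred forever. */
  pipeline = gst_parse_launch (
      "videotestsrc ! compositor name=mix sink_0::alpha=0.5 ! "
      "videoconvert ! autovideosink", &error);

  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_object_unref (pipeline);
  return 0;
}
```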
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2908>
These null checks are slightly misleading when double-checking
mutability for external language interop. None of the functions in
these files allow the variable at hand to become `NULL` under normal
operation, because they are checked at initialization and never (allowed
to be) reassigned to `NULL`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1615>
When matching segments across playlists with Program-Date-Times,
use the difference in segment PDTs to adjust the stream time
that's being transferred. This can fix cases where the
segment boundaries don't align across different streams
and the first download gets thrown away once the PTS
is seen and found not to match.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3309>
Check whether the init file / MAP data for a segment
is different from the current data and trigger an
update if so. Previously, the header would only
be checked in HLS after switching bitrate or
after a seek / first download.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3307>
Previously the minimum buffering threshold was hardcoded to a specific
value (10s). This is suboptimal since the right value depends on the actual
stream being played.
This commit sets the low watermark threshold in time to 0, which is an automatic
mode. Subclasses can provide a stream `recommended_buffering_threshold` when
update_stream_info() is called.
Currently implemented for HLS, where we recommend 1.5 times the average segment
duration. This will result in buffering being at 100% when the 2nd segment has
been downloaded (minus a bit already being consumed downstream).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3240>
This is an additional quality parameter. In the default configuration this
quality switch is deactivated because it would cause a workload increase
which might be significant. If workload is not an issue for the application,
activating this feature is recommended.
A flush request is done when set_format is called, to empty the internal bit
buffer maintained by fdk-aac. When this happens, during the explicit
call to handle_buffer, decodeFrame does not return AAC_DEC_OK. This
gets reported as a decoding error while no decoding error in fact took
place. Since this can be confusing, just return GST_FLOW_OK and log
that an explicit flush was requested.
In order to figure out if the "raw" audio contained within the wav
container is actually DTS, right now we call the typefind helper,
which runs all typefinders.
Speed up this type finding process by specifying the extension.
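A hedged sketch of the kind of call this refers to, assuming
gst_type_find_helper_for_buffer_with_extension() is the helper in question
(the function name and buffer are illustrative):
```
#include <gst/gst.h>
#include <gst/base/gsttypefindhelper.h>

/* Sketch: typefind a chunk of "raw" wav payload, hinting that typefinders
 * registered for the "dts" extension should be tried first. */
static GstCaps *
probe_for_dts (GstObject * parent, GstBuffer * chunk)
{
  GstTypeFindProbability prob = GST_TYPE_FIND_NONE;

  return gst_type_find_helper_for_buffer_with_extension (parent, chunk,
      "dts", &prob);
}
```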
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3294>
GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7,leaks:6" gst-play-1.0 --use-playbin3 test.mkv
When running a pipeline like the one above, leaks are observed.
0:00:56.882419132 240637 0x5562c528ccc0 TRACE GST_TRACER :0:: object-alive, type-name=(string)GstConcatPad, address=(gpointer)0x7efd7c0d20a0, description=(string)<'':sink_0>, ref-count=(uint)1, trace=(string);
0:00:56.882429131 240637 0x5562c528ccc0 TRACE GST_TRACER :0:: object-alive, type-name=(string)GstConcatPad, address=(gpointer)0x7efd7c0d2be0, description=(string)<'':sink_0>, ref-count=(uint)1, trace=(string);
0:00:56.882437056 240637 0x5562c528ccc0 TRACE GST_TRACER :0:: object-alive, type-name=(string)GstConcatPad, address=(gpointer)0x7efd7c0d3720, description=(string)<'':sink_0>, ref-count=(uint)1, trace=(string);
gst_element_release_request_pad does not unref the pad. It needs to
be followed by gst_object_unref. Doing that fixes the above leaks.
Use g_ptr_array_new_with_free_func with gst_object_unref as the free
function to unref the pad after release.
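For reference, the pattern being fixed (concat is the element from the logs
above; names are illustrative):
```
#include <gst/gst.h>

/* Releasing a request pad does not drop the caller's reference;
 * an explicit unref is still needed afterwards. */
static void
drop_concat_pad (GstElement * concat, GstPad * sinkpad)
{
  gst_element_release_request_pad (concat, sinkpad);
  gst_object_unref (sinkpad);
}
```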
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3177>
Posting latency messages causes a full and potentially expensive latency
recalculation of the pipeline. While subclasses should check whether the latency
really changed or not before calling this function, we ensure that we do not
post such messages if it didn't change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3282>
The picture parameter picture->top_field_first is reused in this mode
to signal the TOP fields. As a side effect, it will change every frame,
and the current code assumed that if this changes then a renegotiation is
needed. Fix this by ignoring that change whenever we are decoding one field
only.
Fixes #1523
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3276>
In fact, all the H.264 bit writer functions have byte-aligned output except
the slice header. So we change the unit of the API from bit size to
byte size, which is easier to use. For the slice header, we add an extra
"trail_bits_num" to return the number of unaligned bits.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3193>
We use va pool as msdkvpp's bufferpool, which means both va memory
and dma memory will be allocated by va pool. Considering drm modifier
stuff is not ready, we use va memory with higher priority than
dma memory when deciding vpp caps.
Besides, this patch removes the specified "interlace-mode" in vpp caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3253>
The gap handling was in place, but there was no event handler to trigger it.
Implement the alpha sink event handler for the gaps. This fixes handling of
valid streams which may not refresh the alpha frame for every video frame.
It will also allow a clean error if the stream was missing the initial
alpha frame, at least until we find a better way to handle these
invalid frames.
Related to #1518
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3264>
Handle the case where the encoder doesn't support rate control, which is
signalled as VA_RC_NONE, and fail the element configuration if the requested
rate control mode is not supported by the GStreamer element.
Also log the max and target bitrate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
The entrypoint is set when the encoder helper is constructed; nonetheless
it was also passed as a parameter when opening. That's buggy.
In order to simplify the code, the entrypoint set at construction is
honored.
But gst_va_encoder_has_profile_and_entrypoint() no longer relies on
the internal list of profiles, since that only contains those that
belong to the codec and entrypoint; instead it queries the VA
driver directly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
Need to put the actual profile in the output caps otherwise any
capsfilter after the encoder that was used to force the output
profile will fail, such as
fdkaacenc ! audio/mpeg,stream-format=adts,profile=he-aac-v1 ! ..
because we put profile=lc in there to match the profile signaled
in the ADTS header. That is expressed through base-profile=lc
in the GStreamer caps instead; the profile field needs to carry the
'real' profile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1785>
duplicate symbol '__invoke_on_main' in:
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstvulkan-1.0.a(cocoa_gstvkwindow_cocoa.m.o)
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstgl-1.0.a(cocoa_gstglwindow_cocoa.m.o)
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Also make the same change on iOS for consistency.
Continuation of https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1132
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3242>
Disable forwarding of segment events by the demuxer.
In order to play nicely with `ffmpeg`, demuxers in `gst-libav` have to make
buffers available to `ffmpeg` while taking the blocking I/O model in `ffmpeg`
into account, which results in buffers not being sent downstream until `ffmpeg`
has processed them in its separate thread.
In contrast, many `gstreamer` events are simply forwarded downstream.
Currently `GST_EVENT_SEGMENT` events are forwarded downstream without any
processing, which can potentially result in:
* `GST_EVENT_SEGMENT` events being out of sync with buffers
* `GST_EVENT_SEGMENT` events going out that are incorrect because they apply
to data seen by the demuxer, but not necessarily seen by downstream elements
I came across this bug when I was attempting to enable G723.1 demuxing/decoding
using the G723.1 demuxer and decoder provided by `ffmpeg`. I wrote tests to
verify support for the functionality, and found that, in push mode,
`GST_EVENT_SEGMENT` events pushed to the demuxer by the upstream `filesrc`
element would be forwarded to the decoder without modification, resulting in
an internal data streaming error. With this patch, tests work in both push and
pull mode.
This patch solves the problem by disabling the forwarding of
`GST_EVENT_SEGMENT` events downstream (an initial `GST_EVENT_SEGMENT` event is
still pushed downstream by the demuxer). It's possible there's a better way to
do this, but, having looked at how a few different `gstreamer` demuxers deal
with `GST_EVENT_SEGMENT` events, it seems like the processing is somewhat
specific to the demuxer implementation, whereas `gst-libav` has one general way
of handling the situation for any `ffmpeg` demuxer. Perhaps there's a better
way to solve this using the `ffmpeg` API to take advantage of specific demuxer
details. IDK.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3218>
Add a Windows Graphics Capture (WGC) API based screen capture mode.
The conditions where this mode is used:
* Explicitly requested by the user (capture-api property)
* To capture a specific window
* When the DXGI desktop duplication API does not work on hybrid graphics systems
(e.g., multi-GPU laptops)
Full features of this implementation require Windows 11, and the Windows 11
SDK is required to build this feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3144>
When the output alignment is smaller than the input alignment, for
example when the output alignment is "FRAME" and the parser is likely
connected to a decoder, the current PTS setting for AV1 frames inside
a TU is not quite correct.
For example, a TU may begin with non-displayed frames and end with a
displayed frame. The current way assigns the PTS to the first
non-displayed frame, which is a decode-only frame whose PTS will be
discarded in the video decoder, while the last displayed frame has an
invalid PTS, so the video decoder needs to guess its PTS based on
the frame rate and the previous frame's PTS. This is not a robust
approach. More importantly, when the previous frames provide a DTS,
the video decoder will also guess the PTS based on the previous frames'
DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestamp
That sets reordered_output and puts the decoder in free-run mode.
We should correct the PTS for a TU: let the non-displayed frames have
no PTS while setting the correct PTS on the displayed one. Also, when the
AV1 stream has multiple spatial layers, there is more than one displayed
frame inside one TU with the same PTS.
Note: If the input alignment is not TU aligned, we cannot know the
exact PTS of the TU, so we just clear the PTS of the decode-only
frames and leave the others unchanged.
We also correct all the PTS if the output is OBU aligned: all their
PTS and DTS are set to the input buffer's PTS.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
When the incoming data has a bigger alignment than the output, we do not need
to call finish_frame() and exit the current handle_frame() for each split
frame. We can push them all in one shot within one handle_frame(), which
may improve performance and can help us find the edge of a TU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
These tests relied on setting the name of an element twice to verify
that the last one set took precedence. However, name is a CONSTRUCT property
and the parser now errors out when such properties are set twice, in
g_object_new_with_properties.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3026>
When trying to build the plugin, GCC starts complaining about issues
with one of the cdparanoia headers, and that blocks us from being able
to build the plugin with -Werror.
The current warnings in the header look like this:
```
[1/2] Compiling C object subprojects/gst-plugins-base/ext/cdparanoia/libgstcdparanoia.so.p/gstcdparanoiasrc.c.o
In file included from ../subprojects/gst-plugins-base/ext/cdparanoia/gstcdparanoiasrc.h:37,
from ../subprojects/gst-plugins-base/ext/cdparanoia/gstcdparanoiasrc.c:31:
/usr/include/cdda/cdda_interface.h:164:3: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
164 | "Success",
| ^~~~~~~~~
...
/usr/include/cdda/cdda_interface.h:163:14: warning: ‘strerror_tr’ defined but not used [-Wunused-variable]
163 | static char *strerror_tr[]={
| ^~~~~~~~~~~
[2/2] Linking target subprojects/gst-plugins-base/ext/cdparanoia/libgstcdparanoia.so
```
The last release of cdparanoia was in 2008, so our best bet for the
time being is to ignore the warnings.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2722>
This allows correct handling of wrapping around backwards during the
first wraparound period and avoids the infamous "Cannot unwrap, any
wrapping took place yet" error message.
It also makes sure that for actual timestamp jumps a valid value is
returned instead of 0, which then allows the caller to handle it
properly. Without this, the caller can see the same timestamp (0)
for a very long time, which for example can cause rtpjitterbuffer to
output the same timestamp for a very long time.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1500
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3202>
Add a loopback capture mode for a specified PID.
Note that this feature requires Windows 10 build 20348
(Windows 11/Windows Server 2022 or later),
and any process loopback related properties will not be exposed
if the OS does not support it.
Example launch lines:
* wasapi2src loopback-mode=include-process-tree loopback-target-pid=<PID>
Captures audio generated by an application (specified by PID)
and its child processes
* wasapi2src loopback-mode=exclude-process-tree loopback-target-pid=<PID>
Captures desktop audio excluding the PID and its child processes
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1278
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3195>
If there is an error while connecting, the streaming task will be stopped, and
is_running() will be false, causing a GST_FLOW_FLUSHING to be returned. Instead,
we perform the error check (!self->connection) first, to return an error if
that's what occurred.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3189>
We've seen occasional crashes in the `wavparse` module associated with
referencing a buffer in `gst_wavparse_chain` that's already been freed. The
reference is stolen when the buffer is transferred to the adapter with
`gst_adapter_push` and, IIUC, assuming the source doesn't hold a reference to
the buffer, the buffer could be freed during interaction with the adapter in
`gst_wavparse_stream_headers`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3179>
Fix handling of WAV files whose headers are corrupted in certain ways.
In the case for which a test is provided, the size of the `fmt ` chunk is
changed from 16 bytes to 18 bytes (bytes 17 - 20 below):
```
$ hexdump -C corruptheadertestsrc.wav
00000000 52 49 46 46 e4 fd 00 00 57 41 56 45 66 6d 74 20 |RIFF....WAVEfmt |
00000010 12 00 00 00 01 00 01 00 80 3e 00 00 00 7d 00 00 |.........>...}..|
00000020 02 00 10 00 64 61 74 61 |....data|
00000028
```
(Note that the original file is much larger. This was the smallest sub-file
I could find that would generate the crash.)
Note that, while the same issue doesn't cause a crash in pull mode, there's a
different issue in that the file is processed successfully as if it was a .wav
file with zero samples.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3173>
When the alignment is "FRAME" and the parser is likely connected to
a decoder, the current PTS setting for VP9 frames inside a super
frame is not quite correct.
For example, the super frame may begin with non-displayed frames and
end with a displayed frame. The current way assigns the PTS to
the first non-displayed frame, which is a decode-only frame whose
PTS will be discarded in the video decoder, while the last displayed
frame has an invalid PTS, so the video decoder needs to guess its
PTS based on the frame rate and the previous frame's PTS. This is not a
robust approach. More importantly, when the previous frames
provide a DTS, the video decoder will also guess the PTS based on the
previous frames' DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestamp
That sets reordered_output and puts the decoder in free-run mode.
We should correct the PTS for a super frame: let the non-displayed
frames have no PTS while setting the correct PTS on the displayed one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3155>