The initial goal was to support the case where we are paused watching a live
stream and, when we resume, can no longer resume from the previously
downloaded position. In that case we internally do a flushing seek back to the
"current live head position". This has since been extended to also handle
(utterly broken) servers where we can't really figure out where we are anymore,
triggering the same lost-sync handling so we can try to get back on our feet.
This does fix the issue... but results in spurious FLUSH_{START|STOP} events
being sent downstream. While that's fine for regular playback scenarios, many
pipelines/applications don't expect such events when they weren't triggered by
downstream or the application.
Fixes #3605
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7005>
This fixes a regression introduced by 6c4f52ea20
There are cases where the input stream will be push-based, use a time segment,
and have neither a collection nor caps. This means the event-based checks are
not sufficient to decide when/where to plug in an identity or parsebin to
process the input.
For those corner cases we set up a buffer probe to ensure we always end up with
at least a parsebin (see the sketch below).
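A minimal sketch of that fallback, assuming hypothetical helper and state names
(the real code lives in decodebin3's input handling):
```
#include <gst/gst.h>

/* Hypothetical helper that plugs a parsebin in front of this input */
static void ensure_input_parsebin (gpointer input);

static GstPadProbeReturn
input_buffer_probe_cb (GstPad * pad, GstPadProbeInfo * info,
    gpointer user_data)
{
  /* A buffer arrived before any event-based check could decide what to
   * plug: make sure we at least have a parsebin in place */
  ensure_input_parsebin (user_data);
  return GST_PAD_PROBE_REMOVE;  /* one-shot: setup is done */
}

static void
setup_input_probe (GstPad * sinkpad, gpointer input)
{
  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
      input_buffer_probe_cb, input, NULL);
}
```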
Fixes #3609
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7010>
This can be seen as a workaround for the multi-channel transcoding case (e.g.
decoder output feeding two channels, one for the encoder and one for the VPP).
Normally, the encoder sets a huge min pts to avoid negative dts, while the VPP
sets pts without this additional huge value. Since both the encoder and the VPP
accept the same buffer from the decoder (i.e. they modify the timestamp of the
same mfx surface), the input surface pts can end up not matching what the
encoder expects. So we add this huge value to the VPP as well, ensuring the
encoder and the VPP set the same value on the input mfx surface while not
breaking the encoder's min-pts protection against negative dts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6971>
Add a property to control the error reporting behavior when the output
window is closed in the playing or paused state. This can be useful
for apps that want to close the window even while a stream is playing,
where the closed window is expected.
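A sketch of the intended usage; the property name below is hypothetical (check
the sink's documentation for the real one):
```
/* Hypothetical property name: don't post an error when the user
 * closes the output window during playback */
g_object_set (videosink, "error-on-closed", FALSE, NULL);
```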
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6939>
The typefind code was rejecting content smaller than 128 bytes, making it
impossible to play very small SRT files.
Those can actually be properly detected, so fix typefind to allow smaller
content and try its best with it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6937>
When measuring video latency, one mechanism involves taking a photo
with a camera of two screens showing the test video overlaid with
timeoverlay or clockoverlay. In these cases, if the display's pixel
response time is crappy, you will see ghosting, which can make it quite
difficult to discern the current timestamp being shown.
This commit adds a property that *also* shows the timestamp in
a different (sequentially predictable) location every frame, which
makes it easy to tell what the latest rendered timestamp is.
For bonus points, you can also use the fade-time of the previous frame
to measure with sub-framerate accuracy when the photo was taken, not
just clamped to the framerate, giving you a higher precision latency
value.
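As a sketch of how this might be used (the property name below is hypothetical;
check the timeoverlay documentation for the real one):
```
/* Hypothetical property name, for illustration only */
GstElement *pipeline = gst_parse_launch (
    "videotestsrc ! timeoverlay show-moving-timestamp=true "
    "! videoconvert ! autovideosink", NULL);
```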
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6935>
This attempts to implement the gtkglsink element on Windows using WGL.
There were some more gotchas along the way, since we need to juggle with
libepoxy, which means we need a recent GTK+-3.24.x, i.e. the upcoming
GTK+-3.24.43, for this to work properly.
Since we are essentially using an overlay compositor only during
rendering, move its initialization and destruction into the
gtk_gst_gl_widget_render() function. This is safer as we are doing
things across threads between GStreamer (gst-gl) and GTK, and GL
operations, as noted above, have more gotchas on Windows.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4289>
When dealing with push-based inputs, we now delay the creation of
parsebin/identity until we have received all pre-buffer events.
We can therefore simplify the handling of newly linked pads and only have to
check whether upstream can handle pull-based scheduling or not.
This avoids creating a parsebin for parsed upstream data altogether.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6953>
A simple gst_gl_sync_meta_wait() is not sufficient to ensure GL commands
are executed before dma-buf devices get to see the buffer.
This is a first step that should make the code behave correctly for
everybody, although there may be a performance penalty. In the future we
should introduce a more general sync meta that would allow moving the
waiting from gldownload (the producer) to the sink elements (the
consumers).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6968>
Not all GPUs support an arbitrary offset of
D3D12_PLACED_SUBRESOURCE_FOOTPRINT when copying GPU memory between
texture and buffer. Instead of calculating size/offset per plane,
calculate the entire size and all offsets at once.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6967>
drm_fourcc.h should be picked up via the pkg-config include path, not the
system includedir directly.
Also consolidate the libdrm usage in va and msdk.
All this allows it to be picked up consistently (via the subproject,
for example).
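Concretely, with libdrm's pkg-config cflags in place (e.g. via meson's
dependency('libdrm')), the header is found under the libdrm include path:
```
/* picked up via -I.../include/libdrm from libdrm.pc */
#include <drm_fourcc.h>

/* rather than hardcoding the system include directory: */
/* #include <libdrm/drm_fourcc.h> */
```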
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932>
This patch fixes this critical warning when registering MSDK:
_dma_fmt_to_dma_drm_fmts: assertion 'fmt != GST_VIDEO_FORMAT_UNKNOWN' failed
It was caused by the HEVC string of possible output formats containing an
extra space that could not be parsed correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6853>
The name is already passed via the signal parameters so it doesn't have
to be retrieved again via the GstObject API, which would crash on other
GObjects. Child proxy child objects can be any kind of GObject, and the
code here otherwise already handles this correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6938>
When transforming from unknown alignment to frame or obu, the TU timestamp
was not properly transferred. Fix this by saving the TU DTS as the first
DTS seen within the TU data, and the PTS as the last PTS seen in that
TU data. Finally, reset the TU timestamp after each TU has completed.
Fixes #1496
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6895>
Some servers might not provide 100% matching PDT when doing updates, or across
variants. This would cause the code matching segments using PDT to fail if the
segment PDT was 1 microsecond (or some similarly small value) before the
candidate segment, and would pick the (wrong) following segment as the matching
one.
In order to be more tolerant when matching, we instead check whether the
candidate segment falls within the first segment of the one we are trying to
match.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
If we end up with a segment whose internal time varies from the expected one,
this could be for two reasons:
* We guess-timated the wrong segment to go to when advancing or switching
variants. In that case we try to find the actual segment to go to (done just
before this change).
* There was a complete playlist change (for whatever reason) and we can't find a
replacement. In that case we want to carry on playback from this position but
need to remember that we moved (by setting the stream to DISCONT and
resetting the new mapping).
Fixes playback on several broken streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
Since the default value of `m3u8->discont_sequence` (before parsing the
playlist data) was 0, we would never properly detect the presence of that
field when it was present with a value of 0.
This would later cause havoc in playlist synchronization, where we would
assume the playlist didn't have a discontinuity sequence specified (whereas it
did, and it was 0).
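A sketch of the fix idea (sentinel choice illustrative): initialize the field
to a value that cannot appear in a playlist, so an explicit
`#EXT-X-DISCONTINUITY-SEQUENCE:0` is distinguishable from the tag being absent:
```
/* before parsing the playlist data */
m3u8->discont_sequence = -1;

/* ... parse the playlist ... */

if (m3u8->discont_sequence == -1) {
  /* no discontinuity sequence was specified in the playlist */
}
```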
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
A lot of streams do a poor job of estimating the proper duration of individual
fragments in the playlist, but get it right over several fragments.
Instead of constantly trying to realign the estimated stream time, allow for a
more realistic tolerance of 3-4 video frames.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
When updating playlists, we want to know whether the updated playlist is
continuous with the previous one. That is: if we advance, will the next
fragment need to have the DISCONT flag set on its buffer or not.
If that happens (because we switched variants, or the playlist all of a sudden
changed) we remember that there is a pending discont for the next fragment. That
will be used and reset the next time we get the fragment information.
Previously this was only partially done, and it was racy because the flag was
set directly on `GstAdaptiveDemux2Stream->discont` when a playlist was updated,
instead of when the next fragment was prepared.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
When dealing with live streams, the function was assuming that all segments of
the playlist had a valid stream_time. But that isn't true, for example when
playlist synchronization has failed.
Fixes losing sync due to not being able to match playlists on updates.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
Even if no new synchronization information is available.
This is necessary because the timestamp offset logic in rtpbin depends
on the base RTP time that is determined by the jitterbuffer, but this
changes all the time (especially in mode=slave) and the timestamp
offsets have to be updated accordingly. Doing so is especially important
if they're only determined by the RTP-Info, which never changes from the
very beginning.
The interval can be configured via the new min-sync-interval property.
Synchronization happens at least that often, but at most as often as the
old sync-interval property allows.
Both intervals are now based on the monotonic system clock.
Additionally, clean up synchronization code a bit, only emit either
inband NTP or RTCP SR synchronization at the same time, based on which
one has the more recent time information, and only emit RTP-Info
synchronization if it wasn't provided previously at the same time as the
NTP-based synchronization information.
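A sketch of configuring both intervals from an application (millisecond units
and property location assumed here, matching rtpbin's other interval
properties):
```
/* Force synchronization to run at least every 500ms, but no more
 * often than the existing sync-interval allows */
g_object_set (rtpbin,
    "min-sync-interval", 500,
    "sync-interval", 2000,
    NULL);
```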
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
There is generally no requirement to ignore RTCP SR if the RTP time of
the SR differs a lot from the last received RTP packet. The mapping
between RTP and NTP time stays valid until there was a stream reset, in
which case we wouldn't use that information anyway.
When using rtcp-sync-send-time=false the default of 1s difference can
easily be exceeded, e.g. if encoding of the stream after capture adds
more than 1s of latency.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
`never` is useful for some RTSP servers that report plain garbage both via
RTCP SR and RTP-Info, for example.
`ntp` is useful if synchronization should only ever happen based on RTCP
SR or the NTP-64 RTP header extension.
Also slightly change the behaviour of `always`/`initial` to take RTP-Info
based synchronization into account too. It's supposed to give the same
values as the RTCP SR and is available earlier, so it will generally cause
fewer synchronization glitches if it's made use of.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
Instead of switching on the very first stream, require that all streams
have switched before switching to the different synchronization
mechanism.
Without this there will be a noticeable gap during the switch. E.g. when
going from RTP-Info to NTP-based association, first only the first stream
would get an offset, then the first two, ... then all of them.
Depending on the order of streams this would cause a lot of changes in
ts-offset during the transition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
Previously these parameters were randomly changed in the body of the
function to avoid having to declare a new variable, which made the code
very hard to follow. By marking them as const this won't be possible
anymore in the future.
Also the RTP clock-base (RTP time from RTSP RTP-Info) is an unsigned
64 bit integer as it's an extended RTP timestamp.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
Both were entangled previously and it was very hard to follow what happens
under which conditions. Now, as a very first step, the code decides which
of the two cases it is going to apply, and then proceeds accordingly.
This also avoids calculating completely invalid values along the way and
even printing them in the debug output.
Also improve debug output in various places.
This shouldn't cause any behaviour changes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
This simplifies the code as it's a much simpler case than the normal
inter-stream synchronization, and interleaving it with that only
reduces readability of the code.
Also improve some debug output in this code path.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6543>
Adds a separate vtenc_h265a element (with a _hw variant as usual) for the HEVCWithAlpha codec type.
I decided to go with a separate element so as not to break existing uses of the normal HEVC encoder.
The preserve_alpha property is still only used for ProRes; there's no need for it here because we explicitly say we want
alpha when using the new element.
For now, HEVCWithAlpha has an issue where it does not throttle the amount of input frames queued internally.
I added a quick workaround where encode_frame() blocks until the enqueue_frame() callback notifies it that some space
has been freed up in the internal queue. The limit was set to 5, which should be enough I guess? Hopefully this is not
too prone to race conditions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6664>
When we are dealing with parsed inputs (i.e. using identity), we need to ensure
that we have a valid stream collection (and therefore a DBCollection) before
anything flows downstream.
In those cases, we hold onto those events until we get such a collection.
Fixes #3356
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
This commit separates collections and selections into a new structure:
DecodebinCollection.
This provides a much cleaner/saner way of dealing with collections being
updated, gapless playback, etc.
There is now a list of DecodebinCollection in flight, of which two are special:
* input_collection, the collection currently being inputted/merged
* output_collection, the collection currently active on the output of multiqueue
Handling of GST_EVENT_SELECT_STREAMS is split by looking for the collection to
which it applies, and the requested streams are stored in it. If, and only if,
that collection is output_collection can we do the switch immediately;
otherwise it will be applied when that collection becomes active.
Detecting which collection/selection is active is done by looking at the
GST_EVENT_STREAM_START on the output of the multiqueue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Move the handling of GST_EVENT_STREAM_START on a slot to a separate function
* There was a lot of usage of `gst_stream_get_stream_id()` for the slot's
active_stream. Cache it instead of constantly querying it.
* Rename the variables in `handle_stream_switch()` to be clearer
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Centralize associating an output to a slot in one function, including properly
resetting those fields
* Rename functions to be more explicit
* Move code to "reset" an output stream into a dedicated function (will be used
later)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Rename the function names to be clearer, with prefixes
* Pass the input (or stream) directly where appropriate
* Document usage, inputs, ownership
* Rename variables for clarity where applicable
* Avoid double lock/unlock if callee can handle it directly
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
Simplify its usage by having it directly create the message if the collection
changed. This is what callers were always doing, and it avoids releasing the
selection lock yet another time.
Also use it in more places to avoid code repetition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
`StdVideoDecodeH265PictureInfo.flags.IsReference` refers to section 3.132 of the
ITU-T H.265 specification:
reference picture: A picture that is a short-term reference picture or a
long-term reference picture.
`GstH265Picture.ref` doesn't reflect this, so we need to query the NAL type of
the processed slice instead.
This patch fixes the validation layer error
`VUID-vkCmdBeginVideoCodingKHR-slotIndex-07239` when using the NVIDIA driver.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6901>
Conceptually identical to the present signal of d3d11videosink.
This signal will be emitted with the current render target
(i.e., the swapchain backbuffer) and command queue. The signal handler
can record GPU commands for an overlay image or to blend
an image onto the render target.
In addition to d3d12 resources, the videosink will pass
d3d11 and d2d resources depending on the "overlay-mode"
property, so that the signal handler can render using its
preferred/required DirectX API.
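A sketch of hooking up such a handler from an application; the handler
signature here is illustrative, not the element's actual one:
```
#include <gst/gst.h>

/* Illustrative handler signature: the exact parameters are defined by
 * the element */
static void
on_present (GstElement * sink, gpointer device, gpointer render_target,
    gpointer user_data)
{
  /* record GPU commands that draw the overlay onto the backbuffer */
}

static void
setup_overlay (GstElement * videosink)
{
  g_signal_connect (videosink, "present", G_CALLBACK (on_present), NULL);
}
```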
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
The idea behind using a separate command queue per videosink was that the
swapchain is bound to a command queue and we need to flush that
command queue when the window size changes. But the separate
queue does not seem to improve performance much.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
On various 32 bit systems, time_t is actually 64 bits while long is
still only 32 bits. The macro would wrongly trigger its assertion in
this case if a value with more than 68 years worth of seconds is
converted.
Examples are various newer 32 bit platforms and old ones that are
compiled with -D_TIME_BITS=64.
Also statically assert that time_t is either 32 or 64 bits. Other values
might need adjustments in the macro.
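A minimal sketch of that static assertion, using GLib's G_STATIC_ASSERT:
```
#include <glib.h>
#include <time.h>

/* Fail at compile time if time_t is neither 32 nor 64 bits; other
 * sizes might need adjustments in the conversion macro */
G_STATIC_ASSERT (sizeof (time_t) == sizeof (gint32) ||
    sizeof (time_t) == sizeof (gint64));
```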
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6869>
The prime_fds for multiple planes may be the same. For example, on Intel
platforms an NV12 surface may use the same FD for plane0 and plane1.
The DRM_IOCTL_GEM_CLOSE ioctl would then close the same handle twice and
fail with an "Invalid argument (22)" error the second time.
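A sketch of the de-duplication idea (not the exact patch): close each distinct
GEM handle only once:
```
#include <glib.h>
#include <sys/ioctl.h>
#include <drm.h>                /* via libdrm's pkg-config cflags */

static void
close_gem_handles (int drm_fd, guint32 * handles, guint n_planes)
{
  guint i, j;

  for (i = 0; i < n_planes; i++) {
    struct drm_gem_close arg = { .handle = handles[i] };
    gboolean seen_before = FALSE;

    /* planes may share the same GEM handle (e.g. NV12 on Intel) */
    for (j = 0; j < i; j++) {
      if (handles[j] == handles[i])
        seen_before = TRUE;
    }
    if (seen_before || handles[i] == 0)
      continue;

    if (ioctl (drm_fd, DRM_IOCTL_GEM_CLOSE, &arg) < 0)
      g_warning ("Failed to close GEM handle %u", handles[i]);
  }
}
```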
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6914>
From the spec (chapter 34, v1.3.283):
```
UNORM: the components are unsigned normalized values in the range [0, 1]
SRGB: the R, G and B components are unsigned normalized value that represent
values using sRGB nonlinear encoding, while the A component (if one
exists) is a regular unsigned normalized value
```
The difference is the storage encoding: the first is aimed at image
transfers, while the second is for shaders, mostly in the swapchain stage of
the pipeline, where the conversion is done automatically if needed [1].
As far as I have checked, other frameworks (FFmpeg, GTK+) use exclusively UNORM
formats when importing or exporting images from/to Vulkan, while SRGB formats
are ignored.
My conclusion is that Vulkan formats relate to how bits are stored in
memory rather than to their transfer functions (colorimetry).
This patch does two interrelated changes:
1. It swaps certain color format maps in both
gst_vulkan_format_from_video_info() and gst_vulkan_format_from_video_info_2()
to try the UNORM formats first, when comparing their usage, and only later
check for SRGB.
2. It removes the code that checks for colorimetry in
gst_vulkan_format_from_video_info_2(), since that is not storage related.
[1] https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797>
This mirrors the behaviour in vp8enc / vp9enc and is generally more
useful than using any framerate from the caps as it provides some degree
of accuracy if the stream doesn't have timestamps perfectly according to
the framerate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891>
Turns out AudioConvertHostTimeToNanos and AudioGetCurrentHostTime are macOS-only APIs, which prevents apps using
GStreamer on iOS from being accepted into the App Store.
This commit replaces those functions with a manual version of what they do: mach_absolute_time() for the current time,
and data from mach_timebase_info() queried at the beginning to convert host timestamps to nanoseconds.
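A sketch of the replacement logic, using only APIs available on both macOS and
iOS:
```
#include <mach/mach_time.h>
#include <stdint.h>

/* Convert mach host time (ticks) to nanoseconds using the timebase
 * queried once; naive scaling, beware overflow for huge tick counts */
static uint64_t
host_time_to_ns (uint64_t host_time)
{
  static mach_timebase_info_data_t tb;

  if (tb.denom == 0)
    mach_timebase_info (&tb);

  return host_time * tb.numer / tb.denom;
}

/* current host time, previously AudioGetCurrentHostTime(): */
/* uint64_t now = mach_absolute_time (); */
```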
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6789>
The `videosink` reference in main() is a floating one, so it should not
be unref'ed (the playbin effectively takes ownership of it).
This prevents a "gst_object_unref: assertion '((GObject *)
object)->ref_count > 0' failed" warning at runtime.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6883>
Using g_error() crashes the application, producing a coredump, e.g. when
the user closes the video window, which can be confusing (especially in
the very first tutorial).
Let's change this to log the error message without crashing, using
g_printerr(), like the subsequent tutorials do.
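Along these lines (the message text is illustrative):
```
/* Before: g_error() aborts the process and dumps core */
g_error ("Unable to set the pipeline to the playing state.");

/* After: report the error and bail out cleanly */
g_printerr ("Unable to set the pipeline to the playing state.\n");
gst_object_unref (pipeline);
return -1;
```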
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6883>
To simplify the description, assume we only have two streams: video and audio.
For the video stream, we have the following events:
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(1) => blocked waiting in gst_stream_synchronizer_wait
- FLUSH_START => unblocked
- FLUSH_STOP => stream->wait reset to false
- NEW_SEGMENT(2) => not waiting, since stream->wait is false
Then for the audio stream, we have the following events:
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(2) => blocked waiting in gst_stream_synchronizer_wait forever.
Note: the first NEW_SEGMENT event and the FLUSH_START, FLUSH_STOP events of the audio stream
are dropped before being received by the streamsynchronizer element, because the decodebin audio src pad
is not yet linked to the playsink audio sink pad.
To fix this deadlock, we no longer reset stream->wait to false on FLUSH_STOP when the stream is not
waiting for the EOS of the other streams.
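A sketch of the resulting FLUSH_STOP handling (the waiting flag's name is
illustrative; stream->wait is from the description above):
```
case GST_EVENT_FLUSH_STOP:
  /* Only clear the wait flag if this stream was actually blocked
   * waiting for the EOS of the other streams */
  if (stream->waiting_for_eos)
    stream->wait = FALSE;
  break;
```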
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6763>
If we have constant duration buffers, set the duration on
outgoing buffers, like rtpmp4adepay does. This fixes
problems with (for example) muxers like mp4mux not writing
the duration of the final sample into the index.
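A sketch of the idea (field names illustrative), mirroring rtpmp4adepay:
```
/* With constant-duration access units, stamp every outgoing buffer */
if (depay->constant_duration != GST_CLOCK_TIME_NONE)
  GST_BUFFER_DURATION (outbuf) = depay->constant_duration;
```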
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6878>
_get_osfhandle() expects a valid fd and the CRT will abort the program
if the given parameter is invalid. The fd can be invalidated in various
ways, e.g. the file was deleted by another process after
we opened it. To avoid this, our own exception
handler must be installed so that _get_osfhandle() can return
INVALID_HANDLE_VALUE if the fd is invalid.
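One plausible shape for this, using the CRT's thread-local invalid parameter
handler (a sketch, not necessarily the exact code):
```
#include <windows.h>
#include <io.h>
#include <stdlib.h>

static void
noop_invalid_param_handler (const wchar_t * expr, const wchar_t * func,
    const wchar_t * file, unsigned int line, uintptr_t reserved)
{
  /* deliberately empty: let _get_osfhandle() fail gracefully */
}

static HANDLE
get_osfhandle_safe (int fd)
{
  _invalid_parameter_handler old =
      _set_thread_local_invalid_parameter_handler (noop_invalid_param_handler);
  intptr_t handle = _get_osfhandle (fd);

  _set_thread_local_invalid_parameter_handler (old);
  return (HANDLE) handle;
}
```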
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6877>
In some cases you want to ensure that a specific element factory is used
while requiring some specific caps, but this was not possible. You can
now do `qtmux:video/x-prores,variant=standard|factory-name=avenc_prores_ks`
to ensure that the `avenc_prores_ks` factory is used to produce the
'standard' variant ProRes video stream.
This also enhances the documentation a bit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6875>
The encoder can specify a preferred_output_delay value to get better throughput
performance. A higher delay may give better HW performance, but it may increase
the encoder and pipeline latency.
When the output queue length is smaller than preferred_output_delay, the encoder
will not block waiting for encoding output. It will continue to prepare and
send more commands to the GPU, which may improve encoder throughput.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359>
This counter is incremented once for every segment, meaning it would
e.g. overflow after 24 days when using 1ms segments. Once that happens,
completely wrong positions are reported and invalid memory is handed out
for writing/reading the next segments.
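(Sanity-checking the numbers: assuming a signed 32-bit counter, it overflows
after 2^31 increments; at one segment per millisecond that is 2^31 ms, about
24.9 days.)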
As the affected variables are unfortunately part of the public API of
the struct, a second set of variables is added together with accessor
functions and both variables are kept in sync for backwards
compatibility.
All existing users of the two variables are moved to the new ones but
external code might still run into the overflow.
This also slightly breaks API as external code updating the variables
will have no effect anymore but the only known user of this is
pulsesink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6740>
If the muxer times out because of the latency deadline it can happen
that some pads have no caps yet. In that case skip creation of streams
for these pads and create updated section tables once the first buffer
arrives later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823>
This makes sure that for sparse streams (KLV, DVB subtitles, ...) the
muxer does not wait until the next buffer is available for them but
times out on the latency deadline and outputs data.
For non-live pipelines it will still be necessary for upstream to
correctly produce gap events for sparse streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823>