Not all GPUs support arbitrary offsets in D3D12_PLACED_SUBRESOURCE_FOOTPRINT
when copying GPU memory between a texture and a buffer. Instead of calculating
the size/offset per plane, calculate the entire size and all plane offsets at
once.
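As a rough illustration of that approach (the variable names, the C COM-style call and the two-plane count for an NV12-like layout are assumptions, not the actual patch), GetCopyableFootprints can produce all plane footprints and the total size in a single call, with offsets that already respect the device's placement rules:

```c
/* Sketch only: query all plane footprints and the total size in one call,
 * instead of computing a footprint with an arbitrary offset per plane.
 * "device" and "resource_desc" are assumed to exist in the surrounding code. */
D3D12_PLACED_SUBRESOURCE_FOOTPRINT layouts[2];
UINT num_rows[2];
UINT64 row_sizes[2];
UINT64 total_size = 0;

ID3D12Device_GetCopyableFootprints (device, &resource_desc,
    0 /* FirstSubresource */, 2 /* NumSubresources */, 0 /* BaseOffset */,
    layouts, num_rows, row_sizes, &total_size);

/* layouts[i].Offset and total_size can then drive the staging buffer
 * allocation and the per-plane copy commands in one go. */
```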
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6973>
Some servers might not provide 100% matching PDT when doing updates, or across
variants. This would cause the code matching segments using PDT to fail if the
segment PDT was 1 microsecond (or whatever small value) before the candidate
segment, and to pick the (wrong) following segment as the matching one.
In order to be more tolerant when matching, we instead check whether the
candidate segment falls within the segment we are trying to match.
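A minimal sketch of such a tolerant comparison, assuming GDateTime PDTs and a nanosecond duration field (variable and field names are illustrative, not the actual patch):

```c
/* Accept the candidate if its PDT falls inside the segment we are trying to
 * match, instead of requiring bit-identical timestamps */
GTimeSpan diff = g_date_time_difference (candidate->datetime, segment->datetime);

if (diff >= 0 && diff < (GTimeSpan) (segment->duration / GST_USECOND))
  return candidate;   /* close enough, treat it as the matching segment */
```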
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
If we end up with a segment with an internal time that varies from the supposed
one, this could be for two reasons:
* We guesstimated the wrong segment to go to when advancing or switching
variants. In that case we try to find the actual segment to go to (just before
this change).
* There was a complete playlist change (for whatever reason) and we can't find a
replacement. In that case we want to carry on playback from this position but
need to remember that we moved (by setting the stream to DISCONT, and
resetting the new mapping).
Fixes playback on several broken streams.
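Roughly, the two branches could look like this (the helper and field names are purely illustrative):

```c
/* Sketch: resolve a stream-time mismatch noticed while advancing */
GstM3U8MediaSegment *actual =
    find_segment_for_internal_time (playlist, internal_time);  /* illustrative helper */

if (actual != NULL) {
  /* We had guessed the wrong segment: just go to the right one */
  stream->current_segment = actual;
} else {
  /* The playlist changed completely: keep playing from here, but remember
   * that we moved by flagging a discontinuity and rebuilding the mapping */
  stream->pending_discont = TRUE;   /* illustrative field */
  reset_time_mapping (stream);      /* illustrative helper */
}
```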
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
Since the default value of `m3u8->discont_sequence` (before parsing of the
playlist data) was 0, we would never properly detect the presence of that
field if it was present with a value of 0.
This would later on cause havoc in playlist synchronization where we would
assume it didn't have a discontinuity sequence specified (whereas it did, and it
was 0).
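A minimal sketch of the sentinel idea, assuming `discont_sequence` is signed and that a boolean records whether the tag was seen (the flag name is illustrative):

```c
/* Before parsing: use a value that can never come from the playlist */
m3u8->discont_sequence = -1;

/* After parsing: "#EXT-X-DISCONTINUITY-SEQUENCE:0" now remains detectable */
if (m3u8->discont_sequence < 0) {
  m3u8->discont_sequence = 0;            /* tag really absent, use the default */
  m3u8->has_discont_sequence = FALSE;    /* illustrative flag */
} else {
  m3u8->has_discont_sequence = TRUE;
}
```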
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
A lot of streams do a poor job of estimating the proper duration of individual
fragments in the playlist, but get it right over several fragments.
Instead of constantly trying to realign the estimated stream time, allow for a
more realistic tolerance of 3-4 video frames.
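Conceptually the check becomes something like the following, where the threshold value (roughly three frames at 30 fps) and the helper name are assumptions, not the exact patch:

```c
#define STREAM_TIME_TOLERANCE (100 * GST_MSECOND)   /* illustrative threshold */

GstClockTimeDiff drift = expected_stream_time - segment->stream_time;

/* Only resynchronize when the accumulated error is clearly beyond what a few
 * imprecise fragment durations can explain */
if (ABS (drift) > STREAM_TIME_TOLERANCE)
  realign_stream_time (stream, segment);            /* illustrative helper */
```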
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
When updating playlists, we want to know whether the updated playlist is
continuous with the previous one. That is: if we advance, will the next
fragment need to have the DISCONT flag set on it or not.
If that happens (because we switched variants, or the playlist all of a sudden
changed) we remember that there is a pending discont for the next fragment. That
will be used and reset the next time we get the fragment information.
Previously this was only partially done, and it was racy because the flag was set
directly on `GstAdaptiveDemux2Stream->discont` when a playlist was updated,
instead of when the next fragment was prepared.
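In pseudo-C, the intent is roughly the following (the `pending_discont` field is an illustrative name):

```c
/* When the updated playlist turns out not to be continuous */
hls_stream->pending_discont = TRUE;

/* Later, only when the next fragment is actually prepared */
if (hls_stream->pending_discont) {
  stream->discont = TRUE;              /* GstAdaptiveDemux2Stream->discont */
  hls_stream->pending_discont = FALSE; /* consumed exactly once, no race */
}
```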
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
When dealing with live streams, the function was assuming that all segments of
the playlist had a valid stream_time. But that isn't true, for example in the
case of failing to synchronize playlists.
Fixes losing sync due to not being able to match playlists on updates.
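A sketch of the guard, assuming segments carry a signed stream_time that may be unset (the types and field layout here are assumptions):

```c
for (i = 0; i < playlist->segments->len; i++) {
  GstM3U8MediaSegment *cand = g_ptr_array_index (playlist->segments, i);

  /* Live playlists may contain segments whose stream time could not be
   * synchronized yet, so don't assume it is always valid */
  if (!GST_CLOCK_STIME_IS_VALID (cand->stream_time))
    continue;

  /* ... regular matching against cand ... */
}
```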
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6961>
The name is already passed via the signal parameters so it doesn't have
to be retrieved again via the GstObject API, which would crash for children
that are not GstObjects. Child proxy child objects can be any kind of GObject
and the code here otherwise handles this correctly already.
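For illustration, a "child-added" handler along these lines uses the name the signal already provides instead of calling into the GstObject API:

```c
static void
on_child_added (GstChildProxy * proxy, GObject * child, const gchar * name,
    gpointer user_data)
{
  /* "child" may be any GObject, so don't use GST_OBJECT_NAME (child) here;
   * the signal already hands us the name */
  GST_INFO ("child '%s' added", name);
}
```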
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6951>
On various 32-bit systems, time_t is actually 64 bits while long is
still only 32 bits. The macro would wrongly trigger its assertion in
this case if a value with more than 68 years' worth of seconds is
converted.
Examples are various newer 32-bit platforms and old ones that are
compiled with -D_TIME_BITS=64.
Also statically assert that time_t is either 32 or 64 bits. Other sizes
would need adjustments to the macro.
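The gist, with an illustrative macro name rather than the real one:

```c
/* Reject time_t sizes the conversion was not written for */
G_STATIC_ASSERT (sizeof (time_t) == 4 || sizeof (time_t) == 8);

/* Convert through a 64-bit type instead of long, so a 64-bit time_t on a
 * 32-bit system is not truncated before use */
#define EXAMPLE_TIME_T_TO_NS(t) ((gint64) (t) * (gint64) GST_SECOND)
```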
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6919>
The prime_fds of multiple planes may be the same. For example, on Intel
platforms an NV12 surface may use the same FD for plane 0 and plane 1.
DRM_IOCTL_GEM_CLOSE would then close the same handle twice and fail with an
"Invalid argument (22)" error the second time.
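A sketch of the workaround (the surrounding loop and array names are assumptions): remember which GEM handles were already closed and skip duplicates:

```c
guint32 closed[GST_VIDEO_MAX_PLANES] = { 0, };
guint n_closed = 0;
guint i, j;

for (i = 0; i < n_planes; i++) {
  struct drm_gem_close close_arg = { .handle = handles[i] };
  gboolean dup = FALSE;

  /* Planes imported from the same prime FD share a GEM handle */
  for (j = 0; j < n_closed; j++) {
    if (closed[j] == handles[i]) {
      dup = TRUE;
      break;
    }
  }
  if (dup)
    continue;

  if (drmIoctl (drm_fd, DRM_IOCTL_GEM_CLOSE, &close_arg) != 0)
    GST_WARNING ("failed to close GEM handle %u: %s", handles[i],
        g_strerror (errno));
  closed[n_closed++] = handles[i];
}
```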
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6916>
To simplify the description, I'm assuming we only have two streams: video and audio.
For the video stream, we have the following events:
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(1) => blocked waiting in gst_stream_synchronizer_wait
- FLUSH_START => unblocked
- FLUSH_STOP => stream->wait reset to false
- NEW_SEGMENT(2) => not waiting, since stream->wait is false
Then for the audio stream, we have the following events:
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(2) => blocked waiting in gst_stream_synchronizer_wait forever.
Note: The first NEW_SEGMENT event and the FLUSH_START, FLUSH_STOP events of the audio stream
are dropped before reaching the streamsynchronizer element, because the decodebin audio
src pad is not yet linked to the playsink audio sink pad.
To fix this deadlock, we don't reset stream->wait to false on the FLUSH_STOP event when the
stream is not waiting for the EOS of the other streams.
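In other words, the FLUSH_STOP handling becomes conditional, roughly like this (the "is it currently waiting" check is expressed with an illustrative flag, not the real field):

```c
case GST_EVENT_FLUSH_STOP:
  /* Only clear the wait flag if this stream really is blocked waiting for
   * the EOS of the other streams; otherwise keep it set so the next
   * NEW_SEGMENT still waits as intended */
  if (stream->waiting_for_eos)      /* illustrative flag */
    stream->wait = FALSE;
  break;
```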
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6887>
A frame with the GST_VIDEO_CODEC_FRAME_FLAG_FORCE_KEYFRAME flag set should be
a sync point, so we should make it an IDR frame that begins a new GOP rather
than just promoting it to an I frame.
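Schematically (encoder-specific details omitted, the helper name is illustrative):

```c
if (GST_VIDEO_CODEC_FRAME_IS_FORCE_KEYFRAME (frame)) {
  /* A forced keyframe must be a real sync point: start a brand new GOP with
   * an IDR frame instead of merely promoting this frame to an I frame */
  start_new_gop_with_idr (encoder, frame);   /* illustrative helper */
}
```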
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6857>
A frame with the GST_VIDEO_CODEC_FRAME_FLAG_FORCE_KEYFRAME flag set should be
a sync point, so we should make it an IDR frame that begins a new GOP rather
than just promoting it to an I frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6857>
This mirrors the behaviour in vp8enc / vp9enc and is generally more
useful than using any framerate from the caps, as it stays reasonably
accurate even if the stream's timestamps don't follow the framerate
exactly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6909>
Turns out AudioConvertHostTimeToNanos and AudioGetCurrentHostTime are macOS-only APIs, which prevents apps using
GStreamer on iOS from being accepted into the App Store.
This commit replaces those functions with a manual version of what they do: mach_absolute_time() for the current
host time, and data from mach_timebase_info() captured at the beginning to convert host timestamps to nanoseconds.
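A self-contained sketch of that replacement (the function names here are illustrative; the mach calls are the real APIs):

```c
#include <gst/gst.h>
#include <mach/mach_time.h>

static mach_timebase_info_data_t timebase_info;

/* Called once at the beginning */
static void
init_host_timebase (void)
{
  mach_timebase_info (&timebase_info);
}

/* Stand-in for AudioConvertHostTimeToNanos() */
static guint64
host_time_to_ns (guint64 host_time)
{
  return gst_util_uint64_scale (host_time, timebase_info.numer,
      timebase_info.denom);
}

/* Stand-in for AudioGetCurrentHostTime(), already converted to nanoseconds */
static guint64
current_host_time_ns (void)
{
  return host_time_to_ns (mach_absolute_time ());
}
```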
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6899>