There are some streams, with HRD, where the calculated
max_dpb_frames is zero (max_dpb_mbs is less than the frame size in MBs). In order to
get the DPB size it is required to rely on the VUI parameters if they
are present.
According to the spec, Annex E.2.1:
**max_dec_frame_buffering** specifies the required size of the HRD
decoded picture buffer (DPB) in units of frame buffers. It is a
requirement of bitstream conformance that the coded video sequence
shall not require a decoded picture buffer with size of more than
Max(1, max_dec_frame_buffering) frame buffers to enable the output of
decoded pictures at the output times specified by dpb_output_delay of
the picture timing SEI messages. The value of max_dec_frame_buffering
shall be greater than or equal to max_num_ref_frames. An upper bound
for the value of max_dec_frame_buffering is specified by the level
limits in clauses A.3.1, A.3.2, G.10.2.1, and H.10.2.
When the max_dec_frame_buffering syntax element is not present, the
value of max_dec_frame_buffering shall be inferred as follows:
– If profile_idc is equal to 44, 86, 100, 110, 122, or 244 and
constraint_set3_flag is equal to 1, the value of
max_dec_frame_buffering shall be inferred to be equal to 0.
– Otherwise (profile_idc is not equal to 44, 86, 100, 110, 122, or 244
or constraint_set3_flag is equal to 0), the value of
max_dec_frame_buffering shall be inferred to be equal to MaxDpbFrames.
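As a rough sketch of the resulting fallback (not the verbatim patch; `sps` is a
parsed GstH264SPS and `max_dpb_frames` the level-derived value):

```c
/* Illustrative sketch: fall back to the VUI's max_dec_frame_buffering when
 * the level-based calculation is unusable. */
guint max_dec_frame_buffering = max_dpb_frames;   /* MaxDpbFrames from the level */

if (sps->vui_parameters_present_flag &&
    sps->vui_parameters.bitstream_restriction_flag)
  max_dec_frame_buffering = sps->vui_parameters.max_dec_frame_buffering;

/* the spec requires at least Max(1, max_dec_frame_buffering) frame buffers */
max_dec_frame_buffering = MAX (1, max_dec_frame_buffering);
```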
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1381>
Extensions and layers can be enabled before calling
gst_vulkan_instance_open() but after calling
gst_vulkan_instance_fill_info().
Use the list of available extensions to choose a better default display
implementation for surface output.
Defaults are still the same.
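A minimal sketch of the resulting call order, assuming
gst_vulkan_instance_enable_extension() is the toggle used in between (the
extension name is only an example):

```c
GstVulkanInstance *instance = gst_vulkan_instance_new ();
GError *error = NULL;

/* fill_info() retrieves the available extensions and layers... */
if (!gst_vulkan_instance_fill_info (instance, &error))
  g_error ("failed to retrieve instance info: %s", error->message);

/* ...so they can be enabled here, before open() creates the VkInstance */
gst_vulkan_instance_enable_extension (instance, "VK_KHR_xcb_surface");

if (!gst_vulkan_instance_open (instance, &error))
  g_error ("failed to create vulkan instance: %s", error->message);
```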
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1341>
Until now, the bounds check would simply trace the value and the range. This
enhances the trace by also including the name of the variable that was to be set
or read. This is not magically perfect in all cases, but it greatly speeds up the
debugging work. Here's an example before and after this change:
Before: gst_h264_parser_parse_slice_hdr: value not in allowed range. value: 819183, range -87-77
After: gst_h264_parser_parse_slice_hdr: value for 'slice->slice_qp_delta' not in allowed range. value: 819183, range -87-77
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1356>
The unit tests only checked for vulkan_dep.found(), which can
be true if the libs are there but glslc was not found, in which
case the plugin wouldn't be built and the unit tests would fail
because of missing vulkan plugins.
Doesn't really make much sense to build the vulkan integration lib
either if we're not going to build the vulkan plugin, so just disable
both for now if glslc is not available.
Fixes #1301
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1307>
Using GList requires a lot of small allocations at runtime and it also
comes with a slow sort algorithm. As we use these lists for every
frame and slice, use GArray instead. Note that we cache some arrays
in the instance as there is no support for stack-allocated arrays
in GArray.
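A short sketch of the pattern (the element type and `compare_poc` comparator
are just examples):

```c
/* one up-front allocation instead of many small GList nodes */
GArray *pictures = g_array_sized_new (FALSE, TRUE,
    sizeof (GstH264Picture *), 16);

g_array_append_val (pictures, picture);
g_array_sort (pictures, (GCompareFunc) compare_poc);  /* cheaper than g_list_sort() */

/* cached in the instance: drop the contents but keep the allocation */
g_array_set_size (pictures, 0);
```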
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1238>
This was removed in Vulkan 1.2.140.
> Shortly after 2020-04-24, we will be removing the automatically
> generated `VK_*_BEGIN_RANGE`, `VK_*_END_RANGE`, and `VK_*_RANGE_SIZE`
> tokens from the Vulkan headers. These tokens are currently defined for
> some enumerated types, but are explicitly not part of the Vulkan API.
> They existed only to support some Vulkan implementation internals,
> which no longer require them. We will be accepting comments on this
> topic in [#1230], but we strongly suggest any external projects using
> these tokens immediately migrate away from them.
[#1230]: https://github.com/KhronosGroup/Vulkan-Docs/issues/1230
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1255>
The parser is used all over the place assuming that after calling
gst_h265_parser_identify_nalu(), the NAL following the start code that was found
can also be identified. In H264 this works, because scan_for_start_code relies on
gst_byte_reader_masked_scan_uint32(), which ensures that 1 byte past the 3-byte
start code is present. But for HEVC, we need two bytes to identify the
following NAL.
This patch will return NO_NAL_END, even if a start code is found, in case
there were not enough bytes. This solution was chosen to maintain backward
compatibility and reduce complexity.
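From the caller's point of view this means NO_NAL_END can now also mean "need
more data", roughly:

```c
GstH265NalUnit nalu;
GstH265ParserResult res;

res = gst_h265_parser_identify_nalu (parser, data, offset, size, &nalu);
if (res == GST_H265_PARSER_NO_NAL_END) {
  /* a start code may have been found, but not enough bytes follow it to
   * identify the next NAL: accumulate more data and try again */
}
```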
Fixes #1287
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
Same as done for H264, this error was trying to catch the case where we had
a start code without any bytes afterward. This will never happen since the
start code scanner only returns a match if there is one byte after the start
code (pattern 0x00000100 / mask 0xffffff00). In H264, one byte is sufficient
to identify the NALU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
This will stop stripping four-byte start codes. This was fixed and broken
again in the past as it was causing a timestamp shift. We now call
gst_base_parse_set_ts_at_offset() with the offset of the first NAL to ensure
that fixing a moderately broken input stream won't affect the timestamps. We
also fix the unit test, removing a comment about the stripping behaviour not
being correct.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
This error was trying to catch the case where we had a start code without any
bytes afterward. This will never happen since the start code scanner only returns
a match if there is one byte after the start code (pattern 0x00000100 / mask
0xffffff00).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
This is the first version of an AV1 parser implementation in GStreamer.
A test file is also provided with several test cases. It contains a
test sequence taken from the aom testdata set, with one key frame and one
inter-frame; the same sequence has been reencoded to annexb. The test data
is taken from the aom testdata set (and reencoded for annexb) as well
as handcrafted test cases. Once reference test data is available, the
testing could be improved as well.
Co-author: He Junyan <junyan.he@hotmail.com>
Co-author: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/785>
The old manner does not consider the profile_idc. The profile_idc should
play a more important role in recognizing the profile than the other
information. And there is no need to mix profiles of different extensions
together to find the closest profile when the bitstream is not standard:
different extensions support different features and should not be mixed.
The correct way is to recognize the extension category by profile_idc
first, and then find the closest profile.
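Roughly, the intended flow (the helper names below are hypothetical; the
profile_idc values come from the spec):

```c
/* hypothetical sketch: pick the extension category from profile_idc first,
 * then search for the closest profile only within that category */
switch (ptl->profile_idc) {
  case 4:                       /* format range extensions (Rext) */
    profile = get_format_range_extension_profile (ptl);
    break;
  case 5:                       /* high throughput */
    profile = get_high_throughput_profile (ptl);
    break;
  case 9:                       /* screen content coding extensions */
    profile = get_screen_content_coding_profile (ptl);
    break;
  default:
    profile = GST_H265_PROFILE_INVALID;
    break;
}
```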
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1121>
FormatRangeExtensionProfile declares the common bits used not only for
format range extensions profiles, but also for several different
h265 extension profiles, such as high throughput, screen content
coding extensions, etc. So the old name is not proper.
We also rename the get_h265_extension_profile function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1121>
We should use the target ExtensionProfile's IDC to check the
profile_compatibility_flag, rather than the profile_idc in the
stream. The old profile_compatibility_flag check always returned
true. This caused profiles with the same constraint flags but
different profile_idc to not be recognized correctly. For example,
the screen-extended-main-444 profile was always recognized as
the high-throughput-444 profile.
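The corrected check is roughly the following (the candidate profile structure
and its field names are assumptions):

```c
/* index the compatibility flags with the candidate extension profile's IDC,
 * not with the profile_idc signalled in the stream */
if (candidate.profile_idc == ptl->profile_idc
    || ptl->profile_compatibility_flag[candidate.profile_idc]) {
  /* this candidate profile is a possible match for the stream */
}
```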
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1121>
In case of an IDR, any previously decoded pictures must be drained
before the IDR, and the POC of the IDR should be zero, so we can output
the IDR immediately. Also, when the POC of the current picture is the
expected next output POC, the decoder can output the picture as well
without waiting.
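Purely as an illustration (all names below are hypothetical, not the actual
decoder fields):

```c
/* hypothetical sketch of the early-output decision */
if (picture->idr) {
  /* everything decoded before the IDR has to be drained anyway, and the
   * IDR's POC is 0, so the IDR can be output right away */
  drain_dpb (decoder);
  output_picture (decoder, picture);
} else if (picture->pic_order_cnt == decoder->next_expected_output_poc) {
  /* the current picture is the next one in output order: no need to wait */
  output_picture (decoder, picture);
}
```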
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1211>
... to prevent requesting decoding before the preparation.
For instance, the base class should not request decoding of a picture if
no valid headers have been parsed, since the subclass is most likely
not ready to decode it.
The gst_h265_parser_parse_{vps,sps,pps} APIs were used to parse VPS/SPS/PPS and
also to update the parser's internal state at once. Meanwhile the
gst_h265_parse_{vps,sps,pps} APIs parse VPS/SPS/PPS without a state update.
This commit introduces new APIs so that only VPS/SPS/PPS accepted by the user
are updated and then used by the parser.
The gst_h264_parser_parse_{sps,pps} APIs were used to parse SPS/PPS and
also to update the parser's internal state at once. Meanwhile the
gst_h264_parse_{sps,pps} APIs parse SPS/PPS without a state update.
This commit introduces new APIs so that only SPS/PPS accepted by the user
are updated and then used by the parser.
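A sketch of the intended flow on the H264 side, assuming the new entry point
is named gst_h264_parser_update_sps() (the acceptance check is hypothetical):

```c
/* 'sps' was filled by the non-state-updating parse call, so the parser's
 * internal state is still untouched at this point */
if (subclass_accepts_sps (decoder, &sps)) {
  /* only an accepted SPS becomes the one used by the parser */
  gst_h264_parser_update_sps (parser, &sps);
}
```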
... and store all parsed values.
We store pic_struct_present_flag although it's not part of
this SEI message, but GstH264PicTiming includes it to clarify the
following syntax values.
In addition, by adding CpbDpbDelaysPresentFlag, we don't need to
refer to the VUI anymore.
As per the specification in A.3.1 h) and A.3.2 f), the maximum size of the DPB is
16. Fix the maximum accordingly and fix the formula to use MIN instead of MAX
so that we no longer always use the maximum for the profile/level.
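The corrected calculation, roughly (variable names are illustrative):

```c
/* A.3.1 h) / A.3.2 f): MaxDpbFrames = Min( MaxDpbMbs / ( PicWidthInMbs *
 * FrameHeightInMbs ), 16 ), i.e. the DPB can never exceed 16 frames */
max_dpb_frames = MIN (max_dpb_mbs / (width_mb * height_mb), 16);
```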
If decoding failed because end_picture() failed, set the picture to
nonexisting, this way output_picture() will be skipped. This avoids confusing
special cases in output_picture() implementation.
gsth265parser does it already. Although the corresponding API of h265parser is
gst_h265_sei_free, the _clear suffix is a more consistent naming for h264parser
since there are gst_h264_{sps,pps}_clear().
According to the following two specs, add support for AC-4 in tsdemux:
1. ETSI TS 103 190-2 V1.2.1 (2018-02): Annex D (normative): AC-4 in MPEG-2 transport streams
2. ETSI EN 300 468 V1.16.1 (2019-08): Annex D (normative): Service information implementation of AC-3, Enhanced AC-3, and AC-4 audio in DVB systems
That's the value of NumDeltaPocs[RefRpsIdx] and we might be able to derive
the value from the given SPS and slice header.
However, because well-known hardware implementations refer to the value,
storing it makes things easier.
The following is the list of hardware implementations:
* DXVA2: ucNumDeltaPocsOfRefRpsIdx
* NVDEC/VDPAU: NumDeltaPocsOfRefRpsIdx
See C.5.2.2, Output and removal of pictures from the DPB.
If the number of pictures in the DPB is greater than or equal to
sps_max_dec_pic_buffering_minus1[HighestTid] + 1, then a picture
should be output.
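In pseudo-C, with `dpb_len` and `highest_tid` standing in for the real decoder
state (the SPS field name as exposed by the parser is an assumption):

```c
/* C.5.2.2: bump (output) a picture once the DPB holds as many pictures
 * as the SPS allows for the highest temporal sub-layer */
if (dpb_len >= sps->max_dec_pic_buffering_minus1[highest_tid] + 1) {
  /* output the picture with the smallest POC among those needed for output */
}
```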
On new_segment, the decoder is expected to negotiate. The decoder may want to
pre-allocate the needed buffers. Pass the max_dpb_size as this is needed to
determine how many buffers should be allocated.
This introduces a library which contains a set of base classes that
handle the parsing and the state tracking for the purpose of decoding
different codecs. Currently H264, H265 and VP9 are supported. These
base classes are used to decode with low-level decoding APIs like DXVA,
NVDEC, VDPAU, VAAPI and V4L2 stateless decoders. The new library is
named gstreamer-codecs-1.0 / libgstcodecs.
Add a missing dependency on wl_client_dep for the wayland build. Some distros
don't install the wayland-client headers in /usr/include (which is perfectly
valid; the pkg-config .pc file provides the right information).
Instead of synchronising at the ICE transport, do clock sync for the
RTP stream at the DTLS transport via the dtlssrtpenc rtp-sync
property. This avoids delaying RTCP while waiting until it is time
to output an RTP packet when rtcp-mux is enabled.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1212
Posting any message to the parent window seems to be pointless and might
even break the parent window.
Regardless of the posting, the parent window can catch mouse events,
and any keyboard events will be handled by the parent window by default anyway.
It's currently the only sane way we can use MoltenVK functions to
integrate with Metal API.
It also removes the need to specify the VK_ICD_FILENAMES environment
variable pointing to MoltenVK_icd.json.
Includes a new GstVulkanHandlePool base class for pooling different
resources together. The descriptor cache object is ported to
GstVulkanHandlePool with the exact same functionality.
A new GstVulkanFenceCache is also implemented for caching fences
which is used internally by GstVulkanDevice for creating or reusing
fences.
The existing GstVulkanTrashFenceList object now caches trash objects.
Part 1 is a base class (vkvideofilter) that handles instance, device and
queue retrieval and holding; it has been moved to the library.
Part 2 is a fullscreen render quad that is still in the plugin and
performs all of the previous vulkan-specific functionality.
Weak refs don't quite work correctly here as there is always a race with
taking the lock between find_view() and remove_view(). If find_view()
returns a view that is going to be removed by remove_view(), then we have an
interesting situation.
In theory, the number and type of views for an image are relatively
constant and should not change once they've been set up, which means that
it is actually practical to perform pool-like reference counting here,
where the image holds a pool of different views that it can give out
as necessary.
It broke after the removal of the deprecated GTimeVal usage: the Unix-based
creation function takes seconds instead of microseconds.
The downside here is that it will create an extra object just to be
discarded in order to add the microsecond part to it.
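A sketch of what that looks like, assuming `usecs` holds microseconds since
the Unix epoch:

```c
/* seconds-only creation, then a second object just to carry the microseconds */
GDateTime *base = g_date_time_new_from_unix_utc (usecs / G_USEC_PER_SEC);
GDateTime *datetime = g_date_time_add (base, usecs % G_USEC_PER_SEC);

g_date_time_unref (base);       /* the extra object that gets discarded */
```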
It would end up segfaulting as it would return a NULL value
Add an exposure mode property, extra colour tone values (aqua, emboss, sketch,
neon) and extra scene modes (backlight, flowers, AR, HDR).
Add missing vmethods for exposure mode, analog gain, lens focus, colour
temperature, and min & max exposure time.
Contribs by Mohammed Sameer <msameer@foolab.org>, Adam Pigg <adam@piggz.co.uk>
The symbol visibility=hidden flag was only being applied to C
compilation, so plugins implemented in C++ would leak extra symbols
beyond the two _get_desc() and _register() ones.
That also showed that the gst-libs opencv C++ lib was not marking
symbols for export correctly because the BUILDING_GST_OPENCV define
wasn't in the C++ args, so fix that too.
The major functionality gain this provides is proper reference counting
for a descriptor set. Overall this allows us to create descriptor sets
when they are needed (or reused from a cache) without violating any of
vulkan's object synchronisation requirements.
As there are a fixed number of sets available in a pool, the number of
descriptors in elements is currently hardcoded to 32. This can be extended
in a future change to create pools on the fly if that limit is ever overrun.
Allows a cleaner control flow when there is no fence available for use
with the trash list. An always signalled fence type will always return
TRUE for gst_vulkan_fence_is_signalled.
This serves two purposes:
1. refcounting of vulkan handles with associated destruction. When
combined with the trash list, the user can ensure destruction at
the correct time according to the vulkan rules.
2. avoids polluting our API with 32-bit vs 64-bit integer/pointer
differences as exposed through the vulkan API. On 32-bit, vulkan
non-dispatchable handles are 64-bit integers and on 64-bit, they
are pointers.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
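For instance, a signal defined along these lines (names illustrative) lets
GLib install both the C marshaller and its valist variant internally:

```c
guint signal_id = g_signal_new ("changed", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    NULL /* marshaller: GLib picks g_cclosure_marshal_VOID__INT + va variant */,
    G_TYPE_NONE, 1, G_TYPE_INT);
```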
This adds two properties:
* scte-35-pid: If not 0, enables the SCTE-35 support for the current
program. This will write the proper PMT and send SCTE-35 NULL
commands (i.e. heartbeats) at a regular interval
* scte-35-null-interval: This specifies the interval at which the
NULL commands should be sent
Sending SCTE-35 commands is done by creating the appropriate SCTE-35
GstMpegtsSection and then sending it to the muxer. See the
associated example.
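Roughly, and assuming the gst_mpegts_scte_splice_in_new() /
gst_mpegts_section_from_scte_sit() / gst_mpegts_section_send_event() helpers
(see the associated example for the exact API):

```c
/* build a splice-in SIT, wrap it in a section on the SCTE-35 PID and
 * push it to the muxer as an event (cleanup omitted for brevity) */
GstMpegtsSCTESIT *sit = gst_mpegts_scte_splice_in_new (event_id, splice_time);
GstMpegtsSection *section = gst_mpegts_section_from_scte_sit (sit, scte_35_pid);

gst_mpegts_section_send_event (section, muxer);
```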
Commit 8a56a7de6d (camerabin2: preview:
Appsink doesn't need to sync) added a line that sets the "sync" property on
the appsink. However, the author seems to have forgotten that there's another
property setting on the appsink a few lines below.
It's very likely that the added line is required because the original
line doesn't take effect (maybe because it's too late). But for whatever
reason, the original line is now redundant. So, remove it in this
commit.
GstTranscoder adds extra_args for the gir which call gst_init() during
introspection. These extra arguments are the same as the standard
ones defined in the top-level meson.build as "gir_init_section".
However, the top-level definition also ensures an empty plugin
repository is used.
Because GstTranscoder does not use the standard args, plugins get
loaded when it is introspected. Since some of the plugins fail
without specific hardware, this causes #1100.
This patch makes it use gir_init_section.
Fixes #1100.
Some hardware decoders, for example Hantro G1, have to be told the
size of the pic_order_cnt related syntax elements pic_order_cnt_lsb,
delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and
delta_pic_order_cnt[1] in bits.
Some hardware decoders, for example Hantro G1, have to be told the size
of the dec_ref_pic_marking() syntax element in bits. Record the size so
it can be passed on to the hardware.
The calculated size of short_term_ref_pic_set is not part of the
HEVC syntax, but the value is used by some stateless decoders
(e.g., vaapi, dxva, vdpau and nvdec) so that the accelerator can skip
parsing the syntax itself.
Add num_ref_idx_active_override_flag and sp_for_switch_flag to the
members of GstH264SliceHdr. There is no reason to hide them, and
some decoder implementations (e.g., DXVA) rely on externally parsed header
data which can be provided by h264parser.
* Fix meson build script for Windows. Since the Vulkan dependency
object was declared by us in the Windows case, the dependency object
shouldn't be used for finding headers.
* Fix build error by including the Windows-specific header
gstvkdisplay.c(563): error C2065: 'VK_KHR_WIN32_SURFACE_EXTENSION_NAME': undeclared identifier
1. The iOS create_surface implementation needs to call out to the main
thread to create the window (UIKit requirement)
2. get_surface() can be called and will attempt to create the VkSurface
from an invalid view/layer.
Also pass the layer for MoltenVK so we don't get pesky 'UIView function
not called on main thread' warnings.
Fix the gst_event_new_seek call in gst-libs/gst/player/gstplayer.c.
If rate >= 0.0, the previous code didn't set the end of the segment, so the end
of the segment stayed wherever the previous seek put it. This is not necessarily
the end of the media file (in case of reverse playback). So if we play a video
backwards for some time and then switch to forward playback, we get EOS somewhere
in the middle of the media file.
This commit always sets the end of the segment, thus fixing this bug.
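The fix boils down to always passing an explicit stop position, along these
lines (a sketch, not the verbatim patch):

```c
GstEvent *s_event;

if (rate >= 0.0) {
  /* forward: start at the requested position, stop at the end of the file */
  s_event = gst_event_new_seek (rate, GST_FORMAT_TIME, flags,
      GST_SEEK_TYPE_SET, position, GST_SEEK_TYPE_SET, GST_CLOCK_TIME_NONE);
} else {
  /* backward: start at 0, stop at the requested position */
  s_event = gst_event_new_seek (rate, GST_FORMAT_TIME, flags,
      GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_SET, position);
}
```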
During resizes, VkQueuePresent can return OUT_OF_DATE, and if a buffer
is displayed while OUT_OF_DATE is returned, it would error out and stop the
pipeline. We already have an explicit check for OUT_OF_DATE and the same general
error check in the statements following, so just use that code.
NULL checking the main_context does not help as we've just destroyed the
GMainContext and set that field to NULL, not to mention it's unnecessary.
Fixes a leak of display's GSource.