We were unconditionally adding top-level descriptors to the PMT which
were only related to Blu-ray support for the PS3 (from 10 years ago).
These should be re-added conditionally.
This went unnoticed for 6 years :( The issue is that for short
sections (without subtables and CRC), we would always fail the check
for whether we had enough data and then fall back to the long section
checking.
Using the long section checking would then cause interesting side
effects for short sections (such as believing they were already seen,
and therefore dropping/ignoring them).
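A minimal sketch of the decision, using a hypothetical helper (the
actual section parsing code is structured differently): a short
section is detected from section_syntax_indicator and its completeness
is judged from its own section_length, instead of failing the check
and falling through to the long-section path.

    #include <glib.h>

    static gboolean
    have_complete_short_section (const guint8 * data, gsize avail)
    {
      guint16 section_length;

      if (avail < 3)
        return FALSE;           /* not enough data to read the header yet */

      /* bit 7 of byte 1 is section_syntax_indicator: 0 means a short
       * section, i.e. no subtable ids and no trailing CRC_32 */
      if (data[1] & 0x80)
        return FALSE;           /* long section, handled elsewhere */

      section_length = ((data[1] & 0x0f) << 8) | data[2];
      return avail >= (gsize) (3 + section_length);
    }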
Instead of using a proxy for the `is_packetized` flag, this patch uses
the accessor for that flag in the decoder base class, avoiding
possible mismatches.
Commit 55c0d720 added the capability to handle a non-packetized
bitstream, and there is a loop in gst_msdkdec_handle_frame to handle
multiple frames in a non-packetized buffer. However, it is possible
that a non-packetized buffer still contains valid data while there is
no longer any pending unfinished frame. Currently
gst_video_decoder_decode_frame is invoked to send a new frame with new
input data, and the situation repeats until an EOS is received. An
application has to exit when receiving an EOS, yet there is still
valid data in the non-packetized input buffer, hence some frames are
dropped.
This fix adds a parse callback for non-packetized input, so that a new
frame is sent to the subclass as soon as the input buffer has valid
data.
This fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/665
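A rough sketch of such a parse callback (illustrative only, not the
actual msdkdec code): it hands all pending bytes in the adapter to the
current frame and signals a complete frame, so the base class keeps
calling handle_frame while input remains instead of waiting for EOS.

    #include <gst/video/video.h>
    #include <gst/base/gstadapter.h>

    static GstFlowReturn
    my_dec_parse (GstVideoDecoder * decoder, GstVideoCodecFrame * frame,
        GstAdapter * adapter, gboolean at_eos)
    {
      gsize avail = gst_adapter_available (adapter);

      if (avail == 0)
        return GST_FLOW_OK;

      /* attach all pending bytes to the current frame ... */
      gst_video_decoder_add_to_frame (decoder, (int) avail);
      /* ... and let handle_frame consume them right away */
      return gst_video_decoder_have_frame (decoder);
    }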
And only update the caps and stream-start event accordingly. This
ensures that we'll always forward sticky events that arrive after the
caption pad was created, and especially updates to existing sticky
events like the segment event.
Also create a proper stream id based on the upstream stream id for the
stream-start event, and make sure that all the sticky events we know are
already on the caption pad at the time it is added to the element.
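An illustrative sketch of the idea (not the element's actual code):
the sticky events already present on the video sinkpad are copied onto
the new caption pad before it is added, with stream-start and caps
handled separately (derived stream id, caption caps).

    #include <gst/gst.h>

    static gboolean
    copy_sticky_event (GstPad * sinkpad, GstEvent ** event, gpointer user_data)
    {
      GstPad *caption_pad = user_data;

      switch (GST_EVENT_TYPE (*event)) {
        case GST_EVENT_STREAM_START:
        case GST_EVENT_CAPS:
          /* rewritten separately with the derived stream id / caption caps */
          break;
        default:
          gst_pad_store_sticky_event (caption_pad, *event);
          break;
      }

      return TRUE;              /* continue with the next sticky event */
    }

    /* usage: gst_pad_sticky_events_foreach (video_sinkpad,
     *            copy_sticky_event, caption_srcpad); */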
clamp-to-border will return the border color, which is typically
black, white or transparent. When linear filtering, the edge pixels
will typically be combined with the border color, which is usually not
what we want. This is especially visible when color converting: the
change removes a green box around the edge when converting YUV->RGB.
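For reference, the wrap mode this refers to, shown with plain desktop
GL calls (the GStreamer GL code goes through its own function table):
GL_CLAMP_TO_EDGE makes linear filtering repeat the edge texel rather
than blend with the border color.

    #include <GL/gl.h>

    static void
    set_clamp_to_edge (GLuint tex)
    {
      glBindTexture (GL_TEXTURE_2D, tex);
      /* repeat the edge texel instead of sampling the border color */
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }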
This fixes a regression from commit "srtp: Support libsrtp2"
(e9aa117200), where an internal
set of ssrc(s) was added because libsrtp v2 keeps its
internal streams private. But the change prevented
ssrc(s) that are not in the caps from being added to the stats.
This patch ensures that all ssrc(s) are inserted into this set
instead of only those from the caps.
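A sketch of the bookkeeping, with illustrative names (the element's
real fields differ): every processed ssrc is remembered in our own
set, and the stats are then built from that set.

    #include <glib.h>

    static void
    remember_ssrc (GHashTable * seen_ssrcs, guint32 ssrc)
    {
      /* the hash table acts as a set of ssrcs; created with
       * g_hash_table_new (g_direct_hash, g_direct_equal) */
      g_hash_table_add (seen_ssrcs, GUINT_TO_POINTER (ssrc));
    }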
If the mutex is locked while running frameComplete, there is a potential
deadlock when we get new exported images from the backend.
Fixes #1101
This allows selecting whether we continue updating our last known
upstream timecode whenever a new one arrives, or instead only keep the
last known one and count up from there.
This uses the last known upstream timecode (counted up per frame), or
otherwise zero if none was known.
The normal last-known timestamp uses the internal timecode as fallback
if no upstream timecode was ever known.
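A small sketch of the count-up fallback (illustrative, not the
element's actual code), using the GstVideoTimeCode API:

    #include <gst/video/gstvideotimecode.h>

    static void
    advance_last_timecode (GstVideoTimeCode * last_tc)
    {
      /* when no new upstream timecode is taken over, keep counting the
       * last known one up by one frame per output frame */
      if (last_tc != NULL && gst_video_time_code_is_valid (last_tc))
        gst_video_time_code_increment_frame (last_tc);
    }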
Commit 8a56a7de6d (camerabin2: preview:
Appsink doesn't need to sync) added a line that sets the "sync" property
on the appsink. However, the author seems to have missed that there is
another property setting on the appsink a few lines below.
It's very likely that the added line is required because the original
line doesn't take effect (maybe because it's too late). But for whatever
reason, the original line is now redundant, so remove it in this
commit.
GstTranscoder adds extra_args for gir which call gst_init() during
introspection. These extra arguments are the same as the standard
ones defined in the top-level meson.build as "gir_init_section".
However, the top-level definition also ensures an empty plugin
repository is used.
Because GstTranscoder does not use the standard args, plugins get
loaded when it is introspected. Since some of the plugins fail
without specific hardware, this causes #1100.
This patch makes it use gir_init_section.
Fixes #1100.
This can be made to work in certain circumstances when
cross-compiling, so default to not building g-i stuff
when cross-compiling, but allow it if introspection was
enabled explicitly via -Dintrospection=enabled.
See gstreamer/gstreamer#454 and gstreamer/gstreamer#381.
Upon bitrate change, make sure to close the encoder, otherwise it is
not re-initialized, the target bitrate is never reached, and the
encoder is flushed at each frame from that point on.
Regression introduced in f2b35abcab, which replaced the call
that was closing the encoder with an early return to avoid
re-initialization.
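A minimal sketch of the intended flow, with hypothetical names (the
real element keeps more state): a changed bitrate closes and re-opens
the encoder so rate control picks up the new value, instead of
returning early.

    #include <glib.h>

    typedef struct { guint bitrate; gboolean opened; } Enc;   /* illustrative */

    static void enc_close (Enc * enc) { enc->opened = FALSE; }
    static gboolean enc_open (Enc * enc) { enc->opened = TRUE; return TRUE; }

    static gboolean
    enc_set_bitrate (Enc * enc, guint new_bitrate)
    {
      if (enc->bitrate == new_bitrate)
        return TRUE;            /* unchanged: nothing to reconfigure */

      enc->bitrate = new_bitrate;
      enc_close (enc);          /* the step the regression skipped */
      return enc_open (enc);    /* re-init with the new target bitrate */
    }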
gstwasapiutil.c(173) : warning C4715: 'gst_wasapi_device_role_to_erole': not all control paths return a value
gstwasapiutil.c(188) : warning C4715: 'gst_wasapi_erole_to_device_role': not all control paths return a value
GstDeviceProvider isn't a subclass of GstElement.
(gst-device-monitor-1.0:49356): GLib-GObject-WARNING **: 20:21:18.651:
invalid cast from 'GstWasapiDeviceProvider' to 'GstElement'
Some hardware decoders, for example Hantro G1, have to be told the
size of the pic_order_cnt related syntax elements pic_order_cnt_lsb,
delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and
delta_pic_order_cnt[1] in bits.
Some hardware decoders, for example Hantro G1, have to be told the size
of the dec_ref_pic_marking() syntax element in bits. Record the size so
it can be passed on to the hardware.
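A generic sketch of that bookkeeping (the h264parser uses its own
internal NAL reader; GstBitReader is used here only for illustration):
note the bit position before and after the syntax element and store
the difference.

    #include <gst/base/gstbitreader.h>

    static guint
    measure_element_in_bits (GstBitReader * br)
    {
      guint start = gst_bit_reader_get_pos (br);

      /* ... parse dec_ref_pic_marking () / the pic_order_cnt fields ... */

      return gst_bit_reader_get_pos (br) - start;   /* size in bits */
    }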
The calculated size of short_term_ref_pic_set is not a part of the
HEVC syntax, but the value is used by some stateless decoders
(e.g., vaapi, dxva, vdpau and nvdec) so that the accelerator can skip
parsing that syntax.
Add num_ref_idx_active_override_flag and sp_for_switch_flag as
members of GstH264SliceHdr. There is no reason to hide them, and
some decoder implementations (e.g., DXVA) rely on externally parsed header
data which can be provided by h264parser.
The first channel in memory for MFX_FOURCC_RGB4 (VA_FOURCC_ARGB or
GST_VIDEO_FORMAT_BGRA) is B, not A. In MSDK, channel B is used to access
data for an RGB4 surface. In addition, the returned pointers for
MFX_FOURCC_AYUV and MFX_FOURCC_Y410 in gst_msdk_video_memory_map_full
were also wrong before this fix.
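A sketch of the RGB4 part of the mapping (based on the public
mfxFrameData layout; the surrounding msdk code is more involved): the
base pointer has to come from the B channel, which has the lowest
address in the B, G, R, A layout.

    #include <mfxvideo.h>

    static mfxU8 *
    rgb4_base_ptr (const mfxFrameData * data)
    {
      /* for MFX_FOURCC_RGB4 the bytes are laid out B, G, R, A in memory */
      return data->B;
    }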
When the bitrate is changed in PLAYING state, the encoder issues a reconfig
that drains and recreates the underlying HW encoder instance.
With this set of changes we ensure that all this work is only done when
the bitrate actually changed. It also tries to reuse the vpp buffer
pool and fixes the pool leak spotted when testing this feature.
hlssink2 defined "max-files" property to decide the maximum number
of fragments which should be stored in disk. But we've not used
the property. Instead, the size has been maintained by "playlist-length".
Since "max-files" and "playlist-length" have different meaning,
the decision should be done by "max-files" property.
For example, an user might want expose only 3 fragments via playlist
but might want to keep more files than 3 in disk.
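Usage sketch of the distinction: keep up to 10 fragments on disk while
only advertising the most recent 3 in the playlist.

    #include <gst/gst.h>

    static void
    configure_hlssink2 (GstElement * hlssink2)
    {
      g_object_set (hlssink2, "max-files", 10, "playlist-length", 3, NULL);
    }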