Add properties to control input cropping in the V4L2 device.
The input cropping is applied before composing the result into the
capture buffer. By default the capture size is set to the same size as
the crop region, but it can be scaled to a different output frame size
if the V4L2 device supports scaling. If scaling is not supported, the
cropped image is composed as-is into the top-left corner of the
capture buffer.
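At the V4L2 level, applying the crop region roughly corresponds to a
selection ioctl like the sketch below (placeholder variable names, not
the actual element code):

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: apply the crop rectangle coming from the new properties.
     * 'fd' is the open V4L2 device node. */
    static int
    apply_input_crop (int fd, int left, int top, int width, int height)
    {
      struct v4l2_selection sel = { 0 };

      sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      sel.target = V4L2_SEL_TGT_CROP;
      sel.r.left = left;
      sel.r.top = top;
      sel.r.width = width;
      sel.r.height = height;

      /* The driver may adjust the rectangle to hardware constraints. */
      return ioctl (fd, VIDIOC_S_SELECTION, &sel);
    }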
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
Get the current crop bounding region from the V4L2 device so
that it can be provided to applications and used to validate
crop settings. Also make the default crop region available so
that it can be used to reset the crop when appropriate.
Uses the selection API when available, falling back to the crop API
for older kernels.
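As a rough sketch (capture use case only, simplified error handling),
the lookup with its fallback can be thought of as:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: query the crop bounds, falling back to VIDIOC_CROPCAP on
     * kernels without the selection API. */
    static int
    get_crop_bounds (int fd, struct v4l2_rect *bounds)
    {
      struct v4l2_selection sel = { 0 };
      struct v4l2_cropcap cropcap = { 0 };

      sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      sel.target = V4L2_SEL_TGT_CROP_BOUNDS;
      if (ioctl (fd, VIDIOC_G_SELECTION, &sel) == 0) {
        *bounds = sel.r;
        return 0;
      }
      if (errno != ENOTTY)
        return -1;

      /* Older kernel: fall back to the crop API. */
      cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      if (ioctl (fd, VIDIOC_CROPCAP, &cropcap) < 0)
        return -1;
      *bounds = cropcap.bounds;
      return 0;
    }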
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
gst_v4l2_object_set_crop() is used for removing buffer alignment
padding. Give it a name that better reflects that usage. This helps to
distinguish it from cropping of the input image (e.g. cropping at the
image sensor on a capture device), which can be unrelated to the
memory buffer padding, especially if scaling is involved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
The default query handler would go through typefind, which by default
accepts any CAPS. But once configured, parsebin can't reconfigure
itself; it should therefore pass the ACCEPT_CAPS query through to the
first element after typefind (if any).
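Conceptually the pass-through amounts to something like this (a hedged
sketch, not the exact parsebin code):

    #include <gst/gst.h>

    /* Sketch: instead of letting typefind answer ACCEPT_CAPS (it
     * accepts anything), forward the query to whatever is linked
     * after it. */
    static gboolean
    forward_accept_caps (GstElement * typefind, GstQuery * query)
    {
      GstPad *srcpad = gst_element_get_static_pad (typefind, "src");
      GstPad *peer = srcpad ? gst_pad_get_peer (srcpad) : NULL;
      gboolean res = FALSE;

      if (peer) {
        res = gst_pad_query (peer, query);
        gst_object_unref (peer);
      }
      if (srcpad)
        gst_object_unref (srcpad);
      return res;
    }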
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Don't reconfigure outputs when the select-streams event is sent from
the app, as the selection may not take effect for some time. Instead,
wait for the pipeline to confirm the new set of selected streams when
it posts the streams-selected message.
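From the application side the expected sequence is roughly the
following (sketch, assuming a playbin3-style pipeline and the GstBus
"message" signal):

    #include <gst/gst.h>

    /* Sketch: request a new selection ... */
    static void
    request_selection (GstElement * pipeline, GList * selected_stream_ids)
    {
      gst_element_send_event (pipeline,
          gst_event_new_select_streams (selected_stream_ids));
    }

    /* ... and only act on it once the pipeline confirms it. */
    static void
    on_message (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED) {
        /* The new selection is now in effect. */
      }
    }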
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
If we previously had subtitles coming in, the video
may be chained through a text overlay block. Before,
the code would end up trying to link pads that were
already linked and video would not get reconnected
properly.
To fix that, make sure that the candidate
pads are actually unlinked first. If a textoverlay
is present and no longer needed, it will be cleaned
up later in the reconfiguration sequence.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Requesting a new pad can start a reconfiguration cycle, where
playsink will block all input pads and wait for data on them
before doing internal reconfiguration. If a pad is released,
that reconfiguration might never trigger because it's now waiting
for a pad that doesn't exist any more.
In that case, complete the reconfiguration on pad release.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1180>
Make it possible to configure the element to obtain the timestamps
from reference timestamp metadata instead of using the ntp-offset
property or estimating its own offset. Currently the only time format
supported is "timestamp/x-unix", i.e. UTC time expressed in the UNIX
epoch.
In addition, the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
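For reference, an upstream element would provide such timestamps
roughly like this (a minimal sketch):

    #include <gst/gst.h>

    /* Sketch: attach a UTC timestamp (UNIX epoch) to a buffer so the
     * element can use it instead of the ntp-offset property. */
    static void
    add_unix_timestamp (GstBuffer * buffer, GstClockTime utc_time)
    {
      GstCaps *ref = gst_caps_new_empty_simple ("timestamp/x-unix");

      gst_buffer_add_reference_timestamp_meta (buffer, ref, utc_time,
          GST_CLOCK_TIME_NONE);
      gst_caps_unref (ref);
    }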
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
This patch fixes a segfault in gst_structure_new(), which was
accompanied by warnings like the ones below.
GLib-GObject-WARNING **: ../gobject/gtype.c:4330: type id '0' is invalid
GLib-GObject-WARNING **: can't peek value table for type '<invalid>' which is not currently referenced
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1918>
On GstVideoDecoder::{drain,flush}, we send a null packet with the
CUVID_PKT_ENDOFSTREAM flag to drain out the decoder, which also resets
the CUVID parser.
To continue decoding after the drain, the next input buffer should
include the sequence headers, otherwise the CUVID parser will not
report any decodable frame.
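The drain packet is essentially the following (sketch, using the CUVID
parser API directly):

    #include <nvcuvid.h>

    /* Sketch: push an empty end-of-stream packet so the decoder
     * flushes all pending frames; note this also resets the parser. */
    static CUresult
    send_eos_packet (CUvideoparser parser)
    {
      CUVIDSOURCEDATAPACKET packet = { 0 };

      packet.flags = CUVID_PKT_ENDOFSTREAM;
      packet.payload_size = 0;
      packet.payload = NULL;

      return cuvidParseVideoData (parser, &packet);
    }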
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1911>
There could be a case where the new program has the same program
number as the previous one ... but is actually located on a PID
previously used for an elementary stream. In that case the program is
guaranteed not to be an update of the previous program but a
completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look up programs by their PID as well. Using a
hash table was a bit sub-par (and overkill) for storing a range of
programs.
This is needed because there could potentially be two programs with
the same program id but different PMT PIDs (while one is being
deactivated, the new one would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
This commit modifies the interleave calculation to allow growing when incoming
data is before the segment start.
The rationale is that there is no requirement whatsoever for data before the
segment start to be "coherent" on all streams.
For example, a demuxer could rightfully send data from the video stream from the
previous keyframe (potentially quite a bit before the segment start) and the
audio from just before the segment start.
This will activate the same logic as growing the interleave when some streams
haven't received buffers yet.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
* When a stream receives EOS, it will no longer change, so we shouldn't
take that stream into account for the interleave calculation.
* When streams (re)appear, we do not want to grow the initial interleave
to an excessive value. Instead of setting it to a default of 5s,
progressively grow it to that maximum.
* When the status of the input streams changes (i.e. going to/from "some
haven't received data yet" and "all have received data"), update the
interleave immediately instead of waiting for (potentially) 5s of data
before updating it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
The point here is that rtpsession will create a new rtpsource whenever
the "rtx-ssrc" field is present. When not doing rtx, that means a
random ssrc will create a new rtpsource that gets included in the RTCP
messages for the current session.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1882>
Instead of using a GstMiniObject to hold the H264 frame, a plain
structure is now used. Also, instead of holding a reference to the
GstVideoCodecFrame, the H264 frame structure is set as the
GstVideoCodecFrame's user data.
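In practice the frame is attached and looked up roughly like this
('MyH264Frame' is a placeholder name for the plain structure):

    #include <gst/video/gstvideoutils.h>

    typedef struct
    {
      /* ... per-frame decoding state ... */
      gint poc;
    } MyH264Frame;

    /* Sketch: store the plain struct as user data on the codec frame,
     * letting the codec frame own and free it. */
    static void
    attach_h264_frame (GstVideoCodecFrame * frame, MyH264Frame * h264_frame)
    {
      gst_video_codec_frame_set_user_data (frame, h264_frame,
          (GDestroyNotify) g_free);
    }

    static MyH264Frame *
    lookup_h264_frame (GstVideoCodecFrame * frame)
    {
      return gst_video_codec_frame_get_user_data (frame);
    }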
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1856>
According to va_dec_hevc.h, pic_param->st_rps_bits should be set for
the accelerator to skip parsing the *short_term_ref_pic_set
(num_short_term_ref_pic_sets) structure.
Also modified fill_picture() to take the parser info as a parameter,
in order to get slice_hdr->short_term_ref_pic_set_size.
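The relevant part boils down to something like the following (sketch,
assuming the field names from va_dec_hevc.h and GstH265SliceHdr):

    #include <va/va_dec_hevc.h>
    #include <gst/codecparsers/gsth265parser.h>

    /* Sketch: tell the accelerator how many bits of the slice header
     * it can skip for the short-term reference picture set. */
    static void
    fill_st_rps_bits (VAPictureParameterBufferHEVC * pic_param,
        const GstH265SliceHdr * slice_hdr)
    {
      pic_param->st_rps_bits = slice_hdr->short_term_ref_pic_set_size;
    }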
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1886>
Fix a small race where a group can receive stream-start
and post a pending buffering message just as another
thread posts a different buffering message, causing them
to be received by the application out of order. In the
worst case, this leads to the application receiving a stale 99%
buffering message right after the 100% buffering message and going
back to buffering.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1840>
Apparently GtkSharp expects each object to have only one ToggleRef at
any time. Assigning element.Handle to Raw causes a second ToggleRef to
be created; the attempt fails on g_object_unref(), which breaks a
GObject assertion:
toggle_refs_notify: assertion failed: (tstack.n_toggle_refs == 1)
This is because toggle references should be removed with
g_object_remove_toggle_ref(), not a simple unref().
In order to avoid duplicate toggle references, introduce
ElementFactory.MakeRaw(), which creates a GstElement without its
accompanying C# object. The returned raw pointer can be assigned into
another GLib.Object without trouble.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1885>
Use the output segment position for the outgoing timestamp while
`last_ts` is still unset. This is needed to delay the calculation of
`output_ts_offset` until we actually have a usable timestamp, as tsmux
will output a few initial packets before `last_ts` is set.
Without this, the calculation would use the initial `0` value, which did
not have the intended effect of making VBR mode behave like CBR mode,
but always calculated an offset equal to the selected start time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1884>