The RTCP SR packet might come without SDES in the case of a reduced-size RTCP
packet. For syncing purposes the CNAME is needed, but it might already be known
from an earlier RTCP packet or out of band, via the SDP for example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
The format of the caps fields is:
ssrc-(SSRC_VALUE)-(ATTRIBUTE_NAME)=(ATTRIBUTE_VALUE)
Parsing of the attributes from the caps into the SDP is not implemented,
as this depends not only on a single stream's caps but on the whole rtpbin
configuration.
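For illustration, such a field could be set on application/x-rtp caps as in
the following minimal sketch (the SSRC and CNAME values are made up):

    #include <gst/gst.h>

    /* Sketch: attach an SDES attribute for SSRC 445566 to RTP caps,
     * following the ssrc-(SSRC_VALUE)-(ATTRIBUTE_NAME) convention.
     * The SSRC and CNAME values are purely illustrative. */
    static GstCaps *
    make_example_rtp_caps (void)
    {
      GstCaps *caps = gst_caps_new_simple ("application/x-rtp",
          "media", G_TYPE_STRING, "audio",
          "payload", G_TYPE_INT, 96,
          NULL);

      gst_caps_set_simple (caps, "ssrc-445566-cname",
          G_TYPE_STRING, "user@example.com", NULL);

      return caps;
    }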
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
Found via a Clang-analyzed build. Specifically we had:
gstav1parse.c[1850,11] in gst_av1_parse_detect_stream_format: Logic error: The left operand of '==' is a garbage value
gstav1parse.c[1606,11] in gst_av1_parse_handle_to_small_and_equal_align: Logic error: The left operand of '==' is a garbage value
Also a couple of false-positives:
gstav1parse.c[1398,24] in gst_av1_parse_handle_one_obu: Logic error: Branch condition evaluates to a garbage value
gstav1parse.c[1440,37] in gst_av1_parse_handle_one_obu: Logic error: The left operand of '-' is a garbage value
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2230>
GLib guarantees libintl is always present, using proxy-libintl as a
last resort. There is no need to mock the gettext API any more.
This fixes the static build on Windows, because G_INTL_STATIC_COMPILATION must
be defined before including libintl.h, and glib does it for us as part
of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
If a file contains a new version of a plugin that already exists in the
registry, the output of gst-inspect is incorrect. The output has the
correct version but an incorrect filename and element description.
This seems to have also fixed some documentation issues.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1344>
Our decoder implementation does not use the downstream d3d11 pool for
decoding because of the special requirements of D3D11/DXVA. So preallocation
using the downstream buffer pool would waste GPU memory in most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2211>
This provides new HLS, DASH and MSS adaptive demuxer elements as a single plugin.
These elements offer many improvements over the legacy elements. They will only
work within a streams-aware context (`urisourcebin`, `uridecodebin3`,
`decodebin3`, `playbin3`, ...).
Stream selection and buffering are handled internally, which allows them to
directly manage the elementary streams and stream selection.
Authors:
* Edward Hervey <edward@centricular.com>
* Jan Schmidt <jan@centricular.com>
* Piotrek Brzeziński <piotr@centricular.com>
* Tim-Philipp Müller <tim@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2117>
This reverts commit 652773de36 and
modifies it to rename the caps field name to coded-picture-structure.
It was previously removed because it confuses the decoder and we didn't
have a valid use case for including it in the encoded caps at this
stage. We now do have such a use case but still don't want to confuse
the decoder, so the field is renamed.
However, it is still not accurate without looking at the SEI picture
structure of each frame, which is why it was named coded-picture-structure. If
its value is "frame" the content is most likely progressive; if it is "field"
it is most likely interlaced or mixed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2177>
get_merged_collection() returns an owned stream collection, which was
leaked in the else block.
Fix leak when running:
GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7,leaks:6" gst-play-1.0 --use-playbin3 test.mkv
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/954>
Make sure that the requested stream selection isn't identical to the current
one. If that's the case, just carry on as usual.
This avoids multiple `streams-selected` messages being posted when the
selection didn't change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2185>
* glimagesink is not recommended on Windows
* Remove the directdrawsink section
* d3dvideosink is legacy and should not be recommended
* Add a d3d11videosink section
* directsoundsink should be deprecated
* Add a wasapisink/wasapi2sink section
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2144>
The current way names the level by the number of B frames it contains: the
fewer it contains, the higher the level. So the non-ref B frames are in the
lowest layer and the B frames in the highest level refer to I/P frames.
But the widely used convention is just the opposite: the ref B frames are in
the lower levels and the non-ref B frames are at the highest level.
This is just a terminology change and does not have any effect on the
compression result or quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2149>
It doesn't matter for measurement purposes whether receiving them takes a
while, and various PTP servers do not prioritize sending them, causing them
to be dropped unnecessarily and preventing proper synchronization with such
servers.
This is especially a problem if the RTTs in the network are very low
compared to the additional delay imposed by the server.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2161>
timeapi.h is missing in our MinGW toolchain. Include the mmsystem.h
header instead, which defines the struct and APIs in the case of our MinGW
toolchain. Note that with the native Windows 10 SDK (MSVC build),
mmsystem.h will include timeapi.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2153>
In case of re-syncing (i.e. moving to another partition to avoid too much of an
interleave), there were previously no checks to figure out whether a given
partition was already fully handled (i.e. when coming across it again after a
previous resync).
In order to handle this at least for single-track partitions, check whether we
have reached the essence track duration, and if so skip the partition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
The essence track position should only be overridden if we successfully
switched to another position. In case of EOS we do not want to override it,
otherwise we would increase the track position *again* at the end of this
function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
This field is used by DXVA/NVDEC/VA, and each specification
describes (NVDEC is not well documented) that it's the number of
bits used in short_term_ref_pic_set().
DXVA doesn't explicitly mention whether the size of the
emulation prevention bytes (EPB) is inclusive or not, but
VA clearly specifies that it's the size after removing the
EPB. Excluding the EPB size here therefore makes more sense.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1930>
The documentation could be read to mean that the caller continues to
'own' the buffer, and that there is some other mechanism to find out
when to unref it.
Clarify that "not taking ownership" here means "taking a reference",
and specify that you can unref it at any time after calling the
function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2110>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order in dependencies matters, because it is also the order
in which Meson will set -I args. We want gstreamer's config.h to take
precedence over glib's private config.h when it's a subproject.
While at it, remove the useless fallback args for the gmodule/gio
dependencies; only GStreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
When building for Android, chances are that gstreamer is going to be
loaded from Java using System.loadLibrary(). In that case we can
initialize GStreamer (including static plugins), redirect log functions,
etc.
This code is copied from cerbero because it can be used with
gstreamer-full-1.0 too. Cerbero needs to be adapted to drop that code
and generate the gst_init_static_plugins() function.
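A minimal sketch of what such a JNI entry point could look like (this is an
illustration under those assumptions, not the code added here;
gst_init_static_plugins() stands for the generated registration function, and
log redirection and error handling are omitted):

    #include <jni.h>
    #include <gst/gst.h>

    /* Generated elsewhere (e.g. for gstreamer-full); declared here only for
     * the sake of the sketch. */
    void gst_init_static_plugins (void);

    /* Called automatically by the JVM when the Java side does
     * System.loadLibrary() on the GStreamer library. */
    JNIEXPORT jint JNICALL
    JNI_OnLoad (JavaVM * vm, void *reserved)
    {
      gst_init (NULL, NULL);        /* initialize GStreamer itself */
      gst_init_static_plugins ();   /* register the statically linked plugins */
      return JNI_VERSION_1_6;
    }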
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/617>
Added the GstVaFeature enum type, and a new parameter for the VA allocator's
set_format() and get_format(). Also added a new parameter to
gst_va_pool_new_with_config() and
gst_buffer_pool_config_set_va_allocation_params().
This new parameter defines whether derived images will be used for
buffer mapping.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2057>
Expose the vendor description for user information, similar to
the description property of d3d11device.
Also, set the description and DRM device path on the GstContext structure
so that the user can read them; they will also be printed in the terminal
when gst-launch-1.0 is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2064>
The console HANDLE will stay in the signalled state unless the application
reads the console input buffer immediately. So we should read and flush the
console input buffer from the thread where the event is signalled,
instead of the GMain context thread.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2058>
Configure playsink's tried element with the bus of the main pipeline.
That tried element can be a GL video sink, which would benefit from being
able to propagate context messages to the main pipeline and have other
internal pipeline elements configured with it. Having different elements
configured with the same GL context allows them to share buffers with
video/x-raw(memory:GLMemory) caps and achieve zero-copy.
Thanks to Alicia Boya García <aboya@igalia.com> for her work co-debugging
the issue and contributing to find a solution.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2056>
Sources that can internally handle buffering shouldn't have yet another
buffering element after them. This can be simply detected by checking whether
the source can answer a TIME BUFFERING query just after creation.
If that is the case, we can expose the element's source pads directly.
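Roughly, the check amounts to the following sketch (names are illustrative,
not the actual implementation):

    #include <gst/gst.h>

    /* Sketch: does this source handle buffering itself? If it can answer a
     * BUFFERING query in TIME format right after creation, assume it does. */
    static gboolean
    source_handles_buffering (GstElement * source)
    {
      GstQuery *query = gst_query_new_buffering (GST_FORMAT_TIME);
      gboolean handled = gst_element_query (source, query);

      gst_query_unref (query);
      return handled;
    }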
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1905>
By default, the classification is
"Converter/Filter/Colorspace/Scaler/Video/Hardware", but if VA
post-processor driver supports either color balance, skin tone
enhancement, sharpening or noise reduction, "Effect" is added.
Thus, if vapostproc's rank is raised, it can be chosen by
autovideosink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2066>
g_signal_disconnect*() doesn't stop any existing callbacks from running,
which means that if the notify::state callback is in progress in one
thread and the data channel object is finalize()ed in another thread,
then there could be a use-after-free when trying to lock the data channel
object.
We can't reasonably use a GWeakRef as we don't have a 'parent' object to
free the GWeakRef after the data channel is finalized. This is also
complicated by the fact that the application can hold a reference to the
data channel object that would live beyond the lifetime of webrtcbin
itself.
We solve this by implementing a makeshift weak-ref solution internally with
a list of outstanding data channels.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
If things progress fast enough, some state changes may not be seen by
the waiting code.
Fix by:
1. keeping a list of all the state changes
2. having the waiting code check each entry; if the relevant state is found,
all states up to and including it are removed.
This ensures that any waits will see all the state sets.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
Input (sink pads) is the already-ssrc-muxed stream with the relevant rtp
sdes header extensions already applied:
- mid
- stream-id
- repaired-stream-id
Output (src pads) has the streams separated into individual SSRCs, as
that's what rtpbin gives us.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
Each rtpbin-exposed recv_src pad is now exposed as a webrtcbin src_%u pad,
with no meaning applied to the value of %u. Previously %u used to mean the
mline in the SDP. If this is still required, the transceiver can be
retrieved from the pad and the "mlineindex" property read from the
transceiver. The "mid" is also retrievable from the transceiver.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
When creating a transceiver while creating an answer, the media kind of the
transceiver was never set correctly initially. This would lead to a
GST_WARNING being produced about changing a transceiver's media kind.
Fix by retrieving the GstSDPMedia kind from the offer instead, as the answer
GstSDPMedia has not been set because the answer caps have not been chosen yet.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
In order for other plugins to use the gstva objects, such as allocators and
buffer pools, this merge request moves them from the va plugin to the gstva
library. These objects are not exposed in <gst/va/gstva.h> since they are not
expected to be used by users, only by plugin implementors.
Because the surface copy design, which is used to implement the allocator's
mem_copy() virtual function, depends on the vafilter, which is kept inside
the plugin, memory copy through VAPostProc is disabled and removed temporarily.
Also added some missing parameter validation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2048>
Untabify the header file.
The logging category was moved from the generic plugin category to
the display category. It can be argued that video format hacks are
display dependent.
Added validations for input parameters.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2048>
... and add more encoding options.
QSV API supports dynamic bitrate change without IDR insertion.
That's a more efficient way of updating encoding options at runtime
than starting a new sequence with an IDR frame on every bitrate change.
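For example, assuming the bitrate is exposed as a regular element property
(the property name and unit here are assumptions, not taken from this change),
an application could adjust it while the pipeline stays PLAYING:

    #include <gst/gst.h>

    /* Sketch: change the target bitrate on a running encoder element without
     * forcing an IDR or a state change. The "bitrate" property name and its
     * unit are assumptions for this illustration. */
    static void
    set_bitrate (GstElement * encoder, guint kbps)
    {
      g_object_set (encoder, "bitrate", kbps, NULL);
    }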
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2039>
FFmpeg 5+ doesn't allow overriding the codec anymore (it causes a segfault if
you attempt to do that). But the best part is ... that with the current caps
implementation in the pad template and gst_ffmpeg_caps_to_codecid() we would
never replace it with anything different from the existing codec id.
Fixes #1054
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2052>
../gst-libs/gst/mpegts/gst-dvb-section.c:206:9: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
guint i = 0, allocated_events = 12;
^
../gst-libs/gst/mpegts/gst-dvb-section.c:365:9: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
guint i = 0, allocated_streams = 12;
^
../gst-libs/gst/mpegts/gst-dvb-section.c:543:9: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
guint i = 0, allocated_streams = 12;
^
../gst-libs/gst/mpegts/gst-dvb-section.c:885:9: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
guint i = 0, allocated_services = 8;
^
../gst-libs/gst/mpegts/gst-dvb-section.c:1316:9: error: variable 'i' set but not used [-Werror,-Wunused-but-set-variable]
guint i = 0, allocated_services = 8;
^
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2046>
Check that `self` and `self->callback` are defined. `self` can be set to
`NULL` in `remove_listener`, and `self->callback` can be set to `NULL`
inside `gst_amc_surface_texture_jni_set_on_frame_available_callback`.
This can cause a segfault since the Java object can outlive the C
object, and call the callback after `remove_listener` is called.
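Conceptually the guard looks like the following sketch (the type and fields
are hypothetical stand-ins; only the shape of the check matters):

    #include <glib.h>

    /* Hypothetical stand-in for the real listener object, just to show the
     * shape of the guard described above. */
    typedef struct
    {
      void (*callback) (gpointer user_data);
      gpointer user_data;
    } ExampleListener;

    /* Invoked from the Java side, which may outlive the C object, so both the
     * object and its callback must be re-checked before use. */
    static void
    example_on_frame_available (ExampleListener * self)
    {
      if (self == NULL || self->callback == NULL)
        return;                 /* listener was already removed */

      self->callback (self->user_data);
    }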
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2024>
Use the return values from gst_element_link_pads() and gst_bin_add().
Fixes:
../ext/gl/gstglmixerbin.c:305:12: error: variable 'res' set but not used [-Werror,-Wunused-but-set-variable]
gboolean res = TRUE;
^
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2038>
Produce an error if we try to use the feature of holding the capture buffer
but it is not supported by the driver. Ignoring this can lead to stalls,
as the driver will run out of capture buffers to decode into. This
affects slice decoders but also split-field interlaced decoding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2009>
This flag is set when the stream is interlaced and the specific
slice is made of single-parity fields rather than being paired at the
macroblock layer. This is rarely needed in the late decoding process,
but the Rockchip RKVDEC hardware interface requires it, hence it needs to
be passed through the V4L2 stateless interface.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2009>
The official releases of libxml2 have been migrated to gitlab where
they are published for download via HTTP instead of FTP. Besides
adapting to the new location we now also get the benefit that the
tarball can be downloaded in restricted networks where FTP might be
blocked.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2020>
Some problematic H265 streams may be missing reference frames in the DPB,
producing messages like: "No short term reference picture for xxx".
So there may be empty entries in ref_pic_list0/1 when they are passed to the
subclass's decode_slice() function. We need to check for NULL pointers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2018>
Specify modules to look for OpenEXR when CMake is used, as we may have
CMake config files instead of pkg-config files resulting from building
OpenEXR with CMake, which is typically the case for Visual Studio builds.
In this case, Meson does seem to find the 'OpenEXR' package with CMake
after trying pkg-config, but does not consider it enough without the
'modules:' argument.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2014>
This reverts commit 3cad3455377d5a22faa138d9df840257059776c8.
That commit was breaking the association between an audio and
a video track in the standard case.
In practice, to support carrying separate MediaStreams, we are
going to need a way to map which MediaStreamTrack belongs to which
MediaStream, but that will require some thinking about the API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2023>
From https://datatracker.ietf.org/doc/html/draft-ietf-mmusic-msid-16:
> Multiple media descriptions with the same value for msid-id and
> msid-appdata are not permitted.
Our previous implementation of simply using the CNAME as the msid
identifier and the name of the transceiver as the msid appdata was
misguided and incorrect, and created issues when bundling multiple
video streams together: the ontrack event was emitted with the same
streams for the two bundled media, at least in Firefox.
Instead, use the transceiver name as the identifier, and expose
a msid-appdata property on transceivers to allow for further
customization by the application. When the property is not set,
msid-appdata can be left empty as it is specified as optional.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2003>
WebKit is not going to render anything until a URI is set, leading to
WPE posting a `WPE View did not render a buffer` error message. To avoid
requiring the user to know this if they only want to use
`wpesrc::load-bytes`, we can just use `about:blank` as the default and
everything will work as users would expect.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1492>
This was not a problem here because even if we end up accidentally
linking to the wrong pad, things will work out eventually as long as
one pad-added is emitted for each pad that is added.
But it will be a huge problem if someone copies this code and changes
something that requires different handling for different sorts of
pads. The resultant code will be racy. Let's not do this, it's a bad
example.
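A less fragile pattern is to inspect the newly added pad in the pad-added
handler before deciding what to link, e.g. by looking at its caps. A minimal
sketch (not the example's actual code):

    #include <gst/gst.h>

    /* Sketch: decide what to do with a newly added pad based on its caps
     * rather than on the order in which pads happen to appear. */
    static void
    on_pad_added (GstElement * element, GstPad * pad, gpointer user_data)
    {
      GstCaps *caps = gst_pad_get_current_caps (pad);

      if (caps != NULL) {
        const GstStructure *s = gst_caps_get_structure (caps, 0);
        const gchar *name = gst_structure_get_name (s);

        if (g_str_has_prefix (name, "audio/"))
          g_print ("%s: link to the audio branch\n", GST_PAD_NAME (pad));
        else if (g_str_has_prefix (name, "video/"))
          g_print ("%s: link to the video branch\n", GST_PAD_NAME (pad));

        gst_caps_unref (caps);
      }
    }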
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2008>
Adding new encoder elements nvd3d11{h264,h265}enc for Direct3D11
input support and re-written nvcuda{h264,h265}enc elements.
The newly written elements have some differences compared with the old
nv{h264,h265}enc, including non-backward-compatible changes.
* RGBA is not a supported input format any more:
The new elements will support only YUV formats to avoid implicit conversion
done by hardware. Ideally it should be done by an upstream element
in order to have more control over it. Moreover, RGBA support can cause
redundant RGBA -> YUV conversion if multiple encoders are
used for the same RGBA input.
* Subsampled planar format support is dropped:
I420 and YV12 are not supported formats for Direct3D11.
Although they are supported in CUDA mode, they are not a hardware-friendly
memory layout and will waste GPU memory, since the UV planes
will have large padding due to the memory layout requirement of NVENC.
* GL support is dropped: Similar to the RGBA case,
GL support in the encoder would be suboptimal if the GL input is
used by multiple encoders, because each encoder would copy the GL memory
into CUDA memory.
An upstream cudaupload element can be used for GL <-> CUDA
interop instead.
* No more pre-allocation of encoder input surfaces. The new implementation
will use input CUDA memory without a copy (zero-copy) or
will copy into an NVENC input buffer struct in the case of
system memory input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1997>
Dispatches a list of active touch events to the wpe view on each
received TOUCH_FRAME event. Touch inputs currently only move the cursor,
since wpe doesn't seem to support clicking/scrolling or zooming with
touch input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1633>
Represents touchscreen events as a trail of black squares, one for each
reported position. Additionally, this adds the `display-mouse` and
`display-touch` properties to toggle visibility of mouse/touchscreen
events, since touchscreens often emulate mouse events, as well as
logging for all received navigation events.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1633>
Add 5 new navigation event types for touchscreen events, with the same
naming and meaning as in libinput - touch-down, touch-motion, touch-up,
touch-frame and touch-cancel - as well as constructors and parse
functions for them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1633>
Add a function to get x/y coordinates from suitable navigation events,
and one to create a copy with given coordinate values.
For e.g. translating event coordinates, this avoids having to either
switch on the event type to select the right parse function, or
having to rely on implementation details of the underlying event
structure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1633>
This deprecates the current send_event interface, and the wrapper
functions based on it, replacing it with a send_event_simple interface and
wrapper function. Together with the new event constructors, this avoids
implementations having to directly access the underlying structure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1633>
Since the strings are empty for GST_MSDK_CAPS_MAKE_WITH_DMABUF_FEATURE
and GST_MSDK_CAPS_MAKE_WITH_VA_FEATURE, when executing
gst-inspect-1.0.exe msdkh265enc there will be a static caps conversion error
because of the extra semicolon between two empty strings. Now macro
definitions are added to avoid this issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2004>
Pass the current frame to the duplicate_picture callback. This makes it easier
to set the frame's output_buffer if we already have one available. Also
documented that unlike VP9, it is not optional to implement this as the
picture will populate the DPB if it is a key-frame. To ensure this, remove the
default implementation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1992>
The system_frame_number is notably used by the V4L2 decoder as a unique
identifier for the frame that was decoded. This value is used to tell the
driver which frame to reference, as V4L2 does not have an efficient mechanism
to otherwise pass back the frames.
For this reason, and because it is more logical, copy the original
system_frame_number into the duplicate picture instead of using the current
frame's.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1992>
Two RTP Header extensions are very relevant for rtprtxsend/receive.
1. "urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id": will always be removed
2. "urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id": will be written
instead of the "rtp-stream-id" header extension.
Currently it's only a simple replacement of one header extension for
another; however, a future change would only add the relevant extension
based on some heuristics (like, video frames only on one of the RTP key
frame buffers, or only until the rtx ssrc has been validated by the peer)
in order to reduce the required bandwidth.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1759>
Showing an existing keyframe has a special meaning in AV1: all the reference
frames will be refreshed with the original keyframe information. The refresh
process (7.20) is implemented by saving data from the frame_header into the
state. To fix this special case, load all the relevant information into the
frame_header.
As there is nothing happening between this and the loading of the key-frame
into the state, this patch also removes the separate API function, using it
internally instead.
Fixes #1090
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1971>
We need to parse the payload type map provided by the offer SDP and
set those values on the payloader, otherwise webrtcbin will create
a recvonly answer SDP and we won't send anything to the browser.
Fixed it for both C and Python sendrecv examples.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1864>
Earlier, the example only supported one negotiation mode:
* Browser client is running, gstreamer starts a call and sends offer
Now these three modes are also supported:
* Browser client is running, gstreamer starts a call and sends an
offer request
* gstreamer connects and waits for browser client to start a call and
send an offer
* gstreamer connects and waits for browser client to start a call and
send an offer request
The following features are still missing:
* Data channel support
* TWCC support + stats logging
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1864>
The documentation for several gst_*_writable_structure functions stated
that they would never return NULL, without making clear that the passed
object is required to be writable. This changes the wording in those
cases to make that requirement more clear.
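In other words, callers are expected to ensure writability themselves first,
roughly along these lines (a sketch using the event variant; the field added
is arbitrary):

    #include <gst/gst.h>

    /* Sketch: gst_event_writable_structure() requires a writable event, so
     * make the event writable first if other references may exist. */
    static void
    add_example_field (GstEvent ** event)
    {
      GstStructure *s;

      *event = gst_event_make_writable (*event);
      s = gst_event_writable_structure (*event);
      gst_structure_set (s, "example-field", G_TYPE_INT, 42, NULL);
    }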
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1784>
When syncing to an RFC 7273 clock, this will add the original
reconstructed reference clock timestamp to buffers in the form
of a GstReferenceTimestampMeta.
This is useful when we want to process or analyse data based
on the original timestamps, untainted by any local adjustments,
for example to reconstruct AES67 audio streams with sample accuracy.
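Downstream code can then read those original timestamps back off the buffers,
roughly like this sketch (not part of this change; passing NULL accepts any
reference caps rather than matching a specific clock identity):

    #include <gst/gst.h>

    /* Sketch: print the original reconstructed reference clock time that was
     * attached to a buffer as a GstReferenceTimestampMeta. */
    static void
    print_reference_timestamp (GstBuffer * buffer)
    {
      GstReferenceTimestampMeta *meta =
          gst_buffer_get_reference_timestamp_meta (buffer, NULL);

      if (meta != NULL)
        g_print ("original timestamp: %" GST_TIME_FORMAT "\n",
            GST_TIME_ARGS (meta->timestamp));
    }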
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1964>
We bind transceivers' fec_percentage property to the FEC encoder
percentage property, and with the binding being bidirectional a deadlock
was introduced by the latest changes from !1762:
We take hold of the transceiver's object lock, then add the binding
and set the property to its initial value on the encoder, which causes
set_property to deadlock in the transceiver when the binding kicks in.
Changing the binding type to DEFAULT (source to target) is enough
to address the deadlock and still serves the original intent.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1967>
Previously the result of the calculations included inaccuracies caused
by the NTP clock estimation, which caused the timestamps to jitter
+/- 1/clockrate.
By reorganizing the calculations it is possible to get rid of this
inaccuracy and calculate deterministic and exact packet timestamps based
on the actual NTP clock as long as the estimation is not off by more
than 2**31 clockrate units.
The only remaining inaccuracy that is introduced now is caused by the
conversion from the NTP clock to the pipeline clock.
Also split up debug output, demote many messages to the trace debug
level and output more intermediate results.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1955>
This is difficult to encounter in ordinary networks, but is
encountered when using tc-netem to add random delays to packets, and
also when your UDP stream is bonded over multiple links with varying
characteristics.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1952>
Holding previously decoded but not yet outputted pictures even after
new_sequence is not a safe approach in various aspects.
However, we cannot drain out the DPB on new_sequence() unconditionally,
because there is a case where the decoder should drop decoded pictures
if NoOutputOfPriorPicsFlag is set.
To detect NoOutputOfPriorPicsFlag before the new_sequence() call,
this patch splits the decoding process into two paths: first the NAL units
are parsed in order to detect NoOutputOfPriorPicsFlag, and then each NAL unit
is decoded.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1937>
Gapless playback is handled by adjusting buffer timestamps & durations
and by adding GstAudioClippingMeta.
Support for "Frankenstein" streams (= poorly stitched together streams)
is also added, so that gapless playback support doesn't prevent those
from being properly played.
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1028>
1. Always set the corresponding GstVaH264EncFrame pointer when the
GstVideoCodecFrame pointer is assigned, which makes the logic safe.
2. Fix the forgotten change in _sort_by_frame_num. Its input pointer is now
of GstVideoCodecFrame type.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1935>
Add properties to control input cropping in the V4L2 device.
The input cropping is applied before composing the result to the
capture buffer. By default the capture size will be set to the same
size as the crop region, but it can be scaled to a different output
frame size if supported by the V4L2 device.
If scaling is not supported, the cropped image will
be composed as is into the top-left corner of the capture buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
Get the current crop bounding region from the V4L2 device so
that it can be provided to applications and used to validate
crop settings. Also make the default crop region available so
that it can be used to reset the crop when appropriate.
Uses the selection API when available with fallback to the crop
API for older kernels.
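Conceptually the fallback works like this sketch against the raw V4L2 ioctls
(the function name is illustrative and this is not the element's actual code):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: query the crop bounding rectangle, preferring the selection API
     * and falling back to the older crop API on kernels without it. */
    static int
    get_crop_bounds (int fd, struct v4l2_rect *bounds)
    {
      struct v4l2_selection sel;
      struct v4l2_cropcap cropcap;

      memset (&sel, 0, sizeof (sel));
      sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      sel.target = V4L2_SEL_TGT_CROP_BOUNDS;
      if (ioctl (fd, VIDIOC_G_SELECTION, &sel) == 0) {
        *bounds = sel.r;
        return 0;
      }

      /* Older kernels: VIDIOC_CROPCAP reports the bounds and default rect. */
      memset (&cropcap, 0, sizeof (cropcap));
      cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      if (ioctl (fd, VIDIOC_CROPCAP, &cropcap) == 0) {
        *bounds = cropcap.bounds;
        return 0;
      }

      return -1;
    }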
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
The gst_v4l2_object_set_crop() is used for removing buffer
alignment padding. Give it a name that better reflects
that usage. This helps to distinguish from cropping of the
input image (e.g. cropping at the image sensor on a capture
device), which can be unrelated to the memory buffer padding,
especially if scaling is involved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
The default query handler would go through typefind, which by default accepts
any CAPS. But once configured, parsebin can't reconfigure itself; it should
therefore pass through the ACCEPT_CAPS query to the first element after
typefind (if any).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Don't reconfigure outputs when the select-streams
event is sent from the app, as the selection may
not take effect for some time. Instead, wait
for the pipeline to confirm the new set of
selected streams when it sends the message.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
If we previously had subtitles coming in, the video
may be chained through a text overlay block. Before,
the code would end up trying to link pads that were
already linked and video would not get reconnected
properly.
To fix that, make sure that the candidate
pads are actually unlinked first. If a textoverlay
is present and no longer needed, it will be cleaned
up later in the reconfiguration sequence.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Requesting a new pad can start a reconfiguration cycle, where
playsink will block all input pads and wait for data on them
before doing internal reconfiguration. If a pad is released,
that reconfiguration might never trigger because it's now waiting
for a pad that doesn't exist any more.
In that case, complete the reconfiguration on pad release.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1180>
Make it possible to configure the element to obtain the timestamps from
reference timestamp metas instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the unix time epoch.
In addition the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
This patch fixes a segfault in gst_structure_new() with warnings as below.
GLib-GObject-WARNING **:
../gobject/gtype.c:4330: type id '0' is invalid
GLib-GObject-WARNING **:
can't peek value table for type '<invalid>' which is not currently referenced
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1918>
On GstVideoDecoder::{drain,flush}, we send a null packet with the
CUVID_PKT_ENDOFSTREAM flag to drain out the decoder, which will
reset the CUVID parser as well.
To continue decoding after the drain, the next input buffer
should include sequence headers, otherwise the CUVID parser will
not report any decodable frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1911>
There could be a case where the new program has the same program number as the
previous one ... but is actually located on a PID previously used for an
elementary stream. In that case the program is guaranteed not to be an update
of the previous program but a completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look for programs by their PID also. Using a hash table
was a bit sub-par (and overkill) for storing a range of programs.
This is needed because there could potentially be two programs with the same
program id but different PMT PID (while one is being deactivated the new one
would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
This commit modifies the interleave calculation to allow growing when incoming
data is before the segment start.
The rationale is that there is no requirement whatsoever for data before the
segment start to be "coherent" on all streams.
For example, a demuxer could rightfully send data from the video stream from the
previous keyframe (potentially quite a bit before the segment start) and the
audio from just before the segment start.
This will activate the same logic as growing the interleave when some streams
haven't received buffers yet.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
* When a stream receives EOS, it will no longer change, we shouldn't take that
stream into account for interleave calculation.
* When streams (re)appear, we do not want to grow the initial interleave values
to excessive values. Instead of setting it to a default of 5s, progressively
grow it to that maximum.
* When the status of the input streams changes (i.e. going to/from "some
haven't received data yet" and "all have received data"), update the interleave
immediately instead of waiting for (potentially) 5s of data before updating
it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
The point here is that rtpsession will create a new rtpsource when
the field "rtx-ssrc" is present, and when not doing rtx, that means
a random ssrc will create a new rtpsource that will be included in RTCP
messages for the current session.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1882>
Instead of using a GstMiniObject to hold the H264 frame, a plain
structure is now used. Besides, instead of holding a reference to the
GstVideoCodecFrame, the H264 frame structure is set as the
GstVideoCodecFrame's user data.
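The user-data mechanism works roughly like this sketch (the frame structure
and helper names are hypothetical; only the set/get calls are the standard
GstVideoCodecFrame user-data API):

    #include <gst/video/video.h>

    /* Hypothetical stand-in for the encoder's per-frame H264 structure. */
    typedef struct
    {
      guint frame_num;
    } ExampleH264Frame;

    /* Sketch: attach the plain structure as user data; the destroy notify
     * frees it together with the GstVideoCodecFrame. */
    static void
    attach_enc_frame (GstVideoCodecFrame * frame)
    {
      ExampleH264Frame *h264_frame = g_new0 (ExampleH264Frame, 1);

      gst_video_codec_frame_set_user_data (frame, h264_frame, g_free);
    }

    static ExampleH264Frame *
    lookup_enc_frame (GstVideoCodecFrame * frame)
    {
      return gst_video_codec_frame_get_user_data (frame);
    }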
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1856>
According to va_dec_hevc.h, pic_param->st_rps_bits should be set
for the accelerator to skip parsing the *short_term_ref_pic_set
(num_short_term_ref_pic_sets) structure.
Also modify fill_picture to get the parser info as a parameter,
in order to get slice_hdr->short_term_ref_pic_set_size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1886>
Fix a small race where a group can receive stream-start
and post a pending buffering message just as another
thread posts a different buffering message, causing them
to be received by the application out of order. In the
worst case, this leads to the application receiving a
stale 99% buffering message and going back to buffering
right after the 100% buffering message.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1840>