Expose SEI data in the H.264 bitstream parser API and
extract closed captions and other data that is not
specified in the H.264 spec itself in the video parser.
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
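A rough sketch of how an application might walk the exposed SEI messages
(function and struct names per the codecparser headers as I recall them;
treat the details as an approximation, not the exact API added here):

  #include <gst/codecparsers/gsth264parser.h>

  static void
  dump_sei (GstH264NalParser * parser, GstH264NalUnit * nalu)
  {
    GArray *messages = NULL;
    guint i;

    if (gst_h264_parser_parse_sei (parser, nalu, &messages) != GST_H264_PARSER_OK)
      return;

    for (i = 0; i < messages->len; i++) {
      GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);

      /* registered user data SEIs are what typically carry CEA-708 closed captions */
      g_print ("SEI payload type %d\n", sei->payloadType);
    }
    g_array_unref (messages);
  }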
This is the equivalent of iceTransportPolicy in the RTCConfiguration
dictionary.
Only two values are implemented:
* all: default behaviour
* relay: only gather relay candidates
The third member of the iceTransportPolicy enum, "public", is
obsolete.
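A hedged example of what this looks like from the application side
(element, property and enum names assumed to be webrtcbin's, not taken
verbatim from this patch):

  #include <gst/gst.h>
  #define GST_USE_UNSTABLE_API
  #include <gst/webrtc/webrtc.h>

  static GstElement *
  make_relay_only_webrtcbin (void)
  {
    GstElement *webrtc = gst_element_factory_make ("webrtcbin", NULL);

    /* only gather relay candidates: the equivalent of
     * { iceTransportPolicy: "relay" } in the RTCConfiguration dictionary */
    g_object_set (webrtc,
        "turn-server", "turn://user:pass@turn.example.com:3478",
        "ice-transport-policy", GST_WEBRTC_ICE_TRANSPORT_POLICY_RELAY, NULL);

    return webrtc;
  }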
* Add FIXME for future correction of HRDParams parsing.
The spec defines that the number of HRDParams could be up to
"vps_num_layer_sets_minus1 + 1" (i.e., 1024).
* Add parsing of vps_base_layer_{internal,available}_flag.
* Fix possible invalid vps_extension parsing.
Fixes #798
Otherwise setting a URI after creation will already set the state
to READY/buffering and disallow setting the configuration.
See https://github.com/servo/servo/issues/22010
For each lib we build, export its own API in the headers when we're
building it, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
The problem was that we had defined all GST_*_API decorators
unconditionally to GST_EXPORT. This was intentional and only
supposed to be temporary, but it caused linker warnings because
we were telling the linker that we want to export all symbols, even
those from external DLLs, and when the linker notices that
they are in external DLLs and not present locally it warns.
What we need to do when building each library is: export
the library's own symbols and import all other symbols. To
this end we define e.g. BUILDING_GST_FOO and then we define
the GST_FOO_API decorator either to export or to import
symbols depending on whether BUILDING_GST_FOO is set or not.
That way external users of each library API automatically
get the import.
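In other words, each library's prelude header ends up looking roughly
like this (hypothetical "foo" library; GST_API_EXPORT/GST_API_IMPORT are
the export/import defines discussed below):

  /* gstfoo-prelude.h (sketch) */
  #ifndef GST_FOO_API
  # ifdef BUILDING_GST_FOO
  #  define GST_FOO_API GST_API_EXPORT   /* building libgstfoo: export its symbols */
  # else
  #  define GST_FOO_API GST_API_IMPORT   /* using libgstfoo: import its symbols */
  # endif
  #endif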
While we're at it, add new GST_API_EXPORT in config.h and use
that for GST_*_API decorators instead of GST_EXPORT.
The right export define depends on the toolchain and whether
we're using -fvisibility=hidden or not, so it's better to set it
to the right thing directly than to hard-code a compiler whitelist
in the public header.
We put the export define into config.h instead of passing it via the
command line to the compiler because it might contain spaces and brackets
and in the autotools scenario we'd have to pass that through multiple
layers of plumbing and Makefile/shell escaping and we're just not going
to be *that* lucky.
The export define is only used if we're compiling our lib, not by external
users of the lib headers, so it's not a problem to put it into config.h
Also, this means all .c files of libs need to include config.h
to get the export marker defined, so fix up a few that didn't
include config.h.
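For reference, the export define the build system writes into config.h is
along one of these lines, depending on toolchain and visibility settings
(shown only as an illustration, not the literal generated output):

  /* MSVC */
  #define GST_API_EXPORT __declspec(dllexport) extern

  /* GCC/Clang with -fvisibility=hidden */
  #define GST_API_EXPORT extern __attribute__ ((visibility ("default")))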
This commit depends on a common submodule commit that makes gst-glib-gen.mak
add an #include "config.h" to generated enum/marshal .c files for the
autotools build.
https://bugzilla.gnome.org/show_bug.cgi?id=797185
When the position query fails, the returned value shall remain -1 instead of 0,
to avoid confusion on the application side between an error and the beginning
of the media.
https://bugzilla.gnome.org/show_bug.cgi?id=797066
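Illustrative only: the reason -1 is the better sentinel is that an
application typically distinguishes the two cases like this (generic
position-query snippet, not the code touched by this change):

  #include <gst/gst.h>

  static void
  print_position (GstElement * pipeline)
  {
    gint64 pos = -1;

    if (gst_element_query_position (pipeline, GST_FORMAT_TIME, &pos) && pos >= 0)
      g_print ("position: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (pos));
    else
      g_print ("position unknown\n");  /* -1 means "unknown", 0 means "at the start" */
  }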
Move the errant piece of dtlssrtpenc state change management from
dtlstransport in the webrtc libs into the transportsendbin that does
the rest of the element management, so it's all in one place.
It is completely legal to have packets with zero size. A zero-sized
packet indicates a header with only a start code; one example is a
user data packet. The patch allows GstMpegVideoPacket to have zero size.
https://bugzilla.gnome.org/show_bug.cgi?id=796477
If the connection-speed property is in use, its value should be used as the
current download rate, since subclasses might read it to figure out
which playlist variant they will use.
https://bugzilla.gnome.org/show_bug.cgi?id=784592
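For context, this is the property an application sets to hint the available
bandwidth (hedged sketch; value in kbps, as I recall from the property docs):

  #include <gst/gst.h>

  static GstElement *
  make_demuxer_with_bandwidth_hint (void)
  {
    GstElement *demux = gst_element_factory_make ("hlsdemux", NULL);

    /* ~3 Mbit/s hint; with this change the same value is also reported
     * as the current download rate to subclasses */
    g_object_set (demux, "connection-speed", 3000, NULL);

    return demux;
  }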
Regardless of LIVE or VOD, the state "the manifest has a next period but
we are currently EOSed" means that it's time to advance to the next period.
The previous behaviour of adaptivedemux, however, only allowed advancing
periods in the VOD case, since adaptivedemux tried to update and wait for
a new manifest without respecting the existence of the next period.
https://bugzilla.gnome.org/show_bug.cgi?id=781183
This moves all the conversion-related code to a single place, reduces
code duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
This is only used for caching reasons and should never actually be in
the public API. If this is ever a bottleneck later, caching around a
class private struct could be implemented.
The aggregated_frame is now called prepared_frame and passed to the
prepare_frame and cleanup_frame virtual methods directly. For the
currently queued buffer there is a method on the video aggregator pad
now.
We need different export decorators for the different libs.
For now no actual change though, just rename before the release,
and add prelude headers to define the new decorator to GST_EXPORT.
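At this stage such a prelude header is only a renaming shim, roughly
(hypothetical "foo" library):

  /* gstfoo-prelude.h (sketch): the per-library decorator exists,
   * but for now it still just expands to the old GST_EXPORT */
  #ifndef GST_FOO_API
  #define GST_FOO_API GST_EXPORT
  #endif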
Those fields have been introduced in version 2 and later to define new
profiles like the format range extensions profiles (A.3.5).
NOTE: This patch breaks the parser ABI, rebuild needed.
https://bugzilla.gnome.org/show_bug.cgi?id=793876
We used to have the same enum to represent H265 profiles and idc values.
Those are no longer the same with extension profiles defined from
version 2 of the spec.
Split those enums so the semantics of each are clearer and we'll be able
to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=793876
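A hedged usage sketch of the new helper (struct and field names as in the
codecparser headers, from memory):

  #include <gst/codecparsers/gsth265parser.h>

  static void
  print_profile (GstH265SPS * sps)
  {
    GstH265Profile profile =
        gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);

    if (profile != GST_H265_PROFILE_INVALID)
      g_print ("profile: %d\n", profile);
  }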
There is nothing in the spec that states that the framerate is not valid in
that case. This aligns GStreamer with FFmpeg's behaviour for similar
streams.
https://bugzilla.gnome.org/show_bug.cgi?id=793284
SDPs are generated and consumed according to the W3C PeerConnection API
available from https://www.w3.org/TR/webrtc/
The SDP is either created initially from the connected
sink pads/attached transceivers as in the case of generating an offer or
intersected with the connected sink pads/attached transceivers as in
the case for creating an answer. In both cases, the rtp payloaded streams
sent by the peer are exposed as separate src pads.
The implementation supports trickle ICE, RTCP muxing and reduced-size RTCP.
With contributions from:
Nirbheek Chauhan <nirbheek@centricular.com>
Mathieu Duponchelle <mathieu@centricular.com>
Edward Hervey <edward@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=792523
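A rough sketch of the offer side of that flow from an application
(signal names as documented for webrtcbin; error handling omitted):

  #include <gst/gst.h>
  #define GST_USE_UNSTABLE_API
  #include <gst/webrtc/webrtc.h>

  static void
  on_offer_created (GstPromise * promise, gpointer user_data)
  {
    GstElement *webrtc = user_data;
    GstWebRTCSessionDescription *offer = NULL;
    const GstStructure *reply = gst_promise_get_reply (promise);

    gst_structure_get (reply, "offer",
        GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
    gst_promise_unref (promise);

    /* apply locally, then send offer->sdp to the peer over the signalling channel */
    g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);
    gst_webrtc_session_description_free (offer);
  }

  static void
  start_negotiation (GstElement * webrtc)
  {
    GstPromise *promise =
        gst_promise_new_with_change_callback (on_offer_created, webrtc, NULL);
    g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
  }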
According to the VP8 spec, the first partition (whose size can be derived
from the frame header) should contain all the compressed header information,
and the gst codecparser was implemented based on that. But that doesn't seem
to be the case with some streams (#792773), and libvpx works fine because it
uses the whole frame size (not the first partition size) to initialize the
bool decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792773
As most Wayland compositors support XWayland, the X11 backend gets
selected. This also better aligns GStreamer's decision with what
happens with GTK and other stacks out there.
This patch adds code to gldownload to export the image as a
dmabuf if requested. The element now exposes memory:DMABuf as
a caps feature, and if it is selected, the element exports the
texture to an EGL image and then a dmabuf. It also implements a
fallback to system-memory download in case the export fails.
https://bugzilla.gnome.org/show_bug.cgi?id=776927
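A minimal illustration of requesting that path (caps feature name from the
text above; gltestsrc/fakesink are just stand-ins for a real producer and
dmabuf consumer):

  #include <gst/gst.h>

  static GstElement *
  make_dmabuf_pipeline (void)
  {
    /* the downstream caps filter asks gldownload for the memory:DMABuf feature */
    return gst_parse_launch (
        "gltestsrc ! gldownload ! video/x-raw(memory:DMABuf) ! fakesink", NULL);
  }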
Undefined symbols for architecture x86_64:
"_gst_gl_context_cocoa_get_type", referenced from:
__create_layer in libgstopengl_la-caopengllayersink.o
Might need some more in other headers, but first need to
clarify what exactly should be exported; there are some
inconsistencies (installed header files vs. funcs in docs).
Libraries in -bad are not covered by our API/ABI stability
guarantees, and to the best of our knowledge everyone using
this API has moved to the replacement APIs ages ago.
It causes crashes in applications because the result of
fbGetDisplay() might be in use elsewhere in the application
and Vivante doesn't seem to do any refcounting
This reverts commit 47fd4d391e.
This patch is incorrect. It doesn't actually compile, and causes a crash
because the viv-fb window implementation needs a native EGL handle
to pass to fbCreateWindow, but the GstGLDisplayEGL handle is actually
an EGLDisplay now (and gets cast to the wrong type).
When switching bitrates we set the old streams as cancelled, but this
could also be confused with a cancel due to other reasons (such as an
error), and it would mistakenly lead the element to stop the pipeline.
This would happen when the stream being replaced was waiting for a
manifest update on a live stream. So make sure that we are stopping in
order to switch bitrates, to avoid erroring out.
https://bugzilla.gnome.org/show_bug.cgi?id=789457
If we're adding to the tail of the queue, it's because we're converting
a gap event, so don't block there: it means we're calling from the output
thread.
https://bugzilla.gnome.org/show_bug.cgi?id=784911
Add a comment for when the state matters. Use a local var for priv in
update_time_level() to improve readability. Move the our_latency local
var below the query results checks.
We want to skip serialization for FLUSH_STOP events (apparently). We can
simplify the code by adding it to the top-level conditions. Nothing was
done in the first code path anyway if the event was FLUSH_STOP.
Just queue it like any other serialized event. This way we don't need to
check if there still are buffers in the queue.
Validated with the tests and gst-launch-1.0 pipelines.
Don't reuse the offset variables, which will contain a sample offset, for an
intermediate time value. Instead, add a segment_pos variable of type
GstClockTime for this. Use the clock-time macros to check if we got
a valid time.
According to the logic this cannot happen (we already check this before). So
add an assert like we do above and remove the check. This makes it clearer
that we check the offset range.
Also remove a dead assignment, since we reassign this a few lines below.
Don't copy the whole event struct. Set the input params when we call the
forwarding helper. Initialize the internal fields and return values in the
helper.
This simplifies the code a lot without any functional changes apart from
not closing the display connection. Closing the display connection is
not safe to do as it is shared between all other code in the same
process and no reference counting or anything happens at the platform
layer.
1. Propagate the GstGLDisplay we create
2. Add the created GstGLContext to the propagated GstGLDisplay
Otherwise with multi-branch GL pipelines involving gtkglsink, things
will fall apart and errors will be generated somewhere.
Except for gst/gl/gstglfuncs.h, it is up to the client app to include
these headers. This is coherent with the fact that gstreamer-gl.pc does
not require any egl.pc/gles.pc, i.e. it is the responsibility of the app
to locate these headers within its build setup.
For example, gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h.
This is good because the app might want to use only the gstgl API,
without needing to bother about GL headers.
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
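Concretely, an app using those headers now does something like:

  /* the app pulls in the platform GL/EGL headers itself ... */
  #include <EGL/egl.h>
  #include <GLES2/gl2.h>
  /* ... before (or independently of) the EGL-specific GstGL headers */
  #include <gst/gl/egl/gstgldisplay_egl.h>
  #include <gst/gl/egl/gstglmemoryegl.h>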
Scenario:
A manifest starts out in live mode but then the recording is finalized
and a subsequent update changes the state to a non-live manifest when
the server has finished recording/transcoding/whatever with the full
list of fragments.
Without this patch, the manifest update task is never stopped on the
live->non-live transition and will busy loop, burning through one CPU
core.
https://bugzilla.gnome.org/show_bug.cgi?id=786275
Make a bunch of symbols private that are currently leaked
accidentally because they have a gst_* prefix and are used
internally. We mark those we can't make static with
G_GNUC_INTERNAL so that they get hidden with the autotools
build as well (although we could just pass -fvisibility=hidden
there too).
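The annotation looks like this in a non-installed header (hypothetical
symbol name, shown only to illustrate the mechanism):

  #include <glib.h>

  /* keeps its gst_ prefix for internal callers, but is hidden from the ABI */
  G_GNUC_INTERNAL
  void gst_foo_private_setup (gpointer self);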
The goal here is to minimize the work needed to bring all images
to a common format. A better criterion than the number of pads
with a given format is the number of pixels with a given format.
https://bugzilla.gnome.org/show_bug.cgi?id=786078
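A toy illustration of the criterion (not the actual implementation):
weight each candidate format by the pixels already in that format, so one
large pad outweighs several small ones.

  #include <gst/video/video.h>

  typedef struct { GstVideoFormat format; guint width, height; } PadInfo;

  static GstVideoFormat
  pick_common_format (const PadInfo * pads, guint n_pads)
  {
    GstVideoFormat best = GST_VIDEO_FORMAT_UNKNOWN;
    guint64 best_px = 0;
    guint i, j;

    for (i = 0; i < n_pads; i++) {
      guint64 px = 0;

      /* total number of pixels that already arrive in pads[i].format */
      for (j = 0; j < n_pads; j++)
        if (pads[j].format == pads[i].format)
          px += (guint64) pads[j].width * pads[j].height;

      if (px > best_px) {
        best_px = px;
        best = pads[i].format;
      }
    }
    return best;
  }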
This commit ensures that the idle probe which GstAdaptiveDemuxStream
adds to the upstream source pad is removed after use. Previously a new
probe was added to the pad whenever a fragment was downloaded, meaning
the number of pad probe callbacks being executed increased continually.
https://bugzilla.gnome.org/show_bug.cgi?id=785957
There can be twice as many stream tasks running as there are output
pads for playback of variant HLS playlists. Half of them are the
current pads, and the other half are the pads that are about to be
switched to due to a bitrate change.
The old code only stopped the current streams which could result
in a deadlock on stopping the pipeline. The changes force stopping
and joining of any prepared streams too.
https://bugzilla.gnome.org/show_bug.cgi?id=785987
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so that the crossfade
result of two opaque frames is also fully opaque at any time in the
crossfading process, avoiding bleeding through the layer blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
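Purely as an illustration of that property (the real math lives in the
compositor blending code): a ratio-weighted additive mix of the alphas
keeps two opaque inputs opaque for every ratio in [0, 1].

  #include <glib.h>

  static gdouble
  crossfade_alpha (gdouble src_alpha, gdouble dst_alpha, gdouble ratio)
  {
    /* 1.0 * ratio + 1.0 * (1 - ratio) == 1.0, so fully opaque stays opaque */
    gdouble a = src_alpha * ratio + dst_alpha * (1.0 - ratio);

    return MIN (a, 1.0);
  }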
Found on the RPi when gpu_mem is too low, so there is not enough memory to
create the EGLImage, but gst_buffer_pool_acquire_buffer() still succeeded.
This leads to a CRITICAL assert:
gst_egl_image_get_image: assertion 'GST_IS_EGL_IMAGE (image)' failed
https://bugzilla.gnome.org/show_bug.cgi?id=785518