Having the opencv feature enabled would lead to the opencv3 dependency
being required, which failed when only opencv4 was available.
Instead, don't require anything and error out at the end if the feature
was enabled but no dependency was found.
Expose SEI data in the H.264 bitstream parser API and, in the
videoparser, extract closed captions and other data that is not
specified in the H.264 spec itself.
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
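A rough sketch of how the exposed SEI API above could be consumed from a
parsed NAL unit. The payload type and union member names used for the
registered user data are assumptions about the new API, not taken from
this message:

  #include <gst/codecparsers/gsth264parser.h>

  static void
  dump_registered_user_data (GstH264NalParser * parser, GstH264NalUnit * nalu)
  {
    GArray *messages = NULL;
    guint i;

    if (gst_h264_parser_parse_sei (parser, nalu, &messages) != GST_H264_PARSER_OK)
      return;

    for (i = 0; i < messages->len; i++) {
      GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);

      /* GST_H264_SEI_REGISTERED_USER_DATA and payload.registered_user_data
       * are assumed names for the new payload */
      if (sei->payloadType == GST_H264_SEI_REGISTERED_USER_DATA)
        g_print ("registered user data SEI, %u bytes\n",
            sei->payload.registered_user_data.size);
    }
    g_array_free (messages, TRUE);
  }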
This is the equivalent of iceTransportPolicy in the RTCConfiguration
dictionary.
Only two values are implemented:
* all: default behaviour
* relay: only gather relay candidates
The third member of the iceTransportPolicy enum, "public", is
obsolete.
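A minimal sketch of selecting the relay-only policy, assuming the
property is exposed on webrtcbin as "ice-transport-policy" with the
nicks listed above:

  #include <gst/gst.h>

  static GstElement *
  make_relay_only_webrtcbin (void)
  {
    GstElement *webrtc = gst_element_factory_make ("webrtcbin", NULL);

    /* only gather relay candidates, per the "relay" value above */
    if (webrtc)
      gst_util_set_object_arg (G_OBJECT (webrtc), "ice-transport-policy", "relay");
    return webrtc;
  }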
* Add a FIXME for future correction of HRDParams parsing.
The spec defines that the number of HRDParams could be up to
"vps_num_layer_sets_minus1 + 1" (i.e., 1024).
* Add parsing of vps_base_layer_{internal,available}_flag.
* Fix possible invalid vps_extension parsing.
Fixes #798
Otherwise setting a URI after creation will already set the state
to READY/buffering and disallow setting the configuration.
See https://github.com/servo/servo/issues/22010
For each lib we build, export its own API in the headers when we're
building it, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
The problem was that we had defined all GST_*_API decorators
unconditionally to GST_EXPORT. This was intentional and only
supposed to be temporary, but it caused linker warnings because
we tell the linker that we want to export all symbols, even
those from external DLLs, and when the linker notices that
they are in external DLLs and not present locally, it warns.
What we need to do when building each library is: export
the library's own symbols and import all other symbols. To
this end we define e.g. BUILDING_GST_FOO and then we define
the GST_FOO_API decorator either to export or to import
symbols depending on whether BUILDING_GST_FOO is set or not.
That way external users of each library API automatically
get the import.
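As an illustration, the prelude pattern described above boils down to
something like this (GST_FOO / BUILDING_GST_FOO are the placeholder
names from the text; GST_API_IMPORT is assumed to be the import
counterpart of GST_API_EXPORT):

  #ifndef GST_FOO_API
  # ifdef BUILDING_GST_FOO
  #  define GST_FOO_API GST_API_EXPORT   /* building the lib: export its own symbols */
  # else
  #  define GST_FOO_API GST_API_IMPORT   /* using the lib: import them */
  # endif
  #endif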
While we're at it, add new GST_API_EXPORT in config.h and use
that for GST_*_API decorators instead of GST_EXPORT.
The right export define depends on the toolchain and whether
we're using -fvisibility=hidden or not, so it's better to set it
to the right thing directly than hard-coding a compiler whitelist
in the public header.
We put the export define into config.h instead of passing it via the
command line to the compiler because it might contain spaces and brackets
and in the autotools scenario we'd have to pass that through multiple
layers of plumbing and Makefile/shell escaping and we're just not going
to be *that* lucky.
The export define is only used if we're compiling our lib, not by external
users of the lib headers, so it's not a problem to put it into config.h.
Also, this means all .c files of libs need to include config.h
to get the export marker defined, so fix up a few that didn't
include config.h.
This commit depends on a common submodule commit that makes gst-glib-gen.mak
add an #include "config.h" to generated enum/marshal .c files for the
autotools build.
https://bugzilla.gnome.org/show_bug.cgi?id=797185
When the position query fails, the returned value shall remain -1 instead of 0,
to avoid confusion on the application side between an error and the beginning
of the media.
https://bugzilla.gnome.org/show_bug.cgi?id=797066
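Illustrative of the distinction on the application side (a sketch, not
the patched code itself):

  #include <gst/gst.h>

  static gint64
  get_position_or_minus_one (GstElement * pipeline)
  {
    gint64 pos = -1;

    /* on failure pos is left at -1, which an application can tell apart
     * from a real position of 0 */
    if (!gst_element_query_position (pipeline, GST_FORMAT_TIME, &pos))
      return -1;

    return pos;
  }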
Move the errant piece of dtlssrtpenc state change
management from dtlstransport in the WebRTC libs
into the transportsendbin that does the rest of
the element management, so it's all in one place.
It is completely legal to have packets with zero size.
A zero-sized packet indicates a header with only a start code;
one example is a user data packet. The patch allows
GstMpegVideoPacket to have zero size.
https://bugzilla.gnome.org/show_bug.cgi?id=796477
If the connection-speed property is in use, this value should be used as the
current download rate since subclasses might read it to figure out
which playlist variant they will use.
https://bugzilla.gnome.org/show_bug.cgi?id=784592
Regardless of LIVE or VOD, the "a manifest has a next period but is
currently EOSed" state means that it's time to advance the period.
The previous behaviour of adaptivedemux, however, only allowed period
advancing in the VOD case, since adaptivedemux tried to update and wait
for a new manifest without respecting the existence of the next period.
https://bugzilla.gnome.org/show_bug.cgi?id=781183
This moves all the conversion related code to a single place, allows
less code-duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
This is only used for caching reasons and should never actually be in
the public API. If this is ever a bottleneck later, caching around a
class private struct could be implemented.
The aggregated_frame is now called prepared_frame and passed to the
prepare_frame and cleanup_frame virtual methods directly. For the
currently queued buffer there is a method on the video aggregator pad
now.
We need different export decorators for the different libs.
For now no actual change though, just rename before the release,
and add prelude headers to define the new decorator to GST_EXPORT.
Those fields were introduced in version 2 and later to define new
profiles such as the format range extensions profiles (A.3.5).
NOTE: This patch breaks the parser ABI; a rebuild is needed.
https://bugzilla.gnome.org/show_bug.cgi?id=793876
We used to have the same enum to represent H265 profiles and IDC values.
Those are no longer the same, with extension profiles defined from
version 2 of the spec.
Split those enums so the semantics of each are clearer and we'll be able
to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
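A short sketch of the intended use, assuming the parsed SPS exposes the
profile_tier_level as a struct field:

  #include <gst/codecparsers/gsth265parser.h>

  static GstH265Profile
  get_sps_profile (GstH265SPS * sps)
  {
    /* maps the parsed profile_tier_level (profile idc, flags, ...) to a
     * GstH265Profile value, including extension profiles */
    return gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);
  }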
https://bugzilla.gnome.org/show_bug.cgi?id=793876
There is nothing in the spec that states that the framerate is not valid
in that case. This aligns GStreamer with FFmpeg's behaviour for similar
streams.
https://bugzilla.gnome.org/show_bug.cgi?id=793284
SDPs are generated and consumed according to the W3C PeerConnection API
available from https://www.w3.org/TR/webrtc/
The SDP is either created initially from the connected
sink pads/attached transceivers, as in the case of generating an offer,
or intersected with the connected sink pads/attached transceivers, as in
the case of creating an answer. In both cases, the RTP-payloaded streams
sent by the peer are exposed as separate src pads.
The implementation supports trickle ICE, RTCP muxing, reduced size RTCP.
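A minimal sketch of driving offer creation from application code
(callback and function names are illustrative):

  #include <gst/gst.h>
  #define GST_USE_UNSTABLE_API
  #include <gst/webrtc/webrtc.h>

  static void
  on_offer_created (GstPromise * promise, gpointer user_data)
  {
    GstElement *webrtc = user_data;
    GstWebRTCSessionDescription *offer = NULL;
    const GstStructure *reply = gst_promise_get_reply (promise);

    gst_structure_get (reply, "offer",
        GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
    gst_promise_unref (promise);

    /* apply it locally; sending it to the peer over the signalling
     * channel is up to the application */
    g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);
    gst_webrtc_session_description_free (offer);
  }

  static void
  request_offer (GstElement * webrtc)
  {
    GstPromise *promise =
        gst_promise_new_with_change_func (on_offer_created, webrtc, NULL);
    g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
  }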
With contributions from:
Nirbheek Chauhan <nirbheek@centricular.com>
Mathieu Duponchelle <mathieu@centricular.com>
Edward Hervey <edward@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=792523
According to the VP8 spec, the first partition (whose size can be derived
from the frame header) should contain all the compressed header
information, and the GStreamer codecparser was implemented based on that.
But this doesn't seem to be the case with some streams (#792773), and
libvpx works fine because it uses the whole frame size (not the first
partition size) to initialize the bool decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792773
As most Wayland compositors support XWayland, the X11 backend gets
selected. This also better aligns GStreamer's decision with what
happens with GTK and other stacks out there.
This patch adds code to gldownload to export the image as a
dmabuf if requested. The element now exposes memory:DMABuf as
a caps feature, and if it is selected, the element exports the
texture to an EGL image and then a dmabuf. It also implements a
fallback to system memory download in case the export fails.
https://bugzilla.gnome.org/show_bug.cgi?id=776927
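A sketch of a pipeline where the new caps feature could be requested
(illustrative only; whether it negotiates depends on the platform's
EGL/dmabuf support):

  #include <gst/gst.h>

  static GstElement *
  make_dmabuf_download_pipeline (void)
  {
    /* ask for the memory:DMABuf caps feature on gldownload's output */
    return gst_parse_launch (
        "gltestsrc ! gldownload ! "
        "video/x-raw(memory:DMABuf),format=RGBA ! fakesink", NULL);
  }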
Undefined symbols for architecture x86_64:
"_gst_gl_context_cocoa_get_type", referenced from:
__create_layer in libgstopengl_la-caopengllayersink.o
Might need some more in other headers, but first we need to
clarify what exactly should be exported; there are some
inconsistencies (installed header files vs. functions in the docs).
Libraries in -bad are not covered by our API/ABI stability
guarantees, and to the best of our knowledge everyone using
this API has moved to the replacement APIs ages ago.
It causes crashes in applications because the result of
fbGetDisplay() might be in use elsewhere in the application
and Vivante doesn't seem to do any refcounting
This reverts commit 47fd4d391e.
This patch is incorrect. It doesn't actually compile, and causes a crash
because the viv-fb window implementation needs a native EGL handle
to pass to fbCreateWindow, but the GstGLDisplayEGL handle is actually
an EGLDisplay now (and gets cast to the wrong type).
When switching bitrates we set the old streams as cancelled, but that
could also be confused with a cancel due to other reasons (such as an
error), and it would mistakenly lead the element to stop the pipeline.
This would happen when the stream being replaced was waiting for a
manifest update on a live stream. So make sure that we are stopping in
order to switch bitrates, to avoid erroring out.
https://bugzilla.gnome.org/show_bug.cgi?id=789457
If we're adding to the tail of the queue, it's because we're converting
a gap event, so don't block there: it means we're calling from the output
thread.
https://bugzilla.gnome.org/show_bug.cgi?id=784911
Add a comment for when the state matters. Use a local var for priv in
update_time_level() to improve readability. Move the our_latency local
var below the query results checks.
We want to skip serialization for FLUSH_STOP events (apparently). We can
simplify the code to add it to the top-level conditions. There was nothing
done in the first code path if the event was FLUSH_STOP.
Just queue it like any other serialized event. This way we don't need to
check if there still are buffers in the queue.
Validated with the tests and gst-launch-1.0 pipelines.
Don't reuse the offset variables, which will contain a sample offset, for an
intermediate time value. Instead add a segment_pos variable of type
GstClockTime for this. Use the clock-time macros to check if we got
a valid time.
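For reference, the "clock-time macros" are the GST_CLOCK_TIME_* helpers,
used roughly like this (variable name as in the description above):

  #include <gst/gst.h>

  static void
  print_segment_position (GstClockTime segment_pos)
  {
    /* GST_CLOCK_TIME_NONE marks "no value"; the macro checks for it */
    if (GST_CLOCK_TIME_IS_VALID (segment_pos))
      g_print ("%" GST_TIME_FORMAT "\n", GST_TIME_ARGS (segment_pos));
  }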
According to the logic this cannot happen (we already check this before). So
add an assert like we do above and remove the check. This makes it clearer
that we check the offset range.
Also remove a dead assignment, since we reassign this a few lines below.
Don't copy the whole event struct. Set the input params when we call the
forwarding helper. Initialize the internal fields and return values in the
helper.
This simplifies the code a lot without any functional changes apart from
not closing the display connection. Closing the display connection is
not safe to do as it is shared between all other code in the same
process and no reference counting or anything happens at the platform
layer.
1. Propagate the GstGLDisplay we create
2. Add the created GstGLContext to the propagated GstGLDisplay
Otherwise with multi-branch GL pipelines involving gtkglsink, things
will fall apart and errors will be generated somewhere.
Except for gst/gl/gstglfuncs.h, it is up to the client app to include
these headers. This is coherent with the fact that gstreamer-gl.pc does
not require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to find these headers within its build setup.
For example, gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h.
This is good because the app might want to use only the gstgl API
without having to bother about GL headers.
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
Scenario:
A manifest starts out in live mode but then the recording is finalized
and a subsequent update changes the state to a non-live manifest when
the server has finished recording/transcoding/whatever with the full
list of fragments.
Without this patch, the manifest update task is never stopped on the
live->non-live transition and will busy loop, burning through one CPU
core.
https://bugzilla.gnome.org/show_bug.cgi?id=786275
Make a bunch of symbols private that are currently leaked
accidentally because they have a gst_* prefix and are used
internally. We mark those we can't make static with
G_GNUC_INTERNAL so that they get hidden with the autotools
build as well (although we could just pass -fvisibility=hidden
there too).
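The pattern, for a symbol that cannot be made static (the function name
here is illustrative):

  #include <glib.h>

  /* in a library-internal header; the symbol keeps its gst_ prefix but
   * is hidden from the installed ABI */
  G_GNUC_INTERNAL void gst_foo_do_something_internal (void);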
The goal here is to minimize the work needed to bring all images
to a common format. A better criterion than the number of pads
with a given format is the number of pixels with a given format.
https://bugzilla.gnome.org/show_bug.cgi?id=786078
This commit ensures that the idle probe which GstAdaptiveDemuxStream
adds to the upstream source pad is removed after use. Previously a new
probe was added to the pad whenever a fragment was downloaded, meaning
the number of pad probe callbacks being executed increased continually.
https://bugzilla.gnome.org/show_bug.cgi?id=785957
There can be twice as many stream tasks running as there are output
pads for playback of variant HLS playlists. Half of them are the
current pads, and the other half are the pads that are about to be
switched to due to a bitrate change.
The old code only stopped the current streams which could result
in a deadlock on stopping the pipeline. The changes force stopping
and joining of any prepared streams too.
https://bugzilla.gnome.org/show_bug.cgi?id=785987
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so
that the crossfade result of two opaque frames is also fully opaque at
any point in the crossfading process, avoiding bleed-through in the
layer blending.
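Purely illustrative of the "stays opaque" property described above (not
the compositor code): with an additive weighting of source and
destination, two fully opaque inputs remain opaque for any crossfade
ratio in [0, 1].

  static double
  crossfade_alpha (double src_alpha, double dst_alpha, double ratio)
  {
    /* 1.0 * ratio + 1.0 * (1.0 - ratio) == 1.0 for opaque inputs */
    return src_alpha * ratio + dst_alpha * (1.0 - ratio);
  }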
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
Found on the RPi when gpu_mem is too low, so there is not enough memory
to create the EGLImage, but gst_buffer_pool_acquire_buffer still succeeded.
This leads to a CRITICAL assertion:
gst_egl_image_get_image: assertion 'GST_IS_EGL_IMAGE (image)' failed
https://bugzilla.gnome.org/show_bug.cgi?id=785518
Otherwise check_events() will not remove the GAP event (as the queue
tail is not the event anymore but the GAP buffer), then the GAP buffer
is handled, then the GAP event is handled again, ... forever.
Avoids dereferencing dead objects
What happens in the autovideosink case is that context 1 is created and
destroyed before all the async operations have executed on the associated
window. When the delayed operations execute, they then reference dead
objects and crash.
We fix this by keeping refs over all async operations so the object
cannot be deleted while async operations are in flight.
https://bugzilla.gnome.org/show_bug.cgi?id=782379
This is a workaround for a regression introduced by
f4190a49c0
(adaptivedemux: Check live seeking range more often)
The goal of the previous commit was to be able to cope with non-1.0
rates on live streams which have a "seeking window" (i.e. the server
keeps around quite a bit of the live stream so you can seek back into
it).
Without that commit, two different kinds of issues would happen:
* When doing reverse playback, you would never check whether you
are outside of the seekable region. And would then continuously
try to download fragments that are no longer present.
* When doing fast forward, you would end up requesting fragments
which are not present yet.
In order to determine whether one was *really* outside of the seekable
window, we check whether the current stream position is still
within the seekable region.
The *problem* though with that commit is that it assumes that subclasses
will return continuously updated seeking ranges (i.e. dependent on the
current time), which is *NOT* the case.
For example:
* dashdemux does use the current UTC to determine the seekable region
* hlsdemux uses the values from the last updated manifest
Therefore if one downloads fragments faster than realtime, for HLS
we would end up at the end of the last manifest seekable range, and
the previous commit would consider the stream as being ended... which
is not the case.
In the long run, we need to figure out a way to cope with non-1.0
rates on live streams for all types of stream (including HLS).
https://bugzilla.gnome.org/show_bug.cgi?id=783075
This is a race that was exposed by the {hls|dash}.scrub_forward_seeking
validate test.
The "race" is that a subclass might want to change format, causing
a new stream to be created (but not exposed/switched yet) and put on the
prepared_streams list. That stream will have values (including pending
segment) from the pre-seek state.
Before the stream is exposed/switched, a new seek comes in and the stream
values get updated ... but the prepared ones that will be switched to
don't get updated, causing them to push out wrong segments once they
are exposed.
https://bugzilla.gnome.org/show_bug.cgi?id=773159
Add a function to install the default RGBA pad templates,
but don't make them required so that there can be
GstGLFilter sub-classes with different input/output
caps if they want. Remove the hard-coded RGBA restriction in
the set_caps_features call, as it will be taken care
of by intersecting with the pad templates.
Update all the sub-classes to match
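A sketch of a sub-class opting in, assuming the installer function is
gst_gl_filter_add_rgba_pad_templates() (not named in this message):

  #include <gst/gl/gl.h>

  static void
  my_gl_filter_class_init (GstGLFilterClass * klass)
  {
    /* installs the stock RGBA sink/src pad templates on this sub-class */
    gst_gl_filter_add_rgba_pad_templates (klass);
  }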
Adaptive demuxers are special demuxers that run their own sources
internally. In this patch we flag the demuxer as being a source in order
to receive the downstream events. We then handle the EOS event by
resetting the internal state and pushing EOS on all pads. This handling
is done asynchronously to avoid blocking the user thread.
https://bugzilla.gnome.org/show_bug.cgi?id=723868
When the pad has received EOS, its buffer may still be mixed
any number of times when the pad's framerate is lower
than the output framerate.
This was introduced by my patch in
https://bugzilla.gnome.org/show_bug.cgi?id=782962; this patch
also correctly addresses the initial issue.
On the Raspberry Pi no pkg-config file is provided for the bcm_host
library. We are using AC_CHECK_LIB to detect this lib with autotools;
cc.find_library() is the closest Meson equivalent.
https://bugzilla.gnome.org/show_bug.cgi?id=784026
We have to pass the "height" as height = vmeta->offset[1] / width to the
API, which of course does not work well for formats with only a single
plane. Use the whole memory size instead of the offset in that case.
The previous commit made demux call gst_uri_downloader_cancel() in
_demux_reset(). Note that _demux_reset() is called during PAUSED_TO_READY
and READY_TO_PAUSED, and it will set "cancelled" on the uridownloader,
which blocks further use of the uridownloader. The issue is that a
subclass can use the uridownloader not only for live manifest updates,
but also for fetching other manifests such as the variant and rendition
m3u8 files of an HLS stream. So, to unblock it, demux should clear
"cancelled" before processing the initial manifest.
https://bugzilla.gnome.org/show_bug.cgi?id=783401
before broadcasting preroll.
The deadlock was as follows:
-> The subclass pushes a buffer on a newly-created stream in T1
-> We take the preroll lock in T1, to handle_preroll
-> The demuxer is stopped in T2, we take the MANIFEST_LOCK
-> T1 starts blocking because it received a reconfigure event
and needs to take the MANIFEST_LOCK
-> T2 deadlocks because it now wants the preroll_lock.
https://bugzilla.gnome.org/show_bug.cgi?id=783255
Make sure the manifest update loop is stopped before proceeding with the
resetting of the manifest data. Otherwise, the update loop will try to
use it and that leads to a segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=783028
As we release the MANIFEST_LOCK in stop_tasks,
demux->priv->old_streams can be set; we need to free these,
otherwise we may end up trying to dispose of elements in the
READY state.
https://bugzilla.gnome.org/show_bug.cgi?id=783256
When an accurate seek is requested on a live stream, only request the
exact value for the "starting position" (i.e. start in forward playback
and stop in reverse playback).
https://bugzilla.gnome.org/show_bug.cgi?id=782698
And to make that 100% obvious, only use variables declared within the
switch cases instead of function-wide ones.
Also remove a useless one-time-use-only variable.
CID #1409857
The live seeking range was only checked when doing actual seeks. This was
assuming that the rate would always be 1.0 (i.e. the playback would
advance in realtime, and therefore fragments would always be available
since the seeking window moves at the same rate).
With non-1.0 rates this is no longer valid, and therefore we need
to check whether we are still within the live seeking range when advancing.
https://bugzilla.gnome.org/show_bug.cgi?id=783075
What we want is to retry downloading the fragment on 4xx/5xx errors;
however, returning EOS will cause waiting for a manifest update for live
streams (which may take a really long time) or stop everything for
non-live ones.
Change that to only return EOS/ERROR once we've reached the error limit.
https://bugzilla.gnome.org/show_bug.cgi?id=776609
GL_RGB565 is a sized internal format; the corresponding format should be
GL_RGB and the type GL_UNSIGNED_SHORT_5_6_5. Otherwise GL_INVALID_ENUM
is returned when creating the texture.
https://bugzilla.gnome.org/show_bug.cgi?id=783066
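Illustrative of the valid combination (sized internal format plus
unsized format/type), on a GL that accepts sized internal formats in
glTexImage2D (GLES3 / desktop GL):

  #include <GLES3/gl3.h>

  static void
  upload_rgb565 (GLsizei width, GLsizei height, const void * data)
  {
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB565, width, height, 0,
        GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data);
  }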
This ensures that they really get processed in order with
buffers. Just waiting for the queue to be empty is sometimes not
enough as the buffers are dropped from the pad before the result is
pushed to the next element, sometimes resulting in surprising
re-ordering.
In the case where an aggregator is created and pads are requested but only
linked later, we end up never updating the upstream latency.
This was because latency queries on pads that are not linked succeed,
so we never did a new query once a live source has been linked, so the
thread was never started.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily being used immediately,
which sometimes resulted in using an old buffer with new caps. Of course
there used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the
wrong info, which resulted in invalid reads and corrupt output.
This approach here is more safe. We delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since the
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
The function needs to be unlocked if any data is received, but only
end the first-buffer processing on an actual buffer; synchronized events
don't matter for the first-buffer processing.
https://bugzilla.gnome.org/show_bug.cgi?id=781673
With the macOS/iOS implementations, the active thread can change
multiple times over the life of a pipeline which would expose a race in
the thread tracking.
Fix by taking a ref on the active thread while the context is active.
https://bugzilla.gnome.org/show_bug.cgi?id=779202