When only linking the element, the upload object will be created from
_transform_caps() but will never be unreffed, as the only unref is in _stop().
Add a new finalize handler that unrefs the upload object if non-NULL to
cover this case.
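A minimal sketch of such a handler, assuming a GstGLUploadElement-like
object with an `upload` member (names are illustrative):

    static void
    gst_gl_upload_element_finalize (GObject * object)
    {
      GstGLUploadElement *self = GST_GL_UPLOAD_ELEMENT (object);

      /* release the upload object if ::stop was never reached */
      if (self->upload)
        gst_object_unref (self->upload);
      self->upload = NULL;

      G_OBJECT_CLASS (parent_class)->finalize (object);
    }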
It's possible that the window may have been destroyed when a winsys
event comes in for it.
Fixes an assertion in make -C tests/check generic/states.check
The buffer data is not always copied in _Fill, and will be
read in _DecodeFrame. We unmap at the end of the function,
whether we get there via failure or early out, and keep a
ref to the buffer to ensure we can use it to unmap the
memory even after _finish_frame is called, as it unrefs
the buffer.
Note that there is an access beyond the allocated buffer,
which is only apparent when playing from souphttpsrc (i.e.,
not from filesrc). This appears to be a bug in the bit
reading code in libfdkaac AFAICT.
https://bugzilla.gnome.org/show_bug.cgi?id=772186
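The mapping pattern described above, as an illustrative sketch (not the
exact element code):

    GstMapInfo map;
    /* take our own ref: _finish_frame() unrefs the frame's buffer,
     * but the mapping must stay valid until we unmap */
    GstBuffer *buf = gst_buffer_ref (frame->input_buffer);

    if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
      /* ... aacDecoder_Fill() and aacDecoder_DecodeFrame() read
       * map.data; gst_audio_decoder_finish_frame() may run here ... */

      /* reached on success, failure, or early out alike */
      gst_buffer_unmap (buf, &map);
    }
    gst_buffer_unref (buf);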
This is specific to when the waylandsink is not being embedded. In
this patch we pass the render lock to the window so it can safely
call gst_wl_window_set_render_rectangle() with the new size.
https://bugzilla.gnome.org/show_bug.cgi?id=722343
We already take the render lock from the wlqueue thread in another
place, which indicates that there is no point in using this atomic
instead of a proper locking mechanism.
When we don't have a viewporter (scaling support), we can't use the
1x1 scale-up image trick. Instead, we need to allocate a buffer with
the same size as the area that needs to have a black background.
This adds support for using non-standard strides. Note that some
extra work is needed for multi-plane formats, which may have a
different GstMemory object per plane. This is not currently a
problem since the SHM interface is limited to one memory.
The buffer pool API does not allow multiple owners. This would otherwise
lead to errors when renegotiation takes place. Also consider the
allocation query's "need_pool" boolean.
Fixes an assertion when moving from passthrough to non-passthrough.
Without an explicit reconfigure, glfilter won't have created the GL
resources such as the FBO, GL bufferpool, etc., and basetransform will
allocate sysmem buffers instead.
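A hedged sketch of that kind of fix, using basetransform's reconfigure
API (the exact call site is illustrative):

    /* leaving passthrough: force renegotiation so the FBO, GL
     * bufferpool, etc. get created before buffers are allocated */
    gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM (filter));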
This makes the viewporter interface optional. The end result is
obviously far from optimal, but it greatly helps testing on older
compositors or gnome-wayland. We can make it strictly required later,
when this new interface gets widely adopted.
When starting a qmlglsink app, a NULL buffer may be set on the
GstQSGTexture, in which case qt_context_ will be a random value and
cause gst_gl_context_activate() to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=770925
Previously it was created in the init function and destroyed in ::stop, which
led to segfaults when reusing the element.
Now the upload object is created in ::transform_caps if it is NULL, which is the
earliest we need it. The other vfuncs already bail out if the upload object is
NULL, which means that negotiation wasn't done.
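A sketch of the lazy creation, with illustrative names (assuming the
upload object can be created without a GL context at this point):

    static GstCaps *
    gst_gl_upload_element_transform_caps (GstBaseTransform * bt,
        GstPadDirection direction, GstCaps * caps, GstCaps * filter)
    {
      GstGLUploadElement *self = GST_GL_UPLOAD_ELEMENT (bt);

      /* created here, the earliest point we need it; freed in ::stop
       * or, if ::stop never runs, in ::finalize */
      if (self->upload == NULL)
        self->upload = gst_gl_upload_new (NULL);

      /* ... proceed with caps transformation using self->upload ... */
    }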
Now when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink, where on macOS avfvideosrc pushes RECTANGLE textures but
glupload was configuring texture-target=2D as output.
The videoaggregator negotiation sequence changed some time
back and broke glstereomix. Instead of doing negotiation incorrectly
in the find_best_format() vfunc, do it directly in the
update_caps() method.
And scale the bitrate with the absolute rate (if it's bigger than 1.0) to get
the real bitrate needed for faster playback.
In my tests this allowed playing a stream at 10x speed without buffering, as
the lowest bitrate is chosen, instead of staying on/selecting the highest
bitrate and then buffering all the time.
It was previously disabled for not very well specified reasons, which no
longer seem valid nowadays.
It now implements this interface with its video-direction
property. Values are changed to GstVideoOrientationMethod, but they have
the same values as the originals.
https://bugzilla.gnome.org/show_bug.cgi?id=768687
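Usage sketch: the property can now be set with the interface's enum
values, which map one-to-one onto the old method values:

    /* rotate 90 degrees clockwise via the interface enum */
    g_object_set (videoflip, "video-direction",
        GST_VIDEO_ORIENTATION_90R, NULL);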
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
_stdint.h is generated by Autotools and we don't really need it. All
supported platforms now ship with stdint.h. The only stickler was MSVC,
and since Visual Studio 2015 it also ships stdint.h now.
It now returns the correct values for both orthographic and perspective
projections and takes into account the aspect ratio of the video, handles
the Y-flipping in GL and by us and uses some more helpers from graphene.
Fixes spurious segfault in unit test, where the task was started again during
shutdown when all pads were removed... and was then still running while the
element was finalized.
We don't have to do yet another additional request but can just download the
data directly.
Also unify the key-unit only mode buffer pushing and extract it into its own
function now that it became more complicated.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
We need to mark every first buffer of a key unit as discont, and also every
first buffer of a moov and moof. This ensures that qtdemux takes note of our
buffer offsets for each of those buffers instead of keeping track of them
itself from the first buffer. We need offsets to be consistent between moof
and mdat.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
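A sketch of the marking described above (condition names illustrative):

    if (first_buffer_of_key_unit || first_buffer_of_moov ||
        first_buffer_of_moof)
      GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_DISCONT);

    /* qtdemux takes note of our offset on DISCONT buffers instead of
     * tracking it itself from the first buffer */
    GST_BUFFER_OFFSET (buffer) = current_byte_offset;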
Fixes the following error when building on OS X.
error: implicit conversion from enumeration type
'GstJPEG2000Colorspace' to different enumeration type
'GstJPEG2000Sampling'
This reverts commit 947656cfd2.
This makes all DASH seeking tests fail. Needs more testing to fully understand
what's going wrong. Revert OK'd by Sebastian.
We don't need to call the latter at all as we're definitely in this period and
the segment is selected via the SIDX.
This is especially important when doing SNAP seeks, as otherwise we would
always start from the beginning of the period (usually 0) again.
After the check on line 1111, media->uri can't be NULL. So the two checks
for GST_HLS_MEDIA_TYPE_CLOSED_CAPTIONS are the same; remove the redundant
one which goes to cc_unsupported.
CID 1364752
Create an output stream for each media when alternate renditions
are present. Update the manifests for all those streams, and
make sure that typefinding is still done for files smaller than 2KB
such as small WebVTT files.
When fetching a byte-region from a server resource,
adjust the downstream buffer offsets so that downstream
doesn't know. This is because id3demux insists on the
first offset being 0. Later we might strip ID3 headers
entirely and this will be unneeded.
Modify playlist updating to track information across updates
better, although still hackish.
When connection_speed == 0, choose the default variant,
not the first one in the (now sorted) variant list, as that
would be the one with the lowest bitrate.
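A sketch of the selection logic (field and helper names illustrative):

    if (demux->connection_speed == 0) {
      /* no user-provided bandwidth limit: take the playlist's default
       * variant rather than the lowest-bitrate head of the sorted list */
      variant = master->default_variant;
    } else {
      variant = find_variant_for_bitrate (master, demux->connection_speed);
    }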
Make M3U8 and GstM3U8MediaFile refcounted. The contents
of both are pretty much immutable already, but if we make
them refcounted we can just return a ref to the media file
from _get_next_fragment() instead of copying over all fields
one-by-one, and then copying them all into the adaptive
stream structure fields again.
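A minimal refcounting sketch of the kind described (illustrative, not
the exact m3u8.c code):

    GstM3U8MediaFile *
    gst_m3u8_media_file_ref (GstM3U8MediaFile * file)
    {
      g_atomic_int_inc (&file->ref_count);
      return file;
    }

    void
    gst_m3u8_media_file_unref (GstM3U8MediaFile * file)
    {
      if (g_atomic_int_dec_and_test (&file->ref_count)) {
        g_free (file->title);
        g_free (file->uri);
        g_free (file);
      }
    }

_get_next_fragment() can then hand out gst_m3u8_media_file_ref (file)
instead of duplicating every field.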
Move state from client into m3u8 structure. This will
be useful later when we'll have multiple media playlists
being streamed at the same time, as will be the case with
alternative renditions.
This has the downside that we need to copy over some
state when we switch between variant streams.
The GstM3U8Client structure is gone, and the main/current
lists are now directly in hlsdemux. hlsdemux had as
many CLIENT_LOCK/UNLOCK as the m3u8 code anyway...
The gst_dash_demux_get_live_seek_range() function returns a stop value
that is beyond the available range. The functions
gst_mpd_client_check_time_position() and
gst_mpd_client_get_next_segment_availability_end_time() in
gstmpdparser.c include the segment duration when checking if a segment
is available, while gst_dash_demux_get_live_seek_range()
in gstdashdemux.c ignores the segment duration.
According to the DASH specification, if maxSegmentDuration is not present,
then the maximum Segment duration is the maximum duration of any Segment
documented in the MPD.
https://bugzilla.gnome.org/show_bug.cgi?id=753751
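The resulting clamp, as a hedged sketch (variable names illustrative):

    /* a segment only becomes available once it has been produced in
     * full, so the live seek range must stop one maximum segment
     * duration before the availability end time */
    stop = availability_end_time;
    if (max_segment_duration != GST_CLOCK_TIME_NONE)
      stop -= max_segment_duration;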
There's no need for the jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create FBOs and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for
effective use with glViewport().
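With those helpers, a simple shader-based filter reduces to something
like this hedged sketch (assuming a filter_texture vfunc operating on
GstGLMemory, as in the 1.10-era GL library; names illustrative):

    static gboolean
    my_filter_filter_texture (GstGLFilter * filter, GstGLMemory * in_tex,
        GstGLMemory * out_tex)
    {
      MyFilter *self = MY_FILTER (filter);

      /* runs the shader over in_tex, rendering into an FBO wrapping
       * out_tex; no extra GL thread marshalling in the element itself */
      gst_gl_filter_render_to_target_with_shader (filter, in_tex, out_tex,
          self->shader);
      return TRUE;
    }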
Don't clear decryption state immediately after
initialising it in the start_fragment. Don't clear
the state of all streams when we want to only clear
the current stream.
https://bugzilla.gnome.org/show_bug.cgi?id=768757
Add a demuxer instance-wide decryption key cache. The current and
last key URL are per-stream, so make a shared cache. Move the
decryption handling into the stream object, and use the shared
cache for the keys.
Prepare hlsdemux for more than one single stream. Currently hlsdemux
assumes there'll only ever be one stream, and most of the stream-specific
state is actually in the hlsdemux structure. Add a stream subclass
and move some stream-specific members there instead.
In this mode, we let WebRTC Audio Processing figure out the delay. This
is useful when the latency reported by the stack cannot be trusted. Note
that in this mode, the leaking of echo during packet loss is much worse.
It is recommended to use PLC (e.g. spanplc, or Opus' built-in PLC).
In this mode, we don't do any synchronization. Instead, we simply process all
the available reverse stream data as it comes.
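Usage sketch for enabling this mode, assuming it is exposed as a
boolean property on the webrtcdsp element:

    g_object_set (dsp, "delay-agnostic", TRUE, NULL);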
Compiler would complain about include directory that didn't
exist because QPA_INCLUDE_PATH gets subst-ed regardless
(and if it didn't we'd have just an empty -I argument).
https://bugzilla.gnome.org/show_bug.cgi?id=767553
This simplifies the code but also removes a bug with tracking of the remaining
size for the initial subfragment: we were not considering the size between the
index and the start of the first moof here.
https://bugzilla.gnome.org/show_bug.cgi?id=764684
When switching fragments we don't want to keep any data around from the last
one, and also forget about all data when doing flushing seeks or selecting new
bitrates.
https://bugzilla.gnome.org/show_bug.cgi?id=764684
The previous code would run out of sync if there was packet loss
or clock skew. When that happened, the echo cancellation feature would
completely stop working. As this is crucial for audio calls, this patch
re-implements synchronization completely.
Instead of letting it drift until the next discont, we now synchronize
against the record data at every iteration. This way we simply never
let the stream drift for longer than a 10ms period. We also shorten the
delay by using the latency up to the probe (basically excluding the sink
latency). This is a decent delay to avoid starving in the probe queue.
https://bugzilla.gnome.org/show_bug.cgi?id=768009
When echo cancellation is enabled, we now fail the pipeline if there is
no echo probe. For this reason there is no need to check if the probe
pointer is set anymore.
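A hedged sketch of the resulting error path (message text illustrative):

    if (self->echo_cancel && self->probe == NULL) {
      GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND,
          ("No echo probe found."), (NULL));
      return FALSE;
    }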
The byte-stream to avc conversion did not consider NAL sizes bigger than 2^16,
multiple layers, multiple NALs per layer, and various other things. This
caused corrupted streams in higher bitrates and other circumstances.
Let's just forward byte-stream as generated by the encoder and let h264parse
handle conversion to avc if needed. That way we only have to keep around one
version of the conversion and don't have to fix it in multiple places.
Rather than assuming something. For example, zerocopy on iOS with GLES3
requires the use of Luminance/Luminance Alpha formats and does not work
with Red/RG textures.
The saved timestamp is used to compute the delay of the probe data.
As it's used with the following incoming buffer, it needs to be offset
by the duration of the buffer to represent the end position. Also,
properly initialize the saved timestamp and protect against TIME_NONE.
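A sketch of the bookkeeping described (field name illustrative):

    GstClockTime ts = GST_BUFFER_PTS (buffer);

    /* store the end position of this buffer, since the delay is
     * computed against the *next* incoming buffer */
    if (GST_CLOCK_TIME_IS_VALID (ts) &&
        GST_BUFFER_DURATION_IS_VALID (buffer))
      ts += GST_BUFFER_DURATION (buffer);

    self->last_probe_ts = ts;   /* may be GST_CLOCK_TIME_NONE */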
Until now, we were synchronizing both the DSP and probe adapters by
waiting and clipping the probe adapter data. This increases CPU
usage, can cause copies if the audio is not 10ms aligned, and worst
of all it prevents the processing from compensating for inaccurate
latency. This is also a step forward toward supporting playback
filters.