This way we let opusdec do the resampling if needed, and we don't carry
around buffers with an unnecessarily high sample rate.
While Opus always uses 48kHz internally, this information from the
header specifies which frequencies are safe to drop.
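As a reminder of where that information lives, here is a minimal C sketch
(function name illustrative) that reads the original input sample rate from
an OpusHead header, whose layout is fixed by RFC 7845:
#include <string.h>
#include <gst/gst.h>

/* Sketch: read the original input sample rate from an OpusHead header
 * (RFC 7845: bytes 12-15, little-endian). 0 means "unknown"; Opus itself
 * always decodes at 48000 Hz regardless of this value. */
static guint
read_opus_input_rate (const guint8 * data, gsize size)
{
  if (size < 19 || memcmp (data, "OpusHead", 8) != 0)
    return 0;
  return GST_READ_UINT32_LE (data + 12);
}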
gstoggdemux.c:1233:11: error: format specifies type 'long' but the argument has type 'ogg_int64_t' (aka 'long long') [-Werror,-Wformat]
granule);
^~~~~~~
https://bugzilla.gnome.org/show_bug.cgi?id=746512
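For reference, the usual shape of the fix for such -Wformat errors, sketched
here with a placeholder object pointer, is to use a portable 64-bit format
macro and an explicit cast:
#include <gst/gst.h>
#include <ogg/ogg.h>

/* Sketch: log an ogg_int64_t granule portably instead of relying on %ld. */
static void
log_granule (GstObject * demux, ogg_int64_t granule)
{
  GST_DEBUG_OBJECT (demux, "granule %" G_GINT64_FORMAT, (gint64) granule);
}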
The code that was calculating the start granule from packet durations
was interpreting a negative value as an error, but this is actually a
valid case that indicates clipping of data at the start.
https://bugzilla.gnome.org/show_bug.cgi?id=743900
If we get EOS while we're trying to build a chain, we disable seeking
and continue instead of posting an error. This can happen in corner
cases such as a stream with a video track that stops before the end.
https://bugzilla.gnome.org/show_bug.cgi?id=745980
When looking for pages while seeking, we stop looking for non-sparse
streams if we don't find one within a given threshold. This fixes
seeking filling up queues and blocking in corner cases such as an
audio file with a pathological one-frame video stream (yes, I saw one).
https://bugzilla.gnome.org/show_bug.cgi?id=745980
Store the video info for the internal decode width/height separately
from the exposed (cropped) frame info, so that it can be used to map
the downstream-allocated video frame buffer correctly when using
GstVideoCropMeta.
Fixes playback of files whose width or height isn't a multiple of
16 pixels.
https://bugzilla.gnome.org/show_bug.cgi?id=741030
Sending seek events upstream from the streaming thread will usually
deadlock, despite this patch having been in master for quite some time
and appearing to work fine.
As such, we fix it by keeping track of seek events and sending them
upstream from a separate thread. Buffers are then discarded until we
get a new segment with the expected seqnum.
READY->PAUSED can be too early, as souphttpsrc can get the HTTP
headers only after this. Try again in the chain function.
Also use the seeking query to disable seeking if upstream reports
being unseekable.
Some resetting has to be done in the NEW_SEGMENT event handler
instead of in the FLUSH_STOP one, which does not arrive for these
non-flushing seeks.
The segment base was also wrongly accounted for. This was hidden
by the fact that flushing resets the base.
A discontinuity is now also signalled on seeking. We also have to
ensure that the discontinuity "sticks" until a buffer with a valid
timestamp goes out, or the audio decoder base class will ignore the
discontinuity for purposes of keeping track of the current time.
This allows using non-flushing segment seeks for looping HTML audio
in particular, and non-flushing seeks more generally.
https://bugzilla.gnome.org/show_bug.cgi?id=729198
The code was using the first non-negative granulepos to seed the
granule tracking, which appeared to work since headers have a zero
granulepos. However, this does not work for files with a hole at the
start, which are common in live streaming.
The correct behavior is to look for the first granule, and subtract
the duration of all the packets finishing on this page.
The function which does this relies on the fact that the ogg_stream
structure can be duplicated by shallow copy, in order to pull the
packets from the first page(s) on the copy without affecting the
original stream state.
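A rough sketch of that shallow-copy trick (illustrative only, not the exact
gstoggdemux code; the per-codec duration callback is passed in as a
placeholder):
#include <ogg/ogg.h>

/* Drain the packets pending in a stream state from a shallow copy, so the
 * original parsing state is left untouched, and sum their durations. */
static ogg_int64_t
pending_packets_duration (const ogg_stream_state * os,
    ogg_int64_t (*packet_duration) (const ogg_packet * packet))
{
  ogg_stream_state copy = *os;  /* shallow copy: shared buffers, own cursor */
  ogg_packet packet;
  ogg_int64_t duration = 0;

  while (ogg_stream_packetout (&copy, &packet) == 1)
    duration += packet_duration (&packet);

  /* Never ogg_stream_clear() the copy: its buffers belong to the original. */
  return duration;
}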
The max latency parameter is "the maximum time an element
synchronizing to the clock is allowed to wait for receiving all
data for the current running time" (docs/design/part-latency.txt).
https://bugzilla.gnome.org/show_bug.cgi?id=744338
Don't use private GMutex implementation details to check
whether it has been freed already or not. Just turn the dispose
function into a finalize function, which will only be called
once; that way we can clear the mutex unconditionally.
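The pattern looks roughly like this (a minimal sketch with a placeholder
GstFoo type, not the actual element code):
#include <gst/gst.h>

typedef struct { GstObject parent; GMutex lock; } GstFoo;
typedef struct { GstObjectClass parent_class; } GstFooClass;

G_DEFINE_TYPE (GstFoo, gst_foo, GST_TYPE_OBJECT);

static void
gst_foo_finalize (GObject * object)
{
  GstFoo *self = (GstFoo *) object;

  /* finalize runs exactly once, so no need to peek at GMutex internals */
  g_mutex_clear (&self->lock);

  G_OBJECT_CLASS (gst_foo_parent_class)->finalize (object);
}

static void
gst_foo_class_init (GstFooClass * klass)
{
  G_OBJECT_CLASS (klass)->finalize = gst_foo_finalize;
}

static void
gst_foo_init (GstFoo * self)
{
  g_mutex_init (&self->lock);
}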
Makes theora work in cases where the header packets are only in the caps
(because theoradec was connected to oggdemux late and missed the
beginning of the stream)
If the streaming task attempts to read a chain while the pipeline
is stopping (which can happen if the pipeline stops shortly after
starting, or when a new URI is being set up in the gapless playback
case), it will see a flushing return from upstream, and should then
also return flushing to the caller, rather than emit a flow error.
https://bugzilla.gnome.org/show_bug.cgi?id=722442
Fix leak of caps event and of caps objects when setting caps on sink and src
pads. Sync audiovisualizer class implementation to the one in gst-plugins-bad.
This commit matches c5ef1bee73 in that module.
https://bugzilla.gnome.org/show_bug.cgi?id=742875
This reverts commit a91d521a36.
Since this is a base class, it is better to check the value instead of
ignoring it, as a child class could be created that returns valuable
information.
klass->setup (scope) will always return TRUE since all children of this class
do so, no need to store the return. Besides, the value is overwritten a few
lines down before it is ever used. Save the unnecessary memory and instructions.
CID #1226467
ret is assigned but not used, and in the next iteration of the loop it is
overwritten by default_prepare_output_buffer (). If there is a flow error
the function should return instead.
CID #1226475
It might happen that the timestamp is before the segment and the
check would succeed. In this case reducing the duration makes no
sense and would lead to broken results.
The previous code was setting keytarget to target
to make sure the keyframe found for each pad was
indeed before the target.
Then if target == keytarget, it assumed a keyframe had been
found, which was not the case if target was before the first frame
in the file.
This patch checks that a keyframe was indeed found, and if not
seeks to 0, without bisecting again.
Assuming default gst qa assets in $HOME/gst-validate
seek_before_first_frame.scenario:
description, seek=true, handles-states=true
pause, playback-time=0.0
seek, playback-time=0.0, start=0.0, flags=accurate+flush
seek, playback-time=0.0, start=0.01, flags=accurate+flush
seek, playback-time=0.0, start=0.1, flags=accurate+flush
GST_DEBUG=*theoradec*:2 gst-validate-1.0 playbin \
uri=file://$HOME/gst-validate/gst-qa-assets/medias/ogg/vorbis_theora.0.ogg \
--set-scenario seek_before_first_frame.scenario
https://bugzilla.gnome.org/show_bug.cgi?id=741097
When encoding, libvorbis will tell us how many samples are encoded
in the buffer it returns. This number may be less than the maximum
number of samples in the block if this is the last packet. If we have
no segment end time, we set it to the end time of that last sample, to
tell downstream that the buffer contains fewer samples.
Samples may be clipped at the end, and this is conveyed by a
granulepos that's smaller than it would otherwise be. Use the
segment stop time to detect this, and calculate the right
granulepos.
When the textoverlay is positioned outside the video frame via deltax or
deltay, the calculation segfaults, but it is also unnecessary since the
text doesn't need to be displayed. So we should clip the text.
https://bugzilla.gnome.org/show_bug.cgi?id=738242
vorbis_reorder_map is only defined for up to eight channels. If we have
more than eight channels, it's up to the application to define the order.
Since we set the audio channel positions to NONE, we just interleave all
the channels without any particular reordering.
https://bugzilla.gnome.org/show_bug.cgi?id=737742
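A minimal sketch of what that amounts to on the caps/info side (names and
the format choice are illustrative):
#include <gst/audio/audio.h>

/* For more than 8 channels, Vorbis defines no channel order, so expose an
 * unpositioned layout and perform no reordering at all. */
static void
setup_unpositioned_info (GstAudioInfo * info, gint rate, gint channels)
{
  GstAudioChannelPosition pos[64];
  gint i;

  g_return_if_fail (channels > 8 && channels <= 64);

  for (i = 0; i < channels; i++)
    pos[i] = GST_AUDIO_CHANNEL_POSITION_NONE;

  gst_audio_info_set_format (info, GST_AUDIO_FORMAT_F32, rate, channels, pos);
}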
An allocation query failure doesn't mean that the negotiation
has failed, as the element can allocate buffers itself.
Instead, only fail if the pads are flushing when the allocation
query fails.
https://bugzilla.gnome.org/show_bug.cgi?id=735844
The source pad might be flushing while negotiating, resulting in
set_caps or the ALLOCATION query failing. In this case set the
reconfigure flag on the source pad so that negotiation is retried on the
next buffer.
When downstream claims to accept the overlay meta but fails to
provide it in the allocation query, properly fallback to setting
a new caps without the overlay meta as that is not going to be used.
Only do this if the original caps doesn't have the overlay already,
otherwise there isn't much that can be done.
https://bugzilla.gnome.org/show_bug.cgi?id=735800
Setting segment.base in the segment sent from gst_ogg_demux_handle_page() is
enough to ensure that chained oggs are played correctly (see bgo#706569).
Tweaking the base in gst_ogg_pad_submit_packet() as well results in delays
when playing a file with start != -1.
https://bugzilla.gnome.org/show_bug.cgi?id=735808
It's not done in any other code calling negotiate and will cause deadlocks
as it is sending events and queries in the pipeline.
Specifically this pipeline was deadlocking:
gst-launch-1.0 videotestsrc ! textoverlay ! textoverlay ! fakesink
The base time should be accumulated so that non-flushing seeks have the
expected base. Not accumulating it results in segments appearing "too late"
and thus not being played by the sink.
https://bugzilla.gnome.org/show_bug.cgi?id=735509
Make textoverlay negotiate caps more correctly.
1) Check what caps we received on the video-sink
2) If they already have the overlay meta -> use it directly
3) If they don't, textoverlay tries adding the overlay meta and using it;
if downstream doesn't support it, just use what was received on the
video-sink
4) Check if the allocation query also supports the meta, to enable
really using it
Before, it wasn't really doing renegotiation of any kind, just
re-checking whether it should use the overlay meta or not.
Also had to update the caps in the test, as memory:SystemMemory seems
to be required when you use a caps feature, otherwise intersection/subset
checks will fail.
https://bugzilla.gnome.org/show_bug.cgi?id=733916
The headers were never getting reffed when being added to the headers
list, which is later unreffed-and-freed by the caller (e.g.
gst_opus_parse_parse_frame()).
https://bugzilla.gnome.org/show_bug.cgi?id=733013
When a pipeline using alsasink and push mode upstream fails
to preroll, the following state will be the case:
- A loop upstream will be in PAUSED, pushing a first buffer
- alsasink will be in READY with PAUSED pending, because it is async
On error, the pipeline will switch to NULL. alsasink is in
READY, so goes to NULL immediately. It zeroes its cached
caps. Meanwhile, the upstream loop can cause a caps query,
concurrent with the state change. This will use those cached
caps. If the zeroing happens between the NULL test and the
dereferencing, GStreamer will raise a critical warning down in
the GstValue code.
Since it appears that such a gap between states (PAUSED
and pushing upstream, and NULL downstream) is expected, we
need to protect the read/write access to the cached caps.
This fixes the critical.
See https://bugzilla.gnome.org/show_bug.cgi?id=731121
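The protection boils down to something like the following sketch (generic
helper names; the real fix lives in the alsasink getcaps/close paths):
#include <gst/gst.h>

/* Read the cached caps under the object lock so a concurrent clear cannot
 * happen between the NULL check and the dereference. */
static GstCaps *
get_cached_caps (GstObject * sink, GstCaps ** cached_caps)
{
  GstCaps *caps = NULL;

  GST_OBJECT_LOCK (sink);
  if (*cached_caps != NULL)
    caps = gst_caps_ref (*cached_caps);
  GST_OBJECT_UNLOCK (sink);

  return caps;
}

/* Clear the cache under the same lock when going to NULL. */
static void
clear_cached_caps (GstObject * sink, GstCaps ** cached_caps)
{
  GST_OBJECT_LOCK (sink);
  gst_caps_replace (cached_caps, NULL);
  GST_OBJECT_UNLOCK (sink);
}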
This lets oggdemux determine they are not delta units, and removes
spurious per packet warnings about being unable to determine the
packet's keyframeness.
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
This should not cause any actual bug since Theora and Daala have
a maximum shift of 31, and a packet duration of 2^31 seems very
implausible. But it fixes:
Coverity 1139804, 1139803, 1139802
Add an extra function to the oggstream map to inform it about
the incoming buffers. This way oggmux can keep a count on the
vp8 invisible frames and calculate the granulepos correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=722682
The vp8 stream headers shouldn't be assumed to always be provided in the
caps, as this would mean repeating the same code in all demuxers/encoders.
Instead, make oggmux generate them if they are not supplied.
https://bugzilla.gnome.org/show_bug.cgi?id=722682
When seeking back to original state after duration seeks, let
upstream know that we want the whole file, including the last
byte that wasn't requested on the duration seeks.
https://bugzilla.gnome.org/show_bug.cgi?id=724633
Video time uses 'segment' while text time should use 'text_segment'.
If the video and text segments differ, using the wrong one leads to
out-of-sync video/subtitles.
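In other words (a small sketch, showing the conversion on the text path
only):
#include <gst/gst.h>

/* The text pad has its own segment; converting a text timestamp with the
 * video segment would drift whenever the two segments differ. */
static GstClockTime
text_running_time (const GstSegment * text_segment, GstClockTime text_pts)
{
  return gst_segment_to_running_time (text_segment, GST_FORMAT_TIME, text_pts);
}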
A change in gst_ogg_demux_do_seek caused oggdemux to wait for
a page for each of the streams, including a skeleton stream if
one was present. Since Skeleton only has header pages, that
was never going to end well.
Also, the code was skipping CMML streams when looking for pages,
so would also have broken on CMML streams.
Thus, we change the code to disregard Skeleton streams, as well
as discontinuous streams (such as CMML and Kate). While it may
be desirable to consider Kate streams too (in order to avoid
losing a subtitle starting near the seek point), this may be
a performance drag when seeking where no subtitles are. Maybe
one could add a "give up" threshold for such discontinuous
streams, so we'd get any page if there is one, but do not end
up reading preposterous amounts of data otherwise.
In any case, it is important that the code that determines
the number of streams to look for pages for remains consistent with
the "early out" conditions of the code that actually parses
the incoming pages, lest we never decrease the pending counter
to zero.
This fixes seeking on a file with a skeleton track, which previously
read the whole file on each seek.
https://bugzilla.gnome.org/show_bug.cgi?id=719615
Ogg data is read chunk by chunk, and the chunk size used was
originally taken from libvorbisfile. However, this value leads
to poor performance when used on an Ogg file with large pages
(Ogg pages can be close to 64 KB).
We can't just use a larger chunk size, since this will decrease
performance on small page streams, so we use an adaptive scheme
where the chunk size is twice the largest page size we've seen
so far in the stream. For "typical" Ogg/Vorbis, this gives us
almost the same chunk size (a bit lower), and this lets us get
better performance on streams with large pages.
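The scheme is roughly the following (a sketch; names and the starting value
are illustrative, 8500 being a libvorbisfile-style default):
#include <glib.h>

#define INITIAL_CHUNK_SIZE 8500   /* small default, good for small pages */

typedef struct {
  gsize chunk_size;   /* how much to pull from upstream per read */
} OggReadState;

/* Call this for every page we parse: keep the chunk size at twice the
 * largest page seen so far, so large-page streams need fewer reads while
 * small-page streams keep a small read size. */
static void
update_chunk_size (OggReadState * state, gsize page_size)
{
  gsize target = 2 * page_size;

  if (state->chunk_size < INITIAL_CHUNK_SIZE)
    state->chunk_size = INITIAL_CHUNK_SIZE;
  if (target > state->chunk_size)
    state->chunk_size = target;
}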
1275 is the maximum size of a single frame, but the encoder may return
up to 3 frames, and we need a few extra bytes for the TOC, etc. We use
4000, which is a bit more and is the value suggested in the libopus docs.
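For illustration, this is the kind of bound being talked about (a sketch
around libopus' opus_encode()):
#include <opus/opus.h>

#define MAX_PAYLOAD_SIZE 4000   /* > 3 * 1275 plus TOC/padding headroom */

/* Returns the packet length in bytes, or a negative Opus error code. */
static int
encode_one_packet (OpusEncoder * enc, const opus_int16 * pcm, int frame_size,
    unsigned char out[MAX_PAYLOAD_SIZE])
{
  return opus_encode (enc, pcm, frame_size, out, MAX_PAYLOAD_SIZE);
}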
In case we receive a flush event before having our caps set, we will
end up trying to create a theora encoder even though we are not ready.
Avoid that situation by making sure we are initialized before accepting
to be flushed.
https://bugzilla.gnome.org/show_bug.cgi?id=709858
Initial support for the new ALSA chmap API.
Just translate the current chmap to GstAudioChannelPosition during
setup. There is no function to specify the channel map manually yet, so
it is still impossible to assign any non-standard positions or to
configure a different order, even if the hardware allows it.
https://bugzilla.gnome.org/show_bug.cgi?id=709755
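The translation is essentially a table lookup from ALSA's chmap positions to
GstAudioChannelPosition; a partial sketch (helper names illustrative, only a
few positions shown):
#include <stdlib.h>
#include <alsa/asoundlib.h>
#include <gst/audio/audio.h>

/* Partial sketch: translate an ALSA chmap into GStreamer channel positions.
 * Only a few positions are shown; real code covers the whole table. */
static GstAudioChannelPosition
chmap_position_to_gst (unsigned int pos)
{
  switch (pos) {
    case SND_CHMAP_MONO: return GST_AUDIO_CHANNEL_POSITION_MONO;
    case SND_CHMAP_FL:   return GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT;
    case SND_CHMAP_FR:   return GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT;
    case SND_CHMAP_FC:   return GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER;
    case SND_CHMAP_LFE:  return GST_AUDIO_CHANNEL_POSITION_LFE1;
    case SND_CHMAP_RL:   return GST_AUDIO_CHANNEL_POSITION_REAR_LEFT;
    case SND_CHMAP_RR:   return GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT;
    default:             return GST_AUDIO_CHANNEL_POSITION_INVALID;
  }
}

static gboolean
query_chmap (snd_pcm_t * pcm, GstAudioChannelPosition * out, guint channels)
{
  snd_pcm_chmap_t *chmap = snd_pcm_get_chmap (pcm);
  guint i;

  if (chmap == NULL || chmap->channels != channels) {
    free (chmap);
    return FALSE;
  }
  for (i = 0; i < channels; i++)
    out[i] = chmap_position_to_gst (chmap->pos[i]);
  free (chmap);
  return TRUE;
}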
Store the seek stop and seqnum and properly restore them when
receiving the corresponding Segment from upstream. Also fixes
seqnum for converted seek events.
When bisecting after an earliest time has been found, we need
to only consider the stream for which the earliest time was found.
Before, the following scenario could be and was encountered:
a) Find the earliest time for stream X
b) bisect and find a page whose granuletime is indeed < target, but
which contains another stream.
c) decide to seek at the wrong offset, sometimes lower than
the real one, in which case the error went undetected, or
d) the offset was higher, and thus the actual target keyframe was
not processed, and packets were skipped while waiting
for a granulepos.
https://bugzilla.gnome.org/show_bug.cgi?id=700537
The problem experienced is that the EOS was never emitted by oggmux during a
rendering with GES. The proposed patch checks if the pad is EOS before deciding
it's the "best pad".
https://bugzilla.gnome.org/show_bug.cgi?id=699792
If our previous flow return was NOT_LINKED, don't try to push on the pads some
more. If we get a RECONFIGURE event on the pad, try to push on it again.
Changed the check so that a current_time equal to the stop produces
EOS, rather than only the next one. Also, segment.start can't be NONE,
so remove that check.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=696899
gst_query_set_nth_allocation_pool() requires there to be a pool in the
query already. This is not always the case when we get the query from
upstream. Use gst_query_add_allocation_pool() instead in such case.
https://bugzilla.gnome.org/show_bug.cgi?id=681719
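A minimal sketch of the resulting logic (helper name illustrative):
#include <gst/gst.h>

/* Only update an existing pool entry; if upstream's query carries no pool
 * yet, add one instead of trying to set a non-existent entry. */
static void
update_or_add_pool (GstQuery * query, GstBufferPool * pool, guint size,
    guint min, guint max)
{
  if (gst_query_get_n_allocation_pools (query) > 0)
    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
  else
    gst_query_add_allocation_pool (query, pool, size, min, max);
}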
Instead, remember we need a keyframe, and we will force the encoder
to emit one next time we submit a new frame.
Since libtheora does not have an API to request a keyframe, we reset
the max keyframe interval to 1 temporarily.
This has the advantage that the rate control keeps its history,
and that the encoder won't choose different quant tables or
somesuch, thus requiring new streamheaders (although this is
probably only a theoretical possibility). Should also be a
bit faster than resetting the encoder.
https://bugzilla.gnome.org/show_bug.cgi?id=663350
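Roughly, the trick looks like this (a sketch around the libtheora API; the
wrapper name and the passing of the configured interval are illustrative):
#include <theora/theoraenc.h>

/* Temporarily drop the forced keyframe frequency to 1 around the next
 * submitted frame, then restore the configured interval. This keeps the
 * encoder's rate-control history intact. */
static int
encode_forced_keyframe (th_enc_ctx * enc, th_ycbcr_buffer frame,
    ogg_uint32_t configured_freq)
{
  ogg_uint32_t one = 1;
  int ret;

  th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE, &one,
      sizeof (one));

  ret = th_encode_ycbcr_in (enc, frame);

  th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE,
      &configured_freq, sizeof (configured_freq));

  return ret;
}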
The current code is memsetting the GstVideoFrame.data address to 0s (which
causes a segfault). This member is actually an array of data buffers (one for
each plane). This fix iterates over each data plane to clear them all.
https://bugzilla.gnome.org/show_bug.cgi?id=695655
No need to copy buffers we put into the streamheader any more
now that we don't put caps on buffers any more, so there's no
danger of a refcount cycle.
Really really fix attribute list handling by taking a
copy of the original attributes that pango_attr_list_filter
can mutate, but keep the original around intact to restore
later.
The root cause is that alsa-lib is not thread-safe for the same handle:
there are two threads in GStreamer accessing alsa-lib without serialization.
The race condition happens when one thread holds the old ring-buffer app_ptr
position from the kernel while another thread advances the app_ptr. When the
former thread is scheduled to run again, it overwrites the app_ptr with the
old value copied from the kernel. Thus the app_ptr in the upper alsa-lib
(pcm_rate) becomes one period size more advanced than in the lower alsa-lib
(pcm_hw & kernel).
GStreamer uses non-blocking writes plus poll to communicate with alsa-lib.
The out-of-sync app_ptr described above makes poll return immediately,
because the low-level alsa-lib concludes there is enough space in the ring
buffer, while the write function also returns immediately, because the
upper-level alsa-lib concludes there is not enough space. The poll/write
loop then spins until another period size becomes available in the ring
buffer. This leads to 100% CPU usage.
The delay_lock is used to avoid the race condition.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=690937
Don't loop forever if a USB audio device gets disconnected
while in use. Post an error message instead. This is not
enough yet though, we still need to make the base class
and/or the ring buffer bail out.
https://bugzilla.gnome.org/show_bug.cgi?id=690197
The format probing code was assuming there'd be one caps
structure for each separate width/depth combination, like
we did in 0.10 all over the place: for one, we'd query
unsigned/signed formats together for the same width/depth,
and we'd add the entire current structure to the probed
caps when we found a supported format. Now that we have
all raw formats in a single structure, this no longer
works so well: we'd add the entire structure with
all possible formats to the caps if we supported just one format.
Fix probing so that we only return the list of actually
supported raw audio formats (with native endianness) from
get_caps().
opus + jpegformat plugin builds fail when gstreamer is configured with
--disable-gst-debug as they are checking the GST_DISABLE_DEBUG symbol
instead of GST_DISABLE_GST_DEBUG.
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
https://bugzilla.gnome.org/show_bug.cgi?id=683850
Make it possible for subclasses to provide the timestamp (as an absolute time
against the pipeline clock) of the last read data.
Fix up alsa to provide the timestamp received from alsa. Because the alsa
timestamps are in monotonic time, we can only do this when the monotonic clock
has been selected as the pipeline clock.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=635256
This reverts part of "visual: enable commented out code again."
(commit 8222ba16c8).
The shader code does indeed look broken (or rather,
it makes assumptions that are not necessarily true here,
namely that the pixel stride is 4, for example), which
makes totem very crashy and causes other weird behaviour.
Also see https://bugzilla.gnome.org/show_bug.cgi?id=683527