This moves AAC profile detection to pbutils and uses it in
typefindfunctions. It will also be used in qtdemux.
https://bugzilla.gnome.org/show_bug.cgi?id=617314
API: gst_codec_utils_aac_get_profile()
API: gst_codec_utils_aac_caps_set_level_and_profile()
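A rough sketch of how a demuxer might use this; the esds_data/esds_len
variables are hypothetical, standing in for the stream's codec data:

  #include <gst/pbutils/pbutils.h>

  /* esds_data/esds_len: the stream's AudioSpecificConfig (hypothetical) */
  GstCaps *caps = gst_caps_new_simple ("audio/mpeg",
      "mpegversion", G_TYPE_INT, 4, NULL);
  const gchar *profile = gst_codec_utils_aac_get_profile (esds_data, esds_len);

  /* or let the helper set both profile and level on the caps directly */
  gst_codec_utils_aac_caps_set_level_and_profile (caps, esds_data, esds_len);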
This allows us to add generic codec-specific functionality, like
extracting profile/level data from headers, without having to duplicate
code across demuxers and typefindfunctions.
As a starting point, this moves over AAC level extraction code from
typefindfunctions, so it can be reused in qtdemux, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=617314
API: gst_codec_utils_aac_get_sample_rate_from_index()
API: gst_codec_utils_aac_get_level()
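As a hedged illustration of the two new calls (the index and the buffer
below are made-up inputs, not taken from this commit):

  #include <gst/pbutils/pbutils.h>

  /* maps the 4-bit sampling frequency index from an AudioSpecificConfig
   * to a rate in Hz; index 4 corresponds to 44100 */
  guint rate = gst_codec_utils_aac_get_sample_rate_from_index (4);

  /* audio_config/config_len: raw AudioSpecificConfig bytes (hypothetical) */
  const gchar *level = gst_codec_utils_aac_get_level (audio_config, config_len);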
Where it was previously located, we would get async-done for the first
unknown-type, even if other valid streams appeared afterwards.
decode_bin_expose() will take care of posting async-done when the group
is exposed.
But we still want to post it in case the typefinding returned an unknown
type, in which case we will post it after posting an error.
These two changes ensure we do as much as possible before posting async-done.
Replace moving-color-bars pattern with smpte100, and change
moving-speed to horizontal-speed. Default is now 0. Add
a rotation stage to pattern building.
Allocate a temporary scanline for building images. Remove
unused code. Disable several patterns that we're unable to
test and probably never used. Add other variants of bayer
sampling. Convert some patterns to use videotestsrc_blend_line.
Replace solid-color property with foreground-color and add
background-color. Pull some common code out of each of the
pattern generating functions. Fix many of the patterns to
use foreground-color/background-color instead of white/black.
Generated images are identical to before if foreground-color
and background-color are left at their defaults.
API: GstVideoTestSrc::foreground-color
API: GstVideoTestSrc::background-color
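Assuming the two new properties take packed ARGB values like the old
solid-color property did, a minimal usage sketch:

  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);

  /* e.g. semi-transparent red foreground on an opaque blue background */
  g_object_set (src,
      "foreground-color", 0x80ff0000,
      "background-color", 0xff0000ff,
      NULL);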
Send FLUSH_STOP right after forwarding the seek event upstream if necessary.
This makes sure that adder->srcpad is not left flushing if seeking fails or if
upstream is blocked.
The same fix was already applied to videomixer in 49b2a946.
This should speed up standard Vorbis encoding and decoding pipelines a bit.
Thanks to David Schleef for the assistance in getting the ORC code right
and for explaining everything.
We currently don't use the GAP flag for video, and the docs say
that this flag is for buffers that have been created to fill a gap
and contain neutral data. For video this is the previous frame.
This information can be used by encoders to encode the duplicated
frames more efficiently. See bug #627459.
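A sketch of what such an encoder check could look like:

  /* 'buf' is the incoming video buffer; if it only repeats the previous
   * frame, the encoder could emit a cheap skipped/repeated frame instead */
  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_GAP)) {
    /* encode as a duplicate of the previous frame */
  }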
That is, if an EOS is received that will not be forwarded, and the stream
has not yet seen any data, then send a buffer to preroll downstream
(which might otherwise have been accomplished by the EOS event).
Streamsynchronizer expects to see stream-changed msg for all streams, but to
arrange for this, video and subtitle streams need to be decoupled by means
of queues (due to pad blocks that may occur).
Fixes#626463.
Specifically, as the latter may have one thread pushing EOS to several streams,
that needs to be decoupled into various threads to prevent preroll hanging
problems.
Otherwise we're producing different caps and basetransform thinks that it
can't passthrough buffer allocations, etc.
In 0.11 all video caps really should have the PAR set...
... which generalizes the current listing of white, black, etc.
In particular, also allow specifying alpha channel, and modify
some structures and pattern filling to cater for alpha value as well.
Fixes#624919.
API: GstVideoTestSrc:solid-color
This fixes a race condition in playbin2's gapless mode, where the
EOS of other streams might arrive in the sinks before the last stream
ends and the switch to the new track happens. The EOS sinks won't
accept any new data then and playback stops.
To prevent this, delay all EOS events until all streams are EOS
and advance the sinks of the EOS streams by filler newsegment
events if necessary.
Fixes bug #625118.
This reads the 3gp profile from the major/compatible brands and puts
this as a 'profile' field in caps. This can be used by demuxers to
decide whether they can handle this stream or not. Also needed for
DLNA.
https://bugzilla.gnome.org/show_bug.cgi?id=620291
Logic for choice of GST_PAD_LINK_CHECK_* is as follows:
* Where return of pad_link wasn't checked before : NOTHING
* Where linking is between known compatible elements : NOTHING
* All other cases : TEMPLATE_CAPS
Cuts playsink reconfigure CPU time by up to 50%.
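In code this boils down to picking the right flag for gst_pad_link_full(),
roughly:

  /* both elements are ours and known compatible: skip all checks */
  gst_pad_link_full (srcpad, sinkpad, GST_PAD_LINK_CHECK_NOTHING);

  /* all other cases: only compare the pad template caps */
  gst_pad_link_full (srcpad, sinkpad, GST_PAD_LINK_CHECK_TEMPLATE_CAPS);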
This makes sure that we always keep the display aspect ratio and
add black borders if necessary, which is usually something you want
for viewing a video.
This behaviour was not preferred and caused visible image quality
degradations. The real solution would be to apply a real
deinterlacing filter before scaling the frames.
Fixes bug #615471.
We only look for packets with payload, but it appears there may be packets without,
which makes it harder to find the N consecutive packets with payload that we need
in order to typefind this successfully, so scan some more data than necessary in
the optimistic scenario. Alternatively we could change IS_MPEGTS_HEADER().
Fixes#623663.
Before, gapless playback failed when switching between audio-only,
video-only and audio-video files, when choosing different clocks
and when the different streams had different durations.
This is now handled by a helper element, which keeps track of the
running times of all streams and synchronizes them.
Fixes bug #602437.
.weba (audio) and .webv (video) were speculation on my part before
the public launch. As of yet no decision has been made on the
file extension for audio-only WebM, and I'm pretty sure there will
never be one for video-only.
Fixes bug #623837.
Fixes spurious errors that happen when playing a working stream after a
previous error, as well as signals that are emitted for non-active groups.
Fixes bug #624266.
This reverts commit 9d7538247f.
If the DVD subpicture caps are not part of the raw caps, uridecodebin
doesn't qualify resindvdbin as raw source and plugs decodebins, which
causes broken DVD playback because of bugs elsewhere.
This change was originally added to only expose supported, raw subtitles,
e.g. if the subtitle sink did not support DVD subpictures but a converter
to some supported format exists. It's not very important right now because
we have nothing (that is autoplugged) to convert from plaintext/pango-markup
or DVD subpictures to something else.
Fixes bug #623583.
Otherwise the uridecodebin will be still a child of playbin2 and
its signals will still be connected. In future state changes this
will then emit unrelated signals that will confuse playbin2 or,
even worse, cause crashes and assertions.
Fixes bug #623318.
If an error happens, the PAUSED state will never be reached. If an
application re-uses decodebin2 (like totem) where one would normally set it
to READY between each file, the cleanup that normally happens in
the PAUSED=>READY codepath will never be called, resulting in the
following file re-using the previous demuxer/decoder/...
https://bugzilla.gnome.org/show_bug.cgi?id=622807
We need to clear the pointer to our ts-offset element when we destroy the video
chain elements to make sure nobody derefs it to invalid memory afterwards.
Otherwise we would end up with a bogus ->audiochain->ts_offset field
which would cause segfaults/assertions when trying to modify the
'ts-offset' property in update_av_offset().
Was easy to trigger when using a list of audio+video files mixed with
video-only files in totem.
Use the pad caps when they are available to continue the autoplugging. If the
pad caps are set, they are fixed and then we can directly continue autoplugging.
Use an accumulator for the autoplug-sort signal so that we can stop the emission
when a signal handler produced a valid result. This prevents the object's default
handler from overwriting the results from user signals.
Fixes#621161
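For reference, a user handler could then look roughly like this; the signal
signature below is written from memory and should be checked against the
decodebin2 docs:

  static GValueArray *
  on_autoplug_sort (GstElement * bin, GstPad * pad, GstCaps * caps,
      GValueArray * factories, gpointer user_data)
  {
    /* reorder or filter 'factories' here as needed; returning a non-NULL
     * array counts as a valid result and stops the emission */
    return g_value_array_copy (factories);
  }

  /* ... during setup: */
  g_signal_connect (decodebin, "autoplug-sort",
      G_CALLBACK (on_autoplug_sort), NULL);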
Scan a bit into the data when checking for dts frames instead
of expecting the frame sync to be right at the start of the
data. This is needed for some dts-disguised-as-pcm-in-wav files.
See #413942.
Orc is not a hard requirement. Things should still compile and
work without orc, but slow fallback code may be used in this
case. Fix up configure to not error out if orc is not installed
and wrap use of orc profiling in audioresample in #ifdefs.
Fixes#620136 some more.
Make jpeg typefinder check more than just the first two bytes
plus Exif or JFIF marker. This allows us to report MAXIMUM
probability in cases where there's no Exif or JFIF marker,
making typefinding stop early. Also extract width and height,
because we can.
Fix typo that made the AC-3 typefinder not actually check for a
second frame, but rather compare the sync point found to itself,
which resulted in the AC-3 typefinder reporting an overly optimistic
MAXIMUM or VERY_LIKELY probability when it found a possible frame
sync.
Move the convert_frame function to playsink and make it part of the API. This is
in preparation to add the convert_frame signal to playsink.
See #620279
If a file contains raw streams (not requiring a decoder) that we do
not want (expose-all-streams == FALSE), we would previously consider
those to be of unknown type (missing a decoder) ... whereas in fact it was just
because they don't need decoders.
This only applies if expose-all-streams is FALSE.
* don't re-create our possible caps every single time, just use the
template caps.
* don't intersect the caps against the template, basetransform has already
done that for us.
62% speedup of _transform_caps() (instruction calls, measured with callgrind)
API: expose-all-streams
If disabled:
* only the streams that CAN be decoded and match the final caps will have a
decoder plugged in and be exposed.
* the streams that COULD HAVE BEEN decoded but do not match the final caps
will not have a decoder plugged in and will not be exposed.
If no decoder is available to decode a certain stream, then the missing element
message will still be emitted regardless of the value of the property.
https://bugzilla.gnome.org/show_bug.cgi?id=617868
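A hedged example of the property in use (the caps shown are just an
illustration, not mandated by this commit):

  GstElement *dec = gst_element_factory_make ("decodebin2", NULL);
  GstCaps *caps = gst_caps_from_string ("audio/x-raw-int; audio/x-raw-float");

  /* only streams decodable to something matching 'caps' get exposed */
  g_object_set (dec, "caps", caps, "expose-all-streams", FALSE, NULL);
  gst_caps_unref (caps);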
Adder was using always-incrementing timestamps. Seeking was done by setting the
position in the newsegment event. This was failing when doing segmented seeks
with rate<0.0, as offset (and thus timestamp) would go below 0.
Now we take both cur and end from the seek event. We construct newsegment events
including cur and end from the seek event. We set position to the
start of the segment. Timestamp is set to start or end of segment depending on
rate. Offset is recalculated.
Use foo_LDADD instead of foo_LDFLAGS to specify the libraries to link to.
This should make sure arguments are passed to the linker in the right
order, and makes LDFLAGS usable again.
Based on initial patch by Brian Cameron <brian.cameron@oracle.com>
Fixes#615697.
This adds code to calculate the level for a given AAC stream and export
it in the stream caps. For AAC LC streams, the level is calculated
according to the definition under the AAC Profile. For other streams,
the definition under the Main Profile is used.
HE-AAC support is still to be done, and is dependent on detecting the
presence of SBR and PS in the stream.
Level is added as a field of type string because that's the way it's
done in H.264 caps as well. There are only a few possible levels, so
not using a numerical type is not too painful in this case, and
consistency is nice.
Fixes#613589.
This looks at the AAC profile for ADTS streams and adds the profile as a
string in the corresponding caps.
Profile is the actual profile, base-profile denotes the minimum codec
requirements to decode this stream. In this case they're always the
same, but they may differ e.g. in case of certain HE-AAC streams that
can be partially decoded by LC decoders (with loss of quality of course)
if no suitable HE-AAC decoder is available.
Fixes#612312.
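So caps for such a stream could carry both fields, for example (the field
values here are purely illustrative):

  /* an HE-AAC stream that an LC decoder can partially decode */
  gst_caps_set_simple (caps,
      "profile", G_TYPE_STRING, "he-aac-v1",
      "base-profile", G_TYPE_STRING, "lc",
      NULL);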
Decrement sample counter when playing backwards. Set proper segment when playing
backwards (0..cur instead of cur..-1). Add more logging and fix a format string.
Unreffing it whenever the sinks are removed will make the volume
element unavailable after a playbin reuse because it is only
recreated if the audio sink has changed.
Fixes bug #614288.
In reverse mode we want to use the next next timestamp (and not the other way
around). Fixes the tests again. Also re-add a log line that was dropped with the
previous commit.
We know our plugins and examples are independent of each other, so we may
just as well build them in parallel. Makes the output a bit messy, but
that shouldn't be a problem and can easily be avoided with make -j1.
And fix the resulting compile failures.
I'm sorry about the patch necessary to gstclockoverlay.h but after
talking to Tim we decided we can live with it.
Change playbin2 to not error out if there are subtitles and audio
but no video. If visualizations are enabled the subtitles are rendered on top
of the visualization stream, otherwise the subtitles are not linked at all and
only the audio is played (and a warning message is posted).
If there are only subtitles but neither audio nor video an error message is
still posted.
Fixes bug #610866.
For this add subtitle encoding properties to playsink and subtitleoverlay
and update the values in the containing elements.
Also update the font description in textoverlay or the used renderer
element if it is changed during playback.
Fixes bug #610310.
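From application code the corresponding playbin2 properties would be used
roughly like this (the values are just examples):

  /* playbin2 forwards these to playsink/subtitleoverlay internally */
  g_object_set (playbin,
      "subtitle-encoding", "GBK",
      "subtitle-font-desc", "Sans Bold 24",
      NULL);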
Use the same translated message string for missing core elements as
playbin uses, which is a bit nicer and also indicates that there is
something wrong with the user's GStreamer installation (which arguably
is the case if elements like typefind or queue2 are missing).
Otherwise the ghostpad will still be linked to the peer and there
will still be a reference kept, leading to nothing being unlinked
and destroyed until decodebin2 is finalized.
This fixes reuse of decodebin2 if a raw stream is connected to
its sinkpad.
This makes sure that we don't destroy the last reference before the
element gets back to NULL state. Fixes assertion failures if a playbin2
instance is reused but different sinks are automatically chosen because
of different caps.
This reverts commit 7335ce5d3e.
Support abusing the uri property to configure the next uri to play
outside of the about-to-finish handler for the time being after all.
We also shouldn't use thread private structures for this, since it
should be possible to block the thread that emitted about-to-finish
while the main thread sets the uri property. See #607226.
When reusing a decodebin2 element, clear the properties we might have changed
to their default values, or else we might end up with old configuration.
Fixes#608484
Make AC-3 typefinder use the DataScanCtx stuff so we don't have to
do gst_type_find_peek() in the inner loop all the time. Also return
when we've suggested AC3 caps, instead of continuing with the loop.
When we are dealing with a source that produces raw audio/video, we don't use a
decodebin2 to decode the data and we thus don't have the drained/about-to-finish
signal emitted. To fix this, we add a pad probe on the source pads and emit the
drained signal ourselves. This then makes playbin2 emit the about-to-finish
signal for raw sources such as cdda://
Fixes#607116
Add PNM typefinder, so we can remove the one that's in the PNM plugin
in -bad (which btw uses different/wrong media types that don't match
the ones used by gdkpixbufdec) and people don't make fun of us for
loading image decoders when typefinding and playing back audio files.
We don't want to end up setting values on elements where the property is of
a different type than we expect. Can't transform the value either, since we
can't really make assumptions about the scale and transform function.
Fixes crashes when using playbin2 with apexsink (#606949).
Changing the URIs in a state > READY results in unexpected behaviour,
i.e. the new URIs are only used after the current track has finished.
Fixes bug #607226.
In this case the video still goes through the text chain and
subtitles are still going in there, in case subtitles are
enabled again. This makes sure that re-enabling subtitles
happens instantly.
Fixes hanging video when disabling subtitles, caused by an
unlinked video pad.
Detect EOS faster.
Try to reuse one of the input buffer as the output buffer. This usually works
and avoids an allocation and a memcpy.
Be smarter with GAP buffers so that they don't get mixed or cleared at all. Also
try to use a GAP buffer as the output buffer when all input buffers are GAP
buffers.
It may not be uncommon for the input timestamps to experience some jitter
around the 'perfect time'. As such, instead of regularly adding and dropping
samples, optionally allow for some tolerance in a more relaxed approach.
API: GstAudioRate:tolerance
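Assuming the tolerance is expressed in nanoseconds like other GStreamer time
values, usage might look like:

  GstElement *rate = gst_element_factory_make ("audiorate", NULL);

  /* accept up to 40ms of timestamp jitter before adding/dropping samples
   * (the value is just an example) */
  g_object_set (rate, "tolerance", (guint64) (40 * GST_MSECOND), NULL);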
This is necessary because the sinks don't notice the group switches
and the decoders/demuxers have a different running time than the
sinks.
Fixes bug #537050.
In some cases (all buffers dropped by a parser) a decodebin2
chain might receive an EOS before it gets enough data to
expose a decoded pad. In the case that no streams can expose
a pad we should error out instead of hanging.
Fixes#542758
Just counting how many messages were sent and how many were received
is not good enough because they might've been duplicated (e.g. by the
visualization audio tee). Comparing the sequence numbers should give
better results in that case.
Otherwise the async state change from READY->PAUSED of the
uridecodebins will take playbin2 from PLAYING->PAUSED again
during gapless group switches.
Fixes bug #602000.
When a decodebin2 receives no-more-pads of a group, it
can switch that group's multiqueue buffering thresholds to
the 'playing' buffering method, so that it doesn't buffer
too long and cause problems when used with queue2.
See the associated bug for details.
Fixes#600787
During a group switch return the cached duration of the old group
because the old group hasn't finished playback yet. If we have no
cached duration return FALSE.
Fixes bug #585969.
Make sure to only "simulate" subtitle no-more-pads if it was still
pending and also handle errors in the subtitle pipeline as warnings
after the subtitles prerolled.
Don't set the suburidecodebin to READY after errors, handle_message
will usually be called from the streaming thread and doing that
from there is obviously not a good idea.
Now the caps property isn't set to the subtitle caps anymore;
instead, the autoplug-continue signal is used to detect whether
the caps belong to a supported subtitle stream.
This makes automatic use of newly installed plugins.
First of all, make sure that suburidecodebin never
errors out because of not-linked in case external subtitles
are used but then subtitles are disabled.
And then make sure that external subtitles always start from
the correct position and are not racing until EOS if they
get unselected and selected again.