with i686-apple-darwin10-gcc-4.2.1:
encoding-profile.h:134: warning: type qualifiers ignored on function return type
encoding-profile.c:240: warning: type qualifiers ignored on function return type
gstencodebin.c: In function 'next_unused_stream_profile':
gstencodebin.c:454: warning: format '%d' expects type 'int', but argument 8 has type 'GType'
gstencodebin.c:464: warning: format '%d' expects type 'int', but argument 8 has type 'GType'
Since we calculate timestamps by:
timestamp = t0 + (out samples) / (out rate)
and durations by:
duration = ((out samples) + (processed samples)) / (out rate) - timestamp
if t0 is nonzero, this would simplify to
duration = (processed samples) / (out rate) - t0.
This duration is too small by the amount t0. We should have done:
duration = t0 + ((out samples) + (processed samples)) / (out rate) - timestamp
so that
duration = (processed samples) / (out rate).
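In code, a minimal sketch of the corrected calculation (t0, out_samples,
processed_samples and out_rate are illustrative names, not the element's
actual fields):

  GstClockTime timestamp, duration;

  timestamp = t0 + gst_util_uint64_scale_int (out_samples, GST_SECOND, out_rate);
  duration = t0 + gst_util_uint64_scale_int (out_samples + processed_samples,
      GST_SECOND, out_rate) - timestamp;
  /* simplifies to processed_samples / out_rate, independent of t0 */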
Frame size is given in words; it is already multiplied by two where
needed, so the left shift is superfluous. This extra multiplication
caused the code to inspect the third packet instead of the second,
which would fail for files where the second packet has a size
different from the first.
Some things aren't quite right yet and cause problems (0-sized buffers
with PREROLL flag set cause crashes in elements that don't expect those;
getting the pipeline back to preroll/playing again when audio/video streams
have different lengths and a seek past the end of one of the streams
happens doesn't always work, etc.). Needs further investigation in the
next cycle.
https://bugzilla.gnome.org/show_bug.cgi?id=633700
https://bugzilla.gnome.org/show_bug.cgi?id=634699
Fix conversions to IYU1; they allocated unbounded amounts of memory before
because no conversion to IYU1 was actually implemented and the code ran
into an infinite loop trying to find suitable intermediate formats.
Also fix the stride and sizes used for IYU1.
Fix a bug when reconfiguring the playsink where the subpicture
stream is broken by attempting to connect it through the
streamsynchronizer a second time.
Going through integer arithmetic introduces small rounding errors,
leading to +/-1 changes even for volume==1.0. Implement the controlled
processing with floating-point arithmetic, as was already done
for the C versions anyway.
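As an illustration of the problem (not the element's actual fixed-point
format, which may differ): representing unity gain as 32767 in Q15 fixed
point already changes large samples by one, while the float path is exact:

  gint16 in = 30000;
  gint16 fixed = (gint16) (((gint32) in * 32767) >> 15);  /* 29999, off by one */
  gint16 flt = (gint16) (in * 1.0);                       /* 30000, exact */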
Advance stop times too when they get higher than the stop time of the
segments, avoiding assertions.
The stop time has to be advanced too so that the running time keeps in sync
for gapless mode.
https://bugzilla.gnome.org/show_bug.cgi?id=631312
This moves AAC profile detection to pbutils, and uses this in
typefindfunctions. This will also be used in qtdemux.
https://bugzilla.gnome.org/show_bug.cgi?id=617314
API: gst_codec_utils_aac_get_profile()
API: gst_codec_utils_aac_caps_set_level_and_profile()
This allows us to add generic codec-specific functionality, like
extracting profile/level data from headers, without having to duplicate
code across demuxers and typefindfunctions.
As a starting point, this moves over AAC level extraction code from
typefindfunctions, so it can be reused in qtdemux, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=617314
API: gst_codec_utils_aac_get_sample_rate_from_index()
API: gst_codec_utils_aac_get_level()
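A minimal usage sketch of these helpers; set_aac_caps_fields is a
hypothetical helper, and codec_data is assumed to hold at least the first
two bytes of the AudioSpecificConfig as extracted from the container:

  #include <gst/gst.h>
  #include <gst/pbutils/codec-utils.h>

  static void
  set_aac_caps_fields (GstCaps * caps, const guint8 * codec_data, guint len)
  {
    /* AudioSpecificConfig: 5 bits object type, 4 bits sample rate index */
    guint sr_idx = ((codec_data[0] & 0x07) << 1) | (codec_data[1] >> 7);

    GST_LOG ("profile %s, level %s, rate %u",
        GST_STR_NULL (gst_codec_utils_aac_get_profile (codec_data, len)),
        GST_STR_NULL (gst_codec_utils_aac_get_level (codec_data, len)),
        gst_codec_utils_aac_get_sample_rate_from_index (sr_idx));

    gst_codec_utils_aac_caps_set_level_and_profile (caps, codec_data, len);
  }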
Where it was previously located, we would get async-done for the first
unknown-type, even if other valid streams would appear afterwards.
decode_bin_expose() will take care of posting async-done when the group
is exposed.
But we still want to post it in case the typefinding returned an unknown
type, in which case we will post it after posting an error.
These two changes ensure we do as much as possible before posting async-done.
Replace moving-color-bars pattern with smpte100, and change
moving-speed to horizontal-speed. Default is now 0. Add
a rotation stage to pattern building.
Allocate a temporary scanline for building images. Remove
unused code. Disable several patterns that we're unable to
test and that were probably never used. Add other variants of Bayer
sampling. Convert some patterns to use videotestsrc_blend_line.
Replace solid-color property with foreground-color and add
background-color. Pull some common code out of each of the
pattern generating functions. Fix many of the patterns to
use foreground-color/background-color instead of white/black.
Generated images are identical to before if foreground-color
and background-color are left at their defaults.
API: GstVideoTestSrc::foreground-color
API: GstVideoTestSrc::background-color
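A quick usage sketch (the color values are arbitrary 0xAARRGGBB examples):

  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);

  /* solid red foreground on a black background; with the defaults the
   * generated output stays the same as before */
  g_object_set (src,
      "foreground-color", 0xffff0000,
      "background-color", 0xff000000,
      NULL);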
Send FLUSH_STOP right after forwarding the seek event upstream if necessary.
This makes sure that adder->srcpad is not left flushing if seeking fails or if
upstream is blocked.
The same fix was already applied to videomixer in 49b2a946.
This should speed up standard Vorbis encoding and decoding pipelines a bit.
Thanks to David Schleef for the assistance in getting the ORC code right
and for explaining everything.
We currently don't use the GAP flag for video, and the docs say
that this flag is for buffers that have been created to fill a gap
and contain neutral data. For video this is the previous frame.
This information can be used by encoders to encode the duplicated
frames more efficiently. See bug #627459.
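For example, an encoder's chain function could special-case such buffers
cheaply (a hypothetical sketch, not code from an existing element):

  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_GAP)) {
    /* contents merely repeat the previous frame; a smart encoder could
     * emit a skip/repeat frame instead of re-encoding the data */
  }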
That is, if an EOS is received that will not be forwarded, and the stream
has not yet seen any data, then send a buffer to preroll downstream
(which might otherwise have been accomplished by the EOS event).
Streamsynchronizer expects to see stream-changed messages for all streams, but to
arrange for this, video and subtitle streams need to be decoupled by means
of queues (due to pad blocks that may occur).
Fixes #626463.
Specifically, as the latter may have one thread pushing EOS to several streams,
that needs to be decoupled into separate threads to prevent preroll hanging
problems.
Otherwise we're producing different caps and basetransform thinks that it
can't pass through buffer allocations, etc.
In 0.11 all video caps really should have the PAR set...
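For illustration, fixed 0.10-style video caps would then always carry the
field, even for square pixels (the concrete values here are arbitrary):

  GstCaps *caps = gst_caps_new_simple ("video/x-raw-yuv",
      "format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
      "width", G_TYPE_INT, 320,
      "height", G_TYPE_INT, 240,
      "framerate", GST_TYPE_FRACTION, 30, 1,
      "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1,
      NULL);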
... which generalizes the current listing of white, black, etc.
In particular, also allow specifying an alpha channel, and modify
some structures and pattern filling to cater for the alpha value as well.
Fixes #624919.
API: GstVideoTestSrc:solid-color
This fixes a race condition in playbin2's gapless mode, where the
EOS of other streams might arrive in the sinks before the last stream
ends and the switch to the new track happens. The EOS sinks won't
accept any new data then and playback stops.
To prevent this, delay all EOS events until all streams are EOS
and advance the sinks of the EOS streams by filler newsegment
events if necessary.
Fixes bug #625118.
This reads the 3gp profile from the major/compatible brands and puts
this as a 'profile' field in caps. This can be used by demuxers to
decide whether they can handle this stream or not. Also needed for
DLNA.
https://bugzilla.gnome.org/show_bug.cgi?id=620291
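A rough sketch of the idea; the brand-to-profile mapping shown here is
purely illustrative, only the 'profile' caps field itself is what the
commit describes:

  /* data points to the major brand inside the 'ftyp' box
   * (after the 4-byte size and 4-byte 'ftyp' tag) */
  const gchar *profile = NULL;

  if (memcmp (data, "3gg", 3) == 0)
    profile = "general";            /* illustrative mapping only */
  else if (memcmp (data, "3gp", 3) == 0)
    profile = "basic";

  if (profile != NULL)
    gst_caps_set_simple (caps, "profile", G_TYPE_STRING, profile, NULL);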
Logic for choice of GST_PAD_LINK_CHECK_* is as follows:
* Where return of pad_link wasn't checked before : NOTHING
* Where linking is between known compatible elements : NOTHING
* All other cases : TEMPLATE_CAPS
Cuts playsink reconfigure CPU time by up to 50%.
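In code this translates roughly to the following (element names purely
illustrative):

  /* known-compatible elements, or links whose return value was never
   * checked anyway: skip all checking */
  gst_element_link_pads_full (queue, "src", sink, "sink",
      GST_PAD_LINK_CHECK_NOTHING);

  /* all other cases: only compare the pad template caps, which is much
   * cheaper than a full caps query and intersection */
  if (!gst_element_link_pads_full (src, "src", conv, "sink",
          GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
    GST_WARNING ("link failed");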
This makes sure that we always keep the display aspect ratio and
add black borders if necessary, which is usually something you want
for viewing a video.
This behaviour was not preferred and caused visible image quality
degradations. The real solution would be to apply a real
deinterlacing filter before scaling the frames.
Fixes bug #615471.
We only look for packets with payload, but it appears there may be packets without,
which makes it harder to find the N packets with payload in a row that we need in
order to typefind this successfully, so scan some more data than necessary in the
optimistic scenario. Alternatively we could change IS_MPEGTS_HEADER().
Fixes #623663.
Before, gapless playback failed when switching between audio-only,
video-only and audio-video files, when choosing different clocks
and when the different streams had different durations.
This is now handled by a helper element, which keeps track of the
running times of all streams and synchronizes them.
Fixes bug #602437.
.weba (audio) and .webv (video) were speculation on my part before
the public launch. As of yet no decision has been made on the
file extension for audio-only WebM, and I'm pretty sure there will
never be one for video-only.
Fixes bug #623837.
Fixes spurious errors that happen when a working stream is played after
an error, and signals that are emitted for non-active groups.
Fixes bug #624266.
This reverts commit 9d7538247f.
If the DVD subpicture caps are not part of the raw caps, uridecodebin
doesn't qualify resindvdbin as raw source and plugs decodebins, which
causes broken DVD playback because of bugs elsewhere.
This change was originally added to only expose supported, raw subtitles,
e.g. if the subtitle sink did not support DVD subpictures but a converter
to some supported format exists. It's not very important right now because
we have nothing (that is autoplugged) to convert from plaintext/pango-markup
or DVD subpictures to something else.
Fixes bug #623583.
Otherwise the uridecodebin will be still a child of playbin2 and
its signals will still be connected. In future state changes this
will then emit unrelated signals that will confuse playbin2 or,
even worse, cause crashes and assertions.
Fixes bug #623318.
If an error happens, the PAUSED state will never be reached. If an
application re-uses decodebin2 (like totem) where one would normally
set it to READY between files, the cleanup that normally happens in
the PAUSED=>READY codepath will never be called, resulting in the
following file re-using the previous demuxer/decoder/...
https://bugzilla.gnome.org/show_bug.cgi?id=622807
We need to clear the pointer to our ts-offset element when we destroy the video
chain elements to make sure nobody dereferences invalid memory afterwards.
Otherwise we would end up with a bogus ->audiochain->ts_offset field
which would cause segfaults/assertions when trying to modify the
'ts-offset' property in update_av_offset().
Was easy to trigger when using a list of audio+video files mixed with
video-only files in totem.
Use the pad caps when they are available to continue the autoplugging. If the
pad caps are set, they are fixed and then we can directly continue autoplugging.
Use an accumulator for the autoplug-sort signal so that we can stop the emission
when a signal handler produced a valid result. This prevents the object handler
from overwriting the results from user signals.
Fixes #621161.
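For reference, an accumulator of this kind could look roughly like this
(a sketch, assuming the signal's return value is a boxed type where NULL
means "no result"):

  static gboolean
  autoplug_sort_accum (GSignalInvocationHint * ihint, GValue * return_accu,
      const GValue * handler_return, gpointer data)
  {
    g_value_copy (handler_return, return_accu);
    /* returning FALSE stops the emission, so stop as soon as a handler
     * produced a non-NULL result */
    return (g_value_get_boxed (handler_return) == NULL);
  }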