If we already saw the keyframes that we need to find,
we do not need to bisect to find them.
This will always be the case for streams with audio only,
where each frame acts as a keyframe, but will occasionally
also happen for streams with video.
https://bugzilla.gnome.org/show_bug.cgi?id=662475
Opus streams outside of Ogg may not have headers, and oggstream
may be used by oggmux to mux an Opus stream which does not come
from Ogg - thus without headers.
Determining headerness by packet count would strip the first two
packets from such an Opus stream, leading to a very small amount
of audio being clipped at the beginning of the stream.
In push mode, we determine duration by doing a seek to the end of the
stream. However, a skeleton stream with an index will cause the duration
to be known already, and we end up never setting the push_time_duration
variable which we use to know duration has been determined.
https://bugzilla.gnome.org/show_bug.cgi?id=662049
The codec setup headers are a lot more likely to have correct information,
especially as it's easy to remux a skeleton in a file where streams don't
have the same parameters (I've even seen a file with two skeletons).
Still, this is useful in the case we have a codec we can't decode, so we
can at least (theoretically) convert granpos to time, so we discard this
information if the codec setup has already provided it.
This fixes playback on (at least) the original archive.org encoding of
"The Night of the Living Dead" (now replaced by another encoding).
https://bugzilla.gnome.org/show_bug.cgi?id=612443
This could happen when testing with navseek, and pressing
right and left at roughly the same time. The current chain
is temporarily moved away, and this caused the flush events
not to be sent to the source pads, which would cause the
data queues downstream to reject incoming data after the
seek, and shut down, wedging the pipeline.
Now, I can't really decide whether this is a nasty steaming
hack or a good fix, but it certainly does fix the issue, and
does not seem to break anything else so far.
https://bugzilla.gnome.org/show_bug.cgi?id=621897
This patch implements seeking in push mode (eg, over the net)
in Ogg, using the double bisection method.
As a side effect, it also fixes duration determination of network
streams, by seeking to the end to check the actual duration.
Known issues:
- Getting an EOS while seeking stops the streaming task; I can't
find a way to prevent this (eg, by issuing a seek in the event
handler).
- Seeking twice in VERY short succession with playbin2 fails
for streams with subtitles; we end up pushing into a dataqueue
which is flushing. Rare in normal use AFAICT.
- Seeking is slow on slow links - byte range guesses could be
made better, decreasing the number of required requests
- If no granule position is found in the last 64 KB of a stream,
duration will be left unknown (should be pretty rare)
https://bugzilla.gnome.org/show_bug.cgi?id=621897
The first packet of a sparse stream may arrive after an initial
delay in the stream. If ogg_stream_packetout reports a discontinuity
in a sparse stream, do not propagate it to other streams in the
chain unnecessarily.
https://bugzilla.gnome.org/show_bug.cgi?id=621897
If both quality and bitrate are set, libtheora will try to meet
both constraints, causing it to prefer emitting a smaller number
of good frames over emitting the full number of frames that would
not meet the requested quality. This causes a slideshow effect
when the bitrate is low and the quality is high. And the default
quality in theoraenc is high (48/63).
So only set quality when it is requested, and leave it unset
otherwise.
https://bugzilla.gnome.org/show_bug.cgi?id=658443
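A rough sketch of the resulting setup logic (the helper and the quality_set
flag are illustrative, not the element's actual members):

  static void
  configure_rate_control (th_info * info, int bitrate, gboolean quality_set,
      int quality)
  {
    /* a non-zero target_bitrate puts libtheora into CBR mode; only pass a
     * quality constraint on top of that when the user explicitly asked for
     * one, otherwise leave info->quality at its default */
    info->target_bitrate = bitrate;
    if (quality_set)
      info->quality = quality;
  }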
Remove the _ in front of the endianness prefix.
Remove the _3 postfix for the 24 bits formats.
Add a _32 postfix after the formats that occupy extra space beyond their
natural size.
The result is that the GST_AUDIO_NE() macro can simply append the endianness
after all formats and that we only specify a different sample width when it is
different from the natural size of the sample. This makes things more consistent
and follows the pulseaudio conventions instead of the alsa ones.
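With this naming the native-endianness macro reduces to plain string pasting,
roughly like this (simplified; the real header also provides the
opposite-endianness variant):

  #if G_BYTE_ORDER == G_LITTLE_ENDIAN
  #define GST_AUDIO_NE(s) G_STRINGIFY (s) "LE"  /* GST_AUDIO_NE (S16) -> "S16LE" */
  #define GST_AUDIO_OE(s) G_STRINGIFY (s) "BE"
  #else
  #define GST_AUDIO_NE(s) G_STRINGIFY (s) "BE"
  #define GST_AUDIO_OE(s) G_STRINGIFY (s) "LE"
  #endif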
After all, we do hope to find actual data for these streams.
However, warn if we could not set up a chain when we find a
non-BOS page, as that means we don't have a valid Ogg stream.
https://bugzilla.gnome.org/show_bug.cgi?id=657151
While the casual reader might end up bewildered by just why this
change might increase clarity, it just happens that, in the libogg
and associated sources, op is the canonical name for an ogg_packet
while og is the canonical name for an ogg_page, and reading this
code confuses me.
https://bugzilla.gnome.org/show_bug.cgi?id=657151
Headers are inherently durationless.
Instead, set duration to 0 to avoid increasing tracked granpos,
and do not warn about it, since it is totally expected.
https://bugzilla.gnome.org/show_bug.cgi?id=657151
Version written is 3.0
Base times are left empty for now.
Content-Type should be the MIME type of the stream. It is set to
the GStreamer media type for now, which is probably the same for
the streams oggmux supports.
https://bugzilla.gnome.org/show_bug.cgi?id=563251
Make enums for the chroma siting for easier use in the videoinfo.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove old gst_video_parse_caps_framerate, use the videoinfo for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
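The colorimetry part of the video info then looks roughly like this (type and
field names as they ended up in the released API; the exact layout at this
point in the series may differ slightly):

  struct _GstVideoColorimetry {
    GstVideoColorRange       range;      /* limited or full range */
    GstVideoColorMatrix      matrix;     /* RGB <-> YUV conversion matrix */
    GstVideoTransferFunction transfer;   /* gamma / transfer function */
    GstVideoColorPrimaries   primaries;  /* chromaticities of the primaries */
  };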
vorbisenc currently reacts in a rather draconian fashion if input
timestamps are more than 1/2 sample off what it considers ideal. If data
is 'too late' it truncates buffers; if it is 'too soon' it completely
shuts down the encode and restarts it. This is causing vorbisenc to produce
corrupt output when encoding data produced by sources with bugs that
produce a sample or two of jitter (eg, flacdec).
If ints are 64 bits, 32 bits should get promoted in varargs anyway,
and we don't care about 16 bit ints.
This makes the code a lot more readable, and still gets us nice
hexadecimal 32 bit serialnos.
https://bugzilla.gnome.org/show_bug.cgi?id=656775
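That is, a plain cast plus a fixed-width format is all that's needed,
something like this (the message itself is illustrative):

  guint32 serialno = g_random_int ();   /* illustrative source of a serialno */

  GST_DEBUG ("created stream with serialno %08x", serialno);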
Rework the audio caps similar to the video caps. Remove
width/depth/endianness/signed fields and replace with a simple string
format and media type audio/x-raw.
Create a GstAudioInfo and some helper methods to parse caps.
Remove duplicate code from the ringbuffer and replace with audio info.
Use AudioInfo in the base audio filter class.
Port elements to new API.
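Typical use of the new helper then looks roughly like this (names as in the
API that eventually shipped; they may differ slightly at this point in the
series):

  static gboolean
  parse_audio_caps (const GstCaps * caps)
  {
    GstAudioInfo info;

    gst_audio_info_init (&info);
    if (!gst_audio_info_from_caps (&info, caps))
      return FALSE;

    GST_DEBUG ("rate %d, channels %d, bytes per frame %d",
        GST_AUDIO_INFO_RATE (&info), GST_AUDIO_INFO_CHANNELS (&info),
        GST_AUDIO_INFO_BPF (&info));
    return TRUE;
  }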
Make a new GstVideoFormatInfo structure that contains the specific information
related to a format such as the number of planes, components, subsampling,
pixel stride etc. The result is that we are now able to introduce the concept of
components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are not handled with the structures.
Add methods to retrieve pointers and strides to the components in the video.
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also makes
it more likely that we can pass the stride array to external libraries.
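For instance, parsing caps and walking the planes becomes roughly the
following (macro names as in the API that eventually shipped):

  static gboolean
  dump_planes (const GstCaps * caps)
  {
    GstVideoInfo info;
    guint i;

    if (!gst_video_info_from_caps (&info, caps))
      return FALSE;

    for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&info); i++)
      GST_DEBUG ("plane %u: stride %d, offset %" G_GSIZE_FORMAT, i,
          GST_VIDEO_INFO_PLANE_STRIDE (&info, i),
          GST_VIDEO_INFO_PLANE_OFFSET (&info, i));
    return TRUE;
  }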
This decreases the number of buffers held on each pad by one,
eliminating next_buffer. Simplifies the logic by relying solely
on CollectPads to let us know when a pad is in EOS. As a side
benefit, the CollectPads-related code is structured more like
that of other CollectPads users.
The previous code would occasionally mark the wrong pad as EOS,
causing the code to get in a state where all the streams were
finished, but EOS hadn't been sent to the source pad.
On OSX the cdparanoia headers include IOKit framework headers (in particular
SCSICmds_INQUIRY_Definitions.h) which define a structure that has a member
named VERSION, so we must #undef VERSION before including those for things
to compile on OSX.
Fixes #609918.
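The workaround boils down to something like this (exact include paths vary
between cdparanoia installs):

  /* config.h defines VERSION; the IOKit headers dragged in by cdparanoia
   * on OSX use the same name, so drop ours before including them */
  #undef VERSION
  #include <cdda_interface.h>
  #include <cdda_paranoia.h>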
This prevents the ugly hack where the text_sink pad template
was only added for textoverlay but not for the subclasses.
Also makes this work with the core change that made
subclasses inherit the templates of their parent class.
Ogg mandates the first header packet must determine a stream's type.
However, some streams (such as VP8) do not include such a header
when muxed in other containers, and thus do not include this header
as a buffer, but only in caps. We thus use headers from caps when
available to determine a new stream's type.
https://bugzilla.gnome.org/show_bug.cgi?id=647856
gcc on OSX complains about ret being used uninitialized in
this function, and it is right. Don't leak element ref
when returning early because newsegment event is not in
TIME format.
Remove the android/ top dir
Fix the Makefile.am to be androgenized
To build gstreamer for android we are now using androgenizer which generates the
needed Android.mk files.
Androgenizer can be found here:
http://git.collabora.co.uk/?p=user/derek/androgenizer.git
Also initialize it always in TIME format. We require TIME segments
in oggmux anyway and drop newsegment events in other formats and
assume an open-ended segment starting at 0.
Theora and vorbis use running time (which is correct) for calculating
the granulepos for their ogg packets. Oggmux, however, used
timestamps to order the received buffers.
This patch makes it use the running time to compare buffer times
and also to timestamp pushed buffers.
Some bits of the code still use timestamps, but they are only
used to calculate durations, so it should be fine.
https://bugzilla.gnome.org/show_bug.cgi?id=643775
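Conversion to running time is just the usual segment helper, roughly (the
small wrapper is only illustrative; the segment is the one tracked for the
sink pad):

  static GstClockTime
  buffer_running_time (GstSegment * segment, GstBuffer * buf)
  {
    /* compare and timestamp buffers by running time, not raw timestamp */
    return gst_segment_to_running_time (segment, GST_FORMAT_TIME,
        GST_BUFFER_TIMESTAMP (buf));
  }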
'A OVER B' compositing is explained at
http://en.wikipedia.org/wiki/Alpha_compositing.
Previously, overlaying text on a transparent background image left the
text overlay also transparent. This pipeline shows such an example:
gst-launch videotestsrc pattern=white ! video/x-raw-yuv,format=\(fourcc\)AYUV ! alpha alpha=0.0 ! textoverlay text=Testing auto-resize=False font-desc=60px ! videomixer ! ffmpegcolorspace ! autovideosink
With this patch, text is composited "OVER" the background image and
thus is visible regardless of the alpha of the background image. The
overlay in the above pipeline works after applying this patch.
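For reference, per color component and with non-premultiplied alpha in [0,1],
'A OVER B' works out to the following (the element itself works on 8-bit AYUV;
this only illustrates the math):

  static gdouble
  over_component (gdouble a, gdouble a_alpha, gdouble b, gdouble b_alpha,
      gdouble * out_alpha)
  {
    /* Porter-Duff 'A OVER B' for one color component */
    *out_alpha = a_alpha + b_alpha * (1.0 - a_alpha);
    if (*out_alpha <= 0.0)
      return 0.0;
    return (a * a_alpha + b * b_alpha * (1.0 - a_alpha)) / *out_alpha;
  }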
Pango is not reentrant. Use a class-wide mutex to protect pango use in
gst_text_overlay_render_pangocairo(). This works reliably, in contrast to the
hack in my previous commit.
Fixes Bug #412678
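Sketch of the locking pattern (the real code keeps the mutex in the class
structure and takes it inside gst_text_overlay_render_pangocairo(); the
wrapper below is only illustrative):

  /* pango is not thread-safe: serialize all pango/cairo rendering */
  G_LOCK_DEFINE_STATIC (pango_lock);

  static void
  render_text (GstTextOverlay * overlay)
  {
    G_LOCK (pango_lock);
    /* all pango/cairo layout and rendering happens here */
    G_UNLOCK (pango_lock);
  }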
The speed-level property, which allows callers to trade off encoding
quality for speed in the libtheora api, has version-dependent
maximum and default values. Instead of hardcoding the acceptable
range for the theoraenc element's presentation of this setting,
we query the library directly at class initialization time and
set the maximum and default values from that. If the query fails,
we fall back to the previous default setting.
To keep the values reported by gst-inspect (which I'm told uses
the spec values from the class) consistent with those available on an
instantiated element, we remove the setting of enc->speed_level
from the initializer and instead pass G_PARAM_CONSTRUCT to
the property spec flags, asking g_object to set this property
when theoraenc objects are constructed.
NB in theory the maximum speed-level could depend on the actual
video caps. If later versions of libtheoraenc do this, a second
call will need to be made from theora_enc_reset to update the
property, since this function is mostly useful for realtime
adjustment of performance while the pipeline is running.
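A sketch of the query, done once against a throwaway encoder (the dummy
dimensions and the fallback value are illustrative):

  int max_speed = 0;
  th_info info;
  th_enc_ctx *ctx;

  th_info_init (&info);
  info.frame_width = 320;
  info.frame_height = 240;
  info.pic_width = 320;
  info.pic_height = 240;
  info.fps_numerator = 25;
  info.fps_denominator = 1;

  ctx = th_encode_alloc (&info);
  if (ctx == NULL || th_encode_ctl (ctx, TH_ENCCTL_GET_SPLEVEL_MAX,
          &max_speed, sizeof (max_speed)) != 0)
    max_speed = 1;              /* fall back to the previous default */
  if (ctx)
    th_encode_free (ctx);
  th_info_clear (&info);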
libtheora has two encoding modes: CBR, where it tries to hit a target
bitrate, and VBR, where it tries to achieve a target quality.
Internally, if the target bitrate is set to anything other than 0, the
encoding mode is CBR.
This means that the gstreamer element can leave the video_quality
setting alone as long as the user is tweaking the bitrate. This has the
nice side-effect that if the user explicitly sets the bitrate to 0
(which is actually the default), the quality value doesn't get reset and
one no longer ends up encoding VBR at quality-level 0...
In case the ogg mapper doesn't handle all the accepted input formats
(although it really should). Saves us error handling for that case
though. Also log caps properly.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
Using the IN_CAPS flag for this is brittle, and will fail if either
vorbisparse or vorbistag (which is itself based on vorbisparse) is
inserted between oggdemux and oggmux. Possibly other elements too
(eg, theoraparse, etc).
Using oggstream ensures we Get It Right More Often Than Not.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
Discontinuities are automatically signalled by oggdemux at the start
of a new stream. When oggmux is yet to output actual data pages,
do not signal these discontinuities in the ogg stream.
This patch may miss some actual discontinuities at the very start of
a stream, but avoids the spurious missing pages when encoding happens
normally.
A better fix might involve finding a way to distinguish between actual
data discontinuities and discontinuities merely marking the start of
a new stream.
Fixes an issue with ogg page numbering (would skip a number for no
reason, which then looks like a packet was lost somewhere) when
re-muxing an ogg stream, e.g. when re-tagging in rhythmbox.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
Remove "This property requires libtheora version >= 1.1" qualifiers
from property descriptions. They aren't needed any longer now that
we require libtheora >= 1.1.
This was causing keyframe_granule to be set to 0 for all streams
when seeking to the beginning of the stream, i.e., at the
beginning of playback. Fixes #619778.
Instead, use either 0 or 1, depending on bitstream version, which gives
the correct result for streams which aren't cut off at start.
This allows that function to not return negative granpos.
https://bugzilla.gnome.org/show_bug.cgi?id=638276
The offset part of the granpos is not a sign of the newer encoding.
Use the version number instead.
This fixes the criticals thrown by theoraparse, and (at last) the
remaining part of #553244.
Allocate buffers using gst_buffer_new_and_alloc() instead of
gst_pad_alloc_buffer_and_set_caps(), as the latter will
cause the pad to block, and we don't want that since that will
prevent subsequent pads from being fed if a block occurs at
start, when all pads must be fed for playback to start.
This fixes autoplugging of the tiger element and other things.
https://bugzilla.gnome.org/show_bug.cgi?id=637822
Oggdemux will currently try to pad-alloc a buffer from the peer when it is
reading the headers. This is a relic from the time when we had an internal
parser and needs to be removed at some point in time.
The problem is that when there is no peer pad yet (which is normal when
collecting headers) we should still continue to parse all the packets of a
page instead of erroring out on NOT_LINKED.
Fixes #632167
Only keep the last valid granulepos we see when scanning the last
pages. It is possible that the last page that we inspect has a -1 granulepos, in
which case we want to keep the previous valid time instead.
Fixes #631703
Since this is just a debugging feature and libtheora will usually not be
compiled with that option enabled, we should maybe just hide these properties,
since they won't work anyway, and avoid confusing warnings.
Also rename properties to make them less cryptic.
https://bugzilla.gnome.org/show_bug.cgi?id=628488
Files with a skeleton, or other files with a stream that ends before the end of
the chain would start playing from the end of the chain when trying to seek with
a negative rate at a position between the end of any stream and the end of the
chain.
This is due to the loop in _do_seek() assuming that pages will be encountered
for all streams shortly after the place where we want to seek, as found by
do_binary_search().
In the first iteration of the loop, stream ends are now checked against the
time of the current page.
In case of odd values for xpos or ypos, the division by two in the CbCr
plane would result in an off-by-one error, which in the case of NV12,
NV21, or UYVY would cause inversion of the blue and red colors. (And
it would not be so easily noticed for I420, as it would just cause the
chroma to be offset slightly from the luma.)
This patch also fixes a silly typo from the earlier patch which
added NV12 support that broke UYVY support.
The textoverlay element will rerender the text string whenever
overlay sets the 'need_render' flag to TRUE. Previously, we
lazily set the flag to TRUE every time the time string was requested.
Now, we save a copy of the previously given string, and only set
'need_render' to TRUE if the string has changed.
In my tests with a 30fps video stream, and a time string including
a seconds field, this change reduced the CPU usage of the clockoverlay
element from 60% to 5%.
Fixes bug #627780.
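The change boils down to something like this (the last_time_str field and the
helper are illustrative, not the element's actual members):

  static void
  maybe_update_text (GstTextOverlay * overlay, const gchar * time_str)
  {
    /* only force a re-render when the rendered string actually changed */
    if (g_strcmp0 (overlay->last_time_str, time_str) != 0) {
      g_free (overlay->last_time_str);
      overlay->last_time_str = g_strdup (time_str);
      overlay->need_render = TRUE;
    }
  }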
Alsa seems to expect that we initialize it. Remove the variable and pass NULL
as we actually don't use it. In alsasink, also #ifdef one section that is
grabbing diagnostics so that it is disabled when logging is disabled (the code
was using the out parameter as well).
Fixes #626125
Rather than only left, right, top, etc, allow for horizontal and vertical
positioning on a scale from 0 to 1.
Also cater for configuring rendered text color.
Fixes #624920.
API: GstTextOverlay:xpos
API: GstTextOverlay:ypos
API: GstTextOverlay:color
Just cast the pointer diff, so it works everywhere without
warnings. Can't use %tu, because that modifier is C99. Warning
was: "format '%li' expects type 'long int', but argument 8 has
type 'int'".
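That is, something like the following compiles cleanly on both 32- and 64-bit
targets (the helper and message are illustrative):

  static void
  log_consumed (const guint8 * start, const guint8 * end)
  {
    /* the pointer difference is a ptrdiff_t; casting to guint keeps the
     * format string identical on 32- and 64-bit platforms */
    GST_LOG ("consumed %u bytes", (guint) (end - start));
  }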
cdparanoia now has a .pc file in post-0.10.2 SVN, so use
that to check for cdparanoia before we try all the other
checks. Besides being generally nicer, this may help with
correctly detecting cdparanoia on OSX some day (see #609918).