This is done by adding a capsfilter after every parser/converter that
contains all possible caps supported by downstream elements. A capsfilter
is necessary here because the decoder is only selected after the parser
has selected a format, and the parser has no other way of knowing what
downstream would support.
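A minimal sketch of the idea (not the actual decodebin2 code; the caps
are assumed to come from a query of the downstream decoders):

    static GstElement *
    make_downstream_filter (GstCaps * downstream_caps)
    {
      /* constrain the parser's output to formats a downstream
       * decoder can actually handle */
      GstElement *filter = gst_element_factory_make ("capsfilter", NULL);

      g_object_set (filter, "caps", downstream_caps, NULL);
      return filter;
    }

The filter is then linked directly after the parser, so format
negotiation in the parser can only pick something downstream supports.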
Remove the _ in front of the endianness prefix.
Remove the _3 postfix for the 24-bit formats.
Add a _32 postfix to the formats that occupy extra space beyond their
natural size.
The result is that the GST_AUDIO_NE() macro can simply append the
endianness to all formats, and that we only specify a different sample
width when it differs from the natural size of the sample. This makes
things more consistent and follows the PulseAudio conventions instead of
the ALSA ones.
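For illustration (the old names here are reconstructed from the rules
above; little-endian variants shown):

    /* Old name     New name    Meaning
     * S16_LE   ->  S16LE       16-bit signed
     * S24_3LE  ->  S24LE       packed 24-bit in 3 bytes
     * S24_LE   ->  S24_32LE    24 valid bits in 32-bit words
     *
     * GST_AUDIO_NE(S16) now simply expands to "S16LE" or "S16BE",
     * matching the host byte order. */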
Sort muxers based on their caps and ranking before iterating to
find one that fits the profile.
Sorting is done by putting first the elements that have a pad template
that can produce the exact caps specified on the profile. For example:
when asking for "video/quicktime, variant=iso", muxers that have these
exact caps on their pad templates will be placed earlier in the list
than ones that only have "video/quicktime".
https://bugzilla.gnome.org/show_bug.cgi?id=651496
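A rough sketch of such an ordering (helper and variable names are made
up; encodebin's actual implementation differs):

    /* TRUE if one of the factory's pad templates lists caps at least
     * as specific as the profile caps (e.g. includes variant=iso). */
    static gboolean
    factory_lists_exact_caps (GstElementFactory * factory, GstCaps * pcaps)
    {
      const GList *l;

      for (l = gst_element_factory_get_static_pad_templates (factory);
          l != NULL; l = l->next) {
        GstCaps *tcaps = gst_static_pad_template_get_caps (l->data);
        guint i;

        for (i = 0; i < gst_caps_get_size (tcaps); i++) {
          GstCaps *one = gst_caps_copy_nth (tcaps, i);
          gboolean exact = gst_caps_is_subset (one, pcaps);

          gst_caps_unref (one);
          if (exact) {
            gst_caps_unref (tcaps);
            return TRUE;
          }
        }
        gst_caps_unref (tcaps);
      }
      return FALSE;
    }

    /* exact-caps factories first, higher rank first within each group;
     * used as g_list_sort_with_data (muxers, compare_muxers, pcaps) */
    static gint
    compare_muxers (gconstpointer a, gconstpointer b, gpointer pcaps)
    {
      GstElementFactory *fa = (GstElementFactory *) a;
      GstElementFactory *fb = (GstElementFactory *) b;
      gint ea = factory_lists_exact_caps (fa, pcaps) ? 0 : 1;
      gint eb = factory_lists_exact_caps (fb, pcaps) ? 0 : 1;

      if (ea != eb)
        return ea - eb;
      return (gint) gst_plugin_feature_get_rank (GST_PLUGIN_FEATURE (fb)) -
          (gint) gst_plugin_feature_get_rank (GST_PLUGIN_FEATURE (fa));
    }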
This reverts commit 105814e2c7.
The general consensus seems to be that we should revert this for
now. If such behaviour is desired, we should probably enable it
via a flag. And maybe use the scaletempo plugin instead.
Adds a Lanczos-derived scaling method, which is rather slow but very
high quality. Adds a few properties that can be used to tune the
scaler: sharpness, sharpen, envelope, dither. Not currently Orcified,
but was designed with that in mind.
The average_period_set variable can be accessed from different threads,
so always take the lock when reading it. Furthermore, when switching to
averaging mode we should make sure we don't have cached buffers that
aren't used in that mode. And any mode switch will cause the latency to
change, so we should post a NewLatency message.
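Schematically (GstMyFilter is a stand-in for the element; the field
layout is illustrative):

    static GstClockTime
    get_average_period (GstMyFilter * self)
    {
      GstClockTime period;

      /* the field is written from other threads, so read it
       * under the object lock */
      GST_OBJECT_LOCK (self);
      period = self->average_period_set;
      GST_OBJECT_UNLOCK (self);
      return period;
    }

and after any mode switch:

    gst_element_post_message (GST_ELEMENT (self),
        gst_message_new_latency (GST_OBJECT (self)));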
Make enums for the chroma siting for easier use in the videoinfo.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove old gst_video_parse_caps_framerate, use the videoinfo for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
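A sketch of filling in such a Colorimetry description (API spelling as
it ended up in the video library; treat the details as illustrative):

    GstVideoColorimetry cinfo;

    /* one of the predefined specs, here BT.709 as used for HD video */
    if (gst_video_colorimetry_from_string (&cinfo, "bt709")) {
      /* cinfo.range, cinfo.matrix, cinfo.transfer and cinfo.primaries
       * now pin down the exact colors for correct conversion */
    }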
Rework the audio caps to be similar to the video caps. Remove the
width/depth/endianness/signed fields and replace them with a simple
format string and the media type audio/x-raw.
Create a GstAudioInfo and some helper methods to parse caps.
Remove duplicate code from the ringbuffer and replace with audio info.
Use AudioInfo in the base audio filter class.
Port elements to new API.
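Parsing caps then becomes (sketch; caps is assumed to hold audio/x-raw):

    GstAudioInfo info;

    if (gst_audio_info_from_caps (&info, caps)) {
      g_print ("%d Hz, %d channels, %d bytes per frame\n",
          GST_AUDIO_INFO_RATE (&info),
          GST_AUDIO_INFO_CHANNELS (&info),
          GST_AUDIO_INFO_BPF (&info));
    }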
Instead of just assuming all pads are created at the same time,
remember which ones are actually new (via ->pending_blocked_pads).
This allows the following use-case to properly work:
* Upstream starts with audio-only
* Only that pad gets data, blocks, and a real audio sink is created
* Upstream later adds a video stream
* A new pad is requested, blocks, and reconfiguration kicks in to add
  a new real video sink
Similar meaning: same layer, same bitrate, and same number of channels.
This fixes misdetection of some MP3 files (with zero padding between
the ID3 tag and the MP3 stream) as H.264 video.
https://bugzilla.gnome.org/show_bug.cgi?id=656018
As encodebin doesn't connect to the queue signals, it can set the
queues to silent mode to make them not emit those signals.
Check https://bugzilla.gnome.org/show_bug.cgi?id=621299 for
more info on queue's silent property.
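For example (sketch):

    GstElement *q = gst_element_factory_make ("queue", NULL);

    /* nobody connects to overrun/underrun here, so skip the
     * signal emission overhead entirely */
    g_object_set (q, "silent", TRUE, NULL);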
Use atomic ops on the pending flags. Rename segment_pending to
new_segment_pending. Set new_segment_pending not when we receive a
seek, but when we receive the first upstream new_segment.
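Schematically (the struct holding the flag is a stand-in):

    /* on the first upstream new_segment: */
    g_atomic_int_set (&self->new_segment_pending, TRUE);

    /* when about to push data, consume the flag exactly once: */
    if (g_atomic_int_compare_and_exchange (&self->new_segment_pending,
            TRUE, FALSE)) {
      /* push the new segment downstream first */
    }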
When we don't have specific {audio|video|text}-sink properties, don't
set them on playsink when reconfiguring.
If we do that, we end up setting the previously configured sink to
GST_STATE_NULL, resulting in any potentially pending push being
returned with GST_FLOW_WRONG_STATE, which will cause the upstream
elements to silently stop.
https://bugzilla.gnome.org/show_bug.cgi?id=655279
When we have a multi-stream (i.e. audio and video) input and the demuxer
adds/removes pads for a new stream (common in an MPEG-TS stream when the
program stream mapping is updated), the algorithm for EOS handling was
previously wrong (it would only drop the EOS of the *last* pad but would
let the EOS on the other pads go through).
The EOS handling logic has only been changed a tiny bit, resulting in:
* If there is no next group, let the EOS go through
* If there is a next group, but not all pads are drained in the active
group, drop the EOS event
* If there is a next group and all pads are drained, then the ghostpads
will be removed and the EOS event will be dropped automatically.
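The resulting decision, schematically (helper names are made up; this
is not the actual decodebin2 code):

    if (!have_next_group) {
      forward_eos (pad, event);           /* let it go through */
    } else if (!all_pads_drained (active_group)) {
      gst_event_unref (event);            /* drop it */
    } else {
      /* removing the ghostpads drops the EOS implicitly */
      switch_to_next_group (dbin);
    }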
This allows us to make parsers accept both parsed and unparsed input
without decodebin plugging them in a loop until things blow up, i.e.
without affecting applications that still use the old playbin or the
old decodebin.
(Making parsers accept parsed input is useful for later when we want
to use parsers to convert the stream-format into something the decoder
can handle. It's also much more convenient for application authors
who can plug parsers unconditionally in transcoding pipelines, for
example.)
Make a new GstVideoFormatInfo structure that contains the specific information
related to a format such as the number of planes, components, subsampling,
pixel stride etc. The result is that we are now able to introduce the concept of
components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are not handled with the structures.
Add methods to retrieve pointers and strides to the components in the video.
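For example, querying a format description (macro spelling as in the
new video library):

    const GstVideoFormatInfo *finfo =
        gst_video_format_get_info (GST_VIDEO_FORMAT_I420);

    g_print ("I420: %d planes, %d components\n",
        GST_VIDEO_FORMAT_INFO_N_PLANES (finfo),
        GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo));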
Add a flags property and two flags to allow one to disable the
conversion elements within encodebin. Doing so requires that the
uncompressed input to encodebin for the appropriate stream type is
sufficient to meet the caps requirements of the encoders, muxers and
the encodebin target.
This is mostly beneficial to bypass slow caps negotiations in the
conversion elements.
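Roughly, from an application (the flag nicks here are assumptions based
on the description above):

    GstElement *ebin = gst_element_factory_make ("encodebin", NULL);

    /* assumed nicks; skip the audio/video converters inside encodebin */
    gst_util_set_object_arg (G_OBJECT (ebin), "flags",
        "no-audio-conversion+no-video-conversion");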
Caps returned from gst_pad_peer_get_caps_reffed () may not be writable.
If they are not, it should trigger an assertion in gst_caps_merge ();
however, assertions are sometimes disabled in binary builds of -base,
so it's safer to just make sure the caps are writable. Also, check
that the reffed caps pointer is not NULL.
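The defensive pattern, roughly:

    GstCaps *caps = gst_pad_peer_get_caps_reffed (pad);

    if (caps != NULL) {
      /* returns a fresh copy if the caps happen to be shared */
      caps = gst_caps_make_writable (caps);
      /* ... safe to merge/modify caps here ... */
      gst_caps_unref (caps);
    }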
The length check isn't sufficient; a source might report the correct
length, but then still fail to read the requested number of bytes for
some reason.
https://bugzilla.gnome.org/show_bug.cgi?id=652642
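The extra check amounts to something like this (0.10-style buffer API;
the helper is hypothetical):

    static GstFlowReturn
    pull_exact (GstPad * pad, guint64 offset, guint size, GstBuffer ** buf)
    {
      GstFlowReturn ret = gst_pad_pull_range (pad, offset, size, buf);

      if (ret != GST_FLOW_OK)
        return ret;
      if (GST_BUFFER_SIZE (*buf) < size) {
        /* short read even though the length check passed */
        gst_buffer_unref (*buf);
        *buf = NULL;
        return GST_FLOW_ERROR;
      }
      return GST_FLOW_OK;
    }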
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also makes
it more likely that we can pass the stride array to external libraries.
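After the change this amounts to (sketch):

    GstVideoInfo info;
    guint i;

    if (gst_video_info_from_caps (&info, caps)) {
      /* strides and offsets are now plain arrays in the info */
      for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&info); i++)
        g_print ("plane %u: stride %d, offset %" G_GSIZE_FORMAT "\n",
            i, info.stride[i], (gsize) info.offset[i]);
    }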