Micro-optimisation: if the buffer consists of just one memory, we
know we have already mapped that memory to read the headers, so
no need to map it another time to get to the payload data, we
can just set up the payload data details right there and then
and avoid another map call in gst_rtp_buffer_get_payload().
Adds up when receiving RTP-payloaded raw video which can easily
be thousands of packets per frame.
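For illustration, a typical depayloader hot path that benefits (a sketch, not
the patched code; the buffer variable and the processing step are assumed):

  #include <gst/rtp/gstrtpbuffer.h>

  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;

  if (gst_rtp_buffer_map (buffer, GST_MAP_READ, &rtp)) {
    guint len = gst_rtp_buffer_get_payload_len (&rtp);
    guint8 *payload = gst_rtp_buffer_get_payload (&rtp);

    /* for a single-memory buffer the payload pointer is now derived from
     * the mapping already done for the header, no second map is needed */
    /* ... process len bytes of payload ... */

    gst_rtp_buffer_unmap (&rtp);
  }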
Implement a chain_list function, which avoids lots of locking
compared to the default fallback implementation in GstPad.
We may also want to do some more sophisticated timestamp
tracking here at some point, but for now leave it up to the
jitterbuffer and/or subclasses (in case buffers in the
buffer list have no timestamp set on them, there may only
be a timestamp for the whole list on the first buffer).
This provides the exact same behaviour as the default
fallback implementation.
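A minimal sketch of what such a chain_list function could look like (the
handle_buffer() helper is hypothetical):

  static GstFlowReturn
  my_sink_chain_list (GstPad * pad, GstObject * parent, GstBufferList * list)
  {
    guint i, len = gst_buffer_list_length (list);
    GstFlowReturn ret = GST_FLOW_OK;

    /* process the whole list in one call instead of one pad push per buffer */
    for (i = 0; i < len && ret == GST_FLOW_OK; i++)
      ret = handle_buffer (parent, gst_buffer_list_get (list, i));

    /* the chain_list function takes ownership of the list */
    gst_buffer_list_unref (list);
    return ret;
  }

  gst_pad_set_chain_list_function (sinkpad, my_sink_chain_list);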
Summary:
So that the user can easily use the same encoding profile to render
with/without the audio/video streams.
API:
gst_encoding_profile_is_disabled
gst_encoding_profile_set_enabled
https://bugzilla.gnome.org/show_bug.cgi?id=749056
Instead of returning the first video meta found on a buffer, return the
one with the lowest id (which is usually the same thing, except on
multi-view buffers)
Use g_utf16_to_utf8() instead of the more generic g_convert(), so
that we can extract text in UTF-16 format even on embedded systems
with crippled iconv support.
This code path is exercised by the id3demux test_unsync_v23
check in gst-plugins-good.
https://bugzilla.gnome.org/show_bug.cgi?id=741144
From the API documentation: "Note that it is generally not
a good idea to reuse an existing cancellable for more
operations after it has been cancelled once, as this
function might tempt you to do. The recommended practice
is to drop the reference to a cancellable after cancelling
it, and let it die with the outstanding async operations.
You should create a fresh cancellable for further async
operations."
https://bugzilla.gnome.org/show_bug.cgi?id=739132
[API] gst_discoverer_info_to_variant
[API] gst_discoverer_info_from_variant
[API] GstDiscovererSerializeFlags
+ Serializes as a GVariant
+ Adds a test
+ Does not serialize potential GstToc(s)
https://bugzilla.gnome.org/show_bug.cgi?id=748814
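A rough usage sketch of the new calls (the flags value is just one possible
choice):

  GVariant *variant;
  GstDiscovererInfo *restored;

  /* serialize the discoverer result, e.g. to cache it on disk */
  variant = gst_discoverer_info_to_variant (info, GST_DISCOVERER_SERIALIZE_ALL);

  /* ... store or transfer the variant ... */

  restored = gst_discoverer_info_from_variant (variant);
  g_variant_unref (variant);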
This affects the pt, ssrc, seqnum-offset and timestamp-offset properties. If
they were set from a property, or we configured caps before, we try to use
that value for them. Even if the first structure of the downstream caps
specifies a different value, we check if the value is supported by other
structures.
Only if all this fails do we use the values given by downstream in the first
structure, i.e. if no properties were set and these are the first caps we
negotiate or downstream does not support our values.
By doing this we ensure that we don't spuriously change ssrcs or other fields
in the middle of the stream (and also consider property values more). Ssrc
changes would currently happen after sending an RTX packet (thus creating a
new internal source inside the rtpsession), and then renegotiating the
payloader (which then gets the RTX ssrc from rtpsession).
https://bugzilla.gnome.org/show_bug.cgi?id=749581
All those were super straightforward from the warnings gtkdoc prints. It kind
of makes sense to apply them before the list of warnings is >100 and people
complain that gtkdoc is noisy.
GST_VIDEO_CONVERTER_OPT_ALPHA_MODE, GST_VIDEO_CONVERTER_OPT_CHROMA_MODE,
GST_VIDEO_CONVERTER_OPT_MATRIX_MODE, GST_VIDEO_CONVERTER_OPT_GAMMA_MODE and
GST_VIDEO_CONVERTER_OPT_PRIMARIES_MODE were G_TYPE_STRING with only a few valid
options. Changed those to real enums.
https://bugzilla.gnome.org/show_bug.cgi?id=749104
2 second frame duration is rather unlikely... but if we don't clip
away buffers that far before the segment we can cause the pipeline to
lock up. This can happen if audio is properly clipped, and thus the
audio sink does not preroll yet but the video sink prerolls because
we already outputted a buffer here... and then queues run full.
In the worst case we will clip one buffer too many here now if no
framerate is given, no buffer duration is given and the actual
framerate is less than 0.5fps.
Fixes seeking on HLS/DASH streams, when seeking into the middle of
fragments and having no framerate/buffer duration.
This will be useful for elements that wish to post unhandled navigation
events on the bus to give the application a chance to do something with
them.
https://bugzilla.gnome.org/show_bug.cgi?id=747245
Take into account the different steps between Y and UV when calculating
the line size for vertical resampling or else we might not resample
enough pixels and leave bad lines.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=747790
gst_message_parse_toc() returns a reffed GstToc which is owned by the
GstDiscovererInfo. But we have to make sure we unref its previous value before
setting the new one.
https://bugzilla.gnome.org/show_bug.cgi?id=747103
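The fix boils down to this pattern (a sketch; the toc field on the info object
stands in for the real member):

  GstToc *toc = NULL;
  gboolean updated;

  gst_message_parse_toc (msg, &toc, &updated);  /* returns a new reference */

  if (info->toc)
    gst_toc_unref (info->toc);                  /* drop the previous value */
  info->toc = toc;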
We only get here if we don't have any srcpad caps, and we're going
to override the GstAudioInfo a few lines below anyway without ever
using it if for whatever reason we get caps here.
Otherwise we would forward the GAP event without ever providing any caps,
which then would make decodebin expose a srcpad without any caps set. That's
confusing for applications and can lead to all kinds of interesting bugs.
Instead do the same as is already done in GstAudioDecoder, and try to invent
caps based on the sinkpad caps and the caps allowed by downstream and the
srcpad template caps.
https://bugzilla.gnome.org/show_bug.cgi?id=747190
Bypass g_convert/iconv if there's nothing to convert. That way,
conversion won't fail on systems where iconv doesn't support
converting utf-8 to latin1 and there's nothing to convert.
https://bugzilla.gnome.org/show_bug.cgi?id=723252
memcmp will blindly compare the reserved fields, as well as any
padding the compiler may choose to sprinkle in GstSegment.
Fixes valgrind complaints in unit tests, as well as some found via
https://bugzilla.gnome.org/show_bug.cgi?id=738216
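A sketch of a field-by-field comparison that avoids touching reserved fields
and padding:

  static gboolean
  segments_are_equal (const GstSegment * a, const GstSegment * b)
  {
    return a->flags == b->flags && a->rate == b->rate &&
        a->applied_rate == b->applied_rate && a->format == b->format &&
        a->base == b->base && a->offset == b->offset &&
        a->start == b->start && a->stop == b->stop &&
        a->time == b->time && a->position == b->position &&
        a->duration == b->duration;
  }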
The spec for this does not say nor imply how this should be
interpreted. The previous code would try to shift by 64 bits,
which is undefined.
Coverity 1195119
https://bugzilla.gnome.org/show_bug.cgi?id=727955
When generating segment, we can't assume the first buffer is actually
the first expected one. If it's not, we need to adjust the segment to
start a bit before.
Additionally, if we don't know when the stream is supposed to have
started (no clock-base in caps), it means we need to keep everything in
running time and only rely on jitterbuffer to synchronize.
https://bugzilla.gnome.org/show_bug.cgi?id=635701
Make a base class that can help with allocating fd-backed memory.
Make dmabuf extend from the base class.
We can now make methods to check if memory has an fd and get the fd for
all the different types of fd-backed memory.
Make a separate file for the code to handle the fd backed memory.
This would make it possible later to add other allocators also using
fd backed memory.
Based on patch from Mozzhuhin Andrey <nopscmn at gmail.com>
Add a table based matrix8 multiplication implementation. The algorithm
does not do any clipping so we need to make sure we never call this on
input that might need to be clipped. In general, this algorithm is
2 times faster than the orc optimized one and would be chosen for all
RGB -> YUV conversions and some YUV->YUV and RGB->RGB conversions.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=732186
When the ringbuffer is deactivated and then acquired, if the audio clock
provided by the sink gets reset to zero, we need to add an offset to the
clock to make sure that subsequent samples are written out at the right
times. While we need to leave this to derived classes to take care of
when they provide their own clock (since that clock may or may not be
reset to zero), we can do this ourselves if we know the provided clock
is our own (which does reset to zero on a re-acquire).
When we are using the fast linear resampler, use the ->inc to calculate
the first and last pixel we need so that we can do vertical resampling
on the right number of pixels.
CLAMP checks both if value is '< 0' and '> max'. Value will never be a negative
number since it is an unsigned integer. Removing that check and only checking
if it is bigger than max and setting it appropriately.
CID #1271606
Only activate the scaler fastpath for x2 up and downscale when the
scaler method is respectively nearest and linear because that is what
those fastpaths really implement.
Add support for alpha. Make it possible to copy, set and multiply the
alpha value of a frame during conversion.
Set the border alpha to 0xff by default.
Go over some of the fastpaths and add alpha handling.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=745006
Make sure to update the output segment to track the segment
we're decoding in, but don't actually push it downstream until
after buffers are decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=744806
Otherwise upstream can get confused about offsets as there will
be a jump once the tags have been parsed due to the stripped area.
If upstream pulls from 0 to 100, and then tagdemux does the
tag reading and finds out that the first 200 bytes are the tag, the
next pull from upstream will have an offset of 200 bytes. So
upstream will get the following data:
0 - 100, 300 - (EOS), as it will continue requesting from where
it last stopped, but tagdemux will add an offset to skip the
tags.
This patch makes sure that the tags have been parsed and skipped
right from the first pull range call.
https://bugzilla.gnome.org/show_bug.cgi?id=744580
This reverts commit 19b9356680.
These two "profiles" are actually a complete set of profiles, which we will
need to handle separately. Unfortunately it seems like we need information
from the SPS to detect the exact profile.
The new gst_install_plugins_context_set_confirm_search() API can be used
to pass a hint to modify the behaviour of the external installer
process.
https://bugzilla.gnome.org/show_bug.cgi?id=744465
The new gst_install_plugins_context_set_desktop_id() and
gst_install_plugins_context_set_startup_notification_id() API can be
used to pass extra details to the external installer process.
https://bugzilla.gnome.org/show_bug.cgi?id=744465
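A sketch of how an application might fill in the context with these setters
and the confirm-search flag from the related change above (the id values are
made up):

  GstInstallPluginsContext *ctx = gst_install_plugins_context_new ();

  gst_install_plugins_context_set_confirm_search (ctx, TRUE);
  gst_install_plugins_context_set_desktop_id (ctx, "org.example.Player.desktop");
  gst_install_plugins_context_set_startup_notification_id (ctx, "example-id-0");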
These values are for now taken from x265 and need to be checked against
the spec. Especially we need to check if information from other fields
need to be taken into consideration too, e.g. the bit depth and chroma
index from the SPS.
This however makes 4:4:4 output of x265enc actually work.
Make a convenience function that combines 2 scalers to perform a 2d
scale. This removes quite a bit of overhead in method calls when doing a
typical scale and it also can reuse a piece of unused memory in the
vertical scaler.
Use the 2d scaler in video-converter and remove the other scalers and
temp memory.
Only merge scalers for selected formats.
Use nearest neighbour scaling for chroma when doing nearest neighbour
for the luma.
Also fastpath GRAY16_OE in nearest neighbour.
Configure parameters correctly for the packed fastpath.
Small performance tweaks for RGB and friends.
Add, but ifdef out, alternative nearest neighbour scaling; it is slower
than the current table-based version.
Use memcpy instead of orc_memcpy because it is measurably faster.
Fix YUY2 and friends vertical scaling.
video-scaler.c:1331:14: error: variable 'func' is used uninitialized whenever 'if' condition is false
[-Werror,-Wsometimes-uninitialized]
} else if (bits == 16) {
^~~~~~~~~~
video-scaler.c:1348:3: note: uninitialized use occurs here
func (scale, src_lines, dest, dest_offset, width, n_elems);
^~~~
video-scaler.c:1331:10: note: remove the 'if' if its condition is always true
} else if (bits == 16) {
^~~~~~~~~~~~~~~~
video-scaler.c:1260:27: note: initialize the variable 'func' to silence this warning
GstVideoScalerVFunc func;
^
= NULL
video-converter.c:3406:12: error: implicit conversion from enumeration type 'GstVideoFormat' to different
enumeration type 'GstFormat' [-Werror,-Wenum-conversion]
format = convert->fformat[plane];
~ ^~~~~~~~~~~~~~~~~~~~~~~
video-converter.c:3413:44: error: implicit conversion from enumeration type 'GstFormat' to different enumeration
type 'GstVideoFormat' [-Werror,-Wenum-conversion]
gst_video_scaler_horizontal (h_scaler, format,
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
video-converter.c:3471:12: error: implicit conversion from enumeration type 'GstVideoFormat' to different
enumeration type 'GstFormat' [-Werror,-Wenum-conversion]
format = convert->fformat[plane];
~ ^~~~~~~~~~~~~~~~~~~~~~~
video-converter.c:3487:42: error: implicit conversion from enumeration type 'GstFormat' to different enumeration
type 'GstVideoFormat' [-Werror,-Wenum-conversion]
gst_video_scaler_vertical (v_scaler, format, lines, d + out_x, i,
~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
video-converter.c:3551:12: error: implicit conversion from enumeration type 'GstVideoFormat' to different
enumeration type 'GstFormat' [-Werror,-Wenum-conversion]
format = convert->fformat[plane];
~ ^~~~~~~~~~~~~~~~~~~~~~~
video-converter.c:3569:46: error: implicit conversion from enumeration type 'GstFormat' to different enumeration
type 'GstVideoFormat' [-Werror,-Wenum-conversion]
gst_video_scaler_horizontal (h_scaler, format,
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
video-converter.c:3577:42: error: implicit conversion from enumeration type 'GstFormat' to different enumeration
type 'GstVideoFormat' [-Werror,-Wenum-conversion]
gst_video_scaler_vertical (v_scaler, format, lines, d + out_x, i,
~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
In function gst_video_scaler_vertical() the bits variable is always
set to either 8 or 16 in every possible format. No need to initialize it.
If the format isn't valid it goes to no_func, so there is no need to
handle the case of bits not being 8 or 16.
CID #1268401
Add API to add and get custom headers that are not
covered by our header fields enum. This is backwards
compatible in that it will also work for our defined
fields, so if we ever add a new header field to the
enum, get_header_by_name() for the same header string
will still work.
API: gst_rtsp_message_add_header_by_name()
API: gst_rtsp_message_take_header_by_name()
API: gst_rtsp_message_remove_header_by_name()
API: gst_rtsp_message_get_header_by_name()
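Intended usage, roughly (header name and value are arbitrary examples):

  gchar *value = NULL;

  gst_rtsp_message_add_header_by_name (msg, "X-Custom-Header", "1234");

  if (gst_rtsp_message_get_header_by_name (msg, "X-Custom-Header",
          &value, 0) == GST_RTSP_OK)
    g_print ("X-Custom-Header: %s\n", value);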
Add fastpaths for all planar conversion and scaling.
Improve gray and alpha handling.
Add option to specify the chroma resampler method and set to linear as
default.
video-converter.c:3645:24: error: implicit conversion from enumeration type
'GstFormat' to different enumeration type 'GstVideoFormat'
[-Werror,-Wenum-conversion]
convert->fformat = fformat;
~ ^~~~~~~
video-converter.c:3667:24: error: implicit conversion from enumeration type
'GstFormat' to different enumeration type 'GstVideoFormat'
[-Werror,-Wenum-conversion]
convert->fformat = fformat;
~ ^~~~~~~
video-converter.c:3963:50: error: implicit conversion from enumeration type
'const GstVideoFormat' to different enumeration type 'GstFormat'
[-Werror,-Wenum-conversion]
if (!setup_scale (convert, transforms[i].fformat))
~~~~~~~~~~~ ~~~~~~~~~~~~~~^~~~~~~
If we have timestamps on input buffers and are in trickmode no-audio
mode, then don't pass anything to the subclass for decode and simply
send gap events downstream
Only for forward playback for now - reverse requires accumulating
GAP events and pushing out in reverse order.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
In trickmode no-audio mode, or when receiving a GAP buffer,
discard the contents and render as a GAP event instead.
Make sure when rendering a gap event that the ring buffer will
restart on PAUSED->PLAYING by setting the eos_rendering flag.
This mostly reverts commit 8557ee and replaces it. The problem
with the previous approach is that it hangs in wait_preroll()
on a PLAYING-PAUSED transition because it doesn't commit state
properly.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
The decoder can fail to drain on EOS if there was only one gather
set, because it will never have sent the segment event downstream
and set the output segment, and will fail to detect that the rate is < 0.0.
Make sure to send pending events before sending all the gather data
for decode.
In gst_video_resampler_init () if method is GST_VIDEO_RESAMPLER_METHOD_NEAREST
then params.envelope is not initialized but still used later in line 382.
Make sure this variable is initialized to avoid undefined behaviour.
CID #1256568
Don't render out silence samples to a buffer, just
start the clock running, since any buffer with the
GAP flag will be discarded in render() now anyway.
Make the base audio sink throw away buffers marked GAP, or all
incoming buffers when performing a trick play with
GST_SEGMENT_TRICKMODE_NO_AUDIO flag set, and make sure to start
the ringbuffer when that happens so the clock starts running.
Preserve the timing calculations when rendering, so state is all
updated the same, but just don't render samples.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
video-converter.c:3073:48: error: implicit conversion from enumeration type 'GstFormat' to different enumeration type 'GstVideoFormat'
[-Werror,-Wenum-conversion]
gst_video_scaler_horizontal (h_scaler, format,
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
video-converter.c:3081:44: error: implicit conversion from enumeration type 'GstFormat' to different enumeration type 'GstVideoFormat'
[-Werror,-Wenum-conversion]
gst_video_scaler_vertical (v_scaler, format, lines, d, i, out_w);
~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~
video-converter.c:3137:24: error: implicit conversion from enumeration type 'const GstVideoFormat' to different enumeration type 'GstFormat'
[-Werror,-Wenum-conversion]
convert->fformat = GST_VIDEO_INFO_FORMAT (in_info);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../gst-libs/gst/video/video-info.h:125:43: note: expanded from macro 'GST_VIDEO_INFO_FORMAT'
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../gst-libs/gst/video/video-format.h:361:59: note: expanded from macro 'GST_VIDEO_FORMAT_INFO_FORMAT'
~~~~~~~~^~~~~~
video-converter.c:3157:24: error: implicit conversion from enumeration type 'GstVideoFormat' to different enumeration type 'GstFormat'
[-Werror,-Wenum-conversion]
convert->fformat = GST_VIDEO_FORMAT_GRAY8;
Add fast path scaling for YUY2 and other packed YUV formats. Add a new
method to merge the scalers of the Y and UV components into one scaler.
Add faster horizontal 2tap scaler.
See https://bugzilla.gnome.org/show_bug.cgi?id=741987
Some audio sink sub-classes (pulsesink) don't start their clock
when the ringbuffer starts, but always have to on EOS. When we
explicitly need to start the ringbuffer, make sure sub-classes will
do it by (ab)using the existing eos_rendering flag.
Add support for scaling of images with pstride == 1. This can be used
to scale individual planes later.
Rework some of the scaling code to take the pstride as a parameter.
CLAMP checks both if value is '< 0' and '> max'. Value will never be a negative
number since it is an unsigned integer. Removing that check and only checking if
it is bigger than max and setting it appropriately.
CID 1256559
CLAMP checks both if n_taps is '< 0' and '> max_taps'. n_taps will never be a
negative number because it is an unsigned integer. Removing that check and only
making sure it isn't set bigger than max.
CID 1256558
Add an option to disable chroma resampling.
Improve the matrix option values so that you can choose to use the input
or output matrix or disable conversion.
The gst_discoverer_info_get_missing_elements_installer_details()
documentation and annotation says that the return value should be freed
with g_strfreev(), but actually it's owned by the GstDiscovererInfo
object and should definitely not get freed by the caller as well.
https://bugzilla.gnome.org/show_bug.cgi?id=742006
The video buffer pool updates the video alignment to respect stride alignment
requirements, but did not write the updated alignment back into the pool
configuration. This caused the user to get the wrong video alignment.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=741501
Otherwise calls to get the clock time might change its internal state
and the internal/external time used for calibration gets unbalanced, leading to
a clock jump.
https://bugzilla.gnome.org/show_bug.cgi?id=740834
This makes sure that the element is in the same state before start() is called
the very first time and every future call after the element was used already.
Also it ensures that we always have a clean state before start(), cleaned the
same way in every case.
The same was done already in the decoder, and we cleaned some state just above
manually that would also be taken care of by reset().
This makes sure that the element is in the same state before start() is called
the very first time and every future call after the element was used already.
The stop() vfunc might mess with some of the fields we have just
reset, which could cause memory leaks or invalid state being carried
over to later.
Also the stop() vfunc, or anything called from another thread until then,
might want to be able to use the fields that were just reset and
become confused because of that.
In the decoder we already had a workaround for things like this happening;
this workaround is not needed anymore.
The implementation of that vfunc might want to use the object lock for
something too. It's generally not a good idea to keep the object lock while
calling any function implemented elsewhere.
Also the ringbuffer can only be NULL at this point, remove a useless if block.
And in the sink actually hold the object lock while setting the ringbuffer on
the instance. Code accessing this is expected to use the object lock, so do it
here ourselves too.
Allows subclasses to do custom caps query replies.
Also exposes the standard caps query handler so subclasses can just
extend on top of it instead of reimplementing the caps query proxying.
Allows decoders to proxy downstream restrictions on caps.
Also implements the accept-caps query to prevent regressions caused by the
new fields on the return of a caps query that would cause the accept-caps
to fail as it uses subset caps comparisons.
Allows subclasses to do custom caps query replies.
Also exposes the standard caps query handler so subclasses can just
extend on top of it instead of reimplementing the caps query proxying.
https://bugzilla.gnome.org/show_bug.cgi?id=741263
With the new caps query results the caps returned might have extra fields
that are not required by the decoder (framerate for image decoders) and it
causes a regression making, for example, jpegdec reject caps that don't
have framerates.
The accept-caps implementation will do 2 checks:
1) Do subset check with the template caps, making sure all the required
fields that are present on the template are present on the received caps.
2) Do an intersection check with the result of a caps query, making sure
that downstream can accept the fields in the received caps.
https://bugzilla.gnome.org/show_bug.cgi?id=741263
Refactor the encoder's caps query proxying function to a common place
and use it in the videodecoder to proxy downstream restrictions.
The new function is private to the gstvideo lib.
https://bugzilla.gnome.org/show_bug.cgi?id=741263
Update the new buffer size after alignment in the pool configuration
before calling the parent set_config. This ensures that the parent knows
about the buffer size that we will allocate and makes the size check
work in the release_buffer method.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=741420
This reverts commit 406f32a946.
The problem was apparently that my video-orc.h was not updated and did not
include the prototype for that function. Only a "make clean" caused it to
be regenerated.
Avoid using a constant.
Avoid doing saturated adds; results are not supposed to overflow here.
Rework the C backup function a little in preparation for custom backup
functions in ORC.
See https://bugzilla.gnome.org/show_bug.cgi?id=741015
In some cases, the user might want the stream output by encodebin to
be in the exact same format throughout the whole stream. We should let the
user specify when this is the case. This commit adds some API to
GstEncodingProfile to determine whether the format can be renegotiated
after the encoding has started or not.
API:
gst_encoding_profile_set_allow_dynamic_output
gst_encoding_profile_get_allow_dynamic_output
https://bugzilla.gnome.org/show_bug.cgi?id=740214
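In application code this would look roughly like (a sketch, assuming an
existing profile):

  /* disallow renegotiation so the output format stays fixed for the
   * whole stream */
  gst_encoding_profile_set_allow_dynamic_output (profile, FALSE);

  if (!gst_encoding_profile_get_allow_dynamic_output (profile))
    g_print ("encodebin will keep the initially negotiated format\n");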
It will cause the frame to be initialized with inconsistent values that then
later can cause crashes or any other kind of interesting and hard to debug
bugs.
In cases where we just call orc directly this is somewhat
superfluous, but let's do it anyway for consistency. In
other cases the compiler can hopefully use this to optimise
memory access a little.
When using variable taps and when we are limiting the number of taps,
recalculate the lanczos parameters to match the clamped value.
Set the max number of taps to 128
Make a small object to hold a pool of allocated temp lines.
Keep track of how many temp lines each conversion stage needs and use
this to allocate just enough temp lines from the temp lines object.
When dealing with mixed interlaced, setup a scaler and chroma-resampler
for both interlaced and progressive frames and switch between them
depending on the interlace mode of the input frame.
Add an option to limit the number of taps to use in automatic mode. The
problem is that for lanczos, we might use more taps than what we can
handle with the current precision.
Rework the other options a little to make it nicer to set defaults.
Refactor GstVideoInfo init, make function to set default colorimetry.
Call fill_planes after we configure the GstVideoInfo with parameters
from the caps.
The size of the chroma planes for interlaced vertically subsampled
formats needs to be rounded up to 2; we have 2 fields, each with
the same amount of chroma lines.
Keep only 1 structure with all matrix information.
Add structure to hold gamma information.
Add more options to control gamma, primaries and color matrix handling.
Add functions to compute transformations to and from XYZ and use this
to convert between primaries.
Merge gamma into the convert to and from RGB stage.
Fix border val.
Simplify the fastpath table, remove unused fields, add some more checks.
Prepare for doing full gamma corrected conversion and scaling by first
splitting the conversions from and to RGB into separate steps.
Split scaling into downscaling and upscaling steps to be performed before
and after conversion respectively.
Fix clipping of images that are partially left of the video
surface, they would get clipped on the right side instead of
the left side, because the video unpack functions currently
ignore the x offset parameter. Work around that until that
is implemented.
https://bugzilla.gnome.org/show_bug.cgi?id=739281
In case of overlay being completely or partially outside
the video frame, the offset calculations are not right,
which resulted in the overlay not being displayed as
expected, or crashes due to invalid memory access.
When the overlay rectangle is completely outside,
we need not render the overlay at all.
For partial display of overlay rectangles, src_yoff
was not being calculated, hence it was always clipping
the bottom half of the overlay. By calculating the
src_yoff, the overlay is now clipped properly.
https://bugzilla.gnome.org/show_bug.cgi?id=739281
Make an ORC version of the 2x vertical upsampling code.
Improve unit tests, test chroma up and down sampling.
memset buffer in conversion to make valgrind happy.
Combine multiplies in 4x filters.
Rename conversion functions to make them nicer in orc.
Add ORC versions for various downsampling algorithms
Add unit test chroma resampler
We only need to do the horizontal subsampling on 1 line if we do it
after vertical subsampling and we also avoid doing vertical subsampling
on unused pixels.
Rework the converter, keep track of the conversion steps by chaining the
cache objects together. We can then walk the chain and decide the
optimal allocation pattern.
Remove the free function, we're not going to need this anytime soon.
Keep track of what output line we're constructing so that we can let the
allocator return a line directly into the target image when possible.
Directly read from the source pixels when possible.
We need to allocate the templine with the number of pixels we are going
to handle, which we only know for the vertical resampler when we are
asked to resample.
Add scaler functions for 16 bits formats.
Rename the scaler functions so that 16-bit versions don't look too
weird.
Remove old unused h_2tap functions
Fix v_ntap functions; they were using 1 tap too few.
Rework the way we track the current state of the video through the
different conversion phases and use this to make sure we use the right
format and pstride where needed.
A faster version of 4tap horizontal scaling causes segfaults in ORC
presumably because it uses too many registers so disable it to avoid
crashing in the ORC tests.
video-scaler.c:151:58: error: implicit conversion from enumeration type
'GstVideoScalerFlags' to different enumeration type
'GstVideoResamplerFlags' [-Werror,-Wenum-conversion]
gst_video_resampler_init (&scale->resampler, method, flags, out_size,
~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~
Only apply an offset that is a multiple of the subsampling. To handle
arbitrary offsets in the future, we need to be able to chroma-resample
part of the borders.
Add support for cropping the source and placing the converted image
into a rectangle in the destination frame.
Add an option to add a border and border color.
Add the old ORC functions for nearest and linear. Label them as Low
quality because they are not as accurate but ORC lacks opcodes to
express this for now.
Add a video scaler object built on top of the resampler. It has
implementation to deal with interlaced video as well as horizontal and
vertical scaling functions.
Use a LineCache object to track and process lines between unpack,
upsample, convert, downsample and pack stages. This simplifies the
main core processing function a lot and allows for future additions
easily.
Add support for interlaced formats in chroma up and downsampling.
There are a few rare but possible conditions where the
dest_width can be smaller than x. So we check this before assigning a negative
value to src_width, which is unsigned and would wrap around to a huge number that
can make videoblend segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=738242
This was never reset when going from PAUSED->READY and resulted
in encoders being not reusable after EOS. They just rejected any
buffer because they received EOS in their previous life.
The flag wasn't used anywhere except for rejecting buffers after
EOS, and this is now handled by GstPad directly.
The spec mentions a version of the MPEG-2 frame with a base frame and
extension frame. I don't have IEC 13818-3 to figure out what that is,
and don't see any references in search results, so it's a FIXME for now.
https://bugzilla.gnome.org/show_bug.cgi?id=736797
Move the conversion code used in videoconvert to the video library
and expose a simple but generic API to do arbitrary conversion. It can
currently do colorspace conversion but the plan is to add videoscale to
it as well.
See https://bugzilla.gnome.org/show_bug.cgi?id=732415
When playing chained data the audio ringbuffer is released and
then acquired again. This makes it reset the segbase/segdone
variables, but the next sample will be scheduled to play in
the next position (right after the sample from the previous media)
and, as the segdone is at 0, the audiosink will wait the duration
of this previous media before it can write and play the new data.
What happens is this:
pointer at 0, write to 698-1564, diff 698, segtotal 20, segsize 1764, base 0
it will have to wait the length of 698 samples before being able to write.
In a regular sample playback it looks like:
pointer at 677, write to 696-1052, diff 19, segtotal 20, segsize 1764, base 0
In this case it will write to the next available position and it
doesn't need to wait or fill with silence.
This solution is borrowed from pulsesink that resets the clock to
start again from 0, which makes it reset the time_offset to the time
of the last played sample. This is used to correct the place of
writing in the ringbuffer to the new start (0 again).
https://bugzilla.gnome.org/show_bug.cgi?id=737055
Move the assert to the error handling block at the end of the function so
the logging is still triggered. Reword the logging slightly and add another
comment to hint what went wrong.
Fixes #737138
Issue:
During a PAUSED->PLAYING transition when we are rendering an audio buffer in AudioBaseSink
we make adjustments to the sink's provided clock i.e. fix clock calibration using the external
pipeline clock, within "gst_audio_base_sink_sync_latency function inside gstaudiobasesink.c".
For the calibration adjustment we need to get the sink clock time using "gst_audio_clock_get_time".
But before calling "gst_audio_clock_get_time" we acquire the Object Lock on the Sink. If sink is
a pulsesink, "gst_audio_clock_get_time" internally calls "gst_pulsesink_get_time" which needs to
acquire Pulse Audio Main Loop Lock before querying Pulse Audio for its stream time using
"pa_stream_get_time". Please see "gst_pulsesink_get_time in pulsesink.c".
So the situation here is we have acquired the Object lock on Sink and need PA Main Loop Lock.
Now Pulse Audio Main Thread itself might be in the process of posting a stream status
message after Paused to Playing transition which in turn acquires the PA Main loop lock and
needs the Object Lock on Pulse Sink. This causes a deadlock with the earlier render thread.
Fix:
Do not acquire the object Lock on Sink before querying the time on PulseSink clock. This is
similar to the way we have used get_time at other places in the code. Acquire it after the
get_time call. This way PA Main loop will be able to post its stream status message by
acquiring the Sink Object lock and will eventually release its Main Loop lock needed for
gst_pulsesink_get_time to continue.
https://bugzilla.gnome.org/show_bug.cgi?id=736071
The timeout parameter is only allowed in a session response header
but some clients, like Honeywell VMS applications, send it as part
of the session request header. Ignore everything from the semicolon
to the end of the line when parsing session id.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=736267
Fixes a crash when controlsrc, readsrc or writesrc are modified from
gst_rtsp_source_dispatch_read/write and gst_rtsp_watch_reset at the
same time.
https://bugzilla.gnome.org/show_bug.cgi?id=735569
This avoids a race where we would get a new tag but we are already
prerolled and analyzing results.
It is the way it is supposed to be handled, as stated in the comment:
"If preroll is complete, drop these tags - the collected information is
possibly already being processed and adding more tags would be racy"
Reset last_timestamp_out when applying the output segment
change, to avoid decoder confusion over new timestamp timelines when
a seamless segment change happens.
Move some locks/unlocks to later when they're actually needed.
https://bugzilla.gnome.org/show_bug.cgi?id=734617
As was done for the base video decoder in commit 695675, don't
flush out the decoder on a new SEGMENT event. Segment events
may be a new segment, but are also often segment updates for
the current segment where the old data should be kept. For new
segments, a STREAM_START event will already trigger a drain, but
make sure to flush any remaining partial data then as well.
https://bugzilla.gnome.org/show_bug.cgi?id=734666
This fixes the reverse playback scenario when upstream is not fully
parsing the stream and does not send every keyframe chain separately
with the DISCONT flag on the keyframe.
To explain this, let's suppose we have this stream:
0 1 2 3 4 5 6 7 8
K K K
In most circumstances, the upstream parser will chain in the
decoder the buffers in the following order:
6 7 8 3 4 5 0 1 2
D D D
In this case, GstVideoDecoder will flush the parse queue every time
it receives discont (D) and we will eventually get in the output queue:
(flush here) 8 7 6 (flush here) 5 4 3 (flush here) 2 1 0
In case the upstream parser doesn't do this work, though,
GstVideoDecoder will receive the whole stream at once and will flush
the parse queue afterwards:
0 1 2 3 4 5 6 7 8
D
During the flush, it will look backwards for keyframes and will
decode in this order:
6 7 8 3 4 5 0 1 2
This is the same order that it would receive from upstream if
upstream was parsing and looking for the keyframes, only that now
there is no flushing of the output queue in between keyframes,
which will result in the output queue looking like this:
2 1 0 6 5 3 8 7 6
This will confuse downstream obviously and will play incorrectly.
This patch forces the decoder to flush the output queue every time
it picks a new keyframe to decode, so it will end up decoding 6 7 8
and then flushing before picking 3 for decoding, so the output will
get 8 7 6 before 6 5 3 and the video will play back correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=734441
Otherwise the pipeline would be in a wrong state and on the next
iteration any kind of error could happen.
Every time an error happens in a pipeline, the application has to set the
pipeline back to NULL instead of READY.
https://bugzilla.gnome.org/show_bug.cgi?id=733976
This prevents implementing the allocation query, as the format needs to be
known in order to determine the size and number of buffers needed.
Note: This may lead to a few regressions that will need fixing
https://bugzilla.gnome.org/show_bug.cgi?id=732288
Fixes regression introduced by:
commit b60888fd4b
Author: Michael Olbrich <m.olbrich@pengutronix.de>
Date: Tue May 20 11:18:56 2014 +0200
dmabuf: share the mapping with shared copies of the memory
https://bugzilla.gnome.org/show_bug.cgi?id=730441
Fix gst_video_decoder_parse_available() to really parse any pending
source data that is still available in the adapter. This is a memory
optimization to avoid expansion of video packets added to the adapter,
but also a fix to EOS condition when the subclass parse() function
ultimately only needed to call into gvd_have_frame() and no additional
source bytes were consumed, i.e. gvd_add_to_frame() is not called.
This situation can occur when decoding H.264 streams in byte-stream/nal
mode for instance. A decoder always requires the next NAL unit to be
parsed so that to determine picture boundaries. When a new picture is
found, no byte is consumed (i.e. gvd_add_to_frame() is not called)
but gvd_have_frame() is called (i.e. priv->current_frame is gone).
Also make sure to avoid infinite loops caused by incorrect subclass
parse() implementations. This can occur when no byte gets consumed
and no appropriate indication (GST_VIDEO_DECODER_FLOW_NEED_DATA) is
returned.
https://bugzilla.gnome.org/show_bug.cgi?id=731974
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make the MIKEY message and payload objects miniobjects so that they have
a GType and are refcounted.
We can reuse the dispose method to clear our payload objects.
Add some annotations.
Implement a copy function for the MIKEY message.
Fix the unit test.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=732589
Recognize H.264 Level 5.2, as exposed by modern 2160p30+ streams,
i.e. commonly known as 4K. Also add initial support for handling
Annex.G (SVC) profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=732269
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
With most decoder libraries, and especially when accessing codecs via
OpenMAX or similar APIs, we don't have the ability to properly relate
the output buffers to a number of input samples. And could e.g. get
a fractional number of input buffers decoded at a time.
Previously this would in the end lead to an error message and stopped
playback. Change it to a warning message instead and try to handle it
gracefully. In theory the subclass can now get timestamp tracking
wrong if it completely misuses the API, but if on average it behaves
correctly (and gst-omx and others do) it will continue to work properly.
Also add a test for the new behaviour.
We don't change it in the encoder yet as that requires more internal logic
changes AFAIU and I'm not aware of a case where this was a problem so far.
We are scaling from a unit in microseconds to a unit in ((1 << 32) per second).
We therefore scale the microseconds values by:
value of a second in the target unit (1 << 32)
--------------------------------------------------------------
value of a second in the origin format (1 000 000 microseconds)
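In code that would look roughly like this (a sketch, with time_us standing for
the microsecond value):

  /* microseconds -> units of (1 << 32) per second */
  guint64 q32_units = gst_util_uint64_scale (time_us,
      G_GUINT64_CONSTANT (1) << 32, G_USEC_PER_SEC);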
A simple '&' is not sufficient. With mmapping_flags == PROT_READ and
prot == PROT_READ|PROT_WRITE the check produces the wrong result.
Change the check to make sure that prot is a subset of mmapping_flags.
https://bugzilla.gnome.org/show_bug.cgi?id=730559
With lots of shared memory instances (e.g. created by a RTP payloader) the
overhead of duplicating the file descriptor and creating extra mappings is
significant. To avoid this, the parent memory maps the whole region and the
shared copies just reuse the same mapping.
https://bugzilla.gnome.org/show_bug.cgi?id=730441
Add a read source on the write socket when the tunnel is lost,
to be able to detect when the client closes the GET channel.
This is already done in gst_rtsp_source_dispatch_write but
only when the queue is empty.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730368
By re-using the uri argument for storing local data, we could end up in
a situation where we would free uri ... which would actually be the
string passed as argument.
Instead explicitly use a local variable. Fixes double-free issues.
CID #1212176
Buffer pool set_config() may return FALSE if the requested configuration needed
small changes. Re-get the config and try setting it again. This ensures we have
a configured pool if possible.
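The retry pattern looks roughly like this (a sketch):

  /* set_config() consumes the config; on FALSE the pool may have kept a
   * slightly adjusted configuration, so fetch it and try once more */
  if (!gst_buffer_pool_set_config (pool, config)) {
    config = gst_buffer_pool_get_config (pool);
    if (!gst_buffer_pool_set_config (pool, config))
      GST_WARNING_OBJECT (pool, "failed to configure buffer pool");
  }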
Currently the API is far from optimal and the user has to work around
our badly defined API to simply install missing plugins.
API:
new:
gst_discoverer_info_get_missing_elements_installer_details
deprecated:
gst_discoverer_info_get_misc
gst_discoverer_stream_info_get_misc
https://bugzilla.gnome.org/show_bug.cgi?id=720596
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
We were returning in various places without unreffing the caps, and
we were also leaking (overwriting) the caps we got from _get_current_caps()
Spotted by Haakon Sporsheim in #gstreamer
This should allow for more meaningful errors. Dereferencing NULL
is more useful information than dereferencing a random address
that happened to be on the stack.
If gst_video_overlay_rectangle_apply_global_alpha is called with
a rectangle with unsuitable alpha, expanding the alpha plane will
fail, and thus lead to dereferencing a NULL src pointer. It's not
certain this will happen in practice, as the function is static
and callers might ensure suitable alpha before calling, but there
is no apparent explicit such check.
Add prologue asserts for proper alpha to explicitly prevent this.
Coverity 1139707
Videodecoder does late renegotiation, it will wait for the next
buffer before renegotiating its caps and bufferpool. It might happen
that a downstream element switched from passthrough to non-passthrough
and sent a reconfigure upstream (that caused this renegotiation).
This downstream element will ask the video sink below for the bufferpool
with an allocation query and will get the same bufferpool that
videodecoder is holding, too.
When renegotiating, if videodecoder deactivates its bufferpool it
might be deactivating the bufferpool that some element downstream
is using and cause the pipeline to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=727498
Clock slaving can clip the start time to zero, giving us a shorter
duration than we originally got. To keep in sync, we must then
discard the samples falling before that zero timestamp.
This possibly fixes random distortion caused by constant PA
underflows which are never resynced.
The KEMAC payload actually needs to have subpayloads and the key should
go into the KEY_DATA subpayload. Add support for subpayloads and
implement the KEY_DATA payload.
Add some pointers to the conversion functions that allow us to add
encryption and decryption later.
baseparse will reverse each GOP for us already, so the segment events can
be after our keyframe. Make sure to get it and all other relevant sticky
events before starting to decode.
MIKEY is defined in RFC 3830 and is used to exchange SRTP encryption
parameters between a sender and a receiver in a secure way.
This library implements a subset of the features, enough to implement
RFC 4567, using MIKEY in SDP and RTSP.
* Only check for conditions we are interested in.
* Makes no sense to specify G_IO_ERR and G_IO_HUP in condition, they
will always be reported if they are true.
* Do not create timed source if timeout is NULL.
* Correctly wait for sources to be dispatched, context_iteration() is
not guaranteed to always block even if set to do so.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=726641
Previously the sequence number kept track of by GstRTPBasePayload would
only be set when going from READY to PAUSED state. This meant that a
downstream element that attempted to configure a basepayloader by
setting seqnum-offset e.g. in its sinkpad's caps template would have
trouble configuring the basepayloader. The reason was that the caps
event which arrives with the desired value for seqnum-offset did not
arrive at the basepayloader until caps negotiation took place,
significantly later than the transition from READY to PAUSED.
The result after this patch is that the default value for the
seqnum-offset property, or later set values for this property, will take
effect when going from READY to PAUSED like before. In addition, an
arriving caps event will also affect the basepayloader's configured
sequence number as the event arrives.
The payload type field in an RTP packet header is 7 bits wide, hence the
boundary values ought to be 0x00 and 0x7f, not the previously stated
values 0x00 and 0x80.
Two new functions have been added,
gst_rtsp_connection_set_tls_database() and
gst_rtsp_connection_get_tls_database(). The certificate database will be
used when a certificate can't be verified with the default database.
https://bugzilla.gnome.org/show_bug.cgi?id=724393
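Example usage (a sketch; the CA file path is made up):

  GError *error = NULL;
  GTlsDatabase *db;

  db = g_tls_file_database_new ("/etc/ssl/certs/my-ca.pem", &error);
  if (db != NULL) {
    gst_rtsp_connection_set_tls_database (conn, db);
    g_object_unref (db);
  }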
This was a regression introduced by f52fd7a68, where we started using
the stride to encode the dimensions in tiles. This patch simply updates
offset and size calculation as described in the documentation,
part-mediatype-video-raw.txt.
Fixes a problem in audioconvert, which would end up using
a mixmatrix when converting between different mono formats
because it thinks MONO positioning is different from
unpositioned channels, which is not the case in this
special case. The mixmatrix would end up being 0.0 so
audioconvert would convert to silence samples.
https://bugzilla.gnome.org/show_bug.cgi?id=724509
* Change running time type to guint64
* Use GST_CLOCK_TIME_NONE() to check for invalid timestamps
* Name variables so ns-based and hz-based timestamps are evident
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=719383
We call the _get_time function from the provided clock and we don't lock
the sink object for performance reasons. Make sure we only read and
check variables once so that they don't change while we are executing
the code.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=720661
For default caps generation when handling gap events that are sent
before any buffer, try to use caps that are closer to what upstream
provided to avoid fixating rate or channels to 1 as default.
These are the steps:
1) Try to set rate, channels and channel-mask from upstream if provided
2) Fixate the rate and channels to the default rate and channels from
audio lib
3) Fixate the caps just to be sure everything is fixed
4) If no channel-mask was provided and channels > 2, use a default
channel-mask (taken from audioconvert code)
https://bugzilla.gnome.org/show_bug.cgi?id=722144
Before trying to generate a default fixated caps when handling a gap
event, make sure that the same strategy that is used when handling
a buffer has been attempted. Otherwise audiodecoder will ignore
upstream caps settings such as rate and channels and will likely
end with a caps with channels=1 and rate=1.
https://bugzilla.gnome.org/show_bug.cgi?id=722144
Instead of using an extra plane, we encode the number of tiles in x and y in the stride of
each plane (i.e. y_tiles << 16 | x_tiles) and introduce tile_mode, tile_width and
tile_height into GstVideoFormatInfo structure.
https://bugzilla.gnome.org/show_bug.cgi?id=707361
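Assuming the helper macros from video-tile.h, packing and unpacking the tile
counts looks like this (sketch):

  gint stride = GST_VIDEO_TILE_MAKE_STRIDE (x_tiles, y_tiles);

  g_assert (GST_VIDEO_TILE_X_TILES (stride) == x_tiles);
  g_assert (GST_VIDEO_TILE_Y_TILES (stride) == y_tiles);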
For reverse playback, the segment event will only be pushed when
the first buffer is actually pushed. But for decoding frames and storing
those into the list to be pushed the output_segment.rate value is used
to determine if it is forward or reverse playback.
In case a previous segment event (or none) is in use it will mistakenly
think it is doing forward playback and push the buffers immediately and
try to clip buffers based on an old segment (or an uninitialized one, leading
to an assertion)
This patch fixes this by copying the segment earlier if on reverse playback
https://bugzilla.gnome.org/show_bug.cgi?id=721666
Add a method to make a media-type from the transport. Deprecate the old
method that only used the mode.
Based on patch from Aleix Conchillo Flaqué <aleix@oblong.com>
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=720219
Port a change from audiobasesink from def07410, to ignore setcaps
when the caps don't actually change, and avoid a reconfiguration
and reset of the ringbuffer in that case.
And don't assume in other code that set_format() preserves any fields at
all. These assumptions were already made here for fields that were changed
by set_format().
If there are no caps from the audio decoder when handling a GAP
event - as when one is received right at the start on a DVD without
initial audio - then choose any default caps for downstream and
then send the GAP, so the audio sink has a configured format in
which to start the ringbuffer.
Also, make the audio sink reject a GAP without caps with a clearer
error message.
Fixes bug https://bugzilla.gnome.org/show_bug.cgi?id=603921
Fixes "Unitialized Scalar Variable" issues reported by Coverity.
Has the added advantage of detecting whether somebody *does* use those
fields (ending up with an invalid address).
https://bugzilla.gnome.org/show_bug.cgi?id=720810
This must only ever be used in caps in combination with a non-system
memory GstCapsFeatures, and where it does not make sense to specify
any of the other video formats. Examples of this would be in gst-vaapi.
This reverts commit 5fcdabd907.
Instead of making it impossible to use the ENCODED format we should
just document that it must not be used for capsfeature-less caps.
Also this commit broke API/ABI.
GST_VIDEO_FORMAT_ENCODED was added to support *extracting* video-related
information (like width, height, framerate,...) from caps.
It is __NOT__ intended to be used as a format field on video/x-raw caps.
So that it avoids sending an allocation query twice.
One from an early call to gst_audio_encoder_negotiate from a
subclass, then one from gst_audio_encoder_allocate_output_buffer.
Which means that previously gst_audio_encoder_negotiate was not
clearing the GST_PAD_FLAG_NEED_RECONFIGURE even on success.
Fixes bug https://bugzilla.gnome.org/show_bug.cgi?id=719684