Refactor GstVideoInfo init, add a function to set the default colorimetry.
Call fill_planes after we configure the GstVideoInfo with parameters
from the caps.
The size of the chroma planes for interlaced vertically subsampled
formats needs to be rounded up to a multiple of 2: we have 2 fields,
each with the same amount of chroma lines.
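As a worked illustration (a sketch assuming a 4:2:0 layout and the
GST_ROUND_UP_2 helper; the real code may differ):

  guint field_height = GST_ROUND_UP_2 (height) / 2;
  guint chroma_per_field = GST_ROUND_UP_2 (field_height) / 2;
  guint chroma_lines = 2 * chroma_per_field;
  /* e.g. height 486 -> 244 chroma lines when interlaced,
   * but only 243 when progressive */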
Keep only 1 structure with all matrix information.
Add structure to hold gamma information.
Add more options to control gamma, primaries and color matrix handling.
Add functions to compute transformations to and from XYZ and use this
to convert between primaries.
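As a rough sketch of the math involved (standard colorimetry, not
necessarily the literal code): if Msrc and Mdst are the matrices mapping
linear RGB to XYZ for the source and destination primaries, converting
between primaries amounts to applying inverse(Mdst) * Msrc to the linear
RGB values.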
Merge gamma into the convert to and from RGB stage.
Fix border val.
Simplify the fastpath table, remove unused fields, add some more checks.
Prepare for doing full gamma corrected conversion and scaling by first
splitting the conversions from and to RGB into separate steps.
Split scaling into downscaling and upscaling steps to be performed
before and after conversion, respectively.
Fix clipping of images that are partially left of the video
surface; they would get clipped on the right side instead of
the left side because the video unpack functions currently
ignore the x offset parameter. Work around that until it
is implemented.
https://bugzilla.gnome.org/show_bug.cgi?id=739281
In case of the overlay being completely or partially outside
the video frame, the offset calculations were not right,
which resulted in the overlay not being displayed as
expected, or in crashes due to invalid memory access.
When the overlay rectangle is completely outside,
we need not render the overlay at all.
For partial display of overlay rectangles, src_yoff
was not being calculated, hence it was always clipping
the bottom half of the overlay. By calculating
src_yoff, the overlay is now clipped properly.
https://bugzilla.gnome.org/show_bug.cgi?id=739281
Make an ORC version of the 2x vertical upsampling code.
Improve unit tests, test chroma up and down sampling.
memset buffer in conversion to make valgrind happy.
Combine multiplies in 4x filters.
Rename conversion functions to make them nicer in orc.
Add ORC versions for various downsampling algorithms.
Add a unit test for the chroma resampler.
We only need to do the horizontal subsampling on 1 line if we do it
after vertical subsampling and we also avoid doing vertical subsampling
on unused pixels.
Rework the converter, keep track of the conversion steps by chaining the
cache objects together. We can then walk the chain and decide the
optimal allocation pattern.
Remove the free function, we're not going to need this anytime soon.
Keep track of what output line we're constructing so that we can let the
allocator return a line directly into the target image when possible.
Directly read from the source pixels when possible.
We need to allocate the templine with the number of pixels we are going
to handle, which we only know for the vertical resampler when we are
asked to resample.
Add scaler functions for 16-bit formats.
Rename the scaler functions so that the 16-bit versions don't look too
weird.
Remove old unused h_2tap functions.
Fix v_ntap functions; they were using 1 tap too few.
Rework the way we track the current state of the video through the
different conversion phases and use this to make sure we use the right
format and pstride where needed.
A faster version of 4tap horizontal scaling causes segfaults in ORC,
presumably because it uses too many registers, so disable it to avoid
crashing in the ORC tests.
video-scaler.c:151:58: error: implicit conversion from enumeration type
'GstVideoScalerFlags' to different enumeration type
'GstVideoResamplerFlags' [-Werror,-Wenum-conversion]
gst_video_resampler_init (&scale->resampler, method, flags, out_size,
~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~
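A likely shape of the fix (a sketch; the actual change may differ) is an
explicit cast at the call site quoted above:

  gst_video_resampler_init (&scale->resampler, method,
      (GstVideoResamplerFlags) flags, out_size, ...);

(the trailing arguments are elided here just as in the quoted error).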
Only apply an offset that is a multiple of the subsampling. To handle
arbitrary offsets in the future, we need to be able to chroma-resample
part of the borders.
Add support for cropping the source and placing the converted image
into a rectangle in the destination frame.
Add an option to add a border and border color.
Add the old ORC functions for nearest and linear. Label them as low
quality because they are not as accurate, but ORC lacks the opcodes to
express this better for now.
Add a video scaler object built on top of the resampler. It deals with
interlaced video and provides horizontal and vertical scaling
functions.
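A rough usage sketch of the new object (method, flags, format and sizes
are illustrative; options are left at their defaults):

  GstVideoScaler *scale;

  scale = gst_video_scaler_new (GST_VIDEO_RESAMPLER_METHOD_LINEAR,
      GST_VIDEO_SCALER_FLAG_NONE, 2, in_width, out_width, NULL);
  /* scale one line of packed ARGB pixels into the destination line */
  gst_video_scaler_horizontal (scale, GST_VIDEO_FORMAT_ARGB,
      src_line, dest_line, 0, out_width);
  gst_video_scaler_free (scale);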
Use a LineCache object to track and process lines between unpack,
upsample, convert, downsample and pack stages. This simplifies the
main core processing function a lot and makes future additions easy.
Add support for interlaced formats in chroma up and downsampling.
There are a few rare but possible conditions where dest_width can be
smaller than x. So we check this before assigning a negative value to
src_width, which is unsigned and would otherwise be promoted to a huge
number that can make videoblend segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=738242
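A minimal sketch of the guard (variable names from the description,
control flow illustrative):

  if (dest_width <= x)
    return;                     /* nothing of the overlay is visible */
  src_width = dest_width - x;   /* never assigns a negative value to the
                                 * unsigned src_width */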
This was never reset when going from PAUSED->READY, which resulted
in encoders not being reusable after EOS. They just rejected any
buffer because they had received EOS in their previous life.
The flag wasn't used anywhere except for rejecting buffers after
EOS, and this is now handled by GstPad directly.
The spec mentions a version of the MPEG-2 frame with a base frame and
extension frame. I don't have IEC 13818-3 to figure out what that is,
and don't see any references in search results, so it's a FIXME for now.
https://bugzilla.gnome.org/show_bug.cgi?id=736797
Move the conversion code used in videoconvert to the video library
and expose a simple but generic API to do arbitrary conversion. It can
currently do colorspace conversion but the plan is to add videoscale to
it as well.
See https://bugzilla.gnome.org/show_bug.cgi?id=732415
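A rough usage sketch of the new API (assumes in_info/out_info and mapped
in_frame/out_frame are already set up; the config is left at its
defaults):

  GstVideoConverter *convert;

  convert = gst_video_converter_new (&in_info, &out_info, NULL);
  gst_video_converter_frame (convert, &in_frame, &out_frame);
  gst_video_converter_free (convert);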
When playing chained data the audio ringbuffer is released and
then acquired again. This makes it reset the segbase/segdone
variables, but the next sample will be scheduled to play in
the next position (right after the sample from the previous media)
and, as the segdone is at 0, the audiosink will wait the duration
of this previous media before it can write and play the new data.
What happens is this:
pointer at 0, write to 698-1564, diff 698, segtotal 20, segsize 1764, base 0
it will have to wait the length of 698 samples before being able to write.
In a regular sample playback it looks like:
pointer at 677, write to 696-1052, diff 19, segtotal 20, segsize 1764, base 0
In this case it will write to the next available position and it
doesn't need to wait or fill with silence.
This solution is borrowed from pulsesink, which resets the clock to
start again from 0; this makes it reset the time_offset to the time
of the last played sample. This is used to correct the write position
in the ringbuffer to the new start (0 again).
https://bugzilla.gnome.org/show_bug.cgi?id=737055
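A hedged sketch of the approach (mirroring what pulsesink does; the
exact call site and cast may differ by version):

  /* when the ringbuffer is re-acquired and restarts from position 0,
   * restart the provided audio clock from 0 as well so that
   * time_offset again matches the last played sample */
  gst_audio_clock_reset (sink->provided_clock, 0);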
Move the assert to the error handling block at the end of the function so
that the logging is still triggered. Reword the logging slightly and add
another comment to hint at what went wrong.
Fixes #737138
Issue:
During a PAUSED->PLAYING transition when we are rendering an audio buffer in AudioBaseSink
we make adjustments to the sink's provided clock i.e. fix clock calibration using the external
pipeline clock, within "gst_audio_base_sink_sync_latency function inside gstaudiobasesink.c".
For the calibration adjustment we need to get the sink clock time using "gst_audio_clock_get_time".
But before calling "gst_audio_clock_get_time" we acquire the Object Lock on the Sink. If sink is
a pulsesink, "gst_audio_clock_get_time" internally calls "gst_pulsesink_get_time" which needs to
acquire Pulse Audio Main Loop Lock before querying Pulse Audio for its stream time using
"pa_stream_get_time". Please see "gst_pulsesink_get_time in pulsesink.c".
So the situation here is we have acquired the Object lock on Sink and need PA Main Loop Lock.
Now Pulse Audio Main Thread itself might be in the process of posting a stream status
message after Paused to Playing transition which in turn acquires the PA Main loop lock and
needs the Object Lock on Pulse Sink. This causes a deadlock with the earlier render thread.
Fix:
Do not acquire the object Lock on Sink before querying the time on PulseSink clock. This is
similar to the way we have used get_time at other places in the code. Acquire it after the
get_time call. This way PA Main loop will be able to post its stream status message by
acquiring the Sink Object lock and will eventually release its Main Loop lock needed for
gst_pulsesink_get_time to continue.
https://bugzilla.gnome.org/show_bug.cgi?id=736071
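A simplified sketch of the reordering (not the literal diff; error paths
omitted):

  /* take the sink clock time first, without the sink object lock held,
   * so the PA mainloop thread can post its stream-status message and
   * then release the mainloop lock that gst_pulsesink_get_time needs */
  itime = gst_audio_clock_get_time (sink->provided_clock);
  GST_OBJECT_LOCK (sink);
  /* ... use itime for the clock calibration adjustment ... */
  GST_OBJECT_UNLOCK (sink);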
The timeout parameter is only allowed in a session response header,
but some clients, like Honeywell VMS applications, send it as part
of the session request header. Ignore everything from the semicolon
to the end of the line when parsing the session id.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=736267
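A minimal sketch of the parsing tweak (the variable name is
illustrative):

  /* such clients append ";timeout=..." to the Session request header;
   * keep only the part before the semicolon as the session id */
  char *sep = strchr (session_id, ';');
  if (sep != NULL)
    *sep = '\0';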
Fixes a crash when controlsrc, readsrc or writesrc are modified from
gst_rtsp_source_dispatch_read/write and gst_rtsp_watch_reset at the
same time.
https://bugzilla.gnome.org/show_bug.cgi?id=735569
This avoids a race where we would get a new tag while we are already
prerolled and analyzing the results.
It is the way it is supposed to be handled, as stated in the comment:
"If preroll is complete, drop these tags - the collected information is
possibly already being processed and adding more tags would be racy"
Reset last_timestamp_out when applying the output segment
change, to avoid decoder confusion over new timestamp timelines when
a seamless segment change happens.
Move some locks/unlocks to later when they're actually needed.
https://bugzilla.gnome.org/show_bug.cgi?id=734617
As was done for the base video decoder in commit 695675, don't
flush out the decoder on a new SEGMENT event. Segment events
may be a new segment, but are also often segment updates for
the current segment where the old data should be kept. For new
segments, a STREAM_START event will already trigger a drain, but
make sure to flush any remaining partial data then as well.
https://bugzilla.gnome.org/show_bug.cgi?id=734666
This fixes the reverse playback scenario when upstream is not fully
parsing the stream and does not send every keyframe chain separately
with the DISCONT flag on the keyframe.
To explain this, let's suppose we have this stream:
0 1 2 3 4 5 6 7 8
K K K
In most circumstances, the upstream parser will chain in the
decoder the buffers in the following order:
6 7 8 3 4 5 0 1 2
D D D
In this case, GstVideoDecoder will flush the parse queue every time
it receives discont (D) and we will eventually get in the output queue:
(flush here) 8 7 6 (flush here) 5 4 3 (flush here) 2 1 0
In case the upstream parser doesn't do this work, though,
GstVideoDecoder will receive the whole stream at once and will flush
the parse queue afterwards:
0 1 2 3 4 5 6 7 8
D
During the flush, it will look backwards for keyframes and will
decode in this order:
6 7 8 3 4 5 0 1 2
This is the same order that it would receive from upstream if
upstream was parsing and looking for the keyframes, only that now
there is no flushing of the output queue in between keyframes,
which will result in the output queue looking like this:
2 1 0 5 4 3 8 7 6
This will obviously confuse downstream and the video will play back
incorrectly.
This patch forces the decoder to flush the output queue every time
it picks a new keyframe to decode, so it will end up decoding 6 7 8
and then flushing before picking 3 for decoding; the output will
get 8 7 6 before 5 4 3 and the video will play back correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=734441
Otherwise the pipeline would be in a wrong state and on the next
iteration any kind of error could happen.
Every time an error happens in a pipeline, the application has to set the
pipeline back to NULL instead of READY.
https://bugzilla.gnome.org/show_bug.cgi?id=733976
This prevents implementing the allocation query, as the format needs to be
known in order to determine the size and number of buffers needed.
Note: This may lead to a few regressions that will need fixing.
https://bugzilla.gnome.org/show_bug.cgi?id=732288
Fixes regression introduced by:
commit b60888fd4b
Author: Michael Olbrich <m.olbrich@pengutronix.de>
Date: Tue May 20 11:18:56 2014 +0200
dmabuf: share the mapping with shared copies of the memory
https://bugzilla.gnome.org/show_bug.cgi?id=730441
Fix gst_video_decoder_parse_available() to really parse any pending
source data that is still available in the adapter. This is a memory
optimization to avoid expansion of video packets added to the adapter,
but also a fix to the EOS condition when the subclass parse() function
ultimately only needed to call into gvd_have_frame() and no additional
source bytes were consumed, i.e. gvd_add_to_frame() is not called.
This situation can occur when decoding H.264 streams in byte-stream/nal
mode for instance. A decoder always requires the next NAL unit to be
parsed in order to determine picture boundaries. When a new picture is
found, no byte is consumed (i.e. gvd_add_to_frame() is not called)
but gvd_have_frame() is called (i.e. priv->current_frame is gone).
Also make sure to avoid infinite loops caused by incorrect subclass
parse() implementations. This can occur when no byte gets consumed
and no appropriate indication (GST_VIDEO_DECODER_FLOW_NEED_DATA) is
returned.
https://bugzilla.gnome.org/show_bug.cgi?id=731974
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make the MIKEY message and payload objects miniobjects so that they have
a GType and are refcounted.
We can reuse the dispose method to clear our payload objects.
Add some annotations.
Implement a copy function for the MIKEY message.
Fix the unit test.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=732589
Recognize H.264 Level 5.2, as exposed by modern 2160p30+ streams,
i.e. commonly known as 4K. Also add initial support for handling
Annex.G (SVC) profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=732269
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
With most decoder libraries, and especially when accessing codecs via
OpenMAX or similar APIs, we don't have the ability to properly relate
the output buffers to a number of input samples, and could e.g. get
a fractional number of input buffers decoded at a time.
Previously this would in the end lead to an error message and stopped
playback. Change it to a warning message instead and try to handle it
gracefully. In theory the subclass can now get timestamp tracking
wrong if it completely misuses the API, but if on average it behaves
correctly (and gst-omx and others do) it will continue to work properly.
Also add a test for the new behaviour.
We don't change it in the encoder yet as that requires more internal logic
changes AFAIU and I'm not aware of a case where this was a problem so far.
We are scaling from a unit in microseconds to a unit of (1 << 32) per second.
We therefore scale the microseconds values by:
value of a second in the target unit (1 << 32)
--------------------------------------------------------------
value of a second in the origin format (1 000 000 microsecond)
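With the usual GStreamer helper this can be written as (variable names
are illustrative):

  /* microseconds -> units of (1 << 32) per second */
  guint64 ext = gst_util_uint64_scale (us_value,
      G_GUINT64_CONSTANT (1) << 32, G_GUINT64_CONSTANT (1000000));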
A simple '&' is not sufficient. With mmapping_flags == PROT_READ and
prot == PROT_READ|PROT_WRITE the check produces the wrong result.
Change the check to make sure that prot is a subset of mmapping_flags.
https://bugzilla.gnome.org/show_bug.cgi?id=730559
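A minimal illustration of the difference (flag names taken from the
description; use_existing_mapping() is a placeholder):

  /* wrong: true as soon as any bit overlaps, so PROT_READ|PROT_WRITE
   * wrongly matches a mapping that only has PROT_READ */
  if (mmapping_flags & prot)
    use_existing_mapping ();

  /* right: only reuse the mapping when prot is a subset of
   * mmapping_flags */
  if ((mmapping_flags & prot) == prot)
    use_existing_mapping ();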
With lots of shared memory instances (e.g. created by a RTP payloader) the
overhead of duplicating the file descriptor and creating extra mappings is
significant. To avoid this, the parent memory maps the whole region and the
shared copies just reuse the same mapping.
https://bugzilla.gnome.org/show_bug.cgi?id=730441
Add a read source on the write socket when the tunnel is lost,
to be able to detect when the client closes the GET channel.
This is already done in gst_rtsp_source_dispatch_write, but
only when the queue is empty.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730368
By re-using the uri argument for storing local data, we could end up in
a situation where we would free uri ... which would actually be the
string passed as an argument.
Instead explicitly use a local variable. Fixes double-free issues.
CID #1212176
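A sketch of the safer pattern (names are hypothetical; the point is to
not reuse the argument):

  /* keep local modifications in a separate string instead of reusing
   * the caller's 'uri' pointer, so freeing it cannot touch the
   * caller's memory */
  gchar *local_uri = g_strdup (uri);
  /* ... manipulate local_uri ... */
  g_free (local_uri);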
Buffer pool set_config() may return FALSE if the requested configuration needed
small changes. Re-get the config and try setting it again. This ensures we have
a configured pool if possible.
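A rough sketch of the retry (error handling beyond the retry is
omitted):

  if (!gst_buffer_pool_set_config (pool, config)) {
    /* the pool may have adjusted the config; fetch the updated one
     * and try once more */
    config = gst_buffer_pool_get_config (pool);
    if (!gst_buffer_pool_set_config (pool, config))
      return FALSE;
  }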