It's a waste of resources to map it if it won't be converted
or used at all. Since we moved the frame mapping down, we need
to use the GST_VIDEO_INFO accessor macros now in the code above
that instead of the GST_VIDEO_FRAME accessor macros.
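A minimal sketch of the kind of change needed in the code above the (now later)
mapping; the pad field name is assumed, not taken from the actual code:

  gint width, height;

  /* the frame is not mapped yet at this point, so read the dimensions
   * from the GstVideoInfo instead of from the GstVideoFrame */
  width = GST_VIDEO_INFO_WIDTH (&pad->info);    /* was GST_VIDEO_FRAME_WIDTH (frame) */
  height = GST_VIDEO_INFO_HEIGHT (&pad->info);  /* was GST_VIDEO_FRAME_HEIGHT (frame) */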
https://bugzilla.gnome.org/show_bug.cgi?id=746147
For each frame, compare the frame boundaries, check if the format contains an
alpha channel, check opacity, and skip the frame if it's going to be completely
overwritten by a higher zorder frame. The check is O(n^2), but that doesn't
matter here because the number of sinkpads is small.
More can be done to avoid needless drawing, but this covers the majority of
cases. See TODOs. Ideally, a reverse painter's algorithm should be used for
optimal drawing, but memcpy during compositing is small compared to the CPU used
for frame conversion on each pad.
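A minimal sketch of the idea (not the actual compositor code; the helper names
are hypothetical):

  /* pads sorted by ascending zorder: skip a frame that is completely
   * covered by a fully opaque frame with a higher zorder */
  for (i = 0; i < n_pads; i++) {
    for (j = i + 1; j < n_pads; j++) {
      if (is_fully_opaque (pads[j]) &&
          rect_contains (frame_rect (pads[j]), frame_rect (pads[i]))) {
        pads[i]->skip = TRUE;   /* obscured: don't map, convert or draw it */
        break;
      }
    }
  }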
https://bugzilla.gnome.org/show_bug.cgi?id=746147
Don't use the APIs in codec-utils to extract the profile, tier and level
syntax elements, since the result is wrong if there are emulation prevention
bytes in the byte-stream data.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
Free the existing descriptor array, if any, before replacing it.
Fix leaks with the
validate.file.playback.scrub_forward_seeking.test-mpeg2-mp3_mxf scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=748580
If the stream which is about to be removed still has a ref on a tag list we
should drop it.
Fix a leak which was occasionally happening with the
validate.file.playback.change_state_intensive.tron_en_ge_aac_h264_ts scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=748576
Replace videocrop ! videoscale ! capsfilter with the digitalzoom
bin that has the same pipeline internally and already updates
the capsfilter automatically when caps change, removing this code
from wrappercamerabinsrc and making it cleaner.
Avoids one extra unneeded renegotiation if the elements are already
configured to their final property values when the caps event
goes through.
Also avoids hitting bug https://bugzilla.gnome.org/show_bug.cgi?id=748344
It contains videocrop ! videoscale ! capsfilter and implements digital
zooming.
At this moment, it is a private element of the camerabin plugin.
This removes some code from wrappercamerabinsrc, making the code
clearer, and digitalzoom can potentially be used by other
applications in the future, as it has nothing camerabin-specific.
wrappercamerabinsrc has a videocrop element that is used for
zooming and for cropping when the input caps differ from the requested
ones when used with the GstPhotography interface. The zooming part needs
the following elements:
capsfilter ! videocrop ! videoscale ! capsfilter
The capsfilters should always have the same caps to ensure the
zooming is done while preserving the dimensions; except that when more
cropping is needed because of the input dimensions, those caps
need to be modified accordingly to preserve the output dimensions.
This, however, makes it hard to get caps negotiation to work properly,
as we need to have different caps in the capsfilters to account for
the extra cropping needed. It could be simple for fixed caps, but it
gets tricky with unfixed ones.
To solve this, this patch splits the zooming and dimension reduction
cropping into 2 separate videocrop elements. The first one does
the dimension cropping, which is only needed when the GstPhotography
API is used and the source provides caps different from what is
requested, while the second is dedicated to zoom cropping only.
The first part of the pipeline goes from:
src ! videoconvert ! capsfilter ! videocrop ! videoscale ! capsfilter
to
src ! videocrop ! videoconvert ! capsfilter ! videocrop ! videoscale ! capsfilter
It might add some extra overhead in image capture, as the image might need
to be cropped twice, but this can be solved by enabling videocrop to use
crop metas so that only the latter one does the real cropping.
It also makes the code a bit simpler.
Remove tee and output-selector and just link the source
pad to the outputs we want as needed.
The way we need to prioritize caps negotiation and allocation
queries depending on the mode enabled is too custom to be
handled using tee and output-selector.
This provides more flexibility and doesn't get in the way of proper
handling of negotiation and allocation queries.
The detection for missing format/alignment is done way before this
codepath is reached (at which point we have already decided on a
format and alignment).
CID #1232800
When the block width property is set to 0, divide-by-zero errors occur
in the calculations. The block width property can never be 0, hence
the minimum value is adjusted to 1.
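A sketch of the property registration with the raised minimum (property and
constant names assumed, not taken from the actual element):

  g_object_class_install_property (gobject_class, PROP_BLOCK_WIDTH,
      g_param_spec_uint ("block-width", "Block width",
          "Width of the blocks used by the effect",
          1 /* was 0 */, G_MAXUINT, DEFAULT_BLOCK_WIDTH,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));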
https://bugzilla.gnome.org/show_bug.cgi?id=744188
Such seeks are used to change playback rate and we do not want
to alter the position in that case, so we bypass the flush/seek
logic, and set things up so a new segment is scheduled to be
regenerated.
https://bugzilla.gnome.org/show_bug.cgi?id=735100
This will happen when the PMT changes, replacing streams with
new ones. In that case, we need to accumulate the running time
from the previous chain in the segment base.
https://bugzilla.gnome.org/show_bug.cgi?id=745102
Reset the internal segment before freeing it.
mxf_index_table_segment_parse() allocates data inside the segment
(like segment->delta_entries) which have to be freed using
mxf_index_table_segment_reset().
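In other words, roughly (a sketch, not the exact code):

  /* free delta_entries and any other internally allocated data first,
   * then the segment itself */
  mxf_index_table_segment_reset (segment);
  g_free (segment);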
https://bugzilla.gnome.org/show_bug.cgi?id=746803
Also:
- Don't modify size on early buffer
The size is the size of the buffer, not of the remaining part.
- Use the input caps when manipulating the input buffer
Also store it in the sink pad
- Reply to the position query in bytes too
- Put GAP flag on output if all inputs are GAP data
- Only try to clip buffer if the incoming segment is in time or samples
- Use incoming segment with incoming timestamp
Handle non-time segments and NONE timestamps
- Don't reset the position when pushing out new caps
- Make a number of member variables private
- Correctly handle case where no pad has a buffer
If none of the pads have buffers that can be handled, don't claim to be EOS.
- Ensure proper locking
- Only support time segments
https://bugzilla.gnome.org/show_bug.cgi?id=740236
When the timeout is reached, only ignore pads with no buffers and iterate
over the other pads until all buffers have been read. This is important
in the cases where the input buffers are smaller than the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Correctly calculate alpha in a few places by dividing by 255,
not 256.
Fix the argb and bgra blending functions to avoid an off-by-one
error in the calculations, so painting with alpha = 0xff doesn't
ever bleed through from behind.
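A minimal sketch of the corrected per-component blend (not the actual
compositor code):

  /* dividing by 255 covers the full 0-255 alpha range; dividing by 256
   * leaves an off-by-one so the background still shows at alpha = 0xff */
  static inline guint8
  blend_component (guint8 dst, guint8 src, guint8 alpha)
  {
    return (guint8) ((src * alpha + dst * (255 - alpha)) / 255);
  }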
Currently the alignment property just makes sure that we
output things in multiples of align*packet_size bytes, but
with no clear maximum size. When streaming MPEG-TS over
UDP one wants buffers with a maximum packet size of 1316.
The alignment property so far would just output buffers
that are a multiple of 1316 then.
Instead we now make the alignment property output
individual buffers with the alignment size, which
is entirely backwards compatible with the expected
behaviour up until now. For efficiency reasons we
collect all those buffers in a buffer list and
send that downstream.
Also collect data to push downstream in a buffer
list from the adapter if we don't align things,
which is still more efficient because of the
silly way the muxer currently creates output
packets.
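A rough sketch of the aligned path (simplified; variable names assumed):

  /* take align-sized buffers out of the adapter and push them downstream
   * as a single buffer list instead of one push per buffer */
  GstBufferList *list = gst_buffer_list_new ();

  while (gst_adapter_available (adapter) >= align_size)
    gst_buffer_list_add (list, gst_adapter_take_buffer (adapter, align_size));

  ret = gst_pad_push_list (srcpad, list);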
https://bugzilla.gnome.org/show_bug.cgi?id=722129
Actually accumulate the sample counter to check the accumulated error
between actual timestamps and expected ones instead of just resetting
the error back to 0 with every new buffer.
Also don't reset discont_time whenever we don't resync. The whole point of
discont_time is to remember when we first detected a discont until we actually
act on it a bit later if the discont stayed around for discont_wait time.
https://bugzilla.gnome.org/show_bug.cgi?id=746032
This allows us to handle new segment events correctly, either by dropping
buffers or inserting silence, for example if the offset is changed on a srcpad
connected to audiomixer.
If the video source happens to allow max-zoom to be greater than our hard-coded
maximum value of 10, then the user cannot set anything greater than the maximum
specified in the param spec. We have to update our param spec to prevent GLib
from capping the value.
https://bugzilla.gnome.org/show_bug.cgi?id=745740
This reverts commit d387cf67df.
The analysis was wrong: The first 20ms of latency are introduced by the source
already and put into the latency query, making it only necessary to cover the
additional 20ms of audiomixer inside audiomixer.
This prevents it from going into passthrough after receiving two
different byte-stream caps, as it would otherwise keep have_pps and
have_sps set to TRUE and would just go into passthrough without
updating its caps.
This patch makes it reset its stream information to restart properly
when new caps are received.
https://bugzilla.gnome.org/show_bug.cgi?id=745409
To avoid useless renegotiation of the pipeline we can check the
negotiated caps on src_filter and compare them with the requested
filter. If the caps intersect, avoid the restart.
Signed-off-by: Oleksij Rempel <bug-track@fisher-privat.net>
https://bugzilla.gnome.org/show_bug.cgi?id=672610
Let's assume a source that outputs 20ms buffers, and audiomixer having
a 20ms output buffer duration. However, the timestamps don't align perfectly;
the source buffers are offset by 5ms.
For our ASCII art picture, each letter is 5ms, each pipe is the start of a
20ms buffer. So what happens is the following:
0   20  40  60
OOOOOOOOOOOOOOOO
|   |   |   |
 5   25  45  65
 IIIIIIIIIIIIIIII
 |   |   |   |
This means that the second output buffer (20 to 40ms) only gets its last 5ms
at time 45ms (the timestamp of the next buffer is the time when the buffer
arrives). But if we only have a latency of 20ms, we would wait until 40ms
to generate the output buffer and miss the last 5ms of the input buffer.
The two branches of the if conditional are identical, which means in all cases
the same gst_asf_put_guid() will be executed. Do it directly.
CID #1226448
The calculations for detecting the videomark are being repeated
in the for loop unnecessarily. Move them outside of the for loop
so that the code need not be executed every time the loop runs.
https://bugzilla.gnome.org/show_bug.cgi?id=744778
The value stored in ret will be overwritten in the next iteration of the loop,
which means it is never used.
Plus a style issue to make gst-indent happy and allow the commit.
Don't use private GMutex implementation details to check
whether it has been freed already or not. Just turn the dispose
function into a finalize function, which will only be called
once; that way we can just clear the mutex unconditionally.
The calculations for drawing the videomark are being repeated
in the for loop unnecessarily. Move them outside of the for loop
so that the code need not be executed every time the loop runs.
https://bugzilla.gnome.org/show_bug.cgi?id=744371
Always update the segment, not only for accurate seeking, and always
send a new segment event after seeks.
For non-accurate seeks, force a reset of our segment info to start from
where the seek led us, as we don't need to be accurate.
https://bugzilla.gnome.org/show_bug.cgi?id=743363
Detect invisible pixels, and skip gstspu_vobsub_blend_comp_buffers()
when there are only invisible pixels. This significantly reduces the
CPU load in cases of DVDs which don't use the clip_rect to exclude
processing for parts of the screen where the video is visible.
https://bugzilla.gnome.org/show_bug.cgi?id=667221
There's no reason why audiomixer should override the segment
base of upstream with whatever value it got from a SEEK event,
or even worse... with 0 if there was no SEEK event yet. This
broke synchronization if upstream provided a segment base other
than 0, e.g. when using pad offsets.
Also, the fact that this code did things conditionally on the element's state
should've been a big warning already that something was just wrong.
If this breaks anything else now, let's fix it properly :)
Also don't do fancy segment position trickery when receiving a
segment event. It's just not correct.
The flush is called on discont and we shouldn't output a new segment
each time a discont happens. So this commit removes the mark for a new
segment when flushing streams, by propagating the 'hard' flag passed
on the flush from the base class.
https://bugzilla.gnome.org/show_bug.cgi?id=743363
Instead of using the GST_OBJECT_LOCK we should have
a dedicated mutex for the pad, as it is also associated
with the EVENT_MUTEX on which we wait
in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks to simplify the code; what it protects
is exposed, but the lock was not.
Also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Assignment is done to variable segment.stop when the intention was to assign to
local variable stop. Instead of overwriting it, the value is now clamped and
segment.stop is set to it soon after.
CID #1265772
AIFF chunks are supposed to be even-aligned.
Aligning the SSND chunk will allow the aiff muxer to properly write
chunks (like the ID3 one) at the end of the file.
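A minimal sketch of the padding, assuming a GstByteWriter is used (not the
exact muxer code):

  /* chunks must start on even offsets, so pad an odd-sized SSND chunk
   * with a single zero byte before writing the next chunk */
  if ((ssnd_size & 1) != 0)
    gst_byte_writer_put_uint8 (&writer, 0);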
https://bugzilla.gnome.org/show_bug.cgi?id=727402
This did not actually work since the video_buffer was set to NULL after
the first black frame.
Reported by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
This patch calls gst_h264_parser_parse_subset_sps() when an
SPS subset NAL type is found.
All the bits required for parsing the SPS subset NALs were
already there; we just need to call them when this NAL type
is found.
With this parsing, the number of views (minus 1) attribute is
filled, which was a requirement for negotiating the stereo-high
profile.
https://bugzilla.gnome.org/show_bug.cgi?id=743174
Initial support for MVC NAL units. It is only needed to propagate the
complete set of NAL units downstream at this time.
https://bugzilla.gnome.org/show_bug.cgi?id=696135
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
klass->setup (scope) will always return TRUE since all children of this class
do so, no need to store the return. Besides, the value is overwritten a few
lines down before it is used.
Change helps keep files in sync after:
-base commit a91d521a36
ret is assigned but not used, and in the next cycle of the loop it is
overwritten with default_prepare_output_buffer (). If there is a flow error
the function should return instead.
CID #1226475
The value from left_luminance is assigned to out_luminance here, but that
stored value is not used before it is overwritten in the next cycle of the
loop. Removing the assignment.
CID #1226473
found is initialized to FALSE and then only used in two conditional statements
that will always be false, making the blocks inside them dead code. Looking back
in the file's history, the setting of the variable's value before it is checked
was dropped as part of the port to 0.11; bring that value setting back.
https://bugzilla.gnome.org/show_bug.cgi?id=742638
CLAMP checks both whether the value is '< 0' and '> max'. The value will never
be negative since it is an unsigned integer, so remove that check and only check
whether it is bigger than max, setting it appropriately.
Also convert the previous instance of this into MIN() for consistency.
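I.e. roughly (sketch):

  /* 'value' is unsigned, so the lower bound of CLAMP (value, 0, max) can
   * never trigger; MIN () expresses the only remaining check */
  value = MIN (value, max);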
CID 1139793
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
CLAMP checks both whether y is '< 0' and '> h1'. y will never be negative
since it is an unsigned integer, so remove that check and only check whether it
is bigger than h1, setting it to that maximum appropriately.
CID 1139792
type is truncated to 0-31 with "& 0x1f", but right after that it is checked
whether the value is equal to GST_H265_NAL_VPS, GST_H265_NAL_SPS, and
GST_H265_NAL_PPS (which are 32, 33, and 34 respectively). Obviously, this can
never be true if the value is at most 31 after the truncation.
The intention of the code was to truncate to 0-63.
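A sketch of the intended truncation (the exact expression in the code may
differ):

  /* the HEVC nal_unit_type field is 6 bits wide (0-63); masking with 0x1f
   * can never yield GST_H265_NAL_VPS/SPS/PPS (32/33/34) */
  nal_type = type & 0x3f;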
After further investigation the previous commit is wrong. The code intended to
check whether the type is 39 or in the ranges 41-44 and 48-55, just like
gsth265parse.c does. Type 40 would not be complete.
nal_type is the index of a GstH265NalUnitType enum. There are two kinds of dead
code here:
First, after checking if nal_type is >= 39 there are two OR conditionals that
check if the value is in ranges higher than that number. If nal_type >= 39
falls in the true branch, those other conditions aren't checked, and if it falls
in the false branch and they are checked, they will always also be false. They
are redundant.
Second, the enum has a range of 0 to 40, so the checks for ranges higher than 41
can never be true.
Removing these redundant checks.
CID 1249684
Every time a buffer is provided by baseparse, we parse all the data from the
beginning. But since we would have already parsed some of the data in the
previous iterations, it doesn't make much sense to keep parsing the same data
every time. Hence skip the data which was already read in previous iterations
to improve the parsing performance.
https://bugzilla.gnome.org/show_bug.cgi?id=740058
The image capture mutex and the pad object lock would cause a race
if the pad query was made right when the image probe was running.
The image probe needs the capture mutex and the querying would need
the pad object lock.
It might be racy with the image probe thread as it uses the capture
mutex just like the start-capture handler from camerabin. The
start-capture would be waiting for the source's streaming thread
to stop to be able to set the source state to ready while the
probe would be blocked waiting to acquire the capture mutex.
It causes a deadlock.
Don't rely on core implementation details, which are private and
may change. It's also not needed here; the performance impact is
close to none. Also copy the buffer before changing its metadata.
Get rid of some indirections and inefficiencies;
just payload things directly, which gives us more
control over what memory is allocated where and
how, and makes things much simpler. In particular,
we can now allocate the payload header plus the
GstMemory to represent it in one go.
Get rid of the now-useless packetizer struct and just
call internal functions directly. Also remove the
version property, which is now defunct, not least
because we create the packetizer with the
version in the init function before a version
can be set.
Add function to calculate a payload CRC across multiple memories
so we don't have to merge buffers with multiple memories just to
calculate the CRC. Also make CRC calculation function static,
since it's not used outside dataprotocol.h and move special-casing
of length = 0 -> CRC = 0 into CRC function (from caller).
Perhaps more importantly, since payload CRC is off by default:
don't map buffer (and possibly merge memories in the process)
if we are not going to use it to calculate a CRC anyway.
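A rough sketch of such a helper (names hypothetical; the internal CRC update
routine is assumed):

  static guint16
  gst_dp_buffer_crc (GstBuffer * buffer)
  {
    guint16 crc = 0;
    guint i, n = gst_buffer_n_memory (buffer);

    if (gst_buffer_get_size (buffer) == 0)
      return 0;                 /* length 0 -> CRC 0, moved here from the caller */

    for (i = 0; i < n; i++) {
      GstMemory *mem = gst_buffer_peek_memory (buffer, i);
      GstMapInfo map;

      if (gst_memory_map (mem, &map, GST_MAP_READ)) {
        crc = gst_dp_crc_update (crc, map.data, map.size);      /* hypothetical */
        gst_memory_unmap (mem, &map);
      }
    }
    return crc;
  }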
This can happen if this is a live pipeline and no source produced any buffer
and sent no caps until an output buffer should've been produced according
to the latency.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
When dealing with random-access content (such as files), we initially
search for the last PCR in order to figure out duration and to handle
other position estimation such as those used in seeking.
Previously, the code looking for that last PCR would search in the last
640kB of the file going forward, and stop at the first PCR encountered.
The problem with that was two-fold:
* It wouldn't really be the last PCR (it would be the first one within
those last 640kB). In the case of VBR files, this would throw off the
duration and seek code slightly.
* It would fail on files with bitrates higher than 52Mbit/s (not common)
Instead this patch modifies that code by:
* Scanning over the last 2048kB (which allows coping with streams of up to 160Mbit/s)
* Starting from the end of the file, going over chunks of 300 MPEG-TS packets
* Not stopping at the first PCR detected in a chunk, but instead recording all
of them, and only stopping the search once there was at least one PCR within
that chunk (see the sketch below)
This should improve duration reporting and seeking operations on VBR files.
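A rough sketch of the new scan (helper names hypothetical):

  #define CHUNK_SIZE (300 * 188)        /* 300 MPEG-TS packets */
  guint64 pos = file_size;
  gboolean found = FALSE;

  while (!found && pos > 0 && file_size - pos < 2048 * 1024) {
    pos = (pos > CHUNK_SIZE) ? pos - CHUNK_SIZE : 0;
    /* records every PCR seen in the chunk and keeps the last (largest) one */
    found = scan_chunk_for_pcrs (demux, pos, CHUNK_SIZE, &last_pcr);
  }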
https://bugzilla.gnome.org/show_bug.cgi?id=708532
Sometimes rawparse does not handle the seeking query
properly; rawparse should send the query upstream
first. For example, upstream could support seeking in
TIME format (but not in BYTE format), so the BYTE format
seeking query that rawparse sends in push mode would
fail.
https://bugzilla.gnome.org/show_bug.cgi?id=722764
Read PNG data chunk in one go by letting the parser
base class know the size we need, so that it doesn't
drip-feed us small chunks of data (causing a lot of
reallocs and memcpy in the process) until we have
everything.
Improves parsing performance of very large PNG files
(65MB) from ~13 seconds to a couple of millisecs.
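Presumably via gst_base_parse_set_min_frame_size(); a sketch, with the
surrounding variable names assumed:

  /* once the chunk length is known from the chunk header, ask baseparse
   * for that much data instead of being fed small increments */
  gst_base_parse_set_min_frame_size (GST_BASE_PARSE (pngparse),
      chunk_length + 12);       /* 12 = length + type + CRC fields */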
https://bugzilla.gnome.org/show_bug.cgi?id=736176
This commit adds a helper to convert a frame to frame-layer format and
uses it to implement these two stream-format conversions:
- asf --> sequence-layer-frame-layer
- asf --> frame-layer
In simple/main profile, we basically have a raw frame, so building a
frame layer isn't too complicated. But in advanced profile, the first
frame-layer should contain the sequence-header, entrypoint and frame,
and each keyframe should contain the entrypoint, so we have to handle
these carefully.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
Add a helper to check that the output stream-format is coherent with the
profile and header-format. It also checks whether we know how to do the
conversion if the input stream-format differs from the selected
output format.
So, in case the output stream-format is not allowed, it will now fail at
negotiation rather than in pre_push_frame.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
This commit introduces a helper to convert an ASF frame to BDU format with
startcodes and uses this helper to implement the following stream-format
conversions:
- asf --> bdu
- asf --> sequence-layer-bdu
- asf --> sequence-layer-raw-frame
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It adds support for the following stream-format conversions:
- bdu --> sequence-layer-bdu
- bdu-frame --> sequence-layer-bdu-frame
- frame-layer --> sequence-layer-frame-layer
For these conversions, the only requirement is to push a sequence-layer
buffer prior to the data.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It prepares the template for stream-format conversion and implements
the following conversions:
- sequence-layer-bdu --> bdu
- sequence-layer-bdu-frame --> bdu-frame
- sequence-layer-frame-layer --> frame-layer
Work is done in the pre_push_frame() method.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
gstinteraudiosrc.c: In function 'gst_inter_audio_src_create':
gstinteraudiosrc.c:339:27: error: variable 'buffer_samples' set but not used [-Werror=unused-but-set-variable]
guint64 period_samples, buffer_samples;
^