wrappercamerabinsrc has a videocrop element that is used for
zooming, and also for cropping when the input caps differ from the
requested ones while the GstPhotography interface is used.
The zooming part needs the following elements:
the following elements:
capsfilter ! videocrop ! videoscale ! capsfilter
The capsfilters should always have the same caps to ensure that
zooming is done while preserving the dimensions. However, when extra
cropping is needed because of the input dimensions, those caps have to
be modified accordingly to preserve the output dimensions.
This, however, makes it hard to get caps negotiation to work properly
as we need to have different caps in the capsfilters to account for
the extra cropping needed. It could be simple for fixed caps but it
gets tricky with unfixed ones.
To solve this, this patch splits the zooming and the dimension-reduction
cropping into 2 separate videocrop elements. The first one does
the dimension cropping, which is only needed when the GstPhotography
API is used and the source provides caps different from what is
requested, while the second is dedicated to zoom cropping only.
The first part of the pipeline goes from:
src ! videoconvert ! capsfilter ! videocrop ! videoscale ! capsfilter
to
src ! videocrop ! videoconvert ! capsfilter ! videocrop ! videoscale ! capsfilter
This might add some extra overhead to image capture, as the image might
need to be cropped twice, but that can be solved by enabling videocrop
to use crop metas so that only the latter element does the real cropping.
It also makes the code a bit simpler.
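As an illustration of the new chain only (assuming bin and src already
exist; the variable names below are made up, not from the actual
wrappercamerabinsrc code), the elements could be assembled roughly like
this:

  /* Illustrative sketch: build
   * src ! videocrop ! videoconvert ! capsfilter ! videocrop ! videoscale ! capsfilter
   * where the first videocrop handles dimension reduction and the
   * second one handles zooming. */
  GstElement *dim_crop = gst_element_factory_make ("videocrop", NULL);
  GstElement *convert = gst_element_factory_make ("videoconvert", NULL);
  GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
  GstElement *zoom_crop = gst_element_factory_make ("videocrop", NULL);
  GstElement *scale = gst_element_factory_make ("videoscale", NULL);
  GstElement *out_filter = gst_element_factory_make ("capsfilter", NULL);

  gst_bin_add_many (GST_BIN (bin), dim_crop, convert, filter, zoom_crop,
      scale, out_filter, NULL);
  gst_element_link_many (src, dim_crop, convert, filter, zoom_crop,
      scale, out_filter, NULL);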
Remove tee and output-selector and just link the source
pad to the outputs we want as needed.
The way we need to prioritize caps negotiation and allocation
queries depending on the mode enabled is too custom to be
handled using tee and output-selector.
This provides more flexibility and doesn't get in the way of proper
handling of negotiation and allocation queries.
The detection of missing format/alignment is done way before this
codepath is reached (at which point we have already decided on a
format and alignment).
CID #1232800
When the block width property is set to 0, divide-by-zero errors occur
in the calculations. The block width property can never be 0, so adjust
its minimum value to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=744188
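For illustration, with a placeholder property name and default value
(not necessarily the element's real ones), the fix amounts to
registering the property with a minimum of 1:

  /* Illustrative: a minimum of 1 makes a divide-by-zero impossible. */
  g_object_class_install_property (gobject_class, PROP_BLOCK_WIDTH,
      g_param_spec_int ("block-width", "Block width",
          "Width of the blocks in pixels", 1, G_MAXINT, 16,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));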
Such seeks are used to change playback rate and we do not want
to alter the position in that case, so we bypass the flush/seek
logic, and set things up so a new segment is scheduled to be
regenerated.
https://bugzilla.gnome.org/show_bug.cgi?id=735100
This will happen when the PMT changes, replacing streams with
new ones. In that case, we need to accumulate the running time
from the previous chain in the segment base.
https://bugzilla.gnome.org/show_bug.cgi?id=745102
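A sketch of the accumulation, with illustrative variable names
(old_segment/new_segment are not the actual field names):

  /* Illustrative: the new segment's base is the running time already
   * reached by the previous chain, so downstream running time keeps
   * increasing monotonically across the PMT change. */
  gst_segment_init (&new_segment, GST_FORMAT_TIME);
  new_segment.base = gst_segment_to_running_time (&old_segment,
      GST_FORMAT_TIME, old_segment.position);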
Reset the internal segment before freeing it.
mxf_index_table_segment_parse() allocates data inside the segment
(such as segment->delta_entries) which has to be freed using
mxf_index_table_segment_reset().
https://bugzilla.gnome.org/show_bug.cgi?id=746803
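In other words (freeing shown with g_free() purely for illustration):

  /* Release what mxf_index_table_segment_parse() allocated inside the
   * segment (e.g. segment->delta_entries) before freeing the segment
   * itself. */
  mxf_index_table_segment_reset (segment);
  g_free (segment);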
Also:
- Don't modify size on early buffer
The size is the size of the buffer, not of the remaining part.
- Use the input caps when manipulating the input buffer
Also store it in the sink pad
- Reply to the position query in bytes too
- Put GAP flag on output if all inputs are GAP data
- Only try to clip buffer if the incoming segment is in time or samples
- Use incoming segment with incoming timestamp
Handle non-time segments and NONE timestamps
- Don't reset the position when pushing out new caps
- Make a number of member variables private
- Correctly handle case where no pad has a buffer
If none of the pads have buffers that can be handled, don't claim to be EOS.
- Ensure proper locking
- Only support time segments
https://bugzilla.gnome.org/show_bug.cgi?id=740236
When the timeout is reached, only ignore pads with no buffers and iterate
over the other pads until all buffers have been read. This is important
in cases where the input buffers are smaller than the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Correctly calculate alpha in a few places by dividing by 255,
not 256.
Fix the argb and bgra blending functions to avoid an off-by-one
error in the calculations, so painting with alpha = 0xff doesn't
ever bleed through from behind.
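As a generic illustration of why the divisor must be 255 rather than 256
(this is not the element's exact blending code):

  /* Blend one 8-bit source component over a destination component.
   * With alpha = 0xff: 0xff * 0xff / 256 = 254, so the background
   * still bleeds through, while 0xff * 0xff / 255 = 255, i.e. fully
   * opaque. */
  static guint8
  blend_component (guint8 src, guint8 dst, guint8 alpha)
  {
    return (src * alpha + dst * (255 - alpha)) / 255;
  }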
Currently the alignment property just makes sure that we
output things in multiples of align*packet_size bytes, but
with no clear maximum size. When streaming MPEG-TS over
UDP, one wants buffers with a maximum packet size of 1316 bytes. So far
the alignment property would then just output buffers whose size is a
multiple of 1316.
Instead, we now make the alignment property output individual buffers
of the alignment size, which is entirely backwards compatible with the
behaviour expected up until now. For efficiency reasons we collect all
those buffers in a buffer list and send that downstream.
Also collect the data to push downstream from the adapter into a buffer
list if we don't align things, which is still more efficient because of
the silly way the muxer currently creates output packets.
https://bugzilla.gnome.org/show_bug.cgi?id=722129
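A hedged sketch of the buffer-list approach (the adapter, pad and the
1316-byte size are just the UDP example from above, not literal code
from the muxer):

  /* Illustrative: drain the adapter into fixed-size buffers and push
   * them downstream as one buffer list. */
  GstBufferList *list = gst_buffer_list_new ();

  while (gst_adapter_available (adapter) >= 1316) {
    GstBuffer *buf = gst_adapter_take_buffer (adapter, 1316);
    gst_buffer_list_add (list, buf);
  }

  gst_pad_push_list (srcpad, list);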
Actually accumulate the sample counter to check the accumulated error
between actual timestamps and expected ones instead of just resetting
the error back to 0 with every new buffer.
Also don't reset discont_time whenever we don't resync. The whole point of
discont_time is to remember when we first detected a discont until we actually
act on it a bit later if the discont stayed around for discont_wait time.
https://bugzilla.gnome.org/show_bug.cgi?id=746032
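A generic sketch of the intended logic; the field names (next_offset,
discont_time, discont_wait, alignment_threshold) are illustrative state,
not necessarily the element's actual members:

  /* Illustrative: keep accumulating the expected sample count across
   * buffers and only resync once the drift has persisted for
   * discont_wait, instead of resetting the error on every buffer. */
  self->next_offset += n_samples;
  expected_ts = self->base_time +
      gst_util_uint64_scale (self->next_offset, GST_SECOND, rate);
  drift = (buffer_ts > expected_ts) ?
      buffer_ts - expected_ts : expected_ts - buffer_ts;

  if (drift > self->alignment_threshold) {
    if (self->discont_time == GST_CLOCK_TIME_NONE)
      self->discont_time = buffer_ts;          /* first time we drifted */
    else if (buffer_ts - self->discont_time >= self->discont_wait)
      resync = TRUE;                           /* drift persisted */
  } else {
    self->discont_time = GST_CLOCK_TIME_NONE;  /* drift went away */
  }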
This allows us to handle new segment events correctly, either by dropping
buffers or by inserting silence, for example if the offset is changed on
a srcpad connected to audiomixer.
If the video source happens to allow a max-zoom greater than our
hard-coded maximum of 10, the user cannot set anything greater than the
maximum specified in the param spec. We have to update our param spec to
prevent GLib from capping the value.
https://bugzilla.gnome.org/show_bug.cgi?id=745740
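One way to do that, shown purely as a sketch (the property name, the
float type and the surrounding variables are assumptions), is to widen
the installed param spec's maximum once the source's real limit is
known:

  /* Illustrative: raise the installed "max-zoom" param spec's maximum
   * so GLib no longer caps values above the old hard-coded limit. */
  GParamSpec *pspec =
      g_object_class_find_property (G_OBJECT_GET_CLASS (self), "max-zoom");

  if (pspec != NULL && G_IS_PARAM_SPEC_FLOAT (pspec) &&
      max_zoom > G_PARAM_SPEC_FLOAT (pspec)->maximum)
    G_PARAM_SPEC_FLOAT (pspec)->maximum = max_zoom;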
This reverts commit d387cf67df.
The analysis was wrong: the first 20ms of latency are already introduced
by the source and put into the latency query, so it is only necessary to
cover the additional 20ms added by audiomixer inside audiomixer.
This prevents it from going into passthrough after receiving two
different byte-stream caps, as it would keep have_pps and have_sps set
to TRUE and would just go into passthrough without updating its caps.
This patch makes it reset its stream information to restart properly
when new caps are received.
https://bugzilla.gnome.org/show_bug.cgi?id=745409
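A rough sketch of the intended reset when new caps arrive (type and
field names are illustrative, not the parser's real ones):

  /* Illustrative: forget previously seen parameter sets so passthrough
   * is re-evaluated against the new byte-stream caps. */
  static void
  reset_stream_info (GstMyParse * parse)
  {
    parse->have_sps = FALSE;
    parse->have_pps = FALSE;
    parse->passthrough = FALSE;
    /* ... output caps are updated once the new SPS/PPS are parsed ... */
  }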
To avoid useless renegotiation of the pipeline we can check the
negotiated caps on src_filter and compare them with the requested
filter. If the caps intersect, avoid the restart.
Signed-off-by: Oleksij Rempel <bug-track@fisher-privat.net>
https://bugzilla.gnome.org/show_bug.cgi?id=672610
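A hedged sketch of such a check (pad and variable names are
illustrative):

  /* Illustrative: skip the restart if the caps already negotiated on
   * src_filter can satisfy the newly requested filter. */
  GstCaps *current = gst_pad_get_current_caps (src_filter_pad);

  if (current != NULL) {
    gboolean ok = gst_caps_can_intersect (current, requested_filter);
    gst_caps_unref (current);
    if (ok)
      return;                   /* no renegotiation/restart needed */
  }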
Let's assume a source that outputs 20ms buffers and an audiomixer with
a 20ms output buffer duration. However, the timestamps don't align
perfectly: the source buffers are offset by 5ms.
For our ASCII art picture, each letter is 5ms, each pipe is the start of a
20ms buffer. So what happens is the following:
0   20  40  60
OOOOOOOOOOOOOOOO
|   |   |   |
 5   25  45  65
 IIIIIIIIIIIIIIII
 |   |   |   |
This means that the second output buffer (20 to 40ms) only gets its last 5ms
at time 45ms (the timestamp of the next buffer is the time when the buffer
arrives). But if we only have a latency of 20ms, we would wait until 40ms
to generate the output buffer and miss the last 5ms of the input buffer.
The two branches of the if conditional are identical, which means in all cases
the same gst_asf_put_guid() will be executed. Do it directly.
CID #1226448
The calculations for detecting the videomark are repeated in the for
loop unnecessarily. Move them outside of the loop so that the code does
not need to be executed every time the loop runs.
https://bugzilla.gnome.org/show_bug.cgi?id=744778
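A generic illustration of the hoisting (the function names are
placeholders, not the element's actual code):

  /* Before: the loop-invariant mark position is recomputed per pixel. */
  for (i = 0; i < width; i++) {
    gint mark_x = compute_mark_position (pattern, width);  /* invariant */
    check_pixel (frame, i, mark_x);
  }

  /* After: compute it once, outside the loop. */
  gint mark_x = compute_mark_position (pattern, width);
  for (i = 0; i < width; i++)
    check_pixel (frame, i, mark_x);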
The value stored in ret will be overwritten in the next iteration of the
loop, which means it is never used.
Plus a style fix to make gst-indent happy and allow the commit.
Don't use private GMutex implementation details to check whether it has
already been freed or not. Just turn the dispose function into a
finalize function, which will only be called once; that way we can clear
the mutex unconditionally.
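A minimal sketch of that pattern (type names are illustrative):

  /* Illustrative: finalize runs exactly once, so the mutex can be
   * cleared unconditionally without inspecting GMutex internals. */
  static void
  gst_my_object_finalize (GObject * object)
  {
    GstMyObject *self = GST_MY_OBJECT (object);

    g_mutex_clear (&self->lock);

    G_OBJECT_CLASS (parent_class)->finalize (object);
  }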
The calculations for drawing the videomark are repeated in the for
loop unnecessarily. Move them outside of the loop so that the code does
not need to be executed every time the loop runs.
https://bugzilla.gnome.org/show_bug.cgi?id=744371
Always update the segment, not only for accurate seeking, and always
send a new segment event after seeks.
For non-accurate seeks, force a reset of our segment info to start from
where the seek led us, as we don't need to be accurate.
https://bugzilla.gnome.org/show_bug.cgi?id=743363
Detect invisible pixels, and skip gstspu_vobsub_blend_comp_buffers()
when there are only invisible pixels. This significantly reduces the
CPU load in cases of DVDs which don't use the clip_rect to exclude
processing for parts of the screen where the video is visible.
https://bugzilla.gnome.org/show_bug.cgi?id=667221
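A rough sketch of the early-out (the scan below is illustrative; the
real code works on the SPU's run-length data):

  /* Illustrative: only blend when at least one pixel in this span is
   * visible, otherwise skip the expensive blending entirely. */
  gboolean visible = FALSE;
  guint i;

  for (i = 0; i < n_pixels && !visible; i++)
    visible = (alpha[i] != 0);

  if (visible) {
    /* ... only now call gstspu_vobsub_blend_comp_buffers () ... */
  }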
There's no reason why audiomixer should override the segment
base of upstream with whatever value it got from a SEEK event,
or even worse... with 0 if there was no SEEK event yet. This
broke synchronization if upstream provided a segment base other
than 0, e.g. when using pad offsets.
Also, the fact that this code did things conditionally on the element's
state should have been a big warning already that something is just
wrong.
If this breaks anything else now, let's fix it properly :)
Also don't do fancy segment position trickery when receiving a
segment event. It's just not correct.