Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, which means that each stream can snap-seek
to a different position when a seek is requested. Snap seeking in this
case is done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
A new return parameter was also added to report the final selected
seek position, which is used to align the other streams.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
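For illustration, a minimal sketch of what the extended stream_seek vfunc could
look like; the exact GstAdaptiveDemux types and the helper functions are
assumptions, not the real API:

  #include <gst/gst.h>

  /* Hypothetical helpers standing in for the subclass' fragment lookup and
   * repositioning logic. */
  static GstClockTime find_fragment_start_before (gpointer stream, GstClockTime ts);
  static GstFlowReturn move_stream_to_position (gpointer stream, GstClockTime ts);

  /* The flags and playback direction let the subclass snap-seek, and final_ts
   * reports the position it settled on so the other streams can then be
   * seeked to the same point. */
  static GstFlowReturn
  demux_stream_seek (gpointer stream, gboolean forward, GstSeekFlags flags,
      GstClockTime target_ts, GstClockTime * final_ts)
  {
    if (flags & GST_SEEK_FLAG_SNAP_BEFORE)
      *final_ts = find_fragment_start_before (stream, target_ts);
    else
      *final_ts = target_ts;    /* SNAP_AFTER and reverse playback omitted */

    return move_stream_to_position (stream, *final_ts);
  }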
FEC may only be used when PLC is enabled on the audio decoder, as it
relies on empty buffers to generate audio from the next buffer. Hooking
into gap events doesn't work, since the audio decoder base class does
not accept more buffers being output than it sends to the subclass.
The length of data to generate using FEC from the next packet is
determined by rounding the gap duration to nearest. This ensures that
duration imprecision does not quantize the amount to 2.5 milliseconds
less than what is actually available, which would cause the Opus API to
fail decoding. Such duration imprecision is common in live cases.
The buffer to consider when determining the length of audio
to be decoded is the previous buffer when using FEC, and the
new buffer otherwise. In the FEC case, this means we determine
the amount of audio from the previous buffer, whether it was
missing or not (and get the data either from this buffer, or
the current one if the previous one was missing).
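For the rounding, something along these lines converts the gap duration to a
sample count rounded to nearest rather than truncated (a sketch with assumed
variable names, not the actual opusdec code):

  #include <gst/gst.h>

  /* Round the gap duration to the nearest sample count so that a duration
   * that comes up a hair short (common in live streams) is not truncated to
   * 2.5 ms less audio than the FEC data can actually provide. */
  static guint64
  gap_duration_to_samples (GstClockTime gap_duration, gint rate)
  {
    return gst_util_uint64_scale_round (gap_duration, rate, GST_SECOND);
  }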
Was never ported, and it doesn't look like we want or need it in this
form. The same can now be done with the libgstvideo sample conversion
utility API, but better and in a more flexible way.
Just check whether LATM is defined, which is only the case in 2.7 and
later. This allows us to simplify the configure check a little and to
get rid of some hackish workarounds for problems with earlier versions'
headers.
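The check boils down to a compile-time test along these lines, assuming (as
the message above implies) that LATM is only defined in the faad2 headers
from 2.7 on:

  #include <neaacdec.h>

  /* If the LATM define is missing, the headers are older than faad2 2.7. */
  #ifndef LATM
  #error "faad2 >= 2.7 required (LATM not defined)"
  #endif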
On some embedded systems, sqrt() is not supported in the shader; use
the actual value of sqrt(2) instead.
Signed-off-by: Haihua Hu <b55597@freescale.com>
Bugzilla: https://bugzilla.gnome.org/show_bug.cgi?id=761271
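In practice that just means baking the precomputed constant into the shader
source, e.g. (illustrative snippet, not the actual effect shader):

  /* sqrt() may be unavailable in the shader on some embedded GPUs, so embed
   * the value of sqrt(2) as a literal instead of computing it at runtime. */
  static const gchar *frag_snippet =
      "const float sqrt2 = 1.41421356;\n";   /* instead of sqrt(2.0) */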
Allows the subclass to completely override the chosen src caps.
This is needed as videoaggregator generally has no idea exactly
what operation is being performed.
- Adds a fixate_caps vfunc for fixation
- Merges gst_video_aggregator_update_converters() into
gst_videoaggregator_update_src_caps() as we need some of its info
for proper caps handling.
- Passes the downstream caps to the update_caps vfunc
https://bugzilla.gnome.org/show_bug.cgi?id=756207
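A subclass override of fixate_caps could look roughly like this; the exact
vfunc signature is an assumption, and the forced size is just an example:

  #include <gst/gst.h>

  /* Sketch of a fixate_caps override: the subclass receives the caps
   * videoaggregator would choose and can adjust them to match the operation
   * it actually performs before fixating. */
  static GstCaps *
  my_agg_fixate_caps (gpointer vagg, GstCaps * caps)
  {
    caps = gst_caps_make_writable (caps);
    gst_caps_set_simple (caps, "width", G_TYPE_INT, 1280,
        "height", G_TYPE_INT, 720, NULL);
    return gst_caps_fixate (caps);
  }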
1. Otherwise rotating the video will clip and show black bars due to
gltransformation's implementation.
2. The other option of making gltransformation aspect-agnostic produces
incorrect output with perspective transformations.
Add a "mask" property that sets whether the edges by cvLaplace should be
used as a mask on the original input or not. The same way the original
image is copied to the edges in edgedetect.
Add a "mask" property that sets whether the detected derivative edges
should be used as a mask on the original input or not. The same way
the original image is added to the edges in edgedetect.
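In both elements this is a plain boolean GObject property; installing it
looks roughly like this (default value and blurb are assumptions):

  #include <gst/gst.h>

  enum { PROP_0, PROP_MASK };

  #define DEFAULT_MASK TRUE   /* assumed default */

  static void
  install_mask_property (GObjectClass * gobject_class)
  {
    /* "mask" toggles between outputting the raw edges and using the edges
     * as a mask over the original input image. */
    g_object_class_install_property (gobject_class, PROP_MASK,
        g_param_spec_boolean ("mask", "Mask",
            "Whether the detected edges are used to mask the original input",
            DEFAULT_MASK, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
  }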
cvCvtPixToPlane() has been deprecated in OpenCV 3.0, and cvSplit() is the
suggested replacement. Since cvSplit() is already available in OpenCV 2.4,
it is safe to update the function usage now, before it becomes an issue.
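The replacement is a drop-in change, e.g. (illustrative variable names):

  #include <opencv2/core/core_c.h>

  /* Split an interleaved image into single-channel planes.
   * Old, deprecated in OpenCV 3.0: cvCvtPixToPlane (rgb, r, g, b, NULL);
   * New, available since OpenCV 2.4: */
  static void
  split_rgb (const CvArr * rgb, CvArr * r, CvArr * g, CvArr * b)
  {
    cvSplit (rgb, r, g, b, NULL);
  }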
cvlaplace was also affected by the silent change in the OpenCV API, same
as cvsobel: it hasn't been working for a while and would return a plain
black image. This commit updates the usage of cvLaplace by using
cvCvtColor to create the grayscale intermediate image to process. This
also means there is no longer any need to use GstBaseTransform's
transform_caps, since the pads are RGB.
cvsobel hasn't been working for a while due to a silent change in the
OpenCV API; it would return a plain black image. This commit updates the
usage of cvSobel by using cvCvtColor to create the grayscale image to
process. This also means there is no need to use GstBaseTransform's
transform_caps anymore, since the pads can be RGB.
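The updated processing path in both elements amounts to something like the
following sketch (image handles and their allocation are assumed; the edge
image is typically 16-bit signed for these functions):

  #include <opencv2/imgproc/imgproc_c.h>

  /* Convert the RGB input to a grayscale intermediate and run the edge
   * detector on that; cvSobel (gray, edges, 1, 1, 3) would be the analogous
   * call in cvsobel. */
  static void
  laplace_on_rgb (const IplImage * rgb, IplImage * gray, IplImage * edges)
  {
    cvCvtColor (rgb, gray, CV_RGB2GRAY);
    cvLaplace (gray, edges, 3);
  }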
In file included from effects/gstgleffectrgbtocurve.c:25:0:
effects/gstgleffectscurves.h:174:32: error: 'xray_curve' defined but not used
static const GstGLEffectsCurve xray_curve = {
...
For a live mpd, if availabilityStartTime is missing, adaptive demux asserts
with: Unexpected critical/warning: gst_date_time_to_g_date_time: assertion
'datetime != NULL' failed.
This patch improves the error message to:
Unexpected critical/warning: gst_mpd_client_seek_to_time: assertion
'client->mpd_node->availabilityStartTime != NULL' failed
https://bugzilla.gnome.org/show_bug.cgi?id=757602
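The new message comes from guarding the seek entry point itself, roughly
(a sketch; the member names are taken from the message above):

  /* Fail early and with a meaningful message if the live MPD has no
   * availabilityStartTime, instead of letting gst_date_time_to_g_date_time()
   * assert on a NULL datetime further down. */
  g_return_val_if_fail (client->mpd_node->availabilityStartTime != NULL, FALSE);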
Handling the ghostpad and its internal pad was causing more issues
than helping because of their coupled activation/deactivation
actions.
As we have to install custom chain, event and query functions, it is
better to use a floating sink pad internally in the demuxer and just use
those pad functions to push through a standard pad in the demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=757951
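Setting up such a pad amounts to creating it unparented and pointing its
functions at the demuxer's handlers, roughly (hypothetical handler names):

  #include <gst/gst.h>

  /* Handlers implemented elsewhere in the demuxer. */
  static GstFlowReturn handle_chain (GstPad * pad, GstObject * parent, GstBuffer * buf);
  static gboolean handle_event (GstPad * pad, GstObject * parent, GstEvent * event);
  static gboolean handle_query (GstPad * pad, GstObject * parent, GstQuery * query);

  static GstPad *
  create_internal_sinkpad (void)
  {
    /* Never added to an element, so it stays floating and its activation is
     * not coupled to a ghostpad's internal pad. */
    GstPad *pad = gst_pad_new ("internal-sink", GST_PAD_SINK);

    gst_pad_set_chain_function (pad, handle_chain);
    gst_pad_set_event_function (pad, handle_event);
    gst_pad_set_query_function (pad, handle_query);
    gst_pad_set_active (pad, TRUE);

    return pad;
  }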
Reverses the transformation applied through the properties and forwards the
event.
The process for finding the coordinates on the video is as follows:
1. Convert the given pointer_x and pointer_y to model space at the near and far planes
2. Get the equation of the video plane
3. Find where the ray from step 1 intersects the plane (sketched below)
4. Profit!
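Step 3 is a standard ray/plane intersection; in plain C it boils down to
something like this (a sketch, not the element's actual code, and it assumes
the ray is not parallel to the plane):

  typedef struct { float x, y, z; } vec3;

  static float
  dot (vec3 a, vec3 b)
  {
    return a.x * b.x + a.y * b.y + a.z * b.z;
  }

  /* Intersect the ray through the near- and far-plane points (step 1) with
   * the video plane given by a point on it and its normal (step 2); the
   * result is the pointer position on the video plane. */
  static vec3
  ray_plane_intersect (vec3 near_pt, vec3 far_pt, vec3 plane_pt, vec3 plane_n)
  {
    vec3 dir = { far_pt.x - near_pt.x, far_pt.y - near_pt.y, far_pt.z - near_pt.z };
    vec3 diff = { plane_pt.x - near_pt.x, plane_pt.y - near_pt.y, plane_pt.z - near_pt.z };
    float t = dot (diff, plane_n) / dot (dir, plane_n);
    vec3 hit = { near_pt.x + t * dir.x, near_pt.y + t * dir.y, near_pt.z + t * dir.z };

    return hit;
  }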
It performs the exact same operation as videobalance, but with OpenGL
shaders. It was tested with glvideomixer by comparing frames from
videobalance and glcolorbalance.
Allows more blending options than just A over B; e.g. frame comparisons
are now possible.
glvideomixer name=m
sink_0::zorder=0
sink_1::zorder=1
sink_1::blend-equation-rgb={subtract,reverse-subtract}
sink_1::blend-function-src-rgb=src-color
sink_1::blend-function-dst-rgb=dst-color
! glimagesinkelement
videotestsrc pattern=checkers-4 ! m.sink_0
videotestsrc pattern=checkers-8 ! m.sink_1
SBC frame length calculation wasn't being rounded up to the nearest byte
(as specified in the A2DP 1.0 specification, section 12.9). This could
cause 'stereo' and 'joint stereo' mode SBC streams to have incorrectly
calculated frame lengths.
https://bugzilla.gnome.org/show_bug.cgi?id=742446
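Rounding up is just a ceiling division in integer arithmetic; for the joint
stereo case the length works out roughly as below (treat the exact formula
layout as an assumption and check it against A2DP section 12.9):

  /* Joint-stereo SBC frame length: the join/bitpool term does not generally
   * end on a byte boundary, so it must be rounded UP to the next byte;
   * (x + 7) / 8 is the integer ceiling of x / 8. */
  static unsigned int
  sbc_joint_stereo_frame_length (unsigned int subbands, unsigned int blocks,
      unsigned int bitpool)
  {
    const unsigned int channels = 2;

    return 4 + (4 * subbands * channels) / 8
        + (subbands + blocks * bitpool + 7) / 8;
  }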
After commit 64080e632, configure checks for all the header files that
should be available in OpenCV 2.3 and later. If any of these files isn't
there, the OpenCV elements won't be part of the build.
There is no need to check for opencv2/legacy/legacy.hpp again in
gstpyramidsegment.h: the minimum supported OpenCV version has this
header, and configure already checks for it. Removing the check.