The buffer pool API does not allow multiple owners. This would otherwise
lead to errors when renegotiation takes place. Also consider the
allocation query "need_pool" boolean.
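A minimal sketch of the idea (names and internals are illustrative, not the actual element code): honour the "need_pool" boolean from the allocation query and hand out a freshly created pool per query, since a pool may only have a single owner.

  #include <gst/gst.h>

  static void
  handle_allocation_query (GstQuery * query)
  {
    GstCaps *caps = NULL;
    gboolean need_pool = FALSE;

    gst_query_parse_allocation (query, &caps, &need_pool);

    if (need_pool && caps != NULL) {
      /* a pool may only have one owner, so create a new one per query
       * instead of handing out the same instance twice */
      GstBufferPool *pool = gst_buffer_pool_new ();
      guint size = 0;           /* would come from the negotiated caps */

      gst_query_add_allocation_pool (query, pool, size, 0, 0);
      gst_object_unref (pool);
    }
  }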
Fixes an assertion when moving from passthrough to non-passthrough.
Without an explicit reconfigure, glfilter won't have created the GL
resources such as the FBO, GL bufferpool, etc. and basetransform will
allocate sysmem buffers instead.
This makes the viewporter interface optional. The end result is
obviously far from optimal, though it greatly helps testing on older
compositors or gnome-wayland. We can make it strictly required later when
this new interface gets widely adopted.
When starting a qmlglsink app, a NULL buffer will be set on GstQSGTexture,
in which case qt_context_ will be a random value and cause
gst_gl_context_activate() to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=770925
Previously it was created in the init function and destroyed in ::stop, which
led to segfaults when reusing the element.
Now the upload object is created in ::transform_caps if it is NULL, which is the
earliest we need it. The other vfuncs already bail out if the upload object is
NULL, which means that negotiation wasn't done.
Now when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink where on macOS avfvideosrc pushes RECTANGLE textures but glupload
was configuring texture-target=2D as output.
The videoaggregator negotiation sequence changed some time
back and broke glstereomix. Instead of doing negotiation incorrectly
in the find_best_format() vfunc, do it directly in the
update_caps() method.
And scale the bitrate with the absolute rate (if it's bigger than 1.0) to get
the real bitrate required for faster playback.
In my tests this allowed playing a stream at 10x speed without buffering, as
the lowest bitrate is chosen, instead of staying on/selecting the highest
bitrate and then buffering all the time.
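A sketch of the scaling described above (names are illustrative): at 10x playback the stream is consumed ten times faster, so a variant's effective bandwidth requirement is its nominal bitrate multiplied by the absolute playback rate.

  #include <glib.h>

  static guint64
  effective_bitrate (guint64 nominal_bitrate, gdouble rate)
  {
    gdouble absrate = ABS (rate);

    /* only scale for faster-than-realtime playback */
    if (absrate > 1.0)
      return (guint64) (nominal_bitrate * absrate);
    return nominal_bitrate;
  }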
It was previously disabled for reasons that were never well specified and
no longer seem valid nowadays.
It now implements this interface with its video-direction
property. Values are changed to GstVideoOrientationMethod but they have
the same values as the originals.
https://bugzilla.gnome.org/show_bug.cgi?id=768687
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
_stdint.h is generated by Autotools and we don't really need it. All
supported platforms now ship with stdint.h. The only stickler was MSVC,
and since Visual Studio 2015 it also ships stdint.h now.
It now returns the correct values for both orthographic and perspective
projections and takes into account the aspect ratio of the video, handles
the Y-flipping in GL and by us and uses some more helpers from graphene.
Fixes spurious segfault in unit test, where the task was started again during
shutdown when all pads were removed... and was then still running while the
element was finalized.
We don't have to do yet another additional request but can just download the
data directly.
Also unify the key-unit only mode buffer pushing and extract it into its own
function now that it became more complicated.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
We need to mark every first buffer of a key unit as discont, and also every
first buffer of a moov and moof. This ensures that qtdemux takes note of our
buffer offsets for each of those buffers instead of keeping track of them
itself from the first buffer. We need offsets to be consistent between moof
and mdat.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
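A minimal sketch of the buffer marking described above (not the actual dashdemux code):

  #include <gst/gst.h>

  static void
  mark_fragment_start (GstBuffer * buf, guint64 byte_offset)
  {
    /* flag the first buffer of a key unit / moov / moof as discont and
     * give it an explicit byte offset, so qtdemux picks up our offsets
     * instead of counting bytes from the very first buffer itself */
    GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
    GST_BUFFER_OFFSET (buf) = byte_offset;
  }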
Fixes the following error when building on OS X.
error: implicit conversion from enumeration type
'GstJPEG2000Colorspace' to different enumeration type
'GstJPEG2000Sampling'
This reverts commit 947656cfd2.
This makes all dash seeking tests fail. Needs more testing to fully understand
what's going wrong. Revert ok'd by Sebastian
We don't need to call the latter at all as we're definitely in this period and
the segment is selected via the SIDX.
This is especially important when doing SNAP seeks, as otherwise we would
always start from the beginning of the period (usually 0) again.
After the check in line 1111, media->uri can't be NULL. So the two checks
for GST_HLS_MEDIA_TYPE_CLOSED_CAPTIONS are the same, removing the redundant
one which goes to cc_unsupported.
CID 1364752
Create an output stream for each media when alternate renditions
are present. Update the manifests for all those streams, and
make sure that typefinding is still done for files smaller than 2KB
such as small WebVTT files.
When fetching a byte-region from a server resource,
adjust the downstream buffer offsets so that downstream
doesn't know. This is because id3demux insists on the
first offset being 0. Later we might strip ID3 headers
entirely and this will be unneeded.
Modify playlist updating to track information across updates
better, although still hackish.
When connection_speed == 0, choose the default variant
not the first one in the (now sorted) variant list, as that
will have the lowest bitrate.
Make M3U8 and GstM3U8MediaFile refcounted. The contents
of both are pretty much immutable already, but if we make
them refcounted we can just
return a ref to the media file from _get_next_fragment()
instead of copying over all fields one-by-one, and then
copying them all into the adaptive stream structure fields again.
Move state from client into m3u8 structure. This will
be useful later when we'll have multiple media playlists
being streamed at the same time, as will be the case with
alternative renditions.
This has the downside that we need to copy over some
state when we switch between variant streams.
The GstM3U8Client structure is gone, and the main/current
lists are now directly in hlsdemux. hlsdemux had as
many CLIENT_LOCK/UNLOCK as the m3u8 code anyway...
The gst_dash_demux_get_live_seek_range () function returns a stop value
that is beyond the available range. The functions
gst_mpd_client_check_time_position() and
gst_mpd_client_get_next_segment_availability_end_time() in
gstmpdparser.c include the segment duration when checking if a segment
is available. The gst_dash_demux_get_live_seek_range() function
in gstdashdemux.c ignores the segment duration.
According to the DASH specification, if maxSegmentDuration is not present,
then the maximum Segment duration is the maximum duration of any Segment
documented in the MPD.
https://bugzilla.gnome.org/show_bug.cgi?id=753751
There's no need for the jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create fbo's and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for effective
use with glViewport().
Don't clear decryption state immediately after
initialising it in the start_fragment. Don't clear
the state of all streams when we want to only clear
the current stream.
https://bugzilla.gnome.org/show_bug.cgi?id=768757
Add demuxer instance-wide decryption key cache. The current and
last key url are per-stream, so make a shared cache. Move the
decryption handling into the stream object, and use the shared
cache for the keys.
Prepare hlsdemux for more than one single stream. Currently hlsdemux
assumes there'll only ever be one stream and most of the stream-specific
state is actually in the hlsdemux structure. Add a stream subclass
and move some stream-specific members there instead.
In this mode, we let WebRTC Audio Processing figure out the delay. This
is useful when the latency reported by the stack cannot be trusted. Note
that in this mode, the leaking of echo during packet loss is much worse.
It is recommended to use PLC (e.g. spanplc, or the Opus built-in PLC).
In this mode, we don't do any synchronization. Instead, we simply process all
the available reverse stream data as it comes.
Compiler would complain about include directory that didn't
exist because QPA_INCLUDE_PATH gets subst-ed regardless
(and if it didn't we'd have just an empty -I argument).
https://bugzilla.gnome.org/show_bug.cgi?id=767553
This simplifies the code but also removes a bug with tracking of the remaining
size for the initial subfragment: we were not considering the size between the
index and the start of the first moof here.
https://bugzilla.gnome.org/show_bug.cgi?id=764684
When switching fragments we don't want to keep any data around from the last
one, and also forget about all data when doing flushing seeks or selecting new
bitrates.
https://bugzilla.gnome.org/show_bug.cgi?id=764684
The previous code would run out of sync if there was packet loss
or clock skew. When that happened, the echo cancellation feature would
completely stop working. As this is crucial for audio calls, this patch
re-implements synchronization completely.
Instead of letting it drift until the next discont, we now synchronize
against the record data at every iteration. This way we simply never
let the stream drift for longer than a 10ms period. We also shorten the
delay by using the latency up to the probe (basically excluding the sink
latency). This is a decent delay to avoid starving in the probe queue.
https://bugzilla.gnome.org/show_bug.cgi?id=768009
When echo cancellation is enabled, we now fail the pipeline if there is
no echo probe. For this reason there is no need to check if the probe
pointer is set anymore.
The byte-stream to avc conversion did not consider NAL sizes bigger than 2^16,
multiple layers, multiple NALs per layer, and various other things. This
caused corrupted streams in higher bitrates and other circumstances.
Let's just forward byte-stream as generated by the encoder and let h264parse
handle conversion to avc if needed. That way we only have to keep around one
version of the conversion and don't have to fix it in multiple places.
Rather than assuming something: e.g. zerocopy on iOS with GLES3 requires
the use of Luminance/Luminance Alpha formats and does not work with
Red/RG textures.
The saved timestamp is used to compute the delay of the probe data.
As it's used at the following incoming buffer, it needs to be offset
with the duration of the buffer to represent the end position. Also,
properly initialize the saved timestamp and protect against TIME_NONE.
Until now, we were synchronizing both DSP and Probe adapter by
waiting and clipping the probe adapter data. This increases the CPU
usage, can cause copies if the audio is not 10ms aligned and the worst
is that it prevents the processing from compensating for inaccurate
latency. This is also a step forward toward supporting playback
filters.
The current state of C++ ABIs on Windows and GStreamer's/Qt's conflicting
MinGW builds means that we cannot use MinGW for building the qt plugin.
Instead, a qmake .pro file is provided that is expected to be used with the
MSVC binaries provided by Qt like so:
(with the PATH environment variable containing the path to the Qt binaries
and PKG_CONFIG_PATH containing the path to the GStreamer modules)
cd /path/to/sources/gst-plugins-bad/ext/qt
qmake -tp vc
Then open the resulting VS project and build the library. Then
cp debug/libgstqtsink.dll /path/to/prefix/lib/gstreamer-1.0/libgstqtsink.dll
https://bugzilla.gnome.org/show_bug.cgi?id=761260
This DSP library can be used to enhance voice signals for real-time
communication calls. It implements multiple filters like noise reduction,
high pass filter, echo cancellation, automatic gain control, etc.
The webrtcdsp element can be used alone, or with the help of the
webrtcechoprobe if echo cancellation is enabled. The echo probe should
be placed as close as possible to the audio sink, while the DSP is
generally placed close to the audio capture. For local testing, one can
use an echo loop pipeline like the following:
autoaudiosrc ! webrtcdsp ! webrtcechoprobe ! autoaudiosink
This pipeline should produce a single echo rather than a repeated echo.
These elements only work if they are placed in the same top-level pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=767800
These two shaders calculate the vector length and use it as a denominator.
But the length could be zero, which would cause undefined behaviour. Add
protection for this condition.
https://bugzilla.gnome.org/show_bug.cgi?id=767635
We remove get-rollover-counter signal in favor of the "stats"
property. The "stats" property is a GstStructure with caps
application/x-srtp-encoder-stats that contains an array of
structures with caps application/x-srtp-stream.
Each stream structure contains "ssrc" and "roc" fields.
https://bugzilla.gnome.org/show_bug.cgi?id=733265
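A usage sketch from the application side, assuming 'srtpenc' is the encoder instance; the exact layout of the nested stream structures is easiest to inspect via gst_structure_to_string():

  #include <gst/gst.h>

  static void
  dump_srtp_stats (GstElement * srtpenc)
  {
    GstStructure *stats = NULL;
    gchar *str;

    g_object_get (srtpenc, "stats", &stats, NULL);
    if (stats == NULL)
      return;

    str = gst_structure_to_string (stats);
    g_print ("srtp stats: %s\n", str);  /* contains per-stream ssrc/roc */
    g_free (str);
    gst_structure_free (stats);
  }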
Previously GstCurlSmtpSink could cause the pipeline thread to end up
waiting for a stopped thread to perform work.
The scenario was that the sink could be rendering a buffer and waiting
for the curl transfer thread to have sent the data. As soon as the
transfer thread has copied all data to curl's data buffer in
gst_curl_base_sink_transfer_read_cb() then the render call would stop
waiting and return GST_FLOW_OK. While this takes place the transfer
thread may suffer from an error, e.g. due to gst_poll_wait() timing out.
This causes the transfer thread to record the error, claim (it is not
really true since there was an error) that the data has been sent and
that a response has been received by trying to signal the pipeline
thread (but this has already stopped waiting). Finally the transfer
thread stops itself. A short while later the pipeline thread may attempt
to push an EOS event into GstCurlSmtpSink. Since there is no check in
gst_curl_smtp_sink_event() to check if the sink has suffered from any
error it may attempt to add a final boundary and ask the, now deceased,
transfer thread to transfer the new data. Next the sink element would
have waited for the transfer to complete (using a different mechanism
than normal transfers through GstCurlBaseSink). In this case there was
an error check to avoid waiting if an error had already been seen.
Finally GstCurlSmtpSink would chain up to GstCurlBaseSink which would
then block waiting for a response (normally this would be prevented by
the transfer thread suffering the error claiming that it had been
received, but GstCurlSmtpSink clobbered this flag after the fact).
Now GstCurlSmtpSink avoids this by locking over the entire event handling
(preventing simultaneous changes to flags by the two threads) and also
by avoiding to initiate transfer of final boundary if an error has
already been seen.
Also add GST_FIXME() for remaining similar issue where the pipeline
thread may block indefinitely waiting for transfer thread to transfer
data but the transfer thread errors out and fails to notify the pipeline
thread that the transfer failed.
https://bugzilla.gnome.org/show_bug.cgi?id=767501
Use namespace only after it was actually defined by a header.
gstfacedetect.cpp:79:17: error: using directive refers to implicitly-defined namespace 'std' [-Werror]
using namespace std;
^
Replace the type and function prefix to follow the conventions:
- Use `GST_TYPE_DC1394_SRC` instead of `GST_TYPE_DC1394`.
- Use `GstDC1394Src` and `GstDC1394SrcClass` instead of
`GstDc1394` and `GstDc1394Class`.
- Use `gst_dc1394_src` instead of `gst_dc1394`.
https://bugzilla.gnome.org/show_bug.cgi?id=763026
The dc1394src is a PushSrc element for IIDC cameras based on libdc1394.
The implementation from the 0.x series is defective:
caps negotiation does not work, and some video formats
provided by the camera are not supported.
Refactor the code to port it to 1.X and enhance the support
for the full set of video options of IIDC cameras:
- The IIDC specification includes a set of camera video modes
(video format, frame size, and frame rates).
They do not map perfectly to GStreamer formats, but those that
do not match are very rare (if used at all by any camera).
In addition, although the specification includes a raw format,
some cameras use mono video formats to capture in Bayer format.
Map corresponding video modes to GStreamer formats in capabilities,
allowing both gray raw and Bayer video formats for mono video modes.
- The specification includes scalable video modes (Format7),
where the frame size and rate can be set to arbitrary values
(within the limits of the camera and the bus transport).
Allow the use of such mode, using the frame size and rate
from the negotiated caps, and set the camera frame rate
adjusting the packet size as in:
<http://damien.douxchamps.net/ieee1394/libdc1394/faq/#How_do_I_set_the_frame_rate>
The scalable modes also allow for a custom ROI offset.
Support for it can be easily added later using properties.
- Camera operation using libdc1394 is as follows:
1. Enumerate cameras on the system and open the camera
identified by the enumeration index or by a GUID (64-bit hex code).
2. Query the video formats supported by the camera.
3. Configure the camera for the desired video format.
4. Setup the capture resources for the configured video format
and start the camera transmission.
5. Capture frames from the camera and release them when not used.
6. Stop the camera transmission and clear the capture resources.
7. Close the camera freeing its resources.
Do steps 2 and 3 when getting and setting the caps respectively.
Ideally 4 and 6 would be done when going from PAUSED to PLAYING
and vice versa, but since caps might not be set yet, the video mode
would not be properly configured, leaving the camera in a broken state.
Hence, setup capture and start transmission in the set caps method,
and consequently clear the capture and stop the transmission
when going from PAUSED to READY (instead of PLAYING to PAUSED).
Symmetrically, open the camera when going from READY to PAUSED,
allowing to probe the camera caps in the negotiation stage.
Implement that using the `start` and `stop` methods of `GstBaseSrc`,
instead of the `change-state` method of `GstElement`.
Stop the camera before setting new caps and restarting it again
to handle caps reconfiguration while in PLAYING (it has no effect
if the camera is not started).
- Create buffers copying the bytes of the captured frames.
Alternatively, the buffers could just wrap the bytes of the frames,
releasing the frame in the buffer's destroy notify function,
if all buffers were destroyed before going from PLAYING to PAUSED.
- No timestamp nor offset is set when creating buffers.
Timestamping is delegated to the parent class BaseSrc,
setting `gst_base_src_set_live` TRUE, `gst_base_src_set_format`
with GST_FORMAT_TIME and `gst_base_src_set_do_timestamp`.
Captured frames have a timestamp field with the system time
at the completion of the transmission of the frame,
but it is not sure that this comes from a monotonic clock,
and it seems to be left NULL in Windows.
- Use GUID and unit properties to select the camera to operate on.
The camera number used in version 0.X does not uniquely identify
the device (it depends on the set of cameras currently detected).
Since the GUID is a 64-bit identifier (same as a MAC address),
handle it with a string property with its hexadecimal representation.
For practicality, operate on the first camera available if the GUID
is null (default) and match any camera unit number if unit is -1.
Alternatively, the GUID could be handled with an unsigned 64-bit
integer type property, using `0xffffffffffffffff` as default value
to select the first camera available (it is not a valid GUID value).
- Keep name `GstDc1394` and prefix `gst_dc1394` as in version 0.X,
although `GstDC1394Src` and `gst_dc1394_src` are more descriptive.
- Adjust build files to reenable the compilation of the plugin.
Remove dc1394 from the list of unported plugins in configure.ac.
Add the missing flags and libraries to Makefile.
Use `$()` for variable substitution, as many plugins do,
although other plugins use `@@` instead.
https://bugzilla.gnome.org/show_bug.cgi?id=763026
The heuristic to choose between packetized or not was changed to use the
segment format. The problem is that this change reads the segment
during the caps event handling, while the segment event will only be sent
afterwards. That prevented the decoder from going into packetized mode
and avoiding useless parsing.
https://bugzilla.gnome.org/show_bug.cgi?id=736252
Use new gst_h264_video_calculate_framerate() API instead of fps_n/fps_d
fields in SPS struct which are to be removed.
Apparently H264 content in MSS is always non-interlaced/progressive,
so we can just pass 0 for field_pic_flag and don't need to parse any
slice headers first if there's no external signalling. But even if
that's not the case the new code is not worse than the existing code.
https://msdn.microsoft.com/en-us/library/cc189080%28VS.95%29.aspx
https://bugzilla.gnome.org/show_bug.cgi?id=723352
_get_gl_context() can be called concurrently from either propose_allocation() or
decide_allocation(). If both happen at the same time,
the check for whether we already had a GL context was outside the lock. Inside
the lock and loop, the first thing that happens is that we unref the current GL
context (if valid) as if there was a conflict adding it to the display. If the
timing was unlucky, subsequent use of the GL context would be referencing an
already unreffed GL context object resulting in a critical:
g_object_ref: assertion 'object->ref_count > 0' failed
https://bugzilla.gnome.org/show_bug.cgi?id=766703
The gltestsrc element uses two shaders: color_shader and snow_shader.
Those are alternatively assigned to the SrcShader->shader pointer and
their reference was transferred to it. Only the SrcShader->shader was
unreffed (in _src_shader_deinit()) so only one shader was properly
freed, the other one was leaked.
Fixed this by giving an extra ref to SrcShader->shader and unreffing the
2 shaders in _src_smpte_free().
https://bugzilla.gnome.org/show_bug.cgi?id=766661
Plugins can provide a set of named values for a control port. Ideally only those
values are set for the property. Check if all scalepoints are integers and if so
generate an enum type.
- add using namespace std; for std::vector
- use the cpp header imgproc.hpp file for the cv::ellipse function instead of
the C header
- Mat no longer takes IplImage in its constructors, use the cvarrtomat()
  function instead.
Fixes a couple of build errors:
gstfacedetect.cpp:140:30: error: ‘vector’ does not name a type
structure_and_message (const vector < Rect > &rectangles, const gchar * name,
^~~~~~
gstfacedetect.cpp:140:37: error: expected ‘,’ or ‘...’ before ‘<’ token
structure_and_message (const vector < Rect > &rectangles, const gchar * name,
^
gstfacedetect.cpp: In function ‘void structure_and_message(int)’:
gstfacedetect.cpp:143:13: error: ‘rectangles’ was not declared in this scope
Rect sr = rectangles[0];
[...]
gstfacedetect.cpp: In function ‘void
gst_face_detect_run_detector(GstFaceDetect*, cv::CascadeClassifier*, gint, gint,
cv::Rect, std::vector<cv::Rect_<int> >&)’:
gstfacedetect.cpp:562:31: error: no matching function for call to
‘cv::Mat::Mat(IplImage*&, cv::Rect&)’
Mat roi (filter->cvGray, r);
[...]
gstfacedetect.cpp: In function ‘GstFlowReturn
gst_face_detect_transform_ip(GstOpencvVideoFilter*, GstBuffer*, IplImage*)’:
gstfacedetect.cpp:594:44: error: no matching function for call to
‘cv::Mat::Mat(cv::Mat, bool)’
Mat mtxOrg (cv::cvarrToMat (img), false);
[...]
gstfacedetect.cpp:734:79: error: ‘ellipse’ was not declared in this scope
ellipse (mtxOrg, center, axes, 0, 0, 360, Scalar (cr, cg, cb), 3, 8,
0);
We can avoid a render pass if downstream supports the affine transformation meta
and increase the performance of some pipelines involving gltransformation.
Implemented by checking for the affine transformation in the allocation query
from downstream and combining our matrix with that of upstream's (or creating
our own).
Provide a function to get the affine matrix in the meta in terms of NDC
coordinates and use as a standard opengl matrix.
Also advertise support for the affine transformation meta in the allocation
query.
We were always failing the allocation query as a flag was never being set to
signal a successful negotiation. Fix by setting the required flag on a
successful caps event from upstream.
The port was trivial, and according to the NEWS file nothing else has changed,
but it is possible that other API was changed without proper notification.
OpenJPEG upstream has shipped a pkg-config file for the past 4 years, and all
distros should be shipping it by now.
https://bugzilla.gnome.org/show_bug.cgi?id=766213
Use the semaphores in the correct place, before and after the submission for
acquiring and presenting the swapchain buffer.
Waiting on the fence that only signals the command buffer completion rather than
the completion of the presentation is racy with the destruction of the vulkan
buffers associated with that image. Wait on the device to be idle instead after
presenting.
1. Port the existing deinterlace shader and OpenGL callback
to be compatible with OpenGL ES.
2. Add our blur vertical shader to gldeinterlace.
3. Add a property named "method" to let the user choose which
deinterlace function to use. Defaults to the blur vertical
method for better performance.
[Matthew Waters]: fix name of greedyh in method property (was greedhy) and port
to git master.
https://bugzilla.gnome.org/show_bug.cgi?id=764873
A realtime clock is used in many places, such as deciding which
fragment to select at start up and deciding how long to sleep
before a fragment becomes available. For example dashdemux needs
sample the client's estimate of UTC when selecting where to start
in a live DASH stream.
The problem with dashdemux calculating the client's idea of UTC is
that it makes it difficult to create unit tests, because the passage
of time is a factor in the test.
This commit changes dashdemux and adaptivedemux to use the
GstSystemClock, so that a unit test can replace the system clock when
it needs to be able to control the clock.
This commit makes no change to the behaviour under normal usage, as
GstSystemClock is based upon the system time.
https://bugzilla.gnome.org/show_bug.cgi?id=762147
When the application changes the pipeline state NULL->READY and then READY->NULL,
glimagesink will not clear glsink->window_id. After that, when the application
changes state NULL->READY again, the new_window_id is equal to window_id, so
glimagesink will not set the window handle. It will use the internal window and
not the window created by the application.
https://bugzilla.gnome.org/show_bug.cgi?id=765241
Currently, gst_srtp_dec_sink_setcaps is happy if the "roc" field is not
provided in the caps. If it is not provided the stream will be properly
inserted in the hash table with a default "roc". Then, when the first
buffer arrives validate_buffer will find an existing stream in the hash
table and will not signal request-key, not allowing the user to provide
a "roc".
This patch expects "roc" in gst_srtp_dec_sink_setcaps; if it is not found, a
request-key will be signaled and the user will be able to provide all
the srtp fields, including "roc".
https://bugzilla.gnome.org/show_bug.cgi?id=765079
We weren't using the result of find_best_format at all.
Also, move the find_best_format usage to the default update_caps() to make
sure that it is also overridable.
https://bugzilla.gnome.org/show_bug.cgi?id=764363
In this case the socket callback has not been called
by libcurl and the curlsink has not been notified about any
connection problems by libcurl.
This indicates that it's a bug in libcurl so catch it as
an unknown error.
https://bugzilla.gnome.org/show_bug.cgi?id=754432
Add namespace bgsegm, replacement functions and Template class for new
OpenCV versions because these functions have been removed. cvarrToMat() is
added because it is compatible with all versions of OpenCV and the use of
class Mat constructor is eliminated, it is also deprecated in 3.X versions.
Use the namespace cv because some functions are called many times.
This patch keeps compatibility with 2.4
https://bugzilla.gnome.org/show_bug.cgi?id=760473
imgproc_c.h is added because CvFont struct needs it in any 3.x version.
We use this structure in GstOpencvTextOverlay. This keeps compatibility
with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
Some members of MotionCells were not being initialized in the constructor.
Protect from potential garbage memory usage by fully initializing it.
Moving m_frameSize out of the class because it is only used in
performDetectionMotionCells().
CID 1197704
This makes gltestsrc work everywhere \o/
- workaround RPi returning invalid values for positive coords in the
checker shader
- reduce the number of iterations in the mandelbrot shader for gles2
https://bugzilla.gnome.org/show_bug.cgi?id=751540
cvPyrSegmentation() has been deprecated in OpenCV 3.0, and there isn't any
function to replace it. Deleting this element so we can support OpenCV 3.1
without build issues.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvarrToMat() is added because it is compatible with all versions of OpenCV,
and the use of the Mat class constructor is eliminated because it is
deprecated in 3.X versions. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvarrToMat() is added because it is compatible with all versions of OpenCV,
and the use of the Mat class constructor is eliminated because it is
deprecated in 3.X versions. 'using namespace cv' is used because some
functions are called many times. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvarrToMat() is added because it is compatible with all versions of OpenCV,
and the use of the Mat class constructor is eliminated because it is
deprecated in 3.X versions. Included 'using namespace std' because it is
needed for the Vector class in 3.X versions. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvarrToMat() is added because it is compatible with all versions of OpenCV,
and the use of the Mat class constructor is eliminated because it is
deprecated in 3.X versions. Included 'using namespace std' because it is
needed for the vector class in 3.X versions. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvarrToMat() is added because it is compatible with all versions of OpenCV,
and the use of the Mat class constructor is eliminated because it is
deprecated in 3.X versions. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
'METHOD_VAR' and 'METHOD_GC' are removed because there are no equivalent
functions in new OpenCV versions. The 'img_right_as_cvMat_rgb',
'img_left_as_cvMat_rgb' and 'depth_map_as_cvMat2' variables are removed
because they aren't used. cvarrToMat() is added because it is compatible
with all versions of OpenCV, and the use of the Mat class constructor is
eliminated because it is deprecated in 3.X versions. 'using namespace cv'
is used because some functions are called many times. This keeps
compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
cvCvtPixToPlane() has been deprecated in OpenCV 3.0, and cvSplit() is the
function that replaces it. The compat.hpp include is deleted because it
doesn't exist in 3.X versions and it isn't necessary for 2.4.X versions
in this element. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
configure.ac was changed to work with the new versions of OpenCV 3.X.
A new include is added in gstopencvutils.cpp because it contains
the previous one. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
Properly separate files as we will not have only one single base class
for all elements as we used to with 0.10; instead, the same way it is done
with ladspa, we subclass GstAudioFilter, GstBaseSrc, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=678207
We want to iterate over all the pads, not just the first one. Fix by returning
TRUE in the GstAggregatorPadForeachFunc.
Removes a GST_IS_GL_CONTEXT() assertion on shutdown with >2 inputs
using gst-launch.
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
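The race-free pattern, in short (illustrative code):

  #include <gst/gst.h>

  static void
  use_current_caps (GstPad * pad)
  {
    /* take the caps once and NULL-check them, rather than calling
     * gst_pad_has_current_caps() and then fetching them separately */
    GstCaps *caps = gst_pad_get_current_caps (pad);

    if (caps == NULL)
      return;                   /* caps went away in the meantime */

    /* ... use caps ... */
    gst_caps_unref (caps);
  }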
The code in the gst_dash_demux_parse_http_xsdate() was trying to
handle the case where the string is not null terminated by resizing
the buffer and appending a zero byte. This does not work if the buffer
is exactly the length of the string because the gst_buffer_resize()
function does not re-allocate the buffer, it just changes its size.
If a buffer is passed to gst_dash_demux_parse_http_xsdate() that is
exactly the length of the string, the function fails with an assert
failure in gst_buffer_resize().
https://bugzilla.gnome.org/show_bug.cgi?id=762148
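A sketch of a safer approach (illustrative, not the actual fix): copy the buffer contents into a separately allocated, NUL-terminated string instead of resizing the GstBuffer in place.

  #include <gst/gst.h>

  static gchar *
  buffer_to_string (GstBuffer * buffer)
  {
    GstMapInfo map;
    gchar *str;

    if (!gst_buffer_map (buffer, &map, GST_MAP_READ))
      return NULL;

    str = g_strndup ((const gchar *) map.data, map.size);
    gst_buffer_unmap (buffer, &map);
    return str;                 /* free with g_free() */
  }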
silences the following warnings from the validation layer
AccessMask xxx must have required access bit xxx and may have optional bits 0
when layout is VK_IMAGE_LAYOUT_TRANSFER_{SRC,DST}_OPTIMAL
It's possible that the sink element will be freed before the widget is
destroyed. When the widget was eventually destroyed, it was attempting to
access member variables of the freed sink struct which resulted in undefined
behaviour.
Fix by disconnecting our signal on finalize.
https://bugzilla.gnome.org/show_bug.cgi?id=762098
This allows adding rtmpsink after the FLV streaming has started. Otherwise,
the FLV streamheader is never sent to the server, which cannot figure out
what this stream is about. It should also help in certain renegotiation
scenarios. The sink will no longer work without a streamheader in caps,
though there is no known implementation of flvdemux that does not
support this.
https://bugzilla.gnome.org/show_bug.cgi?id=760242
stream->current_fragment has the value of g_list_previous (iter) which has
just been checked. No need to check it again.
Just to be safe, use a g_assert() to check fragment before dereferencing.
CID #1352041
The function actually returns the segment availability start time (as defined by the standard).
That is at the end of the segment, but it is called availability start time.
Availability end time is something else (the time when the segment is no longer
available on the server). The function name was misleading.
https://bugzilla.gnome.org/show_bug.cgi?id=757655
CPU waits are more expensive and are only required if the CPU is ever going to
access the data. GPU waits perform inter-context synchronisation and are cheaper
as they don't require CPU intervention.
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, this means that all streams can snap-seek
to a different position when requested. Snap seeking in this case will
be done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
And also adds a new return parameter to inform of the final
selected seeking position that is used to align the other streams.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
FEC may only be used when PLC is enabled on the audio decoder,
as it relies on empty buffers to generate audio from the next
buffer. Hooking to the gap events doesn't work as the audio
decoder does not like more buffers output than it sends.
The length of data to generate using FEC from the next packet
is determined by rounding the gap duration to nearest. This
ensures that duration imprecision does not cause quantization
to 2.5 milliseconds less than available. Doing so causes the
Opus API to fail decoding. Such duration imprecision is common
in live cases.
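A minimal sketch of the rounding described above, assuming a clock-time gap duration and a known sample rate:

  #include <gst/gst.h>

  static guint64
  gap_duration_to_samples (GstClockTime duration, gint rate)
  {
    /* round to nearest instead of truncating, so a duration that is a few
     * nanoseconds short does not lose a whole 2.5ms of decodable audio */
    return gst_util_uint64_scale_round (duration, rate, GST_SECOND);
  }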
The buffer to consider when determining the length of audio
to be decoded is the previous buffer when using FEC, and the
new buffer otherwise. In the FEC case, this means we determine
the amount of audio from the previous buffer, whether it was
missing or not (and get the data either from this buffer, or
the current one if the previous one was missing).
Was never ported and doesn't look like
we want it or need it in this form, can
do the same with the libgstvideo sample
conversion utility API now, but better
and in a more flexible way.
Just check whether LATM is defined which is only available
in 2.7 and later. Allows us to simplify the configure check
a little and we can get rid of some hackish workarounds for
problems with earlier version headers.
On some embedded systems, sqrt() is not supported in the shader;
use the actual value of sqrt(2) instead.
Signed-off-by: Haihua Hu <b55597@freescale.com>
Bugzilla: https://bugzilla.gnome.org/show_bug.cgi?id=761271
Allows the subclass to completely override the chosen src caps.
This is needed as videoaggregator generally has no idea exactly
what operation is being performed.
- Adds a fixate_caps vfunc for fixation
- Merges gst_video_aggregator_update_converters() into
gst_videoaggregator_update_src_caps() as we need some of its info
for proper caps handling.
- Pass the downstream caps to the update_caps vfunc
https://bugzilla.gnome.org/show_bug.cgi?id=756207
1. Otherwise rotating the video will clip and show black bars due to
gltransformation's implementation.
2. The other option of making gltransformation aspect-agnostic produces
incorrect output with perspective transformations.
Add a "mask" property that sets whether the edges by cvLaplace should be
used as a mask on the original input or not. The same way the original
image is copied to the edges in edgedetect.
Add a "mask" property that sets whether the detected derivative edges
should be used as a mask on the original input or not. The same way
the original image is added to the edges in edgedetect.
cvCvtPixToPlane() has been deprecated in OpenCV 3.0, and cvSplit() is the
suggested replacement. Since cvSplit() is available in OpenCV 2.4, it is
safe and cautious to update the function usage before it becomes an issue.
cvlaplace was also affected by the silent change in OpenCV API, same as
cvsobel. It hasn't been working for a while. It would return a plain black
image. This commit updates the usage of cvLaplace by using cvCvtColor to
create the grayscale intermediate image to process. This also means there
is no need anymore to use GstBaseTransform's transform_caps, since the pads
are RGB.
cvsobel hasn't been working for a while due to a silent change in OpenCV
API. It would return a plain black image. This commit updates the usage
of cvSobel by using cvCvtColor to create the grayscale image to process.
This also means there is no need to use GstBaseTransform's transform_caps
anymore, since the pads can be RGB.
In file included from effects/gstgleffectrgbtocurve.c:25:0:
effects/gstgleffectscurves.h:174:32: error: 'xray_curve' defined but not used
static const GstGLEffectsCurve xray_curve = {
...
For a live mpd, if availabilityStartTime is missing, adaptive demux asserts
with: Unexpected critical/warning: gst_date_time_to_g_date_time: assertion
'datetime != NULL' failed.
This patch improves the error message to:
Unexpected critical/warning: gst_mpd_client_seek_to_time: assertion
'client->mpd_node->availabilityStartTime != NULL' failed
https://bugzilla.gnome.org/show_bug.cgi?id=757602
Handling the ghostpad and its internal pad was causing more issues
than helping because of their coupled activation/deactivation
actions.
As we have to install custom chain, event and query functions, it is
better to use a floating sink pad internally in the demuxer and just
use those pad functions to push through a standard pad in the demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=757951
Reverses the transformation applied through the properties and forwards the
event.
The process for finding the coordinates on the video are as follows:
1. Convert the given pointer_x and pointer_y to model space at the near and far planes
2. Get the equation of the video plane
3. Find where the ray from step 1 intersects the plane
4. Profit!
It performs the exact same operation as videobalance but with opengl shaders and
was tested with glvideomixer by comparing frames from videobalance and
glcolorbalance.
Allows more blending options than just A over B
e.g. frame comparisons are now possible.
glvideomixer name=m
sink_0::zorder=0
sink_1::zorder=1
sink_1::blend-equation-rgb={subtract,reverse-subtract}
sink_1::blend-function-src-rgb=src-color
sink_1::blend-function-dst-rgb=dst-color
! glimagesinkelement
videotestsrc pattern=checkers-4 ! m.sink_0
videotestsrc pattern=checkers-8 ! m.sink_1
SBC frame length calculation wasn't being rounded up to the nearest byte
(as specified in the A2DP 1.0 specification, section 12.9). This could
cause 'stereo' and 'joint stereo' mode SBC streams to have incorrectly
calculated frame lengths.
https://bugzilla.gnome.org/show_bug.cgi?id=742446
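The essence of the fix, roughly (illustrative helper, not the exact gstsbcutil code):

  static int
  sbc_bits_to_bytes_ceil (int bits)
  {
    /* round up to the next whole byte instead of truncating */
    return (bits + 7) / 8;
  }

  /* e.g. for joint stereo (A2DP 1.0, section 12.9), roughly:
   *   frame_length = 4 + (4 * subbands * channels) / 8
   *       + sbc_bits_to_bytes_ceil (subbands + blocks * bitpool);
   */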
After commit 64080e632, configure checks for all the header files that
should be available in OpenCV 2.3 and later. If any of these files isn't
there the OpenCV elements won't be part of the build.
No need to recheck for opencv2/legacy/legacy.hpp again in
gstpyramidsegment.h. Minimum supported OpenCV version must have this header
and configure already checks for it. Removing check.
Run the transform function of pyramidsegment in place, reusing the image
data as both source and destination in cvPyrSegmentation. This avoids
copying the image back and forth and the extra memory.
Properly handle snap flags during reverse seeking. In this case
the before/after are also reversed, so handle those as such.
For example: with a sequence of 1s fragments:
|-- 0 --|-- 1 --|-- 2 --|-- 3 --|
If you seek to 1.5s it is inside fragment 1. With reverse and
snap-before: should play from the end of fragment 1
snap-after: should play from the end of fragment 0
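A sketch of the flag handling (illustrative): when playing backwards, "before" and "after" are mirrored relative to the playback direction, so the snap flags are effectively swapped.

  #include <gst/gst.h>

  static GstSeekFlags
  effective_snap_flags (GstSeekFlags flags, gdouble rate)
  {
    if (rate < 0) {
      gboolean before = (flags & GST_SEEK_FLAG_SNAP_BEFORE) != 0;
      gboolean after = (flags & GST_SEEK_FLAG_SNAP_AFTER) != 0;

      flags &= ~(GST_SEEK_FLAG_SNAP_BEFORE | GST_SEEK_FLAG_SNAP_AFTER);
      if (before)
        flags |= GST_SEEK_FLAG_SNAP_AFTER;
      if (after)
        flags |= GST_SEEK_FLAG_SNAP_BEFORE;
    }
    return flags;
  }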
Avoids an unlikely crash.
Arguably, if allocation fails we have no chance of
recovering but nonetheless, RTMP_Alloc can fail and
librtmp's RTMP_init() (called next) assumes a non-NULL
pointer is passed without checking.
Additionally, unify exit path on error.
Avoids an unlikely crash.
Arguably, if allocation fails we have no chance of
recovering but nonetheless, RTMP_Alloc can fail and
librtmp's RTMP_init() (called next) assumes a non-NULL
pointer is passed without checking.
Additionally, unify exit path on error.
All the gleffects shaders can be run against a gles2 or a legacy opengl glsl
compiler but weren't being advertised as such.
Fixes gleffects under desktop opengl < 3.2.
The property location has been changed in favor of vertex/fragment
string properties; the doc had not been updated and was still referring
to the previous property; also, now the #version header has become mandatory
https://bugzilla.gnome.org/show_bug.cgi?id=759902
No need to attempt splitting the RGB string in 255 tokens
if we only expect 3.
Left max_tokens at 4 to preserve the current logic (which
allows for extra stuff at the end) and added a warning on
parsing failure instead of silently discarding the value.
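A minimal sketch of that parsing (the ':' delimiter and the helper name are assumptions, not the element's exact code):

  #include <glib.h>
  #include <stdlib.h>

  static gboolean
  parse_rgb (const gchar * value, gint rgb[3])
  {
    gchar **tokens = g_strsplit (value, ":", 4);  /* at most 4 tokens */
    gboolean ok = g_strv_length (tokens) >= 3;

    if (ok) {
      rgb[0] = atoi (tokens[0]);
      rgb[1] = atoi (tokens[1]);
      rgb[2] = atoi (tokens[2]);
    } else {
      g_warning ("failed to parse RGB value '%s'", value);
    }
    g_strfreev (tokens);
    return ok;
  }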
The URI attribute from the EXT-X-KEY tag and the URI attribute from the
EXT-X-I-FRAMES-ONLY tag are both quoted-string attributes that have their
quotation marks removed during parsing. The CODECS attribute of the
EXT-X-STREAM-INF is also a quoted-string attribute, but this attribute
was not being un-quoted.
This commit changes the parser to always unquote all quoted-string
attributes and adjusts the unit tests to this new behaviour for the
CODECS attribute.
An additional test is added to check that parsing of all of the fields
in the EXT-X-STREAM tag is correct, including those that contain comma
characters.
https://bugzilla.gnome.org/show_bug.cgi?id=758384
Clear error as soon as we determine that the download failed,
otherwise there are code paths where we might return without
clearing it ever, which would leak the GError then. Also, we
can pass a NULL GError pointer to _fetch_uri(), so just do that
instead of passing one that we're going to just free again
right away anyway.
Adds more meaningful error than
"Failed to convert multiview video buffer", which is always used
when prepare_next_buffer() fails in gst_glimage_sink_prepare().
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Update opencvtextoverlay to inherit from GstOpencvVideoFilter instead of
from GstElement. This means less code and more uniformity with other OpenCV
elements. The chain/transform function is now a third of the size than
before.
Update pyramidsegment to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
Update motioncells to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
Add gst_gl_memory_allocator_get_default to get the default allocator based on
the opengl version. Allows us to stop hardcoding the PBO allocator which isn't
supported on gles2.
Fixes GL upload on iOS9 among other things.
Performing any GL function marshalling off the GL thread with glimagesink's
render lock is prone to deadlocks between the GL thread and the non-GL thread.
What can happen is this:
1. non-GL thread attempts to function marshal to the GL thread.
2. while 1 is happening, the winsys gives an event (say resize)
3. This calls back into glimagesink which takes the render lock.
4. As the GL function marshalling is attempting to run on the GL thread
while glimagesink's render lock is already locked, this deadlocks
as the threads are waiting for each other.
Update edgedetect to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
Adding the support for the two other OpenCV linear filters to smooth
images. The new API does support spatial sigma in the bilateral filter,
hence bringing that property back.
Adding reference to new documentation.
The OpenCV cvSmooth function is deprecated [0] and the documentation
recommends to use GaussianBlur (). This makes the spatial property go
unused. Marking it as deprecated, making it non-functional and will remove
in the next cycle.
[0] http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html
gst_mpdparser_parse_utctiming_node does not validate the parsed values completely. The following scenarios are incorrectly accepted:
- elements with no schemeIdUri property should be rejected
- elements with unrecognized UTCTiming scheme should be rejected
- elements with empty values should be rejected
The last one triggers a division by 0 in gst_dash_demux_poll_clock_drift:
clock_drift->selected_url = clock_drift->selected_url % g_strv_length (urls);
because urls is a valid pointer to an empty array.
https://bugzilla.gnome.org/show_bug.cgi?id=759547
We don't directly link to GL in the element, though we use GL headers.
For this reason we need to include the proper GL headers path. This
prevents this element from using a different GL header than libgstgl.
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
The base class is useful for having multiple backing memory types other
than the default. e.g. IOSurface, EGLImage, dmabuf?
The PBO transfer logic is now inside GstGLMemoryPBO which uses GstGLBuffer
to manage the PBO memory.
This also moves the format utility functions into their own file.
The opencv element includes were full of duplicates and unneeded headers.
For example a few elements that stopped using gstcvopencvutils still
included that header file.
Since commit 45ca8876b2 nobody is using
gst_opencv_get_ipl_depth_and_channels() or
gst_opencv_parse_iplimage_params_from_structure(). Remove this dead
code.
Setting the seek flags to GST_SEEK_FLAG_SNAP_* will change the seek
target time to a segment boundary.
Based on original work by Ben Willers <bwillers@digisoft.tv>.
https://bugzilla.gnome.org/show_bug.cgi?id=759108
Dashdemux has set the width and height information from the MPD manifest.
Some embedded devices which have insufficient H/W resources need more information,
such as the framerate, to assign H/W resources. So I suggested that dashdemux also
needs to set the framerate information from the MPD manifest.
https://bugzilla.gnome.org/show_bug.cgi?id=758515
As HLS does not provide any way of knowing the server's clock, and we do
buffering of "live" streams, at some point we will fall behind the server in
many cases and would have to advance to a fragment that is not in the playlist
anymore.
Previously we would've just resynced to the next oldest fragment that is still
there, but this causes problems as from this point onwards we would always
fall off the playlist again all the time.
Instead we now resync and move to the 3rd newest fragment like we would do
when starting playback of a live stream.
https://bugzilla.gnome.org/show_bug.cgi?id=758987
If the connection speed is set, the playlist matching the connection
speed is selected as the current playlist.
The problem is that the current variant of the main playlist still
points to the previously set variant.
If the previously set variant doesn't correspond to the current
playlist, this causes an unnecessary change of playlist
to the same playlist after the first fragment is downloaded,
because the current variant was not updated.
To fix this, we need to make sure that the current variant
of the main playlist corresponds to the current playlist.
https://bugzilla.gnome.org/show_bug.cgi?id=758946
Don't jump backward to 3 files from the end of the playlist
when switching variants - it just means we downloaded
fragments fast and caught up to the end of the playlist.
Disable that by treating a variant switch as a playlist
update, not a restart due to a seek or so.
If the stream is discont, we must provide a timestamp in any case. Elements
like tsdemux are not going to output anything if we give a NONE timestamp
after a discont.
Also marking a stream as discont if a playlist change was not successful would
lead to the above situation, but in that case we are not required at all to
mark the stream discont as we're still at the old playlist.
commit da5c41930c removed the two uses of the
new value of data:
channels = opus_packet_get_nb_channels (data);
bandwidth = opus_packet_get_bandwidth (data);
Since then, data isn't being used between incrementing it by packet_offset
and going out of scope. Removing this unneeded statement.
The scene graph can be initialized when we receive the window handle change
notification, in which case we will not receive a scenegraph initialization
notification. Initialize ourselves in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=758337
The result of the two expressions will be promoted to guint64 anyway,
perform all the arithmetic in 64 bits to avoid potential overflows.
CID 1338690, CID 1338691
sample_rate might be used uninitialized if !sink_caps is TRUE. Initialize
it to the default used in gst_codec_utils_opus_parse_caps () when there is
no rate defined in the caps.
CID 1338695
There is a possibility that the _get_caps impl will be called with the
feature in the filter caps which, when intersecting with the template,
will return EMPTY and therefore fail negotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=757854
It's for the upstream element driving the pipeline to
handle seeks and send flush events or not, filters
should not interfere here, otherwise downstream pads
could be flushing before upstream pads are flushing,
which can result in GST_FLOW_ERROR being sent instead
of GST_FLOW_FLUSHING when trying to forward sticky
events at just the wrong moment.
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_FORMAT.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
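For illustration, the pattern looks like this:

  #include <gst/gst.h>

  static void
  log_drift (GstClockTime expected, GstClockTime actual)
  {
    GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, actual);

    /* GST_STIME_FORMAT/GST_STIME_ARGS (since 1.6) print the signed value
     * as e.g. -0:00:01.000000000 instead of a raw negative integer */
    GST_DEBUG ("drift: %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));
  }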
We always require the channel-mapping-field. If it's 0 we require nothing
else, otherwise we need channels, stream-count and coupled count to be
available.
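A sketch of the required checks, assuming the usual audio/x-opus caps field names:

  #include <gst/gst.h>

  static gboolean
  validate_opus_caps (const GstStructure * s)
  {
    gint family, channels, stream_count, coupled_count;

    if (!gst_structure_get_int (s, "channel-mapping-family", &family))
      return FALSE;

    if (family == 0)
      return TRUE;              /* nothing else is required */

    return gst_structure_get_int (s, "channels", &channels)
        && gst_structure_get_int (s, "stream-count", &stream_count)
        && gst_structure_get_int (s, "coupled-count", &coupled_count);
  }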
oggdemux is outputting the meta now, and only outputs if it should really
apply to the current buffer. Previously we would skip N samples also if we
started the decoder in the middle of the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=757153
It is doing the wrong thing because of the Opus pre-skip: while the timestamps
are shifted by the pre-skip, the granule positions are not shifted.
oggmux is doing the right thing here already.
https://bugzilla.gnome.org/show_bug.cgi?id=757153
The first frame has lookahead less samples, the last frame might have some
padding or we might have to encode another frame of silence to get all our
input into the encoded data.
This is because of a) the lookahead at the beginning of the encoding, which
shifts all data by that amount of samples and b) the padding needed to fill
the very last frame completely.
Ideally we would use LPC to calculate something better than silence for the
padding to make the encoding as smooth as possible.
With this we get exactly the same amount of samples again in an
opusenc ! opusdec pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=757153
Waylandsink needs exception handling code in gst_wayland_sink_set_window_handle().
After sink->window has been created, the user can still call
gst_wayland_sink_set_window_handle(). It is the user's fault, but
waylandsink needs to handle the exception, otherwise sink->window is
changed and rendering fails.
https://bugzilla.gnome.org/show_bug.cgi?id=747482
Waylandsink needs exception handling code in gst_wayland_sink_set_context(). After
calling gst_wayland_sink_set_context(), the following code is run:
GST_ELEMENT_CLASS (parent_class)->set_context (element, context);
but the user may call it once more. That is the user's fault, but waylandsink
needs to handle the exception.
https://bugzilla.gnome.org/show_bug.cgi?id=747482
The stream->cur_seg_template is set to the lowest available segment
template (representation or adaptation or period, in this order).
Because the template elements are inherited, the lowest template will
have all the elements the parents had, so there is no need to check the
parent for an element that is not found in the child (eg initialisation
or index).
https://bugzilla.gnome.org/show_bug.cgi?id=752714
The standard does not seem to make any particular explicit nor
implicit reference to the signedness of durations, and the code
does not rely on such, nor on the negativity of the -1 value
that's used as a placeholder when a duration property is not
present in the XML.
https://bugzilla.gnome.org/show_bug.cgi?id=750847
According to the standard:
"SegmentBase, SegmentTemplate and SegmentList shall inherit
attributes and elements from the same element on a higher level.
If the same attribute or element is present on both levels,
the one on the lower level shall take precedence over the one
on the higher level."
gst_mpdparser_parse_segment_list_node will now discard any inherited
segment URLs if the parsed element defines some too.
https://bugzilla.gnome.org/show_bug.cgi?id=751832
Solved with a simple shader templating mechanism and string replacements
of the necessary sampler types/texture accesses and texture coordinate
mangling for rectangular and external-oes textures.
Add the various tokens/strings for the different texture types (2D, rect, oes)
Changes the GLmemory api to include the GstGLTextureTarget in all relevant
functions.
Update the relevant caps/templates for 2D only textures.
If MPD@suggestedPresentationDelay is not present in the manifest,
dashdemux selects the fragment closest to the most recently generated
fragment. This causes a playback issue because this choice does not allow
the DASH client to build up any buffer of downloaded fragments without
pausing playback. This is because by definition new fragments appear on
the server in real-time (e.g. if segment duration is 4 seconds, a new
fragment will appear on the server every 4 seconds). If the starting
playback position was n*segmentDuration seconds behind "now", the DASH
client could download up to 'n' fragments faster than realtime before it
reached the point where it needed to wait for fragments to appear on the
server.
The MPD@suggestedPresentationDelay attribute allows a content publisher
to provide a suggested starting position that is behind the current
"live" position.
If the MPD@suggestedPresentationDelay attribute is not present, provide
a suitable default value as a property of the dashdemux element. To
allow the default presentation delay to be specified either using
fragments or seconds, the property is a string that contains a number
and a unit (e.g. "10 seconds", "4 fragments", "2500ms").
Corrected the parsing of a segment template string.
Added unit tests to test the segment template parsing.
All reported problems are now correctly handled.
https://bugzilla.gnome.org/show_bug.cgi?id=751735
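For example, assuming the property ends up being exposed on dashdemux under
the name "presentation-delay" (an assumption for illustration):

    #include <gst/gst.h>

    int main (int argc, char **argv)
    {
      GstElement *demux;

      gst_init (&argc, &argv);

      demux = gst_element_factory_make ("dashdemux", NULL);
      if (demux) {
        /* Any of "10 seconds", "4 fragments" or "2500ms" would follow the
         * "number + unit" format described above. */
        g_object_set (demux, "presentation-delay", "10 seconds", NULL);
        gst_object_unref (demux);
      }
      return 0;
    }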
When building the media segment list using a SegmentList node, the
gst_mpd_client_setup_representation function will iterate through the
list of S nodes and will expect to find a matching SegmentUrl node. If
one does not exist, the code made an illegal memory access.
https://bugzilla.gnome.org/show_bug.cgi?id=752496
These are used to apply restrictions on what the MPD file may
use, so no profile means no restrictions.
Besides, nothing actually uses the profiles (yet) anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=750869
This wl_display proxy is temporary only until waylandsink goes NULL,
at which point the connection to the display is disposed. Unfortunately,
if this is advertised as a GstContext, playbin will cache it and re-feed
it to the sink when it goes PLAYING again, but the wl_display pointer
will at that point be invalid and cause a crash.
Another solution to the problem would be to also cache the GstWlDisplay
object inside the GstContext, which would automatically ref-count
the display connection, but I see no reason to do that at the moment,
as there are no known users of this GstContext outside waylandsink.
It's probably better to avoid chasing hidden refcounts.
https://bugzilla.gnome.org/show_bug.cgi?id=756567
If a (master) playlist contains a variant list entry without a
URI then during parsing of the next variant list entry we are
a) leaking the entry we're currently parsing (new_list), and
b) free'ing the pointer to the previous list entry (list) without
updating the pointer.
Hence, when the URI for the latest parsed entry is then added, incorrect
information is stored, because 'list' no longer points to valid memory,
which also leads to crashes.
Fix this by correctly storing the new variant list entry pointer
as needed.
https://bugzilla.gnome.org/show_bug.cgi?id=756861
Nicer to read, two lines of code less, and also the callback
function should've been a GCompareFunc that returns a gint
and not a boolean (it did work correctly, was just confusing).
Currently float and int are supported by default. vec2, vec3, vec4
and mat4 are supported if graphene is used. Of course if one wants
to set custom uniforms they can also be set using the create-shader
signal.
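A small sketch of setting such uniforms from application code, assuming the
element exposes them through a GstStructure-typed "uniforms" property (an
assumption for illustration):

    #include <gst/gst.h>

    int main (int argc, char **argv)
    {
      GstElement *shader;
      GstStructure *uniforms;

      gst_init (&argc, &argv);

      shader = gst_element_factory_make ("glshader", NULL);

      /* float and int work out of the box; vec2/vec3/vec4/mat4 would need
       * the corresponding graphene boxed types. */
      uniforms = gst_structure_new ("uniforms",
          "alpha", G_TYPE_FLOAT, 0.5f,
          "mode", G_TYPE_INT, 1,
          NULL);

      if (shader)
        g_object_set (shader, "uniforms", uniforms, NULL);

      gst_structure_free (uniforms);
      if (shader)
        gst_object_unref (shader);
      return 0;
    }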
We know that the gchar arrays contain at least one string. Furthermore,
g_strfreev() checks if the array is NULL and simply returns if it is.
Hence, there is no need to check if the array is empty before using
g_strfreev().
CID 1327412-1327415
In order to ensure the sequence_position will always be consistently updated,
store the current file duration.
This way, when we advance, we can always increment the position based on what
was previously outputted.
https://bugzilla.gnome.org/show_bug.cgi?id=752132
One may not have a GstGLContext available or current in the thread where one
would need to update the shader. Support this by signalling create-shader
whenever the one-shot 'update-shader' is set to TRUE.
A GstGLShader is now simply a collection of stages that are
compiled and linked together into a program. The uniform/attribute
interface has remained the same.
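A rough sketch of building a program from stages with this model; it assumes
a valid GstGLContext and has to run in the GL thread, and the exact functions
used here should be double-checked against the gst-gl headers:

    #include <gst/gl/gl.h>

    static GstGLShader *
    build_program (GstGLContext * context, const gchar * vert_src,
        const gchar * frag_src)
    {
      GError *error = NULL;
      GstGLShader *shader = gst_gl_shader_new (context);
      GstGLSLStage *vert = gst_glsl_stage_new_with_string (context,
          GL_VERTEX_SHADER, GST_GLSL_VERSION_NONE, GST_GLSL_PROFILE_NONE,
          vert_src);
      GstGLSLStage *frag = gst_glsl_stage_new_with_string (context,
          GL_FRAGMENT_SHADER, GST_GLSL_VERSION_NONE, GST_GLSL_PROFILE_NONE,
          frag_src);

      /* Each stage is compiled individually, attached, and then linked into
       * the program; the uniform/attribute interface stays as before. */
      if (!gst_glsl_stage_compile (vert, &error) ||
          !gst_glsl_stage_compile (frag, &error) ||
          !gst_gl_shader_attach (shader, vert) ||
          !gst_gl_shader_attach (shader, frag) ||
          !gst_gl_shader_link (shader, &error)) {
        GST_WARNING ("failed to build shader: %s",
            error ? error->message : "unknown error");
        g_clear_error (&error);
        gst_object_unref (shader);
        return NULL;
      }
      return shader;
    }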
Allows playlists that are missing the mediasequence information to
be correctly parsed. If the playlist was updated without resetting
the mediasequence it would constantly increase over subsequent updates,
leading to issues during playback.
For live streams, we want to make sure there's a certain distance
between the sequence to play and the last (earliest) fragment.
The problem is that it assumes there are at least 3 fragments in
the playlist, which might not always be the case (like in the case
of a server restarting and gradually adding fragments).
In order to avoid ending up with negative sequence numbers (which
will just loop forever), limit the new target sequence number to
the highest of:
* either the first sequence number of the playlist (fallback)
* or 3 fragments from the last one (standard behaviour)
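A minimal sketch of that clamping, with made-up variable names:

    #include <glib.h>

    static gint64
    live_target_sequence (gint64 first_sequence, gint64 last_sequence)
    {
      /* Normally stay 3 fragments behind the end of the playlist, but never
       * go below the first sequence number that is actually present. */
      return MAX (first_sequence, last_sequence - 3);
    }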
Change the gstcvdilate.c file extension to cpp and add it to the Makefile, for
consistency with the other OpenCV elements and because the new OpenCV 2.4.11
API no longer supports the C language.
https://bugzilla.gnome.org/show_bug.cgi?id=754148
Change the file extension to cpp and add it to the Makefile, for consistency
with the other OpenCV elements and because the new OpenCV 2.4.11 API no
longer supports the C language.
https://bugzilla.gnome.org/show_bug.cgi?id=754148
Change the file extension to cpp and add it to the Makefile, for consistency
with the other OpenCV elements and because the new OpenCV 2.4.11 API no
longer supports the C language.
https://bugzilla.gnome.org/show_bug.cgi?id=754148
Change the gstretinex.c file to cpp and add it to the Makefile.
It is necessary to migrate the retinex element to C++, because the new
OpenCV API makes functions such as cvSmooth obsolete, and this element
uses that function. See:
http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=cvsmooth#void cvSmooth(const CvArr* src, CvArr* dst, int smoothtype, int size1, int size2, double sigma1, double sigma2)
https://bugzilla.gnome.org/show_bug.cgi?id=754148
The cascade classifier structure changed in OpenCV 2.4.11. It is necessary to
migrate to C++ in order to use the new OpenCV load method, which can load
both the old and the new classifier formats.
https://bugzilla.gnome.org/show_bug.cgi?id=752528
Change the gsthanddetect.c file to cpp and add it to the Makefile.
It is necessary to migrate the handdetect plugin to C++ so that it can load
both new and old classifiers and keep working with newer versions of OpenCV.
https://bugzilla.gnome.org/show_bug.cgi?id=752528
This is mostly a copy/paste of the negotiation function in
basetextoverlay, which was improved recently to handle many more cases.
This will allow us to negotiate a window size with downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=753824
Doing the contrary has no effect, and the consequence is that playback
will start with the lowest bitrate even if we could already handle a
higher one.
https://bugzilla.gnome.org/show_bug.cgi?id=755108
Not doing this can lead the demuxer to attempt downloading fragments
for an invalid start time. The server would then send an HTTP
Precondition Failed error, the demuxer would retry downloading the
invalid fragment a few more times and eventually error out.
https://bugzilla.gnome.org/show_bug.cgi?id=754523
Move the TAG defines directly into the code. It is not clear what their
purpose was; these are printf format strings, so having them directly as
literals in the code where they're used makes the code easier to follow.
Remove playlist_str GString variable from GstM3U8Playlist struct,
since it's only used temporarily in playlist_render(). Might just
as well keep it local then.
Transform is set to be done in place in gstcvdilateerode.c, so the in-place
transform function is always used and the other is redundant. Removing it.
https://bugzilla.gnome.org/show_bug.cgi?id=753885
Transform is set to be done in place in gstcvdilateerode.c, so the in-place
transform function is always used and the other is redundant. Removing it.
https://bugzilla.gnome.org/show_bug.cgi?id=753885
When running GStreamer from uninstalled sources, the location of the haar
cascade files will be local. Check if we are running uninstalled and set the
file paths accordingly.
Move handling of a GstSample into a separate function, and unref the
sample after calling it. libass copies the font data so we don't need to
keep it around.
https://bugzilla.gnome.org/show_bug.cgi?id=755759
Ignore the normal gap threshold for laggy streams and
immediately catch all streams up to the end of the segment
when processing gap updates for a segment during a
still frame sequence.
https://bugzilla.gnome.org/show_bug.cgi?id=755680
Create src pads for Representations that contain timed-text subtitles,
both when the subtitles are encapsulated in ISO BMFF (i.e., the
Representation has mimeType "application/mp4") and when they are
unencapsulated (i.e., the Representation has mimeType
"application/ttml+xml").
https://bugzilla.gnome.org/show_bug.cgi?id=747774
The same has to be done for AdaptationSet and SegmentList nodes still.
Also this does not correctly implement the semantics: by default Period (and
other nodes) should only be loaded when needed, not in the very beginning. We
need to implement lazy loading for them, which means adjusting
gst_mpd_client_setup_media_presentation().
https://bugzilla.gnome.org/show_bug.cgi?id=752230
gst_uri_join_strings() will return the second parameter if it is an absolute
URI. No need to do a (wrong) check if the URI is absolute or not beforehand.
https://bugzilla.gnome.org/show_bug.cgi?id=755134
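For illustration (the example URIs are made up):

    #include <gst/gst.h>

    int main (int argc, char **argv)
    {
      gchar *rel, *abs;

      gst_init (&argc, &argv);

      /* Relative reference: resolved against the base URI. */
      rel = gst_uri_join_strings ("http://example.com/seg/", "init.mp4");
      /* Absolute reference: returned as-is, no check needed beforehand. */
      abs = gst_uri_join_strings ("http://example.com/seg/",
          "http://cdn.example.net/init.mp4");

      g_print ("%s\n%s\n", rel, abs);
      g_free (rel);
      g_free (abs);
      return 0;
    }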
In case the format changes quickly and the pending format is different
from the currently set one, but the currently set one is equal to the
new pending one, we could end up with a mismatch between the finally set
format and the format of the data stream.
https://bugzilla.gnome.org/show_bug.cgi?id=755542
The spec defines these as signed in 5.3.9.6.1.
Since we don't support this behavior, warn and default to 0
(non repeating), which is the spec's default when the value
is not present.
https://bugzilla.gnome.org/show_bug.cgi?id=752480
Even if it doesn't actually advance the subfragment in the default way
for streams that have subfragments, it can help the base class return
EOS when there are no more fragments, instead of signalling that it
should continue downloading.
https://bugzilla.gnome.org/show_bug.cgi?id=755042
This reverts commit 626a8f0a74.
This allows us to get the plain presentation offset and the period start time
separately. We have to adjust the timestamp by the presentation offset, but
the period start time should only adjust the stream time and running time.
https://bugzilla.gnome.org/show_bug.cgi?id=752409
This reverts commit e671ad25a9.
The timestamps should restart at 0 again for each period, but we have to
adjust the segment to map those timestamps to the actual stream time and
running time of that period.
Otherwise we would have timestamps that conflict with the ones from the tfdt
inside the MP4 container, which restart at 0 for each period.
https://bugzilla.gnome.org/show_bug.cgi?id=752409
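A small sketch of how such a segment maps per-period timestamps into stream
time (the values below are made up for illustration):

    #include <gst/gst.h>

    int main (int argc, char **argv)
    {
      GstSegment segment;
      GstClockTime pts, stream_time;

      gst_init (&argc, &argv);

      gst_segment_init (&segment, GST_FORMAT_TIME);
      segment.start = 0;                 /* period timestamps restart at 0 */
      segment.time = 30 * GST_SECOND;    /* period starts 30s into the stream */

      pts = 2 * GST_SECOND;              /* timestamp inside the period */
      stream_time = gst_segment_to_stream_time (&segment, GST_FORMAT_TIME, pts);

      g_print ("stream time: %" GST_TIME_FORMAT "\n",
          GST_TIME_ARGS (stream_time));  /* -> 0:00:32.000000000 */
      return 0;
    }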
In the DASH ISOBMFF profile the fragment is split into subfragments where
bitrate switching is possible. Also take that into consideration
when checking whether a stream has next fragments.