Face detection is performed only if the image's standard deviation is
greater than min-stddev. The default min-stddev is 0 for backward
compatibility. This property avoids running face detection on images
with little variation, improving CPU usage and reducing false
positives.
https://bugzilla.gnome.org/show_bug.cgi?id=730510
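A minimal sketch of that check using OpenCV's C API; the helper name and the
way the frame and the property value reach it are assumptions, not facedetect's
actual code:

    #include <glib.h>
    #include <opencv2/core/core_c.h>

    /* Decide whether a frame has enough variation to be worth running
     * face detection on. "img" stands for the grayscale IplImage the
     * element already works on and "min_stddev" for the new property;
     * both names are illustrative. */
    static gboolean
    frame_has_enough_detail (const IplImage * img, gdouble min_stddev)
    {
      CvScalar mean, stddev;

      cvAvgSdv (img, &mean, &stddev, NULL);

      /* With the default min-stddev of 0, any frame with non-zero
       * variation is still processed. */
      return stddev.val[0] > min_stddev;
    }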
* aspect should not be 0 on init
* rename fovy to fov
* add mvp to properties as boxed graphene type
* fix transformation order: scale first (see the sketch below)
* clear color with 1.0 alpha
https://bugzilla.gnome.org/show_bug.cgi?id=734223
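A rough sketch of the "scale first" ordering with graphene, assuming its
row-vector convention where graphene_matrix_multiply (a, b, res) applies a
before b when transforming points; the function and its parameters are
illustrative, not gltransformation's actual code:

    #include <graphene.h>

    /* Build a model matrix that scales first, then rotates, then
     * translates. Angles are in degrees, as graphene expects. */
    static void
    build_model_matrix (float scale, float rot_z_deg, float tx, float ty,
        graphene_matrix_t * model)
    {
      graphene_matrix_t s, r, t, tmp;
      graphene_point3d_t translation = GRAPHENE_POINT3D_INIT (tx, ty, 0.f);

      graphene_matrix_init_scale (&s, scale, scale, 1.f);
      graphene_matrix_init_rotate (&r, rot_z_deg, graphene_vec3_z_axis ());
      graphene_matrix_init_translate (&t, &translation);

      /* scale, then rotate, then translate */
      graphene_matrix_multiply (&s, &r, &tmp);
      graphene_matrix_multiply (&tmp, &t, model);
    }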
If the language is not specified in the AdaptationSet, use the ContentComponent
node to get it. We only do this if there is a single ContentComponent, as it is
not clear what to do when there are multiple entries.
https://bugzilla.gnome.org/show_bug.cgi?id=732237
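The fallback logic, sketched with stand-in types; the real MPD parser nodes
and field names differ:

    #include <glib.h>

    /* Stand-in for a parsed ContentComponent node */
    typedef struct {
      gchar *lang;
    } ContentComponent;

    static const gchar *
    get_adaptation_set_lang (const gchar * adaptation_set_lang,
        GList * content_components)
    {
      if (adaptation_set_lang != NULL)
        return adaptation_set_lang;

      /* Fall back to ContentComponent@lang only when there is exactly
       * one entry; with several entries it is unclear which to pick. */
      if (g_list_length (content_components) == 1)
        return ((ContentComponent *) content_components->data)->lang;

      return NULL;
    }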
Dynamic pipelines that request and release the sink pads will finalize
the pad without going through gst_gl_mixer_stop(), which is where the
upload object is usually freed. Don't leak the object in that case.
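A small sketch of the idea, assuming the pad holds its upload object in a
pointer the element can reach on release; the actual field layout in glmixer
may differ:

    #include <gst/gst.h>

    /* On dynamic pad release, drop the upload object held for that pad
     * instead of waiting for gst_gl_mixer_stop(); "pad_upload" is an
     * illustrative stand-in for the real per-pad field. */
    static void
    release_pad_upload (GstObject ** pad_upload)
    {
      /* gst_object_replace() unrefs the old object and clears the pointer */
      gst_object_replace (pad_upload, NULL);
    }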
Instead, always use the low-bandwidth playlist, which makes things go more
smoothly, as the current heuristic is tuned for normal playback and
currently does not behave properly.
https://bugzilla.gnome.org/show_bug.cgi?id=734445
When a seek with a negative rate is requested, find the target
segment that gstsegment.stop belongs to and then download from
this segment backwards until the first segment.
This allows proper reverse playback.
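A simplified sketch of that search-then-walk-backwards loop; the Fragment
type, the fragment array and the download call are hypothetical stand-ins for
the demuxer's internals:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime start;
      GstClockTime duration;
    } Fragment;

    /* Hypothetical download hook, standing in for the real fragment fetch */
    static void download_fragment (const Fragment * f);

    static void
    download_backwards (GArray * fragments, const GstSegment * segment)
    {
      gint i, target = 0;

      /* Find the fragment whose range contains segment->stop */
      for (i = 0; i < (gint) fragments->len; i++) {
        Fragment *f = &g_array_index (fragments, Fragment, i);

        if (segment->stop >= f->start &&
            segment->stop < f->start + f->duration) {
          target = i;
          break;
        }
      }

      /* Walk from that fragment back to the first one */
      for (i = target; i >= 0; i--)
        download_fragment (&g_array_index (fragments, Fragment, i));
    }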
If the window is resized, the GstStructure pointer values have to be rescaled
to the original geometry. A get_surface_dimensions GLWindow class method is
added for this purpose and used in the navigation send_event function.
https://bugzilla.gnome.org/show_bug.cgi?id=703486
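Roughly what the rescaling amounts to, sketched as a standalone helper; the
function and the exact scale factors are assumptions, while "pointer_x" and
"pointer_y" are the fields GstNavigation events actually carry:

    #include <gst/gst.h>

    /* Rescale a navigation event's pointer coordinates from the current
     * window size to the original surface size. */
    static void
    rescale_navigation_event (GstStructure * structure,
        guint window_width, guint window_height,
        guint surface_width, guint surface_height)
    {
      gdouble x, y;

      if (gst_structure_get_double (structure, "pointer_x", &x) &&
          gst_structure_get_double (structure, "pointer_y", &y)) {
        gst_structure_set (structure,
            "pointer_x", G_TYPE_DOUBLE, x * surface_width / window_width,
            "pointer_y", G_TYPE_DOUBLE, y * surface_height / window_height,
            NULL);
      }
    }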
When flushing, this will prevent dashdemux from trying to download more
fragments or more chunks of the same fragment before stopping.
It also improves the error handling so that not everything non-OK is
turned into an error.
https://bugzilla.gnome.org/show_bug.cgi?id=734014
templatematch operates on BGR data. In fact, OpenCV's IplImage always
stores color image data in BGR order -- this isn't documented at all in
the OpenCV source code, but there are hints around the web (see for
example
http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/opencv-intro.html#SECTION00041000000000000000
and http://www.comp.leeds.ac.uk/vision/opencv/iplimage.html ).
gst_templatematch_load_template loads the template (the image to find)
from disk using OpenCV's cvLoadImage, so it is stored in an IplImage in
BGR order. But in gst_templatematch_chain, no OpenCV conversion
functions are used: the imageData pointer of the IplImage for the video
frame (the image to search in) is just set to point to the raw buffer
data. Without this fix, that raw data is in RGB order, so the call to
cvMatchTemplate ends up comparing the template's Blue channel against
the frame's Red channel, producing very poor results.
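One way to keep the channel orders consistent (the element can also simply
negotiate BGR caps on its pads instead), sketched with OpenCV's C API:

    #include <opencv2/core/core_c.h>
    #include <opencv2/imgproc/imgproc_c.h>

    /* Convert the RGB video frame into a BGR copy that matches the
     * cvLoadImage()-produced template before matching. "result" must be
     * a single-channel 32-bit float image sized (frame - template + 1)
     * in each dimension, as cvMatchTemplate() requires. */
    static void
    match_template_rgb_frame (IplImage * rgb_frame, IplImage * bgr_template,
        IplImage * result)
    {
      IplImage *bgr_frame =
          cvCreateImage (cvGetSize (rgb_frame), rgb_frame->depth, 3);

      cvCvtColor (rgb_frame, bgr_frame, CV_RGB2BGR);
      cvMatchTemplate (bgr_frame, bgr_template, result, CV_TM_CCORR_NORMED);

      cvReleaseImage (&bgr_frame);
    }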