Specifically, when scanning for the entropy data segment length and needing
more data, do not rescan from the start next time around, but resume at the
last position.
See also #583047.
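A minimal sketch of the idea, assuming a hypothetical parser state struct
and a made-up sync pattern (this is not the actual parser code):

  #include <glib.h>

  typedef struct {
    gsize scan_offset;          /* where the previous scan stopped */
  } ParserState;

  /* Look for the end of the current segment; if we run out of data,
   * remember how far we got so the next call resumes there instead of
   * rescanning from the start of the buffer. */
  static gboolean
  find_segment_end (ParserState * state, const guint8 * data, gsize size,
      gsize * end)
  {
    gsize i;

    for (i = state->scan_offset; i + 1 < size; i++) {
      if (data[i] == 0xff && (data[i + 1] & 0xf8) == 0xf8) {  /* made-up sync */
        *end = i;
        state->scan_offset = 0;   /* reset for the next segment */
        return TRUE;
      }
    }

    state->scan_offset = i;       /* resume here next time */
    return FALSE;
  }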
That is, the header configuration may start at a Video Object (startcode)
rather than at a Visual Object Sequence; this case is also catered for and
parsed, so let's also take it as codec_data if no more data is available.
Fixes (remainder of) #572551.
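For reference, roughly what accepting a Video Object startcode as the start
of the config means in code (an illustrative helper, not the parser's actual
function):

  #include <glib.h>

  /* MPEG-4 Visual startcodes: 0x00..0x1f is a Video Object,
   * 0xb0 is a Visual Object Sequence. */
  static gboolean
  looks_like_config_start (const guint8 * data, gsize size)
  {
    if (size < 4 || data[0] != 0x00 || data[1] != 0x00 || data[2] != 0x01)
      return FALSE;

    /* accept a config that starts at a Video Object, not only one that
     * starts at a Visual Object Sequence */
    return data[3] == 0xb0 || data[3] <= 0x1f;
  }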
gcc 4.5 warns when comparing an integer with an enum value; in the case of
GstFlowReturn this is valid, though. We should later add GST_FLOW_CUSTOM_OK1,
GST_FLOW_CUSTOM_OK2, etc. after the new core is released.
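The pattern in question looks roughly like this (the custom return value
below is hypothetical):

  #include <gst/gst.h>

  /* hypothetical element-private flow return, taken from the custom
   * range GStreamer reserves for this purpose */
  #define MY_FLOW_DROPPED ((GstFlowReturn) (GST_FLOW_CUSTOM_SUCCESS + 1))

  static void
  handle_flow (GstFlowReturn ret)
  {
    /* gcc 4.5 warns when the right-hand side is a plain integer
     * expression; casting it back to GstFlowReturn (as in the macro
     * above) keeps the comparison valid and warning-free */
    if (ret == MY_FLOW_DROPPED)
      g_print ("buffer dropped by element-specific handling\n");
  }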
Adds video-capture-width, video-capture-height and
video-capture-framerate properties to allow applications to
get/set those values. Getting was not possible before this patch,
and setting was done through the set-video-resolution-fps
action, which sets the properties and promptly resets the
video source to use them.
Fixes #614958
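From the application side this boils down to plain property access, e.g.
(a hedged sketch; the element is assumed to be a camerabin instance, and
video-capture-framerate is handled analogously):

  #include <gst/gst.h>

  static void
  configure_video_capture (GstElement * camerabin)
  {
    /* set the capture resolution directly instead of going through the
     * set-video-resolution-fps action; video-capture-framerate can be
     * handled the same way */
    g_object_set (camerabin,
        "video-capture-width", 1280,
        "video-capture-height", 720,
        NULL);
  }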
Adds image-capture-width and image-capture-height properties
to camerabin, allowing the user to get/set them. Getting was
not possible before, and setting was done through the
set-image-resolution action, which should now just set
the properties.
Fixes #614958
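The getter side this enables could look like the following sketch
(integer-typed properties assumed):

  #include <gst/gst.h>

  static void
  print_image_capture_resolution (GstElement * camerabin)
  {
    gint width = 0, height = 0;

    /* reading these values back was not possible before this change */
    g_object_get (camerabin,
        "image-capture-width", &width,
        "image-capture-height", &height,
        NULL);

    g_print ("image capture resolution: %dx%d\n", width, height);
  }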
Adds a block-after-capture property to block the viewfinder from running
after capturing. This property is useful if the application wants to display
a capture preview without running the viewfinder in the background.
Based on a patch by Tommi Myöhänen <ext-tommi.1.myohanen@nokia.com>
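For applications this is a single property write (sketch; camerabin
instance assumed):

  #include <gst/gst.h>

  static void
  enable_capture_preview_mode (GstElement * camerabin)
  {
    /* keep the viewfinder blocked after each capture so the application
     * can show its own capture preview instead of the live viewfinder;
     * resuming the viewfinder is left to the application */
    g_object_set (camerabin, "block-after-capture", TRUE, NULL);
  }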
Adds a new property called viewfinder-filter to camerabin.
This property is used to add a filter to process the video
flow right before the viewfinder sink.
Also updates the test to check that the property exists.
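For example, hooking a standard video filter in front of the viewfinder
sink might look like this (illustrative; any one-in/one-out video element
should do):

  #include <gst/gst.h>

  static void
  add_viewfinder_effect (GstElement * camerabin)
  {
    /* videoflip is just an example of a simple video filter */
    GstElement *filter = gst_element_factory_make ("videoflip", NULL);

    /* camerabin places this element right before the viewfinder sink */
    if (filter != NULL)
      g_object_set (camerabin, "viewfinder-filter", filter, NULL);
  }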
Add a video-source-filter property that can be used to inject an
application-specific GStreamer element into the camerabin pipeline. The
video-source-filter element will process all frames coming from the video
source.
One could add image analyzers to collect information about the stream,
or add image enhancers to improve capture quality, for example.
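A sketch of how an application might inject such an element (the analyzer
element name below is hypothetical):

  #include <gst/gst.h>

  static void
  add_source_frame_analyzer (GstElement * camerabin)
  {
    /* hypothetical application-specific element that inspects every
     * frame coming from the video source */
    GstElement *analyzer = gst_element_factory_make ("myframeanalyzer", NULL);

    if (analyzer != NULL)
      g_object_set (camerabin, "video-source-filter", analyzer, NULL);
  }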
If the source-resize flag is disabled, then set the resolution on the image
capture caps according to the capture resolution the video source element
produces. Otherwise we write the wrong resolution to the image metadata.
We wait to parse a minimum number of frames (10, arbitrarily) before
emitting bitrate tags so that our early estimates are not wildly
inaccurate for streams that start with silence. If the stream ends
before that, we just emit the tags anyway.
While it _would_ be nicer to specify the threshold for starting to push
the tags in terms of duration, this would introduce more complexity than
it merits.
https://bugzilla.gnome.org/show_bug.cgi?id=614991
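The logic reduces to a frame counter; a simplified sketch of the approach
(names are illustrative, not the element's actual code):

  #include <gst/gst.h>

  #define MIN_FRAMES_FOR_BITRATE 10   /* arbitrary threshold, see above */

  typedef struct {
    guint frame_count;        /* incremented by the caller per parsed frame */
    guint avg_bitrate;        /* running estimate, in bits per second */
    gboolean tags_sent;
  } BitrateState;

  /* call after each parsed frame, and once more with at_eos = TRUE if
   * the stream ends before the threshold is reached */
  static void
  maybe_post_bitrate (GstElement * element, BitrateState * s, gboolean at_eos)
  {
    GstTagList *tags;

    if (s->tags_sent || (!at_eos && s->frame_count < MIN_FRAMES_FOR_BITRATE))
      return;

    tags = gst_tag_list_new_empty ();
    gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE,
        GST_TAG_BITRATE, s->avg_bitrate, NULL);
    gst_element_post_message (element,
        gst_message_new_tag (GST_OBJECT (element), tags));
    s->tags_sent = TRUE;
  }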
The current code just uses the table id, subtable extension and version
number to check if the section has been seen before. However, this comparison
is not sufficient and causes genuinely new tables to be dismissed.
Fixes bug #614479.
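For illustration, the kind of comparison being criticised (struct and field
names are made up):

  #include <glib.h>

  typedef struct {
    guint8 table_id;
    guint16 subtable_extension;
    guint8 version_number;
  } SeenSection;

  /* comparing only these three fields is what the old code did; two
   * genuinely different sections can still match here, which is why new
   * tables ended up being dismissed as already seen */
  static gboolean
  same_section (const SeenSection * a, const SeenSection * b)
  {
    return a->table_id == b->table_id
        && a->subtable_extension == b->subtable_extension
        && a->version_number == b->version_number;
  }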
This is optional because it's quite an expensive operation and it's very
unlikely that a non-frame is detected as a frame after the header CRC check
and after checking all bits for valid values. The overall frame checksums are
mainly useful for detecting inconsistencies in the encoded payload.
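If this lands as a boolean property on the parser, enabling it from an
application would look roughly like this (both the element name flacparse
and the property name below are assumptions based on the description above):

  #include <gst/gst.h>

  static GstElement *
  make_strict_flac_parser (void)
  {
    GstElement *parser = gst_element_factory_make ("flacparse", NULL);

    /* assumed property name; turns on the expensive per-frame checksum
     * verification described above */
    if (parser != NULL)
      g_object_set (parser, "check-frame-checksums", TRUE, NULL);

    return parser;
  }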