camerabin2 connects a viewfinderbin on "vfsrc". viewfinderbin is made of:
vfbin-csp ! vfbin-videoscale ! videosink.
We should either remove csp/videoscale from wrappercamerabinsrc (as
done in this patch) or get rid of viewfinderbin altogether.
The use of this method was removed in:
commit 539f10f4d9
basecamerasrc: More cleanup
The code from wrappercamerabinsrc is from v4l2camerasrc but is unused:
get_allowed_input_caps is not called anywhere.
The audio source inside camerabin2 is put to READY and back to
PLAYING when starting capture, causing the pipeline to lose its
clock. As camerabin2 isn't put to PAUSED->PLAYING again during
this, a new clock isn't selected for elements.
A flags property has been added to encodebin to toggle whether the
conversion elements (ffmpegcolorspace, videoscale, audioconvert,
audioresample, audiorate) are created and linked into the appropriate
branches of encodebin.
Not including these elements avoids some slow caps negotiation and
allows the first buffers to flow through encodebin much more quickly.
However, it requires that the uncompressed input already be appropriate
for the target profile and the elements selected to meet that profile.
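A sketch of how an application might toggle this (the GST_ENC_FLAG_*
names follow encodebin's conventions but are an assumption, not taken
from this commit):

  GstElement *ebin = gst_element_factory_make ("encodebin", NULL);
  guint flags;

  g_object_get (ebin, "flags", &flags, NULL);
  /* Skip the conversion elements; the uncompressed input must then
   * already match what the profile's encoders accept. */
  flags |= GST_ENC_FLAG_NO_AUDIO_CONVERSION | GST_ENC_FLAG_NO_VIDEO_CONVERSION;
  g_object_set (ebin, "flags", flags, NULL);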
If we bring the audio source up to the PAUSED state before emitting the
start-capture signal to the camera source, it will begin capturing more
quickly when subsequently taken to the PLAYING state.
Since camerabin2 has switched to encodebin and encodebin has its own
queues and conversion elements, those preceding encodebin are no longer
necessary and as such can be removed.
Previously hlsdemux wasn't sending out any newsegment event.
Here we push a GST_FORMAT_TIME newsegment, and whenever possible we
try to indicate the proper start time.
This allows downstream elements to relay the start/time values properly
to the sinks, allowing better stream switching.
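A sketch of the kind of event now pushed (0.10 API; srcpad and
start_time are illustrative names):

  GstEvent *event;

  /* A TIME newsegment tells downstream how buffer timestamps map
   * to stream time, which is what allows clean stream switching. */
  event = gst_event_new_new_segment (FALSE, 1.0, GST_FORMAT_TIME,
      start_time, GST_CLOCK_TIME_NONE, start_time);
  gst_pad_push_event (srcpad, event);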
The program_stopped vmethod was called before the stream_removed
vmethod. Since we only did stream-related operations in there, we just
remove the program_stopped vmethod and do everything in the
stream_removed one.
Also, make sure we flush out all pending data before sending EOS.
stream_type is stored as a guint inside the GstStructure but was
retrieved via a valist with a pointer to a guint16. This would cause
stack corruption when the code is compiled without optimisation (e.g.
at -O0 the compiler won't pad the variable out to a full word, so the
extra bytes written clobber adjacent stack memory).
https://bugzilla.gnome.org/show_bug.cgi?id=655540
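To illustrate the mismatch, a minimal sketch (st is an illustrative
structure carrying a "stream-type" field stored as guint):

  guint16 bad_type;   /* only 2 bytes on the stack */
  guint good_type;

  /* WRONG: the valist machinery writes a full guint through the
   * pointer, clobbering the 2 bytes next to bad_type at -O0. */
  gst_structure_get (st, "stream-type", G_TYPE_UINT, &bad_type, NULL);

  /* RIGHT: the destination variable matches the stored type. */
  gst_structure_get (st, "stream-type", G_TYPE_UINT, &good_type, NULL);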
When switching bitrates, we might end up switching to a different
media-type (e.g. between aac and mpeg-ts).
For this switch to behave properly in decodebin2, this patch adds:
* dynamic source pads (which will be added/removed whenever a stream
media type changes)
* re-checking the fragment media type whenever we switch to a different
playlist
gstpcapparse.c: In function 'gst_pcap_parse_chain':
gstpcapparse.c:381:6: error: 'eth_type' may be used uninitialized in this function [-Werror=uninitialized]
gstpcapparse.c:354:11: note: 'eth_type' was declared here
The current code is not checking the ethernet type, as it's supposed
to, but the link-layer device type, and it's hard-coded to only accept
dumps from ethernet (ARPHRD_ETHER; 1). We don't care where the dump was
fetched from (wlan, 3G, etc.)
What we care about is that the ethernet type is IP (ETHERNET_IP;
0x800), which is clearly field 14:
http://www.tcpdump.org/pcap3_man.html
And do a bit of cleanup.
Signed-off-by: Felipe Contreras <felipe.contreras@nokia.com>
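A sketch of the intended check (offsets per the ethernet header layout;
data is an illustrative pointer to the frame, not the exact pcapparse
code):

  guint16 eth_type;

  /* The EtherType lives in bytes 12-13 (0-based) of the ethernet
   * frame, regardless of what device captured the dump. */
  eth_type = GUINT16_FROM_BE (*(guint16 *) (data + 12));
  if (eth_type != 0x800)        /* not IP: skip this packet */
    return;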
We first activate new streams before shutting down old ones.
We emit no-more-pads after we add new streams and emit EOS before
removing old ones.
Also clean up/refactor a bit more of the code accordingly.
Using a NULL string for location means that the application
doesn't want the image to be encoded, but wants to receive
the preview image. (Only works for image captures.)
Useful for applications that want the capture in memory only, e.g. to
display it to the user before choosing to encode it or take another
picture, as in avatar-capturing scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=641918
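A hedged usage sketch (that the preview arrives as an element message
on the bus follows camerabin2's preview API; the exact message name is
an assumption):

  /* NULL location: nothing is encoded or written to disk; the
   * application gets the capture as a preview image instead. */
  g_object_set (camerabin, "location", NULL, NULL);
  g_signal_emit_by_name (camerabin, "start-capture");
  /* ... then watch GST_MESSAGE_ELEMENT messages on the bus for
   * the posted preview image. */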
We in fact get the size of the header (including stuffing bytes), therefore
use that instead of trying to skip 0xff bytes ourselves, since some media
streams do start with 0xff (like mpeg audio's initial 0xfff).
That is, output timestamps can then either be the absolute capture time,
or the relative capture time (w.r.t. the first output buffer), or the relative
capture time incremented by some offset.
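Expressed as a sketch (the mode numbering and names are illustrative,
not the element's actual property values):

  static GstClockTime
  output_ts (GstClockTime t, GstClockTime t0, GstClockTime offset, gint mode)
  {
    switch (mode) {
      case 0:  return t;                /* absolute capture time        */
      case 1:  return t - t0;           /* relative to the first buffer */
      default: return t - t0 + offset;  /* relative plus a fixed offset */
    }
  }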
If the gst_collect_pads_pop () call in mpegtsmux_choose_best_stream ()
returns no buffer (NULL), the plugin segfaults in the gst_buffer_unref ()
call. To fix this, we check that a valid buffer is returned before
calling gst_buffer_unref ().
Fixes bug #654416.
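The guard amounts to (a minimal sketch; the mux/best field names are
illustrative):

  GstBuffer *buf;

  buf = gst_collect_pads_pop (mux->collect, best->collect_data);
  /* gst_collect_pads_pop () can return NULL (e.g. at EOS);
   * only unref a buffer we actually received. */
  if (buf != NULL)
    gst_buffer_unref (buf);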
Appears to be utterly incapable of parsing and decoding TTA streams.
Hasn't been updated to do TTA2. If you want this element to work,
fix the bloody thing. The gst-ffmpeg decoder works fine.
Also fixed an obvious endianness issue along the way.
Fixes: #652924
The default for tagsetters is to use the keep merge mode, so tags
would never be replaced and all captures would have the same tags.
This commit watches all elements added into encodebin and sets
all tagsetters to the replace merge mode.
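A sketch of the per-element part of that fix, hooked up to GstBin's
"element-added" signal (the callback wiring is illustrative):

  static void
  element_added_cb (GstBin * bin, GstElement * element, gpointer udata)
  {
    /* Replace mode lets each capture's tags override the previous
     * ones instead of keeping the old values. */
    if (GST_IS_TAG_SETTER (element))
      gst_tag_setter_set_tag_merge_mode (GST_TAG_SETTER (element),
          GST_TAG_MERGE_REPLACE);
  }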
Using serialized custom events for switching the image capture saving
location makes camerabin2 save each capture to the location that was
set at the moment start-capture was called, and not the one set when
the filesink was writing to disk.
This prevents captures from being overwritten due to races between
start-capture and setting the location for images.
We only need to change the state of the filesink to switch its
saving location. This might still cause some captured buffers to be
dropped, but it is better than changing the state of the whole branch.
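A sketch of the kind of event used (the structure name "new-location"
and the pad are illustrative assumptions):

  GstEvent *ev;

  /* Serialized custom downstream events travel in-band with the
   * buffers, so the location change lands exactly between the
   * captures it separates. */
  ev = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
      gst_structure_new ("new-location",
          "location", G_TYPE_STRING, new_location, NULL));
  gst_pad_push_event (srcpad, ev);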
Buffer timestamps are converted to GstClockTime to cover pcr/pts wraps.
Multiple pcr/pts wraps are handled with an index which ensures at most
a single pcr wraparound between two entries.
The last seen pcr is recorded to have a nearby index point for short seeks.
Resuming playback might be delayed if the position is not a keyframe.
TODO: replace manual packet scanning and parsing in the initial duration estimation
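A sketch of the single-wrap correction such an index makes safe (shown
for the 33-bit pts range; the helper is illustrative, not the
demuxer's actual code):

  #define PTS_WRAP (G_GUINT64_CONSTANT (1) << 33)

  /* With index entries never more than one wraparound apart, a
   * value far "behind" its reference must have wrapped exactly
   * once. */
  static guint64
  unwrap_pts (guint64 pts, guint64 reference)
  {
    if (reference > pts && reference - pts > PTS_WRAP / 2)
      pts += PTS_WRAP;
    return pts;
  }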
The incoming data has already been scanned in mpeg_packetizer_add_buf.
We can therefore stop scanning for picture data as soon as we've parsed
the header. Makes mpegvideoparse 2 times faster.
https://bugzilla.gnome.org/show_bug.cgi?id=648933
Camerabin2 allows setting a filter for image, video or viewfinder, but
not one filter for all three at the same time. I added a filter to
wrappercamerabinsrc to allow setting a global filter when using this
source.
https://bugzilla.gnome.org/show_bug.cgi?id=649822
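A hedged usage sketch (both property names, "video-source-filter" on
the source and "camera-source" on camerabin2, are assumptions; camera
is the camerabin2 instance):

  GstElement *src, *filter;

  src = gst_element_factory_make ("wrappercamerabinsrc", NULL);
  filter = gst_element_factory_make ("videobalance", NULL);

  /* one filter applied to image, video and viewfinder alike */
  g_object_set (src, "video-source-filter", filter, NULL);
  g_object_set (camera, "camera-source", src, NULL);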
Seriously faster. Algorithm is nearly the same as bilinear, which
given the speed of this code, should be considered the baseline of
quality. Speed appears to be limited by memory bandwidth, so I
didn't bother trying to make it any faster.
This element allows you to get information about buffers via bus messages. It
provides the same kind of information as identity does through a notify signal
on a string property, but in a more programmer-friendly way.
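A hedged consumer-side sketch (what the message structure contains is
an assumption; this just dumps whatever the element posts):

  static gboolean
  bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
  {
    if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT) {
      const GstStructure *s = gst_message_get_structure (msg);
      gchar *str = gst_structure_to_string (s);

      g_print ("buffer info: %s\n", str);
      g_free (str);
    }
    return TRUE;
  }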
... by defaulting to allowing packetized input to be split, with
downstream negotiation deciding whether or not this applies.
Also enable pass-through parsing mode if input and output format
(stream-format and alignment) match.
API: GstH264Parse:split-packetized (removed)
Fixes #650228.
Rather than asserting in such a case, emit a warning if the length of a
NAL unit is less than the expected 2 and discard it.
Based on patch by Benjamin M. Schwartz <bens@alum.mit.edu>
Fixes #650416.
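The check amounts to (a minimal sketch; parse, nal_length and the skip
label are illustrative):

  /* A NAL unit must carry at least its header bytes; broken
   * input should produce a warning, not an assertion failure. */
  if (nal_length < 2) {
    GST_WARNING_OBJECT (parse, "badly formed NAL unit, discarding");
    goto skip;
  }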
Some properties (like viewfinder-filter) are only taken into use on
NULL->READY transitions, and the property getter was returning the
value currently in use instead of the last one set.
This is bad, as after setting 'a' to 'x', you expect that getting 'a'
will return 'x'. This patch fixes it.
If needed, later we could add current-* properties that are read-only
and get the current value in use.
Handle the case where encodebin doesn't have the pad camerabin2 is
requesting, either because of its current profile or because of
missing elements, making it fail to provide the pad.
Use the replace merge mode to allow new tags to override old ones,
and fix the use case where the last sent tags should be serialized
to the captured images.
In video mode the tags should be pushed after sending start-capture
to the source; this allows the video recording elements to be reset
and leave the flushing state they were in after a previous capture.
This fixes the problem where tags only worked for the first video capture.
Fix this build error:
CC libgstdccp_la-gstdccpplugin.lo
In file included from ../../../gst/dccp/gstdccpclientsrc.h:29:0,
from ../../../gst/dccp/gstdccpplugin.c:24:
../../../gst/dccp/gstdccp_common.h:32:0: warning: WINVER redefined [enabled by default]
/usr/i686-w64-mingw32/sys-root/mingw/include/_mingw.h:231:0: note: this is the location of the previous definition
In file included from ../../../gst/dccp/gstdccpplugin.c:24:0:
../../../gst/dccp/gstdccpclientsrc.h:58:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:25:0:
../../../gst/dccp/gstdccpserversink.h:74:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:26:0:
../../../gst/dccp/gstdccpclientsink.h:67:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:27:0:
../../../gst/dccp/gstdccpserversrc.h:58:3: error: unknown type name 'uint8_t'
make: *** [libgstdccp_la-gstdccpplugin.lo] Error 1
https://bugzilla.gnome.org/show_bug.cgi?id=650171
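One plausible shape of such a fix (whether the actual commit adds
<stdint.h> or switches the fields to GLib's guint8 is an assumption):

  /* gstdccp_common.h: declare uint8_t before the headers that
   * use it; MinGW does not pull it in transitively. */
  #include <stdint.h>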