Since the seek target byte offset is chosen by linear interpolation
from SCR values, we need to take that first SCR into account
to end up near the correct offset. Otherwise, as the code does
a linear search after that first seek, it will take a very long
time to get there for streams which don't start at SCR zero.
https://bugzilla.gnome.org/show_bug.cgi?id=659485
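A sketch of the interpolation with the first SCR taken into account
(names are illustrative, not the actual demuxer code):

  static guint64
  scr_to_byte_offset (guint64 target_scr,
      guint64 first_scr, guint64 last_scr,
      guint64 first_offset, guint64 last_offset)
  {
    gdouble ratio;

    if (last_scr <= first_scr)
      return first_offset;

    /* without subtracting first_scr, a stream whose SCR starts high
     * would interpolate to an offset far from the target */
    ratio = (gdouble) (target_scr - first_scr) /
        (gdouble) (last_scr - first_scr);
    return first_offset + (guint64) (ratio * (last_offset - first_offset));
  }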
Makes camerabin2 intercept preview-image messages and add the
filename corresponding to the capture to the message structure
in a 'location' field.
This makes it easier for applications to track preview images.
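A rough sketch of the interception (variable names are illustrative):

  /* in camerabin2's message handling: tag the preview message's
   * structure with the filename of the matching capture */
  GstStructure *s =
      gst_structure_copy (gst_message_get_structure (preview_msg));
  gst_structure_set (s, "location", G_TYPE_STRING, capture_filename, NULL);
  /* ... repost a message carrying the updated structure ... */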
Setting the audio source to NULL just after pushing the EOS event
on it could cause that EOS event to be lost. Instead, we can set
the audio source to NULL when ready-for-capture is signalled with a
TRUE value, as this indicates that we are no longer capturing video.
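A sketch of the idea, assuming a 'notify' callback on the camera
source's ready-for-capture property (all names illustrative):

  static void
  ready_for_capture_changed (GObject *src, GParamSpec *pspec,
      gpointer udata)
  {
    GstElement *audio_src = udata;
    gboolean ready;

    g_object_get (src, "ready-for-capture", &ready, NULL);
    if (ready) {
      /* video capture has finished; the EOS pushed earlier has been
       * processed, so it is now safe to release the audio source */
      gst_element_set_state (audio_src, GST_STATE_NULL);
    }
  }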
header_length must contain the length in bytes following the
header_length field, i.e. excluding the 6 bytes taken by the start
code and the header_length field itself.
H.264 streams and some other formats need to be announced in the PSM.
VLC wouldn't play files created with mpegpsmux containing H.264 because
we claimed the system header was larger than it actually is, which made
VLC skip the program stream map that follows the system header, which
in turn made it not recognise our H.264 video stream.
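In code terms, roughly (illustrative, not the exact muxer code):

  /* system header: the 4 bytes of start code (00 00 01 bb) and the
   * 2-byte header_length field itself are not counted */
  guint16 header_length = total_header_size - 6;
  GST_WRITE_UINT16_BE (data + 4, header_length);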
VP8 uses a probabilistic bool coder, not a straight bit coder.
This fixes parsing when error-resilient is set.
This commit includes a copy of libvpx's bool coder, BSD licensed.
https://bugzilla.gnome.org/show_bug.cgi?id=652694
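For reference, the decoding side of such a bool coder looks roughly
like this (a simplified sketch in the style of the RFC 6386 reference
code, not the BSD-licensed copy itself):

  #include <stddef.h>
  #include <stdint.h>

  typedef struct {
    const uint8_t *input;   /* compressed bool-coded data */
    size_t input_len;
    size_t pos;
    uint32_t range;         /* kept within [128, 255] */
    uint32_t value;         /* window into the coded stream */
    int bit_count;          /* bits shifted into the low byte so far */
  } BoolDecoder;

  static void
  bool_decoder_init (BoolDecoder *d, const uint8_t *buf, size_t len)
  {
    d->input = buf;
    d->input_len = len;
    d->pos = 0;
    d->range = 255;
    d->bit_count = 0;
    /* prime the value window with the first two input bytes */
    d->value = (uint32_t) (d->pos < len ? buf[d->pos++] : 0) << 8;
    d->value |= (d->pos < len ? buf[d->pos++] : 0);
  }

  /* Decode one bool that is true with probability prob/256. */
  static int
  bool_decode (BoolDecoder *d, uint8_t prob)
  {
    uint32_t split = 1 + (((d->range - 1) * prob) >> 8);
    uint32_t split_hi = split << 8;
    int bit;

    if (d->value >= split_hi) {
      bit = 1;
      d->range -= split;
      d->value -= split_hi;
    } else {
      bit = 0;
      d->range = split;
    }

    /* renormalize so range stays in [128, 255], refilling value
     * one byte at a time */
    while (d->range < 128) {
      d->value <<= 1;
      d->range <<= 1;
      if (++d->bit_count == 8) {
        d->bit_count = 0;
        if (d->pos < d->input_len)
          d->value |= d->input[d->pos++];
      }
    }
    return bit;
  }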
An element stores the result of the last state change it performed,
and GstBin's state change handler uses this last result for
state-locked elements to decide whether their state change was
successful or not.
In camerabin2, the filesinks have their state locked, and when they
fail to switch states, this last failure is reused if the application
tries to change camerabin2's state, causing any state change to fail.
This patch makes camerabin2 reset this last state change failure,
preventing camerabin2 from failing on its next state changes.
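A sketch of such a reset using GStreamer's GST_STATE_RETURN accessor
(whether the actual patch does exactly this is not implied; the
videosink field is illustrative):

  GST_OBJECT_LOCK (camera->videosink);
  /* forget the stale failure recorded by the last state change */
  GST_STATE_RETURN (camera->videosink) = GST_STATE_CHANGE_SUCCESS;
  GST_OBJECT_UNLOCK (camera->videosink);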
Camerabin2 has a zoom property that is simply proxied to its
internal camera-source element. This patch makes camerabin2 listen
to 'notify' signals from the camera-source so it can update its own
zoom property value whenever the camera-source changes its zoom,
either as a side-effect of another operation or because the user set
the zoom directly on the camera-source instead of going through
camerabin2.
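Roughly (callback and field names are illustrative):

  static void
  src_zoom_changed (GObject *src, GParamSpec *pspec, gpointer udata)
  {
    GstCameraBin2 *camera = udata;
    gfloat zoom;

    g_object_get (src, "zoom", &zoom, NULL);
    if (zoom != camera->zoom) {
      camera->zoom = zoom;
      /* tell applications that the proxied property changed */
      g_object_notify (G_OBJECT (camera), "zoom");
    }
  }

  /* in camerabin2's setup code: */
  g_signal_connect (camera->src, "notify::zoom",
      G_CALLBACK (src_zoom_changed), camera);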
Buffers would always start with timestamp 0 and we'd start streaming
from the first buffer, but live streams always start streaming from
three fragments before the end of the playlist, so the timestamp
returned by get_next_fragment is whatever position that fragment had
in the playlist. This makes sure the position correctly reports the
position of the buffer in the playlist, and adds a shift variable to
allow seeking into the middle of a fragment.
In some networks, especially 3G, a fragment download or playlist
update may fail. We now allow up to 3 consecutive failures, using
the RFC's specified retry delays, before considering that there was
an error on the stream.
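A sketch of the retry logic (field and helper names are illustrative,
not the actual hlsdemux code):

  guint failures = 0;

  while (!download_fragment (demux)) {
    if (++failures >= 3) {
      GST_ELEMENT_ERROR (demux, RESOURCE, NOT_FOUND,
          ("Could not fetch the fragment"), (NULL));
      return FALSE;
    }
    /* wait as the spec suggests before retrying the download */
    g_usleep (demux->target_duration * G_USEC_PER_SEC / 2);
  }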
If we know that our camera source element produces buffers at the same
resolution and appropriate colourspace for the output, we don't need any
of the generic conversion elements in encodebin. This reduces caps
negotiation overheads among other things.
Fixes interesting race conditions that cause crashes in decodebin2
because pads are added to/removed from child elements although these
elements should already be in READY state.
We should stop the update thread in PAUSED state and avoid fetching
new fragments when the queue is not empty. The internal queue should
always be empty anyway, since we push its data into a downstream
queue. Also, in totem, if we seek and pause the stream while it's
buffering, the state will stay PLAYING for some reason, so it's best
not to continue fetching fragments forever.
This is to ensure that we reset the accumulated segment on the
sinks, so that if we start with audio only and then switch to
audio+video, both sinks will have the same segments and will be
synchronized.
If we cancel the fetch and call stop_fetcher, which holds the lock,
then setting the fetcher's state to NULL might post an error message
on the bus. In that case, we must ignore it; otherwise the handler
will try to take the lock and block forever.
When caching fragments, if we set the current playlist to the main
one, it will always be treated as a live stream (no endlist in it),
which forces a redownload of the main playlist after every seek,
unnecessarily. It also causes a race condition where a seek might
happen during that redownload, making us think we're trying to seek
a live pipeline.
Reduce the viewfinder queue limits to only allow it to store
one buffer, preventing the queue from holding old buffers for
too long. This also avoids showing slightly outdated frames on
the viewfinder when the source has already produced new ones
and improves the buffer recycling rate, important for sources
that use bufferpools.
First pause the task, then stop all fetchers, then stop the update
thread, then pause the task again, since it might have been restarted
by another thread in the meantime.
The reason is to let rtpdtmfmux drop buffers during the inter-digit
interval; this way, there will be more silence around the DTMF tones,
so IVRs will have a better chance of recognizing them.
When removing the current program, it will get freed by the
hash table removal callback, so ensure we clear our pointer
to it.
Fixes a crash later on in gst_ts_demux_push trying to access it.
https://bugzilla.gnome.org/show_bug.cgi?id=656927
http://dvd.sourceforge.net/spu_notes does not mention that high bits
are to be masked, and not clearing them makes a sample work, where
clearing them yielded left > right.
History does not shed any light, as tracing this code's origin shows
the same bitmasks being there in 2007 when it was imported.
https://bugzilla.gnome.org/show_bug.cgi?id=620119
The task function uses GST_TASK_WAIT, which does a g_cond_wait on
the GST_OBJECT_GET_LOCK of the task. That mutex gets locked when
g_cond_wait returns, so if we don't lock/unlock it ourselves, it will
stay locked forever, preventing the task from ever finishing.
We shouldn't take the task's object lock ourselves, so let's remove
the GST_TASK_WAIT and make the task pause instead if there are no
buffers in the queue.
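Roughly (names illustrative):

  static void
  stream_task (gpointer data)
  {
    GstHLSDemux *demux = data;

    if (g_queue_is_empty (demux->queue)) {
      /* nothing to push: pause ourselves instead of blocking in
       * GST_TASK_WAIT with the task's object lock held */
      gst_task_pause (demux->stream_task);
      return;
    }
    /* ... pop a buffer from the queue and push it downstream ... */
  }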
When a program is changed, stream_added is called, which sets
need_newsegment to TRUE; then stream_removed is called, which calls
flush_pending_data, which checks need_newsegment and causes a
new-segment to be sent.
We must not send the newsegment when flushing the pending data of the
removed stream. We should only push it when flushing data on the
newly added streams (after they finish parsing their PTS header).
If a program/stream is changed, then a newsegment is sent, which
must not be the same as the base segment since it happens later. We
must shift the start position by the time elapsed between the
newsegment and the current PTS of the stream.
Using a separate variable, first, allows us to sort the lists of
alternates but keep the pointer to the first occurrence in the main
playlist (to respect the spec's rule of starting with the bitrate
listed first in the main playlist). It also avoids playing with the
'lists' variable, which should be used to store the list of playlists
and not as a pointer to the current one.
This also fixes a memleak: the g_list_foreach that frees the lists
could leak entries when the variable wasn't pointing at the first
element of the list.
Basesrc-derived classes send an EOS when they change state from
PAUSED to READY, and that breaks video recordings on camerabin2 as it
sets all of the audio branch's pads to flushing.
Prevent it by using a pad probe that only lets the EOS pass through
when it is caused by a stop-capture action.
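A sketch using the GStreamer 1.0 probe API (the flag tracking
stop-capture and the other names are illustrative):

  static GstPadProbeReturn
  audio_eos_probe (GstPad *pad, GstPadProbeInfo *info, gpointer udata)
  {
    GstCameraBin2 *camera = udata;

    if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
      return GST_PAD_PROBE_OK;

    /* let EOS through only when stop-capture requested it; drop the
     * one basesrc sends on its PAUSED->READY transition */
    return camera->audio_sending_eos ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
  }

  gst_pad_add_probe (audio_srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      audio_eos_probe, camera, NULL);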
Capsfilters are created in the constructor and their properties can
be set and read from camerabin2's set/get_property functions.
Previously, a user with a broken setup would trigger assertions when
trying to set/get the capture caps of camerabin2.
A proper missing-plugin message will be posted when the user tries to
set camerabin2 to the READY state.
GET_BITS is a macro for gst_bit_reader_get_bits_uint32, which cannot
read more than 32 bits and will fail in this case, where it is called
to read 79 bits. Since we only want to skip those bits,
gst_bit_reader_skip is more appropriate here.
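That is, roughly:

  guint32 dummy;

  /* before: fails, the reader cannot return more than 32 bits at once */
  gst_bit_reader_get_bits_uint32 (&reader, &dummy, 79);

  /* after: we only need to move past those bits */
  gst_bit_reader_skip (&reader, 79);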
Adds a property to add a custom GstElement to the audio
branch of the pipeline. This allows the user to do custom audio
processing/analysis when recording videos.
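Hypothetical usage, assuming the property ends up named
"audio-filter" (the 'level' element is just an example):

  GstElement *camera = gst_element_factory_make ("camerabin2", NULL);
  GstElement *filter = gst_element_factory_make ("level", NULL);

  /* run a level analysis on the audio while recording video */
  g_object_set (camera, "audio-filter", filter, NULL);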
Use macros to simplify the shading code. These will make it easier
to add support for other colorspaces in the future. Add more variants
for the shading (left, right, horiz-in, vert-out, vert-in).
camerabin2 connects a viewfinderbin on "vfsrc". viewfinderbin is made
of: vfbin-csp ! vfbin-videoscale ! videosink.
We should either remove csp/videoscale from wrappercamerabinsrc (as
done in this patch) or get rid of viewfinderbin altogether.
The use of this method was removed in:
commit 539f10f4d9
basecamerasrc: More cleanup
The code from wrappercamerabinsrc is from v4l2camerasrc but is unused:
get_allowed_input_caps is not called anywhere.
The audio source inside camerabin2 is put to READY and back to
PLAYING when starting capture, causing the pipeline to lose its
clock. As camerabin2 isn't put to PAUSED->PLAYING again during
this, a new clock isn't selected for elements.
A flags property has been added to encodebin to toggle whether the
conversion elements (ffmpegcolorspace, videoscale, audioconvert,
audioresample, audiorate) are created and linked into the appropriate
branches of encodebin.
Not including these elements avoids some slow caps negotiation and
allows the first buffers to flow through encodebin much more quickly.
However, it requires that the uncompressed input is already
appropriate for the target profile and the elements selected to meet
that profile.
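Hypothetical application usage, assuming flag nicks like
"no-audio-conversion" / "no-video-conversion" (the exact names may
differ):

  /* skip the conversion elements when the input is known to match
   * the profile; flag nicks here are illustrative */
  gst_util_set_object_arg (G_OBJECT (encodebin), "flags",
      "no-audio-conversion+no-video-conversion");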
If we bring the audio source up to the PAUSED state before emitting the
start-capture signal to the camera source, when subsequently taking the
audio source to the PLAYING state, it will begin capture more quickly.
Since camerabin2 has switched to encodebin and encodebin has its own
queues and conversion elements, those preceding encodebin are no longer
necessary and as such can be removed.
Previously hlsdemux wasn't sending out any newsegment.
Here we push a GST_FORMAT_TIME newsegment, and whenever possible we
try to indicate the proper start time.
This allows downstream elements to relay the start/time values properly
to the sinks, allowing better stream switching.
The program_stopped vmethod was called before the stream_removed
vmethod. Since we only did stream-related operations in there, we
just remove the program_stopped vmethod and do everything in the
stream_removed one.
Also, make sure we flush out all pending data before sending EOS.
stream_type is stored as a guint inside the GstStructure but was
retrieved through a valist with a pointer to a guint16. This would
corrupt the stack when the code is compiled without optimisation
(e.g. at -O0 the compiler won't pad the stack in a way that masks
the overwrite).
https://bugzilla.gnome.org/show_bug.cgi?id=655540
When switching bitrates, we might end up switching to a different
media-type (like from aac to/from mpeg-ts).
For this switch to behave properly in decodebin2, this patch adds:
* dynamic source pads (which will be added/removed whenever a
stream's media type changes)
* re-checking the fragment media type whenever we switch to a
different playlist
gstpcapparse.c: In function 'gst_pcap_parse_chain':
gstpcapparse.c:381:6: error: 'eth_type' may be used uninitialized in this function [-Werror=uninitialized]
gstpcapparse.c:354:11: note: 'eth_type' was declared here
The current code is not checking the ethernet type, as it's supposed
to, but the link-layer device type, and it's hard-coded to only
accept dumps from ethernet (ARPHRD_ETHER; 1). We don't care where the
dump was fetched from (wlan, 3G, etc.).
What we care about is that the ethernet type is IP (ETHERNET_IP;
0x800), which is clearly field 14:
http://www.tcpdump.org/pcap3_man.html
And do a bit of cleanup.
Signed-off-by: Felipe Contreras <felipe.contreras@nokia.com>
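A sketch of the intended check (offsets per the Ethernet header;
the constant is spelled out for clarity):

  /* the ethertype lives in bytes 12-13 of the Ethernet header */
  guint16 eth_type = GST_READ_UINT16_BE (data + 12);

  if (eth_type == 0x0800) {
    /* IPv4: parse the IP packet following the 14-byte header */
  }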
We first activate new streams before shutting down old ones.
We emit no-more-pads after we add new streams and emit EOS before
removing old ones.
Also clean up/refactor a bit more of the code accordingly.
Using a NULL string for location means that the application
doesn't want the image to be encoded, but wants to receive
the preview image. (Only works for image captures)
Useful for applications that want the capture in memory only, like
displaying it to the user before choosing to encode it or take
another picture, in avatar-capture scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=641918
We in fact get the size of the header (including stuffing bytes), so
use that instead of trying to skip 0xff bytes ourselves, since some
media streams do start with 0xff (like mpeg audio's initial 0xfff).
That is, output timestamps can then either be the absolute capture
time, the relative capture time (w.r.t. the first output buffer), or
the relative capture time incremented by some offset.
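In other words, something like (names illustrative):

  switch (mode) {
    case TS_ABSOLUTE:              /* absolute capture time */
      out_ts = capture_ts;
      break;
    case TS_RELATIVE:              /* relative to the first buffer */
      out_ts = capture_ts - first_ts;
      break;
    case TS_RELATIVE_OFFSET:       /* relative, plus a fixed offset */
      out_ts = capture_ts - first_ts + offset;
      break;
  }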