Camerabin2 parses warning messages with gst_message_parse_warning(message,
&err, &debug) but doesn't free the returned GError and debug string.
The documentation shows that ownership of those fields is transferred
to the caller (they are marked "[transfer full]" in the API docs).
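A minimal sketch of the fixed handling (the message variable stands for
whatever the bus handler receives):

    GError *err = NULL;
    gchar *debug = NULL;

    gst_message_parse_warning (message, &err, &debug);
    GST_WARNING ("%s (%s)", err->message, debug);
    /* both out parameters are [transfer full], so the caller frees them */
    g_error_free (err);
    g_free (debug);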
Basic version with only the system header and the program
stream map. An advanced version could include codec-specific
bits like SPS/PPS too. This is useful in connection with
e.g. multifilesink to make sure new files always start with
the stream headers.
This code is to sync to a live source when there is a delay
between start and when we receive the first buffer, so it does
not make sense in a non-live case.
This fixes playback of streams where the input timestamps are
based on some arbitrary offset.
https://bugzilla.gnome.org/show_bug.cgi?id=663756
Initially creating an identity element to forward serialized
events downstream before any caps are known is broken behaviour.
Serialized events should only be forwarded downstream if the
caps are already known, otherwise autopluggers and other elements
using pad-blocks will fail.
This behaviour also doesn't work anymore after basetransform
was fixed to queue serialized events until the caps are known
as a result of fixing bug #659571.
See bug #599469, #665205.
One of my DVDs jumps at some position and misses about 1 minute of the stream.
The reason was the MPEG timestamps: at some positions the SCR difference is
negative, which produced negative timestamps. Since these were converted to an
unsigned value, the GStreamer timestamps were invalid: instead of increasing,
the MPEG timestamps were decreasing until they became positive again.
The jump in timestamps caused mpeg2dec to skip frames to keep QoS happy.
This patch just makes the diff unsigned to avoid negative values.
Signed-off-by: Alexey Fisher <bug-track@fisher-privat.net>
https://bugzilla.gnome.org/show_bug.cgi?id=656115
The spec I found says "16 bits".
The existing code used log2(somevalue)+1.
ffmpeg uses log2(somevalue-1)+1.
The code now uses log2(somevalue-1)+1, and this makes it work with
some sample video without breaking another sample.
Now, I'm far from certain I've got the right spec; I found it by
searching the internet, so...
https://bugzilla.gnome.org/show_bug.cgi?id=654666
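For illustration, the difference between the two formulas, assuming an
integer floor log2 (this helper is a sketch, not the element's actual code):

    /* floor(log2(x)) for x > 0 */
    static guint
    floor_log2 (guint x)
    {
      guint n = 0;
      while (x >>= 1)
        n++;
      return n;
    }

    /* old: floor_log2 (somevalue) + 1      -> for 16 yields 5 bits
     * new: floor_log2 (somevalue - 1) + 1  -> for 16 yields 4 bits */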
Some streams declare PIDs but will not send data for them.
Ensure we time out on those, both sending newsegments to
keep their time synchronized with the rest and not waiting
forever before deciding to signal no-more-pads.
https://bugzilla.gnome.org/show_bug.cgi?id=659924
We track streams for which a data callback is set (and for which
pads will be added only when data is received), and signal
no-more-pads when the last pad is added.
https://bugzilla.gnome.org/show_bug.cgi?id=659924
There was a second threshold, which apparently needs to be smaller
than the first, though I'm not certain of it as I don't yet understand
this nest of wtf that is the mpeg demuxer timing logic.
Fixes video freezing on one (corrupted) MPEG sample. It would
previously never think it was out of the discontinuity, and would
push buffers with no timestamp.
Now this took me more than a day's poking at the thing, for just
one constant change, and I'm scared to have to touch this again :S
https://bugzilla.gnome.org/show_bug.cgi?id=655804
In a test stream, I get one buffer with a PTS of about 15 seconds
in the future compared to the previous one, and next buffers with
timestamps continuing where the original ones left off.
This caused the sink to wait 15 seconds to display the frame while
more frames queued up, and then dump all the subsequent frames as
they "arrived too late".
Maybe that threshold should be made configurable, but for now,
make it smaller to catch more of these.
https://bugzilla.gnome.org/show_bug.cgi?id=655804
Non AV streams keep using the larger threshold (10 minutes), as
subtitles may arrive only every so often.
Freeverb is a public domain reverb implementation. Port it as a GStreamer
element and make use of GStreamer-specific features (gap aware, disconts,
controller, ...).
On the one hand, there is no need to collect a NAL if we are processing
the last one. On the other hand, ensure AU collection processing has
sufficient next-NAL data in normal cases.
Fixes #663180.
Just proxying the downstream caps will prevent h264parse from
accepting a different stream-format than what is supported
downstream, although it could convert to a different stream-format.
Reduce start-capture workload by moving the elements' state resetting to the
finishing steps of the capture. This reduces the time start-capture takes to
actually start a capture and return to its caller, improving user experience.
As the elements' state reset is now triggered from the message handling
function, it needs to spawn a new thread; changing state from the pad's
task would cause a deadlock.
Adds a new variable to keep track of the state of the video
recording in camerabin2. This allows start-capture to reject
new video recording requests when one is already ongoing. This
fixes one of the check tests.
Rename the image taglists' mutex into image capture mutex and
use it also for the image capture list to prevent concurrent
access from different threads (application and capture threads).
Do not store the preview location if post-previews is false; this would
mess up preview naming in case the application switches between enabling
and disabling previews.
Tags are currently sent from start-capture, which is run in the
application thread. For images we can delay the tags pushing to the
buffer probe and push the tags with the location event and reduce
start-capture time.
Some messages might be interesting to applications, so we should only
decrement the processing counter and send the idle notification
once those messages have been posted on the pipeline's bus.
Generating and posting a preview image always comes with a performance
penalty, so set the default value to false. The preview-caps property that
defines the preview image format is also NULL by default, so instead
of generating a preview image of unspecified format by default, explicit
action from the application should be required to enable the preview image
posting feature.
The application also has to add custom code to be able
to handle preview messages in its message handling function anyway.
This is probably the cause for an occasional crash while streaming
MPEG. Blind fix after staring at the code and following logic, so
may or may not fix the issue, I cannot test.
Makes camerabin2 only signal that it is idle after all previews have
been generated, images are captured and saved, and videos have
been finished properly.
Only access the preview location if it exists, to avoid accessing
a NULL variable. If the preview location list doesn't exist, it is
likely because the source has posted a preview message after camerabin2
has been put to READY.
The preview filename list is accessed whenever a new capture is started, when
camera-source posts a new preview message or on state changes. All of those can
occur simultaneously, so add a mutex to prevent concurrent access.
Since the seeking byte offset is chosen by linear interpolation
from SCR values, we need to take that first SCR into account
to end up near the correct offset. Otherwise, as the code does
a linear search after that first seek, it will take a LOOOOOONG
time to get there for streams which don't start at zero.
https://bugzilla.gnome.org/show_bug.cgi?id=659485
Makes camerabin2 intercept preview-image messages and add
the filename corresponding to the message structure in the
'location' field.
Makes it easier for applications to track preview images.
Setting the audio source to NULL just after pushing the EOS event
on it could potentially cause loss of said EOS event. Instead, we
can set the audio source to NULL when ready-for-capture is
signalled and the boolean value is true as this indicates we are
not currently capturing video.
header_length contains the length in bytes following the header_length
field, i.e. excluding the 6 bytes taken by the start code and the
header_length field itself.
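In other words (total_size here is a made-up name for the full system
header size in bytes):

    /* the 32-bit start code and the 16-bit header_length field take
     * 6 bytes, which header_length itself must not count */
    guint16 header_length = total_size - 6;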
H.264 streams and some other formats need to be announced in the PSM.
VLC wouldn't play files created with mpegpsmux containing H.264 because
we claim the system header is larger than it actually is, which makes
VLC skip the program stream map which follows the system header, which
in turn makes it not recognise our H.264 video stream.
VP8 uses a probabilistic bool coder, not a straight bit coder.
This fixes parsing when error-resilient is set.
This commit includes a copy of libvpx's bool coder, BSD licensed.
https://bugzilla.gnome.org/show_bug.cgi?id=652694
An element stores the result of the last state change it did and
GstBin's state change handler will use this last result for state-locked
elements to decide if its state change was successful or not.
In camerabin2, the filesinks have their state locked and when they
fail switching states, this last failure will be used if the application
tries to change camerabin2's state, causing any state change to fail.
This patch makes camerabin2 reset this last-change failure, avoiding
failures on camerabin2's next state changes.
Camerabin2 has a zoom property that is simply proxied to its
internal camera-source element. This patch makes camerabin2 listen
to 'notify' signals from it, so it can update its own zoom property value
when camera-source changes its zoom as a side-effect of another operation,
or because the user set the zoom directly on it instead of through
camerabin2.
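A rough sketch of the proxying (the callback name, the GstCameraBin2 struct
layout and the gfloat type are assumptions here):

    static void
    camera_src_zoom_notify_cb (GObject * src, GParamSpec * pspec,
        gpointer user_data)
    {
      GstCameraBin2 *camera = user_data;

      /* store the source's new zoom and notify watchers, taking care
       * not to proxy the value straight back to the source */
      g_object_get (src, "zoom", &camera->zoom, NULL);
      g_object_notify (G_OBJECT (camera), "zoom");
    }

    g_signal_connect (camera_src, "notify::zoom",
        G_CALLBACK (camera_src_zoom_notify_cb), camera);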
Buffers would always start with timestamp 0 and we'd start streaming
from the first buffer, but live streams always start streaming from
the fragment three from the end of the playlist, which makes its
timestamp, as returned by get_next_fragment, be whatever position
it had in the playlist. This makes sure the position correctly
reports the position of the buffer in the playlist, and adds a shifting
variable to allow seeking in the middle of fragments.
In some networks, especially in 3G, a fragment download or playlist
update may fail. We allow for up to 3 consecutive failures, while using
the RFC's specs for retry delays, before considering that there was an
error on the stream.
If we know that our camera source element produces buffers at the same
resolution and appropriate colourspace for the output, we don't need any
of the generic conversion elements in encodebin. This reduces caps
negotiation overheads among other things.
Fixes interesting race conditions that cause crashes in decodebin2
because pads are added/removed from child elements although they
should be in READY state already.
We should stop the update thread in PAUSED state and avoid fetching
new fragments when the queue is not empty. The queue should always be
empty since we push data into a queue. Also, in totem, if we seek and
pause the stream while it's buffering, then the state will stay playing
for some reason, so it's best not to continue fetching fragments forever.
This is to ensure that we reset the accumulated segment on the sinks,
so if we start with audio only and then switch to audio+video, both
sinks will have the same segments and will be synchronized.
If we cancel the fetch and call the stop_fetcher, which holds the lock,
when it sets the fetcher's state to NULL, it might send an error
on the bus. In that case, we must ignore it, otherwise it will try
to take the lock and will block forever.
When caching fragments, if we set the current playlist to main, then
it will always think it's a live stream (no endlist in it) so it will
force the redownload of the main playlist after every seek, which is
unnecessary. Also, it causes a race condition where a seek might happen
during that redownload, and we'll think we're trying to seek a live pipeline.
Reduce the viewfinder queue limits to only allow it to store
one buffer, preventing the queue from holding old buffers for
too long. This also avoids showing slightly outdated frames on
the viewfinder when the source has already produced new ones
and improves the buffer recycling rate, important for sources
that use bufferpools.
First pause the task, then stop all fetchers, then stop the update thread,
then pause the task again, since it might have been restarted by
another thread in the meantime.
The reason is to let rtpdtmfmux drop buffers during the inter-digit interval;
this way, there will be more silence around the DTMF tones, so IVFs will have
a better chance of recognizing them.
When removing the current program, it will get freed by the
hash table removal callback, so ensure we clear our pointer
to it.
Fixes a crash later on in gst_ts_demux_push trying to access it.
https://bugzilla.gnome.org/show_bug.cgi?id=656927
http://dvd.sourceforge.net/spu_notes does not mention that high bits
are to be masked, and not clearing them makes a sample work, where
clearing them yielded left > right.
History does not shed any light, as tracing this code's origin shows
the same bitmasks being there in 2007 when it was imported.
https://bugzilla.gnome.org/show_bug.cgi?id=620119
The task function uses GST_TASK_WAIT, which does a g_cond_wait giving it
the GST_OBJECT_GET_LOCK of the task. The mutex gets locked when
g_cond_wait returns, so if we don't lock/unlock it ourselves, it will
stay locked forever, preventing the task from ever finishing.
We shouldn't be locking the task's object lock here, so let's remove the
GST_TASK_WAIT and make the task pause instead when there are no buffers
in the queue.
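A minimal sketch of the replacement logic (the queue and task field names
are assumptions):

    /* instead of blocking in GST_TASK_WAIT, which would leave the
     * task's object lock held, pause the task when nothing is queued;
     * it gets restarted when new buffers arrive */
    if (g_queue_is_empty (demux->queue)) {
      gst_task_pause (demux->task);
      return;
    }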
When a program is changed, stream_added is called, which sets
need_newsegment to TRUE; then stream_removed is called, which calls
flush_pending_data, which checks for the newsegment and causes
it to send a new-segment.
We must not send the newsegment when flushing the pending data of the
removed stream. We should only push it when flushing data on the newly
added streams (after they finish parsing their PTS header).
If a program/stream is changed, then a newsegment is sent which must
not be the same as the base segment, since it happens later. We must
shift the start position by the time elapsed between the newsegment
and the current PTS of the stream.
Using a separate variable first allows us to sort the lists
of alternates but keep the pointer to the first occurrence in the main
playlist (to respect the spec's requirement of starting with the bitrate
specified first in the main playlist). It also avoids playing with the
lists variable, which should be used to store the list of playlists and
not as a pointer to the current one.
Also fixes a memleak: the g_list_foreach freeing the lists would leak
if the pointer wasn't pointing to the first element of the list.
Basesrc-derived classes send an EOS when they change state
from PAUSED to READY, and that breaks video recordings on camerabin2
as it makes all the audio branch pads flush.
Prevent it by using a pad probe that only lets the EOS pass
when it is caused by a stop-capture action.
Capsfilters are created in the constructor and their properties can
be set/got from camerabin2's set/get_property functions. A user with
a broken setup would cause assertions when trying to set/get the
capture caps of camerabin2.
A proper missing-plugin message will be posted when the user tries to
set camerabin2 to the READY state.
GET_BITS is a macro for gst_bit_reader_get_bits_uint32, which cannot
read more than 32 bits and will fail in this case where it is called
to read 79 bits. Since we want to skip those bits, gst_bit_reader_skip
is more appropriate in this case.
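The change boils down to something like this (the reader variable name is
assumed):

    /* GET_BITS, i.e. gst_bit_reader_get_bits_uint32, cannot read 79
     * bits at once; since the value is discarded, just skip them */
    if (!gst_bit_reader_skip (&reader, 79))
      return FALSE;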
Adds a property to add a custom GstElement to the audio
branch of the pipeline. This allows the user to do custom audio
processing/analysis when recording videos.
Use macros to simplify the shading code. These will make it easier to add
support for other colorspaces in the future. Add more variants for the
shading (left, right, horiz-in, vert-out, vert-in).
camerabin2 connects a viewfinderbin on "vfsrc". viewfinderbin is made of:
vfbin-csp ! vfbin-videoscale ! videosink.
We should either remove csp/videoscale from wrappercamerabinsrc (as
done in this patch) or get rid of viewfinderbin altogether.
The use of this method was removed in:
commit 539f10f4d9
basecamerasrc: More cleanup
The code from wrappercamerabinsrc is from v4l2camerasrc but is unused:
get_allowed_input_caps is not called anywhere.
The audio source inside camerabin2 is put to READY and back to
PLAYING when starting capture, causing the pipeline to lose its
clock. As camerabin2 isn't put to PAUSED->PLAYING again during
this, a new clock isn't selected for elements.
A flags property has been added to encodebin to toggle whether the
conversion elements (ffmpegcolorspace, videoscale, audioconvert,
audioresample, audiorate) are created and linked into the appropriate
branches of encodebin.
Not including these elements avoids some slow caps negotiation and
allows the first buffers to flow through encodebin much more quickly.
However, it imposes that the uncompressed input is appropriate for the
target profile and elements selected to meet that profile.
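Usage might look roughly like this (the flag names are assumptions based on
the description; check gst-inspect on encodebin for the actual values):

    /* skip encodebin's internal conversion elements; the uncompressed
     * input must then already match the target profile */
    g_object_set (encodebin, "flags",
        GST_ENCODEBIN_FLAG_NO_AUDIO_CONVERSION |
        GST_ENCODEBIN_FLAG_NO_VIDEO_CONVERSION, NULL);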
If we bring the audio source up to the PAUSED state before emitting the
start-capture signal to the camera source, when subsequently taking the
audio source to the PLAYING state, it will begin capture more quickly.
Since camerabin2 has switched to encodebin and encodebin has its own
queues and conversion elements, those preceding encodebin are no longer
necessary and as such can be removed.
Previously hlsdemux wasn't sending out any newsegment.
Here we push a GST_FORMAT_TIME newsegment, and whenever possible we
try to indicate the proper start time.
This allows downstream elements to relay the start/time values properly
to the sinks, allowing better stream switching.
The program_stopped vmethod was called before the stream_removed vmethod.
Since we only did stream-related operations in there, we just remove the
program_stopped vmethod and do everything in the stream_removed one.
Also, make sure we flush out all pending data before sending EOS.
stream_type is stored as a guint inside the GstStructure but was retrieved
using a valist with a pointer to a guint16. This would cause stack corruption
when the code is compiled without optimisation (e.g. at -O0 the compiler
won't pad the stack in a way that would mask the overflow).
https://bugzilla.gnome.org/show_bug.cgi?id=655540
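The fix amounts to reading the field with the matching type (the field name
"stream-type" is an assumption here):

    guint stream_type;

    /* a valist read into a guint16 writes 32 bits into a 16-bit slot,
     * trampling adjacent stack memory; read into a real guint instead */
    gst_structure_get_uint (structure, "stream-type", &stream_type);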
When switching bitrates, we might end up switching to a different
media-type (like from aac to/from mpeg-ts).
For this switch to behave properly in decodebin2, this patch adds:
* dynamic source pads (which will be added/removed whenever a stream's
media type changes)
* re-checking the fragment media type whenever we switch to a different
playlist
gstpcapparse.c: In function 'gst_pcap_parse_chain':
gstpcapparse.c:381:6: error: 'eth_type' may be used uninitialized in this function [-Werror=uninitialized]
gstpcapparse.c:354:11: note: 'eth_type' was declared here
The current code is not checking the ethernet type, as it's supposed to,
but the link-layer device type, and it's hard-coded to only accept dumps
from ethernet (ARPHRD_ETHER; 1). We don't care where the dump was fetched
from (wlan, 3G, etc.)
What we care about is that the ethernet type is IP (ETHERNET_IP;
0x800), which is clearly field 14:
http://www.tcpdump.org/pcap3_man.html
And do a bit of cleanup.
Signed-off-by: Felipe Contreras <felipe.contreras@nokia.com>
We first activate new streams before shutting down old ones.
We emit no-more-pads after we add new streams and emit EOS before
removing old ones.
Also clean up/refactor a bit more of the code accordingly.
Using a NULL string for location means that the application
doesn't want the image to be encoded, but wants to receive
the preview image. (Only works for image captures.)
Useful for applications that want the capture in memory only, like
displaying it to the user before choosing to encode it or take another
picture, in avatar-capturing scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=641918
We in fact get the size of the header (including stuffing bytes), therefore
use that instead of trying to skip 0xff bytes ourselves since some media
streams do start with 0xff (like mpeg audio's initial 0xfff).
That is, output timestamps can then either be the absolute capture time,
or the relative capture time (w.r.t. the first output buffer), or the
relative capture time incremented by some offset.
If the gst_collect_pads_pop () call in mpegtsmux_choose_best_stream ()
returns no buffer (NULL), the plugin segfaults in the gst_buffer_unref ()
call. To fix this, we check that a valid buffer was returned before
calling gst_buffer_unref ().
Fixes bug #654416.
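The fix is essentially a NULL check (the collect data field names are
assumed):

    GstBuffer *buf;

    buf = gst_collect_pads_pop (mux->collect, best->collect);
    if (buf != NULL)
      gst_buffer_unref (buf);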
Appears to be utterly incapable of parsing and decoding TTA streams.
Hasn't been updated to do TTA2. If you want this element to work,
fix the bloody thing. The gst-ffmpeg decoder works fine.
Also fixed an obvious endianness issue along the way.
Fixes: #652924
The default for tagsetters is to use merge keep mode, so tags
would never be replaced and all captures would have the same tags.
This commit watches all elements added into encodebin and sets
all tagsetters to merge-replace mode.
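Per added element, this is roughly (a sketch using the GstTagSetter
interface):

    /* force replace mode so the tags of each new capture override the
     * previous ones instead of being kept */
    if (GST_IS_TAG_SETTER (element))
      gst_tag_setter_set_tag_merge_mode (GST_TAG_SETTER (element),
          GST_TAG_MERGE_REPLACE);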
Using serialized custom events for switching the image capture saving
location makes camerabin2 save each capture correctly to the location
that was set at the moment start-capture was called, and not at the
moment the filesink was writing to disk.
This prevents captures from being overwritten due to raciness between
start-capture and setting the location for images.
We only need to change the state of the filesink to switch its
saving location. This might still cause some problems of dropping
captured buffers, but it is better than changing the state of
the whole branch.
Buffer timestamps are converted to GstClockTime to cover PCR/PTS wraps.
Multiple PCR/PTS wraps are handled with an index which ensures at most
a single PCR wraparound between two entries.
The last seen PCR is recorded to have a nearby index point for short seeks.
Resuming playback might be delayed if the position is not a keyframe.
TODO: replace manual packet scanning and parsing in the initial duration estimation
The incoming data has already been scanned in mpeg_packetizer_add_buf.
We can therefore stop scanning for picture data as soon as we've parsed
the header. Makes mpegvideoparse 2 times faster.
https://bugzilla.gnome.org/show_bug.cgi?id=648933
Camerabin2 allows setting a filter for image, video or viewfinder, but
not one filter for all three at the same time. I added a filter to
wrappercamerabinsrc to allow setting a global filter when using this
source.
https://bugzilla.gnome.org/show_bug.cgi?id=649822
Seriously faster. Algorithm is nearly the same as bilinear, which
given the speed of this code, should be considered the baseline of
quality. Speed appears to be limited by memory bandwidth, so I
didn't bother trying to make it any faster.
This element allows you to get information about buffers with bus messages. It
provides the same kind of information as identity does through a notify signal
on a string property, but in a more programmer-friendly way.
... by defaulting to allow splitting packetized input and having
negotiation with downstream deciding whether or not this applies.
Also enable pass-through parsing mode if input and output format
(stream-format and alignment) match.
API: GstH264Parse:split-packetized (removed)
Fixes #650228.
Rather than assert'ing in such a case, emit a warning if the length of
a NAL unit is less than the expected 2 and discard it.
Based on patch by Benjamin M. Schwartz <bens@alum.mit.edu>
Fixes #650416.
Some properties (like viewfinder-filter) are only taken into use
on NULL->READY transitions, and the get/set property was returning
the value currently in use, instead of the last one set.
This is bad, as after setting 'a' to 'x', you expect that getting 'a'
will return 'x'. This patch fixes it.
If needed, later we could add current-* properties that are readonly
and get the current value in use.
Handle the case where encodebin doesn't have the pad
camerabin2 is requesting, either because of its current profile
or because of missing elements, making it fail to provide
the pad.
Use merge replace mode to allow new tags to override old ones
and fix the use case where the last sent tags should be serialized
to the captured images.
In video mode the tags should be pushed after sending the start capture
to the source, this allows the video recording elements to be reset
and leave the flushing state they were at after a previous capture.
This fixes the problem where tags only worked for the first video capture.
Fix this build error:
CC libgstdccp_la-gstdccpplugin.lo
In file included from ../../../gst/dccp/gstdccpclientsrc.h:29:0,
from ../../../gst/dccp/gstdccpplugin.c:24:
../../../gst/dccp/gstdccp_common.h:32:0: warning: WINVER redefined [enabled by default]
/usr/i686-w64-mingw32/sys-root/mingw/include/_mingw.h:231:0: note: this is the location of the previous definition
In file included from ../../../gst/dccp/gstdccpplugin.c:24:0:
../../../gst/dccp/gstdccpclientsrc.h:58:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:25:0:
../../../gst/dccp/gstdccpserversink.h:74:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:26:0:
../../../gst/dccp/gstdccpclientsink.h:67:3: error: unknown type name 'uint8_t'
In file included from ../../../gst/dccp/gstdccpplugin.c:27:0:
../../../gst/dccp/gstdccpserversrc.h:58:3: error: unknown type name 'uint8_t'
make: *** [libgstdccp_la-gstdccpplugin.lo] Error 1
https://bugzilla.gnome.org/show_bug.cgi?id=650171
This patch removes the audio source buffer probe that was used
to re-timestamp buffers to make them start from 0. As muxers
have been fixed to use running time instead of timestamps, this
is not needed anymore.
Fixes bug #646211
Only answer duration queries in TIME format with a duration
in seconds. Make sure we don't return GST_CLOCK_TIME_NONE as
duration (which is non-0, but still invalid/useless).
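A sketch of the duration query handling (the duration variable stands for
whatever the element has computed):

    GstFormat format;

    gst_query_parse_duration (query, &format, NULL);
    if (format == GST_FORMAT_TIME && GST_CLOCK_TIME_IS_VALID (duration)) {
      gst_query_set_duration (query, GST_FORMAT_TIME, duration);
      res = TRUE;
    }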
Don't try to push remaining data in the adapter on receiving a FLUSH event,
just flush the adapter. Do this on FLUSH_STOP, however, which is serialized,
unlike FLUSH_START, so we don't mess with the adapter at the same time as
the streaming thread.
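In event-handling terms this means roughly (the adapter field name is
assumed):

    case GST_EVENT_FLUSH_STOP:
      /* serialized with the streaming thread, so it is safe to touch
       * the adapter here; drop pending data instead of pushing it */
      gst_adapter_clear (parse->adapter);
      break;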
This function will remove the whole marker from the buffer.
We also make this the default behaviour for the JPG0-JPG13 markers in order
to avoid a useless #if.
https://bugzilla.gnome.org/show_bug.cgi?id=626618
Remove the android/ top dir.
Fix the Makefile.am files to be androgenized.
To build gstreamer for android we are now using androgenizer, which
generates the needed Android.mk files.
Androgenizer can be found here: http://git.collabora.co.uk/?p=user/derek/androgenizer.git
Instead, everything will be put into the last-message property and
gst-launch -v will print all changes to the property. This makes
the behaviour of fpsdisplaysink consistent with the fakesink/identity/etc.
behaviour.
Use of the GAP flag is not really correct here and makes it difficult to
handle real GAP buffers in deinterlace. The RFF flag is unused and can
be reused with similar semantics - the buffers marked with RFF that are
in a telecine state contain only unneeded repeated fields and so can be
dropped.
Don't use g_idle_add() and friends to schedule things we can't do from the
streaming thread in another thread. The app may not be running the default
GLib main loop. Instead, just spawn a thread.
Also, we need to take care when accessing a pad variable, as another thread
might have taken camerabin to NULL before this gst_camerabin_imgbin_finished
ran.
https://bugzilla.gnome.org/show_bug.cgi?id=615655
The videobin and imagebin from camerabin have their states
locked and aren't put to READY when all the rest of camerabin
is set to it.
This might cause one of them to be still processing and post
an EOS after camerabin isn't expecting it anymore, this causes
an assertion as the processing counter would already be 0 and
would be decremented.
Make sure we only write the bottom 30 bits of the PCR to the m2ts header.
Don't use floating point computation for it, and remove weird bit fiddling
that messes up the PCR in a way I can't find any
justification/documentation for.
Don't accidentally lose PCR packets from the output.
Fix the description for the m2ts-mode property so it's clear it's a flag,
and which setting does what.
Fixes: #611061, #644429
Partially fixes: #645006
Detects Munsell ColorChecker in a video image and automatically
white balances and color corrects based on the detected values.
This element is only a demonstration at this stage, it needs to
be separated into two elements.
Instead of probing the videosink sinkpad for passing EOS, better
to wait for EOS from the bus.
This makes sure the filesink has already processed it and is
ready to close the file. This is used to notify applications
that camerabin2 is idle and can be shut down.
This is not implemented in any of our real sources to which wrappercamerabinsrc
might connect but this is optional and can be implemented at any time. A
limit on the software zoom level using video{crop,scale} would be arbitrary.
Use resource warning messages to notify camerabin2 that a capture
was aborted or couldn't be started, making it decrement the
processing counter and making the idle property more reliable.
Setting the audio source to null isn't needed and it could
make the EOS that is still flowing be dropped if autoaudiosrc
is used because its pads go flushing before the EOS gets pushed
from the real source.
gst_caps_make_writable() takes ownership of the caps passed in, but
the caller doesn't own a ref to the caps here, because GST_PAD_CAPS
doesn't return a ref. Looks like the code relied on a caps leak
elsewhere for this to work properly.
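The fix is along these lines (sketch):

    /* GST_PAD_CAPS() does not hand us a reference, and
     * gst_caps_make_writable() takes ownership of what it is given,
     * so take a ref of our own first */
    caps = gst_caps_make_writable (gst_caps_ref (GST_PAD_CAPS (pad)));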
If downstream doesn't handle the newsegment event, don't error out (esp.
not without posting a proper error message on the bus), but just continue.
If there's a problem, we'll find out when we start pushing buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=644395
When recording 2 videos in sequence with the same video-capture-caps,
the second video would get a not-negotiated error because the
src caps were being cleared without any intention of
renegotiating them back to the requested capture caps.
This patch avoids this caps reset procedure unless new
caps were set.
Don't leak string copy returned by gst_element_get_name(). Also, check
for certain elements by checking the plugin feature / factory name, not
the assigned object name.
... and not only when sort-of feeling like it.
In any case, if it turns out all really is in order,
and presumably DTS == PTS, then no ctts will be produced anyway.
We can't rely on audio sources pushing EOS when going PAUSED->READY,
because this is basesrc behaviour, and when used inside autoaudiosrc
the ghostpad goes flushing before the real source pushes the EOS,
so it is dropped.
Audio elements are put into bin only when needed, so we need
to be careful with their states as camerabin2 won't manage
them if they are outside the bin.
Also we should reset their pads' flushing status before
starting a new capture.
Adds an audio source and audio capsfilter/queue/convert, creating
a new branch on camerabin2 that is used to feed encodebin with
audio buffers for video recording.
Adds properties to check what caps are supported on the
viewfinder (from the camerasrc viewfinder pad) and another
one to set the caps for the viewfinder.
Use video_renegotiate and image_renegotiate booleans to make
the videosrc negotiate the capture caps on the first capture because
the caps might be set before wrappercamerabinsrc goes into PLAYING
and pads drop the internal renegotiate event.
This is required as the output-selector is using the 'none' negotiation
mode.
When setting the internal capsfilter caps for capture we should put
the full caps instead of trying to fixate it ourselves. This way we let
the elements (and mostly the source) select the best format instead
of defaulting to what the pad fixation function picks.
This element analyses video buffers to identify if they are progressive,
interlaced or telecined and outputs buffers with appropriate flags for a
downstream element (which will be the deinterlace element, after some
forthcoming modifications) to be able to output progressive frames and
adjust timestamps resulting in a progressive stream.
Remove bogus freeing of pad element_private data that we
never set (collectpads uses it, which causes confusion here).
Also, check that our collectpads instance exists before using
it. Partial fix for #636011.
with permission from the license header:
"""
This library is licensed under 2 different licenses and you
can choose to use it under the terms of either one of them. The
two licenses are the MPL 1.1 and the LGPL.
"""
Even if we currently do not have a duration yet, assume seekable if
it looks like we'll likely be able to determine it later on
(which coincides with needed information to perform seeking).
Fixes #641047.
If videobin/imagebin was never set to the READY state, the ownership
of elements created and set by the application was never taken by the
bin, therefore gst_object_sink is called for these elements
before unreffing (they may still be in the floating state and would not
be unreffed properly without sinking first).
Even if VBR headers are missing, we can't guarantee that a stream is in
fact a CBR stream, so it's safer to let baseparse calculate the average
bitrate rather than assume a CBR stream. However, in order to make
/some/ metadata available before the requisite number of frames have
been parsed, this posts the bitrate from the non-VBR headers as the
nominal bitrate.
https://bugzilla.gnome.org/show_bug.cgi?id=641858
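The posting might look like this (sketch; header_bitrate stands for the
value parsed from the headers):

    /* post the header bitrate as nominal and let baseparse compute
     * the average bitrate over the parsed frames */
    gst_tag_list_add (taglist, GST_TAG_MERGE_REPLACE,
        GST_TAG_NOMINAL_BITRATE, header_bitrate, NULL);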
When select-all was set, input-selector wasn't handling upstream events.
Now input-selector forwards the event to all of its sink pads. This
changes the input-selector internal to camerabin until it is replaced
with a better solution.
Image previews were being posted in sync with the buffer timestamps;
this makes no sense, as previews should be posted ASAP.
Also adds some debugging messages.
Camerabin2 uses state changes to force the source to renegotiate its
caps to the capture formats. The state changes make the source lose
its clock and base_time, causing it to stop timestamping buffers.
We still need a proper way to make sources renegotiate their caps, so this
patch is a hack to make the source continue timestamping buffers even
after changing state. The patch works by getting the clock and base
time before doing the state change to NULL and setting them back
after putting it to PLAYING again. It also takes care to drop the first
new-segment after this state change.