The spec I found says "16 bits".
The existing code used log2(somevalue)+1.
ffmpeg uses log2(somevalue-1)+1.
The code now uses log2(somevalue-1)+1, and this makes it work with
some sample video without breaking another sample.
Now, I'm far from certain I've got the right spec; I found it by
searching the internet, so...
https://bugzilla.gnome.org/show_bug.cgi?id=654666
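For illustration, a minimal standalone sketch (plain C, not the demuxer's
actual code) of why the two formulas diverge at exact powers of two:

    #include <math.h>
    #include <stdio.h>

    int main (void)
    {
      /* values 0..15 fit in 4 bits, but log2(16)+1 claims 5;
       * log2(16-1)+1 correctly yields 4 */
      unsigned v = 16;
      int bits_old = (int) log2 (v) + 1;
      int bits_new = (int) log2 (v - 1) + 1;

      printf ("v=%u: log2(v)+1 = %d, log2(v-1)+1 = %d\n", v, bits_old, bits_new);
      return 0;
    }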
Some streams declare PIDs but will not send data for them.
Ensure we time out on those: send new segments to keep their
time synchronized with the rest, and do not wait forever
before deciding to signal no-more-pads.
https://bugzilla.gnome.org/show_bug.cgi?id=659924
We track streams for which a data callback is set (and for which
pads will be added only when data is received), and signal
no-more-pads when the last pad is added.
https://bugzilla.gnome.org/show_bug.cgi?id=659924
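A minimal sketch of the idea, with illustrative field names (not the
actual demuxer code):

    /* when a tracked stream finally gets its pad (or times out) */
    demux->pending_pads--;
    if (demux->pending_pads == 0)
      gst_element_no_more_pads (GST_ELEMENT (demux));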
There was a second threshold, which apparently needs to be smaller
than the first, though I'm not certain of that, as I don't yet
understand this nest of WTF that is the MPEG demuxer timing logic.
Fixes video freezing on one (corrupted) MPEG sample. It would
previously never think it was out of the discontinuity, and would
push buffers with no timestamp.
Now this took me more than a day's poking at the thing, for just
one constant change, and I'm scared to have to touch this again :S
https://bugzilla.gnome.org/show_bug.cgi?id=655804
In a test stream, I get one buffer with a PTS of about 15 seconds
in the future compared to the previous one, and next buffers with
timestamps continuing where the original ones left off.
This caused the sink to wait 15 seconds to display the frame while
more frames queued up, and then dump all the subsequent frames as
they "arrived too late".
Maybe that threshold should be made configurable, but for now,
make it smaller to catch more of these.
https://bugzilla.gnome.org/show_bug.cgi?id=655804
Non-A/V streams keep using the larger threshold (10 minutes), as
subtitles may arrive only every so often.
Freeverb is a public-domain reverb implementation. Port it as a GStreamer
element and make use of GStreamer-specific features (gap awareness,
discontinuity handling, the controller, ...).
On the one hand, there is no need to collect a NAL if we are processing
the last one. On the other hand, ensure that AU collection processing
has sufficient next-NAL data in normal cases.
Fixes #663180.
Just proxying the downstream caps will prevent h264parse from
accepting a different stream-format than what is supported
downstream, although it could convert to a different stream-format.
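A sketch of the idea against the 0.10-era caps API; the function and
field names are illustrative, not the actual patch:

    static GstCaps *
    gst_h264_parse_sink_getcaps (GstPad * pad)
    {
      GstH264Parse *parse = GST_H264_PARSE (GST_PAD_PARENT (pad));
      GstCaps *caps = gst_pad_get_allowed_caps (parse->srcpad);
      guint i;

      if (caps == NULL)
        return gst_caps_copy (gst_pad_get_pad_template_caps (pad));

      caps = gst_caps_make_writable (caps);
      for (i = 0; i < gst_caps_get_size (caps); i++) {
        /* drop the restriction we know how to convert away from */
        gst_structure_remove_field (gst_caps_get_structure (caps, i),
            "stream-format");
      }
      return caps;
    }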
Reduce the start-capture workload by moving the elements' state resetting
to the finishing steps of the capture. This reduces the time start-capture
takes to actually start a capture and return to its caller, improving the
user experience.
As the elements' state reset is now triggered from the message handling
function, it needs to spawn a new thread; changing state from the pad's
task would cause a deadlock.
Adds a new variable to keep track of the state of the video
recording in camerabin2. This allows start-capture to reject
new video recording requests when one is already ongoing. This
fixes one of the check tests.
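A sketch of the guard, with made-up names:

    /* hypothetical state field checked at the top of start-capture */
    if (camera->mode == MODE_VIDEO &&
        camera->video_state == GST_CAMERA_BIN_VIDEO_RECORDING) {
      GST_WARNING_OBJECT (camera, "video recording already ongoing");
      return;
    }
    camera->video_state = GST_CAMERA_BIN_VIDEO_RECORDING;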
Rename the image taglists' mutex into image capture mutex and
use it also for the image capture list to prevent concurrent
access from different threads (application and capture threads).
Do not store the preview location if post-previews is false; this would
mess up preview naming in case the application switches between enabling
and disabling previews.
Tags are currently sent from start-capture, which runs in the
application thread. For images we can delay the tag pushing to the
buffer probe and push the tags with the location event, reducing
start-capture time.
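A sketch with the 0.10 pad-probe API; the field names are illustrative:

    static gboolean
    image_capture_probe_cb (GstPad * pad, GstBuffer * buffer, gpointer data)
    {
      GstCameraBin2 *camera = data;

      if (camera->image_tags) {
        /* push the pending tags here instead of from start-capture */
        gst_pad_push_event (pad, gst_event_new_tag (camera->image_tags));
        camera->image_tags = NULL;
      }
      return TRUE;  /* let the buffer pass */
    }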
Some messages might be interesting to applications, so we can only
decrement the processing counter and send the idle notification
once those messages have been posted on the pipeline's bus.
Generating and posting a preview image always comes with a performance
penalty, so set the default value to FALSE. The preview-caps property that
defines the preview image format is also NULL by default, so instead of
generating a preview image of unspecified format by default, explicit
action from the application should be required to enable the preview
image posting feature.
The application also has to add custom code to handle preview messages
in its message handling function anyway.
This is probably the cause of an occasional crash while streaming
MPEG. Blind fix after staring at the code and following the logic, so
it may or may not fix the issue; I cannot test it.
Makes camerabin2 only signal that it is idle after all previews have
been generated, all images have been captured and saved, and all videos
have been finished properly.
Only access the preview location if it exists, to avoid accessing
a NULL variable. If the preview location list doesn't exist, it is
likely because the source has posted a preview message after camerabin2
has been put to READY.
The preview filename list is accessed whenever a new capture is started, when
camera-source posts a new preview message or on state changes. All of those can
occur simultaneously, so add a mutex to prevent concurrent access.
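A sketch of the locking, with illustrative names:

    g_mutex_lock (camera->preview_list_mutex);
    camera->preview_location_list =
        g_slist_append (camera->preview_location_list, g_strdup (location));
    g_mutex_unlock (camera->preview_list_mutex);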
Since the seeking byte offset is chosen by linear interpolation
from SCR values, we need to take that first SCR into account
to end up near the correct offset. Otherwise, as the code does
a linear search after that first seek, it will take a LOOOOOONG
time to get there for streams which don't start at zero.
https://bugzilla.gnome.org/show_bug.cgi?id=659485
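The interpolation in a hedged sketch (illustrative, not the demuxer's
exact code):

    static guint64
    interpolate_seek_offset (guint64 target_scr, guint64 first_scr,
        guint64 last_scr, guint64 first_offset, guint64 last_offset)
    {
      /* without subtracting first_scr, streams that don't start at zero
       * land far from the wanted offset */
      gdouble ratio = (gdouble) (target_scr - first_scr) /
          (gdouble) (last_scr - first_scr);

      return first_offset + (guint64) (ratio * (last_offset - first_offset));
    }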
Makes camerabin2 intercept preview-image messages and add the
corresponding filename to the message structure in the
'location' field.
This makes it easier for applications to track preview images.
Setting the audio source to NULL just after pushing the EOS event
on it could potentially cause loss of said EOS event. Instead, we
can set the audio source to NULL when ready-for-capture is
signalled and the boolean value is TRUE, as this indicates we are
not currently capturing video.
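A sketch of the idea; names are illustrative:

    static void
    ready_for_capture_cb (GObject * src, GParamSpec * pspec, gpointer data)
    {
      GstCameraBin2 *camera = data;
      gboolean ready;

      g_object_get (src, "ready-for-capture", &ready, NULL);
      if (ready && camera->audio_src) {
        /* no video capture running, safe to shut the audio source down */
        gst_element_set_state (camera->audio_src, GST_STATE_NULL);
      }
    }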
header_length contains the length in bytes following the header_length
field, i.e. excluding the 6-byte start code and the header_length field
itself.
H.264 streams and some other formats need to be announced in the PSM.
VLC wouldn't play files created with mpegpsmux containing H.264 because
we claim the system header is larger than it actually is, which makes
VLC skip the program stream map which follows the system header, which
in turn makes it not recognise our H.264 video stream.
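For clarity, a sketch of the length computation (names illustrative):

    /* the system header starts with the 4-byte 00 00 01 BB start code
     * followed by the 2-byte header_length field; header_length must
     * count only the bytes after those 6 */
    guint16 header_length = total_header_size - 6;

    data[4] = (header_length >> 8) & 0xff;
    data[5] = header_length & 0xff;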
VP8 uses a probabilistic bool coder, not a straight bit coder.
This fixes parsing when error-resilient is set.
This commit includes a copy of libvpx's bool coder, BSD licensed.
https://bugzilla.gnome.org/show_bug.cgi?id=652694
An element stores the result of the last state change it did, and
GstBin's state change handler will use this last result for state-locked
elements to decide whether their state change was successful or not.
In camerabin2, the filesinks have their state locked and when they
fail switching states, this last failure will be used if the application
tries to change camerabin2's state, causing any state change to fail.
This patch makes camerabin2 reset this last change failure, so that
camerabin2 does not fail on its next state changes.
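The gist as a sketch (GST_STATE_RETURN is the real accessor macro; the
sink field names are illustrative and locking is elided):

    /* clear the stale failure on the state-locked sinks so the next
     * bin state change does not pick it up again */
    GST_STATE_RETURN (camera->videosink) = GST_STATE_CHANGE_SUCCESS;
    GST_STATE_RETURN (camera->imagesink) = GST_STATE_CHANGE_SUCCESS;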
Camerabin2 has a zoom property that is simply proxied to its
internal camera-source element. This patch makes camerabin2 listen
to 'notify' signals from camera-source so it can update its own zoom
property value when camera-source changes its zoom, either as a
side effect of another operation or because the user set the zoom
directly on it instead of through camerabin2.
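A sketch of the wiring; the callback name is illustrative:

    static void
    src_notify_zoom_cb (GObject * obj, GParamSpec * pspec, gpointer data)
    {
      GstCameraBin2 *camera = data;

      /* mirror camera-source's zoom into our own property and notify */
      g_object_get (obj, "zoom", &camera->zoom, NULL);
      g_object_notify (G_OBJECT (camera), "zoom");
    }

    ...
    g_signal_connect (camera->src, "notify::zoom",
        G_CALLBACK (src_notify_zoom_cb), camera);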
Buffers would always start with timestamp 0 and we'd start streaming
from the first buffer, but live streams always start streaming from
the third fragment before the end of the playlist, which makes their
timestamps, as returned by get_next_fragment, be whatever position
they had in the playlist. This makes sure the position correctly
reports the position of the buffer in the playlist, and adds a shifting
variable to allow seeking in the middle of fragments.
In some networks, especially 3G, a fragment download or playlist
update may fail. Allow up to 3 consecutive failures, while using
the RFC's specified retry delays, before considering that there was
an error on the stream.
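A sketch of the policy, with illustrative names:

    #define MAX_CONSECUTIVE_FAILURES 3

    if (!fetch_next_fragment (demux)) {
      demux->failure_count++;
      if (demux->failure_count >= MAX_CONSECUTIVE_FAILURES) {
        GST_ELEMENT_ERROR (demux, RESOURCE, NOT_FOUND,
            ("Could not fetch the fragment"), (NULL));
        return;
      }
      /* otherwise wait for the RFC's retry delay and try again */
    } else {
      demux->failure_count = 0;
    }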
If we know that our camera source element produces buffers at the same
resolution and appropriate colourspace for the output, we don't need any
of the generic conversion elements in encodebin. This reduces caps
negotiation overheads among other things.
Fixes interesting race conditions that cause crashes in decodebin2
because pads are added/removed from child elements although they
should be in READY state already.
We should stop the update thread in PAUSED state and avoid fetching
new fragments when the queue is not empty. The queue should always be
empty anyway, since we push the data into a queue. Also, in totem, if
we seek and pause the stream while it's buffering, the state will stay
PLAYING for some reason, so it's best not to continue fetching fragments
forever.
This is to ensure that we reset the accumulated segment on the sinks,
so that if we start with audio only and then switch to audio+video,
both sinks will have the same segments and will be synchronized.