That is, as such formats allow the subclass to extract a position from a frame,
it is possible to extract the duration (if not otherwise provided)
from a (near-)last frame, and a seek can fairly accurately target the required
position.
Fixes #631389.
Arrange for upstream as well as downstream flushing when seeking.
Also determine upstream size as well as seekability. Adjust some comments
to reality and employ debug statement in proper order.
Thanks to Felipe Contreras for the suggestion. This is partially
based on his patches and makes flacparse more than 3.5 times faster.
Looking for valid frame headers is unlikely to give false positives
because every frame header is at least 9 bytes long and contains a
14-bit sync code and an 8-bit checksum over the first 8 bytes.
Fixes bug #631200.
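A minimal sketch of such a header check, following the description above
(the crc8() helper is hypothetical):

  static gboolean
  frame_header_looks_valid (const guint8 * data)
  {
    /* 14-bit sync code: 11111111 111110xx */
    if (data[0] != 0xff || (data[1] & 0xfc) != 0xf8)
      return FALSE;
    /* 8-bit checksum over the first 8 header bytes, stored in the 9th */
    return crc8 (data, 8) == data[8];
  }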
The first newsegment event will be sent by the first call to
gst_base_parse_push_buffer() if necessary, so posting the tags
before that is not a good idea. Instead, do it from the
GstBaseParse::pre_push_buffer vfunc.
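A sketch of the idea in a subclass (the pending_tags field and the exact
vfunc signature are illustrative, not the actual implementation):

  static GstFlowReturn
  my_parse_pre_push_buffer (GstBaseParse * parse, GstBuffer * buffer)
  {
    MyParse *self = MY_PARSE (parse);

    if (self->pending_tags) {
      /* by now the first newsegment event has been sent */
      gst_element_found_tags_for_pad (GST_ELEMENT (parse),
          GST_BASE_PARSE_SRC_PAD (parse), self->pending_tags);
      self->pending_tags = NULL;        /* ownership passed on */
    }
    return GST_FLOW_OK;
  }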
This reverts commit b5a3d60363.
Reverting this for now, since no one really seems to remember why this
property exists or what it could possibly be good for. It seems to have
been in the original mp3parse since the beginning of time and was back-
ported from there.
Seekability, like duration etc., is unlikely to change (frequently), and
the default assumption covers most cases, so let the subclass set it when needed.
At the same time, allow the subclass to indicate whether it has seek metadata
(a table) available, and possibly have it provide an average bitrate.
This allows the child class to chain its event handler with
GstBaseParse, so that subclasses don't have to duplicate all the default
event handling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=622276
We wait to parse a minimum number of frames (10, arbitrarily) before
emitting bitrate tags so that our early estimates are not wildly
inaccurate for streams that start with silence. If the stream ends
before that, we just emit the tags anyway.
While it _would_ be nicer to specify the threshold for pushing
the tags in terms of duration, that would introduce more complexity than
it merits.
https://bugzilla.gnome.org/show_bug.cgi?id=614991
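A minimal sketch of the threshold logic (constant, field and helper names
are hypothetical):

  #define MIN_FRAMES_FOR_TAGS 10

  /* post bitrate tags only once the estimate has stabilized,
   * or unconditionally if the stream ends before that */
  if (parse->frame_count >= MIN_FRAMES_FOR_TAGS || at_eos)
    post_bitrate_tags (parse);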
This is optional because it's quite an expensive operation, and it's very
unlikely that a non-frame is detected as a frame after the header CRC check
and after checking all bits for valid values. The overall frame checksums are
mainly useful to detect inconsistencies in the encoded payload.
When called from the GST_FLAC_PARSE_STATE_HEADERS case,
gst_flac_parse_handle_headers() does a gst_buffer_set_caps() on a buffer
with refcount > 1. This change handles that case by making the buffer
metadata writable.
https://bugzilla.gnome.org/show_bug.cgi?id=614037
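The fix boils down to something like this (a sketch; in 0.10,
gst_buffer_make_metadata_writable() returns a buffer whose metadata may
safely be changed):

  /* refcount may be > 1 here, so don't touch the buffer's caps directly */
  buffer = gst_buffer_make_metadata_writable (buffer);
  gst_buffer_set_caps (buffer, caps);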
This patch adds the get_frame_overhead() vfunc so that baseparse can
accurately calculate the min/avg/max bitrates for aacparse.
Note: The bitrate was being incorrectly calculated for ADTS streams
(it's not in the header as the code suggests).
This makes baseparse keep a running average of the stream bitrate, as
well as the minimum and maximum bitrates. Subclasses can override a
vfunc to make sure that per-frame overhead from the container is not
accounted for in the bitrate calculation.
We take care not to override the bitrate, minimum-bitrate, and
maximum-bitrate tags if they have been posted upstream. We also
rate-limit the emission of bitrate tags so that they are only triggered
by a change of more than 10 kbps.
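A sketch of the bookkeeping (field and helper names are hypothetical,
except for the get_frame_overhead() vfunc mentioned above):

  /* subtract per-frame container overhead reported by the subclass */
  guint64 payload = GST_BUFFER_SIZE (buffer) -
      klass->get_frame_overhead (parse, buffer);

  parse->acc_bytes += payload;
  parse->acc_duration += duration;
  avg_bitrate = gst_util_uint64_scale (8 * parse->acc_bytes, GST_SECOND,
      parse->acc_duration);

  /* rate-limit tag updates to changes of more than 10 kbps */
  if (ABS ((gint64) avg_bitrate - (gint64) parse->posted_bitrate) > 10000)
    update_bitrate_tags (parse, avg_bitrate);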
Because config.h defines __MSVCRT_VERSION__, which should be defined
before inclusion of any system header.
Also fixes mpegdemux Makefile.am LIBADD typo.
Fixes #606665
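In practice that just means keeping the include order like this:

  #ifdef HAVE_CONFIG_H
  #include "config.h"           /* defines __MSVCRT_VERSION__ */
  #endif

  #include <stdio.h>            /* system headers only after config.h */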
Perform sanity check on type of seek, and only perform one that is
appropriately supported. Adjust downstream newsegment event
to first buffer timestamp that is sent downstream.
In particular, consider DISCONT == !sync, and allow the subclass to query
sync state, as it may want to perform additional checks depending
on whether sync was achieved earlier on.
Also arrange for the subclass to query whether leftover data is being drained.
In particular, (optionally) provide baseparse with a notion of frames per second
(and therefore also frame duration) and have it track frame and byte counts.
This way, the subclass can provide baseparse with fps and have it provide
default buffer time metadata and conversions, though the subclass can still
install callbacks to handle these itself.
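For instance, a default frame duration follows directly from the fps
(a sketch; fps_num/fps_den are illustrative variable names, and
gst_util_uint64_scale() avoids 64-bit overflow):

  GstClockTime frame_duration =
      gst_util_uint64_scale (GST_SECOND, fps_den, fps_num);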
After all, the stream is as-is, and there is little molding to downstream's
taste that can be done. If the subclass can and wants to do so, it can
still override this behaviour.
Also handle the case gracefully where the subclass decides to drop
the first buffers and has no caps set yet. It's still required to
have valid caps set when the first buffer should be passed downstream.
In one case we extracted the sample rate index from the codec data
and saved it as sample rate rather than getting the real sample
rate from the table. Fix that, and also make sure we don't access
non-existent table entries by adding a small helper function that
guards against out-of-bounds access in case of invalid input data.
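A sketch of such a guarded lookup (the table holds the standard AAC
sampling frequencies for indices 0-11; the helper name is illustrative):

  static const guint aac_sample_rates[] = { 96000, 88200, 64000, 48000,
    44100, 32000, 24000, 22050, 16000, 12000, 11025, 8000 };

  static guint
  get_sample_rate_from_index (guint idx)
  {
    if (idx < G_N_ELEMENTS (aac_sample_rates))
      return aac_sample_rates[idx];
    return 0;                   /* invalid input data */
  }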
Create output caps from input caps, so we maintain any fields we
might get on the input caps, such as codec_data or rate and channels.
Set channels and rate on the output caps if we don't have input caps
or they don't contain such fields. We do this partly because we can,
but also because some muxers need this information. Tagreadbin will
also be happy about this.
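A sketch of the caps handling (media type and variable names are
placeholders):

  GstCaps *src_caps;
  GstStructure *s;

  /* start from the input caps so fields like codec_data survive */
  if (sink_caps)
    src_caps = gst_caps_copy (sink_caps);
  else
    src_caps = gst_caps_new_simple ("audio/x-flac", NULL);

  /* fill in rate/channels only if the input caps didn't have them */
  s = gst_caps_get_structure (src_caps, 0);
  if (!gst_structure_has_field (s, "rate"))
    gst_caps_set_simple (src_caps, "rate", G_TYPE_INT, rate, NULL);
  if (!gst_structure_has_field (s, "channels"))
    gst_caps_set_simple (src_caps, "channels", G_TYPE_INT, channels, NULL);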
Sending the flush-start event forward before taking the stream lock actually
works, in contrast to deadlocking in downstream preroll_wait (hunk 1).
After that, the chain function gets stuck in a busy loop. This is fixed
by updating the minimum frame size inside the synchronization loop, because the
subclass asks for more data in this way (hunk 2).
Finally, this leads to a very probable crash because the subclass can find a
valid frame with a size greater than the currently available data in the
adapter. This makes the subsequent gst_adapter_take_buffer call return NULL,
which is not expected (hunk 3).
The problem is that after a discont, set_min_frame_size(1024) is called when
detect_stream returns FALSE. However, detect_stream calls check_adts_frame
which sets the frame size on its own to something larger than 1024. This is the
same situation as in the beginning, so the base class ends up calling
check_valid_frame in an endless loop.
Baseparse internally breaks the semantics of a _chain function by calling it
with buffer==NULL. The reason I believed it was okay to remove it was that
there is also an unchecked access to buffer later in _chain. Actually that code
is wrong, as it most probably wants to set DISCONT on the outgoing buffer.
1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. Has been
observed when a streaming thread is doing a buffer_alloc() while an
application thread sends an event on a pad further downstream, and from
within a pad probe (holding STREAM_LOCK) carries out the pipeline plumbing
while the streaming thread has its buffer_alloc() in progress.
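A sketch of the safe pattern (0.10-era API; error handling abridged):

  GstElement *parent;

  /* 1) take the object lock and a strong ref on the parent */
  GST_OBJECT_LOCK (pad);
  parent = GST_PAD_PARENT (pad);
  if (parent)
    gst_object_ref (parent);
  GST_OBJECT_UNLOCK (pad);

  /* 2) if it has gone away, bail out gracefully */
  if (parent == NULL)
    return GST_FLOW_WRONG_STATE;

  /* ... carry out the buffer_alloc using parent ... */
  gst_object_unref (parent);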
If we get GAP samples, there is no need to transmit them.
In some situations, e.g. when the microphone is muted, we can drop network
traffic to ~1 kbit/s. Without this patch it stays at ~20 kbit/s.
Parse session control attributes when no media control attribute is
present. Treat '*' control attributes as an empty string, just like the
spec says.
Fixes #646800
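A sketch of the lookup order (using the GstSDP accessors; the surrounding
integration is abridged):

  /* prefer the media-level attribute, fall back to the session level */
  control = gst_sdp_media_get_attribute_val (media, "control");
  if (control == NULL)
    control = gst_sdp_message_get_attribute_val (sdp, "control");
  /* "*" means the aggregate control URL; treat it as an empty string */
  if (control != NULL && strcmp (control, "*") == 0)
    control = "";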
... by forcing a state change to PLAYING, which should otherwise be a
no-op, as elements should already be in that state.
In particular, jitterbuffer needs new base_time as soon as possible to perform
proper timing (e.g. eos timeout handling) and can't wait for the new base_time
that will be distributed when the whole pipeline returns to PLAYING.
See bug #646397.
This option allows the videomixer2 element to output a valid alpha
channel when the inputs contain a valid alpha channel. This allows
mixing to occur in multiple stages serially.
The following shows an example of such a pipeline:
gst-launch videotestsrc background-color=0x000000 pattern=ball ! video/x-raw-yuv,format=\(fourcc\)AYUV ! videomixer2 background=transparent name=mix1 ! videomixer2 name=mix2 ! ffmpegcolorspace ! autovideosink videotestsrc ! video/x-raw-yuv,format=\(fourcc\)AYUV ! mix2.
The first videotestsrc in this pipeline creates a moving ball on a
transparent background. It is then passed to the first videomixer2.
Previously, this videomixer2 would have forced the alpha channel to
1.0 and given a background of checker, black, or white to the
stream. With this patch, however, you can now specify the background
as transparent, and the alpha channel of the input will be
preserved. This allows for further mixing downstream, as is shown in
the above pipeline, where a second videomixer2 is used to mix in a
background of an smpte videotestsrc. So the result is a ball hovering
over the smpte test source. This could, of course, have been
accomplished with a single mixer element, but staged mixing is useful
when it is not convenient to mix all video at once (e.g. a pipeline
where a foreground and background bin exist and are mixed at the final
output, but the foreground bin needs an internal mixer to create
transitions between clips).
Fixes bug #639994.
Previously the chain function was working sample-frame based. In each cycle it
was checking whether it was time to run an FFT or time to send a message.
Now we have changed the data transform functions to work on a block of data and
to calculate the maximum length until either {end-of-data, do-fft, do-msg}. This
also allows us to avoid the duplicated code for the single- and multi-channel
cases (as the transformers have the same signature now).
Even though we wrap around the accumulated second, we still need to add the
error in the same cycle. Increase the todo in the same conditional, as afterwards
the accumulated error will be below one second.
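A sketch of the corrected accumulator (all names hypothetical):

  frames_todo = frames_per_interval;
  accumulated_error += error_per_interval;
  if (accumulated_error >= GST_SECOND) {
    accumulated_error -= GST_SECOND;
    /* add the error in the same cycle we wrap; afterwards the
     * accumulated error is again below one second */
    frames_todo++;
  }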
AUTHOR only existed in an old version of the spec and ARTIST is
the new replacement for this. We still read both to remain
compatible with old files.
Fixes bug #644875.
Before, it was possible that we ran an extra FFT when the time for sending a
new message was due. Only do this if we have not run the FFT for the interval
at all.
Don't check the format for each sample frame to read. We can make that decision
in _setup already. This is still not ideal, as we call the function per frame.
Ideally we would determine how many samples we can copy and have a loop in the
input reader. As an alternative, we might also consider using the FFT variants
for the various formats and not converting to float in all cases - we would
still need to mix or deinterleave though.
In case server-side fails to perform seek, i.e. PLAY at non-zero requested
position, recovery so far would arrange for streaming to continue, albeit
having lost position tracking in the process. So, query the position prior
to the seek and use it upon a failed seek.
Add a boolean multi-channel property with a default of FALSE. When set to TRUE
the element won't mix all input channels to mono, but will instead run an FFT
on each channel. In that case the result message contains a 2-dimensional array
of channel x data for magnitude and phase.
API: GstSpectrum:multi-channel
https://bugzilla.gnome.org/show_bug.cgi?id=593482
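A sketch of the property installation (the PROP_MULTI_CHANNEL enum value
is illustrative):

  g_object_class_install_property (gobject_class, PROP_MULTI_CHANNEL,
      g_param_spec_boolean ("multi-channel", "Multichannel results",
          "Send separate results for each channel",
          FALSE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));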
Use a separate function to read a sample frame into a ringbuffer slot. In the
future we can use format-specific function pointers to avoid the recurring
format checks.
We now keep the FFT data that is related to one channel in a separate structure
to prepare for multi-channel support. We also refactor the code to operate more
often on the channel context.
When using gstrtpbin with ignore-pt=true, the free_stream function tries to
call gst_element_set_locked_state and gst_element_set_state on a stream->demux
which is NULL.
Fixes #642412
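The fix amounts to guarding those calls (sketch):

  /* with ignore-pt=true no demux element was ever created */
  if (stream->demux) {
    gst_element_set_locked_state (stream->demux, TRUE);
    gst_element_set_state (stream->demux, GST_STATE_NULL);
  }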
Fix slightly confused tag handling in some places: make it clear when
we're taking ownership of a tag list and when not. For example,
gst_icydemux_tag_found() was taking ownership when the source pad
existed, but otherwise not (leak). Also, gst_event_parse_tag() does
not return a newly-allocated taglist, but a tag list that belongs to
the tag event, so don't give ownership of it away.
While we're at it, some minor clean-ups: don't re-invent g_strndup()
and simplify gst_icydemux_parse_and_send_tags() a bit, and don't
leak the tag list in case no valid tags were found.
https://bugzilla.gnome.org/show_bug.cgi?id=641330
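For example, the tag-event case has to copy the list if it is kept beyond
the event's lifetime (a sketch):

  GstTagList *tags, *copy;

  gst_event_parse_tag (event, &tags);   /* 'tags' belongs to the event */
  copy = gst_tag_list_copy (tags);      /* we own 'copy' and must pass it
                                         * on or free it ourselves */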
* gst/qtdemux/qtdemux.c (gst_qtdemux_src_convert): Unref the qtdemux; we
weren't doing so before.
(gst_qtdemux_handle_src_event, gst_qtdemux_chain): Fix some error
cases which would leak a ref to the qtdemux.
Extract MusicBrainz tags added by MusicBrainz's Picard
tagger application. These tags (esp. the album id) are
helpful for rhythmbox et al. to automatically download
cover art.
https://bugzilla.gnome.org/show_bug.cgi?id=642205
Images might have framerate=0/1 in the caps, which caused an
assertion failure in deinterlace. I don't know of any interlaced image formats,
but deinterlace might be hardcoded into some generic pipelines, and
it shouldn't assert.
The fix was to set field_duration to 0 if the input has a framerate
with a 0 numerator.
This patch also adds checks for this situation to the unit tests.
https://bugzilla.gnome.org/show_bug.cgi?id=641400
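The fix is essentially this (a sketch, assuming a field lasts half a
frame; fps_n/fps_d are illustrative variable names):

  if (fps_n == 0) {
    /* still images: no sensible field duration */
    field_duration = 0;
  } else {
    field_duration = gst_util_uint64_scale (GST_SECOND, fps_d, 2 * fps_n);
  }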
Theora can only use the last frame (or the keyframe) as a reference. So, in
practice, if we receive a buffer that references an unknown codebook, it
probably means that headers were lost; request new headers in that case.
Functions that process the rtcp buffer could decide to keep a ref
on the buffer for further processing. So make the metadata writable
only after they are done.
In particular, this avoids missing the intended keyframe when first converting
from the frame's mov time to global segment time, and then back from global
time to mov time when activating the segment.