1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. This has been
observed when a streaming thread is doing a buffer_alloc() while an
application thread sends an event on a pad further downstream and, from
within a pad probe (holding the STREAM_LOCK), carries out the pipeline
plumbing while the streaming thread still has its buffer_alloc() in progress.
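A minimal sketch of the pattern (0.10-era API; the control flow is
illustrative, not the exact patch):

  GstObject *parent;

  GST_OBJECT_LOCK (pad);
  parent = GST_OBJECT_PARENT (pad);
  if (parent == NULL) {
    /* (2) parent has gone away, bail out gracefully */
    GST_OBJECT_UNLOCK (pad);
    return GST_FLOW_WRONG_STATE;
  }
  /* (1) take a strong ref while we still hold the lock */
  gst_object_ref (parent);
  GST_OBJECT_UNLOCK (pad);

  /* ... safe to use parent here, even if the pipeline is re-plumbed ... */

  gst_object_unref (parent);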
If we get GAP samples, there is no need to transmit them.
In some situations, e.g. when the microphone is muted, this drops network
traffic usage to ~1 kbit/s. Without this patch it stays at ~20 kbit/s.
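Conceptually, in the payloader's buffer handler (a sketch, assuming upstream
marks silence with the GAP flag):

  if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_GAP)) {
    /* silence-only data, don't put it on the wire */
    gst_buffer_unref (buffer);
    return GST_FLOW_OK;
  }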
Parse session control attributes when no media control attribute is
present. Treat * control attributes as an empty string, just like the
spec says.
Fixes #646800
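The resulting lookup order, roughly (gst-sdp API; the '*' handling is per the
RTSP spec):

  const gchar *control;

  control = gst_sdp_media_get_attribute_val (media, "control");
  if (control == NULL)
    control = gst_sdp_message_get_attribute_val (sdp, "control");
  /* the spec says a control of "*" is to be treated as an empty string */
  if (control != NULL && strcmp (control, "*") == 0)
    control = "";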
... by forcing a state change to PLAYING, which should otherwise be a
no-op as elements should already be in that state.
In particular, the jitterbuffer needs the new base_time as soon as possible
to perform proper timing (e.g. EOS timeout handling) and can't wait for the
new base_time that will be distributed when the whole pipeline returns to
PLAYING.
See bug #646397.
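A sketch of what that amounts to (the element field name is illustrative):

  /* the element is already PLAYING, so this is a no-op state-wise,
   * but it makes the jitterbuffer pick up the new base_time now */
  gst_element_set_state (src->jitterbuffer, GST_STATE_PLAYING);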
This option allows the videomixer2 element to output a valid alpha
channel when the inputs contain a valid alpha channel. This allows
mixing to occur in multiple stages serially.
The following gst-launch line shows an example of such a pipeline:
gst-launch videotestsrc background-color=0x000000 pattern=ball ! \
  video/x-raw-yuv,format=\(fourcc\)AYUV ! \
  videomixer2 background=transparent name=mix1 ! \
  videomixer2 name=mix2 ! ffmpegcolorspace ! autovideosink \
  videotestsrc ! video/x-raw-yuv,format=\(fourcc\)AYUV ! mix2.
The first videotestsrc in this pipeline creates a moving ball on a
transparent background. It is then passed to the first videomixer2.
Previously, this videomixer2 would have forced the alpha channel to
1.0 and given a background of checker, black, or white to the
stream. With this patch, however, you can now specify the background
as transparent, and the alpha channel of the input will be
preserved. This allows for further mixing downstream, as is shown in
the above pipeline, where a second videomixer2 is used to mix in a
background from an smpte videotestsrc. The result is a ball hovering
over the smpte test source. This could, of course, have been
accomplished with a single mixer element, but staged mixing is useful
when it is not convenient to mix all video at once (e.g. a pipeline
where a foreground and background bin exist and are mixed at the final
output, but the foreground bin needs an internal mixer to create
transitions between clips).
Fixes bug #639994.
Previously the chain function worked on single sample frames. In each cycle
it checked whether it was time to run an FFT or time to send a message.
Now the data transform functions work on a block of data and calculate the
maximum length until either {end-of-data, do-fft, do-msg}. This also allows
us to avoid the duplicated code for the single- and multi-channel cases (as
the transformers have the same signature now).
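Conceptually the new inner loop looks like this (a sketch; all names are made
up for illustration):

  while (frames_todo > 0) {
    /* largest block until end-of-data, do-fft or do-msg */
    guint block = MIN (frames_todo, nfft - input_pos);
    block = MIN (block, frames_until_msg);

    /* one transform signature for mono and multi-channel */
    input_data (in, block, channel_ctx);

    in += block * frame_size;
    input_pos = (input_pos + block) % nfft;
    frames_todo -= block;
    frames_until_msg -= block;

    if (input_pos == 0)
      run_fft (channel_ctx);          /* do-fft */
    if (frames_until_msg == 0)
      send_message (spectrum);        /* do-msg; counter reset omitted */
  }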
Even though we wrap around the accumulated second, we still need to add the
error in the same cycle. Increase the todo in the same conditional, as
afterwards the accumulated error will be below one second.
AUTHOR only existed in an old version of the spec and ARTIST is
the new replacement for it. We keep reading both to stay
compatible with old files.
Fixes bug #644875.
Previously it was possible that we ran an extra FFT when it was time to send
a new message. Only do this if we have not run the FFT for the interval at
all.
Don't check the format for each sample frame we read. We can make that
decision in _setup already. This is still not ideal, as we call the function
per frame. Ideally we'd determine how many samples we can copy and have a
loop in the input reader. As an alternative, we might also consider using the
FFT variants for the various formats instead of converting to float in all
cases - we would still need to mix or deinterleave though.
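i.e. pick the input reader once, along these lines (hypothetical helper
names):

  /* in _setup(): decide once, based on the negotiated caps */
  if (is_float)
    spectrum->input_data = (width == 32) ? input_data_f32 : input_data_f64;
  else
    spectrum->input_data = (width == 16) ? input_data_s16 : input_data_s32;

  /* per frame, just call through the pointer */
  spectrum->input_data (in, channel_ctx);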
In case the server side fails to perform a seek, i.e. PLAY at a non-zero
requested position, recovery so far would arrange for streaming to continue,
albeit having lost position tracking in the process. So, query the position
prior to the seek and use it upon a failed seek.
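Roughly (do_rtsp_seek() is a stand-in name; 0.10 query API):

  GstFormat fmt = GST_FORMAT_TIME;
  gint64 position = 0;

  /* remember where we are before asking the server to PLAY elsewhere */
  gst_element_query_position (GST_ELEMENT_CAST (src), &fmt, &position);

  if (!do_rtsp_seek (src, requested_position)) {
    /* seek failed; resume position tracking from the old position */
    segment->last_stop = position;
  }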
Add a boolean multi-channel property with a default of FALSE. When set to
TRUE the element won't mix all input channels to mono, but will instead run
an FFT on each channel. In that case the result message contains a
2-dimensional array of channel x data for magnitude and phase.
API: GstSpectrum:multi-channel
https://bugzilla.gnome.org/show_bug.cgi?id=593482
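Typical usage from an application:

  GstElement *spectrum;

  spectrum = gst_element_factory_make ("spectrum", NULL);
  g_object_set (spectrum, "multi-channel", TRUE, NULL);
  /* element messages now carry a channel x data array for
   * "magnitude" and "phase" instead of a flat array */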
Use a separate function to read a sample frame into a ringbuffer slot. In
the future we can use format-specific function pointers to avoid the
recurring format checks.
We now keep the FFT data related to one channel in a separate structure to
prepare for multi-channel support. We also refactor the code to operate more
often on the channel context.
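The per-channel context then groups what used to be per-element fields,
roughly (the field list is illustrative):

  typedef struct {
    GstFFTF32 *fft_ctx;            /* FFT state for this channel */
    GstFFTF32Complex *freqdata;    /* FFT output */
    gfloat *input;                 /* ringbuffer of input samples */
    gfloat *spect_magnitude;       /* accumulated magnitude */
    gfloat *spect_phase;           /* accumulated phase */
  } GstSpectrumChannel;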
When using gstrtpbin with ignore-pt=true, the free_stream function tries to
call gst_element_set_locked_state and gst_element_set_state on a stream->demux
which is NULL.
Fixes #642412
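So guard the calls:

  if (stream->demux) {
    gst_element_set_locked_state (stream->demux, TRUE);
    gst_element_set_state (stream->demux, GST_STATE_NULL);
  }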
Fix slightly confused tag handling in some places: make it clear when
we're taking ownership of a tag list and when not. For example,
gst_icydemux_tag_found() was taking ownership when the source pad
existed, but otherwise not (leak). Also, gst_event_parse_tag() does
not return a newly-allocated taglist, but a tag list that belongs to
the tag event, so don't give ownership of it away.
While we're at it, some minor clean-ups: don't re-invent g_strndup(),
simplify gst_icydemux_parse_and_send_tags() a bit, and don't leak the tag
list in case no valid tags were found.
https://bugzilla.gnome.org/show_bug.cgi?id=641330
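The gst_event_parse_tag() case as a sketch, assuming gst_icydemux_tag_found()
now consistently takes ownership:

  GstTagList *taglist;

  gst_event_parse_tag (event, &taglist);
  /* taglist still belongs to the event; take a copy before passing
   * ownership on */
  gst_icydemux_tag_found (icydemux, gst_tag_list_copy (taglist));
  gst_event_unref (event);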
* gst/qtdemux/qtdemux.c (gst_qtdemux_src_convert): Unref the qtdemux; we
weren't doing so before.
(gst_qtdemux_handle_src_event, gst_qtdemux_chain): Fix some error
cases which would leak a ref to the qtdemux.
Extract MusicBrainz tags added by MusicBrainz's Picard
tagger application. These tags (esp. the album id) are
helpful for rhythmbox et al. to automatically download
cover art.
https://bugzilla.gnome.org/show_bug.cgi?id=642205
Images might have framerate=0/1 in the caps, which caused an
assertion on deinterlace. I don't know of any interlaced image formats,
but deinterlace might be hardcoded in some generic pipelines and
it shouldn't assert.
The fix was to set field_duration to 0 if the input has a framerate
with a 0 numerator.
This patch also adds checks for this situation on the unit tests.
https://bugzilla.gnome.org/show_bug.cgi?id=641400
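i.e. something like this, with fps_n/fps_d taken from the caps:

  if (fps_n == 0) {
    /* still images: no meaningful field duration */
    self->field_duration = 0;
  } else {
    /* half a frame per field */
    self->field_duration =
        gst_util_uint64_scale (GST_SECOND, fps_d, 2 * fps_n);
  }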
Theora can only use the last frame (or the keyframe) as a reference, so in
practice a single lost frame invalidates everything up to the next keyframe.
If we receive a buffer that references an unknown codebook, request new
headers. It probably means that headers were lost.
Functions that process the rtcp buffer could decide to keep a ref
on the buffer for further processing. So make the metadata writable
only after they are done.
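That is, the order becomes (process_rtcp_packet() is a stand-in name):

  /* let the processing functions run first; they may keep a ref */
  process_rtcp_packet (session, buffer);

  /* only now is it safe to make the metadata writable */
  buffer = gst_buffer_make_metadata_writable (buffer);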
In particular, this avoids missing the intended keyframe when first converting
from the frame's mov time to global segment time, and then back from global
time to mov time when activating the segment.
Make the win32 build bot happy again, and nicefy the output while we're at it.
qtdemux.c: In function 'qtdemux_parse_trun':
qtdemux.c:2162:3: error: format '%lu' expects type 'long unsigned int', but argument 9 has type 'guint32'
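The portable fix amounts to using glib's format macro instead of %lu (the log
text here is illustrative):

  GST_LOG_OBJECT (qtdemux, "sample count: %" G_GUINT32_FORMAT,
      samples_count);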
Check that the WAVEHEADER node is present instead of blindly using it.
If not present we won't be able to provide a more refined caps, but at
least we won't crash.
https://bugzilla.gnome.org/show_bug.cgi?id=640028
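i.e. check first (internal helper name and fourcc from memory, treat as
illustrative):

  GNode *waveheadernode;

  waveheadernode = qtdemux_tree_get_child_by_type (wavenode,
      GST_MAKE_FOURCC ('w', 'a', 'v', 'h'));
  if (waveheadernode != NULL) {
    /* parse it and build more refined caps */
  } else {
    /* node missing: fall back to generic caps instead of crashing */
  }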
In the old code it was difficult to understand exactly how the neighboring
scan lines are calculated, and it appeared that some were off by +2 or -2,
depending on the field flag. Fixes #639321.
Set caps from the start so discoverer doesn't blow up on
seeing no negotiated caps between elements on preroll,
which might happen if no subtitle buffers have been
pushed yet at the time. See file from bug #603308.
The previous default, greedyh, takes 4 times as long as MPEG-2
video decoding, and is unlikely fast enough on any current CPU
to play 1080i video in real-time. greedyl isn't much faster.
linear was chosen over vfir, since the quality advantage of vfir
is minimal compared to the occasional visual artifacts and slower
processing.
Improve parsing of the samplerate.
Parse the framelen so that we can calculate timestamps.
Interpolate the incoming timestamp onto outgoing buffers when there are
multiple subframes.
Fixes #625825
It was an arbitrary limit from the start, meant as a basic sanity check,
so we may just as well increase it a little. It would be good to provide
progress reporting while completing the block in any case...
https://bugzilla.gnome.org/show_bug.cgi?id=637060
Use g_ascii_dtostr() and g_ascii_strtod() to serialise/deserialise
floating point numbers, instead of ugly hacks that switch locale
before and after calling libc functions (which is not a good idea
in a multi-threaded application).
atof() converts strings according to the current locale, but the
framerate string will likely always use a dot as floating point
separator, so use g_ascii_strtod() instead (but also canonicalise
the string first, so we can handle both formats as input).
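For reference, the glib calls in question (a sketch):

  gchar dstr[G_ASCII_DTOSTR_BUF_SIZE];
  gdouble fps;

  /* serialise: always uses '.' regardless of locale */
  g_ascii_dtostr (dstr, sizeof (dstr), framerate);

  /* deserialise: expects '.'; canonicalise ',' first so both
   * formats are accepted */
  g_strdelimit (str, ",", '.');
  fps = g_ascii_strtod (str, NULL);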
Include all possible stats of a source in the stats structure, because we
might be interested in what happened in the past.
Document the stats property and its fields.
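An application can then inspect it like this (the field names are given as
examples only):

  GstStructure *stats = NULL;

  g_object_get (source, "stats", &stats, NULL);
  if (stats) {
    /* e.g. fields such as octets-sent, packets-sent, is-sender, ... */
    GST_DEBUG ("stats: %" GST_PTR_FORMAT, stats);
    gst_structure_free (stats);
  }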
Using this in a demuxer will cause deadlocks if there's
a pad with a pending pad-block downstream, no matter whether
there is a queue between the pads or not. Queues pass
bufferalloc downstream from the same thread and only
act as a thread boundary for events and buffers.