Display size is either transmitted in the display definition segment or
implicitly defaults to 720x576. The subtitle window information that is also
present in the display definition segment is not used yet.
Don't get caught in an infinite loop if the source gets disconnected, and
fail gracefully (rather than segfaulting) upon detecting that the frame
geometry has increased.
https://bugzilla.gnome.org/show_bug.cgi?id=635397
We need to reset the elements from the video recording branch, including
the queue and capsfilter in order to clear the eos state and activate
the pads.
This makes it possible to record multiple videos in a sequence with
camerabin2; otherwise the source would get an unexpected return and
push EOS, stopping the whole pipeline.
As the video recording bin's state is locked, we should always
remember to set it to NULL when camerabin2 goes to NULL
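A rough sketch of such a reset, assuming the branch elements are kept in
fields named video_queue and video_capsfilter (the names are illustrative):

    /* Cycling the elements through NULL clears their EOS state;
     * syncing with the parent brings them back up and reactivates
     * their pads. */
    gst_element_set_state (self->video_queue, GST_STATE_NULL);
    gst_element_set_state (self->video_capsfilter, GST_STATE_NULL);
    gst_element_sync_state_with_parent (self->video_queue);
    gst_element_sync_state_with_parent (self->video_capsfilter);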
Be more careful when using elements that might not
have been created yet
And do not set location property recursively on videorecordingbin
Adds a location property to enable applications to select
the captured files' names. Locations are handled just like
multifilesink's.
Also disables -Wformat-nonliteral to allow using non-literal
format strings with g_strdup_printf in camerabin to generate a
sequence of locations for captures.
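A sketch of how a multifilesink-style location can be expanded per capture;
the format string, index field and filesink field names are illustrative,
not the actual code:

    /* location is e.g. "cap_%d.ogg"; the format string is non-literal,
     * hence the need to disable -Wformat-nonliteral */
    gchar *filename = g_strdup_printf (camera->location, camera->capture_index++);
    g_object_set (camera->filesink, "location", filename, NULL);
    g_free (filename);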
Keep the vidbin state locked to avoid it going to PLAYING without
being used and leaving an empty file behind.
Check the docs in the code for details on the handling.
Adds property that informs if v4l2camerasrc is available
for starting a new capture.
It is useful for applications to know (via deep-notify) when the
property changes and a new capture is possible. Note, however, that
starting a new capture from the notify callback will cause a deadlock.
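For example, an application could watch the property through deep-notify and
defer the next capture to an idle callback instead of starting it from the
notify handler (which would deadlock). This is only a sketch; the
start-capture action name comes from the camerabin2 commits below:

    static gboolean
    start_next_capture (gpointer user_data)
    {
      GstElement *camera = user_data;
      g_signal_emit_by_name (camera, "start-capture");
      return FALSE;                 /* run only once */
    }

    static void
    on_deep_notify (GstObject * obj, GstObject * prop_obj, GParamSpec * pspec,
        gpointer user_data)
    {
      gboolean ready = FALSE;
      g_object_get (prop_obj, "ready-for-capture", &ready, NULL);
      if (ready)
        g_idle_add (start_next_capture, user_data); /* don't capture from here */
    }

    /* ... */
    g_signal_connect (camera, "deep-notify::ready-for-capture",
        G_CALLBACK (on_deep_notify), camera);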
Removes the old v4l2camerasrc logic that used mode property
switching to start/stop captures, making it identical to
camerabin2's behavior and allowing the future addition of
pausing a video recording.
This also removes MODE_PREVIEW, as it became useless.
Implements video capture on v4l2camerasrc using the mode property:
when mode is set to video, the pad probe pushes a new segment
and starts pushing buffers on the pad; when the property is
set back to preview, the pad probe pushes an EOS and stops
pushing buffers.
This is controlled by a recording state variable that is protected
by the GST_OBJECT_LOCK. Taking the lock for every buffer isn't
nice, so we could look for an alternative lockless approach here.
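A rough 0.10-style sketch of the probe logic described above; the struct and
field names (recording, segment_pushed) are illustrative:

    static gboolean
    video_pad_probe (GstPad * pad, GstBuffer * buffer, gpointer user_data)
    {
      GstV4l2CameraSrc *self = user_data;   /* hypothetical struct name */
      gboolean recording;

      GST_OBJECT_LOCK (self);
      recording = self->recording;
      GST_OBJECT_UNLOCK (self);

      if (recording && !self->segment_pushed) {
        gst_pad_push_event (pad, gst_event_new_new_segment (FALSE, 1.0,
                GST_FORMAT_TIME, 0, -1, 0));
        self->segment_pushed = TRUE;
      } else if (!recording && self->segment_pushed) {
        gst_pad_push_event (pad, gst_event_new_eos ());
        self->segment_pushed = FALSE;
      }

      /* TRUE lets the buffer through, FALSE drops it */
      return recording;
    }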
Use 'mode' enum definition from gstcamerabin-enum file to avoid
conflicts between v4l2camerasrc and gstcamerabin2 modes.
For now there is a MODE_PREVIEW there that is only used on the
camerasrc; not sure if we are keeping it in the future, but for
now this works.
Adds mode property to camerabin2, allowing users to
select between video and stills capture. Also adds
start/stop capture actions to start and stop
capturing.
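Usage from an application would look roughly like this; the action names
follow this commit, while the numeric mode value and its meaning are
assumptions (use the mode enum from gstcamerabin-enum in real code):

    /* switch camerabin2 to video mode and record a clip */
    g_object_set (camerabin, "mode", 2 /* assumed: video */, NULL);
    g_signal_emit_by_name (camerabin, "start-capture");
    /* ... record for a while ... */
    g_signal_emit_by_name (camerabin, "stop-capture");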
Adds a bin that is responsible for encoding and saving video
streams to files.
For now it is simply a ffmpegcolorspace ! theoraenc ! oggmux !
filesink bin.
It is still incapable of recording audio.
Adds viewfinder bin element, one of the modules of camerabin2
that is responsible for displaying the video from the camera.
For now it is only a bin with ffmpegcolorspace ! videoscale !
autovideosink
This holds all newsegment and most other events till there is enough
data to set the srcpad caps, so that the downstream link is properly
negotiated before data starts flowing.
https://bugzilla.gnome.org/show_bug.cgi?id=635204
This holds all newsegment and most other events till there is enough
data to set the srcpad caps, so that the downstream link is properly
negotiated before data starts flowing.
https://bugzilla.gnome.org/show_bug.cgi?id=635205
* libdvbsub gives us alpha channel already, not transparency level, so
don't do another "alpha = 255 - alpha", this is done by libdvbsub.
* Fix alpha channel handling in interpolation - assrender had an additional
  1bpp alpha bitmap as a possible mask, we don't, so don't use the palette
  index array as alpha values; this was a bug from quickly porting the code
  long ago to changing pixel colors (assrender has a single pixel color for
  whole regions or something, unlike dvbsub, which has indexed colors).
* Don't forget to reassign our YUV and other local pixel color variables
after shifting to work on the bottom part of a 2x2 subsample block, or
it's obviously very blocky.
Remaining issues in blending:
* Should probably be interpolating or doing something else useful with the
  resulting U and V channels, so that most of the source pixel UV values are
  actually taken into account, instead of just one out of possibly four.
* Don't convert AYUV to ARGB in libdvbsub, and then back from ARGB to AYUV in
dvbsuboverlay for no reason
* Re-factor the whole thing to something more like textoverlay blending
* Related to that, perhaps cache the current spu in a good format for quick
blending on each frame, after which the more often called blending parts
might become more straightforward
The spec has a page_time_out in the page composition segment to ensure
subtitles don't get stuck on screen for much longer than intended when
future page composition segments get lost due to bad reception or other
problems. Honor it on the gst plugin side.
Push incoming subtitle pages in a FIFO queue (pending_subtitles)
and dequeue the head when it's time to show it (when video running
time reaches the subtitle page running time).
Keep the subtitle page that is currently supposed to be blended on top
of the video in a separate object variable (current_subtitle). As a
next step we can then pre-render current_subtitle into a format that is
easier to blend.
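A sketch of the dequeueing step, assuming the queue is a GQueue and each
queued DVBSubtitles entry carries its running time (the type, field and
free-function names are illustrative):

    static void
    advance_subtitles (GstDVBSubOverlay * overlay, GstClockTime video_running_time)
    {
      while (!g_queue_is_empty (overlay->pending_subtitles)) {
        DVBSubtitles *head = g_queue_peek_head (overlay->pending_subtitles);

        if (head->running_time > video_running_time)
          break;                  /* not yet time to show this page */

        /* promote the head to the page currently being blended */
        if (overlay->current_subtitle)
          dvb_subtitles_free (overlay->current_subtitle); /* hypothetical */
        overlay->current_subtitle = g_queue_pop_head (overlay->pending_subtitles);
      }
    }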
Eases holding onto the information in the gst plugin side's queue of
DVBSubtitles, so we won't need to create yet another temporary struct
to keep the pts and page_time_out.
And this information really logically belongs in that top-level struct
anyway.
We want to allow queueing of raw region image data in the gst plugin side,
and keep the data around until we pop the item from the queue. So make
the callback handler responsible for memory cleanup, if one is installed.
Abuse libdvbsub PTS tracking to just store our running time in it, so we get
it back in the callbacks. As GStreamer does its own PTS handling behind our
back (especially for video), we should just sync with video per running time,
not try to do it with PTS, which doesn't seem easily accessible for the video
chain.
We can later relabel the dvb-sub.c pts naming convention if wanted; it's just
passing along guint64 values, which GstClockTime fortunately is too.
The current idea is to collect the regions returned by the callback into
a FIFO buffer, and to pop and pre-render the top one into a separate
quick-to-blend cached format, which is then appropriately blended in the
video chain until the next one on top of the stack reaches the video chain's
running time (or the fallback timer hits).
<tpm> leio, what's the mpegts demux hack about?
<leio> my libdvbsub code can't handle cut packets
<leio> so the hack instructs the demuxer to gather full packets before pushing down, but it applies that to more PES packet types than just dvbsub, but I'm not sure if that's a bad thing
<leio> either way, needs a cleaner solution, either in demuxer, or I need to handle cut packets
<tpm> ok, but really it should be fixed in the overlay, right?
<tpm> or a parser be inserted
<leio> the problem is that I don't know from the first packet beforehand if it is a cut one or no
<leio> not
<leio> err, first buffer
<leio> just when I receive the next one I see if it has a valid timestamp on it or not
<leio> so I can't very well queue it up in the chain either, I might be blocking the very last subtitle for no reason or something
<tpm> but you could just drop/ignore packets until you find one, right?
<leio> find what?
<tpm> a complete packet?
<leio> the problem isn't that they aren't complete
<leio> the problem is that they are cut across multiple GstBuffers by the demuxer without the hack
<tpm> sure, I understand that
<tpm> but you can't easily determine if a GstBuffer contains the start fragment of a packet or not?
<leio> I guess I could parse the packet and see if its length is enough, just like the libdvbsub code eventually does too
<leio> I can, it has a timestamp if it's the first chunk
<leio> I just never know if I need to wait for more, without some parsing
<tpm> ah ok
<leio> while the demuxer could just give me an uncut one in the first place
<leio> like it always does for program streams
<leio> that gather_pes is always set in gstmpegdemux, but not in gstmpegtsdemux
As imgbin_finished() is scheduled from g_idle_add, it might
run a little later than expected; this can lead to the application
setting camerabin to READY before imgbin_finished() runs. In this case,
the processing counter goes to 0 and an assertion happens.
This patch relaxes the imgbin_finished() check on the processing
counter.
This property allows one to start at any point within the field pattern after
a discontinuity (whenever gst_interlace_reset () is called). Thus with the
2:3:3:2 pattern, for example, one can start at offset 2 and achieve 3:2:2:3
or offset 1 and achieve 3:3:2:2.
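In effect the offset is a cyclic rotation of the field pattern; conceptually
something like this (the pattern array and field names are purely
illustrative):

    /* 2:3:3:2 pattern of fields per frame; phase counts frames
     * since the last gst_interlace_reset () */
    static const guint pattern[] = { 2, 3, 3, 2 };
    guint idx = (interlace->phase + interlace->pattern_offset)
        % G_N_ELEMENTS (pattern);
    guint fields_for_this_frame = pattern[idx];

With pattern_offset = 2 this yields 3:2:2:3, with pattern_offset = 1 it
yields 3:3:2:2, as above.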
This patch refactors imagebin element creation and linking into separate
functions, and also adds re-use of imagebin's internally created elements.
This refactoring allows creating the imagebin elements already in NULL state
when the application sets the image mode, so the next state change from NULL
to READY will be faster. This reduces first-capture latency.
Previously the elements were both created and linked during the NULL to READY
state change.
This patch makes outputselector take an extra ref when pushing
the last_buffer, to avoid losing it during the switch function.
This makes resend-latest work properly if the active-pad is changed
during the switch function's buffer pushing (on a pad probe, for example).
https://bugzilla.gnome.org/show_bug.cgi?id=629917
This patch makes output-selector always recheck if there's a
pending pad switch after pushing a buffer, preventing it from
pushing a buffer on the 'wrong' pad.
https://bugzilla.gnome.org/show_bug.cgi?id=629917
In this mode, an initial empty moov (containing only stream metadata) is written,
followed by fragments containing actual data (along with required metadata).
New fragments are started either at a keyframe (if those are sparse) or when
a property-configured duration is exceeded.
Based on patch by Marc-André Lureau <mlureau@flumotion.com>
Fixes #632911.
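A possible usage sketch; the property name and its unit (milliseconds) are
assumptions here:

    /* start a new fragment roughly every two seconds */
    g_object_set (mux, "fragment-duration", 2000, NULL);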
TDT and TOT sections, with PID=0x14, don't extend over several packets,
and the section filter is not needed here and shouldn't be used at all
for these tables because they have a different structure.
For example, TDT tables were not parsed for odd hours because this bit
is the 'current_next_indicator' bit for the other sections, and the table
was discarded.
That is, as such formats allow the subclass to extract a position from a
frame, it is possible to extract the duration (if not otherwise provided)
from the (near) last frame, and a seek can fairly accurately target the
required position.
Fixes #631389.
Arrange for upstream as well as downstream flushing when seeking.
Also determine upstream size as well as seekability. Adjust some comments
to reality and employ debug statement in proper order.
Adds 'idle', a read-only boolean property that tells applications
if there is any capturing/saving/encoding going on in camerabin. If
not, it is safe to set it to NULL and release resources without
losing data.
Add "ready-for-capture" property to indicate if preparing a new
capture is possible.
"ready-for-capture" changes before the 'image-done' signal, so
the application can be notified that it can do a new capture
even before the previous one has finished encoding/saving.
Use information from the GOP header and picture header
(time_code and temporal_reference) to calculate the picture
timestamp, and adapt to upstream timestamps if provided.
https://bugzilla.gnome.org/show_bug.cgi?id=632222
When switching between video/still modes the pre-night-mode fps
should be reset to prevent it from being used in the incorrect mode,
which would cause the videosource to fail to configure itself.
Store width/height/fps for video captures in a variable separate
from the one that stores the currently used value.
This prevents the user preferences from being lost when resetting
the currently used dimensions for night mode, for example.
Resets used caps so that camerabin doesn't try to use them
when restarting, as elements/properties might have changed
and the old caps might be incompatible.
Uses a higher priority for the idle_add function that runs when
the image bin has finished the image capture. This reduces the
delay for the application to be notified about it.
As camerabin only gets notified of the changes from the
video source element, it should query the initial value
once the source is created so it initializes itself
correctly.
Adds a new rotate element to geometrictransform. It still
needs some work. But this is a good starting point.
Based on patch from Bert Douglas <bertd tplogic com>
Thanks to Felipe Contreras for the suggestion. This is partially
based on his patches and makes flacparse more than 3.5 times faster.
Looking for valid frame headers is unlikely to give false positives
because every frame header is at least 9 bytes long and contains a
14-bit sync code and an 8-bit checksum over the first 8 bytes.
Fixes bug #631200.
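The sync check itself is cheap; a sketch of the kind of test meant here
(the CRC-8 over the first 8 header bytes is left out):

    /* FLAC frame headers start with the 14-bit sync code 11111111111110 */
    static gboolean
    looks_like_frame_header (const guint8 * data)
    {
      return data[0] == 0xff && (data[1] & 0xfc) == 0xf8;
    }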
The first newsegment event will be sent by the first call to
gst_base_parse_push_buffer() if necessary; posting the tags
before that is not a good idea. Instead do it from the
GstBaseParse::pre_push_buffer vfunc.
This effect converts all colors except a single one to
grey. The color is selected by an RGB triple, and a
tolerance for the color matching, in hue degrees, can be specified.
This reverts commit b5a3d60363.
Reverting this for now, since no one really seems to remember why this
property exists or what it could possibly be good for. It seems to have
been in the original mp3parse since the beginning of time and was back-
ported from there.
Seekability, like duration etc., is unlikely to change (frequently), and
the default assumption covers most cases, so let the subclass set it when
needed.
At the same time, allow the subclass to indicate if it has seek metadata
(a table) available, and possibly have it provide an average bitrate.
This allows the child class to chain its event handler with
GstBaseParse, so that subclasses don't have to duplicate all the default
event handling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=622276
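With this, a subclass event handler can do its own work and then hand the
event to the base class, roughly like this (the vfunc name and signature
follow the baseparse copy in this tree and may differ slightly):

    static gboolean
    gst_my_parse_event (GstBaseParse * parse, GstEvent * event)
    {
      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_TAG:
          /* subclass-specific handling goes here */
          break;
        default:
          break;
      }

      /* chain up so the default event handling still runs */
      return GST_BASE_PARSE_CLASS (parent_class)->event (parse, event);
    }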
If the elements are in NULL/READY and changing state to
PAUSED/PLAYING while a capture is started, camerabin might
not set the active_bin properly, causing the capture start to fail.
This patch fixes it by checking the current and pending state
of the branches instead of only the current one.
The vertigo plugin for example claims to have 3 properties but
the 3rd property does nothing and has a NULL name.
Fixes bug #630783.
Thanks to Martti Kühne for debugging this.
Initialize interval_ts to first QOS event timestamp, otherwise the
fps statistics are printed always after one rendered frame.
Also, initialize last_frames_* counters, the values are bogus e.g. after
PLAYING-NULL-PLAYING state change.
Remove the weird (failing) code to figure out caps on the srcpad.
Add a caps property to decide what caps to put on the outgoing buffers.
Fix an event leak.
Add a property to avoid redirecting to the rtsp-sdp:// url and instead embed
an rtspsrc element inside sdpdemux as the session manager.
Based on patch by Marco Ballesio.
Fixes #630046
Adds another boolean to help controlling viewfinder blocking,
making it possible for the applications to reset the viewfinder
blocking after capture was started but before the blocking
actually happens.
Unblock the viewfinder when going to READY to avoid
blocking when setting camerabin to playing again and
attempting to capture. Keep the property as is.
Remove notify signal proxy for video-source. Application can use
video-source directly from now on to get notified of property changes.
Add monitoring scene-mode property change to select lowest possible
framerate for video capture when night mode is selected.
Fixes #616923
Work in progress. Colorspace handles most format conversion using
3-stage getline/matrix/putline process using an AYUV or ARGB
intermediate, with most functions handled by Orc. There is also
a table of single-pass conversions, all handled by Orc. The plan
is to add optional stages for various chroma upsampling/downsampling
algorithms, dithering, and float/int16 intermediates, and then have
Orc create multi-stage functions at runtime.
When we find an SDP with an rtsp:// url as the global control attribute, or
when all streams have an rtsp:// control attribute, post a redirect message
with an rtsp-sdp:// url containing the SDP.
Fixes #628214
This makes it able to recombine RGB images, making it possible
to add tags to RGB JPEGs as well.
Uses a simple strategy to check which colorspaces are possible
and avoids adding a JFIF header to ones that aren't YUV/grayscale.
The doc says to use gst_element_send_event on the pipeline, but if
we call it on the element itself it's a no-op. This should make it
handle the event properly before delegating it to basesrc.
Use guint instead of guint16 to represent the size of the encoded image;
guint16 would make some recombined images lose most of their data and
show up as a big black image with a small line of content at the top.
Also adds a minor log message.
The EOI marker is not always in the last 4 bytes. We need to search
the last 5 bytes to find the 0xFFD9 sequence, as jpegenc seems
to round the buffer size up to the next multiple of 4.
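A sketch of that scan, written against the 0.10 buffer API:

    /* look for the EOI marker (0xFFD9) within the last 5 bytes */
    guint8 *data = GST_BUFFER_DATA (buffer);
    gint size = GST_BUFFER_SIZE (buffer);
    gint i;
    gboolean found_eoi = FALSE;

    for (i = size - 2; i >= size - 5 && i >= 0; i--) {
      if (data[i] == 0xff && data[i + 1] == 0xd9) {
        found_eoi = TRUE;
        break;
      }
    }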
The PES header length is calculated before setting the dynamic flags, returning
a wrong value. Small frames that should be sent in a single TS packet are
split into a new packet because of that error. For audio streams where a single
frame can fit in one TS packet this introduces a huge overhead.
For a 100 B packet, we prepare a TS packet with a payload of (100+9) B. Then we
write the TS header using this value in tsmux_write_ts_header, and call
tsmux_stream_get_data(). The dynamic flags were not set yet, and now
tsmux_stream_pes_header_length() returns 14 B instead of 9 B. The payload of the
TS packet is 114 B, 5 B more than what was calculated. 109 B are sent in a first
packet and the remaining 5 B are sent in another one.
Fixes bug #628548.
The mutex we lock belongs to the 'mux' object, but we unlock the
pad's, which means that if rtpmux gets a flush, the object lock
will stay locked forever, causing it to freeze the next time it
tries to take it.
Fixes bug #627991
Rework the bulge mapping function to give more predictable results.
Now the bulge is done by dividing by a scale factor that smoothsteps from
"zoom" at the center to 1.0 at "radius".
https://bugzilla.gnome.org/show_bug.cgi?id=625908
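Roughly, the mapping then becomes something like this (a sketch of the idea
only; dx, dy are output-pixel coordinates relative to the center):

    gdouble d = sqrt (dx * dx + dy * dy);
    gdouble t = CLAMP (d / radius, 0.0, 1.0);
    gdouble s = t * t * (3.0 - 2.0 * t);      /* smoothstep: 0 at center, 1 at radius */
    gdouble scale = zoom + s * (1.0 - zoom);  /* "zoom" at center -> 1.0 at radius */

    *in_x = cx + dx / scale;
    *in_y = cy + dy / scale;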
This could happen because, for example, /usr/lib is linked
to /usr/lib64 and both are loaded. The frei0r specification
says that the plugin init function must only be called once,
and for some plugins weird things (including crashes) happen.
Fixes bug #623710.
Make the "radius" property of CircleGeometricTransform relative.
This is more coherent with the way [x,y]-center properties are handled
and allow to set a radius without knowing the video size.
Radius is defined with respect to the circle circumscribed about the
video rectangle so that a point in the center has radius 0.0 and one in
a vertex has radius 1.0.
Note that this is not a regression from the previous absolute way of
defining the radius as a user who knows the video size can easily
calculate the relative radius and set that.
https://bugzilla.gnome.org/show_bug.cgi?id=625959
Currently implemented by switching from YUV to RGB, looking up RGB from the
table in the usual way, and converting back to YUV. With luma lookup presets
(sepia, heat, xray) a color space conversion is saved by directly looking
up RGB for a given Y and converting to YUV.
This latter step can probably be made even faster by precalculating a
luma-to-YUV table in an outer loop.
https://bugzilla.gnome.org/show_bug.cgi?id=625817
Implements a color lookup table filter with 4 presets:
- heat: fake heat camera effect
- sepia: sepia toning
- xray: invert + shade to blue
- xpro: cross process
https://bugzilla.gnome.org/show_bug.cgi?id=625817
Ports gleffects "fisheye" filter to geometrictransform.
Fake fisheye lens filter. Somewhat empirical implementation, because I
didn't find any good algorithm that does it with nice results.
https://bugzilla.gnome.org/show_bug.cgi?id=625722
Ports gleffects "mirror" filter to geometrictransform.
Simple yet effective mirror effect: splits the image into halves and
reflects the first into the second.
https://bugzilla.gnome.org/show_bug.cgi?id=625722
Ports gleffects "square" filter to geometrictransform.
Maps a region around the center into a zoomed square and smoothly gets
back to normal zoom. With faces it makes a funny "cube-face" effect.
https://bugzilla.gnome.org/show_bug.cgi?id=625722
Ports gleffects "stretch" filter to geometrictransform.
Shrinks the image around the center and gradually returns to normal zoom,
creating funny caricatures.
https://bugzilla.gnome.org/show_bug.cgi?id=625722
Adds the new 'gaudieffects' plugin, originally found
here: http://github.com/luisbg/gaudi_effects
Contains the following video effect elements: burn, chromium, dilate,
dodge, exclusion and solarize.
Thanks to Jan Schmidt for the review and refactoring
And don't fail if a plugin was already registered. Frei0r allows
plugins in directories with higher importance to override plugins
from directories with lower importance.
This writes out the optional 'btrt' atom (MPEG4BitrateBox) for H.264
media if either or both of average and maximum bitrate are available for
the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=623678
This collects the 'bitrate' and 'maximum-bitrate' tags on the
corresponding pad and uses these to populate these fields in the ESDS
where applicable.
https://bugzilla.gnome.org/show_bug.cgi?id=623678
Factor out most of the buffer handling and implement a chain_list
function. Also, the DTMF muxer has been modified to just have a
function to accept or reject a buffer instead of having to subclass
both chain and chain_list.
This is a problem if you tune to a channel which uses pid X and later tune to
another channel where X is used for another table (e.g. PMT).
The code that does that was actually already there but never used because the
pat structure was freed before. The commit that introduced those lines intended
to fix a memory leak, but we clean things up elsewhere.
Fixes #622725.
Write uint tags that have complements (e.g. track-number/
track-count) even when we only have one of them available
and set the other one to 0.
Fixes #622484
Adds a signal for applications to receive the fps measurements made,
instead of only printing them to the frame/stdout.
This signal is only emitted if the signal-fps-measurements property
is set to TRUE.
Previously we would end up with the collectpaddata structure already freed.
This would result in a bogus iteration of mux->sinkpads (all the
GstQTPad being freed) and it wouldn't be removed from that list.
Finally, due to it not being removed from that list, we would end up
calling a bogus gst_qt_mux_pad_reset on those structures => SEGFAULT
Having GST_DEBUG_CATEGORY and GST_DEBUG_CATEGORY_EXTERN together
might lead to 'undefined symbol' problems. This commit moves
the _EXTERN to a separate new file.
Add a new config-interval property to instruct the parser to insert
config (VOSH, VOS, etc) at periodic intervals in the stream
(when a GOP or VOP-I is encountered).
Based on patch by <marc.leeman at gmail.com>
Fixes #621205.
If the current incoming packet didn't carry a timestamp, but a
previous packet had one we didn't yet use, then apply that timestamp
to the next picture.
Fixes: #618336
Add a new config-interval property to insert SPS and PPS at periodic intervals
in the stream (when an IDR is encountered).
Based on patch by <marc.leeman at gmail.com>
Fixes #620978.
Use explicit format macros from gstvideo to avoid exposing
unsupported formats on template pads. Using the macros
also gives us complete caps (width/height/framerate).
And add support for AYUV.
Fixes #620717
We really don't want this in gst-plugins-bad because of
legal complexities around RTMP and possible problems
for distributions.
Add README that explains how to build librtmp to be suitable
for linking to the GStreamer plugin.
Adds a prepare function to make subclasses precalculate values
that will be used throughout the mapping functions.
Also adds a missing cleanup to fix a memleak
Renames some variables and adds minimal documentation to the
mapping function for a little clarity.
Also uses gstvideo functions for the row and pixel strides
instead of hardcoded values.
Adds a new plugin that has elements that perform geometric
transformations to images. By geometric transformations I mean
that the operations are functions that given the output pixel
position, get the pixel position in the input image. This pixel
is then copied from input to output.
The gstgeometrictransform base class makes it easy to write
such elements. It boils down to writing the mapping function
and exposing properties.
Already added the first of the elements, 'pinch'. It's a common
effect in image editors, like gimp (distort -> pinch)
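To illustrate the idea, the mapping function of a do-nothing element would
look roughly like this (the exact vfunc signature in the base class may
differ slightly):

    /* Given an output pixel (x, y), return the input coordinates to
     * sample from.  Returning the same coordinates copies the image
     * unchanged; elements like 'pinch' warp them instead. */
    static gboolean
    identity_map (GstGeometricTransform * gt, gint x, gint y,
        gdouble * in_x, gdouble * in_y)
    {
      *in_x = x;
      *in_y = y;
      return TRUE;
    }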
If sp_open_shm errors out trying to open a shm area, it would crash
when trying to free the area. The RETURN_ERROR macro calls
sp_shm_area_dec with self == NULL. sp_shm_area_dec calls
sp_shm_close with self == NULL, which then tries to access a
member of it without checking. This patch checks that
self != NULL before accessing that member.
These two elements (shmsink and shmsrc) communicate buffers using POSIX
shared memory. They also communicate the caps. The source currently acts as
a live source and ignores the timestamps coming from the sink. It also does
not transfer the tags.
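A rough usage sketch, assuming both elements expose a socket-path property
for the shared control socket (the path and test pipelines are just
examples):

    GError *err = NULL;

    /* writer process */
    GstElement *writer = gst_parse_launch (
        "videotestsrc ! shmsink socket-path=/tmp/gst-shm", &err);

    /* reader process */
    GstElement *reader = gst_parse_launch (
        "shmsrc socket-path=/tmp/gst-shm ! ffmpegcolorspace ! autovideosink",
        &err);

The two pipelines would typically run in separate processes, which is the
point of using shared memory here.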
gst_mpeg_descriptor_find() expects the description field to be non-NULL.
This fixes a couple of calls where the value being passed is not
verified to be non-NULL first.
https://bugzilla.gnome.org/show_bug.cgi?id=620456
mpeg_packetizer_get_block() in some circumstances (here: if
downstream was unlinked) returns a block but does not set the
buffer, causing mpegvideoparse_drain_avail() to perform an invalid
memory access.
Fixes #619502.
Move include directives for gst-libs into GST_PLUGINS_BAD_CFLAGS,
and fix all the Makefiles that use it. This is so that all the
include directories are added in the proper order: first the
directories in srcdir/builddir, then gst-plugins-base dirs, then
gstreamer dirs. If the order is wrong, installed headers may be
used instead of local headers and/or uninstalled headers from -base.
Some tables in MPEG-TS do not have a CRC in the spec, so mpegtsparse
does not calculate a CRC for sections with table_id 0x70 - 0x72
(see EN 300 468). Parse the Time and Date table and output a bus message.
Specifically, when scanning for entropy data segment length and needing
more data, do not rescan from start next time around, but resume at
last position.
See also #583047.
That is, header configuration may start at a Video Object (startcode),
rather than at a Visual Object Sequence, which is catered for and parsed,
so let's also take it as codec_data if nothing more is available.
Fixes (remainder of) #572551.
gcc 4.5 warns when comparing some integer with an enum value, in
the case of GstFlowReturn this is valid though. We should later
add GST_FLOW_CUSTOM_OK1, GST_FLOW_CUSTOM_OK2, etc. after new core
is released.
Adds video-capture-width, video-capture-height and
video-capture-framerate properties to allow applications to
get/set those values. Getting was not possible before this patch,
and setting was done through the set-video-resolution-fps
action, which sets the properties and promptly resets the
video source to use them.
Fixes #614958
Adds image-capture-width and image-capture-height properties
to camerabin, allowing the user to get/set them. Getting was
not possible before, and setting was done through the
set-image-resolution action, which should now just set
the properties.
Fixes #614958
Adds block-after-capture property to block running viewfinder after capturing.
This property is useful if the application wants to display a capture preview
and avoid running the viewfinder in the background.
Based on a patch by Tommi Myöhänen <ext-tommi.1.myohanen@nokia.com>
Adds a new property called viewfinder-filter to camerabin.
This property is used to add a filter to process the video
flow right before the viewfinder sink.
Also updates test to check property exists.
Add video-source-filter property that can be used to inject an
application-specific GStreamer element into the camerabin pipeline. The
video-source-filter element will process all frames coming from the video
source.
One could add image analyzers to collect information about the stream,
or add image enhancers to improve capture quality, for example.
If the source-resize flag is disabled, then set the resolution on the image
capture caps according to the capture resolution the video source element
produces. Otherwise we write the wrong resolution to the image metadata.
We wait to parse a minimum number of frames (10, arbitrarily) before
emitting bitrate tags, so that our early estimates are not wildly
inaccurate for streams that start with silence. If the stream ends
before that, we just emit the tags anyway.
While it _would_ be nicer to specify the threshold for starting to push
the tags in terms of duration, this would introduce more complexity than
it merits.
https://bugzilla.gnome.org/show_bug.cgi?id=614991
The current code just uses the table id, subtable extension and version number
to check if the section has been seen before. However, this comparison is not
sufficient, causing actually new tables to be dismissed.
Fixes bug #614479.
This is optional because it's a quite expensive operation and it's very
unlikely that a non-frame is detected as frame after the header CRC check
and checking all bits for valid values. The overall frame checksums are
mainly useful to detect inconsistencies in the encoded payload.
When called from the GST_FLAC_PARSE_STATE_HEADERS case,
gst_flac_parse_handle_headers() does a gst_buffer_set_caps() on a buffer
with refcount > 1. This change handles that case by making the buffer
metadata writable.
https://bugzilla.gnome.org/show_bug.cgi?id=614037
This patch adds the get_frame_overhead() vfunc so that baseparse can
accurately calculate the min/avg/max bitrates for aacparse.
Note: The bitrate was being incorrectly calculated for ADTS streams
(it's not in the header as the code suggests).
This makes baseparse keep a running average of the stream bitrate, as
well as the minimum and maximum bitrates. Subclasses can override a
vfunc to make sure that per-frame overhead from the container is not
accounted for in the bitrate calculation.
We take care not to override the bitrate, minimum-bitrate, and
maximum-bitrate tags if they have been posted upstream. We also
rate-limit the emission of bitrate so that it is only triggered by a
change of >10 kbps.
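The rate limiting itself can be as simple as comparing against the last value
posted, along these lines (the field names are illustrative):

    /* only post updated bitrate tags when the change exceeds 10 kbps */
    if (parse->priv->posted_avg_bitrate == 0 ||
        ABS ((gint) (avg_bitrate - parse->priv->posted_avg_bitrate)) > 10000) {
      /* ... merge/post the updated bitrate tag here ... */
      parse->priv->posted_avg_bitrate = avg_bitrate;
    }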