This allows metas to be serialized so they can be transmitted or stored.
This is intended to be used, for example, by gdppay or unixfdsink.
Implemented on GstCustomMeta, GstVideoMeta, GstReferenceTimestampMeta,
and GstAudioMeta.
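A minimal sketch of how this could look from application code. The entry points used here, gst_meta_serialize_simple() and gst_meta_deserialize(), are assumed to be the ones added alongside this work; treat the exact names and signatures as assumptions and check the API documentation.
``` c
#include <gst/gst.h>

/* Hedged sketch: copy all serializable metas from one buffer to another by
 * round-tripping them through a byte array. */
static GstBuffer *
roundtrip_metas (GstBuffer * src)
{
  GstBuffer *dst = gst_buffer_new ();
  gpointer state = NULL;
  GstMeta *meta;

  while ((meta = gst_buffer_iterate_meta (src, &state)) != NULL) {
    GByteArray *bytes = g_byte_array_new ();

    /* Metas without a serialize function are expected to return FALSE. */
    if (gst_meta_serialize_simple (meta, bytes)) {
      guint32 consumed = 0;

      /* Recreate the meta on the destination buffer from the raw bytes. */
      gst_meta_deserialize (dst, bytes->data, bytes->len, &consumed);
    }
    g_byte_array_unref (bytes);
  }

  return dst;
}
```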
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5355>
Otherwise, if there is a huge gap, it would only be considered a
discontinuity after another discont-time worth of buffers had passed.
This way it immediately becomes a discontinuity as soon as the gap between
the expected and received time grows bigger than the discont-time.
The last part of the test was actually checking for this case but
expected the previous behaviour. Most other tests also had to be
adjusted because the discont now happens at slightly different times
than before.
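An illustrative sketch of the new check (not the actual element code):
``` c
#include <gst/gst.h>

/* Flag a discontinuity as soon as the distance between the expected and
 * the received time exceeds discont-time, instead of only after another
 * discont-time worth of buffers has gone by. */
static gboolean
is_discont (GstClockTime expected, GstClockTime received,
    GstClockTime discont_time)
{
  GstClockTime gap = (received > expected) ?
      received - expected : expected - received;

  return gap > discont_time;
}
```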
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5759>
Add gst_audio_ring_buffer_set_errored(), which marks the
ringbuffer as errored only if it is currently started or paused,
so gst_audio_ring_buffer_stop() can be sure that the error
state means the ringbuffer was started and needs to be stopped.
Fixes a crash with osxaudiosrc if the source element posts
an error, because the ringbuffer would not get stopped and CoreAudio
would continue trying to do callbacks.
Also, anywhere that modifies the ringbuffer state, make sure to
use atomic operations to guarantee their visibility.
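A hedged sketch of the intended usage from a source element; the exact signature of gst_audio_ring_buffer_set_errored() is assumed here, and the helper below is only illustrative.
``` c
#include <gst/audio/audio.h>

/* When posting an ERROR, mark the ring buffer as errored first so that a
 * later gst_audio_ring_buffer_stop() knows the error state implies the
 * ring buffer was actually started. */
static void
post_stream_error (GstElement * element, GstAudioRingBuffer * ringbuffer,
    const gchar * message)
{
  /* Only flips to the error state if the ring buffer is currently started
   * or paused; a stopped ring buffer stays stopped. */
  gst_audio_ring_buffer_set_errored (ringbuffer);

  GST_ELEMENT_ERROR (element, RESOURCE, FAILED, ("%s", message), (NULL));
}
```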
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5205>
Don't call wait_event() at all for gap events, as basesink will
end up waiting for the time that the gap event would be rendered
out at the audio device. There's no need to render it at all;
just treat it as a handy point to resync the audio if needed,
let the ringbuffer render silence, and place the next buffer
into the ringbuffer where it belongs.
The only thing we really need to do is make sure the ringbuffer
and clock are running, and wait for preroll.
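A rough sketch of that approach in a GstBaseSink subclass. ensure_ringbuffer_started() and parent_class are placeholders for the usual element boilerplate, and the real GstAudioBaseSink code is more involved.
``` c
#include <gst/base/base.h>

/* Hypothetical helper provided by the element (not a real GStreamer API). */
static void ensure_ringbuffer_started (GstBaseSink * bsink);
/* Set up by the usual G_DEFINE_TYPE boilerplate in a real element. */
static GstBaseSinkClass *parent_class;

static GstFlowReturn
audio_sink_wait_event (GstBaseSink * bsink, GstEvent * event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_GAP) {
    /* Wait for preroll and make sure the ring buffer and clock are running;
     * the ring buffer renders silence by itself, and the next buffer will be
     * placed at its proper position anyway. */
    GstFlowReturn ret = gst_base_sink_wait_preroll (bsink);

    if (ret != GST_FLOW_OK)
      return ret;

    ensure_ringbuffer_started (bsink);
    return GST_FLOW_OK;
  }

  /* All other events keep the normal blocking behaviour. */
  return GST_BASE_SINK_CLASS (parent_class)->wait_event (bsink, event);
}
```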
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2749
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5178>
This should make pipelines such as this one work as expected:
... ! opusenc ! capsfilter caps='audio/x-opus,
channels=1; audio/x-opus, channels=2' ! ...
The expectation is that the encoder will propose the first structure
before the second one to the source.
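For illustration, the same caps built and inspected from code; the structure order is what the encoder is expected to honour when proposing caps upstream:
``` c
#include <gst/gst.h>

/* The filter caps carry two structures and their order is meaningful:
 * "channels=1" should be tried before "channels=2". */
static void
show_structure_order (void)
{
  GstCaps *caps = gst_caps_from_string (
      "audio/x-opus, channels=1; audio/x-opus, channels=2");
  guint i;

  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstStructure *s = gst_caps_get_structure (caps, i);
    gint channels = 0;

    gst_structure_get_int (s, "channels", &channels);
    g_print ("structure %u: channels=%d\n", i, channels);
  }
  gst_caps_unref (caps);
}
```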
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3673>
gst-launch-1.0 audiotestsrc ! udpsink host=127.0.0.1
gst-launch-1.0 udpsrc ! audioconvert ! autoaudiosink
would crash with a floating point exception when clipping the input
buffer owing to a division by zero because no caps event was received.
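A hedged sketch of the kind of guard this implies around gst_audio_buffer_clip(); the real fix lives in the element code, and the helper below is only illustrative.
``` c
#include <gst/audio/audio.h>

/* Don't clip with a zero rate/bpf, which is what the GstAudioInfo holds
 * before any caps event has been received. */
static GstBuffer *
clip_buffer_safe (GstBuffer * buffer, const GstSegment * segment,
    const GstAudioInfo * info)
{
  gint rate = GST_AUDIO_INFO_RATE (info);
  gint bpf = GST_AUDIO_INFO_BPF (info);

  if (rate == 0 || bpf == 0)
    return buffer;              /* no caps yet: clipping would divide by zero */

  return gst_audio_buffer_clip (buffer, segment, rate, bpf);
}
```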
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3469>
Setting force_live lets the aggregator behave as if it had at least one of
its sinks connected to a live source, which should let us get rid of the
fake live test source hack that is probably present in dozens of
applications by now.
+ Expose API for subclasses to set and get force_live
+ Expose force-live properties in GstVideoAggregator and GstAudioAggregator
+ Add a simple test
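For example, an application that previously added a fake live source just to keep the mixer running could now do something like this ("compositor" stands in for any GstVideoAggregator/GstAudioAggregator subclass that exposes the new property):
``` c
#include <gst/gst.h>

/* Force the aggregator to behave as live instead of wiring up a dummy
 * live source. */
static GstElement *
make_live_compositor (void)
{
  GstElement *mixer = gst_element_factory_make ("compositor", NULL);

  g_object_set (mixer, "force-live", TRUE, NULL);
  return mixer;
}
```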
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3008>
The purpose of a deep buffer copy is to be able to release the source
buffer and all its dependencies. Attaching the parent buffer meta to
the newly created deep copy needlessly keeps holding a reference to the
parent buffer.
The issue this solves is that you end up needing to allocate more
buffers, because free buffers are being held for no reason. In the good
case this just uses more memory; in the bad case it stalls your
pipeline (since codecs often need a minimum number of buffers to
actually work).
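A small illustration of the expectation after this change (hedged: the exact behaviour depends on how the copy is made), using only public buffer API:
``` c
#include <gst/gst.h>

/* After a deep copy the new buffer owns its own memory, so keeping a
 * GstParentBufferMeta (and with it a ref on the source buffer) would only
 * delay the source buffer's return to its pool. */
static GstBuffer *
deep_copy_without_parent_ref (GstBuffer * src)
{
  GstBuffer *copy = gst_buffer_copy_deep (src);

  /* With this change the deep copy is expected to no longer carry the
   * parent buffer meta, so this lookup should return NULL. */
  g_assert (gst_buffer_get_parent_buffer_meta (copy) == NULL);

  return copy;
}
```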
Fixes #283
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2928>
GLib guarantees libintl is always present, using proxy-libintl as a
last resort, so there is no need to mock the gettext API any more.
This fixes the static build on Windows, because G_INTL_STATIC_COMPILATION must
be defined before including libintl.h, and GLib does that for us as part
of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
While this is slightly more expensive (~48% slower per random number), it
does not cause any measurable difference when running through a complete
audio conversion pipeline.
On the other hand, its random numbers are of much higher quality, and on
spectrograms of a 32 bit to 24 bit conversion the difference is clearly
visible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1729>
If a serialized event arrives after a buffer, it should not be sent before
it. This fixes the pending event handling so that only early pending events,
the ones that arrived or were generated while the adapter was empty, get sent
before pushing the buffer. All other events are now pushed after it.
This issue led the latency tracer to think our audio encoder did not have any
latency. This was tested with opusenc in a live pipeline.
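An illustrative sketch of the rule; the names below are not the actual GstAudioEncoder internals.
``` c
#include <gst/gst.h>

typedef struct
{
  GstEvent *event;
  gboolean adapter_was_empty;   /* recorded when the event was queued */
} PendingEvent;

/* Push either the "early" events (queued while the adapter was empty) or
 * the remaining ones, depending on whether we are before or after the
 * buffer push. */
static void
flush_pending_events (GQueue * pending, GstPad * srcpad, gboolean before_buffer)
{
  guint i, len = g_queue_get_length (pending);

  for (i = 0; i < len; i++) {
    PendingEvent *pe = g_queue_pop_head (pending);

    if (pe->adapter_was_empty == before_buffer) {
      gst_pad_push_event (srcpad, pe->event);
      g_free (pe);
    } else {
      g_queue_push_tail (pending, pe);  /* keep for the other pass */
    }
  }
}
```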
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1266>
Sometimes we can't output anything because we don't have enough
incoming frames. In that case, the resampler was trying to call
do_quantize() and do_resample() in a loop forever because there would
never be samples to output (so chain->samples would always be NULL).
Fix this by not calling chain->make_func() in a loop -- this seems
completely unnecessary, since calling it over and over won't change
anything if the make_func() can't output samples.
Also add some checks for the input and/or output being NULL when
doing conversion or quantization. This will happen when we have
nothing to output.
We can't bail early, because we need resampler->samples_avail to be
updated in gst_audio_resampler_resample(), so we must call that and
no-op everything along the way.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1461>
Sometimes the resampler has enough space to store all the incoming
samples without outputting anything. When this happens,
gst_audio_resampler_get_out_frames() returns 0.
In that case, the resampler should consume samples and just return.
Otherwise, we get a segfault when gst_audio_resampler_resample() tries
to resample into a NULL 'out' pointer.
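A caller-side sketch of the case this fixes, using the public resampler API; the element code is the real consumer, so treat this as illustrative only.
``` c
#include <gst/audio/audio.h>

/* When the resampler can absorb all input without producing output,
 * gst_audio_resampler_get_out_frames() returns 0.  The input still has to
 * be fed in so it gets consumed; with this fix that no longer crashes on a
 * NULL output. */
static void
resample_chunk (GstAudioResampler * resampler, gpointer in[], gsize in_frames)
{
  gsize out_frames = gst_audio_resampler_get_out_frames (resampler, in_frames);

  if (out_frames == 0) {
    /* Nothing will come out yet; the resampler just buffers the input. */
    gst_audio_resampler_resample (resampler, in, in_frames, NULL, 0);
    return;
  }

  /* ... allocate out_frames worth of output and resample into it ... */
}
```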
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1343>
Introduces a `libraries` variable that contains all libraries in a
list with the following format:
``` meson
libraries = [
  [pkg_name, {
    'lib': library_object,
    'gir': [ {full gir definition in a dict} ],
  }],
  ....
]
```
It therefore refactors the way we build the girs so that we can reuse the
same information to build them against 'gstreamer-full' in gst-build
when linking statically.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1093>
Actually extract the .o objects from the convenience libraries and put
them into the main one. Without this, they will just be referenced by
the .pc file, but it will be unusable because they are not installed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1122>