Currently, videocrop only negotiates raw caps (system memory), because
that is the type of memory it can modify. Nonetheless, the element can
also handle non-raw caps when all it needs to do is add the crop meta,
in other words, when downstream buffer pools expose the crop meta API.
This patch enables non-raw caps negotiation. If downstream doesn't
expose the crop meta API and the negotiated caps carry non-system
memory features, the negotiation fails.
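A minimal sketch of the kind of check this implies, assuming the decision is taken from the downstream ALLOCATION query (the helper name is illustrative, not the actual videocrop code):
#include <gst/gst.h>
#include <gst/video/video.h>

/* Non-raw caps can only be accepted when downstream advertises support for
 * GstVideoCropMeta in its ALLOCATION query, since the buffer memory itself
 * cannot be rewritten. */
static gboolean
downstream_supports_crop_meta (GstQuery * allocation_query)
{
  return gst_query_find_allocation_meta (allocation_query,
      GST_VIDEO_CROP_META_API_TYPE, NULL);
}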
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/915>
This commit negotiates the F32 audio format if MODE_FLOAT is used in the
WavPack file. WavPack float mode is always 32-bit IEEE float.
The following pipeline plays distorted audio if the source file is encoded in float mode:
gst-launch-1.0 filesrc ... ! wavpackparse ! wavpackdec ! pulsesink
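A sketch of the format decision this change implies, assuming the decoder inspects the WavPack mode flags (the helper name and the integer fallback are illustrative, not the exact wavpackdec code):
#include <gst/audio/audio.h>
#include <wavpack/wavpack.h>

/* Float-mode WavPack data is always 32-bit IEEE float, so negotiate F32;
 * integer modes keep their depth-based mapping (simplified to S32 here). */
static GstAudioFormat
pick_output_format (WavpackContext * wpc)
{
  if (WavpackGetMode (wpc) & MODE_FLOAT)
    return GST_AUDIO_FORMAT_F32;
  return GST_AUDIO_FORMAT_S32;
}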
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/894>
A classic case of not enough locking.
One interesting thing here is the interaction between the rotation
value and caps negotiation, i.e. the width/height of the caps can be
swapped depending on the video-direction property. We can't hold a lock
across the entirety of the caps negotiation for obvious reasons, so we
need to do something else. This patch takes the approach of latching a
single rotation value and using it throughout the negotiation and the
subsequent output frame.
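A rough sketch of the latching idea (the struct and names are illustrative, not the element's actual code):
#include <glib.h>

typedef struct
{
  GMutex lock;
  gint method;          /* written from the video-direction property */
  gint active_method;   /* snapshot used for negotiation and the next frame */
} FlipState;

/* Take the rotation once, under the lock, so caps negotiation and the
 * following output frame agree on a single value even if the property
 * changes concurrently. */
static void
flip_state_latch_method (FlipState * state)
{
  g_mutex_lock (&state->lock);
  state->active_method = state->method;
  g_mutex_unlock (&state->lock);
}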
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/792
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/836>
This action signal delegates to the clear-ssrc signal of the rtpssrcdemux
element associated with the session. This allows rtpbin users to clear the
pads and elements for a specific SSRC that is known to no longer be in use,
which happens when a pad is reused in rtpsrc or ristsrc.
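Usage sketch (the session id and SSRC values would come from the application):
#include <gst/gst.h>

/* Ask rtpbin to drop the rtpssrcdemux pads/elements for an SSRC that is
 * known to be dead in the given session. */
static void
clear_dead_ssrc (GstElement * rtpbin, guint session_id, guint ssrc)
{
  g_signal_emit_by_name (rtpbin, "clear-ssrc", session_id, ssrc);
}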
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/736>
Add a property to set the initial value for the picture-id. RFC 7741 says
that the picture-id MAY be initialized to a random value, so it's also
valid to simply set it to a fixed initial value. A fixed value is very
useful for testing.
Default behavior is not changed.
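Usage sketch; the property name used below ("picture-id-offset") is an assumption about how the new property is exposed on the VP8 RTP payloader:
#include <gst/gst.h>

/* Pin the initial picture-id to a fixed value for a reproducible test run.
 * "picture-id-offset" is assumed here; a negative value would keep the
 * default random initialization. */
static void
use_fixed_picture_id (GstElement * rtpvp8pay)
{
  g_object_set (rtpvp8pay, "picture-id-offset", 0, NULL);
}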
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/728>
- introduce two new properties:
* temporal-scalability-layer-flags:
Provide fine-grained control of layer encoding to the
outside world. The flags sequence should be a multiple of
the periodicity and is indexed by a running count of encoded
frames modulo the sequence length.
* temporal-scalability-layer-sync-flags:
Specify the pattern of inter-layer synchronisation (i.e.
which of the frames generated by the layer encoding
specification represent an inter-layer synchronisation).
There must be one entry per entry in
temporal-scalability-layer-flags.
- apply temporal scalability settings and expose as buffer
metadata.
This allows the codec to allocate a given frame to the correct
internal bitrate allocator. Additionally, all the
non-bitstream metadata needed to payload a temporally scaled
stream is now attached to each output buffer as a
GstVideoVP8Meta.
- add unit test for temporally scaled encoding.
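Usage sketch for filling the new sync-flags array, assuming its entries are booleans (one per temporal-scalability-layer-flags entry); the layer-flags array itself is built the same way but with the encoder's layer flag values, which are omitted here:
#include <gst/gst.h>

static void
configure_layer_sync_flags (GstElement * vp8enc)
{
  GValue array = G_VALUE_INIT;
  GValue flag = G_VALUE_INIT;

  g_value_init (&array, GST_TYPE_ARRAY);
  g_value_init (&flag, G_TYPE_BOOLEAN);

  /* first frame of a two-frame pattern is an inter-layer sync point */
  g_value_set_boolean (&flag, TRUE);
  gst_value_array_append_value (&array, &flag);
  /* the second frame is not */
  g_value_set_boolean (&flag, FALSE);
  gst_value_array_append_value (&array, &flag);

  g_object_set_property (G_OBJECT (vp8enc),
      "temporal-scalability-layer-sync-flags", &array);

  g_value_unset (&flag);
  g_value_unset (&array);
}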
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/728>
Used by some proprietary software for their fragmented files.
Adds some support for multi-stream fragmented files.
The flow is as follows:
1. The first 'fragment' is written as a self-contained fragmented
mdat+moov complete with an edit list and durations, tags, etc.
2. Subsequent fragments are written with an mdat+moof and each stream is
interleaved as data arrives (currently ignoring the interleave-*
properties). Data offsets in both the traf and the trun ensure that
data is read from the correct place when demuxing. Data/chunk offsets
are also kept for writing out the final moov.
3. On finalisation, the initial moov is invalidated to a hoov and the
size of the first mdat is extended to cover the entire file contents.
Then a moov is written as it regularly would be in moov-at-end mode
(the default).
This results in a file that is playable throughout while leaving a
finalised file on completion for players that do not understand
fragmented mp4.
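Usage sketch; the property and enum nick below (fragment-mode=first-moov-then-finalise) are assumptions about how this mode is selected on qtmux:
#include <gst/gst.h>

static void
enable_robust_fragmenting (GstElement * qtmux)
{
  /* assumed enum nick for the behaviour described above */
  gst_util_set_object_arg (G_OBJECT (qtmux),
      "fragment-mode", "first-moov-then-finalise");
  /* fragment length in milliseconds */
  g_object_set (qtmux, "fragment-duration", 2000, NULL);
}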
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/643>
With recent libvpx versions, multithreading can be enabled on
a per-tile basis, instead of on a per-tile-column basis.
In combination with the new tile-rows property, this allows the
encoder to make much better use of the available CPU power.
Bump the minimum libvpx version to 1.7.0.
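Usage sketch (tile-columns and threads are pre-existing vpxenc properties; libvpx interprets the tile counts as log2 values):
#include <gst/gst.h>

/* Split the picture into a 2x2 tile grid so the encoder can spread the
 * work across more threads. */
static void
configure_vp9_tiles (GstElement * vp9enc)
{
  g_object_set (vp9enc,
      "threads", 8,
      "tile-columns", 1,   /* log2: 2 tile columns */
      "tile-rows", 1,      /* log2: 2 tile rows */
      NULL);
}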
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/707>
Previously imagefreeze would always operate as a non-live element and
output frames as fast as possible according to the configured segment
(via SEEK events) and the negotiated framerate from start to stop or the
other way around.
With the new live mode (enabled via the is-live property) it would only
output frames in PLAYING. Frames would be output according to the
negotiated framerate unless it would be too late, in which case it would
jump ahead and skip over the required number of frames.
This makes it possible to actually use imagefreeze in live pipelines
without having to manually ensure somehow that it would start outputting
at the current running time, and without risking falling behind with no
chance of recovery.
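Usage sketch:
#include <gst/gst.h>

/* Let imagefreeze behave as a live source: only produce frames in PLAYING
 * and skip ahead instead of trying to catch up from the segment start. */
static void
make_imagefreeze_live (GstElement * imagefreeze)
{
  g_object_set (imagefreeze, "is-live", TRUE, NULL);
}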
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/653>