Instead of requiring interlaced video, simply skip CC detection
when the input is progressive.
This allows placing line21decoder unconditionally in pipelines, without
having to worry about whether the input stream will be interlaced, or,
even worse, having to interlace it just in case!
+ update doc cache
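As an illustration (the input file is a placeholder), a pipeline like the
following can now be used without knowing in advance whether the decoded
video is interlaced or progressive:
gst-launch-1.0 filesrc location=sample.mov ! decodebin ! videoconvert ! line21decoder ! autovideosink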
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1885>
Useful when running a GStreamer pipeline or application as a service
in Google Cloud, to avoid storing the inputs and outputs in the running
container or service. For example, when analyzing a video from a Google
Cloud Storage bucket and extracting images, or converting the video,
and then uploading the results into another Google Cloud Storage bucket.
- gssrc allows reading from a file located in Google Cloud
Storage and supports seeking.
- gssink allows writing to a file located in Google Cloud
Storage. It has two modes, one similar to multifilesink and
the other similar to filesink.
Example:
gst-launch-1.0 gssrc location=gs://mybucket/videos/sample.mp4 ! decodebin ! glimagesink
gst-launch-1.0 playbin uri=gs://mybucket/videos/sample.mp4
gst-launch-1.0 videotestsrc num-buffers=5 ! pngenc ! gssink object-name="img/img%05d.png" bucket-name="mybucket" next-file=buffer
gst-launch-1.0 filesrc location=sample.mp4 ! gssink object-name="videos/video.mp4" bucket-name="mybucket" next-file=none
When running locally, simply set GOOGLE_APPLICATION_CREDENTIALS. When
running in Google Cloud Run or Google Compute Engine, set the
"service-account-email" property on each element instead.
Closes #1264
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1369>
Prior to this, cccombiner's behaviour was essentially that of
a funnel: it strictly looked at input timestamps to associate
video and caption buffers.
This patch instead exposes a "schedule" property, with a default
of TRUE, to control whether caption buffers should be smoothly
scheduled, in order to have exactly one per output video buffer.
This can involve rewriting the input captions: for example, when the
input is CDP, sequence counters are rewritten and time codes are
dropped, potentially to be re-injected if the input video frame had a
time code meta.
Caption buffers may also get split up in order to assign captions to
the correct field when the input is interlaced.
This can also mean that the captions drift out of synchronization when
there isn't enough padding in the input stream to catch up. In
that case the element will start dropping old caption buffers once
the number of buffers in its internal queue reaches a certain
(configurable) limit.
The property is exposed so that existing users of cccombiner can
revert to the original behaviour; it should eventually be removed,
as that behaviour was simply inadequate.
This commit also disallows changing the input caption type, as
this would needlessly complicate implementation, and removes
the corresponding test.
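As a sketch (the caption file is a placeholder), existing users can
restore the old funnel-like behaviour by setting schedule=false:
gst-launch-1.0 videotestsrc ! cccombiner name=c schedule=false ! fakesink filesrc location=captions.mcc ! mccparse ! c.caption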
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2076>
In particular, specify the field-order in the interleaved mode.
Otherwise negotiation might fail, because GST_PAD_SET_ACCEPT_INTERSECT
is not set on the sink pad, and the field-order is missing from the
sink template while it can be present in the outside caps.
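For example (the format chosen here is arbitrary), outside caps such as
video/x-raw,format=UYVY,interlace-mode=interleaved,field-order=top-field-first
would previously fail the accept-caps check against a sink template that
does not list field-order.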
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2062>
default min port == 0, max port == 65535 -- if min port == 0, the existing random port selection is used (range ignored)
add a 'gathering_started' flag to avoid changing ports after gathering has started
validity checks: min port <= max port is enforced, an error is thrown otherwise
include tests to ensure the port range is being used (by @hhardy)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/119>
Various software, including ffmpeg's Decklink support, fails to parse
CDP packets that contain anything but CC data.
Based on this property, timecodes are not written into the CDP packets
even if they're present.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1833>
This causes no changes to the profile but keeps the existing settings.
The profile can also be changed from e.g. the card's configuration
application, and in that case it should probably be left alone.
The new value is the default, as it keeps the profile setting as it is,
which is consistent with the previous behaviour in 1.18.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1721>
Currently, only an output buffer duration can be specified for buffer
splitting. Allow specifying an output buffer size in bytes as well.
Consider a use case with the pipeline below:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Maintaining the MTU for RTP transfer is desirable, but in a scenario
where the buffers being pushed into appsrc do not adhere to it, an
audiobuffersplit element placed between appsrc and rtpL16pay, with an
output buffer size chosen with the MTU in mind, can help mitigate this.
While rtpL16pay already has an MTU setting, an incoming buffer with a
size close to the MTU gets split awkwardly: for example, with an MTU of
1280 and an RTP header size of 12 bytes, a buffer of 1276 bytes would
be split into two buffers of 1268 and 8 bytes. Putting audiobuffersplit
between appsrc and rtpL16pay takes care of this.
While buffer duration could still be used, being able to specify
the size in bytes is helpful here.
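As a sketch (the property name output-buffer-size and the numbers are
illustrative assumptions): with an MTU of 1280 and a 12-byte RTP header,
splitting into 1264-byte buffers keeps every RTP packet within the MTU:
gst-launch-1.0 audiotestsrc ! audio/x-raw,format=S16BE,rate=44100,channels=2 ! audiobuffersplit output-buffer-size=1264 ! rtpL16pay mtu=1280 ! udpsink host=127.0.0.1 port=5004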
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
Adding a vp9parse element to parse stream information such as
resolution, profile, and so on. If upstream does not provide the
resolution and/or profile, this is useful in a decodebin pipeline for
autoplugging a suitable decoder element based on the template caps of
each decoder element.
In addition, vp9parse supports unpacking a superframe into single
frames for decoders. A VP9 superframe is a frame which consists of
multiple frames (a superframe with one frame is also allowed) followed
by a superframe index block. Each unpacked frame is then treated as a
normal frame by the decoder. The decision to unpack is based on the
downstream element's "alignment" caps field, which can be "super-frame"
or "frame". If downstream specifies the "alignment" as "frame",
vp9parse splits an incoming superframe into single frames and discards
the superframe index data (located at the end of the superframe).
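For example (the file name and decoder are placeholders), a downstream
capsfilter can request per-frame alignment so that vp9parse splits
superframes:
gst-launch-1.0 filesrc location=sample.ivf ! ivfparse ! vp9parse ! video/x-vp9,alignment=frame ! vp9dec ! videoconvert ! autovideosink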
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>