The monitor sets object->element to a plain GstObject rather than a GstElement. This
works for debug traces, but will assert for ELEMENT_ERROR. That was the only
case where this could happen, so add a check for it.
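A minimal sketch of the kind of check meant here (illustrative only, not the
literal patch; the helper name and message are made up):

  #include <gst/gst.h>

  /* `obj` stands for whatever the monitor stored in object->element.
   * Only use GST_ELEMENT_ERROR when it really is a GstElement;
   * otherwise just warn against the plain GstObject. */
  static void
  post_open_error (GstObject * obj)
  {
    if (GST_IS_ELEMENT (obj)) {
      GST_ELEMENT_ERROR (GST_ELEMENT (obj), RESOURCE, OPEN_READ_WRITE,
          ("Could not open device"), (NULL));
    } else {
      GST_WARNING_OBJECT (obj, "Could not open device");
    }
  }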
If we're at the end of a range request, read again to let libsoup
finalize the request. This allows the connection to be reused later;
otherwise we would have to cancel the message and close the connection.
We have to get rid of the message on EOS, when the complete stream has been
read, to remember that we successfully finished handling this specific message.
Otherwise we would cancel it later and close the connection instead of reusing
it at a later time.
It might also make sense to reuse connections if a non-200 response is
received. As long as there was no connection error, the HTTP connection should
be reusable.
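A rough sketch of the idea in the two entries above, written with plain GIO
calls (not the actual souphttpsrc code; `stream` and `msg` are assumed to be
the response GInputStream and its SoupMessage):

  #include <gio/gio.h>

  /* After the requested range has been delivered, read once more so
   * libsoup sees the end of the body and can return the connection to
   * its pool instead of closing it. */
  gchar scratch[512];
  GError *err = NULL;
  gssize n = g_input_stream_read (stream, scratch, sizeof (scratch), NULL, &err);

  if (n == 0) {
    /* EOS: the message finished cleanly, drop it instead of cancelling it. */
    g_clear_object (&msg);
  } else if (n < 0) {
    /* Real error: the connection is not reusable anyway. */
    g_clear_error (&err);
  }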
Most of those have V4L2 drivers these days; enabling this makes sure that
this code is enabled in the major distributions, so that the HW-accelerated
decoders/encoders can be used on platforms that support them. The probes
slightly increase the first initialization of the GStreamer library, but the
result is cached in the registry for later use.
When parsing the NAL unit type in codec_data, check only the 6 bits of
NAL_unit_type and do not require the array_completeness bit to be
0, since the default and mandatory value of array_completeness is 1 for
hvc1.
https://bugzilla.gnome.org/show_bug.cgi?id=768653
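For illustration, the masking this implies when walking the arrays in an hvcC
codec_data blob could look like this (a sketch assuming the standard hvcC
array layout; `data` and `off` are placeholders):

  /* Each array entry in hvcC starts with one byte:
   *   bit 7    : array_completeness (1 for hvc1, so don't test it)
   *   bit 6    : reserved
   *   bits 5-0 : NAL_unit_type */
  guint8 nal_type = data[off] & 0x3f;  /* only the 6 NAL_unit_type bits */

  switch (nal_type) {
    case 32:  /* VPS */
    case 33:  /* SPS */
    case 34:  /* PPS */
      /* parse the NAL units contained in this array */
      break;
    default:
      break;
  }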
After switching to using V4L2_CAP_DEVICE_CAPS we lost support for
multiplanar device types. After some research, it looks like
vcap.capabilities treated the multiplanar flag of output and capture
devices equally, but not the new device_caps.
https://bugzilla.gnome.org/show_bug.cgi?id=768195
There's no real reason to avoid sending QOS/NAVIGATION events upstream.
Some elements might want to have that information.
Also remove downstream-only CAPS event handling and minimize the code.
A typo in gst_v4l2_probe_and_register() caused a build error when building
with --enable-v4l2-probe. Fixing it.
gstv4l2.c: In function 'gst_v4l2_probe_and_register':
gstv4l2.c:150:25: error: 'struct v4l2_capability' has no member named 'capabilitites'
device_caps = vcap.capabilitites;
The same physical device can export multiple devices. In
this case, the capabilities field now contains a union of
all caps available from all exported V4L2 devices alongside
a V4L2_CAP_DEVICE_CAPS flag that should be used to decide
what capabilities to consider. In our case, we need the
ones from the exported device we are using.
https://bugzilla.gnome.org/show_bug.cgi?id=768195
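As a sketch of how both this and the multiplanar fix above play out
(illustrative, not the literal gstv4l2object.c code; the helper name is made
up):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>
  #include <glib.h>

  /* Returns the capability bits that apply to the opened node `fd` itself,
   * falling back to the aggregated field on older kernels. */
  static gboolean
  query_device_caps (int fd, guint32 * device_caps)
  {
    struct v4l2_capability vcap = { 0 };

    if (ioctl (fd, VIDIOC_QUERYCAP, &vcap) < 0)
      return FALSE;

    /* `capabilities` is a union over all devices exported by the same
     * physical device; `device_caps` describes this node only. */
    if (vcap.capabilities & V4L2_CAP_DEVICE_CAPS)
      *device_caps = vcap.device_caps;
    else
      *device_caps = vcap.capabilities;

    return TRUE;
  }

  /* Multiplanar devices report the *_MPLANE variants, so both single- and
   * multi-planar flags have to be accepted, e.g.
   *   caps & (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_CAPTURE_MPLANE)
   *   caps & (V4L2_CAP_VIDEO_OUTPUT  | V4L2_CAP_VIDEO_OUTPUT_MPLANE) */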
We should add all pads, no matter whether they are linked or active or not at
this point. Skipping the ones that are not will cause different behaviour than
with other muxers.
This can only happen if a) upstream somehow gets around the CAPS event failing,
or b) there never was any CAPS event.
The following code assumes that all pads have a codec-id.
https://bugzilla.gnome.org/show_bug.cgi?id=768509
Handle sprop-vps, sprop-sps and sprop-pps in caps instead of
sprop-parameter-sets.
rtph265pay works with byte-stream and hvc1 formats, but not hev1 yet. It
handles profile-id, tier-flag and level-id in the caps query.
https://bugzilla.gnome.org/show_bug.cgi?id=753760
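A hedged example of the kind of output caps this produces on the payloader
(the sprop values below are placeholders, not real parameter sets):

  #include <gst/gst.h>

  GstCaps *caps = gst_caps_from_string (
      "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
      "encoding-name=(string)H265, "
      "sprop-vps=(string)\"<base64 VPS>\", "
      "sprop-sps=(string)\"<base64 SPS>\", "
      "sprop-pps=(string)\"<base64 PPS>\"");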
The FLV header cannot be trusted to indicate video or
audio presence, as the comments already mention. Don't
delay pushing tags waiting for streams that might never
appear.
Tags are now pushed immediately after they change:
- After parsing an onMetaData script object
- After negotiating caps on a pad
https://bugzilla.gnome.org/show_bug.cgi?id=768440
As seen in the parent switch for object_type_id, the 4 possible values are
0x40, 0x66, 0x67 and 0x68. Fixing the nested switch to match these values.
Looks like it was a typo making them decimal instead of hexadecimal.
CID 1363328
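A sketch of the corrected shape of the nested switch (the case labels come
from the message above; what each ID means is an assumption based on the usual
MPEG-4 object type indications):

  switch (object_type_id) {
    case 0x40:  /* MPEG-4 AAC (ISO/IEC 14496-3) */
    case 0x66:  /* MPEG-2 AAC Main */
    case 0x67:  /* MPEG-2 AAC LC */
    case 0x68:  /* MPEG-2 AAC SSR */
      /* AAC-specific handling */
      break;
    default:
      break;
  }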
Without this, raw AAC can't be handled, and we have some information available
in the decoder that most likely allows us to decode the stream in one way or
another. This is the same code already used by matroskademux for the same
reasons, and ffmpeg/vlc play such files just fine too, by guesswork.
This is to handle cases where upstream handles the fragmented streaming in TIME
segments and sends us data with gaps within fragments. This would happen when dealing
with trick-modes.
When upstream (push-based, TIME SEGMENT) wishes to send discontinuous samples,
it must obey the following rules:
* The buffer containing the [moof] must have a valid GST_BUFFER_OFFSET
* The buffers containing the first sample after a gap:
  * MUST start at the beginning of a sample,
  * MUST have the DISCONT flag set,
  * MUST have a valid GST_BUFFER_OFFSET relative to the beginning of the fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=767354
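For instance, a hypothetical push-based upstream element honouring the rules
above would push the first buffer after a gap roughly like this (`data`,
`size`, `sample_offset`, `moof_offset` and `srcpad` are illustrative names;
`data` is assumed to be a malloc'd block the buffer may own):

  GstBuffer *buf = gst_buffer_new_wrapped (data, size);
  GstFlowReturn ret;

  /* First buffer after a gap: starts exactly at a sample boundary, is
   * flagged DISCONT, and carries its offset relative to the start of
   * the current fragment ([moof]). */
  GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
  GST_BUFFER_OFFSET (buf) = sample_offset - moof_offset;

  ret = gst_pad_push (srcpad, buf);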
gst_v4l2_clear_error() doesn't work like g_clear_error(): it doesn't
NULLify the pointer. So set the freed debug string to NULL so that it
doesn't get freed again if gst_v4l2_clear_error() is called twice on the
error.
CID 1362901
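The pattern behind the fix, with hypothetical field and argument shapes (the
real GstV4l2Error layout and helpers may differ), is simply free-then-NULL:

  /* gst_v4l2_clear_error() frees the debug string but, unlike
   * g_clear_error(), does not reset the pointer.  Setting it to NULL
   * ourselves makes a later clear a no-op instead of a double free. */
  gst_v4l2_clear_error (&v4l2err);
  v4l2err.dbg_message = NULL;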
Update the blocksize depending on how much is obtained from a read
of the input stream. This avoids doing too many reads in small chunks
when larger amounts of data are available and also prevents using
a very large memory area to read a small chunk of data.
https://bugzilla.gnome.org/show_bug.cgi?id=767833
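A sketch of that kind of adaptation (constants and variable names are
illustrative, not the element's actual heuristics):

  #include <glib.h>

  #define MIN_BLOCKSIZE (2 * 1024)
  #define MAX_BLOCKSIZE (512 * 1024)

  /* Grow the blocksize when reads come back full, shrink it when they
   * come back mostly empty, within fixed bounds. */
  if (bytes_read >= blocksize)
    blocksize = MIN (blocksize * 2, MAX_BLOCKSIZE);
  else if (bytes_read < blocksize / 2)
    blocksize = MAX (blocksize / 2, MIN_BLOCKSIZE);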
If we consider the RTSP state, what can happen is that it is PLAYING, but the
element has already asynchronously tried to PAUSE and that just has not
happened yet. We would then override this setting to PAUSED (while the element
actually is PAUSED) and set the RTSP state to PLAYING again. This would then
cause us to produce packets while the sinks are all PAUSED, piling up thousands
of packets in the rtpjitterbuffer and other elements and finally failing.