Currently during header parsing, we scan through the entire file
and skip every moof+mdat chunk for fragmented mp4s, which makes
start-up incredibly slow. Instead, just stop at the first moof
chunk when we have a moov, start exposing the streams, and then
go and handle the moofs for real.
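Schematically (pure pseudocode, every identifier below is made up):

    static void
    parse_header (Demux *demux)
    {
      while (have_next_chunk (demux)) {
        /* once we have a moov, stop at the first moof and expose the
         * streams; moofs are then handled during normal streaming */
        if (next_chunk_is (demux, FOURCC_moof) && demux->got_moov)
          break;
        parse_or_skip_chunk (demux);
      }
    }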
When a caps event is received, we must immediately update the crop
to the new caps dimensions; otherwise videocrop will first use the
previous crop values, which can cause an error when the video is
resized to a smaller resolution.
https://bugzilla.gnome.org/show_bug.cgi?id=740671
Empty segments in an edit list have a media_start time of -1,
as they don't actually play any media. Allow for that when
aligning to the reference stream in reverse play.
Put a 0-byte at the end of the event string. Does not break ABI because
old depayloaders will skip the 0 byte (which is included in the length).
Expect a 0-byte at the end of the event string or a ; for old
payloaders.
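A minimal sketch of the payloader side (not the actual rtpgstpay
code; the helper name is made up):

    #include <string.h>
    #include <glib.h>

    static guint8 *
    serialize_event_string (const gchar *event_string, guint *length)
    {
      guint8 *data;

      *length = strlen (event_string) + 1;   /* length includes the 0 byte */
      data = g_malloc (*length);
      memcpy (data, event_string, *length);  /* copies the trailing 0 too */
      return data;
    }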
See https://bugzilla.gnome.org/show_bug.cgi?id=737591
Both Firefox and Chrome use VP8 as the encoding name in their SDP.
Adding this now de-facto standard name removes the need for a
special case in the SDP parsing code.
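Sketch of what the depayloader sink template could look like with
both names (the old draft name is shown for illustration only):

    #include <gst/gst.h>

    static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE (
        "sink", GST_PAD_SINK, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("application/x-rtp, "
            "media = (string) video, clock-rate = (int) 90000, "
            "encoding-name = (string) { \"VP8\", \"VP8-DRAFT-IETF-01\" }"));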
https://bugzilla.gnome.org/show_bug.cgi?id=737810
Add fixed payload type for mp2t to template caps as well, so
our output caps match the advertised default pt. Fixes a
regression from 1.2.
There's still something wrong with caps negotiation though,
rtpmp2tpay payload=96 ! fakesink will not output caps with
payload=96.
Fixes crash in audiotestsrc because of an unsupported format
getting negotiated on big-endian systems with
audiotestsrc ! interleave ! audioconvert ! wavenc
When the RTT and jitter are very low (such as on a local network), the
calculated retransmission timeout is very small. Set a sensible lower
boundary on the timeout by adding a new property. By default, we use
the packet spacing as the lower boundary.
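Hedged usage sketch, assuming the new property ends up on
rtpjitterbuffer as rtx-min-retry-timeout in milliseconds (check the
element docs):

    #include <gst/gst.h>

    static void
    set_min_rtx_timeout (GstElement *rtpjitterbuffer)
    {
      /* assumption: -1 (the default) derives the boundary from the
       * packet spacing instead */
      g_object_set (rtpjitterbuffer, "rtx-min-retry-timeout", 20, NULL);
    }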
In early retransmission we are allowed to schedule 1 regular RTCP packet
at an earlier time. When we do that, we need to set allow_early to FALSE
and ignore/drop (or merge) all future requests for early transmission.
We now first check if we can schedule an early RTCP and if we can,
actually prepare the data for the next RTCP interval.
After we send the next regular RTCP after the early RTCP, we set
allow_early to TRUE again to allow more early requests.
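Schematically (illustrative names, not the actual rtpsession fields):

    #include <glib.h>

    typedef struct { gboolean allow_early; } EarlyState;

    static void
    on_early_rtcp_request (EarlyState *s)
    {
      if (!s->allow_early)
        return;                 /* ignore/drop (or merge) the request */
      s->allow_early = FALSE;   /* one early RTCP is now scheduled */
      /* ... send the early RTCP and prepare the data for the next
       * regular RTCP interval ... */
    }

    static void
    on_regular_rtcp_sent (EarlyState *s)
    {
      s->allow_early = TRUE;    /* early requests are allowed again */
    }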
Remove the condition for the immediate feedback for now.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=738319
Add a need-resync state; this is when we need to try to lock on to a
time/RTPtime pair.
Always check the RTP timestamps and if they go backwards, mark ourselves
as need-resync.
Only resync when need-resync is TRUE and we have a valid time. Otherwise
we keep the old values. This avoids locking on to an invalid time and
causing us to timestamp everything with -1.
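Reduced to a sketch (illustrative field names, not the real
rtpjitterbuffer members):

    #include <gst/gst.h>

    typedef struct {
      gboolean need_resync;
      guint64 last_rtptime, base_rtptime;
      GstClockTime base_time;
    } SyncState;

    static void
    update_sync (SyncState *s, guint64 rtptime, GstClockTime time)
    {
      if (rtptime < s->last_rtptime)
        s->need_resync = TRUE;            /* RTP time went backwards */
      s->last_rtptime = rtptime;

      /* only lock on to a time/RTPtime pair when the time is valid;
       * otherwise keep the old values */
      if (s->need_resync && time != GST_CLOCK_TIME_NONE) {
        s->base_time = time;
        s->base_rtptime = rtptime;
        s->need_resync = FALSE;
      }
    }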
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730417
rtpmux behaves like a funnel in that it forwards buffers from
whichever upstream is currently sending. Setting proxy caps therefore
doesn't make sense, as the upstreams don't have to have compatible
caps; a caps query would result in an EMPTY caps set. Instead, set
fixed caps, just as funnel does.
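In code that amounts to a one-liner on the src pad, sketched:

    #include <gst/gst.h>

    static void
    setup_srcpad (GstPad *srcpad)
    {
      /* with fixed caps, a caps query returns the pad (template) caps
       * instead of being proxied upstream, where incompatible caps
       * would intersect to EMPTY */
      gst_pad_use_fixed_caps (srcpad);
    }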
https://bugzilla.gnome.org/show_bug.cgi?id=738722
left, right, top and bottom can be set in the range from -2147483648
to 2147483647. Launching the videobox element with such values gives a
critical error:
(gst-check-1.0:29869): GStreamer-CRITICAL **: gst_value_set_int_range_step: assertion 'start < end' failed
This happens because min must not be equal to max.
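Sketch of the guard that avoids the assertion (val must be
zero-initialized, e.g. G_VALUE_INIT):

    #include <gst/gst.h>

    static void
    set_int_or_range (GValue *val, gint min, gint max)
    {
      if (min == max) {
        /* a degenerate range must be stored as a plain integer,
         * since gst_value_set_int_range_step requires start < end */
        g_value_init (val, G_TYPE_INT);
        g_value_set_int (val, min);
      } else {
        g_value_init (val, GST_TYPE_INT_RANGE);
        gst_value_set_int_range_step (val, min, max, 1);
      }
    }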
https://bugzilla.gnome.org/show_bug.cgi?id=738838
The loop in zoomFilterSetResolution is meant to change the values in
the zf->firedec[] array. Each iteration writes the value of decc into
the array, but none of the conditions that would change decc are ever
met, so every element is filled with zero, the array's initial state
before the loop begins.
The loop does nothing.
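Reduced to its shape (simplified, not the verbatim goom code):

    static void
    fill_firedec (int *firedec, int height)
    {
      int decc = 0;   /* the conditions that would modify decc are
                       * never met, so it stays 0 throughout */
      int y;

      for (y = 0; y < height; y++)
        firedec[y] = decc;   /* array keeps its initial all-zero state */
    }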
https://bugzilla.gnome.org/show_bug.cgi?id=728353
We never initialize clock_rate explicitly, therefore it is 0 by
default. The parameter is a uint32 and the only caller ensures that it
is >0, therefore it will never become -1.
In order to have a full mapping between channel positions in the audio
stream and loudspeaker positions, the channel-mask alone is not enough:
the channels must be interleaved following some Default Channel Ordering
as mentioned in the WAVEFORMATEXTENSIBLE[1] specification.
As a Default Channel Ordering use the one implied by
GstAudioChannelPosition which follows the ordering defined in SMPTE
2036-2-2008[2].
NOTE that the relative order in the Top Layer is not exactly the same
as the one from the WAVEFORMATEXTENSIBLE[1] specification; let's hope
users using so many channels are already aware of such discrepancies.
[1] http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308%28v=vs.85%29.aspx
[2] http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BS.2159-2-2011-PDF-E.pdf
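A hedged sketch of enforcing the default ordering with the public
GstAudio helpers; "from" is assumed to be the order the stream
actually delivers:

    #include <string.h>
    #include <gst/audio/audio.h>

    static gboolean
    reorder_to_default (GstBuffer *buf, GstAudioFormat fmt, gint channels,
        const GstAudioChannelPosition *from)
    {
      GstAudioChannelPosition to[64];

      memcpy (to, from, channels * sizeof (to[0]));
      /* sort the positions into the canonical GstAudioChannelPosition
       * order */
      if (!gst_audio_channel_positions_to_valid_order (to, channels))
        return FALSE;
      return gst_audio_buffer_reorder_channels (buf, fmt, channels,
          from, to);
    }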
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=737127
Otherwise the CAPS event will be dropped and we never configure any
caps at all, leading to weird behaviour in many situations. In
particular, header rewriting is not going to work if a capsfilter is
placed after wavenc.
https://bugzilla.gnome.org/show_bug.cgi?id=737735
This is about converting the format, not about converting any widths and
heights. Subclasses are expected to handle different resolutions themselves,
like the videomixers already do properly.
gstrtspsrc.c:7939:11: error: implicit conversion from enumeration type 'GstSDPResult' to different enumeration type
'GstRTSPResult' [-Werror,-Wenum-conversion]
res = gst_sdp_message_new (&sdp);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~
gstrtspsrc.c:7944:11: error: implicit conversion from enumeration type 'GstSDPResult' to different enumeration type
'GstRTSPResult' [-Werror,-Wenum-conversion]
res = gst_sdp_message_parse_uri (uri, sdp);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
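The fix is to stop mixing the two enum types; sketched (not
necessarily the exact upstream patch):

    #include <gst/sdp/gstsdpmessage.h>

    static gboolean
    parse_sdp_from_uri (const gchar *uri, GstSDPMessage **sdp)
    {
      /* use a separate GstSDPResult instead of assigning into the
       * function's GstRTSPResult variable */
      GstSDPResult sres;

      sres = gst_sdp_message_new (sdp);
      if (sres != GST_SDP_OK)
        return FALSE;
      sres = gst_sdp_message_parse_uri (uri, *sdp);
      return sres == GST_SDP_OK;
    }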
Remove pads from flow combiner and reset last
flow return to FLOW_OK by resetting the flow combiner.
This prevents FLOW_FLUSHING when trying to re-use the
demuxer after setting it back to NULL/READY state.
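Hedged sketch of such a cleanup path; gst_flow_combiner_reset() also
sets the combined flow return back to GST_FLOW_OK:

    #include <gst/base/gstflowcombiner.h>

    static void
    cleanup_flow_combiner (GstFlowCombiner *combiner, GList *srcpads)
    {
      GList *l;

      for (l = srcpads; l != NULL; l = l->next)
        gst_flow_combiner_remove_pad (combiner, GST_PAD (l->data));
      gst_flow_combiner_reset (combiner);  /* combined return -> FLOW_OK */
    }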
https://bugzilla.gnome.org/show_bug.cgi?id=737359
The DTS delta is used to calculate the sample duration. If a buffer is
missing its DTS, we take either the segment start or the previous
buffer's end time, whichever is later.
This must only be done for non-sparse streams; sparse streams can have
gaps between buffers (which is handled later by adding an extra empty
buffer with a duration that fills the gap).
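Schematically (simplified, illustrative parameter names):

    #include <gst/gst.h>

    static GstClockTime
    effective_dts (GstBuffer *buf, GstClockTime segment_start,
        GstClockTime prev_buffer_end)
    {
      if (GST_BUFFER_DTS_IS_VALID (buf))
        return GST_BUFFER_DTS (buf);
      /* missing DTS: take the segment start or the previous buffer's
       * end time, whichever is later; the sample duration is then the
       * delta to the previous DTS */
      return MAX (segment_start, prev_buffer_end);
    }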
https://bugzilla.gnome.org/show_bug.cgi?id=737095
In 1.0, we pass the complete caps to transform_caps to allow for better
optimizations. Make this function actually work on non-simple caps
instead of just ignoring the configured filter caps.
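A sketch of what that means: walk every structure in the caps and
apply the filter, instead of assuming a single structure (removing
"format" stands in for the actual transform here):

    #include <gst/gst.h>

    static GstCaps *
    transform_caps_full (GstCaps *caps, GstCaps *filter)
    {
      GstCaps *ret = gst_caps_new_empty ();
      guint i, n = gst_caps_get_size (caps);

      for (i = 0; i < n; i++) {
        GstStructure *s =
            gst_structure_copy (gst_caps_get_structure (caps, i));
        gst_structure_remove_field (s, "format");
        gst_caps_append_structure (ret, s);
      }
      if (filter) {
        GstCaps *tmp =
            gst_caps_intersect_full (filter, ret, GST_CAPS_INTERSECT_FIRST);
        gst_caps_unref (ret);
        ret = tmp;
      }
      return ret;
    }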
We have to skip 12 bytes of data for the chunk, and the data size
passed to the sub-chunk parsing functions should be 4 bytes less than
the chunk data size.
Also when parsing the sub-chunks, check if we actually have enough
data to read instead of just crashing.
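Illustrative shape of the fix (identifiers made up):

    #include <glib.h>

    static gboolean parse_sub_chunks (const guint8 *data, guint size);

    static gboolean
    parse_chunk (const guint8 *data, guint size)
    {
      if (size < 12)
        return FALSE;   /* not enough data: bail out instead of crashing */
      /* skip the 12 bytes of chunk header and pass a data size that is
       * 4 bytes less to the sub-chunk parsing */
      return parse_sub_chunks (data + 12, size - 4);
    }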
https://bugzilla.gnome.org/show_bug.cgi?id=736266
Drop use of g_socket_get_available_bytes(), which is
not useful on all systems (where it returns the size
of the entire buffer, not that of the next pending
packet), is yet another syscall, and is apparently
very inefficient on Windows in the UDP case.
Instead, when reading UDP packets, use the more featureful
g_socket_receive_message() call, which allows reading into
scattered memory, and allocate one memory chunk which is
likely to be large enough for a packet, while also providing
a larger allocated memory chunk just in case the packet
is larger than expected. If the received data fits into the
first chunk, we'll just add that to the buffer we return
and re-use the fallback buffer for next time, otherwise we
add both chunks to the buffer.
This reduces memory waste more reliably on systems where
get_available_bytes() doesn't work properly.
In a multimedia streaming scenario, incoming UDP packets
are almost never fragmented and thus almost always smaller
than the MTU size, which is also why we don't try to do
something smarter with more fallback memory chunks of
different sizes. The fallback scenario is just for when
someone built a broken sender pipeline (not using a
payloader or some such).
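Hedged sketch of the two-vector receive described above (buffer sizes
and names illustrative):

    #include <gio/gio.h>

    static gssize
    receive_packet (GSocket *socket, guint8 *likely, gsize likely_len,
        guint8 *fallback, gsize fallback_len, GError **error)
    {
      GInputVector vectors[2] = {
        { likely, likely_len },       /* usually fits a whole packet */
        { fallback, fallback_len },   /* only used by oversized packets */
      };
      gint flags = 0;

      /* scattered read: data lands in the first vector and spills
       * into the second only when needed */
      return g_socket_receive_message (socket, NULL, vectors, 2,
          NULL, NULL, &flags, NULL, error);
    }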
https://bugzilla.gnome.org/show_bug.cgi?id=610364
This makes sure that properties like the pixel-aspect-ratio are also
the same between both streams and that the output caps contain all
fields necessary for complete video caps.
https://bugzilla.gnome.org/show_bug.cgi?id=735804
gst_buffer_ref followed by gst_buffer_make_writable was being used to
create a writable copy of the source buffer. Replace this with
gst_buffer_copy, as the functionality is the same.
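The change, sketched:

    #include <gst/gst.h>

    static GstBuffer *
    writable_copy (GstBuffer *inbuf)
    {
      /* before: gst_buffer_make_writable (gst_buffer_ref (inbuf));
       * the extra ref forces make_writable to copy */
      /* after: a plain copy achieves the same */
      return gst_buffer_copy (inbuf);
    }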
https://bugzilla.gnome.org/show_bug.cgi?id=735880