Move the multiview caps calculations to the configure_stream()
function, so the rest of the video info is available, and
use the gst_video_multiview_guess_half_aspect() function to
determine if the half-aspect flag should be set on frame-packed
video.
The cslg atom provides information about the DTS shift. This is
needed in recent versions of the ctts atom where the offset can be
negative. When cslg is missing, we parse the CTTS table as proposed
in the spec to calculate these values.
In this implementation, we only need to know the shift. As GStreamer
cannot transport negative timestamps, we shift the timestamps forward
using that value and adapt the segment to compensate. This patch also
removes the bogus ctts_soffset offset; that offset shall be included
in the edit list.
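A minimal sketch of the idea (not the actual qtdemux code), assuming a
computed shift value dts_shift:
  /* push every timestamp forward so none are negative ... */
  GST_BUFFER_DTS (buf) += dts_shift;
  GST_BUFFER_PTS (buf) += dts_shift;
  /* ... and move the segment by the same amount so running times
   * stay unchanged */
  segment.start += dts_shift;
  if (GST_CLOCK_TIME_IS_VALID (segment.stop))
    segment.stop += dts_shift;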
https://bugzilla.gnome.org/show_bug.cgi?id=751103
We shift DTS forward to avoid negative timestamps which cannot be
represented with version 0 of the CTTS table. To stick with that
version (backward compatibility), the spec recommends using an
edit list entry to move back the presentation time to where it
should be.
https://bugzilla.gnome.org/show_bug.cgi?id=751242
No need to check for context availability while freeing. We are inside
a code block with a condition that dereferences context.
if (context->type == 0 ...
https://bugzilla.gnome.org/show_bug.cgi?id=751306
If update_receiver_stats() fails, we can't really do anything with this buffer
anymore and have to drop it. This happens if there's a big seqnum
discontinuity for example.
https://bugzilla.gnome.org/show_bug.cgi?id=751311
The new property allows selecting the time source that should be used for the
NTP time in RTCP packets. By default it will continue to calculate the NTP
timestamp (1900 epoch) based on the realtime clock. Alternatively it can use
the UNIX timestamp (1970 epoch), the pipeline's running time or the pipeline's
clock time. The latter is especially useful for synchronizing multiple
receivers if all of them share the same clock.
If use-pipeline-clock is set to TRUE, it will override the ntp-time-source
setting and continue to use the running time plus 70 years. This is only kept
for backwards compatibility.
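As an illustration (assuming the enum value nick is "clock-time"), an
application could pick the pipeline clock with:
  gst_util_set_object_arg (G_OBJECT (rtpbin), "ntp-time-source", "clock-time");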
In the presence of a CTTS, the segment start/stop must be offset so
that they include the PTS. This is needed since the
PTS cannot be negative in this format. This fixes issues where the
running time of the first buffer isn't at the start.
https://bugzilla.gnome.org/show_bug.cgi?id=740575
This is done by using a new feature of the CollectPads clip function
which sets the DTS as a gint64 in the collected data. It also simplifies
the code a bit.
https://bugzilla.gnome.org/show_bug.cgi?id=740575
When deciding whether it's time to switch to a new file, take into
account data that's been released for pushing, but hasn't yet
been pushed - because downstream is slow or the threads haven't been
scheduled.
Fixes a race in the unit test and probably in practice - sometimes
failing to switch when it should for an extra GOP or two.
Also fix a problem in splitmuxsrc where playback sometimes
stalls at startup if types are found too quickly.
https://bugzilla.gnome.org/show_bug.cgi?id=750747
Remove a custom specialized version of gst_buffer_new_wrapped by
using gst_buffer_new_wrapped_full inside a macro to simplify the
parameters and give it a more meaningful name.
It is only used to create temporary buffers to have their data copied.
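A rough sketch of such a macro (name and flags here are illustrative only,
the actual patch may differ):
  #define _temp_buffer_wrap(data, size) \
      gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_READONLY, \
          (gpointer) (data), (size), 0, (size), NULL, NULL)
With NULL user_data and notify, the buffer does not take ownership of the
data, which is fine for a temporary buffer that only gets copied from.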
Adds AC-3 muxing support. It is defined for mp4 and 3gp formats.
One extra feature that was added was the ability to add extension
atoms after set_caps as the AC-3 extension atom needs some data
that has to be extracted from the stream itself and is not
present in the caps.
Implement support for the packed video formats WebM
uses, not all the values that Matroska might use.
In practice, it's really hard to find any samples of
these in the wild.
Supported in both the muxer and demuxer.
The MPEG-A format provides an extension to the ISO base media
file format to store stereoscopic content encoded with different
codecs like H.264 and MPEG-4:2. The stereo video media information (svmi)
atom declares the presence and storage method for the video.
Stereo video information for MPEG-A can also be supplied through
the 'stvi' atom (ref: ISO/IEC_14496-12, ISO/IEC_23000-11), which
is not implemented in this patch.
Also missing is support for stereo video encoded as separate video tracks
for now.
Based on a patch by Sreerenj Balachandran <sreerenj.balachandran@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=611157
When performing a seek, segment->start is being updated with desired_offset,
but in the case of reverse playback segment->start should be 0 and
segment->stop should be updated with the desired offset.
https://bugzilla.gnome.org/show_bug.cgi?id=750675
Build fails with the latest snapshot of gcc-4.9 because param1 and param2 might
possibly be used uninitialized. They are set depending on the cases of a switch
statement and the compiler sees this as not a complete guarantee.
Set them to 0 if the switch statement falls through to the default case.
https://bugzilla.gnome.org/show_bug.cgi?id=750566#c6
Compiling (with gcc-4.9-20150603) produces an error because of an access beyond
the end of an array. This patch fixes the error by initializing the loop
control/array index variable (i) to 1 and returning i - 1 when a match is found.
Also, because the values stored in the array increase in value as the index
increases, the >= test is unnecessary, so it is removed.
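A simplified sketch of the resulting lookup pattern (names are illustrative
only, not the exact code):
  static guint
  find_index (guint value, const guint * table, guint len)
  {
    guint i;

    /* start at 1 so that i - 1 is always a valid index on return */
    for (i = 1; i < len; i++) {
      if (value < table[i])
        break;
    }
    return i - 1;
  }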
Only update the moov header in the caps if it's the finalised
moov at EOS time. Avoids posting a bogus moov at startup and
repeated updates in robust-recording mode.
Implement a robust recording mode, where the output
file is always in a playable state, seeking and rewriting
the moov header at a configurable interval. Rewriting
moov is done using reserved space at the start of
the file, and a ping-pong strategy where the moov
is replaced atomically so it's never invalid.
Track when tags have actually changed, and don't write them into
the moov unless they've changed. Clear any existing tags when
re-writing them, so we can do progressive moov updating in robust
recording mode.
Write placeholder mdat as a free atom plus a 32-bit mdat
with '0' size, which means "rest of the file" in the spec.
Re-write it later to a full 64-bit extended size atom if needed.
Correctly update any edit lists each time the moov is recalculated,
updating existing table entries if they already exist instead of just
adding new ones.
self->channels is only incremented when
channel-positions-from-input is set to TRUE. So in the FALSE case
self->func is not set, which leads to an assertion error.
Hence remove the condition around incrementing self->channels.
https://bugzilla.gnome.org/show_bug.cgi?id=744211
According to RFC 5506, reduced-size packets can be sent; these
packets may not be compound, so we need to add support for
getting the SSRC from other types of packets.
https://bugzilla.gnome.org/show_bug.cgi?id=750327
This depayloader clashes with the standard one for H263p. It produces an
H263p stream with a modified header. It uses an encoding-name that is the
same as H263p (H263-1998) though the resulting ES is not decodable or
parsable in GStreamer, making it unusable in dynamic pipelines. This
patch unranks this specialized depayloader since it can only be used in
custom pipelines.
https://bugzilla.gnome.org/show_bug.cgi?id=739935
Otherwise we will have 10s-100s of thread wakeups in feedback profiles, create
RTCP packets, etc. just to suppress them in 99% of the cases (i.e. if no
feedback is actually pending and no regular RTCP has to be sent).
This improves CPU usage and battery life quite a lot.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
If we may suppress the packet due to the rules of RFC4585 (i.e. when
below the t-rr-int), we can send a smaller RTCP packet without RRs
and full SDES. In theory we could even send a minimal RTCP packet
according to RFC5506, but we don't support that yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
Otherwise we can't properly schedule RTCP in feedback profiles as we need to
distinguish the time when we last checked for sending RTCP (tp) but might have
suppressed it, and the time when we last actually sent a non-early RTCP
packet.
This together with the other changes should now properly implement RTCP
scheduling according to RFC4585, and especially allow us to send feedback
packets a lot if needed but only send regular RTCP packets every once in a
while.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
And modify our RTCP scheduling algorithm accordingly. We now can send more
RTCP packets if needed for feedback, but will throttle full RTCP packets by
rtcp-min-interval (t-rr-int from RFC4585).
In non-feedback mode, rtcp-min-interval is Tmin from RFC3550, which is
statically set to 1s or 0s by RFC4585. Tmin defines how often we should
send RTCP packets at most.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
Otherwise we constantly create/close event file descriptors,
every time we call g_socket_condition_timed_wait() or
g_socket_send_message(s)(), i.e. a lot, which is not
particularly good for performance.
Can't create GCancellable in ::start() here because it's used
in client_new() which may be called via the add-client action
signal which may be called before the element is up and running.
Otherwise we constantly create/close event file descriptors,
every single time we call g_socket_condition_timed_wait() or
g_socket_receive_message(), i.e. twice per packet received!
This was not particularly good for performance.
Also only create GCancellable on start-up.
qtdemux creates a samples array and gets the timestamps for buffers by
accumulating their durations. When doing reverse playback of fragments,
accumulating samples will lead to wrong timestamps as the timestamps
should go decreasing from fragment to fragment and the accumulation
will produce wrong results.
In this case, when receiving a discont for fragmented reverse playback,
the previous samples information should be flushed before new data
is processed.
This new mode ensures that files will never exceed a certain duration
based on incoming buffer PTS (and duration if present).
Note:
* You need timestamped buffers (duh). If some of the incoming buffers don't
have a PTS, then it will just accept them in the current file.
This property can be used in combination with next-file=max-size
(and perhaps a future next-file=max-duration) to make sure that
each file part starts cleanly with a key frame and the appropriate headers.
In order for this property to work correctly, upstream elements should make
sure that any headers that need to be written in a standalone file are:
1) in the streamheader caps field
2) and/or in the stream as one or more buffers marked with GST_BUFFER_FLAG_HEADER
that are just before the keyframe buffer
This is useful for MPEG-TS/MPEG-PS file segmenting in
combination with mpegtsmux or mpegpsmux.
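As an illustration (assuming the enum nick is "key-frame" and a
multifilesink instance named sink):
  gst_util_set_object_arg (G_OBJECT (sink), "next-file", "key-frame");
  g_object_set (sink, "location", "segment%05d.ts", NULL);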
Original patch by: Tim-Philipp Müller <tim@centricular.com>
From the API documentation: "Note that it is generally not
a good idea to reuse an existing cancellable for more
operations after it has been cancelled once, as this
function might tempt you to do. The recommended practice
is to drop the reference to a cancellable after cancelling
it, and let it die with the outstanding async operations.
You should create a fresh cancellable for further async
operations."
https://bugzilla.gnome.org/show_bug.cgi?id=739132
It might just be a late retransmission or spurious packet from elsewhere, but
resetting everything would mean that we will cause a noticeable hiccup. Let's
get some confidence first that the sequence numbers changed for whatever
reason.
https://bugzilla.gnome.org/show_bug.cgi?id=747922
The example gst-launch line for testing qtdemux is missing a queue before
the decodebins; without it, the gst-launch-1.0 command won't work.
https://bugzilla.gnome.org/show_bug.cgi?id=749054
This reverts commit d22ec49632.
Application code might expect that it only gets external sources on those
signals, and get confused by this. If anything we would need to add new
signals.
Without this it seems impossible for an application to easily get notified
about the internal ssrcs that are created, e.g. sender sources, and also
to know when they are active and produce RTCP packets.
https://bugzilla.gnome.org/show_bug.cgi?id=746747
We now take the maximum of 2*jitter and 0.5*packet_spacing for the extra
delay. If jitter is very low, this should prevent unnecessary retransmission
requests to some degree.
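A minimal sketch of that calculation (variable names illustrative):
  /* extra delay added on top of the estimated round-trip time */
  extra_delay = MAX (2 * jitter, packet_spacing / 2);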
https://bugzilla.gnome.org/show_bug.cgi?id=748041
When we are in passthrough, the transform function doesn't run and if the
passthrough check is in this function it will never be deactivated. Fix this by
checking directly whenever a gain is changed.
Also set passthrough to TRUE at init because the gains default to 0, so we
can pass through until any gain property is changed.
https://bugzilla.gnome.org/show_bug.cgi?id=748068
Prevents an extra unref of GstBuffer when passing a non-icy stream through
icydemux with metadata-interval set to 0.
Reproducible with:
gst-launch-1.0 filesrc location=~/testsong.mp3 ! \
'application/x-icy,metadata-interval=(int)0' ! icydemux ! decodebin ! wavenc ! \
filesink location=~/testsong.wav
https://bugzilla.gnome.org/show_bug.cgi?id=748024
because _release_pad tries to release it from ctx->sinkpad, which is
multiqueue's sink pad, and currently fails because the probe is not
installed there
This also happens in the very beginning when we receive the first packet, a
warning would be very confusing here. In all places where we should warn about
this, we would've printed a warning already before.
Right above we consider lost_packet packets, each of them having a duration,
as lost and trigger their timers immediately. Below we use expected_dts
to schedule retransmission or schedule lost timers for the packets that
come after expected_dts.
As we just triggered lost_packets packets as lost, there's no point in
scheduling new timers for them and we can just skip over all lost packets.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
Resetting the jitterbuffer drops all packets and other things, and will cause
a discontinuity in the packets received by the depayloaders. They should now
also flush anything they had pending as the new data will start at a different
position.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
When doing a key unit seek, qtdemux calls gst_qtdemux_adjust_seek
to get proper offset. And then this offset is set to
segment.position and segment.time in gst_qtdemux_perform_seek but
segment.start is not updated.
After that, application sends segment query,
qtdemux sets start and stop to query using gst_segment_to_stream_time. Due
to the wrong value in segment.start, the stop position is smaller than
it should be.
https://bugzilla.gnome.org/show_bug.cgi?id=746822
We always write the CTTS in qtmux. Ideally we only want to do that
for streams that need DTS; it should be indicated in the track information
rather than be decided based on each buffer.
As qt uses durations, it doesn't matter; only the difference
between consecutive buffers is important. Also, collectpads
already replaces PTS/DTS with their running times.
Instead of checking various state variables around the muxer,
track the current muxing mode in a single 'mux_mode' enum.
Add some implementation notes about the different mux modes
gst_segment_do_seek() does that for us already, and doing it twice
will break non-flushing seeks in interesting ways. Leftover from 1.0
porting.
Also copy over segment offset and applied_rate, just in case.
When multifilesink is operating in any mode other than one file
per buffer, the last file created won't have a file message posted
as multifilesink doesn't handle the EOS event.
This patch fixes it by using the last position to post a file
message when EOS is received. This should ensure at least the
time related data and the filename are posted to the application
or other elements.
https://bugzilla.gnome.org/show_bug.cgi?id=747000
When not in fast-start or fragmented mode, we need to be able
to rewrite the size of the mdat atom, or else the output just
won't be playable - the mdat placeholder with size == 0 will
cover the rest of the file, including any moov atom we write out.
https://bugzilla.gnome.org/show_bug.cgi?id=708808
New tags can be found in different parts of the file, so this patch
keeps the stream taglists around for the life cycle of the pad
and adds those new tags as they are found. When a new tag is found, the
pad is marked with a tags-changed flag, making the element push
a new tag event on the next check. Before this, we were sending
only the newly found tags, as the element was losing its taglist
when pushing the event.
Global tags are already being read in matroskaparse, but they are not
currently being sent.
This patch makes global tags get sent incrementally whenever new ones
are found.
https://bugzilla.gnome.org/show_bug.cgi?id=746242
When planes property is set to 0, the pipeline executes in
an infinite loop and never exits. Since planes must never
be 0, set the minimum value in the property description
to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=743906
It is expected that buffers are time-stamped with running time. Set
a segment accordingly. In this case we pick 0,-1 as this is what udpsrc
would do. Depayloaders will update the segment to reflect the playback
position.
https://bugzilla.gnome.org/show_bug.cgi?id=635701
The segment start/stop in the query is meant to represent the seekable
portion of the stream. It does not match the segment start/stop. Instead
export 0 to duration.
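An illustrative sketch of the answer (not the exact code), using the
element's cached duration:
  gst_query_set_segment (query, segment.rate, GST_FORMAT_TIME, 0, duration);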
Previously we were setting new caps with the same content for every H264 or
AAC codec_data we found in the stream, spamming everything and causing
renegotiations.
Instead delay creating the caps until we read the codec_data from the stream,
or fail if we get normal data before the codec_data.
AAC raw caps and H264 avc caps always need codec_data, setting caps on the pad
without them is going to make negotiation fail most of the time. Even if we
later set new caps with the codec_data, that's usually going to be too late.
https://bugzilla.gnome.org/show_bug.cgi?id=746682
Make sure that the sync_src pad has caps before the segment event.
Otherwise we might get a segment event before caps from the receive
RTCP pad, and caps would only be set later when receiving RTCP packets.
This results in a sticky event misordering warning.
This fixes warnings in the rtpaux unit test but also in the
rtpaux and rtx examples in tests/examples/rtp
https://bugzilla.gnome.org/show_bug.cgi?id=746445
Before we only started it when either:
- there is no send RTP stream
or
- we received an RTP packet for sending
This could mean that if the send RTP pads are connected but never receive any
RTP data, and the same session is also used for receiving RTP/RTCP, we would
never start the RTCP thread and would never send RTCP for the receiving part
of the session.
This can be reproduced with a pipeline like:
gst-launch-1.0 rtpbin name=rtpbin \
udpsrc port=5000 ! "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264" ! rtpbin.recv_rtp_sink_0 \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! fakesink name=rtcp_fakesink silent=false async=false sync=false \
rtpbin.recv_rtp_src_0_2553225531_96 ! decodebin ! xvimagesink \
fakesrc ! valve drop=true ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! fakesink name=rtp_fakesink silent=false async=false sync=false -v
Before this change the rtcp_fakesink would never send RTCP for the receiving
part of the session (i.e. no receiver reports!), after the change it does.
And before and after this change it would send RTCP for the receiving part of
the session if the sender part was omitted (the last two lines).
* Fix critical when new tags are found after segment event has already
been sent.
* Send global tags before stream tags.
* Split sending of tags out of gst_matroska_demux_send_event() into its
own function.
https://bugzilla.gnome.org/show_bug.cgi?id=745973
Previously we advanced the in_data pointer by bps for every channel, and then
later again for block_size*bps. This caused us to be one sample further than
expected if an input buffer covered two analysis frames. And in the end led
to completely bogus values reported by level.
https://bugzilla.gnome.org/show_bug.cgi?id=746065
These are outside the expected range of sequence numbers and should be
clipped, especially for RTSP they might belong to packets from before a seek
or a previous stream in general.
We need to set up the transport in any case, not just if we have a container
stream or a non-interleaved stream. Only when we have an interleaved stream
and we are retrying should we skip setting up the stream again.
https://bugzilla.gnome.org/show_bug.cgi?id=745599
Otherwise we will get not-negotiated later from rtpbin, and will never be able
to send RTCP packets back to the server. Note that error flow returns from the
RTCP pads are ignored, that's why it didn't fail more visibly before.
This reverts commit 1591adf4cd.
https://bugzilla.gnome.org/show_bug.cgi?id=745586#c1:
It's the beginning of an implementation of RFC 2762, which is needed for
large multicast groups. The implementation is not yet complete but why
not leave what is there and implement RFC 2762 instead?
rtpsession declares an array of maps to store ssrcs but only
the key 0 is being used. This patch replaces the array of maps
with just one map and removes useless parameters in rtpsession.
https://bugzilla.gnome.org/show_bug.cgi?id=745586
In gst_avi_demux_handle_src_query, there is unneeded code.
We already check whether the stream is VBR or not a few lines above,
so we don't need to check this condition again because the stream is
guaranteed not to be VBR in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=745276
Unlike many other seek flags, the KEY_UNIT seek
flag is not copied over into the GstSegment,
since it's only relevant for the seek itself,
so we need to pass it explicitly to the seek
handler here.
https://bugzilla.gnome.org/show_bug.cgi?id=745339
We need different symbol names, because these symbols are also present
in the fragmented plugin ... which will cause conflicts when doing
static linking
The number of FFTs is calculated with the following formula:
guint nfft = 2 * bands - 2;
nfft is passed to gst_fft_f32_new() as the len argument and is of type
unsigned integer. This method requires that len is at least 1 and at
most G_MAXINT, as other values would be negative. If we extrapolate
from the formula above it means we need "bands" to be between 2 and
((guint) G_MAXINT + 2) / 2.
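A sketch of the corresponding bounds check (illustrative only):
  /* bands must be in [2, ((guint) G_MAXINT + 2) / 2] so that
   * nfft = 2 * bands - 2 stays within [1, G_MAXINT] */
  g_return_if_fail (bands >= 2 && bands <= ((guint) G_MAXINT + 2) / 2);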
https://bugzilla.gnome.org/show_bug.cgi?id=744213
Using the sparse streams can make the push-based seeking return
too far in the stream. It also can lead to issues as the
sparse streams will be ignored when restarting playback and,
if the sparse stream is the one that has the earliest sample,
it will confuse qtdemux's offsets as one stream will have
an earlier offset than the demuxer's one which might lead to
early EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=742661
Parse the 'sidx' atom and update the total duration according to the
parser result. The isoff parser code is imported from
gst-plugins-bad's dashdemux and a gst_isoff_sidx_parser_add_data()
function was factored out of the gst_isoff_sidx_parser_add_buffer()
function.
https://bugzilla.gnome.org/show_bug.cgi?id=743578
According to RFC 4585 section 3.5.3 step 1 we are not allowed to send
an early RTCP packet for the very first one. It must be a regular one.
Also make sure to not use last_rtcp_send_time in any calculations until
we actually sent an RTCP packet already. Specifically, this means that we
must not use it for forward reconsideration of the current RTCP send time.
Instead we don't do any forward reconsideration for the first RTCP packet.
Assignment is done to the variable segment.stop when the intention was to
assign to the local variable stop. Instead of overwriting it, the value is now
clamped and segment.stop is set to it soon after.
CID #1265773
Handle the case where a short file reaches EOS while we're still
waiting for no-more-pads, and make sure we continue to the internal
READY state for real playback to work properly later.
Implement 2 new elements - splitmuxsink and splitmuxsrc.
splitmuxsink is a bin which wraps a muxer and takes 1 video stream,
plus audio/subtitle streams, and starts a new file
whenever necessary to avoid overrunning a threshold of either bytes
or time. New files are started at a keyframe, and corresponding audio
and subtitle streams are split at packet boundaries to match
video GOP timestamps.
splitmuxsrc is a corresponding source element which handles
the splitmux:// URL and plays back all component files,
reconstructing the original elementary streams as it goes.
We detect a container correctly now so we need to revert the weird
check there was before.
Use gst_rtspsrc_stream_push_event() to push the caps event on the
right pad.
See https://bugzilla.gnome.org/show_bug.cgi?id=739391
Keep global and stream tags separately and parse the udta node
that can be found under the trak atom. The udta will contain
stream specific tags and will be pushed as such.
https://bugzilla.gnome.org/show_bug.cgi?id=692473
Tags received via events, when marked as stream tags, will
be stored on that stream's trak atom instead of being stored
in the main tags atom. This allows the resulting file to have
global and stream tags stored.
https://bugzilla.gnome.org/show_bug.cgi?id=692473
Refactor the functions that were bound to the 'moov' atom to
directly pass the desired 'udta' that should receive the tags.
This allows the tags to be written to 'udta' at the 'moov' or
the 'trak' level, creating tags that are for the container or
for a stream only.
https://bugzilla.gnome.org/show_bug.cgi?id=692473
Snap to the end of the file when seeking past the end in reverse mode,
and also fix GST_SEEK_TYPE_END and GST_SEEK_TYPE_NONE handling
for the stop position by always seeking on a segment in stream time
This will be emitted whenever an RTCP packet is received. Unlike
on-feedback-rtcp, this signal gets every complete RTCP packet and not
just the individual feedback packets.
For fragmented streams with extra data at the end of the mdat
qtdemux was not dropping those bytes and would try to use
that extra data as the beginning of a new atom, causing the
stream to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=743407
It had no effect since quite some time and also is not needed in general,
especially not to switch between immediate feedback mode and early feedback
mode. The latest understanding of the RFC is that from the endpoint point of
view, both modes are exactly the same. RTCP is only allowed to use the
bandwidth as given by the RFC constraints, as such it is only ever possible
to schedule an RTCP packet early but it's against the RFC to schedule more RTCP
packets.
The difference between immediate feedback mode and early feedback mode is that
the former guarantees that an RTCP packet can be sent for every event
"immediately", which means that the bandwidth calculations from the RFC have
resulted in an RTCP scheduling interval that is small enough. Early feedback
mode on the other hand means that we can schedule some packets early to make
that happen, but it's not guaranteed at all that it's possible to schedule
an RTCP packet per event (i.e. they need to be accumulated or dropped).
This indicates with a boolean return value if scheduling a new RTCP packet
within the requested delay was possible. Otherwise it behaves exactly like
send-rtcp. The only reason for adding a new signal is ABI compatibility.
No matter if gst_matroska_read_common_parse_index_cuetrack () returns that the
flow is OK or not, the result of the check there is a break from the switch.
Remove the check since the outcome is the same.
CID #1265762
Bring back the check removed in the previous commit but make that check a
g_assert. Change the function to static void since the return can never be
FALSE, because the audio format will never be unknown.
func_index is set by the sum of three ternary operators which add 0 or 4,
0 or 2, and 1 or 0. The minimum value would be 0+0+0=0, and the maximum 4+2+1=7.
The conditional checking if func_index is >= 0 and < 8 will always be true.
Removing it.
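The pattern in question looks roughly like this (the conditions are
illustrative only):
  /* each term contributes its value or 0, so func_index is always
   * within 0..7 and the old range check could never fail */
  func_index = (cond_a ? 4 : 0) + (cond_b ? 2 : 0) + (cond_c ? 1 : 0);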
CID 1226442
We (currently?) can't really handle gaps between RTP packets if they're not
properly timestamped. The current code would go into calculations with
GST_CLOCK_TIME_NONE and then cause assertions everywhere. It's probably
better to error out cleanly instead.
We set the state to PLAYING after we have configured the caps, otherwise we
might end up calling request_key (with SRTP) while caps are still
being configured, ending in a crash.
https://bugzilla.gnome.org/show_bug.cgi?id=740505
Actually copy the codec data instead of copying nothing
and then bombing out because there's no data.
Fixes: gst-launch-1.0 audiotestsrc ! avenc_alac ! qtmux ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=741783
Apparently linphone sends an invalid RTP packet as very
first packet. We want to ignore that instead of erroring
out (same for any other invalid packets really).
https://bugzilla.gnome.org/show_bug.cgi?id=741398
Instead of constantly querying upstream, just cache the last duration,
and in the unlikely case that we might have gone past it, query again before
deciding we are EOS.
Cuts 15% CPU off the matroskademux streaming thread (srsly...)
This is meant to be so (https://wiki.xiph.org/MatroskaOpus - while
it is marked as a draft, this part was confirmed to be correct on
IRC), and allows one to determine whether a demuxed stream is
multistream or not, and thus set the multistream caps field
accordingly. In turn, this means downstream does not have to guess.
https://bugzilla.gnome.org/show_bug.cgi?id=740744
It's like rendering a buffer list, just with one buffer.
Has the added advantage that if there are multiple clients
we can send the buffer to all the clients in one go.
We unlock and re-lock the client lock while emitting the
removed signal, which causes inconsistencies in the client
list vs. the client counts. Instead, remove the client from
the list already before emitting the signal and put it into
a temporary list of clients to be removed. That way things
look consistent to the streaming thread, but signal callbacks
can still do things like get stats from removed clients.
Add prototype for a render_list() function that can use a
sendmmsg-style g_socket_send_messages() function once it lands
in GLib. We can use this infrastructure to send multiple buffers
made up by multiple memories to multiple clients in one go, which
drastically reduces the number of syscalls made when sending
high-bitrate video streams.
https://bugzilla.gnome.org/show_bug.cgi?id=732152
Use the refcount for memory management and keep track
of the number of duplicate clients in a separate
variable. This will be useful later, and means we
don't have to hold the OBJECT_LOCK all the time.
https://bugzilla.gnome.org/show_bug.cgi?id=732866
Since "basetransform: Fix caps equality check" commit a7f357,
set_info() will not be called anymore if crop didn't change
the caps. This is fixed by setting "need_update" boolean when
cropping properties has been changed, and then applying these
if they where not applied before rendering the next frame. This
patch also fixed the locking, dropping un-needed custom lock,
and no holding needless lock while doing the operation as we
already hold the streaming lock.
https://bugzilla.gnome.org/show_bug.cgi?id=740787
In some cases the currently set GstVideoInfo is not interlaced, but
upstream caps are interlaced and that info is passed in the filter.
We should take that info into account and make sure that we do not
consider that case as a "pass through" case.
https://bugzilla.gnome.org/show_bug.cgi?id=741407
A race condition in the state change function may cause buffers
to be unreffed while they are still used by the streaming thread
in gst_rtp_h264_pay_send_sps_pps() resulting in a crash. Chain
up to the parent class first in the state change function to
make sure streaming has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
When dealing with fragmented files, we will get more accurate duration
information via the mfra and moof atoms.
In order for playback to not stop at the initial duration (from the
moov atom), we need to check and update the various duration variables
when we find more information.
Fixes playback of fragmented files in pull mode
Adds a new set of properties to make pushfilesrc output a TIME SEGMENT
(instead of the filesrc BYTE SEGMENT).
When time-segment is set to TRUE the following will happen:
* Seeks are refused (data starts from the beginning of the file)
* The BYTE segment will be replaced by a TIME segment with the values
specified in the various properties
* The first outgoing buffer will have a timestamp set on it (by default
it has a value of GST_CLOCK_TIME_NONE)
When seeking or finding the previous keyframe, do
comparisons against targets and segments using composition time
to correctly decide which sample times match.
We used to set up an iterator with 1 GValue set with a NULL object
pointer which is not the normal way to do that. Instead we should make
sure that the first call to gst_iterator_next returns GST_ITERATOR_DONE.
Currently during header parsing, we scan through the entire file
and skip every moof+mdat chunk for fragmented mp4s, which makes
start-up incredibly slow. Instead, just stop at the first moof
chunk when we have a moov, and start exposing the streams, so we
can go and start handling the moofs for real.
When a caps event is received, we must immediately update the crop
to match the new caps dimensions; otherwise videocrop will first use
the previous crop values, which may cause an error when the video is
resized to a smaller resolution.
https://bugzilla.gnome.org/show_bug.cgi?id=740671
Empty segments in an edit list have a media_start time of -1,
as they don't actually play any media. Allow for that when
aligning to the reference stream in reverse play.
Put a 0-byte at the end of the event string. Does not break ABI because
old depayloaders will skip the 0 byte (which is included in the length).
Expect a 0-byte at the end of the event string or a ; for old
payloaders.
See https://bugzilla.gnome.org/show_bug.cgi?id=737591
Both Firefox and Chrome use VP8 as the encoding in their SDP.
Adding this now de facto standard name removes the need for a special
case in the SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
Add fixed payload type for mp2t to template caps as well, so
our output caps match the advertised default pt. Fixes a
regression from 1.2.
There's still something wrong with caps negotiation though,
rtpmp2tpay payload=96 ! fakesink will not output caps with
payload=96.
Fixes crash in audiotestsrc because of an unsupported format
getting negotiated on big-endian systems with
audiotestsrc ! interleave ! audioconvert ! wavenc
When the RTT and jitter are very low (such as on a local network), the
calculated retransmission timeout is very small. Set some sensible lower
boundary to the timeout by adding a new property. We use the packet
spacing as a lower boundary by default.
In early retransmission we are allowed to schedule 1 regular RTCP packet
at an earlier time. When we do that, we need to set allow_early to FALSE
and ignore/drop (or merge) all future requests for early transmission.
We now first check if we can schedule an early RTCP and if we can,
actually prepare the data for the next RTCP interval.
After we send the next regular RTCP after the early RTCP, we set
allow_early to TRUE again to allow more early requests.
Remove the condition for the immediate feedback for now.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=738319
Add a need-resync state, this is when we need to try to lock on to a
time/RTPtime pair.
Always check the RTP timestamps and if they go backwards, mark ourselves
as need-resync.
Only resync when need-resync is TRUE and we have a valid time. Otherwise
we keep the old values. This avoids locking on to an invalid time and
causing us to timestamp everything with -1.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730417
rtpmux behaves like a funnel in that it forwards buffers from whichever
upstream is sending them. So setting proxy caps doesn't make sense as the
upstreams don't have to have compatible caps, thus resulting in an empty
caps set as a result of a caps query. Instead set fixed caps just
as funnel does.
https://bugzilla.gnome.org/show_bug.cgi?id=738722
left, right, top, bottom can be set in a range from -2147483648 to 2147483647.
When I launch the videobox element with those values, it gives a critical error:
(gst-check-1.0:29869): GStreamer-CRITICAL **: gst_value_set_int_range_step: assertion 'start < end' failed
This happens because min cannot be equal to max.
https://bugzilla.gnome.org/show_bug.cgi?id=738838
The loop in zoomFilterSetResolution is meant to change the values in the
zf->firedec[] array. Each iteration writes the value of decc onto the array,
but no conditions that change the value of decc are ever met and the array is
filled with zero for each element, which is the initial state of the
array before the loop begins.
The loop does nothing.
https://bugzilla.gnome.org/show_bug.cgi?id=728353