We cannot take the RTSPStream lock while holding a transport backlog
lock, as remove_transport may be called externally, which will
first take the RTSPStream lock, then the transport backlog lock.
This ensures we don't end up calling any of the transports' callbacks
with a potentially unreffed user_data (in practice, a client that
may have been removed).
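A minimal sketch of the resulting lock-order discipline; the struct and
field names below are hypothetical stand-ins for the rtsp-server internals:

```c
#include <glib.h>

typedef struct { GMutex backlog_lock; /* per-transport backlog ... */ } Transport;
typedef struct { GMutex lock; GList *transports; } Stream;

/* remove_transport (callable externally) defines the canonical order:
 * stream lock first, then the backlog lock */
static void
remove_transport (Stream * stream, Transport * tr)
{
  g_mutex_lock (&stream->lock);
  g_mutex_lock (&tr->backlog_lock);
  stream->transports = g_list_remove (stream->transports, tr);
  g_mutex_unlock (&tr->backlog_lock);
  g_mutex_unlock (&stream->lock);
}

/* any path already holding a backlog lock must release it before
 * taking the stream lock, or it can deadlock against the above */
```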
In order to address the race condition pointed out at
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/merge_requests/108#note_403579
we get rid of the send thread pool, and instead spawn and manage
a single thread to pull samples from app sinks and add them to
the transport's backlogs.
Additionally, we now also always go through the backlogs in order
to simplify the logic.
Fixes #97
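A rough sketch of such a dedicated send thread; the Stream struct and
transport_backlog_push() are hypothetical placeholders for the real
internals:

```c
#include <gst/app/gstappsink.h>

typedef struct {
  volatile gint running;
  GstAppSink *appsink;
  GList *transports;
} Stream;

static void transport_backlog_push (gpointer transport, GstSample * sample);

static gpointer
send_thread_func (gpointer user_data)
{
  Stream *stream = user_data;

  while (g_atomic_int_get (&stream->running)) {
    /* blocks until a sample is available, returns NULL on EOS/flush */
    GstSample *sample = gst_app_sink_pull_sample (stream->appsink);

    if (sample == NULL)
      break;

    for (GList * l = stream->transports; l != NULL; l = l->next)
      transport_backlog_push (l->data, sample);

    gst_sample_unref (sample);
  }
  return NULL;
}
```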
We cannot hold stream->lock while pushing data, but need
to consistently check the state of the backlog both from
the send_tcp_message function and the on_message_sent function,
which may or may not be called from the same thread.
This commit introduces internal API to allow for potentially
recursive locking of transport streams, addressing a race
condition where the RTSP stream could push items out of order
when popping them from the backlog.
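The internal API itself is not shown here; a minimal sketch of
recursion-aware locking with owner tracking (all names hypothetical)
could look like:

```c
#include <glib.h>

typedef struct {
  GMutex lock;
  GThread *owner;   /* thread currently holding the lock, or NULL */
  guint depth;      /* recursion depth of the owning thread */
} TransportLock;

static void
transport_lock (TransportLock * l)
{
  GThread *self = g_thread_self ();

  if (g_atomic_pointer_get (&l->owner) != self) {
    g_mutex_lock (&l->lock);
    g_atomic_pointer_set (&l->owner, self);
  }
  /* already the owner: just track the recursion */
  l->depth++;
}

static void
transport_unlock (TransportLock * l)
{
  if (--l->depth == 0) {
    g_atomic_pointer_set (&l->owner, NULL);
    g_mutex_unlock (&l->lock);
  }
}
```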
The internal index of our appsinks, while it can be used to
determine whether a message is RTP or RTCP, is not necessarily
the same as the interleaved channel. Let the stream-transport
determine the channel to check backpressure for, the same way
it determines the channel according to whether it is sending
RTP or RTCP.
When removing transports, an assertion was that the transports passed
in for removal are present in the list; however, that can't be assumed.
As an example, if a transport was removed from a thread running
send_tcp_message, the main thread can try to remove the same transport
again if it gets a handle_pause_request. This will not affect the
transport list, but it will affect n_tcp_transports, as it will be
decremented and end up with the wrong value.
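A sketch of the defensive removal, with hypothetical struct fields: the
counter is only decremented when the transport is actually found in the
list.

```c
#include <glib.h>

typedef struct {
  GList *transports;
  guint n_tcp_transports;
} Stream;

static void
remove_transports (Stream * stream, GList * to_remove)
{
  for (GList * l = to_remove; l != NULL; l = l->next) {
    /* another thread may already have removed this transport */
    if (g_list_find (stream->transports, l->data) == NULL)
      continue;

    stream->transports = g_list_remove (stream->transports, l->data);
    stream->n_tcp_transports--;   /* only decrement on an actual removal */
  }
}
```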
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
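For illustration (the signal and marshaller names are made up):

```c
/* Passing NULL as the marshaller lets GLib pick the generic marshaller
 * and, where possible, an optimized valist marshaller internally: */
signals[SIGNAL_NEW_SAMPLE] =
    g_signal_new ("new-sample", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    NULL /* marshaller */, G_TYPE_NONE, 0);

/* With a custom marshaller, register its valist variant explicitly: */
signals[SIGNAL_NEW_SAMPLE] =
    g_signal_new ("new-sample", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    my_marshal_VOID__BOXED /* hypothetical */, G_TYPE_NONE, 1,
    GST_TYPE_SAMPLE);
g_signal_set_va_marshaller (signals[SIGNAL_NEW_SAMPLE],
    G_TYPE_FROM_CLASS (klass), my_marshal_VOID__BOXEDv /* hypothetical */);
```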
The previous implementation stopped sending TCP messages to
all clients when a single one stopped consuming them, which
obviously created problems for shared media.
Instead, we now manage a backlog in stream-transport, and slow
clients are removed once this backlog exceeds a maximum duration,
currently hardcoded.
Fixes #80
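A sketch of the eviction check when appending to a transport's backlog;
the duration helper and the limit are hypothetical, while
gst_rtsp_stream_remove_transport() is existing API:

```c
#include <gst/rtsp-server/rtsp-server.h>

#define MAX_BACKLOG_DURATION (10 * GST_SECOND)  /* hypothetical limit */

static GstClockTime transport_backlog_duration (GstRTSPStreamTransport * t);

static void
backlog_check (GstRTSPStream * stream, GstRTSPStreamTransport * trans)
{
  /* a slow client lets its backlog grow; drop only that client once the
   * queued duration exceeds the limit, instead of stalling shared media */
  if (transport_backlog_duration (trans) > MAX_BACKLOG_DURATION)
    gst_rtsp_stream_remove_transport (stream, trans);
}
```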
If one thread is inside the send_tcp_message function and is done
sending RTP or RTCP messages, so that the n_outstanding variable is
zero, but has not yet exited the loop sending the messages, transports
may meanwhile have been added to or removed from the transport list,
in which case the cache should be updated. If an additional thread now
enters send_tcp_message and tries to send RTP messages, it will first
destroy the RTP cache that is still being iterated through by the
first thread.
Fixes #81
Without this patch there are problems pre-rolling when using an audio
backchannel.
Without this patch a probe will be created for all streams, including
the stream for the audio backchannel. To pre-roll, all these pads have
to receive data. Since the stream for the audio backchannel is a
receiver, this will never happen.
The solution is to never create any probes for streams that carry
incoming data and instead set those pads as blocking from the
beginning.
GStreamer plays segments from stop to start when doing reverse
playback. RTSP implies that media should be played from the start of
the Range header to its stop. Hence we swap start and stop times before
passing them to gst_element_seek.
Also make gst_rtsp_stream_query_stop always return a value that can
be used as the stop time of the Range header.
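A sketch of the swap, assuming range_start/range_stop hold the Range
header values:

```c
#include <gst/gst.h>

/* for reverse playback range_start is the later media time, i.e.
 * range_start > range_stop */
static gboolean
seek_reverse (GstElement * pipeline, gdouble rate,
    GstClockTime range_start, GstClockTime range_stop)
{
  g_assert (rate < 0);

  /* GStreamer segments require start < stop; with a negative rate the
   * segment is played from stop down to start, so the values are swapped */
  return gst_element_seek (pipeline, rate, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, range_stop,    /* segment start = Range stop  */
      GST_SEEK_TYPE_SET, range_start);  /* segment stop  = Range start */
}
```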
Add new function gst_rtsp_media_seek_full_with_rate() which allows the
caller to specify the rate for the seek. Also add functions in
rtsp-stream and rtsp-media for retrieving the current rate and applied
rate.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
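Usage could look roughly like this, assuming the new function takes the
time range, seek flags and rate:

```c
/* flushing seek to the given Range at double speed */
gst_rtsp_media_seek_full_with_rate (media, range, GST_SEEK_FLAG_FLUSH, 2.0);

/* reverse playback at normal speed */
gst_rtsp_media_seek_full_with_rate (media, range, GST_SEEK_FLAG_FLUSH, -1.0);
```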
If we do not wait for the thread pool to finish before cleaning the
transport caches, there can be a crash if a thread is still executing
the transport list loop in the send_tcp_message function.
Also add a check for priv->send_pool in on_message_sent to avoid a new
thread being pushed while waiting for the thread pool to finish. This
is possible because the mutex has to be unlocked while waiting for the
thread pool.
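A sketch of both sides, with a hypothetical Priv struct standing in for
the instance state:

```c
#include <glib.h>

typedef struct {
  GMutex lock;
  GThreadPool *send_pool;
} Priv;

static void
wait_for_send_pool (Priv * priv)
{
  GThreadPool *pool;

  g_mutex_lock (&priv->lock);
  pool = priv->send_pool;
  priv->send_pool = NULL;        /* tells on_message_sent to bail out */
  g_mutex_unlock (&priv->lock);  /* cannot hold the lock while waiting */

  if (pool != NULL)
    g_thread_pool_free (pool, FALSE /* immediate */, TRUE /* wait */);
  /* ... now safe to clean the transport caches ... */
}

static void
on_message_sent (Priv * priv, gpointer data)
{
  g_mutex_lock (&priv->lock);
  if (priv->send_pool != NULL)   /* skip if we are shutting down */
    g_thread_pool_push (priv->send_pool, data, NULL);
  g_mutex_unlock (&priv->lock);
}
```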
This adds new functions for passing buffer lists through the different
layers without breaking API/ABI, and enables the appsink to actually
provide buffer lists.
This should already reduce CPU usage and potentially context switches a
bit by passing a whole buffer list from the appsink instead of
individual buffers. As a next step it would be necessary to
a) Add support for a vector of data for the GstRTSPMessage body
b) Add support for sending multiple messages at once to the
GstRTSPWatch and let it be handled internally
c) Add API to GOutputStream that works like writev()
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/29
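For illustration, this is roughly how a consumer opts in to buffer
lists on an appsink; the handling code is a sketch:

```c
/* at setup time: the appsink may now hand out whole buffer lists */
g_object_set (appsink, "buffer-list", TRUE, NULL);

/* when consuming: a sample may carry a list instead of a buffer */
GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (appsink));
GstBufferList *list = gst_sample_get_buffer_list (sample);

if (list != NULL) {
  /* build and send a single message for the whole list */
} else {
  GstBuffer *buffer = gst_sample_get_buffer (sample);
  /* single-buffer fallback */
}
gst_sample_unref (sample);
```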
In plug_src we changed the element state before adding it to
the owner container. This prevented the pipeline from intercepting
a GST_STREAM_STATUS_TYPE_CREATE message from the pad in order
to assign a custom task pool.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/53
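For reference, a sketch of the interception this enables: a sync bus
handler (installed with gst_bus_set_sync_handler()) can assign a custom
GstTaskPool when the streaming task is created.

```c
#include <gst/gst.h>

static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstTaskPool *pool = user_data;        /* the custom task pool */

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAM_STATUS) {
    GstStreamStatusType type;
    GstElement *owner;

    gst_message_parse_stream_status (msg, &type, &owner);
    if (type == GST_STREAM_STATUS_TYPE_CREATE) {
      const GValue *val = gst_message_get_stream_status_object (msg);

      /* the object is the GstTask about to be started */
      if (G_VALUE_HOLDS (val, GST_TYPE_TASK))
        gst_task_set_pool (GST_TASK (g_value_get_object (val)), pool);
    }
  }
  return GST_BUS_PASS;
}
```

This only works if the element is already inside the bin when the task
is created, which is what the ordering fix above restores.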
The sequence number in the rtpinfo is supposed to be the first RTP
sequence number. The "seqnum" property on a payloader is supposed to be
the number from the last processed RTP packet, and for payloaders that
inherit from gstrtpbasepayload it will not be correct in the case of
buffer lists. In order to allow fixing the "seqnum" property on the
payloaders, gst-rtsp-server must get the sequence number for the
rtpinfo elsewhere: "seqnum-offset" from the "stats" property contains
the value of the very first RTP packet in a stream. The server will,
however, first try to look at the last sample in the sink element and
only use properties on the payloader when there are no sink elements
yet; looking at the last sample of the sink gives the server full
control over which RTP packet it looks at. If the payloader does not
have the "stats" property, "seqnum" is still used, since
"seqnum-offset" is only available as part of "stats"; that case remains
an issue not solved by this patch.
Needed for gst-plugins-base!17
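A sketch of the lookup on the payloader side, falling back to "seqnum"
as described:

```c
#include <gst/gst.h>

static guint
get_first_seqnum (GstElement * payloader)
{
  GstStructure *stats = NULL;
  guint seqnum = 0;

  if (g_object_class_find_property (G_OBJECT_GET_CLASS (payloader),
          "stats") != NULL) {
    g_object_get (payloader, "stats", &stats, NULL);
    if (stats != NULL) {
      /* the sequence number of the very first RTP packet */
      gst_structure_get_uint (stats, "seqnum-offset", &seqnum);
      gst_structure_free (stats);
    }
  } else {
    /* fallback: last processed packet, still wrong with buffer lists */
    g_object_get (payloader, "seqnum", &seqnum, NULL);
  }
  return seqnum;
}
```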
When the underlying layers are running on_message_sent, this sometimes
causes the underlying layer to send more data, which in turn causes the
on_message_sent callback to run again. This can go on and on.
To break this chain, we introduce an idle source that takes care of
sending data if there is more to send when the callback runs.
https://bugzilla.gnome.org/show_bug.cgi?id=797289
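A sketch of deferring the next send to an idle source;
send_pending_data() is a hypothetical callback:

```c
#include <glib.h>

static gboolean send_pending_data (gpointer transport); /* hypothetical */

static void
schedule_send (GMainContext * context, gpointer transport)
{
  GSource *idle = g_idle_source_new ();

  /* the idle callback performs the next send outside of
   * on_message_sent, breaking the recursion */
  g_source_set_callback (idle, send_pending_data, transport, NULL);
  g_source_attach (idle, context);
  g_source_unref (idle);
}
```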
By default the multicast sockets are bound to INADDR_ANY,
as it's not allowed to bind sockets to multicast addresses
on Windows. This default behaviour can be changed by setting the
bind-mcast-address property on the media-factory object.
https://bugzilla.gnome.org/show_bug.cgi?id=797059
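For example, on the media factory:

```c
/* allow binding the multicast sockets to the multicast address
 * (not supported on Windows, hence disabled by default) */
g_object_set (factory, "bind-mcast-address", TRUE, NULL);
```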
Export rtsp-server library API in headers when we're building the
library itself, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
Fix up some missing config.h includes when building the lib, which is
needed to get the export API define from config.h.
https://bugzilla.gnome.org/show_bug.cgi?id=797185
When media is shared, the same media stream can be sent
to multiple multicast groups. Currently, there is no API
to retrieve multicast addresses from the stream.
When calling the gst_rtsp_stream_get_multicast_address() function,
only the first multicast address is returned.
With this patch, each multicast destination requested in SETUP
will be stored in an internal list (call to
gst_rtsp_stream_add_multicast_client_address()).
The list of multicast groups requested by the clients can be
retrieved by calling gst_rtsp_stream_get_multicast_client_addresses().
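Usage might look like this, assuming the getter returns a newly
allocated, comma separated list of destinations:

```c
gchar *addrs = gst_rtsp_stream_get_multicast_client_addresses (stream);

GST_INFO ("multicast destinations requested so far: %s", addrs);
g_free (addrs);
```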
There still exist some problems with the current implementation
in the multicast case:
1) The receiving part is currently only configured with
regard to the first multicast client (see
https://bugzilla.gnome.org/show_bug.cgi?id=796917).
2) For security reasons, some constraints should be
put on the requested multicast destinations (see
https://bugzilla.gnome.org/show_bug.cgi?id=796916).
Change-Id: I6b060746e472a0734cc2fd828ffe4ea2956733ea
https://bugzilla.gnome.org/show_bug.cgi?id=793441
The maximum ttl value provided so far by the multicast clients
will be chosen and reported in the response to the current
client request.
Change-Id: I5408646e3b5a0a224d907ae215bdea60c4f1905f
https://bugzilla.gnome.org/show_bug.cgi?id=793441
If "transport.client-settings" parameter is set to true, the client is
allowed to specify destination, ports and ttl.
There is no need for pre-configured address pool.
Change-Id: I6ae578fb5164d78e8ec1e2ee82dc4eaacd0912d1
https://bugzilla.gnome.org/show_bug.cgi?id=793441
If "transport.client-settings" parameter is set to true, the client is
allowed to specify destination, ports and ttl.
There is no need for pre-configured address pool.
https://bugzilla.gnome.org/show_bug.cgi?id=793441
multiudpsink does not support setting the socket* properties
after it has started, which meant that rtsp-server could no
longer serve on both IPv4 and IPv6 sockets since the patches
from https://bugzilla.gnome.org/show_bug.cgi?id=757488 were
merged.
When first connecting an IPv6 client and then an IPv4 client,
multiudpsink fell back to using the IPv6 socket.
When first connecting an IPv4 client and then an IPv6 client,
multiudpsink errored out, released the IPv4 socket, then
crashed when trying to send a message on the NULL socket; that
is, however, a separate issue.
This could probably be fixed by handling the setting of
sockets in multiudpsink after it has started, but that would
be a much more significant effort.
For now, this commit partially reverts the behaviour of
rtsp-stream: it will continue to create the udpsinks only
when needed, as has been the case since the patches were
merged; when creating them, however, it will always allocate
both sockets and set them on the sink before it starts, as was
the case prior to the patches.
Transport configuration will only error out if the allocation
of UDP sockets fails for the actual client's family. This
also downgrades the GST_ERRORs in alloc_ports_one_family
to GST_WARNINGs, as failing to allocate is no longer
necessarily fatal.
https://bugzilla.gnome.org/show_bug.cgi?id=796875
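A sketch of the up-front configuration; socket_v4/socket_v6 are assumed
to be pre-bound GSockets:

```c
#include <gst/gst.h>
#include <gio/gio.h>

static void
configure_udpsink_sockets (GstElement * udpsink,
    GSocket * socket_v4, GSocket * socket_v6)
{
  /* multiudpsink cannot take new sockets once started, so both
   * families are handed over before the sink reaches PLAYING */
  g_object_set (udpsink,
      "socket", socket_v4,
      "socket-v6", socket_v6,
      "close-socket", FALSE,    /* rtsp-stream keeps ownership */
      NULL);
}
```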
Before, the watch backlog size in GstRTSPClient was changed
dynamically between unlimited and a fixed size, trying to avoid both
unlimited memory usage and deadlocks while waiting for a place in the
queue. (Some of the deadlocks were described in a long comment in
handle_request().)
In the previous commit, we changed to a fixed backlog size of 100.
This is possible because we now handle RTP/RTCP data messages differently
from RTSP request/response messages.
The data messages are messages tunneled over TCP. We allow at most one
queued data message per stream in GstRTSPClient at a time, and
successfully sent data messages are acked via the "message-sent"
callback on the GstRTSPStreamTransport. Until that ack comes, the
GstRTSPStream does not call pull_sample() on its appsink, and
therefore the streaming thread in the pipeline will not be blocked
inside GstRTSPClient, waiting for a place in the queue.
pull_sample() is called when we have both an ack and a "new-sample"
signal from the appsink. Then, we know there is a buffer to write.
RTSP request/response messages are not acked in the same way as data
messages. The rest of the 100 places in the queue are used for
them. If the queue becomes full of request/response messages, we
return an error and close the connection to the client.
Change-Id: I275310bc90a219ceb2473c098261acc78be84c97
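A sketch of the gating described above, with hypothetical state fields;
the real bookkeeping lives in GstRTSPClient/GstRTSPStream:

```c
#include <gst/app/gstappsink.h>

typedef struct {
  GstAppSink *appsink;
  gboolean acked;        /* last data message was "message-sent"-acked */
  gboolean have_sample;  /* appsink signalled "new-sample" */
} StreamCtx;

static void queue_data_message (StreamCtx * ctx, GstSample * sample);

/* called both from the "new-sample" signal and from the ack callback */
static void
maybe_send_next (StreamCtx * ctx)
{
  if (!ctx->acked || !ctx->have_sample)
    return;

  ctx->acked = FALSE;
  ctx->have_sample = FALSE;

  /* safe: we know a buffer is waiting, so this will not block */
  queue_data_message (ctx, gst_app_sink_pull_sample (ctx->appsink));
}
```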
Fix race when setting up source elements.
Since we set the source element(s) to PLAYING state before hooking
them up to the downstream funnel, it's possible for the source element
to receive packets before we actually get to linking it to the funnel,
in which case buffers would be pushed out on an unlinked pad, causing
it to error out and stop receiving more data.
We fix this by blocking the source's srcpad until we have linked it.
https://bugzilla.gnome.org/show_bug.cgi?id=796160
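A sketch of the blocking pattern, with hypothetical pad variables:

```c
#include <gst/gst.h>

static void
link_when_blocked (GstPad * srcpad, GstPad * funnel_sinkpad)
{
  /* block the pad before the source can push anything ... */
  gulong probe_id = gst_pad_add_probe (srcpad,
      GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, NULL, NULL, NULL);

  /* ... link it while it is guaranteed not to push ... */
  gst_pad_link (srcpad, funnel_sinkpad);

  /* ... then release the data flow */
  gst_pad_remove_probe (srcpad, probe_id);
}
```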