There was a race condition where the client was being finalized while,
concurrently, the RTSP ctrl timeout in another thread was relying on
client data that was being freed.
When the RTSP ctrl timeout is set up, a WeakRef on the client is taken.
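A minimal sketch of the pattern, with illustrative names (the actual
timeout callback and fields in rtsp-client differ):

    #include <gst/rtsp-server/rtsp-server.h>

    static GWeakRef client_ref;  /* illustrative: set when the ctrl timeout is armed */

    static gboolean
    ctrl_timeout_cb (gpointer user_data)
    {
      /* Try to get a strong reference; NULL means the client has
       * already been finalized and must not be touched. */
      GstRTSPClient *client = g_weak_ref_get (&client_ref);

      if (client == NULL)
        return G_SOURCE_REMOVE;

      /* ... safely use the client data here ... */

      g_object_unref (client);
      return G_SOURCE_CONTINUE;
    }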
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/121>
We always need to take the lock while accessing it, as otherwise another
thread might have removed it in the meantime. Also, when destroying it
and creating a new one, ensure that the mutex is not briefly unlocked in
between, as during that window another one might already be created.
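The intended pattern, roughly (sketch; priv->timeout and the lock name
are illustrative stand-ins for the real fields):

    g_mutex_lock (&priv->lock);

    /* Destroy the old source and create its replacement without ever
     * releasing the lock in between, so no other thread can create a
     * second one during that window. */
    if (priv->timeout != NULL) {
      g_source_destroy (priv->timeout);
      g_source_unref (priv->timeout);
    }
    priv->timeout = g_timeout_source_new_seconds (interval);
    /* ... set callback and attach to the context, still under the lock ... */

    g_mutex_unlock (&priv->lock);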
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/119>
They previously used the same state but different mechanisms and
functions, which was difficult to follow, error-prone and simply
confusing.
Also adjust the test for the post-session timeout a bit to be less racy
now that the timing has slightly changed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/115>
There might be other sessions running over the same RTSP connection,
so we should not simply close the client directly if one of them is
torn down.
By default the connection will be closed once the client closes it or
the OS does. This behaviour can be adjusted with the
post-session-timeout property, which allows closing it automatically
from the server side after all sessions are gone and the given timeout
has been reached.
This reverts the previous commit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/115>
Move the usage of priv->watch_context to the beginning of
gst_rtsp_client_finalize(), instead of using it after
g_main_context_unref (priv->watch_context).
+ Take the watch lock prior to using priv->watch
+ Flush both the watch and connection before closing / unreffing
gst_rtsp_connection_close() is not thread-safe on its own; this is
a workaround at the client level, where we control both the watch
and the connection.
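Roughly, the teardown then looks like this (sketch; the lock and field
names are illustrative and the real finalize path tears down more state):

    g_mutex_lock (&priv->watch_lock);   /* take the watch lock before touching priv->watch */

    if (priv->watch != NULL)
      gst_rtsp_watch_set_flushing (priv->watch, TRUE);
    gst_rtsp_connection_flush (priv->connection, TRUE);

    gst_rtsp_connection_close (priv->connection);   /* only close / unref afterwards */

    g_mutex_unlock (&priv->watch_lock);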
This is a TCP connection timeout for client connections, in seconds.
If a positive value is set for this property, the client connection
will be kept alive for this number of seconds after the last session
timeout. For negative values of this property the connection timeout
handling is delegated to the system (just as it was before).
Fixes #83
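For illustration, the timeout could be configured per client from the
server's "client-connected" handler (sketch; assumes the property is
exposed on GstRTSPClient as "post-session-timeout"):

    static void
    client_connected_cb (GstRTSPServer *server, GstRTSPClient *client,
        gpointer user_data)
    {
      /* Keep the TCP connection open for 30 seconds after the last
       * session timeout; a negative value delegates to the system. */
      g_object_set (client, "post-session-timeout", 30, NULL);
    }

    g_signal_connect (server, "client-connected",
        G_CALLBACK (client_connected_cb), NULL);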
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
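For example, a hypothetical signal definition would look like this
(sketch; the custom va marshaller name is made up):

    signals[SIGNAL_EXAMPLE] =
        g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
        G_SIGNAL_RUN_LAST, 0, NULL, NULL,
        NULL /* let GLib pick an optimized (va) marshaller */,
        G_TYPE_NONE, 1, G_TYPE_STRING);

    /* Only with a custom marshaller would the valist variant be set explicitly: */
    g_signal_set_va_marshaller (signals[SIGNAL_EXAMPLE],
        G_TYPE_FROM_CLASS (klass), my_marshal_VOID__STRINGv);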
The previous implementation stopped sending TCP messages to
all clients when a single one stopped consuming them, which
obviously created problems for shared media.
Instead, we now manage a backlog in stream-transport, and slow
clients are removed once this backlog exceeds a maximum duration,
currently hardcoded.
Fixes #80
For shared media there were race conditions: RTSP clients might
concurrently suspend or unsuspend the shared media and thus change its
state without the other clients expecting that.
By introducing a lock that can be taken by callers such as rtsp-client,
RTSP clients calling e.g. PLAY or SETUP on shared media can be forced
to handle the media sequentially, allowing one client to finish its
RTSP call before another client calls on the same media.
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/86
Fixes #86
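Callers can then serialize their work on the shared media roughly like
this (sketch; assumes the lock is exposed as gst_rtsp_media_lock() /
gst_rtsp_media_unlock()):

    gst_rtsp_media_lock (media);

    /* Handle e.g. a SETUP or PLAY request for this shared media.
     * Another client calling into the same media blocks here until
     * this call has finished. */

    gst_rtsp_media_unlock (media);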
Change the condition that must be fulfilled regarding RTP-Info.
Replace !gst_rtsp_media_is_receive_only with
gst_rtsp_media_has_completed_sender. It is more correct to actually look
for a sender pipeline that is complete; only then should an RTP-Info
exist.
gst_rtsp_media_is_receive_only gives different answers depending on the
state of the server:
If DESCRIBE is called with URL+options for the backchannel only, the SDP
will contain only audio and only the backchannel (a=sendonly).
If DESCRIBE is called with URL+options that give both audio and video in
the server-to-client direction, pipelines are created. Thus
receive_only will return false, even though SETUP would only set up the
backchannel.
RTP-Info only applies to outgoing streams, so one should check whether
the outgoing streams are complete.
If RTP-Info is missing and the media is not receive-only (e.g. audio
backchannel), return GST_RTSP_STS_INTERNAL_SERVER_ERROR.
RFC 2326 says RTP-Info is required, but in RFC 7826 it is conditional.
Since 1.14 there is audio backchannel support, so RTP-Info is
conditional now: in audio-backchannel-only mode there is no RTP-Info.
Fixes #82
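Conceptually the PLAY handling then amounts to the following
(illustrative sketch, not the exact rtsp-client code;
send_generic_response() stands in for the real error path):

    /* Only require RTP-Info when there actually is a complete sender
     * pipeline; a receive-only setup such as an audio backchannel
     * legitimately has none. */
    if (rtpinfo == NULL && gst_rtsp_media_has_completed_sender (media)) {
      send_generic_response (client, GST_RTSP_STS_INTERNAL_SERVER_ERROR, ctx);
      return FALSE;
    }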
Without this patch it is always stream0 that is used to get the segment
event used to set scale and speed, even if the client did not do SETUP
for stream0. At least in suspend mode 'reset' this does not work, since
then it is just random whether send_rtp_sink has received any segment
event. There is no check whether send_rtp_sink for stream0 has received
any data before the media is prerolled after the PLAY request.
We then pass those to adjust_play_mode, which needs to operate
on the "final" seek flags, as previously the code in rtsp-media
assumed that the accuracy seek flags (accurate / key_unit) should
not be set if the flags passed to the seek method were already set.
This will be used in the onvif tests in order to validate the
data transmitted over TCP: for streaming to continue after a
data message has been provided to client->send_func, the client
is responsible for marking the message as sent on the relevant
stream transport.
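In a test's send function this looks roughly as follows (sketch; how the
stream transport is handed in as user_data is illustrative):

    static gboolean
    test_send_func (GstRTSPClient *client, GstRTSPMessage *message,
        gboolean close, gpointer user_data)
    {
      GstRTSPStreamTransport *trans = user_data;   /* illustrative wiring */

      /* ... validate the interleaved data message here ... */

      /* Ack the message so the server keeps pulling samples and the
       * TCP streaming continues. */
      gst_rtsp_stream_transport_message_sent (trans);

      return TRUE;
    }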
Add support for the RTSP Scale and Speed headers by setting the rate in
the seek to (scale*speed). We then check the resulting segment for rate
and applied rate, and use them as values for the Speed and Scale headers
respectively.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
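Schematically (sketch; start/stop come from the Range header and the
real code in rtsp-media handles many more cases):

    gdouble rate = scale * speed;   /* requested via the Scale and Speed headers */

    gst_element_seek (pipeline, rate, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
        GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_SET, stop);

    /* After the seek, the resulting segment's applied_rate is reported
     * back as Scale and its rate as Speed. */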
Adds a new virtual function, adjust_play_mode(), that allows
subclasses to adjust the seek done on the media. The subclass can
modify the values of the seek flags and the rate.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
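A subclass would override it roughly like this (sketch; the prototype
shown here is an assumption for illustration, see rtsp-media.h for the
actual one):

    static GstRTSPStatusCode
    my_media_adjust_play_mode (GstRTSPMedia *media, GstRTSPContext *ctx,
        GstRTSPTimeRange **range, GstSeekFlags *flags, gdouble *rate,
        GstClockTime *trickmode_interval, gboolean *enable_rate_control)
    {
      /* For example: force key-unit accuracy for trick-mode rates. */
      if (*rate != 1.0)
        *flags |= GST_SEEK_FLAG_KEY_UNIT;

      return GST_RTSP_STS_OK;
    }

    /* in class_init: */
    media_class->adjust_play_mode = my_media_adjust_play_mode;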
Add functionality to limit the Content-Length.
API addition, Enhancement.
Define an appropriate request size limit and reject requests
exceeding the limit with response status 413 Request Entity Too Large
Related to !182
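For illustration, a server could then cap request bodies like this
(sketch; assumes the limit is exposed through a setter such as
gst_rtsp_server_set_content_length_limit()):

    /* Requests with a body larger than 64 kB are rejected with
     * 413 Request Entity Too Large. */
    gst_rtsp_server_set_content_length_limit (server, 64 * 1024);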
This adds new functions for passing buffer lists through the different
layers without breaking API/ABI, and enables the appsink to actually
provide buffer lists.
This should already reduce CPU usage and potentially context switches a
bit by passing a whole buffer list from the appsink instead of
individual buffers. As a next step it would be necessary to
a) Add support for a vector of data for the GstRTSPMessage body
b) Add support for sending multiple messages at once to the
GstRTSPWatch and let it be handled internally
c) Add API to GOutputStream that works like writev()
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/29
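On the appsink side this boils down to something like (sketch;
send_buffer_list() / send_buffer() stand in for the real sending code):

    #include <gst/app/gstappsink.h>

    GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (appsink));
    GstBufferList *list = gst_sample_get_buffer_list (sample);

    if (list != NULL) {
      /* A whole buffer list is available: pass it down in one go
       * instead of iterating over individual buffers. */
      send_buffer_list (list);
    } else {
      send_buffer (gst_sample_get_buffer (sample));
    }
    gst_sample_unref (sample);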
The close handler could trigger a crash because it invalidated the
watch_context while still leaving a source attached to it which would be
cleaned up at a later point.
When the underlying layers are running on_message_sent, this sometimes
causes the underlying layer to send more data, which will cause the
underlying layer to run the on_message_sent callback again. This can go
on and on.
To break this chain, we introduce an idle source that takes care of
sending data if there is more to send when the callback runs.
https://bugzilla.gnome.org/show_bug.cgi?id=797289
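The deferral is essentially (sketch; do_send_data() is an illustrative
name for the function that continues sending):

    /* Instead of recursing back into the send path from on_message_sent(),
     * schedule the next write on the watch's context. */
    GSource *idle = g_idle_source_new ();
    g_source_set_callback (idle, (GSourceFunc) do_send_data, client, NULL);
    g_source_attach (idle, priv->watch_context);
    g_source_unref (idle);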
Avoids ending up with races where a timeout would still be around
*after* a client was gone. This could happen rather easily in
RTSP-over-HTTP mode on a local connection, where each RTSP message
would be sent as a different HTTP connection with the same tunnelid.
If not properly removed, that timeout would then try to free a client
(and its contents) again.
Export rtsp-server library API in headers when we're building the
library itself, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
Fix up some missing config.h includes when building the lib which
is needed to get the export api define from config.h
https://bugzilla.gnome.org/show_bug.cgi?id=797185
If a (strange) client would reuse interleaved channel numbers in
multiple SETUP requests, we should not accept them. The channel
numbers are used for looking up stream transports in the
priv->transports hash table, and transports disappear from the table
if channel numbers are reused.
RFC 7826 (RTSP 2.0), Section 18.54, clarifies that it is OK for the
server to change the channel numbers suggested by the client.
https://bugzilla.gnome.org/show_bug.cgi?id=796988
When media is shared, the same media stream can be sent
to multiple multicast groups. Currently, there is no API
to retrieve multicast addresses from the stream.
When calling gst_rtsp_stream_get_multicast_address() function,
only the first multicast address is returned.
With this patch, each multicast destination requested in SETUP
will be stored in an internal list (call to
gst_rtsp_stream_add_multicast_client_address()).
The list of multicast groups requested by the clients can be
retrieved by calling gst_rtsp_stream_get_multicast_client_addresses().
There still exist some problems with the current implementation
in the multicast case:
1) The receiving part is currently only configured with
regard to the first multicast client (see
https://bugzilla.gnome.org/show_bug.cgi?id=796917).
2) Secondly, for security reasons, some constraints should be
put on the requested multicast destinations (see
https://bugzilla.gnome.org/show_bug.cgi?id=796916).
Change-Id: I6b060746e472a0734cc2fd828ffe4ea2956733ea
https://bugzilla.gnome.org/show_bug.cgi?id=793441
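An application can then inspect the requested groups, e.g. (sketch;
assumes the getter returns a newly allocated, comma-separated string):

    gchar *addrs = gst_rtsp_stream_get_multicast_client_addresses (stream);
    g_print ("multicast destinations requested so far: %s\n", addrs);
    g_free (addrs);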
The maximum ttl value provided so far by the multicast clients
will be chosen and reported in the response to the current
client request.
Change-Id: I5408646e3b5a0a224d907ae215bdea60c4f1905f
https://bugzilla.gnome.org/show_bug.cgi?id=793441
When two multicast clients request specific transport
configurations, and the "transport.client-settings" parameter is
set to true, it is wrong to actually require that these two
clients request the same multicast group.
Removed the test_client_multicast_invalid_transport_specific test
cases, as they wrongly required the requested destination
address to be present in the address pool even in
the case when the "transport.client-settings" parameter is set to true.
Change-Id: I4580182ef35996caf644686d6139f72ec599c9fa
https://bugzilla.gnome.org/show_bug.cgi?id=793441
multiudpsink does not support setting the socket* properties
after it has started, which meant that rtsp-server could no
longer serve on both IPV4 and IPV6 sockets since the patches
from https://bugzilla.gnome.org/show_bug.cgi?id=757488 were
merged.
When first connecting an IPV6 client then an IPV4 client,
multiudpsink fell back to using the IPV6 socket.
When first connecting an IPV4 client, then an IPV6 client,
multiudpsink errored out, released the IPV4 socket, then nevertheless
crashed when trying to send a message on the NULL socket;
that is however a separate issue.
This could probably be fixed by handling the setting of
sockets in multiudpsink after it has started, that will
however be a much more significant effort.
For now, this commit simply partially reverts the behaviour
of rtsp-stream: it will continue to only create the udpsinks
when needed, as has been the case since the patches were merged;
however, when creating them it will always allocate both
sockets and set them on the sink before it starts, as was
the case prior to the patches.
Transport configuration will only error out if the allocation
of UDP sockets fails for the actual client's family, this
also downgrades the GST_ERRORs in alloc_ports_one_family
to GST_WARNINGs, as failing to allocate is no longer
necessarily fatal.
https://bugzilla.gnome.org/show_bug.cgi?id=796875
Before, the watch backlog size in GstRTSPClient was changed
dynamically between unlimited and a fixed size, trying to avoid both
unlimited memory usage and deadlocks while waiting for a place in the
queue. (Some of the deadlocks were described in a long comment in
handle_request().)
In the previous commit, we changed to a fixed backlog size of 100.
This is possible, because we now handle RTP/RTCP data messages differently
from RTSP request/response messages.
The data messages are messages tunneled over TCP. We allow at most one
queued data message per stream in GstRTSPClient at a time, and
successfully sent data messages are acked by sending a "message-sent"
callback from the GstStreamTransport. Until that ack comes, the
GstRTSPStream does not call pull_sample() on its appsink, and
therefore the streaming thread in the pipeline will not be blocked
inside GstRTSPClient, waiting for a place in the queue.
pull_sample() is called when we have both an ack and a "new-sample"
signal from the appsink. Then, we know there is a buffer to write.
RTSP request/response messages are not acked in the same way as data
messages. The rest of the 100 places in the queue are used for
them. If the queue becomes full of request/response messages, we
return an error and close the connection to the client.
Change-Id: I275310bc90a219ceb2473c098261acc78be84c97
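In pseudo-C the gating in the stream amounts to (illustrative sketch;
the flag and helper names are made up):

    /* Only pull from the appsink when the previous data message was
     * acked ("message-sent") AND the appsink signalled "new-sample".
     * Otherwise the pipeline's streaming thread would end up blocked
     * in GstRTSPClient waiting for a place in the queue. */
    if (priv->have_ack && priv->have_sample) {
      GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (appsink));

      priv->have_ack = FALSE;      /* wait for the next "message-sent" */
      priv->have_sample = FALSE;

      send_tcp_message (sample);   /* illustrative */
    }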
Change to using a fixed backlog size WATCH_BACKLOG_SIZE.
Preparation for the next commit, which changes to a different way of
avoiding both deadlocks and unlimited memory usage with the watch
backlog.
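Roughly (sketch; assumes the backlog is applied with
gst_rtsp_watch_set_send_backlog(), passing 0 for no byte limit):

    #define WATCH_BACKLOG_SIZE 100

    /* Limit the watch to a fixed number of queued messages, no byte limit. */
    gst_rtsp_watch_set_send_backlog (priv->watch, 0, WATCH_BACKLOG_SIZE);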