To prevent prerolling issues where the inactive stream prerolls first
and the server proceeds without waiting for the active stream, we will
ignore GstRTSPStreamBlocking messages from incomplete streams. When
there are no complete streams (during DESCRIBE), we will listen to all
streams.
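A minimal sketch of that filtering rule, using a hypothetical Stream
type instead of the real rtsp-media internals:
```c
#include <glib.h>

/* Hypothetical stand-in for a stream; "complete" mirrors whether the
 * stream already has a configured transport. */
typedef struct {
  gboolean complete;
} Stream;

/* Only count a GstRTSPStreamBlocking message from 'sender' if that
 * stream is complete, unless no stream is complete yet (e.g. during
 * DESCRIBE), in which case every stream is listened to. */
static gboolean
should_count_blocking_message (GPtrArray *streams, Stream *sender)
{
  gboolean have_complete = FALSE;
  guint i;

  for (i = 0; i < streams->len; i++) {
    Stream *s = g_ptr_array_index (streams, i);
    if (s->complete)
      have_complete = TRUE;
  }

  return !have_complete || sender->complete;
}
```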
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/167>
When playing media with both a sender and a receiver stream, such as
ONVIF back channel audio, gst_rtsp_media_get_rates calls
gst_rtsp_stream_get_rates for each stream to set the rates. But
gst_rtsp_stream_get_rates returns FALSE for the receiver stream, which
leads to a g_assert crash.
Instead of getting rates from all streams, now only get rates from
sender streams.
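A rough illustration of that change with a simplified stand-in for the
per-stream state (not the actual GstRTSPStream API):
```c
#include <glib.h>

/* Simplified stand-in for per-stream state. */
typedef struct {
  gboolean is_sender;   /* TRUE if the stream sends RTP to the client */
  gdouble rate;
  gdouble applied_rate;
} Stream;

/* Collect rates from sender streams only; receiver-only streams (such
 * as an ONVIF back channel) have no playback rate to report. */
static void
collect_rates (GPtrArray *streams, gdouble *rate, gdouble *applied_rate)
{
  guint i;

  for (i = 0; i < streams->len; i++) {
    Stream *s = g_ptr_array_index (streams, i);

    if (!s->is_sender)
      continue;  /* previously this path hit a g_assert */

    *rate = s->rate;
    *applied_rate = s->applied_rate;
  }
}
```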
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/150>
ulpfec correction is obviously useless when receiving a stream
over TCP, and in TCP modes the rtp storage receives
non-timestamped buffers, causing it to queue buffers indefinitely,
until the queue grows so large that sanity checks kick in and
warnings start to get emitted.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/149>
rtsp_media_unsuspend() is called from handle_play_request()
before sending the play response. Unblocking the streams here
was causing data to be sent out before the client was ready
to handle it, with obvious side effects such as initial packets
getting discarded, causing decoding errors.
Instead we can simply let the media streams be unblocked when
the state of the media is set to PLAYING, which occurs after
sending the play response.
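A hedged sketch of that ordering, with a hypothetical Media type and
unblock helper standing in for the real rtsp-media code:
```c
#include <gst/gst.h>

typedef struct {
  GstElement *pipeline;
} Media;

/* Hypothetical stand-in for releasing the blocking pad probes. */
static void
unblock_streams (Media *media)
{
  /* ... remove the blocking pad probes on the stream pads ... */
}

/* Only unblock when the media is actually set to PLAYING, which happens
 * after the PLAY response has been sent to the client. */
static void
media_set_state (Media *media, GstState state)
{
  if (state == GST_STATE_PLAYING)
    unblock_streams (media);

  gst_element_set_state (media->pipeline, state);
}
```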
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/147>
clang 10 is complaining about incompatible types due to the
GLib type system.
```
../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-thread-pool.c:534:10: error: incompatible pointer types passing 'typeof ((((void *)0))) *' (aka 'void **') to parameter of type 'GThreadPool **' (aka 'struct _GThreadPool **') [-Werror,-Wincompatible-pointer-types]
```
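For context, a minimal reproduction of this class of warning (not the
actual fix in rtsp-thread-pool.c): clang rejects passing a `void **`
where a `GThreadPool **` is expected unless an explicit cast is added.
```c
#include <glib.h>

static void
clear_pool (GThreadPool **pool)
{
  if (*pool != NULL) {
    g_thread_pool_free (*pool, FALSE, TRUE);
    *pool = NULL;
  }
}

int
main (void)
{
  GThreadPool *pool = NULL;
  void *indirect = &pool;

  /* Without the explicit cast, clang 10 with -Werror emits
   * -Wincompatible-pointer-types here. */
  clear_pool ((GThreadPool **) indirect);

  return 0;
}
```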
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/145>
This causes them to send caps events before data flow, which is
usually a pretty correct thing to do!
Not doing so manifested in a bug where ssrcdemux wouldn't forward
the caps it had received with an extra ssrc field, as it hadn't
received any caps event.
Fixes #85
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/141>
Otherwise the transports are not set up yet during the PLAY request
handling when unsuspending (and thus unblocking) the media.
In case of live pipelines this then causes the first few packets to go
to the sinks before they know what to do with them, and they simply
discard them which is rather suboptimal in case of keyframes.
For non-live pipelines this is not a problem because the sink will still
be PAUSED and as such not send out the data yet but wait until it goes
to PLAYING, which is late enough.
Adding the transports multiple times is not a problem: if the transport
is already added it won't be added another time and TRUE will be
returned.
This fixes a regression introduced by a7732a68e8
before 1.14.0.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/107
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/135>
Make sure rtsp-media has received a GstRTSPStreamBlocking message from
each active stream when checking if all streams are blocked.
Without this change there will be a race condition when using two or
more streams and rtsp-media receives a GstRTSPStreamBlocking message
from one of the streams. This is because rtsp-media then checks if all
streams are blocked by calling gst_rtsp_stream_is_blocking() for each
stream. This function call returns TRUE if the stream has sent a
GstRTSPStreamBlocking message, however, rtsp-media may have yet to
receive this message. This would then result in rtsp-media erroneously
thinking it is blocking all streams, which could cause rtsp-media to
change state from PREPARING to PREPARED. In the case of a preroll, this
could result in rtsp-media thinking that the pipeline is prerolled even
though that might not be the case.
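A simplified sketch of that bookkeeping, counting received messages
instead of polling gst_rtsp_stream_is_blocking() (hypothetical
MediaState type):
```c
#include <glib.h>

typedef struct {
  guint n_active_streams;   /* number of complete (active) streams */
  guint blocking_received;  /* GstRTSPStreamBlocking messages seen  */
} MediaState;

/* Called from the bus handler for each GstRTSPStreamBlocking message. */
static void
on_stream_blocking_message (MediaState *state)
{
  state->blocking_received++;
}

/* The media only counts as blocked once rtsp-media has actually
 * received a message from every active stream. */
static gboolean
media_streams_blocking (const MediaState *state)
{
  return state->n_active_streams > 0 &&
      state->blocking_received >= state->n_active_streams;
}
```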
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/124>
Set expected_async_done to FALSE in default_suspend() if a state change
occurs and the return value from set_target_state() is something other
than GST_STATE_CHANGE_ASYNC.
Without this change there is a risk that expected_async_done will be
TRUE even though no asynchronous state change is taking place. This
could happen if the pipeline is set to PAUSED using
media_set_pipeline_state_locked(), an asynchronous state change starts
and then the media is suspended (which could result in a state change,
aborting the asynchronous state change). If the media is suspended
before the asynchronous state change ends then expected_async_done will
be TRUE but no asynchronous state change is taking place.
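A minimal sketch of that rule, assuming a hypothetical priv struct that
holds the flag:
```c
#include <gst/gst.h>

typedef struct {
  gboolean expected_async_done;
} MediaPriv;

/* After changing state during suspend, only keep waiting for ASYNC_DONE
 * if the state change really went asynchronous; otherwise clear the
 * flag so no stale expectation is left behind. */
static void
record_suspend_state_change (MediaPriv *priv, GstStateChangeReturn ret)
{
  priv->expected_async_done = (ret == GST_STATE_CHANGE_ASYNC);
}
```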
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/123>
There was a race condition where the client was being finalized while,
concurrently in some other thread, the RTSP ctrl timeout was relying on
client data that was being freed.
When the RTSP ctrl timeout is set up, a GWeakRef on the client is set.
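A rough sketch of that pattern with GLib's GWeakRef (hypothetical
TimeoutData type; the real code lives in rtsp-client):
```c
#include <glib-object.h>

typedef struct {
  GWeakRef client_ref;  /* weak ref set when the ctrl timeout is set up */
} TimeoutData;

static void
setup_ctrl_timeout (TimeoutData *data, GObject *client)
{
  g_weak_ref_init (&data->client_ref, client);
}

/* Timeout callback: take a strong ref only if the client still exists,
 * so finalization in another thread cannot race with us. */
static gboolean
ctrl_timeout_cb (gpointer user_data)
{
  TimeoutData *data = user_data;
  GObject *client = g_weak_ref_get (&data->client_ref);

  if (client == NULL)
    return G_SOURCE_REMOVE;  /* client already finalized */

  /* ... safely use the client here ... */

  g_object_unref (client);
  return G_SOURCE_CONTINUE;
}
```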
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/121>
We always need to take the lock while accessing it, as otherwise another
thread might have removed it in the meantime. Also, when destroying and
creating a new one, ensure that the mutex is not briefly unlocked in
between, as during that time another one might already have been
created.
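A compact sketch of that locking discipline (generic GMutex example,
not the actual rtsp-server code):
```c
#include <glib.h>

static GMutex lock;
static gchar *shared_state = NULL;

/* Destroy the old object and create its replacement while continuously
 * holding the mutex, so no other thread can slip in between and create
 * (or observe) another one. */
static void
replace_shared_state (const gchar *value)
{
  g_mutex_lock (&lock);
  g_free (shared_state);
  shared_state = g_strdup (value);
  g_mutex_unlock (&lock);
}
```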
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/119>
They previously used the same state but different mechanisms and
functions, which was difficult to follow, error-prone and simply
confusing.
Also adjust the test for the post-session timeout a bit to be less racy
now that the timing has slightly changed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/115>
There might be other sessions that are running over the same RTSP
connection and we should not simply close the client directly if one of
them is torn down.
By default the connection will be closed once the client closes it or
the OS does. This behaviour can be adjusted with the
post-session-timeout property, which allows closing it automatically
from the server side after all sessions are gone and the given timeout
is reached.
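For example, assuming the property is exposed on the client object
handling the connection, it could be set from a client-connected
handler roughly like this:
```c
#include <gst/rtsp-server/rtsp-server.h>

/* Close the TCP connection 60 seconds after the last session on it has
 * gone away (sketch; adjust the object/value to your setup). */
static void
client_connected_cb (GstRTSPServer *server, GstRTSPClient *client,
    gpointer user_data)
{
  g_object_set (client, "post-session-timeout", 60, NULL);
}
```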
This reverts the previous commit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/merge_requests/115>
When using the basic authentication scheme, we wouldn't validate that
the authorization field of the credentials is not NULL before passing
it on to g_hash_table_lookup(). g_str_hash(), however, is not NULL-safe
and will
dereference the NULL pointer and crash.
A specially crafted (read: invalid) RTSP header can cause this to
happen.
As a solution, check that the authorization is not NULL before
processing it further, and if it is NULL simply fail authentication.
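A minimal sketch of that check (simplified; the real code lives in the
server's basic authentication handling):
```c
#include <glib.h>

/* g_str_hash() is not NULL-safe, so reject the credentials before ever
 * handing them to the hash table. */
static gboolean
check_basic_auth (GHashTable *basic_tokens, const gchar *authorization)
{
  if (authorization == NULL)
    return FALSE;  /* invalid header: fail authentication */

  return g_hash_table_lookup (basic_tokens, authorization) != NULL;
}
```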
This fixes CVE-2020-6095 and TALOS-2020-1018.
Discovered by Peter Wang of Cisco ASIG.
Move the usage of priv->watch_context to the beginning of
gst_rtsp_client_finalize(), instead of using it after
g_main_context_unref (priv->watch_context).
We cannot take the RTSPStream lock while holding a transport backlog
lock, as remove_transport may be called externally, which will first
take the RTSPStream lock and then the transport backlog lock.
This ensures we don't end up calling any of the transports' callbacks
with a potentially unreffed user_data (in practice, a client that
may have been removed).
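A sketch of the resulting lock-ordering rule (hypothetical types; both
code paths must take the locks in the same order):
```c
#include <glib.h>

typedef struct {
  GMutex stream_lock;   /* always taken first  */
  GMutex backlog_lock;  /* always taken second */
} Stream;

/* Both the external remove_transport() path and the internal backlog
 * handling follow the same order, so they cannot deadlock against each
 * other. */
static void
remove_transport_example (Stream *s)
{
  g_mutex_lock (&s->stream_lock);
  g_mutex_lock (&s->backlog_lock);
  /* ... drop the transport from the backlog ... */
  g_mutex_unlock (&s->backlog_lock);
  g_mutex_unlock (&s->stream_lock);
}
```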
In order to address the race condition pointed out at
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/merge_requests/108#note_403579
we get rid of the send thread pool, and instead spawn and manage
a single thread to pull samples from app sinks and add them to
the transport's backlogs.
Additionally, we now also always go through the backlogs in order
to simplify the logic.
Fixes #97
We cannot hold stream->lock while pushing data, but need
to consistently check the state of the backlog both from
the send_tcp_message function and the on_message_sent function,
which may or may not be called from the same thread.
This commit introduces internal API to allow for potentially
recursive locking of transport streams, addressing a race
condition where the RTSP stream could push items out of order
when popping them from the backlog.
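A rough sketch of what such potentially recursive locking can look like
with a GRecMutex (hypothetical types; the actual internal API may
differ):
```c
#include <glib.h>

typedef struct {
  GRecMutex lock;  /* guards the backlog, may be re-entered */
  GQueue backlog;
} TransportBacklog;

/* send_tcp_message() and on_message_sent() both take this lock; when
 * they happen to run on the same thread the recursive mutex lets the
 * nested call proceed instead of deadlocking, while other threads still
 * wait their turn. */
static void
with_backlog_locked (TransportBacklog *t,
    void (*func) (TransportBacklog *))
{
  g_rec_mutex_lock (&t->lock);
  func (t);  /* may lock the backlog again on this thread */
  g_rec_mutex_unlock (&t->lock);
}
```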
The media takes ownership of it, and it is returned with `transfer none`
from the GstRTSPMedia::create_pipeline() vfunc. If we don't sink it
first then any bindings will wrongly take ownership of the pipeline once
it arrives in bindings code.
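A short sketch of the floating-reference handling described here
(generic GStreamer example):
```c
#include <gst/gst.h>

/* Sink the newly created pipeline's floating reference so the media
 * owns it; create_pipeline() can then return it as 'transfer none' and
 * bindings will not take ownership of it by mistake. */
static GstElement *
take_pipeline (GstElement *floating_pipeline)
{
  gst_object_ref_sink (floating_pipeline);
  return floating_pipeline;  /* (transfer none) */
}
```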
+ Take the watch lock prior to using priv->watch
+ Flush both the watch and connection before closing / unreffing
gst_rtsp_connection_close() is not threadsafe on its own; this is
a workaround at the client level, where we control both the watch
and the connection.
This is a TCP connection timeout for client connections, in seconds.
If a positive value is set for this property, the client connection
will be kept alive for this number of seconds after the last session
timeout. For negative values of this property the connection timeout
handling is delegated to the system (just as it was before).
Fixes #83
The internal index of our appsinks, while it can be used to
determine whether a message is RTP or RTCP, is not necessarily
the same as the interleaved channel. Let the stream-transport
determine the channel to check backpressure for, the same way
it determines the channel according to whether it is sending
RTP or RTCP.
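A simplified sketch of that channel selection (hypothetical config
struct mirroring the transport's negotiated interleaved channels):
```c
#include <glib.h>

typedef struct {
  guint interleaved_rtp;   /* channel negotiated for RTP  */
  guint interleaved_rtcp;  /* channel negotiated for RTCP */
} TransportConfig;

/* Pick the channel from the transport's configuration, not from the
 * appsink's internal index, which may not match it. */
static guint
backpressure_channel (const TransportConfig *config, gboolean is_rtcp)
{
  return is_rtcp ? config->interleaved_rtcp : config->interleaved_rtp;
}
```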