+ Take the watch lock prior to using priv->watch
+ Flush both the watch and connection before closing / unreffing
gst_rtsp_connection_close() is not thread-safe on its own; this is
a workaround at the client level, where we control both the watch
and the connection.
This is a TCP connection timeout for client connections, in seconds.
If a positive value is set for this property, the client connection
will be kept alive for this number of seconds after the last session
timeout. For negative values of this property the connection timeout
handling is delegated to the system (just as it was before).
Fixes #83
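For illustration, a minimal sketch of configuring such a timeout on the
client object; the property name "post-session-timeout" is an assumption
and not confirmed by the text above:

    /* Assumed property name: keep the TCP connection alive for 30
     * seconds after the last session timeout; a negative value would
     * delegate timeout handling to the system. */
    g_object_set (client, "post-session-timeout", 30, NULL);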
The internal index of our appsinks, while it can be used to
determine whether a message is RTP or RTCP, is not necessarily
the same as the interleaved channel. Let the stream-transport
determine the channel to check backpressure for, the same way
it determines the channel according to whether it is sending
RTP or RTCP.
The commit "rtsp-client: define all seek accuracy flags from
setup_play_mode" changed the behaviour when doing a seek.
Before that commit, having the flush flag set would force a seek,
even if no seek was otherwise needed. One reason to force a seek is to
flush old buffers created during Describe requests.
Thus forcing a seek for the flush flag as well results in a play
request with fresh buffers.
Only attempt to use the various timing values if gst_rtsp_stream_get_info()
returns TRUE. Also avoid the whole clock signalling block if we're not
dealing with senders.
CID: 1439524
CID: 1439536
CID: 1439520
When removing transports, it was asserted that the transports passed in
for removal are present in the list; however, that cannot be assumed.
For example, if a transport was removed from a thread running
send_tcp_message, the main thread can try to remove the same transport
again when it gets a handle_pause_request. This does not affect the
transport list, but it does affect n_tcp_transports, which gets
decremented a second time and ends up with the wrong value.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
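A minimal sketch of the pattern (the signal name and parameter types are
illustrative):

    /* Passing NULL instead of an explicit C marshaller lets GLib pick
     * an optimized va_list marshaller when one exists for this
     * signature, avoiding GValue boxing and libffi. */
    signals[SIGNAL_EXAMPLE] =
        g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
        G_SIGNAL_RUN_LAST, 0, NULL, NULL,
        NULL /* instead of g_cclosure_marshal_VOID__OBJECT */,
        G_TYPE_NONE, 1, G_TYPE_OBJECT);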
The documentation of gst_rtsp_mount_points_add_factory() says "Any
previous mount point will be freed", which was true when it was
implemented using a GHashTable. But in 2012 it was rewritten using a
GSequence, and since then it has been possible to end up with two
factories for the same path. Which one gets used is random, depending
on the sorting order of two identical items.
The previous implementation stopped sending TCP messages to
all clients when a single one stopped consuming them, which
obviously created problems for shared media.
Instead, we now manage a backlog in stream-transport, and slow
clients are removed once this backlog exceeds a maximum duration,
currently hardcoded.
Fixes #80
For shared media we had race conditions: RTSP clients could
concurrently suspend or unsuspend the shared media and thus change its
state without the other clients expecting that.
By introducing a lock that callers such as rtsp-client can take, RTSP
clients calling e.g. PLAY or SETUP on shared media can be forced to
handle the media sequentially, allowing one client to finish its RTSP
call before another client calls on the same media.
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/86
Fixes #86
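A minimal sketch of how a caller might serialize access; the
gst_rtsp_media_lock()/gst_rtsp_media_unlock() names are an assumption
based on the description above:

    /* Take the media lock around the whole RTSP call so that another
     * client cannot change the shared media's state in between. */
    gst_rtsp_media_lock (media);
    /* ... handle PLAY / SETUP on the shared media ... */
    gst_rtsp_media_unlock (media);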
Extra time to add to the timeout, in seconds. This only
affects the time until a session is considered timed out
and is not signalled in the RTSP request responses.
Only the value of the timeout property is signalled in the
request responses.
If one thread is inside the send_tcp_message function and has finished
sending RTP or RTCP messages, so the n_outstanding variable is zero,
but has not yet exited the loop sending the messages, transports may
meanwhile have been added to or removed from the transport list, so the
cache should be updated. If an additional thread now enters
send_tcp_message and tries to send RTP messages, it will first destroy
the RTP cache that is still being iterated through by the first thread.
Fixes #81
When unsuspending and going to PLAYING, unblock all streams instead of
only those that are linked (the linked streams are the ones for which
SETUP has been called). GST_FLOW_NOT_LINKED will be returned when
pushing buffers on unlinked streams.
This change is because playback using single-threaded demuxers like
matroska-demux could be blocked if SETUP was not called for all media.
Demuxers that use GstFlowCombiner (including gstoggdemux, gstavidemux,
gstflvdemux, qtdemux, and matroska-demux) will handle
GST_FLOW_NOT_LINKED automatically.
Fixes #39
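For context, a minimal sketch of the GstFlowCombiner behaviour relied on
above, as it would appear on a demuxer's streaming path (given a
GstFlowCombiner *combiner holding all source pads):

    /* The combiner turns GST_FLOW_NOT_LINKED from one pad into
     * GST_FLOW_OK as long as at least one other pad still consumes. */
    GstFlowReturn ret = gst_pad_push (srcpad, buffer);

    ret = gst_flow_combiner_update_pad_flow (combiner, srcpad, ret);
    if (ret != GST_FLOW_OK)
      goto pause_task;  /* all pads unlinked, or a real error */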
Wait on async-done when needed in gst_rtsp_media_seek_trickmode.
In the unit test, the pause from adjust_play_mode will cause a preroll,
after which async-done will be produced.
Without this patch no one consumes this async-done, and when the seek
flush is later done in gst_rtsp_media_seek_trickmode it waits for
async-done. It then wrongly finds the async-done produced by
adjust_play_mode and continues executing without waiting for the
preroll to finish.
Change the condition that should be fulfilled regarding RTP-Info.
Replace !gst_rtsp_media_is_receive_only with
gst_rtsp_media_has_completed_sender. It is more correct to actually look
for a sender pipeline that is complete; only then should an RTP-Info
exist.
gst_rtsp_media_is_receive_only gives different answers depending on the
state of the server.
If Describe is called with URL+options for the backchannel only, the
SDP will contain only audio and only the backchannel, with a=sendonly.
If Describe is called on URL+options that give both audio and video in
the server-to-client direction, pipelines are created; thus
receive_only will return false, even though Setup would only set up the
backchannel.
RTP-Info is only for outgoing streams, so one should check whether the
outgoing streams are complete.
If RTP-Info is missing and the media is not receive-only (e.g. audio
backchannel), return GST_RTSP_STS_INTERNAL_SERVER_ERROR.
RFC 2326 says RTP-Info is required, but in RFC 7826 it is conditional.
Since 1.14 there is audio backchannel support, so RTP-Info is
conditional now: in audio-backchannel-only mode there is no RTP-Info.
Fixes #82
Without this patch it is always stream0 that is used to get the segment
event from which scale and speed are set, even if the client did not do
SETUP for stream0. At least in suspend mode "reset" this does not work,
since it is then random whether send_rtp_sink has received any segment
event. There is no check that send_rtp_sink for stream0 has received
any data before the media is prerolled after the PLAY request.
We then pass those to adjust_play_mode, which needs to operate
on the "final" seek flags, as previously the code in rtsp-media
was assuming that accuracy seek flags (accurate / key_unit) should
not be set if the flags passed to the seek method were already set.
First try "pay", then "pay_%s" (where %s == pad name). And only then
fall back to the code that simply takes the first payloader that is
found.
The current code usually works (but is racy) because it will always take
the payloader that was last added (due to g_list_prepend() when adding
elements) in pad-added and that's usually the correct one. But if a new
payloader is added between pad-added and us trying to get it, we would
get the wrong payloader.
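A minimal sketch of the lookup order described above (the final fallback
helper is hypothetical):

    GstElement *pay;
    gchar *name;

    /* 1) fixed name "pay" */
    pay = gst_bin_get_by_name (GST_BIN (bin), "pay");
    if (pay == NULL) {
      /* 2) "pay_<pad name>" */
      name = g_strdup_printf ("pay_%s", GST_PAD_NAME (pad));
      pay = gst_bin_get_by_name (GST_BIN (bin), name);
      g_free (name);
    }
    if (pay == NULL)
      pay = find_first_payloader (bin);  /* hypothetical last resort */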
Without this patch there are problems prerolling when using the audio
backchannel.
Without this patch a probe is created for all streams, including the
stream for the audio backchannel. To preroll, all these pads have to
receive data. Since the stream for the audio backchannel is a receiver,
this will never happen.
The solution is to never create probes for streams that are for
incoming data and instead set them as blocking from the beginning.
The recent ONVIF work exposed a race condition when dealing with
multiple streams: one of the sinks may preroll before other streams
have started flushing. This led to the pipeline posting async-done
prematurely, when some streams were actually still in the middle
of performing a flushing seek. The newly-added code looks up a
sticky segment event on the first stream in order to respond to
the PLAY request with accurate Scale and Speed headers. In the
failure condition, the first stream was flushing, and thus had
no sticky segment event, leading to the PLAY request failing,
and in turn the test.
This will be used in the onvif tests in order to validate the
data transmitted over TCP: for streaming to continue after a
data message has been provided to client->send_func, the client
is responsible for marking the message as sent on the relevant
stream transport.
GStreamer plays segment from stop to start when doing reverse playback.
RTSP implies that media should be played from start of Range header to
its stop. Hence we swap start and stop times before passing them to
gst_element_seek.
Also make gst_rtsp_stream_query_stop always return a value that can be
used as the stop time of the Range header.
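A minimal sketch of the swap, under the assumption that for reverse
playback the Range header carries a start later than its stop:

    /* GStreamer expects segment start <= stop even for rate < 0 and
     * then plays from stop down to start, so swap the Range values. */
    gint64 start = range_start, stop = range_stop;

    if (rate < 0) {
      start = range_stop;
      stop = range_start;
    }
    gst_element_seek (pipeline, rate, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH, GST_SEEK_TYPE_SET, start,
        GST_SEEK_TYPE_SET, stop);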
Add support for the RTSP Scale and Speed headers by setting the rate in
the seek to (scale*speed). We then check the resulting segment for rate
and applied rate, and use them as values for the Speed and Scale headers
respectively.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
Adds a new virtual function, adjust_play_mode(), that allows
subclasses to adjust the seek done on the media. The subclass can
modify the values of the seek flags and the rate.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
Add a new function, gst_rtsp_media_seek_full_with_rate(), which allows
the caller to specify the rate for the seek. Also add functions in
rtsp-stream and rtsp-media for retrieving the current rate and applied
rate.
https://bugzilla.gnome.org/show_bug.cgi?id=754575
Without this we can easily run into a race condition with async state changes:
- the pipeline is doing an async state change
- we set the internal bins to PLAYING but that's ignored because an
async state change is currently pending
- the async state change finishes but does not change the state of the
internal bins because of locked_state==TRUE
- the internal bins stay in PAUSED forever
Add functionality to limit the Content-Length.
API addition, Enhancement.
Define an appropriate request size limit and reject requests
exceeding the limit with response status 413 Request Entity Too Large
Related to !182
Otherwise it will never try to send us the next one: it tries to keep
exactly one message in-flight all the time.
In gst-rtsp-server this is done asynchronously via the GstRTSPWatch but
in the client sink we always write data out synchronously.
If we do not wait for the thread pool to be free before cleaning the
transport caches, there can be a crash if a thread is executing in the
transport list loop in the send_tcp_message function.
Also add a check for priv->send_pool in on_message_sent, to avoid a new
thread being pushed during the wait for a free thread pool. This is
possible since the mutex has to be unlocked while waiting for the
thread pool.
Handle the situation where a call to gst_rtsp_media_set_state is done
while the media status is preparing.
Also add a unit test for this scenario.
The unit test simulates, at the media level, two clients sharing a
(live) media.
Both clients have done SETUP and got responses. Now client 1 is doing
play and client 2 is just closing the connection.
Without the patch there is a problem when client 1 calls
gst_rtsp_media_unsuspend in handle_play_request while client 2, closing
its connection, ends up in a call to gst_rtsp_media_set_state when
priv->status == GST_RTSP_MEDIA_STATUS_PREPARING; all the logic for
shutting down the media is then skipped.
With this patch, in this scenario, we wait until
priv->status == GST_RTSP_MEDIA_STATUS_PREPARED and only then continue,
now executing the logic for shutting down the media.
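A minimal sketch of the waiting logic (lock and condition variable names
are illustrative):

    /* Do not act on a state change while the media is still
     * preparing; wait until preparation has finished. */
    g_mutex_lock (&priv->lock);
    while (priv->status == GST_RTSP_MEDIA_STATUS_PREPARING)
      g_cond_wait (&priv->cond, &priv->lock);
    g_mutex_unlock (&priv->lock);
    /* safe to run the shutdown logic now */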
This adds new functions for passing buffer lists through the different
layers without breaking API/ABI, and enables the appsink to actually
provide buffer lists.
This should already reduce CPU usage and potentially context switches a
bit by passing a whole buffer list from the appsink instead of
individual buffers. As a next step it would be necessary to
a) Add support for a vector of data for the GstRTSPMessage body
b) Add support for sending multiple messages at once to the
GstRTSPWatch and let it be handled internally
c) Add API to GOutputStream that works like writev()
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/29
The close handler could trigger a crash because it invalidated the
watch_context while still leaving a source attached to it which would be
cleaned up at a later point.
The previous fix for race condition around finish_unprepare where the
function could be called twice assumed that the status wouldn't change
during execution of the function. This assumption is incorrect as the
state may change, for example if an error message arrives from the
pipeline bus.
Instead a flag keeping track on whether the finish_unprepare function
is currently executing is introduced and checked.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/59
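A minimal sketch of the guard flag (the field name is illustrative):

    static void
    finish_unprepare (GstRTSPMedia * media)
    {
      GstRTSPMediaPrivate *priv = media->priv;

      /* re-entrancy guard: an error message from the pipeline bus may
       * re-trigger unprepare while this function is already running */
      if (priv->finish_unprepare_running)
        return;
      priv->finish_unprepare_running = TRUE;

      /* ... actual teardown ... */

      priv->finish_unprepare_running = FALSE;
    }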
In plug_src we changed the element state before adding it to
the owner container. This prevented the pipeline from intercepting
a GST_STREAM_STATUS_TYPE_CREATE message from the pad in order
to assign a custom task pool.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/issues/53
Media is considered to be blocked when all streams that belong to
that media are blocked.
This patch solves the problem of inconsistent updates of
priv->blocked that are not synchronized with the media state.
Before the seek operation is performed on the media, it is required
that its pipeline is prepared, i.e. that the pipeline is in the PAUSED
state. At this stage, all transport parts (transport sinks) have been
successfully added to the pipeline and there is no need for blocking
the streams.
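A minimal sketch of the "all streams blocked" check mentioned above (the
streams array field is illustrative):

    static gboolean
    media_streams_blocking (GstRTSPMediaPrivate * priv)
    {
      guint i;

      /* the media counts as blocked only when every stream is blocked */
      for (i = 0; i < priv->streams->len; i++) {
        GstRTSPStream *stream = g_ptr_array_index (priv->streams, i);

        if (!gst_rtsp_stream_is_blocking (stream))
          return FALSE;
      }
      return TRUE;
    }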
The sequence number in the RTP-Info is supposed to be the first RTP
sequence number, while the "seqnum" property on a payloader is the
number of the last processed RTP packet. For payloaders inheriting from
gstrtpbasepayload, the seqnum property will not be correct in case of
buffer lists. To fix this, gst-rtsp-server must get the sequence number
for the RTP-Info elsewhere: "seqnum-offset" from the "stats" property
contains the value of the very first RTP packet in a stream. The server
will, however, first try to look at the last sample in the sink element
and only use properties on the payloader if there are no sink elements
yet; looking at the last sample of the sink gives the server full
control over which RTP packet it looks at. If the payloader does not
have the "stats" property, "seqnum" is still used, since
"seqnum-offset" is only present as part of "stats"; that case remains
an issue not solved by this patch.
Needed for gst-plugins-base!17
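A minimal sketch of the payloader fallback described above (assuming
"seqnum-offset" is stored as a uint in the stats structure):

    /* Prefer "seqnum-offset" from the payloader's "stats" structure
     * (the very first RTP seqnum); fall back to "seqnum" only when
     * "stats" is not available. */
    GstStructure *stats = NULL;
    guint seqnum = 0;

    g_object_get (payloader, "stats", &stats, NULL);
    if (stats != NULL) {
      gst_structure_get_uint (stats, "seqnum-offset", &seqnum);
      gst_structure_free (stats);
    } else {
      g_object_get (payloader, "seqnum", &seqnum, NULL);
    }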
... by actually making it a single-include header and moving everything
related to the GstRTSPServer type to rtsp-server-object.h instead.
Otherwise there are too many circular includes.
https://bugzilla.gnome.org/show_bug.cgi?id=797361
When the underlying layers are running on_message_sent, this sometimes
causes the underlying layer to send more data, which in turn causes the
underlying layer to run the on_message_sent callback again. This can go
on and on.
To break this chain, we introduce an idle source that takes care of
sending data if there is more to send when the callback runs.
https://bugzilla.gnome.org/show_bug.cgi?id=797289
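A minimal sketch of breaking the recursion with an idle source (helper
names are hypothetical; the real code would attach the source to the
client's own GMainContext rather than the default one):

    static gboolean
    idle_send_more_data (gpointer user_data)
    {
      GstRTSPClient *client = user_data;

      send_pending_data (client);  /* hypothetical helper */
      return G_SOURCE_REMOVE;      /* one-shot source */
    }

    /* in on_message_sent: instead of sending directly (and possibly
     * recursing), schedule the next send from the main loop */
    g_idle_add (idle_send_more_data, client);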
Avoids ending up with races where a timeout would still be around
*after* a client was gone. This could happen rather easily in
RTSP-over-HTTP mode on a local connection, where each RTSP message
would be sent as a different HTTP connection with the same tunnelid.
If not properly removed, that timeout would then try to free again
a client (and its contents).
By default the multicast sockets are bound to INADDR_ANY, as it is not
allowed to bind sockets to multicast addresses on Windows. This default
behaviour can be changed by setting the bind-mcast-address property on
the media-factory object.
https://bugzilla.gnome.org/show_bug.cgi?id=797059
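For example:

    /* opt in to binding the multicast sockets to the multicast
     * address instead of the INADDR_ANY default */
    g_object_set (factory, "bind-mcast-address", TRUE, NULL);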
Export rtsp-server library API in headers when we're building the
library itself, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
Fix up some missing config.h includes when building the lib, which are
needed to get the export API define from config.h.
https://bugzilla.gnome.org/show_bug.cgi?id=797185
If a (strange) client would reuse interleaved channel numbers in
multiple SETUP requests, we should not accept them. The channel
numbers are used for looking up stream transports in the
priv->transports hash table, and transports disappear from the table
if channel numbers are reused.
RFC 7826 (RTSP 2.0), Section 18.54, clarifies that it is OK for the
server to change the channel numbers suggested by the client.
https://bugzilla.gnome.org/show_bug.cgi?id=796988
When media is shared, the same media stream can be sent
to multiple multicast groups. Currently, there is no API
to retrieve multicast addresses from the stream.
When calling the gst_rtsp_stream_get_multicast_address() function,
only the first multicast address is returned.
With this patch, each multicast destination requested in SETUP
will be stored in an internal list (call to
gst_rtsp_stream_add_multicast_client_address()).
The list of multicast groups requested by the clients can be
retrieved by calling gst_rtsp_stream_get_multicast_client_addresses().
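A minimal sketch of the new getter; that it returns a newly allocated,
comma-separated string is an assumption:

    /* list every multicast destination requested via SETUP */
    gchar *addrs =
        gst_rtsp_stream_get_multicast_client_addresses (stream);

    g_print ("multicast destinations: %s\n", addrs);
    g_free (addrs);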
There still exist some problems with the current implementation in the
multicast case:
1) The receiving part is currently only configured with regard to the
first multicast client (see
https://bugzilla.gnome.org/show_bug.cgi?id=796917).
2) Secondly, for security reasons, some constraints should be put on
the requested multicast destinations (see
https://bugzilla.gnome.org/show_bug.cgi?id=796916).
https://bugzilla.gnome.org/show_bug.cgi?id=793441
The maximum ttl value provided so far by the multicast clients
will be chosen and reported in the response to the current
client request.
https://bugzilla.gnome.org/show_bug.cgi?id=793441
If "transport.client-settings" parameter is set to true, the client is
allowed to specify destination, ports and ttl.
There is no need for pre-configured address pool.
https://bugzilla.gnome.org/show_bug.cgi?id=793441
When two multicast clients request specific transport configurations,
and the "transport.client-settings" parameter is set to true, it is
wrong to require that these two clients request the same multicast
group.
Removed the test_client_multicast_invalid_transport_specific test
cases, as they wrongly require the requested destination address to be
present in the address pool even when the "transport.client-settings"
parameter is set to true.
https://bugzilla.gnome.org/show_bug.cgi?id=793441