If the pipeline is stalled for too long, souphttpsrc will block and
stop fetching data from the network. This can cause the connection to
drop, and souphttpsrc would treat that as EOS. This patch makes it
persist and try to fetch more data until the end of the content length
is reached or, if the content size is unknown, until the server
reports that the requested range is beyond its limits.
https://bugzilla.gnome.org/show_bug.cgi?id=683536
The HTTP server could report a wrong size, e.g. if the HTTP stream
uses chunked encoding or is compressed, or if the server does not know
the complete size at the time the client requests the file.
Also see
https://bugs.webkit.org/show_bug.cgi?id=115354
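A minimal sketch of the retry decision described above (the helper and
its parameters are illustrative, not the actual souphttpsrc symbols):

  #include <glib.h>

  /* 'read_position', 'content_size' and 'got_range_error' stand in
   * for state the element tracks itself. */
  static gboolean
  should_retry_request (guint64 read_position, guint64 content_size,
      gboolean got_range_error)
  {
    if (content_size > 0)
      /* Known length: keep fetching until everything has been read. */
      return read_position < content_size;

    /* Unknown length: retry until the server reports that the
     * requested range is beyond the end of the resource. */
    return !got_range_error;
  }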
Answer scheduling queries with default parameters and the new
_BANDWIDTH_LIMITED_FLAG so that downstream is advised to minimize seek
operations and to perform on-disk buffering if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=693484
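For illustration, answering the query with the core GStreamer 1.0 API
could look roughly like this (the exact flag set and parameters used
by souphttpsrc may differ):

  #include <gst/gst.h>

  /* Advertise push mode with the bandwidth-limited flag so downstream
   * (e.g. queue2) knows to buffer on disk and to avoid extra seeks. */
  static gboolean
  answer_scheduling_query (GstQuery * query)
  {
    gst_query_set_scheduling (query,
        GST_SCHEDULING_FLAG_SEQUENTIAL |
        GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED, 1, -1, 0);
    gst_query_add_scheduling_mode (query, GST_PAD_MODE_PUSH);
    return TRUE;
  }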
In 1.0 we now always send the icecast request headers by default,
which makes the server insert icecast metadata into the stream if it
supports that. However, there are some use cases where this is not
desirable, such as when just saving a radio stream to disk, so add
back the "iradio-mode" property to allow people to disable this.
https://bugzilla.gnome.org/show_bug.cgi?id=697984
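For example, an application that just rips a stream to disk could turn
the metadata insertion off like this (URI handling is a placeholder):

  #include <gst/gst.h>

  static GstElement *
  make_rip_source (const gchar * uri)
  {
    GstElement *src = gst_element_factory_make ("souphttpsrc", NULL);

    /* No icecast metadata mixed into the data written to disk. */
    g_object_set (src, "iradio-mode", FALSE, "location", uri, NULL);
    return src;
  }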
Apparently there's no reason to use it any longer. Drop the
libsoup-gnome dependency while at it, now that we don't need anything
from it any more (it consists entirely of deprecated API now anyway).
https://bugzilla.gnome.org/show_bug.cgi?id=693911
When receiving an error code from the HTTP server, such as 404, data
might be sent along with it, like a web page. We don't want to output
that data in this case, and we also want to pass the FLOW_ERROR return
back to the base class so that it can stop properly.
https://bugzilla.gnome.org/show_bug.cgi?id=678429
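Roughly, the behaviour amounts to a check like this (a sketch against
the libsoup 2 API of the time; the real element also posts an error
message):

  #include <gst/gst.h>
  #include <libsoup/soup.h>

  static GstFlowReturn
  check_http_status (SoupMessage * msg, GstBuffer ** buf)
  {
    if (SOUP_STATUS_IS_CLIENT_ERROR (msg->status_code) ||
        SOUP_STATUS_IS_SERVER_ERROR (msg->status_code)) {
      /* Drop the error page instead of pushing it downstream ... */
      gst_buffer_replace (buf, NULL);
      /* ... and return FLOW_ERROR so the base class stops properly. */
      return GST_FLOW_ERROR;
    }
    return GST_FLOW_OK;
  }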
Leave it to the application to decide on the header. No header at all
is better than having the wrong header, as DLNA mandates that a
missing header must be tolerated while a wrong header is an error.
https://bugzilla.gnome.org/show_bug.cgi?id=676020
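An application that does want a DLNA transfer-mode header can add it
itself through souphttpsrc's "extra-headers" property; the header name
and value below are just an example:

  #include <gst/gst.h>

  static void
  add_dlna_header (GstElement * src)
  {
    GstStructure *headers = gst_structure_new ("extra-headers",
        "transferMode.dlna.org", G_TYPE_STRING, "Streaming", NULL);

    g_object_set (src, "extra-headers", headers, NULL);
    gst_structure_free (headers);
  }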
Consider a downstream element that may issue seeks in very short
succession (e.g. queue2), depending on the access pattern of the
element downstream of it (e.g. qtdemux with audio/video chunks
interleaved so that there's always a sizeable gap between the current
chunks of each stream). In this case, queue2 will maintain two ranges,
and even when it serves a chunk from memory it will switch ranges and
make souphttpsrc seek to the end of the available data for that range,
assuming that that's where we'll want to continue reading from next.
This may lead to the following seek request pattern:
- source reading position A
- seek to B
- reading position is still A, requested position is B
- streaming thread is to be restarted to continue from B
- seek to A, before the streaming thread had time to do the seek
- do_seek() now sees reading position == seek position and
returns early.
- however, requested position is still B from the earlier
seek request
- streaming thread starts up, sees that a seek to B is pending
and requests data from B from the server, while the GstBaseSrc
segment has of course been updated/reset to position A, which
was the last seek request.
- we will now send data for position B and pretend that's the
data from position A (via the newsegment event, etc.)
- this causes data corruption
Reproducible doing seek-emulated fast-forward/backward on 006648.
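A sketch of the stricter early-return condition that avoids the stale
pending request (variable names follow the description above, not the
exact source):

  #include <glib.h>

  /* Only treat the seek as a no-op when both the current reading
   * position and any still-pending requested position already match
   * the target; otherwise a stale request for B could survive a newer
   * seek back to A and the wrong data would be served. */
  static gboolean
  seek_is_noop (guint64 read_position, guint64 request_position,
      guint64 seek_target)
  {
    return read_position == seek_target &&
        request_position == seek_target;
  }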
Error messages should be translated. URIs and filenames should not be
part of the error message string that's shown to the user.
soup_message->reason_phrase is not translated and is not suitable as
an error message for users (see the libsoup documentation). Also fix
up the error codes a bit, as far as possible with the existing codes.
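The resulting pattern looks roughly like this (a sketch; the element,
URI and reason phrase are placeholders, and glib/gi18n.h stands in for
the plugin's own gettext setup): the user-visible text is translated
and carries no URI, while the untranslated details go only into the
debug string.

  #include <gst/gst.h>
  #include <glib/gi18n.h>

  static void
  post_not_found_error (GstElement * src, const gchar * uri,
      const gchar * reason_phrase)
  {
    GST_ELEMENT_ERROR (src, RESOURCE, NOT_FOUND,
        (_("File not found on server.")),
        ("uri=%s, reason=%s", uri, reason_phrase));
  }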