If seek is called many times in quick succession, data reading and
do_request() sometimes happen while the seek's flush event is still in
progress, and an error occurs because retry_count reaches the maximum.
Thus, reset retry_count if a flush occurs after do_request() and
read_buffer().
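A minimal sketch of the idea, with hypothetical field names for the
element state:

  /* Hypothetical field names: a flush triggered by a new seek means the
   * interrupted attempt must not count towards the retry limit. */
  if (g_cancellable_is_cancelled (src->cancellable)) {
    src->retry_count = 0;       /* start counting again after the flush */
    return GST_FLOW_FLUSHING;
  }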
https://bugzilla.gnome.org/show_bug.cgi?id=790199
This is a regression introduced by "03db374 - souphttpsrc: retry
request on early termination from the server"
The problem was that when seeking back to 0, we would not end up calling
add_range_header() which in addition to adding range headers *ALSO* sets
the read_position to the requested one.
This would result in a wide variety of later failures, like reading
again and again instead of stopping properly.
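A rough sketch of the intended behaviour; the function name follows the
commit message, its exact signature and the field names are assumptions:

  /* Send a Range header whenever the positions differ, including a seek
   * back to offset 0; add_range_header() also updates read_position. */
  if (src->request_position != src->read_position)
    gst_soup_http_src_add_range_header (src, src->request_position,
        src->stop_position);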
Instead of just sending a sticky event with the HTTP headers downstream,
also make them available to the application directly. This allows getting
at the HTTP headers easily in the application, especially also on errors.
With the default value, Soup allows only up to two connections per host
in a session. When session sharing is used, however, more connections
might be required in a session (e.g., the multi-audio adaptive streaming
case).
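A sketch of how the shared session might raise that limit;
SOUP_SESSION_MAX_CONNS_PER_HOST is real libsoup 2 API, the chosen value
is only illustrative:

  /* Allow more parallel connections per host than libsoup's default of
   * 2, so several souphttpsrc instances sharing one session are not
   * serialized onto two connections. */
  SoupSession *session = soup_session_new_with_options (
      SOUP_SESSION_MAX_CONNS_PER_HOST, 8,     /* illustrative value */
      NULL);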
https://bugzilla.gnome.org/show_bug.cgi?id=784495
Currently only an HTTP proxy was allowed. Don't filter for the scheme,
just check whether it looks like a URI. Soup will warn if the URI is
invalid or if the proxy protocol is not supported. This enables using
SOCKS 4/5, which is implemented directly in GIO.
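A sketch of what accepting such a proxy URI could look like with
libsoup 2 (the SOCKS address is just an example):

  /* Hand any URI-like proxy string to libsoup; GIO resolves SOCKS 4/5. */
  SoupURI *proxy_uri = soup_uri_new ("socks5://127.0.0.1:1080");
  if (proxy_uri != NULL) {
    g_object_set (session, SOUP_SESSION_PROXY_URI, proxy_uri, NULL);
    soup_uri_free (proxy_uri);
  }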
https://bugzilla.gnome.org/show_bug.cgi?id=783012
souphttpsrc now shares its SoupSession with other elements in the
pipeline via GstContext if possible (session-wide settings are all the
defaults), or if the context was forced by the application.
This allows multiple souphttpsrcs to reuse connections, cookies, etc.
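A rough sketch of how an application could force a shared session
through a GstContext; the context type name and structure field used
here are assumptions, not a documented contract:

  /* Hypothetical context type and field names. */
  GstContext *context = gst_context_new ("gst.soup.session", TRUE);
  GstStructure *s = gst_context_writable_structure (context);
  gst_structure_set (s, "session", SOUP_TYPE_SESSION, shared_session, NULL);
  gst_element_set_context (GST_ELEMENT (pipeline), context);
  gst_context_unref (context);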
https://bugzilla.gnome.org/show_bug.cgi?id=780140
Let libsoup handle redirection automatically.
Then, to figure out the redirection URI, extract it in the "restarted"
callback, which is fired before soup_session_send() returns.
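A minimal sketch using the real libsoup 2 signal; variable names are
illustrative:

  static void
  on_restarted (SoupMessage * msg, gpointer user_data)
  {
    gchar **redirection_uri = user_data;

    /* Fired for every redirect before soup_session_send() returns, so
     * the last value seen here is the effective URI. */
    g_free (*redirection_uri);
    *redirection_uri = soup_uri_to_string (soup_message_get_uri (msg), FALSE);
  }

  /* when setting up the message: */
  g_signal_connect (msg, "restarted", G_CALLBACK (on_restarted),
      &src->redirection_uri);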
https://bugzilla.gnome.org/show_bug.cgi?id=778428
Fix a regression introduced by commit 183695c61a (refactor to use
Soup's sync API). The code previously attempted to reconnect when the
server closed the connection early, for example when the stream had been
paused for some time.
Reintroduce this feature by checking if EOS is received before the
expected content size is downloaded. In this case, do the request
starting at the previous read position.
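A sketch of the check, with hypothetical field names for the element
state:

  /* Early EOS: the server closed the connection before the announced
   * content size was downloaded, so redo the request from where we were. */
  if (bytes_read == 0 && src->have_size &&
      src->read_position < src->content_size) {
    src->request_position = src->read_position;
    return gst_soup_http_src_do_request (src);    /* hypothetical helper */
  }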
https://bugzilla.gnome.org/show_bug.cgi?id=776720
This check must be done only when we are sure the request was
successfully sent. soup_session_send() might fail without setting the
status code; in that case the status code is 0, so we would only catch
the error after the seek range check and would report an error saying
that the seek range was not respected, instead of reporting the
underlying error that triggered the soup_session_send() failure.
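A minimal sketch of the ordering, using the real libsoup 2 sync API:

  GError *err = NULL;
  GInputStream *stream = soup_session_send (session, msg, cancellable, &err);

  if (stream == NULL) {
    /* The request was never sent successfully; msg->status_code may
     * still be 0, so report this error instead of a misleading
     * seek-range one. */
    GST_ELEMENT_ERROR (src, RESOURCE, READ, (NULL), ("%s", err->message));
    g_clear_error (&err);
    return GST_FLOW_ERROR;
  }
  /* Only now validate the status code and whether the Range was honoured. */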
https://bugzilla.gnome.org/show_bug.cgi?id=777222
The flow return value was previously stored in the element because the
result had to be set from callbacks. This is not the case anymore: we
can return the flow result directly from the functions, making the code
easier to understand.
https://bugzilla.gnome.org/show_bug.cgi?id=777222
The current code configures libsoup to handle redirections
transparently, without informing the caller, thus preventing the element
from recording the redirect code and location URI.
Fix this by always setting the SOUP_MESSAGE_NO_REDIRECT flag, preventing
libsoup from handling the redirection itself. When we receive a
redirection request and libsoup can safely handle it, return a custom
error which triggers a retry with the new URI.
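A sketch of that logic; the libsoup calls are real 2.x API, the custom
flow return and the field name are hypothetical:

  soup_message_set_flags (msg, SOUP_MESSAGE_NO_REDIRECT);
  ...
  if (SOUP_STATUS_IS_REDIRECTION (msg->status_code) &&
      soup_session_would_redirect (session, msg)) {
    /* Record the redirect target and retry with it. */
    g_free (src->redirection_uri);
    src->redirection_uri = g_strdup (
        soup_message_headers_get_one (msg->response_headers, "Location"));
    return GST_SOUP_HTTP_SRC_FLOW_REDIRECT;   /* hypothetical custom error */
  }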
https://bugzilla.gnome.org/show_bug.cgi?id=777222
Especially don't put them into GstStructures in one way or another, just
ignore them or error out cleanly depending on the importance of their
content.
souphttpsrc maintains two variables for the position:
* 'request_position' is where we want to be
* 'read_position' is where we are
During normal operation both are updated in sync as data arrives. A seek
changes 'request_position' but not 'read_position'.
When the two positions get out of sync, a new request is sent and the
'Range' header is adjusted to the current 'request_position'.
Without this patch, if reading fails, then the source is destroyed. This
triggers a new request, but the range remains unchanged. As a result, the
old range is used and old data will be read.
Changing the 'read_position' to -1 makes it explicitly different from
'request_position' and as a result the 'Range' header is updated correctly.
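A sketch of the fix, with field names as described above (the helper
signature is an assumption):

  /* A failed read invalidates read_position ... */
  if (read_failed)
    src->read_position = -1;    /* now guaranteed != request_position */

  /* ... so the next request regenerates the Range header instead of
   * silently reusing the old range. */
  if (src->request_position != src->read_position)
    gst_soup_http_src_add_range_header (src, src->request_position,
        src->stop_position);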
https://bugzilla.gnome.org/show_bug.cgi?id=773509
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
At the end of a range request, we don't want to return GST_FLOW_EOS,
otherwise the last bytes we just read will be dropped by basesrc.
Instead just return GST_FLOW_OK (which was set just before) and let
basesrc handle the fact that we are at the end of the segment.
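A sketch of the intent, with hypothetical field names:

  /* We just filled the buffer with the final bytes of the range. */
  ret = GST_FLOW_OK;
  if (src->have_size && src->read_position >= src->stop_position) {
    /* Do NOT switch to GST_FLOW_EOS here: basesrc would drop this
     * buffer. Returning OK lets basesrc end the segment itself on the
     * next call. */
    GST_DEBUG_OBJECT (src, "range request finished at %" G_GUINT64_FORMAT,
        src->read_position);
  }
  return ret;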
If we're at the end of a range request, read again to let libsoup
finalize the request. This allows the connection to be reused again
later; otherwise we would have to cancel the message and close the
connection.
We have to get rid of the message on EOS when the complete stream has
been read, to remember that we successfully finished handling this
specific message. Otherwise we would cancel it later and close the
connection instead of reusing it at a later time.
It might also make sense to reuse connections if a non-200 response is
received. As long as there was no connection error, the HTTP connection
should be reusable.
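A combined sketch of these two steps; the field names and the
completeness check are assumptions, the GIO/GLib calls are real:

  /* At the end of a range request, read once more so libsoup sees the
   * body as fully consumed, then drop the message so it is never
   * cancelled. */
  guint8 drain[64];

  if (gst_soup_http_src_range_is_complete (src)) {    /* hypothetical */
    g_input_stream_read (src->input_stream, drain, sizeof (drain),
        src->cancellable, NULL);    /* expected to return 0 (EOS) */
    g_clear_object (&src->msg);     /* keeps the connection reusable */
  }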
Update the blocksize depending on how much is obtained from a read
of the input stream. This avoids doing too many reads in small chunks
when larger amounts of data are available and also prevents using
a very large memory area to read a small chunk of data.
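A sketch of such an adaptive blocksize policy;
gst_base_src_get_blocksize()/set_blocksize() are real GstBaseSrc API,
the scaling thresholds are illustrative:

  guint blocksize = gst_base_src_get_blocksize (GST_BASE_SRC (src));

  if (bytes_read == blocksize)
    blocksize = MIN (blocksize * 2, 1024 * 1024);   /* full reads: grow */
  else if (bytes_read < blocksize / 2)
    blocksize = MAX (blocksize / 2, 4096);          /* short reads: shrink */

  gst_base_src_set_blocksize (GST_BASE_SRC (src), blocksize);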
https://bugzilla.gnome.org/show_bug.cgi?id=767833
When early returning in gst_soup_http_src_read_buffer() because the
element is FLUSHING, we need to unmap and unref the buffer which was just created.
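A minimal sketch of the early-return path (the flushing check is
hypothetical, the buffer calls are real GStreamer API):

  if (g_cancellable_is_cancelled (src->cancellable)) {
    /* Element is flushing: release the buffer we just created before
     * returning, instead of leaking its mapping and reference. */
    gst_buffer_unmap (buf, &mapinfo);
    gst_buffer_unref (buf);
    return GST_FLOW_FLUSHING;
  }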
https://bugzilla.gnome.org/show_bug.cgi?id=766718
Directly setting audio/x-raw caps leads to problems when the delivered
data blocks do not align properly at sample boundaries (for example, a
data block with 391 bytes). So, instead, set audio/x-unaligned-raw to
let a parser be autoplugged.
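Illustrative caps for that approach; the concrete format fields are
examples, only the media type comes from this change:

  /* Same fields as audio/x-raw, but the unaligned media type makes an
   * audio parser get autoplugged to restore sample alignment. */
  GstCaps *caps = gst_caps_new_simple ("audio/x-unaligned-raw",
      "format", G_TYPE_STRING, "S16LE",
      "rate", G_TYPE_INT, 44100,
      "channels", G_TYPE_INT, 2,
      NULL);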
https://bugzilla.gnome.org/show_bug.cgi?id=689460
Non-blocking read will return the amount of data available without
blocking to wait for the full requested size.
The downside is that souphttpsrc now needs a waiting mechanism, in case
no data is available yet, to avoid busy looping around the input stream.
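A sketch with the real GIO API; the surrounding variables are
illustrative:

  GError *err = NULL;
  gssize n = g_pollable_input_stream_read_nonblocking (
      G_POLLABLE_INPUT_STREAM (stream), data, size, cancellable, &err);

  if (n < 0 && g_error_matches (err, G_IO_ERROR, G_IO_ERROR_WOULD_BLOCK)) {
    /* Nothing available yet: wait on a pollable source instead of
     * spinning around the input stream. */
    GSource *source = g_pollable_input_stream_create_source (
        G_POLLABLE_INPUT_STREAM (stream), cancellable);
    /* ... attach the source to a main context and wait for dispatch ... */
    g_source_unref (source);
    g_clear_error (&err);
  }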
The problem is that filesrc and souphttpsrc behave differently when
calculating segment boundaries: filesrc uses non-inclusive boundaries,
while souphttpsrc uses inclusive ones. Currently hlsdemux calculates the
boundaries as inclusive, so there is no problem with souphttpsrc, but
there is an issue with filesrc.
GstSegment is non-inclusive, so the proposed solution is to use
non-inclusive boundaries in hlsdemux in order to be consistent.
Making that change only in hlsdemux would break souphttpsrc, which
expects inclusive boundaries while hlsdemux would now offer
non-inclusive ones. This change makes sure that the non-inclusive
boundaries are converted to inclusive ones.
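A sketch of the conversion when building the HTTP request (the Range
header construction is illustrative):

  /* GstSegment stop values are non-inclusive, HTTP Range end offsets
   * are inclusive, so subtract one when converting. */
  guint64 range_start = segment->start;
  guint64 range_end = segment->stop - 1;

  gchar *range = g_strdup_printf ("bytes=%" G_GUINT64_FORMAT "-%"
      G_GUINT64_FORMAT, range_start, range_end);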
https://bugzilla.gnome.org/show_bug.cgi?id=748316
These allow a failed request to be retried after the given number of
seconds instead of failing the pipeline, taking the Retry-After header
into account if present. Also add a retries parameter that controls the
number of times an HTTP request will be retried before failing.
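A short usage sketch; the "retries" property name comes from this
change, the value is just an example:

  /* Retry a failed HTTP request up to 5 times before erroring out. */
  g_object_set (G_OBJECT (src), "retries", 5, NULL);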
https://bugzilla.gnome.org/show_bug.cgi?id=756318