We need to pass the X11 display to GstGL or else it will
use its own X11 Display pointer, and the GL Context won't get shared
correctly on newer X servers
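On the application side this looks roughly like the sketch below, assuming
the application already owns an X11 Display and that the
gst_gl_display_x11_new_with_display() entry point is available (variable
names are illustrative):

#include <gst/gl/x11/gstgldisplay_x11.h>

/* wrap the application's existing X11 Display so GstGL shares it
 * instead of opening its own connection */
GstGLDisplay *gl_display =
    GST_GL_DISPLAY (gst_gl_display_x11_new_with_display (x11_display));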
In file included from /home/thiagoss/gst/head/gstreamer/gst/gst.h:54:0,
from /home/thiagoss/gst/head/gstreamer/libs/gst/check/gstcheck.h:34,
from elements/hlsdemux_m3u8.c:27:
../../ext/hls/gstfragmented.h:8:28: error: redundant redeclaration of ‘fragmented_debug’ [-Werror=redundant-decls]
GST_DEBUG_CATEGORY_EXTERN (fragmented_debug);
Move the definition of the category to after the declaration.
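For reference, a minimal sketch of the layout this converges on: the header
carries only the extern declaration, and exactly one .c file defines and
later initializes the category (the name/description strings here are
illustrative):

/* gstfragmented.h: declaration only */
GST_DEBUG_CATEGORY_EXTERN (fragmented_debug);

/* one .c file: the single definition */
GST_DEBUG_CATEGORY (fragmented_debug);

/* ...initialized once, e.g. from plugin_init() */
GST_DEBUG_CATEGORY_INIT (fragmented_debug, "fragmented", 0, "HLS plugin");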
Ported from https://github.com/ylatuya/gst-plugins-bad
This still has some unit tests for alternative renditions and
seeking, which are commented out for the time being until we
support them properly.
1. The glcontextid function is replaced by GStreamer's gst_gl_context_new_wrapped().
2. Call gst_init() before gst_gl_display_new(): gst_gl_display_new() seems to
depend on gst_allocator_register(), which only works after gst_init() has
been called (sketched below).
3. Flush the GStreamer OpenGL context before using the shared texture; this
fixes a flicker problem.
https://bugzilla.gnome.org/show_bug.cgi?id=735566
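A sketch of the resulting order; the external GL handle and the
platform/API values are placeholders:

gst_init (&argc, &argv);        /* allocator registration depends on this */

GstGLDisplay *display = gst_gl_display_new ();  /* safe only after gst_init */

/* wrap the externally created GL context instead of a custom glcontextid */
GstGLContext *context = gst_gl_context_new_wrapped (display,
    external_gl_handle, GST_GL_PLATFORM_GLX, GST_GL_API_OPENGL);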
Use the sticky events to compose the streamheader, as they are the
ones that persist and are used to configure newly linked pads. Instead of
storing them ourselves, rely on the pad storage that already orders them
for us.
https://bugzilla.gnome.org/show_bug.cgi?id=732596
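A sketch of that approach, reading the pad's stored sticky events rather
than a private list (the callback name is made up for illustration):

static gboolean
collect_sticky (GstPad * pad, GstEvent ** event, gpointer user_data)
{
  GList **streamheader = user_data;

  /* pads keep sticky events ordered (stream-start, caps, segment, ...) */
  *streamheader = g_list_append (*streamheader, gst_event_ref (*event));
  return TRUE;                  /* continue iterating */
}

GList *streamheader = NULL;
gst_pad_sticky_events_foreach (pad, collect_sticky, &streamheader);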
Check that end_of_seq() [EOSEQ] and end_of_stream() [EOS] NAL units
are correctly parsed and that the reported NAL unit size is 1 byte,
i.e. only the NalHeaderBytes in there.
https://bugzilla.gnome.org/show_bug.cgi?id=732553
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
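A rough sketch of what the test asserts, using the codecparsers API
(buffer contents elided):

GstH264NalParser *parser = gst_h264_nal_parser_new ();
GstH264NalUnit nalu;

gst_h264_parser_identify_nalu (parser, data, 0, size, &nalu);
/* EOSEQ/EOS carry no payload: nalu.size should be 1, i.e. nothing
 * but the NAL header byte */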
Check that conversion to byte-stream/au formats works and that we
can effectively drop broken/invalid NAL units from the resulting
access unit buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If an SEI NAL unit with a buffering_period() message is inserted
between an SPS and PPS NAL unit, check that the output buffer still
contains it, i.e. make sure that this SEI message is not dropped.
https://bugzilla.gnome.org/show_bug.cgi?id=732156
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
It was previously a mix and match of both variants, introducing just too much
confusion.
The prefixes are from now on:
* GstMpegts for structures and type names (and not GstMpegTs)
* gst_mpegts_ for functions (and not gst_mpeg_ts_)
* GST_MPEGTS_ for enums/flags (and not GST_MPEG_TS_)
* GST_TYPE_MPEGTS_ for types (and not GST_TYPE_MPEG_TS_)
The rationale for choosing this is:
* the namespace is shorter/direct (it's mpegts, not mpeg_ts nor mpeg-ts)
* the namespace is one word under Gst
* it's shorter (yah)
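For illustration, a call site under the new convention reads:

const GstMpegtsPMT *pmt;

if (GST_MPEGTS_SECTION_TYPE (section) == GST_MPEGTS_SECTION_PMT)
  pmt = gst_mpegts_section_get_pmt (section);   /* not gst_mpeg_ts_... */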
The reshape property was never used.
Replace the draw property with a signal.
Based on patch by Mathieu Duponchelle <mathieu.duponchelle@epitech.eu>
https://bugzilla.gnome.org/show_bug.cgi?id=704507
ATSC has its own version of the EIT table (DVB also has one).
This patch adds parsing for the ATSC EIT table and also fixes
the section identification to mark it as the ATSC one.
The implementation was refactored to reuse some common internal
structures from ETT.
Also adds its dumping function to the ts-parser example.
https://bugzilla.gnome.org/show_bug.cgi?id=730435
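A hedged usage sketch; the accessor and field names below follow the
pattern of the other ATSC tables and may not match the final API exactly:

const GstMpegtsAtscEIT *eit = gst_mpegts_section_get_atsc_eit (section);
guint i;

for (i = 0; i < eit->events->len; i++) {
  GstMpegtsAtscEITEvent *event = g_ptr_array_index (eit->events, i);
  /* inspect event->event_id, event->start_time, ... */
}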
Adds the system time table structure and functions for conveniently
parsing it and for getting the UTC datetime that it represents. Also adds
its information dumping to the ts-parser example.
https://bugzilla.gnome.org/show_bug.cgi?id=730435
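Sketch of the intended usage, with names assumed from the description
above:

GstMpegtsAtscSTT *stt =
    (GstMpegtsAtscSTT *) gst_mpegts_section_get_atsc_stt (section);

/* the UTC datetime that the table represents */
GstDateTime *utc = gst_mpegts_atsc_stt_get_datetime_utc (stt);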
Add a parsing function for MGT and also detect the EIT tables
for ATSC. The EIT PIDs are reported inside the MGT, but we are still
relying only on the table id for detecting them. In the future we
will also want to check the PID and compare it with whatever the MGT
previously reported, to confirm that it is indeed the EIT.
https://bugzilla.gnome.org/show_bug.cgi?id=730435
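A sketch of the future PID cross-check described above (the table-type
range and the helper are assumptions):

const GstMpegtsAtscMGT *mgt = gst_mpegts_section_get_atsc_mgt (section);
guint i;

for (i = 0; i < mgt->tables->len; i++) {
  GstMpegtsAtscMGTTable *table = g_ptr_array_index (mgt->tables, i);

  if (table->table_type >= GST_MPEGTS_ATSC_MGT_TABLE_TYPE_EIT0 &&
      table->table_type <= GST_MPEGTS_ATSC_MGT_TABLE_TYPE_EIT127)
    remember_eit_pid (table->pid);      /* hypothetical helper */
}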
Before:
GST_GL_PLATFORM=cocoa GST_GL_WINDOW=cocoa
gst-launch-1.0 videotestsrc ! glimagesink
After:
GST_GL_PLATFORM=cgl GST_GL_WINDOW=cocoa
gst-launch-1.0 videotestsrc ! glimagesink
but still pass --enable-cocoa to the configure script
because currently it can only be used with the Cocoa API.
We could later have cgl/gstglcontext_cgl.h manage
a CGLContextObj directly, with cocoa/gstglcontext_cocoa.h
just wrapping it, so that it could be used with Apple's
other window APIs.
https://bugzilla.gnome.org/show_bug.cgi?id=729245
Expose one more libcurl option: CURLOPT_SSH_HOST_PUBLIC_KEY_MD5.
This allows authenticating the server by the MD5 fingerprint of
the server's public key.
https://bugzilla.gnome.org/show_bug.cgi?id=723167
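Setting it would look like any other element property; a sketch, assuming
the option surfaces roughly as an "ssh-host-pubkey-md5" property (both the
property name and the fingerprint value here are placeholders):

/* authenticate the server by its public key's MD5 fingerprint */
g_object_set (sink, "ssh-host-pubkey-md5",
    "00112233445566778899aabbccddeeff", NULL);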
Use COGL_VERSION_ENCODE to check for the minimum required and maximum allowed
cogl version. In certain situations just using the COGL_VERSION_* macro name can
give you the following error:
error "COGL_VERSION_MAX_ALLOWED must be >= COGL_VERSION_MIN_REQUIRED"
Add standalone test application that demonstrates how to use the new
VP8 bitstream parsing library, while also allowing simple debugging/
tracing of IVF files.
[clean-ups, updated to new parser API]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
This patch provides the basic infrastructure required for this.
Upload and download have been ported to this.
This has the nice effect of allowing GstGLMemory to be our
refcounted texture object for any texture type (not just RGBA).
We should not lose any features/video formats.
We create our textures (in Desktop GL) with GL_TEXTURE_RECTANGLE, but
vaapi attempts to bind our texture to GL_TEXTURE_2D, which throws a
GL_INVALID_OPERATION error and thus produces no video.
Also, by moving exclusively to GL_TEXTURE_2D and the npot extension,
we remove a difference between the Desktop GL and GLES2 code.
https://bugzilla.gnome.org/show_bug.cgi?id=712287
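The failure mode is plain GL behaviour: a texture object is tied to the
first target it is bound to, and rebinding it to another target errors
out. Roughly:

GLuint tex;

glGenTextures (1, &tex);
glBindTexture (GL_TEXTURE_RECTANGLE, tex);  /* object now has this target */
glBindTexture (GL_TEXTURE_2D, tex);         /* -> GL_INVALID_OPERATION */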
- cmake could not find glib
- put gtk variables at the beginning to avoid GL conflicts
- update examples to clutter-1.8
- use const instead of deprecated G_CONST_RETURN
- set max pending events to 0 to make the cube example work again
On Linux, the GSource function attached via clutter_threads_add_idle
was not getting CPU time periodically, because the use of
clutter_threads_enter/leave inside the fakesink callback seems to be
too heavy.
So remove the use of clutter_threads_enter/leave in the fakesink callback.
Then replace GQueue with GAsyncQueue to keep thread-safe access to the
communication queues between clutter and gst-gl.
Call clutter_threads_add_idle with high priority.
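A sketch of the queue swap (queue and buffer names are illustrative):

GAsyncQueue *to_clutter = g_async_queue_new ();

/* producer side (fakesink callback, gst-gl thread) */
g_async_queue_push (to_clutter, gst_buffer_ref (buffer));

/* consumer side (clutter idle handler); NULL when nothing is queued */
GstBuffer *buf = g_async_queue_try_pop (to_clutter);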
It requires at least clutter 0.8.6 since lower clutter versions are
not compatible with GL_TEXTURE_RECTANGLE_ARB.
Remove use of ClutterEffectTemplate since it does not exist in
clutter 0.9.
The external OpenGL context must be specified when creating
our OpenGL context (glx) or just after (wgl).
When calling glXCreateContext or wglShareLists, the
external OpenGL context must not be current.
Then our GL context can be current in the GL thread while
the external GL context is current in another thread.
See tests/examples/clutter/cluttershare.c
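On GLX that ordering looks roughly like this (the display/visual
variables are placeholders):

/* the external context must not be current while we share from it */
glXMakeCurrent (display, None, NULL);

/* create our context with the external one in its share list */
GLXContext ours = glXCreateContext (display, visual_info,
    external_context, True);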
Partially revert previous commit. It's not an issue with the glimagesink
XOverlay interface. It's always the same intel bug with direct
rendering redirection (the one that affects each opengl application
with compositing managers). It works fine with DRI2 and UXA
acceleration. Still leaving effects disabled because I'm testing intel
hardware that doesn't support FBOs.
GLimagesink XOverlay interface doesn't seem to work with composite
redirection on intel (and I believe ati too). Windows aren't
redirected offscreen at all. This commit just shows that the example
correctly works with ximagesink. The most evident difference I see is
that glimagesink reparents the xoverlay window into its own while both
x and xvimagesink destroy their window and render directly to the
xoverlay one.
Revert the "move windows" thing from commit
175f7a707bc922f3facc63e7d9b6d01f9bb6b1b0
Windows are offscreen; who cares about their position? If you see the
windows, something is going wrong with composite redirection.
This reverts commit 96e4ab18c2cf9876f6c031b9aba6282d0bd45a93.
You should have asked first. And you would have been told "no",
because it causes people on development branches to do a huge
amount of extra work.
Add xray effect. Maps luma to a negative, slightly cyan-tinted curve,
applies some light gaussian blur and multiplies it with its sobel edges. Not
sure about the name, likely to change. Probably still needs some tuning.
Tests for PAT, PMT, and NIT
Creates a new table and populates it with descriptors.
Parses the newly created tables and checks the data.
Creates a GstMpegTsSection from the tables and packetizes the sections.
The packetized section data is byte-wise compared to a static byte array.
https://bugzilla.gnome.org/show_bug.cgi?id=723953
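The round trip looks roughly like the following PAT sketch (spellings
follow the current mpegts library; the values are arbitrary):

GPtrArray *pat = gst_mpegts_pat_new ();
GstMpegtsPatProgram *program = gst_mpegts_pat_program_new ();
gsize size;

program->program_number = 1;
program->network_or_program_map_PID = 0x30;
g_ptr_array_add (pat, program);

GstMpegtsSection *section = gst_mpegts_section_from_pat (pat, 0x1234);
guint8 *data = gst_mpegts_section_packetize (section, &size);
/* data/size can now be compared byte-wise against the static array */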
The thread that calls the success/failure callback can be the
same one that is adding/removing the element, as the IDLE probe can
happen instantly if the pad is not 'busy'.
This required moving some checks for the callback counter around,
as well as removing some pad pushes from the main test thread, as
they were made useless after the IDLE pad probe was fixed in core
by commit 0324358ebc.
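That synchronous path is inherent to the probe API: when the pad is
already idle, the callback runs in the caller's thread before
gst_pad_add_probe() returns. Sketch (the callback name is made up):

/* switch_done() may run immediately, in this very thread,
 * if the pad has no activity at the time of the call */
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_IDLE,
    switch_done, user_data, NULL);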
* stream-start-id is mandatory at the beginning, so add that to the
gdp headers
* caps must be sent before the segment event; invert the order from the
legacy 0.10 code
Also fix the tests, as a ref is now kept for those buffers that compose
the header (the resulting event order is sketched below).
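The header events therefore go out in this order (a sketch; the
stream-id, caps and segment contents are elided):

gst_pad_push_event (pad, gst_event_new_stream_start (stream_id));
gst_pad_push_event (pad, gst_event_new_caps (caps));
gst_pad_push_event (pad, gst_event_new_segment (&segment));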
Most of the tests weren't updated after the sticky-events reordering
and stream-start changes. Fix that, and refactor the test checks that
are identical into common functions.
Those functions still don't actually verify the content, but at
least now they are in a single place and can be improved
without duplication.
Commit 6af387cd5a made h264parse
strip a leading 0x00 byte in some output scenarios. This broke the
tests, as the bs_to_nal test expects one more byte in the output.
Fix this by comparing the output with the expected stripped version,
too.
When outputting in AVC3 stream format, the codec_data should not
contain any SPS or PPS, because they are embedded inside the stream.
In the case of avc->bytestream, h264parse will push the SPS and PPS from
codec_data downstream at the start of the stream, at intervals
controlled by "config-interval", and when there is a codec_data change.
In the case of avc3->bytestream, h264parse detects that there are
already SPS/PPS in the stream and sets h264parse->push_codec to FALSE.
Therefore avc3->bytestream was already supported, except for the stream
type.
In the case of bytestream->avc, h264parse will generate codec_data caps
from the SPS/PPS parsed out of the stream. However, it does not remove
these SPS/PPS from the stream. bytestream->avc3 is the same as
bytestream->avc, except that the codec_data must not have any SPS/PPS in it.
|--------------+-------------+-------------------|
|stream-format | SPS in-band | SPS in codec_data |
|--------------+-------------+-------------------|
| avc | maybe | always |
|--------------+-------------+-------------------|
| avc3 | always | never |
|--------------+-------------+-------------------|
Amendment 2 of ISO/IEC 14496-15 (AVC file format) defines a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV box
(moov.trak.mdia.minf.stbl.stsd.avc1) but in AVC3 this data goes in the
first sample of every fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
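In caps terms the new variant is just one more stream-format value, e.g.:

GstCaps *caps = gst_caps_from_string (
    "video/x-h264, stream-format=(string)avc3, alignment=(string)au");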
While it was a great idea, various g-i based bindings don't support
GArray with entries greater than sizeof(gpointer) :(
So let's make everybody happy by just using GPtrArray.
And since we're breaking the API, also rename the various descriptor fields
to no longer have the descriptor_ prefix.
It does cost a bit more in terms of memory/cpu usage, but makes it usable
from bindings.
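After the change, access from C stays trivial while bindings get a type
they understand (modern spellings assumed):

guint i;

for (i = 0; i < pmt->descriptors->len; i++) {
  GstMpegtsDescriptor *desc = g_ptr_array_index (pmt->descriptors, i);
  /* desc->tag, desc->length, desc->data - no descriptor_ prefix anymore */
}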
Do state changes from sink to src. This fixes a race condition in the
pull mode test where the source would start up and push buffers
to queue/identity or aiffparse before the main thread had
managed to set them to PLAYING.
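That is, bring the chain up downstream-first, roughly (the element
variables are illustrative):

/* sink first, source last, so downstream is ready before data flows */
gst_element_set_state (sink, GST_STATE_PLAYING);
gst_element_set_state (identity, GST_STATE_PLAYING);
gst_element_set_state (src, GST_STATE_PLAYING);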