The pixel buffer release callback is called only if the void *
dataPtr passed to CVPixelBufferCreateWithPlanarBytes
is not NULL.
According to the documentation, dataPtr is supposed to be a
"plane description block", but no specific type is given.
https://bugzilla.gnome.org/show_bug.cgi?id=711847
It was previously a mix and match of both variants, introducing just too much
confusion.
The prefixes are from now on:
* GstMpegts for structures and type names (and not GstMpegTs)
* gst_mpegts_ for functions (and not gst_mpeg_ts_)
* GST_MPEGTS_ for enums/flags (and not GST_MPEG_TS_)
* GST_TYPE_MPEGTS_ for types (and not GST_TYPE_MPEG_TS_)
The rationale for choosing that is:
* the namespace is shorter/direct (it's mpegts, not mpeg_ts nor mpeg-ts)
* the namespace is one word under Gst
* it's shorter (yah)
Interestingly, Coverity implies that close takes an unsigned
argument, while my close(2) man page shows it taking a signed
argument. I guess it may be platform specific.
Coverity 1214602
The new approach attempts to be more accurate by measuring
the elapsed time per iteration. Also:
* Use a 10 second default timeout and a half-second
polling step. The new values should better match the tuning
process in real-life scenarios.
* Correct elapsed_time computation.
* Add _retry_ioctl() to avoid bailing out on temporary
ioctl EINTR failures, as sketched below (no need to check for
EAGAIN because we open the frontend in blocking mode)
* Small corrections to fail condition handling
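A minimal sketch of the EINTR-retry idea (the actual _retry_ioctl() may differ in its details):

    #include <errno.h>
    #include <sys/ioctl.h>

    /* Retry the ioctl as long as it is interrupted by a signal.
     * EAGAIN is deliberately not handled: the frontend is opened in
     * blocking mode. */
    static int
    _retry_ioctl (int fd, unsigned long req, void *arg)
    {
      int ret;

      do
        ret = ioctl (fd, req, arg);
      while (ret == -1 && errno == EINTR);

      return ret;
    }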
Check if libnativehelper is loaded in the process and if
it has these awful wrappers for JNI_CreateJavaVM and
JNI_GetCreatedJavaVMs that crash the app if you don't
create a JniInvocation instance first. If no such instance
has been created, we just fail here and don't initialize anything.
See this code for reference:
https://android.googlesource.com/platform/libnativehelper/+/master/JniInvocation.cpp
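A sketch of how such a check can look (the mangled symbol name of JniInvocation's static instance pointer and the surrounding logic are assumptions, not necessarily the actual implementation):

    #include <gmodule.h>

    /* Return TRUE if it is safe to call JNI_CreateJavaVM: either the
     * libnativehelper wrappers are not present in the process, or a
     * JniInvocation instance has already been created. */
    static gboolean
    jni_create_vm_is_safe (void)
    {
      GModule *module;
      void **jni_invocation = NULL;
      gboolean ret;

      module = g_module_open (NULL, G_MODULE_BIND_LOCAL);
      if (!module)
        return FALSE;

      /* Mangled name of JniInvocation::jni_invocation_ (assumption) */
      if (!g_module_symbol (module, "_ZN13JniInvocation15jni_invocation_E",
              (gpointer *) &jni_invocation)) {
        /* No wrappers: the plain JNI functions are safe to use */
        ret = TRUE;
      } else {
        /* Wrappers present: only safe if an instance exists */
        ret = (jni_invocation != NULL && *jni_invocation != NULL);
      }

      g_module_close (module);
      return ret;
    }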
* Drop remaining sleep() logic in favor of polling (see the sketch below)
* Use best guess delivery system if none is set
* Make tuning/locking timeout configurable
* Add signals for tuning start, done and fail
* Drop gst_dvbsrc_frontend_status(). It was used only
for signal LOCK checking. This is now part of the
tuning/locking loop
* Break up frontend configuration and tuning
into separate functions
Plus:
* Add some more useful DEBUG/TRACE messages
* Move over misplaced DVB API message
* Fix wrong comment for default DVB buffer size (http://linuxtv.org/downloads/v4l-dvb-apis/dmx_fcalls.html#DMX_SET_BUFFER_SIZE)
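The core of the tuning/locking polling loop could look like this rough sketch (built on the _retry_ioctl() helper above and the frontend FE_READ_STATUS ioctl; the actual loop differs in its details):

    #include <linux/dvb/frontend.h>
    #include <glib.h>

    /* Poll the frontend until it reports a signal lock, measuring the
     * elapsed time per iteration instead of sleeping blindly. */
    static gboolean
    wait_for_lock (int fe_fd, guint timeout_ms, guint step_ms)
    {
      guint elapsed_ms = 0;

      while (elapsed_ms < timeout_ms) {
        gint64 start = g_get_monotonic_time ();
        fe_status_t status = 0;

        if (_retry_ioctl (fe_fd, FE_READ_STATUS, &status) == 0 &&
            (status & FE_HAS_LOCK))
          return TRUE;          /* locked: tuning succeeded */

        g_usleep (step_ms * 1000);
        elapsed_ms += (g_get_monotonic_time () - start) / 1000;
      }

      return FALSE;             /* timed out: tuning failed */
    }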
This patch builds on previous work done by
Fabrizio (Misto) Milo <mistobaan@gmail.com>
https://bugzilla.gnome.org/show_bug.cgi?id=641204
On the Samsung Galaxy S4 it is impossible to have more than one
hardware decoder at the same time. If we do not release it
explicitly, the GC only releases it when the whole application
has finished, not when the activity has finished, and thus a player
will not be able to work correctly.
gst_amc_color_format_copy copies a frame into or out of a
GstAmcBuffer. Much of the code in gst_amc_video_*_fill_buffer has been
moved to this new function.
Some hack logic also needs to be present in create_src|sink_caps to
work around some broken codecs. These hacks are hidden
in the color_format/video_format conversion -- the prototypes of these
functions have been changed to take more arguments for deciding on the hacks.
Also, when multiple color_formats map to one video_format, mapping
that video_format back will not give the original color_format, which
makes gst_amc_codec_configure fail with something like
'does not support color format N'.
The new prototype takes the GstAmcCodecInfo and mime, which
ensures the converted color_format is supported by the codec.
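A sketch of the idea behind the new prototype (the struct stand-ins and the helper color_format_to_video_format() are illustrative assumptions):

    #include <string.h>
    #include <gst/video/video.h>

    /* Minimal stand-ins for the codec description structs */
    typedef struct {
      gchar *mime;
      gint *color_formats;
      gint n_color_formats;
    } GstAmcCodecType;

    typedef struct {
      GstAmcCodecType *supported_types;
      gint n_supported_types;
    } GstAmcCodecInfo;

    /* Hypothetical helper applying the (hack-aware) mapping in the
     * other direction */
    static GstVideoFormat color_format_to_video_format (gint color_format);

    /* Pick a color format that the codec actually declares for this
     * mime type and that maps to the requested video format, so
     * configure() never receives an unsupported color format. */
    static gint
    video_format_to_supported_color_format (const GstAmcCodecInfo * info,
        const gchar * mime, GstVideoFormat video_format)
    {
      gint i, j;

      for (i = 0; i < info->n_supported_types; i++) {
        const GstAmcCodecType *type = &info->supported_types[i];

        if (strcmp (type->mime, mime) != 0)
          continue;

        for (j = 0; j < type->n_color_formats; j++)
          if (color_format_to_video_format (type->color_formats[j]) ==
              video_format)
            return type->color_formats[j];
      }

      return -1;                /* nothing the codec supports matches */
    }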
A COLOR_FormatYCbYCr to GST_VIDEO_FORMAT_YUY2 mapping is also added, in
order to work around bugs in OMX.k3.video.decoder.avc (which incorrectly
reports supporting COLOR_FormatYCbYCr, while it actually outputs
COLOR_FormatYUV420SemiPlanar). There are already hacks for this in
gst_amc_video_format_to_color_format, gst_amc_color_format_to_video_format
and gst_amc_color_format_info_set, but the codec would still not work
(it would be ignored because of "has unknown color formats") without this mapping.
If the application is using the new ART runtime, we would otherwise
load Dalvik and start a Dalvik VM next to the ART VM.
Obviously that does not work very well.
c400eef377 introduced some defines to handle
older kernel headers. However, the check is done before the corresponding
kernel header (dvb/frontend.h) is included. As a result the macros are
always defined, which results in 'redefined' errors with newer kernel
headers.
Move the check after the include to fix this.
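The pattern after the fix, roughly (the specific macro is an illustrative example, not necessarily one of the defines from c400eef377):

    /* Include the kernel header first ... */
    #include <linux/dvb/frontend.h>

    /* ... and only then provide fallbacks for macros that older
     * kernel headers lack; newer headers that already define them
     * now make the guard a no-op instead of a redefinition. */
    #ifndef DTV_STREAM_ID
    #define DTV_STREAM_ID 42
    #endif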
https://bugzilla.gnome.org/show_bug.cgi?id=730570
We need to sleep a bit before destroying the player object
because of a bug in Android in versions < 4.2.
OpenSLES uses AudioTrack for rendering the sound. AudioTrack
has a thread that pulls raw audio from the buffer queue and then
passes it forward to AudioFlinger (AudioTrack::processAudioBuffer()).
This thread calls various callbacks on events, e.g. when
an underrun happens or to request more data. OpenSLES sets this callback
on AudioTrack (audioTrack_callBack_pullFromBuffQueue() from
android_AudioPlayer.cpp). Among other things this is taking a lock
on the player interface.
Now if we destroy the player interface object, it will first of all
take the player interface lock (IObject_Destroy()). Then it destroys
the audio player instance (android_audioPlayer_destroy()) which then
calls stop() on the AudioTrack and deletes it. Now the destructor of
AudioTrack will wait until the rendering thread (AudioTrack::processAudioBuffer())
has finished.
With bad timing it can happen that the rendering
thread is currently e.g. handling an underrun but has not locked the player
interface object yet. Destruction then takes the lock and waits
for the thread to finish, and the thread tries to take the lock and waits
forever.
We wait a bit before destroying the player object to make sure that
the rendering thread has finished whatever it was doing, and then stops
(note: we already called gst_opensles_ringbuffer_stop() before this).
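As a sketch (the delay value and the function name are assumptions):

    #include <SLES/OpenSLES.h>
    #include <glib.h>

    /* Workaround for Android < 4.2: give AudioTrack's rendering
     * thread time to leave its event callback before Destroy() grabs
     * the player interface lock, so the two cannot deadlock. */
    static void
    destroy_player_delayed (SLObjectItf player_object)
    {
      /* gst_opensles_ringbuffer_stop() has already been called */
      g_usleep (50 * 1000);     /* 50ms is an illustrative value */
      (*player_object)->Destroy (player_object);
    }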
Handle stride alignment through the use of the video meta API. The
code is based on the corevideobuffer implementation.
If the video meta API is not supported and the underlying buffer
contains padding, the core media buffer is copied to a system memory
buffer.
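The video meta part boils down to something like this sketch (the plane offsets and strides come from the CoreMedia buffer; names are illustrative):

    #include <gst/video/video.h>

    /* Record the real plane offsets and strides on the buffer so that
     * video-meta-aware downstream elements can deal with row padding
     * without a copy. */
    static void
    attach_video_meta (GstBuffer * buf, const GstVideoInfo * info,
        gsize offset[GST_VIDEO_MAX_PLANES],
        gint stride[GST_VIDEO_MAX_PLANES])
    {
      gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
          GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
          GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
          offset, stride);
    }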
https://bugzilla.gnome.org/show_bug.cgi?id=727885
Devices suitable for decklinksrc may not have any output, hence querying
the output returns NULL. Add support for all cases where
input/output/config may be missing.
https://bugzilla.gnome.org/show_bug.cgi?id=727306
usecount is unsigned, so too many "unuse" calls will wrap the counter
around and the >= 0 check will always pass.
It would be much simpler to just make the counter signed, but
moving the checks to where the decrements happen allows a mistake
to be detected earlier, and thus makes it easier to debug.
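For illustration (the field name is an assumption):

    /* Catch a superfluous "unuse" where it happens: an unsigned
     * counter would silently wrap around and still satisfy a
     * >= 0 check later. */
    g_return_if_fail (priv->usecount > 0);
    priv->usecount--;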
Coverity 1139791
While the code that creates the object sets priv to some existing
pointer after new, this ensures that any future new not doing this
will hit the various priv!=NULL asserts in the code.
Coverity 1139935
As discussed in bug #725383, it doesn't make much sense to default
to FALSE for the "iradio-mode" property. Better to send the header
by default and just ignore headers that are not understood, if any.
https://bugzilla.gnome.org/show_bug.cgi?id=725659
Some audio decoders (at least the MP3 decoder on MTK based devices) output
raw audio in batches of multiple audio frames. We need to handle that
properly, otherwise the base class will be kind of unhappy.
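Assuming the fix is along these lines (a sketch, the actual change may differ), the trick is to tell the base class how many input frames a decoded buffer covers instead of hardcoding one:

    /* outbuf spans n_frames input audio frames, e.g. several MP3
     * frames decoded in one batch */
    flow_ret = gst_audio_decoder_finish_frame (GST_AUDIO_DECODER (self),
        outbuf, n_frames);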
It's impossible to create another pipeline with d3dvideosink after disposing
the previous one, due to a problem in d3dvideosink. The error message is: "Unable
to register Direct3D hidden window class".
I've investigated the problem: UnregisterClass() is called in the working thread
before DestroyWindow(), so UnregisterClass() does nothing.
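The fix boils down to tearing things down in the right order, roughly (the class name handling is illustrative):

    #include <windows.h>

    /* A window class can only be unregistered once no window of that
     * class exists, so destroy the hidden window first. */
    static void
    teardown_hidden_window (HWND hwnd, LPCTSTR class_name,
        HINSTANCE inst)
    {
      DestroyWindow (hwnd);
      UnregisterClass (class_name, inst);
    }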
https://bugzilla.gnome.org/show_bug.cgi?id=722622
The original size of 256k was too small for anything where
one would want to use shm. If the buffer's size needs to be limited, it is
better to use buffer-time in most cases anyway.
- add delsys property
- add delivery system capability to the GStreamer adapter structure
- ready for adding new delivery systems
The application must ask the adapter structure to know which delivery systems are available.
The delsys property must be set.
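From the application side this could look roughly like the following (only the delsys property name comes from this change; the variable names and value are illustrative):

    /* select a delivery system reported as available by the
     * adapter structure */
    g_object_set (G_OBJECT (dvbsrc), "delsys", delsys, NULL);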
https://bugzilla.gnome.org/show_bug.cgi?id=709414
Add a new color format seen on my Galaxy S3
(OMX_SEC_COLOR_FormatNV12Tiled = 0x7fc00002) to the table,
but don't actually implement it - the decoder doesn't choose it.
Remove an assert that makes the plugin fail noisily and take the app down
if it sees a color format it doesn't recognise (just skip the codec instead).
Modify the debug output when scanning plugins to print color format info,
to make this sort of thing easier in the future.
As it does not inherit from basesrc, this flag is not automatically set
and e.g. gst_bin_iterate_sources() and other code do not consider this
element a source.
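Presumably the flag in question is GST_ELEMENT_FLAG_SOURCE (an assumption based on the gst_bin_iterate_sources() reference); setting it manually would look like:

    /* basesrc would set this for us; as this element does not inherit
     * from it, set the source flag explicitly at init time */
    GST_OBJECT_FLAG_SET (self, GST_ELEMENT_FLAG_SOURCE);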
https://bugzilla.gnome.org/show_bug.cgi?id=680700
Set reorder_queue_frame_delay from the DPB size (in frames). Still not optimal,
as the DPB size is larger than the max bframe forward prediction length, but I
don't know how to compute the latter without parsing every group of pictures.
AVDeviceFormat and AVFrameRateRange have been available in iOS since 7.0,
so we need a more dynamic approach to support compilation with older
SDKs. We use an NSObject to avoid referencing those types, and key-value
coding or performSelector to access properties.
On OSX, setting the pixel format on the output resets the capture device
to its native resolution, so until a proper solution is found we need to
update the caps if the output frame size has changed.
Flushing the decoder invalidates all buffers, so it should be done
after quitting the decoding loop. Otherwise we can jump into
"failed_release" and stop everything.
https://bugzilla.gnome.org/show_bug.cgi?id=711156
When a device is lost, all the surfaces allocated in the
device need to be released before resetting the device,
which can't be done for the allocated buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=706566
It appears that the Logitech C920 sometimes drops the next
to last segment of RAW aux data contained within the MJPEG
container. H264 data that is multiplexed in the same
container does not appear to be affected. This appears to
be a bug in the Logitech C920 firmware and uvch264src should
not error out in this case.
Sometimes it can take 24 hours of continuous streaming for
the problem to occur, but sometimes it takes only a couple
of hours.
https://bugzilla.gnome.org/show_bug.cgi?id=706276
Which causes problems when used with _GNU_SOURCE apparently, and it
seems it was only set because of usleep(), which we can just replace
with g_usleep() until we get rid of those entirely.
https://bugzilla.gnome.org/show_bug.cgi?id=705208
If no client has received the command, unref the buffer. This will
make sure that the shared memory area does not get filled with buffers
no one knows about.
https://bugzilla.gnome.org/show_bug.cgi?id=702684
The decoder outputs frames in DTS order, even with the flag
kVTDecodeFrame_EnableTemporalProcessing. We keep an internal
queue of the decoded frames and push them in PTS order.
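A sketch of such a reorder queue (the drain condition and names are assumptions):

    #include <gst/video/gstvideodecoder.h>

    /* Order decoded frames by PTS and drain the head once the queue
     * is deeper than the maximum reorder depth, so frames leave in
     * presentation order even though they arrive in decode order. */
    static gint
    compare_pts (gconstpointer a, gconstpointer b, gpointer user_data)
    {
      const GstVideoCodecFrame *fa = a, *fb = b;

      if (fa->pts < fb->pts)
        return -1;
      return fa->pts > fb->pts;
    }

    static GstVideoCodecFrame *
    reorder_push (GQueue * queue, GstVideoCodecFrame * frame,
        guint reorder_depth)
    {
      g_queue_insert_sorted (queue, frame, compare_pts, NULL);

      if (g_queue_get_length (queue) > reorder_depth)
        return g_queue_pop_head (queue);  /* lowest PTS, push now */

      return NULL;              /* keep buffering */
    }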
FigVideoFormatDescriptionCreateWithSampleDescriptionExtensionAtom
is an undocumented private function whose signature might change,
as it already did in the past. Replace it with
CMVideoFormatDescriptionCreate and the also undocumented Extensions
dictionary.
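A sketch of the replacement call (the codec type is an example, and the contents of the extensions dictionary are not shown):

    #include <CoreMedia/CoreMedia.h>

    /* Build the format description through public API, attaching the
     * extensions dictionary instead of using the private Fig* call. */
    static CMVideoFormatDescriptionRef
    create_format_description (int32_t width, int32_t height,
        CFDictionaryRef extensions)
    {
      CMVideoFormatDescriptionRef desc = NULL;

      if (CMVideoFormatDescriptionCreate (kCFAllocatorDefault,
              kCMVideoCodecType_H264, width, height, extensions,
              &desc) != noErr)
        return NULL;

      return desc;
    }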
Public frameworks don't need the API to be loaded dynamically; we
use the framework directly instead.
The exception is VideoToolbox, which went public in the 10.8 SDK
but is still private in older versions of the SDK and on iOS. This allows
building the plugin against SDKs where it's not a public framework.
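For the VideoToolbox case, dynamic loading presumably looks something like this sketch (the paths are assumptions; on older systems the framework lived under PrivateFrameworks):

    #include <dlfcn.h>

    /* Resolve a VideoToolbox symbol at runtime so the plugin can be
     * built against SDKs that don't ship it as a public framework. */
    static void *
    vt_symbol (const char *name)
    {
      static void *module = NULL;

      if (!module)
        module = dlopen ("/System/Library/Frameworks/"
            "VideoToolbox.framework/VideoToolbox", RTLD_LAZY);
      if (!module)
        module = dlopen ("/System/Library/PrivateFrameworks/"
            "VideoToolbox.framework/VideoToolbox", RTLD_LAZY);

      return module ? dlsym (module, name) : NULL;
    }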