Fix copy'n'paste bug which made us allocate a slice of the
size of a rectangle for the overlay composition, but then
free it passing the size of an overlay composition, which
is not something GSlice takes too kindly to, resulting in scary
aborts like:
***MEMORY-ERROR***: GSlice: assertion failed: sinfo->n_allocated > 0
Also, g_slice_new already includes a cast, so remove our
own casts; without them the compiler would probably have
told us about this ages ago.
https://bugzilla.gnome.org/show_bug.cgi?id=680091
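For illustration, a minimal hedged sketch (struct names are made up) of the
kind of size mismatch that trips this assertion:

  #include <glib.h>

  typedef struct { gint x, y, w, h; } Rect;
  typedef struct { GPtrArray *rects; guint seq; } Composition;

  static void
  buggy_free (void)
  {
    Rect *r = g_slice_new0 (Rect);   /* allocated with sizeof (Rect) */

    /* WRONG: frees with sizeof (Composition); the explicit cast is what
     * silences the type-mismatch warning the compiler would otherwise give */
    g_slice_free (Composition, (Composition *) r);

    /* correct: g_slice_free (Rect, r); */
  }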
The decoder might have been de-activated in the meantime (resulting
in NULL pad caps).
If the decoder really isn't configured, then it will error out further
down when the GST_AUDIO_INFO_IS_VALID() check fails.
https://bugzilla.gnome.org/show_bug.cgi?id=667562
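A hedged sketch of the guard (the pad accessor and variable names are
assumptions):

  GstCaps *caps = gst_pad_get_current_caps (GST_AUDIO_DECODER_SRC_PAD (dec));

  if (caps != NULL) {
    /* only inspect/compare caps when the pad actually has some */
    gst_caps_unref (caps);
  }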
Add support for RTP buffers with multiple memory blocks. We allow one block for the
header, one for the extension data, N for data and one memory block for the
padding.
Remove the validate function, we validate now when we map because we need to
parse things in order to map multiple memory blocks.
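Roughly how this looks to a caller (a hedged usage sketch; validation now
happens implicitly inside the map call):

  #include <gst/rtp/gstrtpbuffer.h>

  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
  guint8 payload_type;

  if (!gst_rtp_buffer_map (buf, GST_MAP_READ, &rtp))
    return FALSE;                /* not a parseable RTP packet */

  payload_type = gst_rtp_buffer_get_payload_type (&rtp);
  gst_rtp_buffer_unmap (&rtp);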
Add a method to get the offset and scale values to transform the color values of
a format to their normalized [0.0 .. 1.0] range. This is usually required as
the first step of a colorspace conversion.
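The normalization this enables is the usual per-component transform (a hedged
sketch, not the exact API; the helper name is made up):

  /* offset/scale as returned per component */
  static gdouble
  normalize_component (gint value, gint offset, gint scale)
  {
    return (gdouble) (value - offset) / (gdouble) scale;   /* -> [0.0 .. 1.0] */
  }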
Add an unpack option to specify what to do with the least significant bits of
the destination when the source format has fewer bits than the destination. By
default we will now copy the most significant bits of the source into the least
significant bits of the destination so that the full color range is represented.
Add an option to leave the extra destination bits 0, which may be faster and
could be compensated for in the element algorithm.
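For example, expanding a 5-bit component to 8 bits by replicating the most
significant source bits into the low destination bits (a hedged sketch):

  static guint8
  expand_5_to_8 (guint8 v)
  {
    /* 0x00 -> 0x00, 0x1f -> 0xff: the full range is preserved */
    return (v << 3) | (v >> 2);
  }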
The x/y values are meant to be signed.
This bug was introduced by 76c0881549
Conflicts:
gst-libs/gst/video/video-blend.c
gst-libs/gst/video/video-blend.h
When the ringbuffer gets restarted (like in setcaps), we *will* have
to resync against the new values.
Without this we end up blindly assuming the new samples align to the
old ones.
... which is supposed to align with WAVEFORMATEX, but has confusing
names compared to the last 2 fields in the latter (and is still
missing 1 field compared to the latter).
If we're in continuous mode where we'll play the entire CD from
start to finish, send a TOC event downstream so any downstream
muxers can write a TOC to indicate where the various tracks
start and end.
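A hedged sketch of what that looks like (entry construction elided; the pad
accessor is an assumption):

  GstToc *toc = gst_toc_new (GST_TOC_SCOPE_GLOBAL);

  /* ... append one GstTocEntry per track with its start/stop times ... */

  gst_pad_push_event (GST_BASE_SRC_PAD (src), gst_event_new_toc (toc, FALSE));
  gst_toc_unref (toc);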
The DATE field may contain dates, partial dates, or dates with
time. Store the result in GST_TAG_DATE_TIME, so we can express
properly which fields are present or not, and can store the
time if there is one, and can serialise and deserialise the
tag without loss of information and without making up
information that's not there.
Instead of using the short YYYY-MM-DD form, we will store the
long YYYY-MM-DDTHH:MM:SS+TS date and time.
According to this documentation, we can do this:
http://wiki.xiph.org/VorbisComment#Date_and_time
This datetime format is needed by apps that require more
information, for example voice or meeting recordings.
https://bugzilla.gnome.org/show_bug.cgi?id=677712
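A hedged sketch of the lossless round trip GstDateTime gives us:

  GstDateTime *dt =
      gst_date_time_new_from_iso8601_string ("2012-06-28T14:30:00+02:00");

  if (dt != NULL) {
    gchar *str;

    if (gst_date_time_has_time (dt)) {
      /* the time (and, if given, the timezone) is present and preserved */
    }
    str = gst_date_time_to_iso8601_string (dt);   /* re-serialise without loss */
    g_free (str);
    gst_date_time_unref (dt);
  }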
Check that we have a valid output_state before attempting to use it to calculate
the duration of a buffer. It is possible that we don't have a state yet, for
example when we are dropping the first buffers.
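A hedged sketch of the guard (field names are assumptions):

  if (priv->output_state != NULL) {
    GstVideoInfo *info = &priv->output_state->info;

    if (info->fps_n > 0)
      GST_BUFFER_DURATION (buf) =
          gst_util_uint64_scale (GST_SECOND, info->fps_d, info->fps_n);
  }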
Make sure the frame deadline was set before calculating the
max_decode_time. Fixes problems with ffmpeg skipping frames when
it doesn't need to, when the input doesn't have full timestamping
(divx in avi)
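A hedged sketch of the check (field and variable names are assumptions):

  if (GST_CLOCK_TIME_IS_VALID (frame->deadline))
    max_decode_time = (gint64) frame->deadline - (gint64) running_time;
  else
    max_decode_time = G_MAXINT64;   /* no deadline known: never ask the decoder to skip */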
Interpolating the timestamps from the picture numbers
does more harm than good, getting it wrong in a lot of
cases (especially reverse playback). Removing it in favour
of simply incrementing the timestamps until there's
something better
Use g_list_free_full instead of walking lists twice when freeing
them.
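A before/after sketch of that change (the list field name is an assumption):

  /* before: walk the list, then free it */
  g_list_foreach (priv->output_queued, (GFunc) gst_buffer_unref, NULL);
  g_list_free (priv->output_queued);

  /* after: one call frees both the elements and the list */
  g_list_free_full (priv->output_queued, (GDestroyNotify) gst_buffer_unref);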
Remove pointless clause in gst_video_decoder_chain that doesn't
actually have any effect.
Other changes to make the code slightly more like the 0.11
version.
Move processing of the gather list into the flush_parse function.
Add a last-ditch attempt to apply timestamps to outgoing buffers
when walking backwards through decoded frames. Requires that each
gathered region has at least one timestamp.
Make sure to remove decoded packets from the decode list when
they are sent - otherwise the list just grows on each cycle, with
more and more frames being decoded and then clipped away.
Break out of the processing loop early on a bad flow return to make
seeking more responsive.
Use the gst_video_decoder_clip_and_push_buf function in reverse
mode, instead of pushing all buffers arbitrarily.
A couple of small efficiency gains in the list handling, by moving
list elements directly and not reallocating, and by reversing
and concatenating the gather list instead of moving it one node
at a time.
Rename the gst_video_decoder_do_finish_frame function to
gst_video_decoder_release_frame.
Rename gst_video_decoder_have_frame_2 to
gst_video_decoder_decode_frame and pass the frame to process
directly, rather than using the current_frame pointer as a holding
pen.
Move the negative rate handling out of the function to where it
is needed, and remove the process flag.
The frames are the owners of the buffers. In cases where a decoder
would keep around reference frames, we need to ensure they don't
disappear early.
To handle this, we pass downstream a complete sub-buffer of the output
buffer, ensuring that the buffer will only be released when downstream
is done with it *AND* the frame is no longer used.
Conflicts:
gst-libs/gst/video/gstvideodecoder.c
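A hedged sketch of the sub-buffer approach described above (1.x-style API;
field names are assumptions):

  GstBuffer *out = gst_buffer_copy_region (frame->output_buffer,
      GST_BUFFER_COPY_ALL, 0, gst_buffer_get_size (frame->output_buffer));

  /* downstream unreffing 'out' no longer releases the reference frame;
   * that only happens once the frame itself is also freed */
  ret = gst_pad_push (decoder->srcpad, out);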
Don't replace the initial frame's timestamp with a bogus
one calculated from the (incorrect for Ogg) frame number just
because the 'sync time' hasn't changed.
Also, don't output a bogus warning about the output_frame being
NULL when it's being dropped/skipped due to QoS.
Use a separate variable to describe the number of lines that will be used in
packing instead of abusing the h_sub variable. Some formats might have no
subsampling but need to operate on multiple lines.
RGB8_PALETTED -> RGB8P
Fix the definition of paletted formats, store the palette in the second
plane.
Make sure we copy the palette correctly in gst_video_frame_copy()
Don't do alignment on the palette in videopool
Remove Y800 and Y16 which are the same as GRAY8 and GRAY16_LE
Add const to GstVideoFormatInfo when it is used as an argument
Add GRAY8 and GRAY16 pack/unpack functions
Add support for handling chroma subsampling correctly in the pack
function.
Fill in the pack and unpack functions for most formats.
Add some missing pack/unpack functions to the orc file.
Add a flag argument to the pack and unpack function so that we can expand it
later when needed. We could for example prefer a High Quality pack/unpack
operation later.
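A hedged sketch of how such a flags argument leaves room to grow (names are
assumptions, not the final API):

  typedef enum {
    VIDEO_PACK_FLAG_NONE         = 0,
    /* a possible later addition, as mentioned above: */
    VIDEO_PACK_FLAG_HIGH_QUALITY = (1 << 0)
  } VideoPackFlags;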
Add 10 bits I420 format definitions
Move the encoded format to the second entry in the array so that it doesn't
end up in a weird place when we add formats.
See https://bugzilla.gnome.org/show_bug.cgi?id=665034
DTS type I-III specify the burst length in bits. Only type IV (which we
do not currently support) needs it to be specified in bytes. Thanks to
Julien Moutte for pointing this out.
When closing the connection, unref the currently used sockets. This should close
them when they are not in use. We need to do this because otherwise we cannot
reconnect after a close; the connect function requires that the sockets are NULL.
Clear the GError after g_socket_connect tells us that the connection is pending.
If we don't do this, glib complains when we try to reuse the non-NULL GError
variable a little below.
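A hedged sketch of the connect path (variable names are assumptions):

  if (!g_socket_connect (socket, saddr, cancellable, &err)) {
    if (g_error_matches (err, G_IO_ERROR, G_IO_ERROR_PENDING)) {
      /* expected for non-blocking sockets: clear it so 'err' can be reused */
      g_clear_error (&err);
      /* ... wait for the socket to become writable ... */
    } else {
      goto connect_failed;
    }
  }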
They're hardly used, and probably more confusing than anything
else, and it's not clear that anyone would really need to be
able to tell them apart at the media type level.
This makes sure that we wait until we have received all tags for the
subtitle streams and have all information that is collected by
the discoverer.
Fixes bug #673504.
When we need to push out all the previously received events, concatenate all the
events from the previous frames (instead of leaking the old ones)
Improve debugging a little
Conflicts:
gst-libs/gst/video/gstvideodecoder.c
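A hedged sketch of the concatenation (field names are assumptions):

  /* keep the old pending events and add the frame's events to them,
   * instead of overwriting (and leaking) the previous list */
  priv->current_frame_events =
      g_list_concat (priv->current_frame_events, frame->events);
  frame->events = NULL;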
Frames get a reference when they are added to the frames list, so release that
reference in gst_video_decoder_do_finish_frame(). Also release the ref on the frame
itself, because gst_video_decoder_do_finish_frame() takes ownership of the passed frame.
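A hedged sketch of the resulting unref pattern (field names are assumptions):

  if (g_list_find (priv->frames, frame)) {
    priv->frames = g_list_remove (priv->frames, frame);
    gst_video_codec_frame_unref (frame);   /* ref that was held by the frames list */
  }
  gst_video_codec_frame_unref (frame);     /* ref passed in by the caller */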
This allows subclasses to override it, as is necessary for e.g. the
video-crop meta. It is now necessary that after decide_allocation()
there is always an allocator and a configured buffer pool inside the
query.
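A hedged sketch of a subclass override honouring that contract (the class and
parent_class names are assumptions):

  static gboolean
  my_decide_allocation (GstVideoDecoder * dec, GstQuery * query)
  {
    GstBufferPool *pool = NULL;
    guint size = 0, min = 0, max = 0;

    /* chain up first: after this the query must carry an allocator
     * and a configured buffer pool */
    if (!GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (dec, query))
      return FALSE;

    if (gst_query_get_n_allocation_pools (query) > 0)
      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    if (pool == NULL)
      return FALSE;

    /* ... adjust the pool config here, e.g. for the video-crop meta ... */

    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
    gst_object_unref (pool);
    return TRUE;
  }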