Merge branch 'master' into 0.11

Conflicts:
	common
	configure.ac
	gst-libs/gst/audio/gstbaseaudiosink.c
	gst/playback/gstdecodebin2.c
	gst/playback/gstplaysinkaudioconvert.c
	gst/playback/gstplaysinkaudioconvert.h
	gst/playback/gstplaysinkvideoconvert.c
	gst/playback/gstplaysinkvideoconvert.h
Author: Wim Taymans
Date:   2011-11-07 12:23:15 +01:00
Commit: 7ac25e9b26

17 changed files with 1592 additions and 886 deletions

common

@@ -1 +1 @@
Subproject commit 0546e5881d1ec7137c71d35247729e28c1a4ab66
Subproject commit 762b6927ffdd1726cb4f4783f49b5cfaa9edd941

configure.ac

@@ -832,8 +832,8 @@ AC_SUBST(GST_PLUGINS_BASE_CFLAGS)
dnl FIXME: do we want to rename to GST_ALL_* ?
dnl add GST_OPTION_CFLAGS, but overridable
GST_CFLAGS="$GST_CFLAGS -DGST_USE_UNSTABLE_API"
GST_CXXFLAGS="$GLIB_CFLAGS $GST_CFLAGS $GLIB_EXTRA_CFLAGS \$(GST_OPTION_CXXFLAGS)"
GST_CFLAGS="$GLIB_CFLAGS $GST_CFLAGS $GLIB_EXTRA_CFLAGS \$(GST_OPTION_CFLAGS)"
GST_CXXFLAGS="$GLIB_CFLAGS $GST_CFLAGS \$(GLIB_EXTRA_CFLAGS) \$(GST_OPTION_CXXFLAGS)"
GST_CFLAGS="$GLIB_CFLAGS $GST_CFLAGS \$(GLIB_EXTRA_CFLAGS) \$(GST_OPTION_CFLAGS)"
AC_SUBST(GST_CFLAGS)
AC_SUBST(GST_CXXFLAGS)
dnl add GCOV libs because libtool strips -fprofile-arcs -ftest-coverage

docs/design/draft-subtitle-overlays.txt

@@ -0,0 +1,548 @@
===============================================================
Subtitle overlays, hardware-accelerated decoding and playbin2
===============================================================
Status: EARLY DRAFT / BRAINSTORMING
The following text will use "playbin" as a synonym for "playbin2".
=== 1. Background ===
Subtitles can be muxed in containers or come from an external source.
Subtitles come in many shapes and colours. Usually they are either
text-based (incl. 'pango markup'), or bitmap-based (e.g. DVD subtitles
and the most common form of DVB subs). Bitmap-based subtitles are
usually compressed in some way, like some form of run-length encoding.
Subtitles are currently decoded and rendered in subtitle-format-specific
overlay elements. These elements have two sink pads (one for raw video
and one for the subtitle format in question) and one raw video source pad.
They will take care of synchronising the two input streams, and of
decoding and rendering the subtitles on top of the raw video stream.
Digression: one could theoretically have dedicated decoder/render elements
that output an AYUV or ARGB image, and then let a videomixer element do
the actual overlaying, but this is not very efficient, because it requires
us to allocate and blend whole pictures (1920x1080 AYUV = 8MB,
1280x720 AYUV = 3.6MB, 720x576 AYUV = 1.6MB) even if the overlay region
is only a small rectangle at the bottom. This wastes memory and CPU.
We could do something better by introducing a new format that only
encodes the region(s) of interest, but we don't have such a format yet, and
are not necessarily keen to rewrite this part of the logic in playbin2
at this point - and we can't change existing elements' behaviour, so would
need to introduce new elements for this.
Playbin2 supports outputting compressed formats, i.e. it does not
force decoding to a raw format, but is happy to output to a non-raw
format as long as the sink supports that as well.
In case of certain hardware-accelerated decoding APIs, we will make use
of that functionality. However, the decoder will not output a raw video
format then, but some kind of hardware/API-specific format (in the caps)
and the buffers will reference hardware/API-specific objects that
the hardware/API-specific sink will know how to handle.
=== 2. The Problem ===
In the case of such hardware-accelerated decoding, the decoder will not
output raw pixels that can easily be manipulated. Instead, it will
output hardware/API-specific objects that can later be used to render
a frame using the same API.
Even if we could transform such a buffer into raw pixels, we most
likely would want to avoid that, in order to avoid the need to
map the data back into system memory (and then later back to the GPU).
It's much better to upload the much smaller encoded data to the GPU/DSP
and then leave it there until rendered.
Currently playbin2 only supports subtitles on top of raw decoded video.
It will try to find a suitable overlay element from the plugin registry
based on the input subtitle caps and the rank. (It is assumed that we
will be able to convert any raw video format into any format required
by the overlay using a converter such as ffmpegcolorspace.)
It will not render subtitles if the video sent to the sink is not
raw YUV or RGB or if conversions have been disabled by setting the
native-video flag on playbin2.
Subtitle rendering is considered an important feature. Enabling
hardware-accelerated decoding by default should not lead to a major
feature regression in this area.
This means that we need to support subtitle rendering on top of
non-raw video.
=== 3. Possible Solutions ===
The goal is to keep knowledge of the subtitle format within the
format-specific GStreamer plugins, and knowledge of any specific
video acceleration API within the GStreamer plugins implementing
that API. We do not want to make the pango/dvbsuboverlay/dvdspu/kate
plugins link to libva/libvdpau/etc. and we do not want to make
the vaapi/vdpau plugins link to all of libpango/libkate/libass etc.
Multiple possible solutions come to mind:
(a) backend-specific overlay elements
e.g. vaapitextoverlay, vdpautextoverlay, vaapidvdspu, vdpaudvdspu,
vaapidvbsuboverlay, vdpaudvbsuboverlay, etc.
This assumes the overlay can be done directly on the backend-specific
object passed around.
The main drawback with this solution is that it leads to a lot of
code duplication and may also lead to uncertainty about distributing
certain duplicated pieces of code. The code duplication is pretty
much unavoidable, since making textoverlay, dvbsuboverlay, dvdspu,
kate, assrender, etc. available in the form of base classes to derive
from is not really an option. Similarly, one would not really want
the vaapi/vdpau plugin to depend on a bunch of other libraries
such as libpango, libkate, libtiger, libass, etc.
One could add some new kind of overlay plugin feature though in
combination with a generic base class of some sort, but in order
to accommodate all the different cases and formats one would end
up with a quite convoluted/tricky API.
(Of course there could also be a GstFancyVideoBuffer that provides
an abstraction for such video accelerated objects and that could
provide an API to add overlays to it in a generic way, but in the
end this is just a less generic variant of (c), and it is not clear
that there are real benefits to a specialised solution vs. a more
generic one).
(b) convert backend-specific object to raw pixels and then overlay
Even where possible technically, this is most likely very
inefficient.
(c) attach the overlay data to the backend-specific video frame buffers
in a generic way and do the actual overlaying/blitting later in
backend-specific code such as the video sink (or an accelerated
encoder/transcoder)
In this case, the actual overlay rendering (i.e. the actual text
rendering or decoding DVD/DVB data into pixels) is done in the
subtitle-format-specific GStreamer plugin. All knowledge about
the subtitle format is contained in the overlay plugin then,
and all knowledge about the video backend in the video backend
specific plugin.
The main question then is how to get the overlay pixels (and
we will only deal with pixels here) from the overlay element
to the video sink.
This could be done in multiple ways: One could send custom
events downstream with the overlay data, or one could attach
the overlay data directly to the video buffers in some way.
Sending inline events has the advantage that it is fairly
transparent to any elements between the overlay element and
the video sink: if an effects plugin creates a new video
buffer for the output, nothing special needs to be done to
maintain the subtitle overlay information, since the overlay
data is not attached to the buffer. However, it slightly
complicates things at the sink, since it would also need to
look for the new event in question instead of just processing
everything in its buffer render function.
If one attaches the overlay data to the buffer directly, any
element between overlay and video sink that creates a new
video buffer would need to be aware of the overlay data
attached to it and copy it over to the newly-created buffer.
One would have to implement a special kind of new query
(e.g. FEATURE query) that is not passed on automatically by
gst_pad_query_default() in order to make sure that all elements
downstream will handle the attached overlay data. (This is only
a problem if we want to also attach overlay data to raw video
pixel buffers; for new non-raw types we can just make it
mandatory and assume support and be done with it; for existing
non-raw types nothing changes anyway if subtitles don't work)
(we need to maintain backwards compatibility for existing raw
video pipelines like e.g.: ..decoder ! suboverlay ! encoder..)
Even though it is slightly more work, attaching the overlay information
to buffers seems more intuitive than sending it interleaved as
events. And buffers stored or passed around (e.g. via the
"last-buffer" property in the sink when doing screenshots via
playbin2) always contain all the information needed.
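To illustrate the copy-over requirement, here is a minimal sketch
(using the provisional attach/get names from section 4 below;
video_buffer_get_composition() is an assumed name, the exact API
is still open) of what an element that creates new output buffers
would have to do:
    static GstFlowReturn
    gst_my_effect_transform (GstBaseTransform * trans,
        GstBuffer * inbuf, GstBuffer * outbuf)
    {
      GstVideoOverlayComposition *comp;
      /* ... produce the output video pixels in outbuf ... */
      /* carry the composition over to the new buffer, otherwise
       * the subtitle overlay information is lost downstream */
      comp = video_buffer_get_composition (inbuf);
      if (comp != NULL)
        video_buffer_attach_composition (outbuf, comp);
      return GST_FLOW_OK;
    }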
(d) create a video/x-raw-*-delta format and use a backend-specific videomixer
This possibility was hinted at already in the digression in
section 1. It would satisfy the goal of keeping subtitle format
knowledge in the subtitle plugins and video backend knowledge
in the video backend plugin. It would also add a concept that
might be generally useful (think ximagesrc capture with xdamage).
However, it would require adding foorender variants of all the
existing overlay elements, and changing playbin2 to that new
design, which is somewhat intrusive. And given the general
nature of such a new format/API, we would need to take a lot
of care to be able to accommodate all possible use cases when
designing the API, which makes it considerably more ambitious.
Lastly, we would need to write videomixer variants for the
various accelerated video backends as well.
Overall (c) appears to be the most promising solution. It is the least
intrusive and should be fairly straightforward to implement with
reasonable effort, requiring only small changes to existing elements
and requiring no new elements.
Doing the final overlaying in the sink as opposed to a videomixer
or overlay in the middle of the pipeline has other advantages:
- if video frames need to be dropped, e.g. for QoS reasons,
we could also skip the actual subtitle overlaying and
possibly the decoding/rendering as well, if the
implementation and API allows for that to be delayed.
- the sink often knows the actual size of the window/surface/screen
the output video is rendered to. This *may* make it possible to
render the overlay image in a higher resolution than the input
video, solving a long standing issue with pixelated subtitles on
top of low-resolution videos that are then scaled up in the sink.
This would of course require the rendering to be delayed, instead
of just attaching an AYUV/ARGB/RGBA blob of pixels to the video buffer
in the overlay, but that could all be supported.
- if the video backend / sink has support for high-quality text
rendering (clutter?) we could just pass the text or pango markup
to the sink and let it do the rest (this is unlikely to be
supported in the general case - text and glyph rendering is
hard; also, we don't really want to make up our own text markup
system, and pango markup is probably too limited for complex
karaoke stuff).
=== 4. API needed ===
(a) Representation of subtitle overlays to be rendered
We need to pass the overlay pixels from the overlay element to the
sink somehow. Whatever the exact mechanism, let's assume we pass
a refcounted GstVideoOverlayComposition struct or object.
A composition is made up of one or more overlays/rectangles.
In the simplest case an overlay rectangle is just a blob of
RGBA/ABGR [FIXME?] or AYUV pixels with positioning info and other
metadata, and there is only one rectangle to render.
We're keeping the naming generic ("OverlayFoo" rather than
"SubtitleFoo") here, since this might also be handy for
other use cases such as e.g. logo overlays or so. It is not
designed for full-fledged video stream mixing though.
// Note: don't mind the exact implementation details, they'll be hidden
// FIXME: might be confusing in 0.11 though since GstXOverlay was
// renamed to GstVideoOverlay in 0.11, but not much we can do,
// maybe we can rename GstVideoOverlay to something better
struct GstVideoOverlayComposition
{
guint num_rectangles;
GstVideoOverlayRectangle ** rectangles;
/* lowest rectangle sequence number still used by the upstream
* overlay element. This way a renderer maintaining some kind of
* rectangles <-> surface cache can know when to free cached
* surfaces/rectangles. */
guint min_seq_num_used;
/* sequence number for the composition (same series as rectangles) */
guint seq_num;
}
struct GstVideoOverlayRectangle
{
/* Position on video frame and dimension of output rectangle in
* output frame terms (already adjusted for the PAR of the output
* frame). x/y can be negative (overlay will be clipped then) */
gint x, y;
guint render_width, render_height;
/* Dimensions of overlay pixels */
guint width, height, stride;
/* This is the PAR of the overlay pixels */
guint par_n, par_d;
/* Format of pixels, GST_VIDEO_FORMAT_ARGB on big-endian systems,
* and BGRA on little-endian systems (i.e. pixels are treated as
* 32-bit values and alpha is always in the most-significant byte,
* and blue is in the least-significant byte).
*
* FIXME: does anyone actually use AYUV in practice? (we do
* in our utility function to blend on top of raw video)
* What about AYUV and endianness? Do we always have [A][Y][U][V]
* in memory? */
/* FIXME: maybe use our own enum? */
GstVideoFormat format;
/* Refcounted blob of memory, no caps or timestamps */
GstBuffer *pixels;
// FIXME: how to express source like text or pango markup?
// (just add source type enum + source buffer with data)
//
// FOR 0.10: always send pixel blobs, but attach source data in
// addition (reason: if downstream changes, we can't renegotiate
// that properly, if we just do a query of supported formats from
// the start). Sink will just ignore pixels and use pango markup
// from source data if it supports that.
//
// FOR 0.11: overlay should query formats (pango markup, pixels)
// supported by downstream and then only send that. We can
// renegotiate via the reconfigure event.
//
/* sequence number: useful for backends/renderers/sinks that want
* to maintain a cache of rectangles <-> surfaces. The value of
* the min_seq_num_used in the composition tells the renderer which
* rectangles have expired. */
guint seq_num;
/* FIXME: we also need a (private) way to cache converted/scaled
* pixel blobs */
}
(a1) Overlay consumer API:
How would this work in a video sink that supports scaling of textures:
gst_foo_sink_render () {
/* assume only one for now */
if video_buffer has composition:
composition = video_buffer.get_composition()
for each rectangle in composition:
if rectangle.source_data_type == PANGO_MARKUP
actor = text_from_pango_markup (rectangle.get_source_data())
else
pixels = rectangle.get_pixels_unscaled (FORMAT_RGBA, ...)
actor = texture_from_rgba (pixels, ...)
.. position + scale on top of video surface ...
}
(a2) Overlay producer API:
e.g. logo or subpicture overlay: got pixels, stuff into rectangle:
if (logoverlay->cached_composition == NULL) {
comp = composition_new ();
rect = rectangle_new (format, pixels_buf,
width, height, stride, par_n, par_d,
x, y, render_width, render_height);
/* composition adds its own ref for the rectangle */
composition_add_rectangle (comp, rect);
rectangle_unref (rect);
/* buffer adds its own ref for the composition */
video_buffer_attach_composition (video_buf, comp);
/* we take ownership of the composition and save it for later */
logoverlay->cached_composition = comp;
} else {
video_buffer_attach_composition (video_buf, logoverlay->cached_composition);
}
FIXME: also add some API to modify render position/dimensions of
a rectangle (probably requires creation of new rectangle, unless
we handle writability like with other mini objects).
(b) Fallback overlay rendering/blitting on top of raw video
Eventually we want to use this overlay mechanism not only for
hardware-accelerated video, but also for plain old raw video,
either at the sink or in the overlay element directly.
Apart from the advantages listed earlier in section 3, this
allows us to consolidate a lot of overlaying/blitting code that
is currently repeated in every single overlay element in one
location. This makes it considerably easier to support a whole
range of raw video formats out of the box, add SIMD-optimised
rendering using ORC, or handle corner cases correctly.
(Note: side-effect of overlaying raw video at the video sink is
that if e.g. a screenshotter gets the last buffer via the last-buffer
property of basesink, it would get an image without the subtitles
on top. This could probably be fixed by re-implementing the
property in GstVideoSink though. Playbin2 could handle this
internally as well).
void
gst_video_overlay_composition_blend (GstVideoOverlayComposition * comp,
GstBuffer * video_buf)
{
guint n;
g_return_if_fail (gst_buffer_is_writable (video_buf));
g_return_if_fail (GST_BUFFER_CAPS (video_buf) != NULL);
... parse video_buffer caps into BlendVideoFormatInfo ...
for each rectangle in the composition: {
if (gst_video_format_is_yuv (video_buf_format)) {
overlay_format = FORMAT_AYUV;
} else if (gst_video_format_is_rgb (video_buf_format)) {
overlay_format = FORMAT_ARGB;
} else {
/* FIXME: grayscale? */
return;
}
/* this will scale and convert AYUV<->ARGB if needed */
pixels = rectangle_get_pixels_scaled (rectangle, overlay_format);
... clip output rectangle ...
__do_blend (video_buf_format, video_buf->data,
overlay_format, pixels->data,
x, y, width, height, stride);
gst_buffer_unref (pixels);
}
}
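An overlay element could then use this utility as a fallback when
downstream does not support attached compositions (sketch):
    if (!downstream_supports_compositions) {
      video_buf = gst_buffer_make_writable (video_buf);
      gst_video_overlay_composition_blend (composition, video_buf);
    }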
(c) Flatten all rectangles in a composition
We cannot assume that the video backend API can handle any
number of rectangle overlays, it's possible that it only
supports one single overlay, in which case we need to squash
all rectangles into one.
However, we'll just declare this a corner case for now, and
implement it only if someone actually needs it. It's easy
to add later API-wise. Might be a bit tricky if we have
rectangles with different PARs/formats (e.g. subs and a logo),
though we could probably always just use the code from (b)
with a fully transparent video buffer to create a flattened
overlay buffer.
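A rough sketch of that approach, reusing the utility from (b)
(create_transparent_video_buffer() is a placeholder, not proposed
API):
    /* blend all rectangles into one fully transparent ARGB frame,
     * then wrap the result as a single rectangle */
    buf = create_transparent_video_buffer (width, height, FORMAT_ARGB);
    gst_video_overlay_composition_blend (comp, buf);
    flat = rectangle_new (FORMAT_ARGB, buf, width, height, width * 4,
        1, 1, 0, 0, width, height);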
(d) core API: new FEATURE query
For 0.10 we need to add a FEATURE query, so the overlay element
can query whether the sink downstream and all elements between
the overlay element and the sink support the new overlay API.
Elements in between need to support it because the render
positions and dimensions need to be updated if the video is
cropped or rescaled, for example.
In order to ensure that all elements support the new API,
we need to drop the query in the pad default query handler
(so it only succeeds if all elements handle it explicitly).
Might want two variants of the feature query - one where
all elements in the chain need to support it explicitly
and one where it's enough if some element downstream
supports it.
In 0.11 this could probably be handled via GstMeta and
ALLOCATION queries (and/or we could simply require
elements to be aware of this API from the start).
There appears to be no issue with downstream possibly
not being linked yet at the time when an overlay would
want to do such a query.
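As a rough sketch (gst_query_new_feature() and the feature string
are assumed names for the proposed query; none of this exists yet),
the overlay element could decide whether to delegate rendering
like this:
    static gboolean
    gst_my_overlay_can_delegate (GstMyOverlay * overlay)
    {
      GstQuery *query;
      gboolean res;
      query = gst_query_new_feature ("video-overlay-composition");
      /* only succeeds if the sink and all elements in between
       * handle the query explicitly */
      res = gst_pad_peer_query (overlay->srcpad, query);
      gst_query_unref (query);
      return res;
    }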
Other considerations:
- renderers (overlays or sinks) may be able to handle only ARGB or only AYUV
(for most graphics/hw-API it's likely ARGB of some sort, while our
blending utility functions will likely want the same colour space as
the underlying raw video format, which is usually YUV of some sort).
We need to convert where required, and should cache the conversion.
- renderers may or may not be able to scale the overlay. We need to
do the scaling internally if not (simple case: just horizontal scaling
to adjust for PAR differences; complex case: both horizontal and vertical
scaling, e.g. if subs come from a different source than the video or the
video has been rescaled or cropped between overlay element and sink).
- renderers may be able to generate (possibly scaled) pixels on demand
from the original data (e.g. a string or RLE-encoded data). We will
ignore this for now, since this functionality can still be added later
via API additions. The most interesting case would be to pass a pango
markup string, since e.g. clutter can handle that natively.
- renderers may be able to write data directly on top of the video pixels
(instead of creating an intermediary buffer with the overlay which is
then blended on top of the actual video frame), e.g. dvdspu, dvbsuboverlay
However, in the interest of simplicity, we should probably ignore the
fact that some elements can blend their overlays directly on top of the
video (decoding/uncompressing them on the fly), even more so as it's
not obvious that it's actually faster to decode the same overlay
70-90 times (say) (i.e. ca. 3 seconds of video frames) and then blend
it 70-90 times instead of decoding it once into a temporary buffer
and then blending it directly from there, possibly SIMD-accelerated.
Also, this is only relevant if the video is raw video and not some
hardware-acceleration backend object.
And ultimately it is the overlay element that decides whether to do
the overlay right there and then or have the sink do it (if supported).
It could decide to keep doing the overlay itself for raw video and
only use our new API for non-raw video.
- renderers may want to make sure they only upload the overlay pixels once
per rectangle if that rectangle recurs in subsequent frames (as part of
the same composition or a different composition), as is likely. This caching
of e.g. surfaces needs to be done renderer-side and can be accomplished
based on the sequence numbers (see the sketch after this list). The
composition contains the lowest
sequence number still in use upstream (an overlay element may want to
cache created compositions+rectangles as well after all to re-use them
for multiple frames), based on that the renderer can expire cached
objects. The caching needs to be done renderer-side because attaching
renderer-specific objects to the rectangles won't work well given the
refcounted nature of rectangles and compositions, making it unpredictable
when a rectangle or composition will be freed or from which thread
context it will be freed. The renderer-specific objects are likely bound
to other types of renderer-specific contexts, and need to be managed
in connection with those.
- composition/rectangles should internally provide a certain degree of
thread-safety. Multiple elements (sinks, overlay element) might access
or use the same objects from multiple threads at the same time, and it
is expected that elements will keep a ref to compositions and rectangles
they push downstream for a while, e.g. until the current subtitle
composition expires.
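A minimal sketch of the renderer-side cache expiry mentioned in
the caching consideration above (the surface type and free function
are backend-specific placeholders):
    typedef struct {
      guint seq_num;
      gpointer surface;        /* backend-specific surface handle */
    } CachedRectangle;
    /* entries are queued in ascending seq_num order; evict all
     * surfaces whose rectangles are no longer used upstream */
    static void
    expire_cached_rectangles (GQueue * cache, guint min_seq_num_used)
    {
      CachedRectangle *e;
      while ((e = g_queue_peek_head (cache)) != NULL &&
          e->seq_num < min_seq_num_used) {
        g_queue_pop_head (cache);
        backend_free_surface (e->surface);  /* placeholder */
        g_slice_free (CachedRectangle, e);
      }
    }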
=== 5. Future considerations ===
- alternatives: there may be multiple versions/variants of the same subtitle
stream. On DVDs, there may be a 4:3 version and a 16:9 version of the same
subtitles. We could attach both variants and let the renderer pick the best
one for the situation (currently we just use the 16:9 version). With totem,
it's ultimately totem that adds the 'black bars' at the top/bottom, so totem
also knows if it's got a 4:3 display and can/wants to fit 4:3 subs (which
may render on top of the bars) or not, for example.
=== 6. Misc. FIXMEs ===
TEST: should these look (roughly) alike (note text distortion) - needs fixing in textoverlay
gst-launch-0.10 \
videotestsrc ! video/x-raw-yuv,width=640,height=480,pixel-aspect-ratio=1/1 ! textoverlay text=Hello font-desc=72 ! xvimagesink \
videotestsrc ! video/x-raw-yuv,width=320,height=480,pixel-aspect-ratio=2/1 ! textoverlay text=Hello font-desc=72 ! xvimagesink \
videotestsrc ! video/x-raw-yuv,width=640,height=240,pixel-aspect-ratio=1/2 ! textoverlay text=Hello font-desc=72 ! xvimagesink
~~~ THE END ~~~


@@ -40,7 +40,6 @@
#endif
#include <gst/gst.h>
#include <gst/base/gstcollectpads.h>
#include <gst/base/gstbytewriter.h>
#include <gst/tag/tag.h>

gst-libs/gst/audio/gstbaseaudiosink.c

@@ -57,9 +57,19 @@ struct _GstBaseAudioSinkPrivate
GstClockTime eos_time;
/* number of microseconds we alow timestamps or clock slaving to drift
/* number of microseconds we allow clock slaving to drift
* before resyncing */
guint64 drift_tolerance;
/* number of nanoseconds we allow timestamps to drift
* before resyncing */
GstClockTime alignment_threshold;
/* time of the previous detected discont candidate */
GstClockTime discont_time;
/* number of nanoseconds to wait until creating a discontinuity */
GstClockTime discont_wait;
};
/* BaseAudioSink signals and args */
@@ -78,10 +88,18 @@ enum
/* FIXME, enable pull mode when clock slaving and trick modes are figured out */
#define DEFAULT_CAN_ACTIVATE_PULL FALSE
/* when timestamps or clock slaving drift for more than 40ms we resync. This is
/* when timestamps drift for more than 40ms we resync. This should
* be enough to compensate for timestamp rounding errors. */
#define DEFAULT_ALIGNMENT_THRESHOLD (40 * GST_MSECOND)
/* when clock slaving drift for more than 40ms we resync. This is
* a reasonable default */
#define DEFAULT_DRIFT_TOLERANCE ((40 * GST_MSECOND) / GST_USECOND)
/* allow for one second before resyncing to see if the timestamps drift will
* fix itself, or is a permanent offset */
#define DEFAULT_DISCONT_WAIT (1 * GST_SECOND)
enum
{
PROP_0,
@@ -91,7 +109,9 @@ enum
PROP_PROVIDE_CLOCK,
PROP_SLAVE_METHOD,
PROP_CAN_ACTIVATE_PULL,
PROP_ALIGNMENT_THRESHOLD,
PROP_DRIFT_TOLERANCE,
PROP_DISCONT_WAIT,
PROP_LAST
};
@@ -213,16 +233,42 @@ gst_base_audio_sink_class_init (GstBaseAudioSinkClass * klass)
/**
* GstBaseAudioSink:drift-tolerance
*
* Controls the amount of time in milliseconds that timestamps or clocks are allowed
* Controls the amount of time in microseconds that clocks are allowed
* to drift before resynchronisation happens.
*
* Since: 0.10.26
*/
g_object_class_install_property (gobject_class, PROP_DRIFT_TOLERANCE,
g_param_spec_int64 ("drift-tolerance", "Drift Tolerance",
"Tolerance for timestamp and clock drift in microseconds", 1,
"Tolerance for clock drift in microseconds", 1,
G_MAXINT64, DEFAULT_DRIFT_TOLERANCE,
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
/**
* GstBaseAudioSink:alignment-threshold
*
* Controls the amount of time in nanoseconds that timestamps are allowed
* to drift from their ideal time before choosing not to align them.
*
* Since: 0.10.31
*/
g_object_class_install_property (gobject_class, PROP_ALIGNMENT_THRESHOLD,
g_param_spec_int64 ("alignment-threshold", "Alignment Threshold",
"Timestamp alignment threshold in nanoseconds", 1,
G_MAXINT64, DEFAULT_ALIGNMENT_THRESHOLD,
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
/**
* GstBaseAudioSink:discont-wait
*
* A window of time in nanoseconds to wait before creating a discontinuity as
* a result of breaching the drift-tolerance.
*
* Since: 0.10.31
*/
g_object_class_install_property (gobject_class, PROP_DISCONT_WAIT,
g_param_spec_int64 ("discont-wait", "Discont Wait",
"Window of time in nanoseconds to wait before "
"creating a discontinuity", 0,
G_MAXINT64, DEFAULT_DISCONT_WAIT,
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
gstelement_class->change_state =
GST_DEBUG_FUNCPTR (gst_base_audio_sink_change_state);
@@ -264,6 +310,8 @@ gst_base_audio_sink_init (GstBaseAudioSink * baseaudiosink)
baseaudiosink->provide_clock = DEFAULT_PROVIDE_CLOCK;
baseaudiosink->priv->slave_method = DEFAULT_SLAVE_METHOD;
baseaudiosink->priv->drift_tolerance = DEFAULT_DRIFT_TOLERANCE;
baseaudiosink->priv->alignment_threshold = DEFAULT_ALIGNMENT_THRESHOLD;
baseaudiosink->priv->discont_wait = DEFAULT_DISCONT_WAIT;
baseaudiosink->provided_clock = gst_audio_clock_new ("GstAudioSinkClock",
(GstAudioClockGetTimeFunc) gst_base_audio_sink_get_time, baseaudiosink);
@@ -632,6 +680,94 @@ gst_base_audio_sink_get_drift_tolerance (GstBaseAudioSink * sink)
return result;
}
/**
* gst_base_audio_sink_set_alignment_threshold:
* @sink: a #GstBaseAudioSink
* @alignment_threshold: the new alignment threshold in nanoseconds
*
* Controls the sink's alignment threshold.
*
* Since: 0.10.31
*/
void
gst_base_audio_sink_set_alignment_threshold (GstBaseAudioSink * sink,
GstClockTime alignment_threshold)
{
g_return_if_fail (GST_IS_BASE_AUDIO_SINK (sink));
GST_OBJECT_LOCK (sink);
sink->priv->alignment_threshold = alignment_threshold;
GST_OBJECT_UNLOCK (sink);
}
/**
* gst_base_audio_sink_get_alignment_threshold:
* @sink: a #GstBaseAudioSink
*
* Get the current alignment threshold, in nanoseconds, used by @sink.
*
* Returns: The current alignment threshold used by @sink.
*
* Since: 0.10.31
*/
GstClockTime
gst_base_audio_sink_get_alignment_threshold (GstBaseAudioSink * sink)
{
GstClockTime result;
g_return_val_if_fail (GST_IS_BASE_AUDIO_SINK (sink), -1);
GST_OBJECT_LOCK (sink);
result = sink->priv->alignment_threshold;
GST_OBJECT_UNLOCK (sink);
return result;
}
/**
* gst_base_audio_sink_set_discont_wait:
* @sink: a #GstBaseAudioSink
* @discont_wait: the new discont wait in nanoseconds
*
* Controls how long the sink will wait before creating a discontinuity.
*
* Since: 0.10.31
*/
void
gst_base_audio_sink_set_discont_wait (GstBaseAudioSink * sink,
GstClockTime discont_wait)
{
g_return_if_fail (GST_IS_BASE_AUDIO_SINK (sink));
GST_OBJECT_LOCK (sink);
sink->priv->discont_wait = discont_wait;
GST_OBJECT_UNLOCK (sink);
}
/**
* gst_base_audio_sink_get_discont_wait:
* @sink: a #GstBaseAudioSink
*
* Get the current discont wait, in nanoseconds, used by @sink.
*
* Returns: The current discont wait used by @sink.
*
* Since: 0.10.31
*/
GstClockTime
gst_base_audio_sink_get_discont_wait (GstBaseAudioSink * sink)
{
GstClockTime result;
g_return_val_if_fail (GST_IS_BASE_AUDIO_SINK (sink), -1);
GST_OBJECT_LOCK (sink);
result = sink->priv->discont_wait;
GST_OBJECT_UNLOCK (sink);
return result;
}
static void
gst_base_audio_sink_set_property (GObject * object, guint prop_id,
const GValue * value, GParamSpec * pspec)
@@ -659,6 +795,13 @@ gst_base_audio_sink_set_property (GObject * object, guint prop_id,
case PROP_DRIFT_TOLERANCE:
gst_base_audio_sink_set_drift_tolerance (sink, g_value_get_int64 (value));
break;
case PROP_ALIGNMENT_THRESHOLD:
gst_base_audio_sink_set_alignment_threshold (sink,
g_value_get_uint64 (value));
break;
case PROP_DISCONT_WAIT:
gst_base_audio_sink_set_discont_wait (sink, g_value_get_uint64 (value));
break;
default:
G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
break;
@@ -692,6 +835,13 @@ gst_base_audio_sink_get_property (GObject * object, guint prop_id,
case PROP_DRIFT_TOLERANCE:
g_value_set_int64 (value, gst_base_audio_sink_get_drift_tolerance (sink));
break;
case PROP_ALIGNMENT_THRESHOLD:
g_value_set_uint64 (value,
gst_base_audio_sink_get_alignment_threshold (sink));
break;
case PROP_DISCONT_WAIT:
g_value_set_uint64 (value, gst_base_audio_sink_get_discont_wait (sink));
break;
default:
G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
break;
@@ -864,6 +1014,7 @@ gst_base_audio_sink_event (GstBaseSink * bsink, GstEvent * event)
sink->priv->avg_skew = -1;
sink->next_sample = -1;
sink->priv->eos_time = -1;
sink->priv->discont_time = -1;
if (sink->ringbuffer)
gst_ring_buffer_set_flushing (sink->ringbuffer, FALSE);
break;
@@ -1274,6 +1425,7 @@ gst_base_audio_sink_sync_latency (GstBaseSink * bsink, GstMiniObject * obj)
sink->priv->avg_skew = -1;
sink->next_sample = -1;
sink->priv->eos_time = -1;
sink->priv->discont_time = -1;
return GST_FLOW_OK;
@@ -1302,43 +1454,65 @@ gst_base_audio_sink_get_alignment (GstBaseAudioSink * sink,
{
GstRingBuffer *ringbuf = sink->ringbuffer;
gint64 align;
gint64 diff;
gint64 maxdrift;
gint64 sample_diff;
gint64 max_sample_diff;
gint segdone = g_atomic_int_get (&ringbuf->segdone) - ringbuf->segbase;
gint64 samples_done = segdone * ringbuf->samples_per_seg;
gint64 headroom = sample_offset - samples_done;
gboolean allow_align = TRUE;
gboolean discont = FALSE;
gint rate;
/* now try to align the sample to the previous one, first see how big the
* difference is. */
if (sample_offset >= sink->next_sample)
diff = sample_offset - sink->next_sample;
sample_diff = sample_offset - sink->next_sample;
else
diff = sink->next_sample - sample_offset;
sample_diff = sink->next_sample - sample_offset;
rate = GST_AUDIO_INFO_RATE (&ringbuf->spec.info);
/* calculate the max allowed drift in units of samples. By default this is
* 20ms and should be anough to compensate for timestamp rounding errors. */
maxdrift = (rate * sink->priv->drift_tolerance) / GST_MSECOND;
/* calculate the max allowed drift in units of samples. */
max_sample_diff = gst_util_uint64_scale_int (sink->priv->alignment_threshold,
rate, GST_SECOND);
/* calc align with previous sample */
align = sink->next_sample - sample_offset;
/* don't align if it means writing behind the read-segment */
if (diff > headroom && align < 0)
if (sample_diff > headroom && align < 0)
allow_align = FALSE;
if (G_LIKELY (diff < maxdrift && allow_align)) {
if (G_UNLIKELY (sample_diff >= max_sample_diff)) {
/* wait before deciding to make a discontinuity */
if (sink->priv->discont_wait > 0) {
GstClockTime time = gst_util_uint64_scale_int (sample_offset,
GST_SECOND, rate);
if (sink->priv->discont_time == -1) {
/* discont candidate */
sink->priv->discont_time = time;
} else if (time - sink->priv->discont_time >= sink->priv->discont_wait) {
/* discont_wait expired, discontinuity detected */
discont = TRUE;
sink->priv->discont_time = -1;
}
} else {
discont = TRUE;
}
} else if (G_UNLIKELY (sink->priv->discont_time != -1)) {
/* we have had a discont, but are now back on track! */
sink->priv->discont_time = -1;
}
if (G_LIKELY (!discont && allow_align)) {
GST_DEBUG_OBJECT (sink,
"align with prev sample, ABS (%" G_GINT64_FORMAT ") < %"
G_GINT64_FORMAT, align, maxdrift);
G_GINT64_FORMAT, align, max_sample_diff);
} else {
gint64 diff_s G_GNUC_UNUSED;
/* calculate sample diff in seconds for error message */
diff_s = gst_util_uint64_scale_int (diff, GST_SECOND, rate);
diff_s = gst_util_uint64_scale_int (sample_diff, GST_SECOND, rate);
/* timestamps drifted apart from previous samples too much, we need to
* resync. We log this as an element warning. */
@@ -1941,6 +2115,7 @@ gst_base_audio_sink_change_state (GstElement * element,
sink->next_sample = -1;
sink->priv->last_align = -1;
sink->priv->eos_time = -1;
sink->priv->discont_time = -1;
gst_ring_buffer_set_flushing (sink->ringbuffer, FALSE);
gst_ring_buffer_may_start (sink->ringbuffer, FALSE);
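A short usage sketch for the two new properties (application side;
"sink" is assumed to be an instance of a GstBaseAudioSink subclass):
    /* allow 40ms of timestamp drift, and give drifting timestamps one
     * second to recover before forcing a resync/discontinuity */
    g_object_set (sink,
        "alignment-threshold", (guint64) (40 * GST_MSECOND),
        "discont-wait", (guint64) GST_SECOND,
        NULL);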

gst-libs/gst/audio/gstbaseaudiosink.h

@@ -175,6 +175,16 @@ void gst_base_audio_sink_set_drift_tolerance (GstBaseAudioSink *sink,
gint64 drift_tolerance);
gint64 gst_base_audio_sink_get_drift_tolerance (GstBaseAudioSink *sink);
void gst_base_audio_sink_set_alignment_threshold (GstBaseAudioSink * sink,
GstClockTime alignment_threshold);
GstClockTime
gst_base_audio_sink_get_alignment_threshold (GstBaseAudioSink * sink);
void gst_base_audio_sink_set_discont_wait (GstBaseAudioSink * sink,
GstClockTime discont_wait);
GstClockTime
gst_base_audio_sink_get_discont_wait (GstBaseAudioSink * sink);
G_END_DECLS
#endif /* __GST_BASE_AUDIO_SINK_H__ */

gst-libs/gst/tag/Makefile.am

@@ -110,7 +110,7 @@ Android.mk: Makefile.am
-:TAGS eng debug \
-:REL_TOP $(top_srcdir) -:ABS_TOP $(abs_top_srcdir) \
-:SOURCES $(libgsttag_@GST_MAJORMINOR@_la_SOURCES) \
-:CFLAGS $(DEFS) $(libgsttag_@GST_MAJORMINOR@_la_CFLAGS) \
-:CFLAGS $(DEFS) $(DEFAULT_INCLUDES) $(libgsttag_@GST_MAJORMINOR@_la_CFLAGS) \
-:LDFLAGS $(libgsttag_@GST_MAJORMINOR@_la_LDFLAGS) \
$(libgsttag_@GST_MAJORMINOR@_la_LIBADD) \
-ldl \

gst/playback/Makefile.am

@@ -21,6 +21,7 @@ libgstplayback_la_SOURCES = \
gstsubtitleoverlay.c \
gstplaysinkvideoconvert.c \
gstplaysinkaudioconvert.c \
gstplaysinkconvertbin.c \
gststreamsynchronizer.c
nodist_libgstplayback_la_SOURCES = $(built_sources)
@@ -42,6 +43,7 @@ noinst_HEADERS = \
gstsubtitleoverlay.h \
gstplaysinkvideoconvert.h \
gstplaysinkaudioconvert.h \
gstplaysinkconvertbin.h \
gststreamsynchronizer.h
BUILT_SOURCES = $(built_headers) $(built_sources)

gst/playback/gstdecodebin2.c

@@ -3350,7 +3350,8 @@ sort_end_pads (GstDecodePad * da, GstDecodePad * db)
}
static GstCaps *
_gst_element_get_linked_caps (GstElement * src, GstElement * sink)
_gst_element_get_linked_caps (GstElement * src, GstElement * sink,
GstPad ** srcpad)
{
GstIterator *it;
GstElement *parent;
@@ -3369,6 +3370,10 @@ _gst_element_get_linked_caps (GstElement * src, GstElement * sink)
parent = gst_pad_get_parent_element (peer);
if (parent == sink) {
caps = gst_pad_get_current_caps (pad);
if (srcpad) {
gst_object_ref (pad);
*srcpad = pad;
}
done = TRUE;
}
@@ -3397,6 +3402,7 @@ static GQuark topology_structure_name = 0;
static GQuark topology_caps = 0;
static GQuark topology_next = 0;
static GQuark topology_pad = 0;
static GQuark topology_element_srcpad = 0;
/* FIXME: Invent gst_structure_take_structure() to prevent all the
* structure copying for nothing
@@ -3422,8 +3428,11 @@ gst_decode_chain_get_topology (GstDecodeChain * chain)
gst_structure_id_set (u, topology_caps, GST_TYPE_CAPS, chain->endcaps,
NULL);
if (chain->endpad)
if (chain->endpad) {
gst_structure_id_set (u, topology_pad, GST_TYPE_PAD, chain->endpad, NULL);
gst_structure_id_set (u, topology_element_srcpad, GST_TYPE_PAD,
chain->endpad, NULL);
}
gst_structure_id_set (s, topology_next, GST_TYPE_STRUCTURE, u, NULL);
gst_structure_free (u);
u = s;
@@ -3453,13 +3462,15 @@ gst_decode_chain_get_topology (GstDecodeChain * chain)
GstDecodeElement *delem, *delem_next;
GstElement *elem, *elem_next;
GstCaps *caps;
GstPad *srcpad;
delem = l->data;
elem = delem->element;
delem_next = l->next->data;
elem_next = delem_next->element;
srcpad = NULL;
caps = _gst_element_get_linked_caps (elem_next, elem);
caps = _gst_element_get_linked_caps (elem_next, elem, &srcpad);
if (caps) {
s = gst_structure_new_id_empty (topology_structure_name);
@@ -3470,6 +3481,12 @@ gst_decode_chain_get_topology (GstDecodeChain * chain)
gst_structure_free (u);
u = s;
}
if (srcpad) {
gst_structure_id_set (u, topology_element_srcpad, GST_TYPE_PAD, srcpad,
NULL);
gst_object_unref (srcpad);
}
}
/* Caps that resulted in this chain */
@@ -3483,7 +3500,9 @@ gst_decode_chain_get_topology (GstDecodeChain * chain)
caps = NULL;
}
}
gst_structure_set (u, "caps", GST_TYPE_CAPS, caps, NULL);
gst_structure_id_set (u, topology_caps, GST_TYPE_CAPS, caps, NULL);
gst_structure_id_set (u, topology_element_srcpad, GST_TYPE_PAD, chain->pad,
NULL);
gst_caps_unref (caps);
return u;
@@ -4065,6 +4084,7 @@ gst_decode_bin_plugin_init (GstPlugin * plugin)
topology_caps = g_quark_from_static_string ("caps");
topology_next = g_quark_from_static_string ("next");
topology_pad = g_quark_from_static_string ("pad");
topology_element_srcpad = g_quark_from_static_string ("element-srcpad");
return gst_element_register (plugin, "decodebin", GST_RANK_NONE,
GST_TYPE_DECODE_BIN);

gst/playback/gstplaysink.c

@@ -1798,9 +1798,15 @@ gen_audio_chain (GstPlaySink * playsink, gboolean raw)
if (!(playsink->flags & GST_PLAY_FLAG_NATIVE_AUDIO) || (!have_volume
&& playsink->flags & GST_PLAY_FLAG_SOFT_VOLUME)) {
GST_DEBUG_OBJECT (playsink, "creating audioconvert");
gboolean use_converters = !(playsink->flags & GST_PLAY_FLAG_NATIVE_AUDIO);
gboolean use_volume =
!have_volume && playsink->flags & GST_PLAY_FLAG_SOFT_VOLUME;
GST_DEBUG_OBJECT (playsink,
"creating audioconvert with use-converters %d, use-volume %d",
use_converters, use_volume);
chain->conv =
g_object_new (GST_TYPE_PLAY_SINK_AUDIO_CONVERT, "name", "aconv", NULL);
g_object_new (GST_TYPE_PLAY_SINK_AUDIO_CONVERT, "name", "aconv",
"use-converters", use_converters, "use-volume", use_volume, NULL);
gst_bin_add (bin, chain->conv);
if (prev) {
if (!gst_element_link_pads_full (prev, "src", chain->conv, "sink",
@@ -1811,11 +1817,6 @@ gen_audio_chain (GstPlaySink * playsink, gboolean raw)
}
prev = chain->conv;
GST_PLAY_SINK_AUDIO_CONVERT_CAST (chain->conv)->use_converters =
!(playsink->flags & GST_PLAY_FLAG_NATIVE_AUDIO);
GST_PLAY_SINK_AUDIO_CONVERT_CAST (chain->conv)->use_volume = (!have_volume
&& playsink->flags & GST_PLAY_FLAG_SOFT_VOLUME);
if (!have_volume && playsink->flags & GST_PLAY_FLAG_SOFT_VOLUME) {
GstPlaySinkAudioConvert *conv =
GST_PLAY_SINK_AUDIO_CONVERT_CAST (chain->conv);
@@ -1963,13 +1964,13 @@ setup_audio_chain (GstPlaySink * playsink, gboolean raw)
G_CALLBACK (notify_mute_cb), playsink);
}
GST_PLAY_SINK_AUDIO_CONVERT_CAST (chain->conv)->use_volume = FALSE;
g_object_set (chain->conv, "use-volume", FALSE, NULL);
} else {
GstPlaySinkAudioConvert *conv =
GST_PLAY_SINK_AUDIO_CONVERT_CAST (chain->conv);
/* no volume, we need to add a volume element when we can */
conv->use_volume = TRUE;
g_object_set (chain->conv, "use-volume", TRUE, NULL);
GST_DEBUG_OBJECT (playsink, "the sink has no volume property");
/* Disconnect signals */

gst/playback/gstplaysinkaudioconvert.c

@@ -32,332 +32,63 @@ GST_DEBUG_CATEGORY_STATIC (gst_play_sink_audio_convert_debug);
#define parent_class gst_play_sink_audio_convert_parent_class
G_DEFINE_TYPE (GstPlaySinkAudioConvert, gst_play_sink_audio_convert,
GST_TYPE_BIN);
GST_TYPE_PLAY_SINK_CONVERT_BIN);
static GstStaticPadTemplate srctemplate = GST_STATIC_PAD_TEMPLATE ("src",
GST_PAD_SRC,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
enum
{
PROP_0,
PROP_USE_CONVERTERS,
PROP_USE_VOLUME,
};
static gboolean
is_raw_caps (GstCaps * caps)
gst_play_sink_audio_convert_add_conversion_elements (GstPlaySinkAudioConvert *
self)
{
gint i, n;
GstStructure *s;
const gchar *name;
GstPlaySinkConvertBin *cbin = GST_PLAY_SINK_CONVERT_BIN (self);
GstElement *el, *prev = NULL;
n = gst_caps_get_size (caps);
for (i = 0; i < n; i++) {
s = gst_caps_get_structure (caps, i);
name = gst_structure_get_name (s);
if (!g_str_has_prefix (name, "audio/x-raw"))
return FALSE;
}
g_assert (cbin->conversion_elements == NULL);
return TRUE;
}
GST_DEBUG_OBJECT (self,
"Building audio conversion with use-converters %d, use-volume %d",
self->use_converters, self->use_volume);
static void
post_missing_element_message (GstPlaySinkAudioConvert * self,
const gchar * name)
{
GstMessage *msg;
msg = gst_missing_element_message_new (GST_ELEMENT_CAST (self), name);
gst_element_post_message (GST_ELEMENT_CAST (self), msg);
}
static GstPadProbeReturn
pad_blocked_cb (GstPad * pad, GstPadProbeType type, gpointer type_data,
gpointer user_data)
{
GstPlaySinkAudioConvert *self = user_data;
GstPad *peer;
GstCaps *caps;
gboolean raw;
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Pad blocked");
/* There must be a peer at this point */
peer = gst_pad_get_peer (self->sinkpad);
caps = gst_pad_get_current_caps (peer);
if (!caps)
caps = gst_pad_get_caps (peer, NULL);
gst_object_unref (peer);
raw = is_raw_caps (caps);
GST_DEBUG_OBJECT (self, "Caps %" GST_PTR_FORMAT " are raw: %d", caps, raw);
gst_caps_unref (caps);
if (raw == self->raw)
goto unblock;
self->raw = raw;
if (raw) {
GstBin *bin = GST_BIN_CAST (self);
GstElement *head = NULL, *prev = NULL;
GstPad *pad;
GST_DEBUG_OBJECT (self, "Creating raw conversion pipeline");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
if (self->use_converters) {
self->conv = gst_element_factory_make ("audioconvert", "paconv");
if (self->conv == NULL) {
post_missing_element_message (self, "audioconvert");
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
"audioconvert"), ("audio rendering might fail"));
} else {
gst_bin_add (bin, self->conv);
gst_element_sync_state_with_parent (self->conv);
prev = head = self->conv;
}
self->resample = gst_element_factory_make ("audioresample", "resample");
if (self->resample == NULL) {
post_missing_element_message (self, "audioresample");
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
"audioresample"), ("possibly a liboil version mismatch?"));
} else {
gst_bin_add (bin, self->resample);
gst_element_sync_state_with_parent (self->resample);
if (prev) {
if (!gst_element_link_pads_full (prev, "src", self->resample, "sink",
GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
goto link_failed;
} else {
head = self->resample;
}
prev = self->resample;
}
if (self->use_converters) {
el = gst_play_sink_convert_bin_add_conversion_element_factory (cbin,
"audioconvert", "conv");
if (el) {
prev = el;
}
if (self->use_volume && self->volume) {
gst_bin_add (bin, gst_object_ref (self->volume));
gst_element_sync_state_with_parent (self->volume);
el = gst_play_sink_convert_bin_add_conversion_element_factory (cbin,
"audioresample", "resample");
if (el) {
if (prev) {
if (!gst_element_link_pads_full (prev, "src", self->volume, "sink",
if (!gst_element_link_pads_full (prev, "src", el, "sink",
GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
goto link_failed;
} else {
head = self->volume;
}
prev = self->volume;
}
if (head) {
pad = gst_element_get_static_pad (head, "sink");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), pad);
gst_object_unref (pad);
prev = el;
}
}
if (self->use_volume && self->volume) {
el = self->volume;
gst_play_sink_convert_bin_add_conversion_element (cbin, el);
if (prev) {
pad = gst_element_get_static_pad (prev, "src");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), pad);
gst_object_unref (pad);
if (!gst_element_link_pads_full (prev, "src", el, "sink",
GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
goto link_failed;
}
if (!head && !prev) {
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
}
GST_DEBUG_OBJECT (self, "Raw conversion pipeline created");
} else {
GstBin *bin = GST_BIN_CAST (self);
GST_DEBUG_OBJECT (self, "Removing raw conversion pipeline");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
if (self->conv) {
gst_element_set_state (self->conv, GST_STATE_NULL);
gst_bin_remove (bin, self->conv);
self->conv = NULL;
}
if (self->resample) {
gst_element_set_state (self->resample, GST_STATE_NULL);
gst_bin_remove (bin, self->resample);
self->resample = NULL;
}
if (self->volume) {
gst_element_set_state (self->volume, GST_STATE_NULL);
if (GST_OBJECT_PARENT (self->volume) == GST_OBJECT_CAST (self)) {
gst_bin_remove (GST_BIN_CAST (self), self->volume);
}
}
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
GST_DEBUG_OBJECT (self, "Raw conversion pipeline removed");
prev = el;
}
unblock:
self->sink_proxypad_block_id = 0;
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
return GST_PAD_PROBE_REMOVE;
link_failed:
{
GST_ELEMENT_ERROR (self, CORE, PAD,
(NULL), ("Failed to configure the audio converter."));
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
self->sink_proxypad_block_id = 0;
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
return GST_PAD_PROBE_REMOVE;
}
}
static void
block_proxypad (GstPlaySinkAudioConvert * self)
{
if (self->sink_proxypad_block_id == 0) {
self->sink_proxypad_block_id =
gst_pad_add_probe (self->sink_proxypad, GST_PAD_PROBE_TYPE_BLOCK,
pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
}
}
static void
unblock_proxypad (GstPlaySinkAudioConvert * self)
{
if (self->sink_proxypad_block_id != 0) {
gst_pad_remove_probe (self->sink_proxypad, self->sink_proxypad_block_id);
self->sink_proxypad_block_id = 0;
}
}
static gboolean
gst_play_sink_audio_convert_sink_setcaps (GstPlaySinkAudioConvert * self,
GstCaps * caps)
{
GstStructure *s;
const gchar *name;
gboolean reconfigure = FALSE;
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
s = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (s);
if (g_str_has_prefix (name, "audio/x-raw")) {
if (!self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from non-raw to raw");
reconfigure = TRUE;
block_proxypad (self);
}
} else {
if (self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from raw to non-raw");
reconfigure = TRUE;
block_proxypad (self);
}
}
/* Otherwise the setcaps below fails */
if (reconfigure) {
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
}
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
GST_DEBUG_OBJECT (self, "Setting sink caps %" GST_PTR_FORMAT, caps);
return TRUE;
}
static gboolean
gst_play_sink_audio_convert_sink_event (GstPad * pad, GstEvent * event)
{
GstPlaySinkAudioConvert *self =
GST_PLAY_SINK_AUDIO_CONVERT (gst_pad_get_parent (pad));
gboolean ret;
switch (GST_EVENT_TYPE (event)) {
case GST_EVENT_CAPS:
{
GstCaps *caps;
gst_event_parse_caps (event, &caps);
ret = gst_play_sink_audio_convert_sink_setcaps (self, caps);
break;
}
default:
break;
}
ret = gst_proxy_pad_event_default (pad, gst_event_ref (event));
switch (GST_EVENT_TYPE (event)) {
case GST_EVENT_SEGMENT:
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Segment before %" GST_SEGMENT_FORMAT,
&self->segment);
gst_event_copy_segment (event, &self->segment);
GST_DEBUG_OBJECT (self, "Segment after %" GST_SEGMENT_FORMAT,
&self->segment);
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
break;
case GST_EVENT_FLUSH_STOP:
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Resetting segment");
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
break;
default:
break;
}
gst_event_unref (event);
gst_object_unref (self);
return ret;
}
static GstCaps *
gst_play_sink_audio_convert_getcaps (GstPad * pad, GstCaps * filter)
{
GstPlaySinkAudioConvert *self =
GST_PLAY_SINK_AUDIO_CONVERT (gst_pad_get_parent (pad));
GstCaps *ret;
GstPad *otherpad, *peer = NULL;
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
otherpad = gst_ghost_pad_get_target (GST_GHOST_PAD_CAST (pad));
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
if (otherpad) {
peer = gst_pad_get_peer (otherpad);
gst_object_unref (otherpad);
otherpad = NULL;
}
if (peer) {
ret = gst_pad_get_caps (peer, filter);
gst_object_unref (peer);
} else {
ret = (filter ? gst_caps_ref (filter) : gst_caps_new_any ());
}
gst_object_unref (self);
return ret;
link_failed:
return FALSE;
}
static void
@@ -368,67 +99,65 @@ gst_play_sink_audio_convert_finalize (GObject * object)
if (self->volume)
gst_object_unref (self->volume);
gst_object_unref (self->sink_proxypad);
g_mutex_free (self->lock);
G_OBJECT_CLASS (parent_class)->finalize (object);
}
static GstStateChangeReturn
gst_play_sink_audio_convert_change_state (GstElement * element,
GstStateChange transition)
static void
gst_play_sink_audio_convert_set_property (GObject * object, guint prop_id,
const GValue * value, GParamSpec * pspec)
{
GstStateChangeReturn ret;
GstPlaySinkAudioConvert *self = GST_PLAY_SINK_AUDIO_CONVERT_CAST (element);
GstPlaySinkAudioConvert *self = GST_PLAY_SINK_AUDIO_CONVERT_CAST (object);
gboolean v, changed = FALSE;
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
unblock_proxypad (self);
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
switch (prop_id) {
case PROP_USE_CONVERTERS:
v = g_value_get_boolean (value);
if (v != self->use_converters) {
self->use_converters = v;
changed = TRUE;
}
break;
case PROP_USE_VOLUME:
v = g_value_get_boolean (value);
if (v != self->use_volume) {
self->use_volume = v;
changed = TRUE;
}
break;
default:
break;
}
ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
if (ret == GST_STATE_CHANGE_FAILURE)
return ret;
if (changed) {
GstPlaySinkConvertBin *cbin = GST_PLAY_SINK_CONVERT_BIN (self);
GST_DEBUG_OBJECT (self, "Rebuilding converter bin");
gst_play_sink_convert_bin_remove_elements (cbin);
gst_play_sink_audio_convert_add_conversion_elements (self);
gst_play_sink_convert_bin_add_identity (cbin);
gst_play_sink_convert_bin_cache_converter_caps (cbin);
}
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
}
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
if (self->conv) {
gst_element_set_state (self->conv, GST_STATE_NULL);
gst_bin_remove (GST_BIN_CAST (self), self->conv);
self->conv = NULL;
}
if (self->resample) {
gst_element_set_state (self->resample, GST_STATE_NULL);
gst_bin_remove (GST_BIN_CAST (self), self->resample);
self->resample = NULL;
}
if (self->volume) {
gst_element_set_state (self->volume, GST_STATE_NULL);
if (GST_OBJECT_PARENT (self->volume) == GST_OBJECT_CAST (self)) {
gst_bin_remove (GST_BIN_CAST (self), self->volume);
}
}
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
self->raw = FALSE;
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
static void
gst_play_sink_audio_convert_get_property (GObject * object, guint prop_id,
GValue * value, GParamSpec * pspec)
{
GstPlaySinkAudioConvert *self = GST_PLAY_SINK_AUDIO_CONVERT_CAST (object);
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
switch (prop_id) {
case PROP_USE_CONVERTERS:
g_value_set_boolean (value, self->use_converters);
break;
case PROP_USE_VOLUME:
g_value_set_boolean (value, self->use_volume);
break;
case GST_STATE_CHANGE_READY_TO_PAUSED:
GST_PLAY_SINK_AUDIO_CONVERT_LOCK (self);
unblock_proxypad (self);
GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK (self);
default:
break;
}
return ret;
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
}
static void
@@ -444,50 +173,31 @@ gst_play_sink_audio_convert_class_init (GstPlaySinkAudioConvertClass * klass)
gstelement_class = (GstElementClass *) klass;
gobject_class->finalize = gst_play_sink_audio_convert_finalize;
gobject_class->set_property = gst_play_sink_audio_convert_set_property;
gobject_class->get_property = gst_play_sink_audio_convert_get_property;
g_object_class_install_property (gobject_class, PROP_USE_CONVERTERS,
g_param_spec_boolean ("use-converters", "Use converters",
"Whether to use conversion elements", FALSE,
G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY | G_PARAM_STATIC_STRINGS));
g_object_class_install_property (gobject_class, PROP_USE_VOLUME,
g_param_spec_boolean ("use-volume", "Use volume",
"Whether to use a volume element", FALSE,
G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY | G_PARAM_STATIC_STRINGS));
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&srctemplate));
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&sinktemplate));
gst_element_class_set_details_simple (gstelement_class,
"Player Sink Audio Converter", "Audio/Bin/Converter",
"Convenience bin for audio conversion",
"Sebastian Dröge <sebastian.droege@collabora.co.uk>");
gstelement_class->change_state =
GST_DEBUG_FUNCPTR (gst_play_sink_audio_convert_change_state);
}
static void
gst_play_sink_audio_convert_init (GstPlaySinkAudioConvert * self)
{
GstPadTemplate *templ;
GstPlaySinkConvertBin *cbin = GST_PLAY_SINK_CONVERT_BIN (self);
self->lock = g_mutex_new ();
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
templ = gst_static_pad_template_get (&sinktemplate);
self->sinkpad = gst_ghost_pad_new_no_target_from_template ("sink", templ);
gst_pad_set_event_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_audio_convert_sink_event));
gst_pad_set_getcaps_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_audio_convert_getcaps));
self->sink_proxypad =
GST_PAD_CAST (gst_proxy_pad_get_internal (GST_PROXY_PAD (self->sinkpad)));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->sinkpad);
gst_object_unref (templ);
templ = gst_static_pad_template_get (&srctemplate);
self->srcpad = gst_ghost_pad_new_no_target_from_template ("src", templ);
gst_pad_set_getcaps_function (self->srcpad,
GST_DEBUG_FUNCPTR (gst_play_sink_audio_convert_getcaps));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->srcpad);
gst_object_unref (templ);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
cbin->audio = TRUE;
/* FIXME: Only create this on demand but for now we need
* it to always exist because of playsink's volume proxying
@@ -496,4 +206,7 @@ gst_play_sink_audio_convert_init (GstPlaySinkAudioConvert * self)
self->volume = gst_element_factory_make ("volume", "volume");
if (self->volume)
gst_object_ref_sink (self->volume);
gst_play_sink_audio_convert_add_conversion_elements (self);
gst_play_sink_convert_bin_cache_converter_caps (cbin);
}

gst/playback/gstplaysinkaudioconvert.h

@@ -18,6 +18,7 @@
*/
#include <gst/gst.h>
#include "gstplaysinkconvertbin.h"
#ifndef __GST_PLAY_SINK_AUDIO_CONVERT_H__
#define __GST_PLAY_SINK_AUDIO_CONVERT_H__
@@ -35,52 +36,22 @@
(G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_PLAY_SINK_AUDIO_CONVERT))
#define GST_IS_PLAY_SINK_AUDIO_CONVERT_CLASS(klass) \
(G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_PLAY_SINK_AUDIO_CONVERT))
#define GST_PLAY_SINK_AUDIO_CONVERT_LOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"locking from thread %p", \
g_thread_self ()); \
g_mutex_lock (GST_PLAY_SINK_AUDIO_CONVERT_CAST(obj)->lock); \
GST_LOG_OBJECT (obj, \
"locked from thread %p", \
g_thread_self ()); \
} G_STMT_END
#define GST_PLAY_SINK_AUDIO_CONVERT_UNLOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"unlocking from thread %p", \
g_thread_self ()); \
g_mutex_unlock (GST_PLAY_SINK_AUDIO_CONVERT_CAST(obj)->lock); \
} G_STMT_END
typedef struct _GstPlaySinkAudioConvert GstPlaySinkAudioConvert;
typedef struct _GstPlaySinkAudioConvertClass GstPlaySinkAudioConvertClass;
struct _GstPlaySinkAudioConvert
{
GstBin parent;
/* < private > */
GMutex *lock;
GstPad *sinkpad, *sink_proxypad;
gulong sink_proxypad_block_id;
GstSegment segment;
GstPad *srcpad;
gboolean raw;
GstElement *conv, *resample;
GstPlaySinkConvertBin parent;
/* < pseudo public > */
GstElement *volume;
gboolean use_volume;
gboolean use_converters;
gboolean use_volume;
};
struct _GstPlaySinkAudioConvertClass
{
GstBinClass parent;
GstPlaySinkConvertBinClass parent;
};
GType gst_play_sink_audio_convert_get_type (void);

View file

@ -0,0 +1,569 @@
/* GStreamer
* Copyright (C) <2011> Sebastian Dröge <sebastian.droege@collabora.co.uk>
 * Copyright (C) <2011> Vincent Penquerc'h <vincent.penquerch@collabora.co.uk>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Library General Public License for more details.
*
* You should have received a copy of the GNU Library General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include "gstplaysinkconvertbin.h"
#include <gst/pbutils/pbutils.h>
#include <gst/gst-i18n-plugin.h>
GST_DEBUG_CATEGORY_STATIC (gst_play_sink_convert_bin_debug);
#define GST_CAT_DEFAULT gst_play_sink_convert_bin_debug
#define parent_class gst_play_sink_convert_bin_parent_class
G_DEFINE_TYPE (GstPlaySinkConvertBin, gst_play_sink_convert_bin, GST_TYPE_BIN);
static GstStaticPadTemplate srctemplate = GST_STATIC_PAD_TEMPLATE ("src",
GST_PAD_SRC,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
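/* Returns TRUE only if every structure in @caps is a raw format, i.e. has
 * an "audio/x-raw-" or "video/x-raw-" name prefix depending on @audio. */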
static gboolean
is_raw_caps (GstCaps * caps, gboolean audio)
{
gint i, n;
GstStructure *s;
const gchar *name;
const gchar *prefix = audio ? "audio/x-raw-" : "video/x-raw-";
n = gst_caps_get_size (caps);
for (i = 0; i < n; i++) {
s = gst_caps_get_structure (caps, i);
name = gst_structure_get_name (s);
if (!g_str_has_prefix (name, prefix))
return FALSE;
}
return TRUE;
}
static void
gst_play_sink_convert_bin_post_missing_element_message (GstPlaySinkConvertBin *
self, const gchar * name)
{
GstMessage *msg;
msg = gst_missing_element_message_new (GST_ELEMENT_CAST (self), name);
gst_element_post_message (GST_ELEMENT_CAST (self), msg);
}
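/* Flushes a newly activated element and pushes the currently configured
 * segment(s) into its sink pad, so that it can compute valid running times
 * right away instead of waiting for the next newsegment event. */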
static void
distribute_running_time (GstElement * element, const GstSegment * segment)
{
GstEvent *event;
GstPad *pad;
pad = gst_element_get_static_pad (element, "sink");
gst_pad_send_event (pad, gst_event_new_flush_start ());
gst_pad_send_event (pad, gst_event_new_flush_stop ());
if (segment->accum) {
event = gst_event_new_new_segment_full (FALSE, segment->rate,
segment->applied_rate, segment->format, 0, segment->accum, 0);
gst_pad_send_event (pad, event);
}
event = gst_event_new_new_segment_full (FALSE, segment->rate,
segment->applied_rate, segment->format,
segment->start, segment->stop, segment->time);
gst_pad_send_event (pad, event);
gst_object_unref (pad);
}
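/* Appends @el to the conversion chain and adds it to the bin; consecutive
 * elements are linked by the subclass, and the chain is wired up to the
 * ghost pads in gst_play_sink_convert_bin_set_targets(). */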
void
gst_play_sink_convert_bin_add_conversion_element (GstPlaySinkConvertBin * self,
GstElement * el)
{
self->conversion_elements = g_list_append (self->conversion_elements, el);
gst_bin_add (GST_BIN (self), gst_object_ref (el));
}
GstElement *
gst_play_sink_convert_bin_add_conversion_element_factory (GstPlaySinkConvertBin
* self, const char *factory, const char *name)
{
GstElement *el;
el = gst_element_factory_make (factory, name);
if (el == NULL) {
gst_play_sink_convert_bin_post_missing_element_message (self, factory);
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
factory),
(self->audio ? "audio rendering might fail" :
"video rendering might fail"));
} else {
gst_play_sink_convert_bin_add_conversion_element (self, el);
}
return el;
}
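/* Creates the identity element used for the passthrough path, posting a
 * missing-plugin message and a warning if it is not available. */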
void
gst_play_sink_convert_bin_add_identity (GstPlaySinkConvertBin * self)
{
if (self->identity)
return;
self->identity = gst_element_factory_make ("identity", "identity");
if (self->identity == NULL) {
gst_play_sink_convert_bin_post_missing_element_message (self, "identity");
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
"identity"), (self->audio ?
"audio rendering might fail" : "video rendering might fail")
);
} else {
gst_bin_add (GST_BIN_CAST (self), self->identity);
}
}
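/* Points the bin's ghost pads at the head and tail of the conversion
 * chain, or at the identity element when in passthrough mode or when no
 * conversion elements were added. */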
static void
gst_play_sink_convert_bin_set_targets (GstPlaySinkConvertBin * self,
gboolean passthrough)
{
GstPad *pad;
GstElement *head, *tail;
GST_DEBUG_OBJECT (self, "Setting pad targets with passthrough %d",
passthrough);
if (self->conversion_elements == NULL || passthrough) {
GST_DEBUG_OBJECT (self, "no conversion elements, using identity (%p) as "
"head/tail", self->identity);
if (!passthrough) {
GST_WARNING_OBJECT (self,
"Doing passthrough as no converter elements were added");
}
head = tail = self->identity;
} else {
head = GST_ELEMENT (g_list_first (self->conversion_elements)->data);
tail = GST_ELEMENT (g_list_last (self->conversion_elements)->data);
GST_DEBUG_OBJECT (self, "conversion elements in use, picking "
"head:%s and tail:%s", GST_OBJECT_NAME (head), GST_OBJECT_NAME (tail));
}
g_return_if_fail (head != NULL);
g_return_if_fail (tail != NULL);
pad = gst_element_get_static_pad (head, "sink");
GST_DEBUG_OBJECT (self, "Ghosting bin sink pad to %" GST_PTR_FORMAT, pad);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), pad);
gst_object_unref (pad);
pad = gst_element_get_static_pad (tail, "src");
GST_DEBUG_OBJECT (self, "Ghosting bin src pad to %" GST_PTR_FORMAT, pad);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), pad);
gst_object_unref (pad);
}
static void
gst_play_sink_convert_bin_remove_element (GstElement * element,
GstPlaySinkConvertBin * self)
{
gst_element_set_state (element, GST_STATE_NULL);
gst_bin_remove (GST_BIN_CAST (self), element);
}
static void
gst_play_sink_convert_bin_on_element_added (GstElement * element,
GstPlaySinkConvertBin * self)
{
gst_element_sync_state_with_parent (element);
distribute_running_time (element, &self->segment);
}
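/* Runs once the sink proxy pad is blocked: checks whether the upstream
 * caps are raw, switches between the conversion chain and passthrough if
 * that changed, and unblocks the pad again. */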
static void
pad_blocked_cb (GstPad * pad, gboolean blocked, GstPlaySinkConvertBin * self)
{
GstPad *peer;
GstCaps *caps;
gboolean raw;
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
self->sink_proxypad_blocked = blocked;
GST_DEBUG_OBJECT (self, "Pad blocked: %d", blocked);
if (!blocked)
goto done;
/* There must be a peer at this point */
peer = gst_pad_get_peer (self->sinkpad);
caps = gst_pad_get_negotiated_caps (peer);
if (!caps)
caps = gst_pad_get_caps_reffed (peer);
gst_object_unref (peer);
raw = is_raw_caps (caps, self->audio);
GST_DEBUG_OBJECT (self, "Caps %" GST_PTR_FORMAT " are raw: %d", caps, raw);
gst_caps_unref (caps);
if (raw == self->raw)
goto unblock;
self->raw = raw;
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
if (raw) {
GST_DEBUG_OBJECT (self, "Switching to raw conversion pipeline");
if (self->conversion_elements)
g_list_foreach (self->conversion_elements,
(GFunc) gst_play_sink_convert_bin_on_element_added, self);
} else {
GST_DEBUG_OBJECT (self, "Switch to passthrough pipeline");
gst_play_sink_convert_bin_on_element_added (self->identity, self);
}
gst_play_sink_convert_bin_set_targets (self, !raw);
unblock:
gst_pad_set_blocked_async_full (self->sink_proxypad, FALSE,
(GstPadBlockCallback) pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
done:
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
return;
}
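/* Forwards sink events while keeping a copy of the current segment so it
 * can later be distributed to newly activated elements; FLUSH_STOP resets
 * the stored segment. */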
static gboolean
gst_play_sink_convert_bin_sink_event (GstPad * pad, GstEvent * event)
{
GstPlaySinkConvertBin *self =
GST_PLAY_SINK_CONVERT_BIN (gst_pad_get_parent (pad));
gboolean ret;
ret = gst_proxy_pad_event_default (pad, gst_event_ref (event));
if (GST_EVENT_TYPE (event) == GST_EVENT_NEWSEGMENT) {
gboolean update;
gdouble rate, applied_rate;
GstFormat format;
gint64 start, stop, position;
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
gst_event_parse_new_segment_full (event, &update, &rate, &applied_rate,
&format, &start, &stop, &position);
GST_DEBUG_OBJECT (self, "Segment before %" GST_SEGMENT_FORMAT,
&self->segment);
gst_segment_set_newsegment_full (&self->segment, update, rate, applied_rate,
format, start, stop, position);
GST_DEBUG_OBJECT (self, "Segment after %" GST_SEGMENT_FORMAT,
&self->segment);
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
} else if (GST_EVENT_TYPE (event) == GST_EVENT_FLUSH_STOP) {
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
GST_DEBUG_OBJECT (self, "Resetting segment");
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
}
gst_event_unref (event);
gst_object_unref (self);
return ret;
}
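/* Watches for raw <-> non-raw caps changes; when the mode flips, blocks
 * the proxy pad so pad_blocked_cb() can rebuild the internal pipeline,
 * and untargets the ghost pads so the setcaps below can succeed. */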
static gboolean
gst_play_sink_convert_bin_sink_setcaps (GstPad * pad, GstCaps * caps)
{
GstPlaySinkConvertBin *self =
GST_PLAY_SINK_CONVERT_BIN (gst_pad_get_parent (pad));
gboolean ret;
GstStructure *s;
const gchar *name;
gboolean reconfigure = FALSE;
gboolean raw;
GST_DEBUG_OBJECT (pad, "setcaps: %" GST_PTR_FORMAT, caps);
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
s = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (s);
if (self->audio) {
raw = g_str_has_prefix (name, "audio/x-raw-");
} else {
raw = g_str_has_prefix (name, "video/x-raw-");
}
GST_DEBUG_OBJECT (self, "raw %d, self->raw %d, blocked %d",
raw, self->raw, gst_pad_is_blocked (self->sink_proxypad));
if (raw) {
if (!self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from non-raw to raw");
reconfigure = TRUE;
gst_pad_set_blocked_async_full (self->sink_proxypad, TRUE,
(GstPadBlockCallback) pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
}
} else {
if (self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from raw to non-raw");
reconfigure = TRUE;
gst_pad_set_blocked_async_full (self->sink_proxypad, TRUE,
(GstPadBlockCallback) pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
}
}
/* Otherwise the setcaps below fails */
if (reconfigure) {
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
}
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
ret = gst_ghost_pad_setcaps_default (pad, caps);
GST_DEBUG_OBJECT (self, "Setting sink caps %" GST_PTR_FORMAT ": %d", caps,
ret);
gst_object_unref (self);
return ret;
}
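/* Returns the peer caps of the opposite pad, merged with the cached
 * converter caps, so that both the passthrough and the conversion formats
 * are advertised. */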
static GstCaps *
gst_play_sink_convert_bin_getcaps (GstPad * pad)
{
GstPlaySinkConvertBin *self =
GST_PLAY_SINK_CONVERT_BIN (gst_pad_get_parent (pad));
GstCaps *ret;
GstPad *otherpad, *peer;
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
if (pad == self->srcpad) {
otherpad = self->sinkpad;
} else if (pad == self->sinkpad) {
otherpad = self->srcpad;
} else {
GST_ERROR_OBJECT (pad, "Not one of our pads");
otherpad = NULL;
}
if (otherpad) {
peer = gst_pad_get_peer (otherpad);
if (peer) {
GstCaps *peer_caps = gst_pad_get_caps_reffed (peer);
gst_object_unref (peer);
if (self->converter_caps) {
gst_caps_merge (peer_caps, gst_caps_ref (self->converter_caps));
ret = peer_caps;
} else {
ret = peer_caps;
}
} else {
ret = gst_caps_ref (self->converter_caps);
}
} else {
ret = gst_caps_new_any ();
}
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
gst_object_unref (self);
return ret;
}
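/* Shuts down and removes all conversion elements and drops the cached
 * converter caps; also called from finalize(). */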
void
gst_play_sink_convert_bin_remove_elements (GstPlaySinkConvertBin * self)
{
if (self->conversion_elements) {
g_list_foreach (self->conversion_elements,
(GFunc) gst_play_sink_convert_bin_remove_element, self);
g_list_free (self->conversion_elements);
self->conversion_elements = NULL;
}
if (self->converter_caps) {
gst_caps_unref (self->converter_caps);
self->converter_caps = NULL;
}
}
static void
gst_play_sink_convert_bin_finalize (GObject * object)
{
GstPlaySinkConvertBin *self = GST_PLAY_SINK_CONVERT_BIN_CAST (object);
gst_play_sink_convert_bin_remove_elements (self);
gst_object_unref (self->sink_proxypad);
g_mutex_free (self->lock);
G_OBJECT_CLASS (parent_class)->finalize (object);
}
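/* Caches the sink pad caps of the first conversion element; getcaps()
 * merges these into the peer caps it returns. */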
void
gst_play_sink_convert_bin_cache_converter_caps (GstPlaySinkConvertBin * self)
{
GstElement *head;
GstPad *pad;
if (self->converter_caps) {
gst_caps_unref (self->converter_caps);
self->converter_caps = NULL;
}
if (!self->conversion_elements) {
GST_WARNING_OBJECT (self, "No conversion elements");
return;
}
head = GST_ELEMENT (g_list_first (self->conversion_elements)->data);
pad = gst_element_get_static_pad (head, "sink");
if (!pad) {
GST_WARNING_OBJECT (self, "No sink pad found");
return;
}
self->converter_caps = gst_pad_get_caps_reffed (pad);
GST_INFO_OBJECT (self, "Converter caps: %" GST_PTR_FORMAT,
self->converter_caps);
gst_object_unref (pad);
}
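/* On READY->PAUSED the proxy pad is blocked so that the first caps
 * trigger a reconfiguration; on PAUSED->READY the block is removed and
 * the bin is reset to passthrough mode. */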
static GstStateChangeReturn
gst_play_sink_convert_bin_change_state (GstElement * element,
GstStateChange transition)
{
GstStateChangeReturn ret;
GstPlaySinkConvertBin *self = GST_PLAY_SINK_CONVERT_BIN_CAST (element);
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
if (gst_pad_is_blocked (self->sink_proxypad))
gst_pad_set_blocked_async_full (self->sink_proxypad, FALSE,
(GstPadBlockCallback) pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
break;
case GST_STATE_CHANGE_READY_TO_PAUSED:
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
gst_play_sink_convert_bin_set_targets (self, TRUE);
self->raw = FALSE;
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
break;
default:
break;
}
ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
if (ret == GST_STATE_CHANGE_FAILURE)
return ret;
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
gst_play_sink_convert_bin_set_targets (self, TRUE);
self->raw = FALSE;
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
break;
case GST_STATE_CHANGE_READY_TO_PAUSED:
GST_PLAY_SINK_CONVERT_BIN_LOCK (self);
if (!gst_pad_is_blocked (self->sink_proxypad))
gst_pad_set_blocked_async_full (self->sink_proxypad, TRUE,
(GstPadBlockCallback) pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
GST_PLAY_SINK_CONVERT_BIN_UNLOCK (self);
break;
default:
break;
}
return ret;
}
static void
gst_play_sink_convert_bin_class_init (GstPlaySinkConvertBinClass * klass)
{
GObjectClass *gobject_class;
GstElementClass *gstelement_class;
GST_DEBUG_CATEGORY_INIT (gst_play_sink_convert_bin_debug,
"playsinkconvertbin", 0, "play bin");
gobject_class = (GObjectClass *) klass;
gstelement_class = (GstElementClass *) klass;
gobject_class->finalize = gst_play_sink_convert_bin_finalize;
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&srctemplate));
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&sinktemplate));
gst_element_class_set_details_simple (gstelement_class,
"Player Sink Converter Bin", "Bin/Converter",
"Convenience bin for audio/video conversion",
"Sebastian Dröge <sebastian.droege@collabora.co.uk>");
gstelement_class->change_state =
GST_DEBUG_FUNCPTR (gst_play_sink_convert_bin_change_state);
}
static void
gst_play_sink_convert_bin_init (GstPlaySinkConvertBin * self)
{
GstPadTemplate *templ;
self->lock = g_mutex_new ();
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
templ = gst_static_pad_template_get (&sinktemplate);
self->sinkpad = gst_ghost_pad_new_no_target_from_template ("sink", templ);
gst_pad_set_event_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_convert_bin_sink_event));
gst_pad_set_setcaps_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_convert_bin_sink_setcaps));
gst_pad_set_getcaps_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_convert_bin_getcaps));
self->sink_proxypad =
GST_PAD_CAST (gst_proxy_pad_get_internal (GST_PROXY_PAD (self->sinkpad)));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->sinkpad);
gst_object_unref (templ);
templ = gst_static_pad_template_get (&srctemplate);
self->srcpad = gst_ghost_pad_new_no_target_from_template ("src", templ);
gst_pad_set_getcaps_function (self->srcpad,
GST_DEBUG_FUNCPTR (gst_play_sink_convert_bin_getcaps));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->srcpad);
gst_object_unref (templ);
gst_play_sink_convert_bin_add_identity (self);
}

View file

@ -0,0 +1,103 @@
/* GStreamer
* Copyright (C) <2011> Sebastian Dröge <sebastian.droege@collabora.co.uk>
* Copyright (C) <2011> Vincent Penquerc'h <vincent.penquerch@collabora.co.uk>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Library General Public License for more details.
*
* You should have received a copy of the GNU Library General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#include <gst/gst.h>
#ifndef __GST_PLAY_SINK_CONVERT_BIN_H__
#define __GST_PLAY_SINK_CONVERT_BIN_H__
G_BEGIN_DECLS
#define GST_TYPE_PLAY_SINK_CONVERT_BIN \
(gst_play_sink_convert_bin_get_type())
#define GST_PLAY_SINK_CONVERT_BIN(obj) \
(G_TYPE_CHECK_INSTANCE_CAST ((obj), GST_TYPE_PLAY_SINK_CONVERT_BIN, GstPlaySinkConvertBin))
#define GST_PLAY_SINK_CONVERT_BIN_CAST(obj) \
((GstPlaySinkConvertBin *) obj)
#define GST_PLAY_SINK_CONVERT_BIN_CLASS(klass) \
(G_TYPE_CHECK_CLASS_CAST ((klass), GST_TYPE_PLAY_SINK_CONVERT_BIN, GstPlaySinkConvertBinClass))
#define GST_IS_PLAY_SINK_CONVERT_BIN(obj) \
(G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_PLAY_SINK_CONVERT_BIN))
#define GST_IS_PLAY_SINK_CONVERT_BIN_CLASS(klass) \
(G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_PLAY_SINK_CONVERT_BIN))
#define GST_PLAY_SINK_CONVERT_BIN_LOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"locking from thread %p", \
g_thread_self ()); \
g_mutex_lock (GST_PLAY_SINK_CONVERT_BIN_CAST(obj)->lock); \
GST_LOG_OBJECT (obj, \
"locked from thread %p", \
g_thread_self ()); \
} G_STMT_END
#define GST_PLAY_SINK_CONVERT_BIN_UNLOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"unlocking from thread %p", \
g_thread_self ()); \
g_mutex_unlock (GST_PLAY_SINK_CONVERT_BIN_CAST(obj)->lock); \
} G_STMT_END
typedef struct _GstPlaySinkConvertBin GstPlaySinkConvertBin;
typedef struct _GstPlaySinkConvertBinClass GstPlaySinkConvertBinClass;
struct _GstPlaySinkConvertBin
{
GstBin parent;
/* < private > */
GMutex *lock;
GstPad *sinkpad, *sink_proxypad;
gboolean sink_proxypad_blocked;
GstSegment segment;
GstPad *srcpad;
gboolean raw;
GList *conversion_elements;
GstElement *identity;
GstCaps *converter_caps;
/* configuration for derived classes */
gboolean audio;
};
struct _GstPlaySinkConvertBinClass
{
GstBinClass parent;
};
GType gst_play_sink_convert_bin_get_type (void);
GstElement *
gst_play_sink_convert_bin_add_conversion_element_factory (GstPlaySinkConvertBin *self,
const char *factory, const char *name);
void
gst_play_sink_convert_bin_add_conversion_element (GstPlaySinkConvertBin *self,
GstElement *el);
void
gst_play_sink_convert_bin_cache_converter_caps (GstPlaySinkConvertBin * self);
void
gst_play_sink_convert_bin_remove_elements (GstPlaySinkConvertBin * self);
void
gst_play_sink_convert_bin_add_identity (GstPlaySinkConvertBin * self);
G_END_DECLS
#endif /* __GST_PLAY_SINK_CONVERT_BIN_H__ */

View file

@ -32,428 +32,61 @@ GST_DEBUG_CATEGORY_STATIC (gst_play_sink_video_convert_debug);
#define parent_class gst_play_sink_video_convert_parent_class
G_DEFINE_TYPE (GstPlaySinkVideoConvert, gst_play_sink_video_convert,
GST_TYPE_BIN);
static GstStaticPadTemplate srctemplate = GST_STATIC_PAD_TEMPLATE ("src",
GST_PAD_SRC,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS_ANY);
GST_TYPE_PLAY_SINK_CONVERT_BIN);
static gboolean
gst_play_sink_video_convert_add_conversion_elements (GstPlaySinkVideoConvert *
    self)
{
  GstPlaySinkConvertBin *cbin = GST_PLAY_SINK_CONVERT_BIN (self);
  GstElement *el, *prev = NULL;
  el = gst_play_sink_convert_bin_add_conversion_element_factory (cbin,
      COLORSPACE, "conv");
  if (el)
    prev = el;
  el = gst_play_sink_convert_bin_add_conversion_element_factory (cbin,
      "videoscale", "scale");
  if (el) {
    /* Add black borders if necessary to keep the DAR */
    g_object_set (el, "add-borders", TRUE, NULL);
    if (prev) {
      if (!gst_element_link_pads_full (prev, "src", el, "sink",
              GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
        goto link_failed;
    }
    prev = el;
  }
  return TRUE;
link_failed:
  return FALSE;
}
static void
post_missing_element_message (GstPlaySinkVideoConvert * self,
const gchar * name)
{
GstMessage *msg;
msg = gst_missing_element_message_new (GST_ELEMENT_CAST (self), name);
gst_element_post_message (GST_ELEMENT_CAST (self), msg);
}
static GstPadProbeReturn
pad_blocked_cb (GstPad * pad, GstPadProbeType type, gpointer type_data,
gpointer user_data)
{
GstPlaySinkVideoConvert *self = user_data;
GstPad *peer;
GstCaps *caps;
gboolean raw;
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Pad blocked");
/* There must be a peer at this point */
peer = gst_pad_get_peer (self->sinkpad);
caps = gst_pad_get_current_caps (peer);
if (!caps)
caps = gst_pad_get_caps (peer, NULL);
gst_object_unref (peer);
raw = is_raw_caps (caps);
GST_DEBUG_OBJECT (self, "Caps %" GST_PTR_FORMAT " are raw: %d", caps, raw);
gst_caps_unref (caps);
if (raw == self->raw)
goto unblock;
self->raw = raw;
if (raw) {
GstBin *bin = GST_BIN_CAST (self);
GstElement *head = NULL, *prev = NULL;
GstPad *pad;
GST_DEBUG_OBJECT (self, "Creating raw conversion pipeline");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
self->conv = gst_element_factory_make (COLORSPACE, "pvconv");
if (self->conv == NULL) {
post_missing_element_message (self, COLORSPACE);
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
COLORSPACE), ("video rendering might fail"));
} else {
gst_bin_add (bin, self->conv);
gst_element_sync_state_with_parent (self->conv);
prev = head = self->conv;
}
self->scale = gst_element_factory_make ("videoscale", "scale");
if (self->scale == NULL) {
post_missing_element_message (self, "videoscale");
GST_ELEMENT_WARNING (self, CORE, MISSING_PLUGIN,
(_("Missing element '%s' - check your GStreamer installation."),
"videoscale"), ("possibly a liboil version mismatch?"));
} else {
/* Add black borders if necessary to keep the DAR */
g_object_set (self->scale, "add-borders", TRUE, NULL);
gst_bin_add (bin, self->scale);
gst_element_sync_state_with_parent (self->scale);
if (prev) {
if (!gst_element_link_pads_full (prev, "src", self->scale, "sink",
GST_PAD_LINK_CHECK_TEMPLATE_CAPS))
goto link_failed;
} else {
head = self->scale;
}
prev = self->scale;
}
if (head) {
pad = gst_element_get_static_pad (head, "sink");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), pad);
gst_object_unref (pad);
}
if (prev) {
pad = gst_element_get_static_pad (prev, "src");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), pad);
gst_object_unref (pad);
}
if (!head && !prev) {
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
}
GST_DEBUG_OBJECT (self, "Raw conversion pipeline created");
} else {
GstBin *bin = GST_BIN_CAST (self);
GST_DEBUG_OBJECT (self, "Removing raw conversion pipeline");
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
if (self->conv) {
gst_element_set_state (self->conv, GST_STATE_NULL);
gst_bin_remove (bin, self->conv);
self->conv = NULL;
}
if (self->scale) {
gst_element_set_state (self->scale, GST_STATE_NULL);
gst_bin_remove (bin, self->scale);
self->scale = NULL;
}
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
GST_DEBUG_OBJECT (self, "Raw conversion pipeline removed");
}
unblock:
self->sink_proxypad_block_id = 0;
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
return GST_PAD_PROBE_REMOVE;
link_failed:
{
GST_ELEMENT_ERROR (self, CORE, PAD,
(NULL), ("Failed to configure the video converter."));
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
self->sink_proxypad_block_id = 0;
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
return GST_PAD_PROBE_REMOVE;
}
}
static void
block_proxypad (GstPlaySinkVideoConvert * self)
{
if (self->sink_proxypad_block_id == 0) {
self->sink_proxypad_block_id =
gst_pad_add_probe (self->sink_proxypad, GST_PAD_PROBE_TYPE_BLOCK,
pad_blocked_cb, gst_object_ref (self),
(GDestroyNotify) gst_object_unref);
}
}
static void
unblock_proxypad (GstPlaySinkVideoConvert * self)
{
if (self->sink_proxypad_block_id != 0) {
gst_pad_remove_probe (self->sink_proxypad, self->sink_proxypad_block_id);
self->sink_proxypad_block_id = 0;
}
}
static gboolean
gst_play_sink_video_convert_sink_setcaps (GstPlaySinkVideoConvert * self,
GstCaps * caps)
{
GstStructure *s;
const gchar *name;
gboolean reconfigure = FALSE;
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
s = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (s);
if (g_str_has_prefix (name, "video/x-raw")) {
if (!self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from non-raw to raw");
reconfigure = TRUE;
block_proxypad (self);
}
} else {
if (self->raw && !gst_pad_is_blocked (self->sink_proxypad)) {
GST_DEBUG_OBJECT (self, "Changing caps from raw to non-raw");
reconfigure = TRUE;
block_proxypad (self);
}
}
/* Otherwise the setcaps below fails */
if (reconfigure) {
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->sinkpad), NULL);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad), NULL);
}
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
GST_DEBUG_OBJECT (self, "Setting sink caps %" GST_PTR_FORMAT, caps);
return TRUE;
}
static gboolean
gst_play_sink_video_convert_sink_event (GstPad * pad, GstEvent * event)
{
GstPlaySinkVideoConvert *self =
GST_PLAY_SINK_VIDEO_CONVERT (gst_pad_get_parent (pad));
gboolean ret;
switch (GST_EVENT_TYPE (event)) {
case GST_EVENT_CAPS:
{
GstCaps *caps;
gst_event_parse_caps (event, &caps);
ret = gst_play_sink_video_convert_sink_setcaps (self, caps);
break;
}
default:
break;
}
ret = gst_proxy_pad_event_default (pad, gst_event_ref (event));
switch (GST_EVENT_TYPE (event)) {
case GST_EVENT_SEGMENT:
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Segment before %" GST_SEGMENT_FORMAT,
&self->segment);
gst_event_copy_segment (event, &self->segment);
GST_DEBUG_OBJECT (self, "Segment after %" GST_SEGMENT_FORMAT,
&self->segment);
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
break;
case GST_EVENT_FLUSH_STOP:
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
GST_DEBUG_OBJECT (self, "Resetting segment");
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
break;
default:
break;
}
gst_event_unref (event);
gst_object_unref (self);
return ret;
}
static GstCaps *
gst_play_sink_video_convert_getcaps (GstPad * pad, GstCaps * filter)
{
GstPlaySinkVideoConvert *self =
GST_PLAY_SINK_VIDEO_CONVERT (gst_pad_get_parent (pad));
GstCaps *ret;
GstPad *otherpad, *peer;
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
otherpad = gst_ghost_pad_get_target (GST_GHOST_PAD_CAST (pad));
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
peer = gst_pad_get_peer (otherpad);
if (peer) {
ret = gst_pad_get_caps (peer, filter);
gst_object_unref (peer);
} else {
ret = (filter ? gst_caps_ref (filter) : gst_caps_new_any ());
}
gst_object_unref (otherpad);
gst_object_unref (self);
return ret;
}
static void
gst_play_sink_video_convert_finalize (GObject * object)
{
GstPlaySinkVideoConvert *self = GST_PLAY_SINK_VIDEO_CONVERT_CAST (object);
gst_object_unref (self->sink_proxypad);
g_mutex_free (self->lock);
G_OBJECT_CLASS (parent_class)->finalize (object);
}
static GstStateChangeReturn
gst_play_sink_video_convert_change_state (GstElement * element,
GstStateChange transition)
{
GstStateChangeReturn ret;
GstPlaySinkVideoConvert *self = GST_PLAY_SINK_VIDEO_CONVERT_CAST (element);
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
unblock_proxypad (self);
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
break;
default:
break;
}
ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);
if (ret == GST_STATE_CHANGE_FAILURE)
return ret;
switch (transition) {
case GST_STATE_CHANGE_PAUSED_TO_READY:
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
if (self->conv) {
gst_element_set_state (self->conv, GST_STATE_NULL);
gst_bin_remove (GST_BIN_CAST (self), self->conv);
self->conv = NULL;
}
if (self->scale) {
gst_element_set_state (self->scale, GST_STATE_NULL);
gst_bin_remove (GST_BIN_CAST (self), self->scale);
self->scale = NULL;
}
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
self->raw = FALSE;
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
break;
case GST_STATE_CHANGE_READY_TO_PAUSED:
GST_PLAY_SINK_VIDEO_CONVERT_LOCK (self);
if (!gst_pad_is_blocked (self->sink_proxypad))
block_proxypad (self);
GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK (self);
default:
break;
}
return ret;
}
static void
gst_play_sink_video_convert_class_init (GstPlaySinkVideoConvertClass * klass)
{
GObjectClass *gobject_class;
GstElementClass *gstelement_class;
GST_DEBUG_CATEGORY_INIT (gst_play_sink_video_convert_debug,
"playsinkvideoconvert", 0, "play bin");
gobject_class = (GObjectClass *) klass;
gstelement_class = (GstElementClass *) klass;
gobject_class->finalize = gst_play_sink_video_convert_finalize;
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&srctemplate));
gst_element_class_add_pad_template (gstelement_class,
gst_static_pad_template_get (&sinktemplate));
gst_element_class_set_details_simple (gstelement_class,
"Player Sink Video Converter", "Video/Bin/Converter",
"Convenience bin for video conversion",
"Sebastian Dröge <sebastian.droege@collabora.co.uk>");
gstelement_class->change_state =
GST_DEBUG_FUNCPTR (gst_play_sink_video_convert_change_state);
}
static void
gst_play_sink_video_convert_init (GstPlaySinkVideoConvert * self)
{
GstPadTemplate *templ;
GstPlaySinkConvertBin *cbin = GST_PLAY_SINK_CONVERT_BIN (self);
cbin->audio = FALSE;
self->lock = g_mutex_new ();
gst_segment_init (&self->segment, GST_FORMAT_UNDEFINED);
templ = gst_static_pad_template_get (&sinktemplate);
self->sinkpad = gst_ghost_pad_new_no_target_from_template ("sink", templ);
gst_pad_set_event_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_video_convert_sink_event));
gst_pad_set_getcaps_function (self->sinkpad,
GST_DEBUG_FUNCPTR (gst_play_sink_video_convert_getcaps));
self->sink_proxypad =
GST_PAD_CAST (gst_proxy_pad_get_internal (GST_PROXY_PAD (self->sinkpad)));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->sinkpad);
gst_object_unref (templ);
templ = gst_static_pad_template_get (&srctemplate);
self->srcpad = gst_ghost_pad_new_no_target_from_template ("src", templ);
gst_pad_set_getcaps_function (self->srcpad,
GST_DEBUG_FUNCPTR (gst_play_sink_video_convert_getcaps));
gst_element_add_pad (GST_ELEMENT_CAST (self), self->srcpad);
gst_object_unref (templ);
gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (self->srcpad),
self->sink_proxypad);
gst_play_sink_video_convert_add_conversion_elements (self);
gst_play_sink_convert_bin_cache_converter_caps (cbin);
}

View file

@ -18,6 +18,7 @@
*/
#include <gst/gst.h>
#include "gstplaysinkconvertbin.h"
#ifndef __GST_PLAY_SINK_VIDEO_CONVERT_H__
#define __GST_PLAY_SINK_VIDEO_CONVERT_H__
@ -35,47 +36,18 @@ G_BEGIN_DECLS
(G_TYPE_CHECK_INSTANCE_TYPE ((obj), GST_TYPE_PLAY_SINK_VIDEO_CONVERT))
#define GST_IS_PLAY_SINK_VIDEO_CONVERT_CLASS(klass) \
(G_TYPE_CHECK_CLASS_TYPE ((klass), GST_TYPE_PLAY_SINK_VIDEO_CONVERT))
#define GST_PLAY_SINK_VIDEO_CONVERT_LOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"locking from thread %p", \
g_thread_self ()); \
g_mutex_lock (GST_PLAY_SINK_VIDEO_CONVERT_CAST(obj)->lock); \
GST_LOG_OBJECT (obj, \
"locked from thread %p", \
g_thread_self ()); \
} G_STMT_END
#define GST_PLAY_SINK_VIDEO_CONVERT_UNLOCK(obj) G_STMT_START { \
GST_LOG_OBJECT (obj, \
"unlocking from thread %p", \
g_thread_self ()); \
g_mutex_unlock (GST_PLAY_SINK_VIDEO_CONVERT_CAST(obj)->lock); \
} G_STMT_END
typedef struct _GstPlaySinkVideoConvert GstPlaySinkVideoConvert;
typedef struct _GstPlaySinkVideoConvertClass GstPlaySinkVideoConvertClass;
struct _GstPlaySinkVideoConvert
{
GstBin parent;
GstPlaySinkConvertBin parent;
/* < private > */
GMutex *lock;
GstPad *sinkpad, *sink_proxypad;
gulong sink_proxypad_block_id;
GstSegment segment;
GstPad *srcpad;
gboolean raw;
GstElement *conv, *scale;
};
struct _GstPlaySinkVideoConvertClass
{
GstBinClass parent;
GstPlaySinkConvertBinClass parent;
};
GType gst_play_sink_video_convert_get_type (void);

View file

@ -333,26 +333,16 @@ _factory_filter (GstPluginFeature * feature, GstCaps ** subcaps)
templ_caps = _get_sub_caps (factory);
if (is_renderer && have_video_sink && templ_caps) {
GstCaps *tmp;
GST_DEBUG ("Found renderer element %s (%s) with caps %" GST_PTR_FORMAT,
gst_element_factory_get_longname (factory),
gst_plugin_feature_get_name (feature), templ_caps);
tmp = gst_caps_union (*subcaps, templ_caps);
gst_caps_unref (templ_caps);
gst_caps_replace (subcaps, tmp);
gst_caps_unref (tmp);
gst_caps_merge (*subcaps, templ_caps);
return TRUE;
} else if (!is_renderer && !have_video_sink && templ_caps) {
GstCaps *tmp;
GST_DEBUG ("Found parser element %s (%s) with caps %" GST_PTR_FORMAT,
gst_element_factory_get_longname (factory),
gst_plugin_feature_get_name (feature), templ_caps);
tmp = gst_caps_union (*subcaps, templ_caps);
gst_caps_unref (templ_caps);
gst_caps_replace (subcaps, tmp);
gst_caps_unref (tmp);
gst_caps_merge (*subcaps, templ_caps);
return TRUE;
} else {
if (templ_caps)