A few ideas looking for feedback
Original commit message from CVS: A few ideas looking for feedback
This commit is contained in:
parent 7524c49140
commit dc8008a9b9

2 changed files with 48 additions and 0 deletions
docs/random/thaytan/opengl (new file, 20 lines)

@@ -0,0 +1,20 @@
# OpenGL usage

How to set up OpenGL using plugins in such a way that they can
render to an on-screen window or offscreen equally?

eg:
superfoo3d ! glwindow
or
superfoo3d ! gloffscreen ! xvideowindow

This would imply that there is some mime type which sensibly connects a
glwindow/gloffscreen and a GL-using component. I don't see that there is
any actual data to send, however - the only purpose of the
glwindow/gloffscreen is to set up an OpenGL context and then 'output' it
somehow once the superfoo3d has finished drawing each frame.
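
As a sense of how little such a mime type would need to carry, the caps
might hold nothing but the context dimensions. A minimal sketch; the
application/x-gst-opengl-context media type is invented for illustration
(GStreamer defines no such type, only the calls are real):

  #include <gst/gst.h>

  /* Hypothetical caps for a pad that carries no buffers of its own, just
   * the dimensions of a shared OpenGL context.  The media type is made up
   * for illustration. */
  static GstCaps *
  make_gl_context_caps (void)
  {
    return gst_caps_new_simple ("application/x-gst-opengl-context",
        "width", G_TYPE_INT, 640,
        "height", G_TYPE_INT, 480,
        NULL);
  }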

In the case of glwindow, 'output' means translate into an on-screen window,
but for gloffscreen it means produce a video packet and go.

These components really need some other two-way communication, rather than
the pads metaphor, I think?
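
One possible shape for that communication, sketched with custom events
carrying application-defined structures out of band from buffer flow (the
calls are from the GStreamer 1.x API, long after this note; the structure
name and fields are invented):

  #include <gst/gst.h>

  /* Tell the downstream glwindow/gloffscreen that a frame has been drawn
   * into the shared context.  GST_EVENT_CUSTOM_UPSTREAM would carry
   * replies the other way. */
  static gboolean
  notify_frame_finished (GstElement *superfoo3d, guint64 frame_number)
  {
    GstStructure *s = gst_structure_new ("gl/frame-finished",
        "frame-number", G_TYPE_UINT64, frame_number, NULL);
    GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM, s);

    return gst_element_send_event (superfoo3d, event);
  }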
docs/random/thaytan/video-overlays (new file, 28 lines)

@@ -0,0 +1,28 @@
# Compositing and video overlays.

We want a way to composite a video frame and an overlay frame. The video
frame is as we expect; the overlay is a normal video frame plus an alpha
mask. In the following monologue, please consider that I mean
RGB/I420/YV12/YUY2 wherever I say RGB.

So we have a plugin with 2 sinks and one src:

+----------------+
|video           |
|           video|
|overlay         |
+----------------+
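
Wiring such a two-sink element up might look like this; the GStreamer calls
are real, but the videooverlay element is the hypothetical one proposed
further down:

  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    gst_init (&argc, &argv);

    GstElement *pipeline = gst_pipeline_new ("overlay-demo");
    GstElement *video = gst_element_factory_make ("videotestsrc", "video");
    GstElement *overlay = gst_element_factory_make ("videotestsrc", "overlay");
    /* "videooverlay" is the hypothetical element from this note */
    GstElement *mix = gst_element_factory_make ("videooverlay", "mix");
    GstElement *sink = gst_element_factory_make ("fakesink", "sink");

    gst_bin_add_many (GST_BIN (pipeline), video, overlay, mix, sink, NULL);

    /* Feed the two sink pads named in the diagram; take the single src. */
    gst_element_link_pads (video, "src", mix, "video");
    gst_element_link_pads (overlay, "src", mix, "overlay");
    gst_element_link (mix, sink);

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    return 0;
  }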

What should the format for the overlay data look like?
Ideas:
* RGBA - 4 bytes per pixel video frame
* RGB+A - 3 bytes per pixel video frame, then 1 byte per pixel overlay
  frame. A new mime type, or a variation on video/raw? I'm not sure.
* Overlay is actually 2 sinks: one takes RGB/YUV data, the other the alpha
  channel.

I'm not sure which approach is better, but I think it is probably neatest
to use RGB+A, and then have a separate plugin which has 2 sinks and
converts an RGB/YUV stream plus an alpha stream into an RGB+A stream. The
benefit of RGB+A over RGBA in this scenario is that it is easier (correct
me if I'm wrong) to optimise the 2 block copies which append an alpha frame
to an RGB frame than it is to do the many copies required to interleave
them into an RGBA stream.
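
To make the copy-count argument concrete, a rough C sketch, assuming a
planar RGB+A layout with the alpha plane appended after the RGB plane; the
function names are made up:

  #include <stdint.h>
  #include <string.h>

  /* Packing RGB+A is two block copies in total... */
  static void
  pack_rgb_plus_a (uint8_t *dest, const uint8_t *rgb, const uint8_t *alpha,
      size_t npixels)
  {
    memcpy (dest, rgb, npixels * 3);
    memcpy (dest + npixels * 3, alpha, npixels);
  }

  /* ...whereas interleaving into RGBA is a per-pixel loop. */
  static void
  pack_rgba (uint8_t *dest, const uint8_t *rgb, const uint8_t *alpha,
      size_t npixels)
  {
    for (size_t i = 0; i < npixels; i++) {
      dest[i * 4 + 0] = rgb[i * 3 + 0];
      dest[i * 4 + 1] = rgb[i * 3 + 1];
      dest[i * 4 + 2] = rgb[i * 3 + 2];
      dest[i * 4 + 3] = alpha[i];
    }
  }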

So, I see this producing a few new plugins:
videooverlay - takes an RGB and an RGB+A frame from 2 sinks, does the
overlay (according to some properties) and outputs a result frame in RGB or
RGB+A (as negotiated) format on 1 src.
rgb2rgba - takes 1 RGB frame and one A frame from 2 sinks and outputs RGB+A
on 1 src. If the A sink is not connected, we just add a fixed alpha channel
based on an object-property.
rgba2rgb - takes an RGB+A frame and discards the A component.
textoverlay - This plugin, instead of taking a video frame and overlaying
text, can just output an RGB+A stream with appropriate timestamping. This
prevents duplicating code to composite with an alpha mask and allows us to
optimise it in one spot only.
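
For completeness, a rough sketch of the per-pixel work videooverlay would
do, blending an RGB+A overlay onto an RGB frame using the planar layout
from above; the names are illustrative:

  #include <stddef.h>
  #include <stdint.h>

  /* Standard alpha blend per channel:
   * out = (a * overlay + (255 - a) * video) / 255 */
  static void
  blend_overlay (uint8_t *video, const uint8_t *overlay, size_t npixels)
  {
    const uint8_t *alpha = overlay + npixels * 3;  /* A plane follows RGB */

    for (size_t i = 0; i < npixels; i++) {
      unsigned a = alpha[i];
      for (int c = 0; c < 3; c++) {
        unsigned v = video[i * 3 + c];
        unsigned o = overlay[i * 3 + c];
        video[i * 3 + c] = (uint8_t) ((a * o + (255 - a) * v) / 255);
      }
    }
  }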