Instead of only supporting writing SPU data directly to YUV frames,
render the SPU data into an intermediate AYUV overlay buffer. The overlay
data is then attached to the video frame if downstream supports overlay
composition; otherwise, the AYUV overlay is blended onto the video frame.
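
A minimal sketch of that attach-or-blend decision, assuming the GStreamer
1.x GstVideoOverlayComposition API; the function name and the
attach_compo_to_buffer flag are illustrative, and in practice the flag
would be derived from caps/allocation negotiation with downstream.

    #include <gst/video/video.h>

    static GstFlowReturn
    push_frame_with_overlay (GstPad * srcpad, GstBuffer * frame,
        GstVideoOverlayRectangle * rect, GstVideoInfo * vinfo,
        gboolean attach_compo_to_buffer)
    {
      GstVideoOverlayComposition *compo;

      compo = gst_video_overlay_composition_new (rect);

      if (attach_compo_to_buffer) {
        /* Downstream can render the overlay itself: attach the composition
         * as meta and leave the video pixels untouched. */
        gst_buffer_add_video_overlay_composition_meta (frame, compo);
      } else {
        /* No downstream support: blend the AYUV overlay into the frame. */
        GstVideoFrame vframe;

        frame = gst_buffer_make_writable (frame);
        if (gst_video_frame_map (&vframe, vinfo, frame, GST_MAP_READWRITE)) {
          gst_video_overlay_composition_blend (compo, &vframe);
          gst_video_frame_unmap (&vframe);
        }
      }

      gst_video_overlay_composition_unref (compo);

      return gst_pad_push (srcpad, frame);
    }
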
For the PGS format, the overlay buffer size is set to the size of the
Composition Window, and its position in the overlay composition is set
to the window position. The objects to render are now cropped when the
cropping flag is set.
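
A hedged sketch of that cropping step, assuming the object has already been
expanded to AYUV via the palette; every name here is illustrative and the
layout is simplified (positions are in composition coordinates, the overlay
buffer matches the Composition Window, AYUV is 4 bytes per pixel).

    #include <string.h>
    #include <glib.h>

    static void
    copy_object_to_window (guint8 * overlay, guint overlay_stride,
        guint win_x, guint win_y,
        const guint8 * obj, guint obj_stride, guint obj_w, guint obj_h,
        guint obj_x, guint obj_y, gboolean cropped,
        guint crop_x, guint crop_y, guint crop_w, guint crop_h)
    {
      guint src_x = 0, src_y = 0, w = obj_w, h = obj_h, y;

      if (cropped) {
        /* Only the cropping rectangle of the object is rendered. */
        src_x = crop_x;
        src_y = crop_y;
        w = crop_w;
        h = crop_h;
      }

      for (y = 0; y < h; y++) {
        /* Make the object position relative to the window, which is the
         * origin of the AYUV overlay buffer. */
        guint8 *dst = overlay + (obj_y - win_y + y) * overlay_stride
            + (obj_x - win_x) * 4;
        const guint8 *src = obj + (src_y + y) * obj_stride + src_x * 4;

        memcpy (dst, src, w * 4);
      }
    }
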
For the Vobsub format, the overlay buffer size is set to the size of the
Display Area.
Once rendered, the overlay composition rectangle is moved and scaled
to fit the video output size, so that it is not clipped.
https://bugzilla.gnome.org/show_bug.cgi?id=663750
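
A hedged sketch of producing such a scaled overlay rectangle with the 1.x
API; spu_w/spu_h stand for the subpicture coordinate space (the PGS
composition size or the Vobsub Display Area) and all names are
illustrative.

    #include <gst/video/video.h>

    static GstVideoOverlayRectangle *
    make_scaled_rectangle (GstBuffer * ayuv_pixels,
        gint overlay_x, gint overlay_y, guint overlay_w, guint overlay_h,
        guint spu_w, guint spu_h, guint video_w, guint video_h)
    {
      /* Map the overlay position and size from subpicture coordinates to
       * output video coordinates so the rectangle fits the frame. */
      gint render_x = (gint) gst_util_uint64_scale_int (overlay_x, video_w, spu_w);
      gint render_y = (gint) gst_util_uint64_scale_int (overlay_y, video_h, spu_h);
      guint render_w = (guint) gst_util_uint64_scale_int (overlay_w, video_w, spu_w);
      guint render_h = (guint) gst_util_uint64_scale_int (overlay_h, video_h, spu_h);

      /* The pixel buffer needs a GstVideoMeta describing its AYUV layout
       * before gst_video_overlay_rectangle_new_raw() will accept it. */
      gst_buffer_add_video_meta (ayuv_pixels, GST_VIDEO_FRAME_FLAG_NONE,
          GST_VIDEO_OVERLAY_COMPOSITION_FORMAT_YUV, overlay_w, overlay_h);

      return gst_video_overlay_rectangle_new_raw (ayuv_pixels,
          render_x, render_y, render_w, render_h,
          GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
    }
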
Refactor the DVD subpicture compositing, switching it to 8-bit alpha
calculations. Reuse some of the resulting code to implement PGS
subpicture blending.
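
For reference, a minimal sketch of the 8-bit alpha arithmetic, applied per
Y/U/V sample; the rounding used here is one common choice and not
necessarily the exact formula in the element.

    #include <glib.h>

    /* Blend one 8-bit overlay sample (src) onto one 8-bit video sample
     * (dst) using the overlay pixel's 8-bit alpha. */
    static inline guint8
    blend_sample (guint8 dst, guint8 src, guint alpha)
    {
      return (guint8) ((src * alpha + dst * (255 - alpha) + 127) / 255);
    }
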
Implement parsing and collecting of composition objects properly, but
assume a single active window and colour palette for now. I need more
PGS samples.
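
A hedged sketch of the per-composition state that the single-window,
single-palette assumption implies; field names are illustrative and
loosely follow the PGS presentation composition segment layout.

    #include <glib.h>

    typedef struct
    {
      guint16 object_id;
      guint8 window_id;
      gboolean cropped;           /* object_cropped_flag */
      guint16 x, y;               /* position in the composition */
      guint16 crop_x, crop_y;     /* only valid when cropped is set */
      guint16 crop_w, crop_h;
    } PgsCompositionObject;

    typedef struct
    {
      guint8 window_id;
      guint16 x, y;
      guint16 width, height;      /* also the AYUV overlay buffer size */
    } PgsWindow;

    typedef struct
    {
      guint32 palette[256];       /* single colour palette, AYUV entries */
      PgsWindow window;           /* single active window for now */
      GArray *objects;            /* of PgsCompositionObject */
    } PgsPresentationState;
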
Add setcaps logic on the subpicture sink pad to configure the element
for whichever subpicture format is arriving.
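
A minimal sketch of that setcaps check, keyed on the subpicture/x-dvd and
subpicture/x-pgs media types; the SpuInputType enum and the surrounding
structure are illustrative stand-ins for the element's own state.

    #include <gst/gst.h>

    typedef enum
    {
      SPU_INPUT_TYPE_NONE,
      SPU_INPUT_TYPE_VOBSUB,
      SPU_INPUT_TYPE_PGS
    } SpuInputType;

    typedef struct
    {
      SpuInputType spu_input_type;
    } SpuState;

    static gboolean
    subpic_sink_setcaps (SpuState * state, GstCaps * caps)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);

      if (gst_structure_has_name (s, "subpicture/x-dvd"))
        state->spu_input_type = SPU_INPUT_TYPE_VOBSUB;
      else if (gst_structure_has_name (s, "subpicture/x-pgs"))
        state->spu_input_type = SPU_INPUT_TYPE_PGS;
      else
        return FALSE;

      return TRUE;
    }
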
Add the first piece of PGS subpicture handling by dumping the stream
contents out to the terminal as the packets arrive.
Add some more debug output.
Don't calculate the running time for our subpicture packets twice;
once is enough.