mirror of
https://gitlab.freedesktop.org/gstreamer/gstreamer.git
synced 2024-11-19 16:21:17 +00:00
lv2: planning update
This commit is contained in:
parent
13d963fbf0
commit
8330f7aa26
1 changed file with 27 additions and 0 deletions
@@ -35,9 +35,36 @@ gst-launch-1.0 calf-sourceforge-net-plugins-Organ event-in="C-3" name=s ! interl
TODO

* make filters gap-aware
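  Rough idea of what gap handling could look like in a GstBaseTransform-based
  filter; the function name and the no-tail assumption are illustrative, not
  the element's current code:

  /* Sketch only: let buffers flagged as gaps (silence) bypass the plugin,
   * assuming the filter has no tail to flush; names are illustrative. */
  #include <gst/gst.h>
  #include <gst/base/gstbasetransform.h>

  static GstFlowReturn
  gst_lv2_filter_transform_ip (GstBaseTransform * base, GstBuffer * buf)
  {
    if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_GAP))
      return GST_FLOW_OK;     /* pass the gap through untouched */

    /* ... otherwise map the buffer and run the lv2 instance as usual ... */
    return GST_FLOW_OK;
  }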
* report latency
lilv_plugin_has_latency(), lilv_plugin_get_latency_port_index()
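  Rough sketch of reading the latency port via lilv and converting it to
  GstClockTime; self->plugin, self->port_values and self->rate are assumed
  fields, not the element's actual members:

  /* Sketch only: read the plugin-reported latency (a control output, in
   * samples, valid after lilv_instance_run()) and turn it into time. */
  if (lilv_plugin_has_latency (self->plugin)) {
    uint32_t idx = lilv_plugin_get_latency_port_index (self->plugin);
    gfloat samples = self->port_values[idx];
    GstClockTime latency =
        gst_util_uint64_scale ((guint64) samples, GST_SECOND, self->rate);
    /* use this value when answering GST_QUERY_LATENCY on the src pad */
  }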
* support more host features
GST_DEBUG="lv2:4" GST_DEBUG_FILE=/tmp/gst.log gst-inspect lv2
grep -o "needs host feature: .*$" /tmp/gst.log | sort | uniq -c | sort -n
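  For reference, features are handed over at instantiation time; a minimal
  sketch, assuming a urid:map implementation called map_impl exists (it is not
  part of the element today, urid:map is just an example feature):

  /* Sketch only: passing extra host features to lilv_plugin_instantiate(). */
  #include <lilv/lilv.h>
  #include <lv2/lv2plug.in/ns/ext/urid/urid.h>

  extern LV2_URID_Map map_impl;   /* assumed to be implemented elsewhere */

  static const LV2_Feature urid_map_feature = { LV2_URID__map, &map_impl };
  static const LV2_Feature *const features[] = { &urid_map_feature, NULL };

  /* ... */
  instance = lilv_plugin_instantiate (plugin, sample_rate, features);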
* make source elements useful
Most source elements use an Atom port where they accept midi events. Since it
is full-fledged midi, we should only ever use channel-0 and instead use
multiple instances.
1) We could expose those as "audio/x-midi-event" sink pads, but then those are
not sources anymore.
2) We could expose those as properties using a new type like GST_FOURCC. If we
ignore "System Exclusive" messages, all other midi messages are 1-3 bytes.
We stuff the bytes right to left, e.g. the lowest byte is the status byte
(see the packing sketch after the list).
3) We could use the GstBtNote enum + the toneconversion classes
https://www.midi.org/specifications/item/table-1-summary-of-midi-message
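Small sketch of the right-to-left packing from option 2; pack_midi_event is an
illustrative helper, not existing API:

  /* Sketch only: stuff a (max. 3 byte) midi message right to left into a
   * guint32, status byte in the lowest byte; the helper name is made up. */
  #include <glib.h>

  static guint32
  pack_midi_event (guint8 status, guint8 data1, guint8 data2)
  {
    return ((guint32) data2 << 16) | ((guint32) data1 << 8) | (guint32) status;
  }

  /* e.g. note-on on channel 0, middle C, velocity 100:
   *   pack_midi_event (0x90, 60, 100) == 0x00643C90 */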
Open questions:
- with one property, we can't handle polyphony
- we can't query lv2 plugins for their polyphony :/

If lv2 gets a polyphony extension it would be a single non-negative integer:
0: unbound, 1: monophonic, > 1: N-polyphonic
I've checked a few synths and they all have an upper limit. Still, the way
voices work in trackers is not the same as they work in synths - e.g. in a
tracker one can play a note twice.

Or we just play them monophonically and try to implement a polychildbin. The
bin takes a GstElement to be used as a voice. For each new voice it would
create a new instance and use a built-in mixer. Downside is that all
properties would be per voice.
* example sources:
http://svn.drobilla.net/lad/trunk/lilv/utils/lv2info.c