Spencer Salazar, a doctoral student at Stanford CCRMA, has introduced Auraglyph, a system for audio programming, composition, and design on the iPad that lets you draw modular synth patches using either a stylus or multi-touch input.
Users draw a variety of audio and control nodes and the interconnections between them. Parameters for these nodes can be set using modal handwritten input, from simple numerals to time- and frequency-domain signals.
Machine learning-based handwriting recognition is used to analyze the user’s stylus strokes, affording a rich vocabulary of symbolic input. Additional nodes are available for creating conventional input/output interfaces, such as on-screen knobs and sliders and MIDI I/O.
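To make the node-and-connection model concrete, here is a minimal sketch of that idea in Python. The class names, parameters, and structure are purely illustrative assumptions, not Auraglyph's actual API: drawn nodes become objects, drawn links become connections, and handwritten numerals become parameter values.

```python
import math

# Illustrative sketch only -- Node, Sine, and Gain are hypothetical names,
# not part of Auraglyph. Drawing a node ~ constructing an object; drawing
# a link between nodes ~ connect(); handwritten numerals ~ parameter values.

class Node:
    def __init__(self, **params):
        self.params = params   # values a user might enter by handwriting
        self.inputs = []       # upstream nodes wired into this one

    def connect(self, downstream):
        """Wire this node's output into a downstream node's input."""
        downstream.inputs.append(self)
        return downstream

class Sine(Node):
    def render(self, t):
        # Evaluate the oscillator at time t (seconds).
        return math.sin(2 * math.pi * self.params["freq"] * t)

class Gain(Node):
    def render(self, t):
        # Sum all inputs, then scale by the gain level.
        return self.params["level"] * sum(n.render(t) for n in self.inputs)

# Two nodes and one connection: a 440 Hz oscillator into an output gain.
osc = Sine(freq=440.0)
out = Gain(level=0.5)
osc.connect(out)

print(out.render(0.0))  # oscillator output at t = 0
```

A real engine would render buffers of samples per callback rather than single values, but the graph-of-nodes structure is the same.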
Salazar says that Auraglyph is ‘coming soon’. Details are available at the Stanford site (pdf).
no colors?
no crazy sequencer?
Pen-based composing reminds me of writing tunes on a PMA-5. What goes around comes around.