At the 2013 NAMM Show, MIDI creators and innovators Alan Parsons, Tom Oberheim, Dave Smith, Jordan Rudess, George Duke and Craig Anderton participated in an uber-panel, discussing the past, present and future of MIDI.
The discussion was moderated by MIDI Manufacturers Association CEO/President Tom White.
So glad to have been there; great discussion. The guy in the red at the end is the Ned Flanders of MIDI.
If I watch this video, will I have to see Rudess’s goatee?
Yes, and also Alan Parsons’s lived-in face and Anderton’s reddish complexion. It’s a video with real people, not a soap opera with painted puppets.
The difference is that a goatee is strictly optional. His looks like it should be used to scour pans, or like he’s halfway through swallowing a whole squirrel.
It was irritating that only a couple of people on the panel could even see the gap between what you can do on a touch surface and what you can correctly represent as MIDI messages (without having to be a rocket scientist to get a good-enough approximation running over MIDI, and with some chance that the receiving synth interprets it correctly, when almost nothing implements more than the 0x90 message right – a few lines of the entire MIDI spec). Because the future bend history of any note is unknown, every voice needs its own channel – so in these scenarios there are only 16 voices total – and outside the specifically limited keyboard-controller case, you can’t put more than one note into a channel because of this.
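To make the channel-per-voice problem concrete, here is a minimal sketch in Python that just builds raw MIDI byte tuples (the function names are mine, not from any library):

```python
# Sketch of the channel-per-voice workaround described above.
# These helpers are hypothetical; they just assemble raw MIDI bytes.

def note_on(channel, note, velocity=100):
    """0x90 note-on: the one message almost everything implements."""
    return (0x90 | channel, note, velocity)

def pitch_bend(channel, value):
    """0xE0 pitch bend; value is 0..16383, centre = 8192.
    It applies to the WHOLE channel, which is the core problem:
    two notes on one channel cannot bend in different directions."""
    return (0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F)

# Two voices starting on the same note that will bend apart later:
# each must get its own channel, capping the whole instrument at 16 voices.
voice_a = [note_on(0, 60), pitch_bend(0, 8192 + 1000)]
voice_b = [note_on(1, 60), pitch_bend(1, 8192 - 1000)]
```

Since the bend status byte carries the channel in its low nibble, there is simply nowhere in the 0xE0 message to say *which* note it applies to.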
The MIDI Manufacturers Association seems to exist primarily to enforce the status quo. The big manufacturers love this, because they can implement MIDI with cheap generic parts.
If developers like you are hitting a wall with MIDI, it’s going to come down to whether you can get a group of iPad developers to agree on something new and make it work.
Don’t wait around for Yamaha, Roland and Korg to do what you want or you’ll be waiting another 30 years.
Eventually it will go down the way Audiobus did (and Virtual MIDI before it), but for control. That thing only came out two months ago, so the adoption rate is something crazy when you consider that it takes weeks (or months if you are not a full-time dev and had already moved on to other things) to change and test releases, and at least another week on top of that to get approved for the store. I think their main problem in seeing what’s going on is that they assume there has to be some hardware controller that doesn’t exist yet getting suddenly manufactured to push people over the edge.
It just doesn’t work like that at all when both the controller and the synth are co-evolving as software. There is no manufacturing process to hold up and slow down the creation of a protocol that consistently supports new capabilities. If a binary spec for Audiobus is published and can be sent over UDP, or a new control protocol is done that way… there is more chance that the hardware vendors will be forced to implement it than there is of a deluge of new MIDI HD hardware coming out with the software synths following. Audiobus didn’t even publish a spec yet… they just made a relatively simple library and let programmer laziness take over.
Don’t get me wrong… I have played ball with MIDI so far. The problem is that beyond the 0x90 message, a random sampling of all the synth hardware and apps I own turns up only a vanishingly small number of synths that can deal with the kind of pitch bending that comes up all over the place on touch screens (and especially on string controllers). Basic stuff like multiple voices at the same note that may bend off in their own directions (ie: a 12-string guitar), or total fretlessness (violin, voice)… It’s such a hack that it never comes out 100% correct in MIDI.
They did have an invite-only NDA demo of what MIDI HD would do. I can only hope they are stalling for time. But really… when a new protocol comes out, nothing implements it at first… yet just about anything that fixes MIDI’s problems (a layer over OSC? minor violations of standard MIDI?) could take hold overnight in the tablet world. We saw this phenomenon happen twice, first with Virtual MIDI, then with Audiobus (which actually ran off with the entire app store… if you are not an in/fx/out, then your app no longer exists… I am aware of this!)
Interesting and thoughtful comments.
There was something very emotionally moving about seeing this talk.
Watching those cats talk about MIDI had me sitting through it with a smile pretty much the whole way.
Thanks for sharing.
It goes to show how revolutionary MIDI was, but it’s sad that it’s taken a small young company like Elektron to increase the bandwidth (Turbo MIDI) for greater clock timing and accuracy, something that has been ignored by the other manufacturers.
The limited 127-step (7-bit) resolution is certainly in the limelight now, with touch screens becoming the norm, which calls into question the motivation of a conference such as this.
Forgive me if this seems obvious, but I think one of the biggest problems with current MIDI as it pertains to new forms of expressive controllers (polyphonic pitch bending and/or continuous pitch) could be addressed with a slight modification to the note message and the way it works. If each note had an MSB and an LSB for its note number, then you could have a coarse note value that follows standard keyboard tunings (which would be read by all older gear) and a fine value for bends and glides. As long as there’s no problem with having a dynamic note number, this might be a way to give new expression without having to reinvent the core of how notes have worked in the past.
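The coarse+fine idea could be sketched like this (a hypothetical encoding, not part of any spec: a standard 7-bit note number that old gear would read, plus a 14-bit fraction of a semitone for bends and glides):

```python
# Hypothetical coarse+fine note-number encoding, as floated above.
# Old gear would read only the coarse byte; new gear adds the fine part.

def encode_note(pitch):
    """pitch is a fractional MIDI note number, e.g. 60.5 = a quarter-tone
    above middle C. Returns (coarse, fine_msb, fine_lsb)."""
    coarse = int(pitch)                     # standard 0..127 note number
    fine = round((pitch - coarse) * 16383)  # 14-bit fraction of a semitone
    return (coarse, (fine >> 7) & 0x7F, fine & 0x7F)

def decode_note(coarse, fine_msb, fine_lsb):
    """Reassemble the fractional note number on the receiving side."""
    fine = (fine_msb << 7) | fine_lsb
    return coarse + fine / 16383
```

14 bits across one semitone gives roughly 0.006-cent steps, so resolution would not be the issue; the open question is whether receivers could cope with the note number changing while the note is held.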
Well, you can sort of approximate it now. But as you get closer and closer to just getting pitch handling right (never mind getting per-channel controllers right when the instrument has to hop across all 16 channels every time you play a note!), it gets so complicated that the probability that more than one synth interprets the messages correctly drops very quickly. The only reason piano controllers over MIDI work is that for that use case it is *simple*; it’s only simple for that case, and almost nobody botches the 0x90 message.
But when you need to bend each voice independently, between channel hopping and non-standard bend sizes it becomes very complicated very quickly – the exact opposite of the simplicity that made MIDI succeed for piano controllers (and that simplicity isn’t likely to extend beyond that case any time soon). A lot of synths don’t respond to MIDI bend-width change messages. Many don’t allow bend widths to be changed at all, on the assumption that you must be plugging in a keyboard. If you have to span the whole instrument across 16 channels, then you have to duplicate all settings across all 16 channels. That means duplicating controller messages to every channel.

There is also the issue of what note ‘off’ means. Note ‘off’ doesn’t mean that the sound completely stops; it means the patch goes into its release phase, and it might be audible for a second afterwards (ie: a hang drum). So you can’t touch the pitch wheel until the release finishes. That means not only does every note-on need its own channel, but you can’t grab a channel to set the note+bend for the next note until the current note is done releasing. So there is a speed limit at which you can play (you can easily hit it with long-release patches) before you get weird pitch anomalies when a releasing note in a channel gets stolen.
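The release-tail problem above can be sketched as a channel allocator that refuses to hand out a channel until its previous note’s release has cleared. This is a toy model under my own assumptions (a fixed, guessed release time; real patches don’t report one, which is exactly the difficulty):

```python
# Toy model of the allocation problem: a channel is unsafe to reuse
# for a new note+bend until the previous note's release tail is done,
# or the tail gets re-pitched. All names here are hypothetical.

class ChannelAllocator:
    def __init__(self, channels=16, release_seconds=1.0):
        self.free_at = [0.0] * channels   # time each channel is safe again
        self.release = release_seconds    # guessed patch release time

    def note_on(self, now):
        """Return a safe channel, or None if all are still releasing.
        None is the 'speed limit': play faster than the tails clear
        and you must either wait or steal a channel (pitch anomaly)."""
        for ch, t in enumerate(self.free_at):
            if now >= t:
                return ch
        return None

    def note_off(self, channel, now):
        # The note is not silent yet; it only entered its release phase.
        self.free_at[channel] = now + self.release
```

With a 1-second release and only 16 channels, anything denser than 16 notes per second forces channel stealing, and a long release tail suddenly jumps to the next note’s pitch.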
MIDI HD’s current proposal, last I heard, is to apply a bend message to each note that is down independently (bend per note rather than per channel). A “backwards compatible” way is to simply make people support really huge bend widths, like +/-127 semitones (ie: at 100-cent resolution, notes 0 to 127 span 12700 discrete cent values). If you send just one MIDI note (the middle note – note 64 for instance) and use the bend alone to set the frequency, you pass the 1-cent threshold (enough for tuning and manual chorusing of voices). But when you plug into an older synth that can’t deal with such a wide bend, you get very wrong notes (a bunch of note 64 on a piano patch, perhaps?). So it’s not backwards compatible even in the sense of at least getting the correct pitch.
If you try to be backwards compatible at least to the same semitone – by starting from the base un-bent note and then bending to where you want to go – you get half the resolution: not quite down to 1 cent, because it’s +/-127 semitones now (not just +127 from 0). So it’s arguable whether you have enough bits in the bend for most purposes. But when you add together the channel hopping, the limited number of channels, and duplicating control changes among all the channels (to make them act like one big 16-channel instrument)… trust me… not only do most synths not get it right, but the more expensive ones don’t fare any better. In my experience the cheaper synths actually come out better on this score, because they are done in software, and stuff gets fixed some time this decade.
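For what it’s worth, the resolution arithmetic works out as claimed; a quick back-of-envelope check (the constants come from the two schemes described above, nothing else assumed):

```python
# Back-of-envelope check of the bend-resolution claims above.
BEND_STEPS = 16384                 # 14-bit pitch bend value range

# Scheme A: cover notes 0..127 with bend alone (+127 semitones up).
span_a = 127 * 100                 # 12700 cents
res_a = span_a / BEND_STEPS        # ~0.78 cents/step: under 1 cent

# Scheme B: start from the nearest semitone, allow +/-127 semitones.
span_b = 254 * 100                 # 25400 cents
res_b = span_b / BEND_STEPS        # ~1.55 cents/step: half the resolution
```

So scheme A squeaks under the 1-cent mark and scheme B lands just over it, which is exactly the “half the resolution, not quite up to 1 cent” trade-off.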
I have written (and continually rewritten) code for these scenarios many times. This is a ‘first day’ problem when trying to make a string instrument, though you can go your whole life without seeing the problem if you only play piano controllers.
Oh, the other thing about OSC versus MIDI… Typical OSC setups like SuperCollider/Pd have you writing the actual *patches* (ie: on your desktop, or authored on your tablet)… even writing your own reverb/chorus effects units… and installing them into the synth over OSC. In other words, instead of just agreeing on a protocol with semantics, an OSC synth (SuperCollider/Pd/ChucK, etc.) generally throws its hands up and says “I have no idea what to do with this messaging… just give me a script that will unpack the messages and tinker with the innards of the synth directly”. There is some merit to this approach.
The main thing is that any protocol needs a negotiation phase and an escape hatch to get out of backwards compatibility messes that nobody can foresee. MIDI isn’t the only protocol that has been around for a long time. SSL networking protocol is a good example of how to design a protocol to flex in the face of changes.
Trying not to be a troll about it… but the objections to anything being wrong with MIDI are quite wrong… technically. In any case, it won’t take long for the tablet world to cause some real upheaval. A rival to MIDI coming from iOS and spreading out is probably just as viable as a blessed standard at this point.
Substitute the words “Commodore 64” everywhere you hear “MIDI” in here, and think about how ridiculous it is that the answer to new controllers is to avoid them. You think your Commodore 64 is great because you have never seen a 4 GHz Intel processor or an ARM processor, and never envisioned what you could do with a 64-bit OS, etc. Something will definitely break because of touch screens, because the hardware isn’t particularly relevant when you can prototype a new protocol at massive scale in real instruments before going back to the hardware guys a few years later and saying “implement this!”
Whether it’s MIDI HD, OSC or something else, it will start as a software implementation and only be adopted by the big hardware makers when they see the value.