Reader Geert Bevin has been testing DAWs to learn more about their MIDI performance. Based on his testing, some DAWs are more prone to MIDI ‘jitter’ than others:
Musicians using software instruments might be delivering sub-par performances and not even know why. Many popular plugin hosts introduce MIDI jitter at the final step, causing the timing to be subtly off during live performance. Read why this is happening and which hosts to avoid in this article.
In Bevin’s testing, these apps had noticeable MIDI buffer jitter:
- AULab 2.2.1 (Apple)
- Digital Performer 8.01 (MOTU)
- DSP-Quattro 4.2.2
- Jambalaya 1.3.4 (Freak Show Enterprises)
- Logic Pro 9.1.8 (Apple)
- MainStage 2.1.3 (Apple)
- Max/MSP 6.0.8 (Cycling 74)
- Rax 3.0.0 (Audiofile Engineering)
- Studio One 2.0.7 (PreSonus)
- VSTLord 0.3
These did not:
- AudioMulch 2.2.2
- Bidule 0.9727 (Plogue)
- Cubase 6.5.4 (Steinberg)
- EigenD 2.0.72 (Eigenlabs)
- Live 8.3.4 (Ableton)
- Maschine 1.8.1 (Native Instruments)
- Reaper 4.3.1 (Cockos)
- Vienna Ensemble Pro 5.1 (VSL)
Bevin notes that, “Many of today’s top choices for live plugin hosting introduce a non-neglectable amount of MIDI jitter that is bound to degrade your musical performance.”
Bevin’s findings are certain to raise some controversy – so see his post for the details of his process and findings.
Good ol’ Reaper!
This is bound to start another DAW pissing contest. In any case, I read the linked article and found the research seems very solid and not biased. I would hope the manufacturers of the various DAWs that are introducing “buffer quantization” – that is, quantizing incoming MIDI notes along audio buffer boundaries (~11.6ms for 512 frames at 44.1kHz) – take this data and realize they need to start timestamping incoming MIDI notes and aligning them properly.
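For readers who want to see what buffer quantization does concretely, here’s a minimal Python sketch of the effect described above (an illustration of the concept only, not any host’s actual code):

```python
import math

SAMPLE_RATE = 44100
BUFFER_FRAMES = 512  # one audio buffer is ~11.6 ms at 44.1 kHz

def quantize_to_buffer(event_time_s):
    """Delay an event to the next buffer boundary, which is effectively
    what a buffer-quantizing host does with live MIDI input."""
    buffer_s = BUFFER_FRAMES / SAMPLE_RATE
    return math.ceil(event_time_s / buffer_s) * buffer_s

# A note arriving 1 ms into a buffer is delayed ~10.6 ms; one arriving
# 11 ms in is delayed only ~0.6 ms -- up to a full buffer of jitter.
for t in (0.001, 0.011):
    print(f"played at {t * 1000:4.1f} ms, heard at {quantize_to_buffer(t) * 1000:.1f} ms")
```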
It’s hard enough to perform on a soft-synth as is, without unpredictable latency.
I think it is funny this article is complaining about how these slight variances in timing create a “sub-par” performance. In fact, I purposely build timing variance into my performances and instruments because it makes them more human.
That’s fine and dandy, but if you want rigid robot timing for electronic music, it IS sub-par. I love some imperfect groove in a lot of my music as well, but you can simply turn off quantization and do it in the DAWs that don’t have MIDI jitter (I use Reaper, for example).
I accept a bit of imprecision in all of this as part of the freight, same as I’ve made do with pianos that weren’t in great shape. I sometimes have to slide a segment around to deal with a moment of latency, but that’s part of the sculpting process. Sometimes that even leads to an improvement I might have missed otherwise. How you cope with a lack or small blemish is part of what defines your musicianship.
Sure, I’d prefer that the issue didn’t exist, but you know what? Between my respectable tools and the work I invest, I can stand by my results. I’d also suggest that you all remember our status as insiders. Our listeners rarely know jack about music gear to begin with, or at least not at this level. They simply respond to your RESULTS.
I appreciate Geert’s work; it’s a useful point to consider. I’m just too knocked out to have a pipe organ Combi that sounds like the real one I once played to sweat the small stuff.
But if you want to introduce a per-note variation of, say, x milliseconds, and you have jitter of y milliseconds on average, then what you get at the end is something in the range of x-y to x+y milliseconds. It’s all about getting what you intended through the whole system from one end (the instrument you’re playing) to the other (your ear).
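To put rough numbers on that (the example values are mine): with an intended 10 ms push and 3 ms of jitter, each note actually lands somewhere between 7 and 13 ms:

```python
import random

intended_ms = 10.0  # per-note variation you deliberately play (x)
jitter_ms = 3.0     # host jitter (y), modeled as uniform for simplicity

# What comes out the other end lands anywhere in [x - y, x + y]:
heard = [intended_ms + random.uniform(-jitter_ms, jitter_ms) for _ in range(5)]
print(" ".join(f"{ms:.1f}ms" for ms in heard))  # e.g. 7.4ms 12.1ms 9.8ms ...
```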
I’m a little sceptical about this “study,” though, because it is not clear how each DAW was set up. There’s one revealing comment on the page about overdrive mode in Max. As it stands, I am inclined to ignore the results on the grounds that there are far, far too many unknowns. I’d repeat some of the measurements but I don’t have Cubase. Perhaps someone who does can fill in the gaps and provide some solid science (for want of a less grandiose but equally appropriate term).
Please ask which unknowns you want more information about. I tried to be very complete in this article and detailed the process used. You can also download the project files for each DAW and host I tested. The fact that I used Cubase to generate the MIDI data and record the audio results is beside the point; it could have been any sequencer, as long as it’s used as the comparison point for each measurement.
However, if there’s something that’s not correct about my findings, then I’m more than interested to learn about it, since I had a hard time believing what I discovered. I had the article proofread by quite a few people, including Roger Linn, and nobody could discover inconsistencies. That doesn’t mean there aren’t any, though.
I’m beginning to read this test after spending more than a whole week reading about the MIDI timing/jitter subject. What interface did you use? Apple supports timestamping, and the only devices I’ve come across that do are the Edirol UM-550/880, or maybe the Avid MIDI interface. I did have a MOTU, but, ehm, “read the web”… I was so frustrated I was considering throwing the whole Mac Pro 2009 with the MOTU-linked MTP AVs through the window. Instead I took a walk to McDonald’s??? That’s what I do when I have suicidal thoughts. I did not test the MOTU with DP. I will read on, so maybe I will find the answer myself.
Very good work!! Thanks a lot.
JDH
The key thing is that YOU should be in control of the timing, not your DAW!
But sir, I AM in control. It’s called “fingers on the keys.” 😛 I respectfully maintain that if you’re listening for jitter, you’re defocusing yourself somewhat, like someone who claims to hear a 2ms delay. I’m glad I DON’T need or want robotic timing as a regular thing. I’ll allow for the reality and the technical need for some people; making workable choices is what sincere synthesis is all about. For me, though… I found a nice freebie modular loop I most likely could not have created from scratch. It inspired me enough to become the foundation of the beat, which I then built into a fairly meaty final song. There’s more than one way to devise perceivable precision, because it comes in several flavors. I’m one of those poor souls who LIKES to hear odd fret noise or an avant-garde-ish clam-ette. It adds a vital human touch. The issue is real enough, but it doesn’t seem like some huge deal-breaker. As the old stage saying goes, “No one ever leaves the theater whistling the costumes.” Don’t let excessive attention to the occasional loose whisker interfere with your STYLE.
Do try playing Beethoven’s Moonlight Sonata through GarageBand or something like that.
That’s not really the idea.
Oh, THAT’s why the Logic team got fired!
It was a joke. Butthurt much, Logic users?
No, but it sounds like you are.
“No, you!”
Who said you could leave the kitchen?!
i love you both.
Get a room please.
No butthurt here. My version of Logic is blissfully consistent and rewarding. I trust Apple as much as one CAN trust a giant group of mostly MEN, who inevitably find some moronic reason to joust with their dicks. Logic will stick around because GarageBand is a constant pillar of the base OS package, and the upgrade path to Logic consists of mature code that only requires OS version updating or iPad porting, not a rebuild from the ground up. That keeps it cost-effective, I’d think. The only butthurt here comes from my yellow Lab pulling me off my feet such that I landed left-cheek-first on a brick. It’s bruised so blue, it looks like Smurf-ass. Ouch.
i’m never comfortable expecting MIDI timing to be 100% in any platform, which is why i prefer working with audio in my productions. not always possible for live performance with instruments interacting via MIDI, though.
But I expect that you record audio based on MIDI events …
not always, and even when i do, you can always adjust the timing of the recorded audio.
Frank Zappa once said that his band had no problem playing live along with a Synclavier: “They latched right onto it.” I find that statement hard to refute, relative to the debate. I don’t know about you, but I sometimes loop a section obsessively because the FEEL seems off, and 99% of the time, it’s because I need to replay a spot, use a different sound for it (or notch-EQ it), or remove a track entirely because in my zeal, I overloaded the spectrum. One of those always works. Me am mighty hunter, know how to skin and cook tiger.
No evaluation of Cakewalk Sonar?
TL;DR version for you – tested on a Mac.
It’s funny when people start to worry about a few ms of delay: a 10 foot guitar lead introduces 10ms of delay, a real piano has several ms of delay, Hammond organs could have as much as 20ms of delay… and yet people made/make amazing albums. Stop worrying about the tech and make some music; learn to embrace your digital wow and flutter!
Latency and jitter are totally different things.
The article is about random, different-length latency (delay) between notes – how is this different from jitter?
The examples you mention (10 foot guitar lead, real piano, Hammond organs, …) are all examples of latency, not of jitter.
Sad
Latency is when you press the key and the sound comes out later than you expect it. It sounds like somebody playing behind the beat.
Jitter is when the sound is delayed some of the time, in an unpredictable way. It sounds like sloppy playing.
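The two are easy to see side by side in a quick sketch (the offset sizes are invented for illustration):

```python
import random

beats = [0.0, 0.5, 1.0, 1.5, 2.0]  # intended note times, in seconds

# Latency: one constant offset -- the groove survives, it's just late.
with_latency = [t + 0.010 for t in beats]

# Jitter: a different random offset per note -- the groove itself suffers.
with_jitter = [t + random.uniform(0.0, 0.010) for t in beats]

print("latency:", ["%.3f" % t for t in with_latency])
print("jitter: ", ["%.3f" % t for t in with_jitter])
```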
So if the input latency of a DAW isn’t consistent, it will sound (when recorded) like jitter, no? The picture that leads this article calls it jitter!!!!!
My point (or the point I was trying to make) is that people with instruments are slopper than that ‘in real life’; people drift by more than a few ms when playing. If you record MIDI with perfect timing it sounds like a machine (people spend a lot of time adding swing and humanising). This article points out a few ms of drift, jitter or whatever you want to call it – I bet 99.9999% of people can’t hear it on a record. Analysis paralysis – just make music.
It’s not about what the listener hears, it’s about what you feel as a musician and the consistency of the response. Jitter is unpredictable, meaning that you feel much less connected with the music you’re playing live; this obviously bleeds into the whole musician feedback loop of getting into an inspirational flow (note that this is all about live playing). Latency isn’t a problem, as it’s constant and predictable. Humans have naturally learned how to deal with constant latency and adapt to it.
I agree, but I’m not sure I would notice a few thousandths of a second’s variation in my playing; my playing is probably out by more than plus or minus 100ms, and I think if you can play repeatably to a tenth of a second’s timing (600 bpm!) you are doing pretty well…
It’s the difference between theory and practice; if I could actually hear a difference, I would be convinced. I run Live, Cubase (seven, from today ;-)), Maschine, etc., and have never heard it myself or felt it in my playing.
Once again, you are confusing latency and jitter. The latency, when recording, is consistent and doesn’t change the timing of the resulting track. Jitter is constantly varying delays, which can cause drum patterns to sound jerky.
I’m also amused that you misspelled “sloppier.”
It says a lot about you as a professional musician – in other words, your depth – and because of the likes of you we already have a sh*t load of crap, which you call music. So when you say “let’s make music” it means let’s make crap, because I like crap and can’t distinguish crap from music. So go make some music on your laptop instead of wasting my or someone else’s eyes with your dumb, moronic views. Music is all about timing, and the only ones who should be in charge of that timing are humans, not machines. That’s hard to comprehend for some useless wannabe or someone who grew up playing his laptop.
> a 10 foot guitar lead introduces 10ms of delay
How did you come to this conclusion?
Speed of sound is 1,126 ft/sec., so it travels 11.26 ft in 10 ms
the speed of sound has little to do with the speed of electricity, which is pretty much the speed of light. So unless you stretch the guitar cable straight and tight, that formula does not apply.
Get a clue…
The point is that if you are playing a guitar plugged into an amp with a 10 ft cable, it takes the sound about 10 ms to get from the speaker to your ear. Has nothing to do with electricity.
The “formula” completely applies to any discussion about latency. I know that is totally different than jitter, but someone else brought it up, and I supplied the math behind it, since someone else asked.
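For anyone who wants to check it, the arithmetic in a few lines (using the ~1,126 ft/s figure cited above):

```python
SPEED_OF_SOUND_FT_PER_S = 1126.0  # in air, at roughly room temperature

def acoustic_latency_ms(distance_ft):
    """Time for sound to travel from a speaker to your ear."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

print(f"{acoustic_latency_ms(10):.1f} ms")  # ~8.9 ms for 10 feet
```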
good answer. BRO FIST.
I’m more concerned with the speed of fanaticism. It makes light look like a snail.
That’s the speed of sound in air. Electric current flows at close to the speed of light.
actually the speed of the current itself (the electron drift) is some centimeters per second.
The variation or change of the current (signal propagation) is close to the speed of light.
He’s confusing the speed of sound through air with the speed of analog audio through a cable. With the speed of sound through air being roughly 1,116 feet/s, it takes about 9ms for audio to cover 10 feet.
How is it that three people (as of right now) could push the “thumbs up” button for a post that says a ten-foot cable adds 10ms of latency? I just don’t get it.
It isn’t the cable itself, it is the distance implied by your ear at one end of the cable and the speaker at the other.
Get it now…?
The article isn’t about fixed latency between when the instrument sounds and when it reaches your ear, which is consistent and can easily be compensated for by the human brain. It’s about the variations from note to note during recording, which are not consistent and thus distracting. Timing variations that are part of a musical performance are pleasing to an audience when they follow a Gaussian distribution; timing variations introduced by software are more random and thus more obvious and displeasing.
See http://ssli.ee.washington.edu/~bilmes/mypapers/mit-thesis.pdf and http://emcap.iua.upf.edu/showEmcap/publications/MaM_SmithHoning.pdf, plus the extensive references on those two papers, which are old but provide a good overview of the issues.
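A toy simulation of the contrast described above; the distributions and magnitudes are my assumptions, chosen only to make the difference visible:

```python
import random

beats = [i * 0.5 for i in range(8)]  # intended note times, in seconds

# Human feel: small, roughly Gaussian deviations centered on the beat.
human = [t + random.gauss(0.0, 0.004) for t in beats]

# Software jitter: an arbitrary extra delay per note, here uniform over
# one 512-frame buffer at 44.1 kHz (~11.6 ms), never centered on the beat.
software = [t + random.uniform(0.0, 512 / 44100) for t in beats]

print("human:   ", ["%.4f" % t for t in human])
print("software:", ["%.4f" % t for t in software])
```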
I thought it was a trivial thing for a musician to distinguish delay (a steady offset) from jitter (random variation).
Kudos to Geert on doing this work!
For the record, no DAW is as tight as old school MIDI hardware, early single-tasking computers or CV syncing!
Atari ST?
control voltage is the future!
*obligatory ‘Ableton jitters quite a bit when MIDI-slaved’ post*
Umm………………….NO PRO TOOLS?
Pro Tools is no longer relevant. I do use a Digi 002 in standalone mode, but only for its reverb, and most definitely only with Reaper.
How do you arrive at that conclusion? It’s still used in every studio I’ve ever been in, and all the pros seem to swear by it for editing (and I’d agree). It’s a pain in the arse for electronica at times, but it’s hardly irrelevant
the real problem is trying to use 30-year-old technology (MIDI) in the first place… plus it’s a serial protocol, so only a few kilobytes per second can go through the pipe
“MIDI messages are made up of 8-bit words, and are transmitted serially at a rate of 31.25 kbaud. This nonstandard transmission rate was chosen because it is an exact division of 1 MHz, the speed at which many early microprocessors operated”
“It consists physically of a one-way (simplex) digital current loop electrical connection sending asynchronous serial communication data at 31,250 bits per second. 8-N-1 format, i.e. one start bit (must be 0), eight data bits, no parity bit and one stop bit (must be 1), is used, so up to 3,125 bytes per second can be sent.”
unfortunately we will never have anything different, because it would require all manufacturers to adopt and agree on a universal standard, and thus not make money from it… sooooooooo… love of money being the root of all evil, and all that – still holds true
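For context, those quoted figures work out as follows (a back-of-the-envelope calculation, assuming standard 3-byte channel messages):

```python
BAUD = 31250
BITS_PER_BYTE = 10  # start bit + 8 data bits + stop bit (8-N-1)
bytes_per_second = BAUD / BITS_PER_BYTE  # 3,125 bytes/s, as quoted

NOTE_ON_BYTES = 3   # status byte + note number + velocity
per_message_ms = NOTE_ON_BYTES / bytes_per_second * 1000
print(f"{per_message_ms:.2f} ms per Note On message")               # ~0.96 ms
print(f"{per_message_ms * 10:.1f} ms to transmit a 10-note chord")  # ~9.6 ms
```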
The physical DIN MIDI connections do indeed suffer from the serial protocol limitations. However, nowadays there’s MIDI over USB, over Ethernet, or even just locally between apps; these suffer from none of those limitations. That doesn’t mean that the MIDI data format is appropriate for modern electronic music, but that’s another discussion 🙂
If MIDI isn’t “appropriate” for “modern electronic music” maybe someone should tell the thousands of musicians who have made millions of recordings that they are doing it wrong…
Zymos –
Should we all go back to using wired phones and VHS players?
MIDI has been a great standard, but users should be knowledgeable enough to understand its limitations. And electronic musicians should be open to new ideas and technologies!
no, that’s wrong – MIDI over USB is still only a serial transfer of roughly 3kB per second… the hardware has nothing to do with the data protocol
and yes, it’s a huge limitation for sequencing tons of gear with tons of modulations
i never said it wasn’t “appropriate”… it’s the only thing we have, so that’s what we have to use
but it’s 30 years old, it’s built for computers with 1MHz CPUs… if you can’t understand how much that sucks, then yeah, get out of the business because you are a fucking retard
Wang, sorry man, but I’ve been using MIDI over USB at much higher rates. The iConnectMIDI interface supports up to 12Mbps and it works really well. Local MIDI over the IAC bus on OS X has virtually no speed limits and goes as fast as your computer can handle. Of course, as I said earlier, that doesn’t fix the protocol shortcomings themselves, but with the right hardware, speed is not the issue. Now maybe you have a MIDI instrument that doesn’t send data out faster; that’s a problem of your instrument, then.
i see – you are talking about MIDI with software synths, and yes, that’s not a problem at all
but most hardware synths that can use MIDI have a DIN connector, not USB – so it doesn’t really matter
i wasn’t even thinking about MIDI over USB initially, and especially when it’s all “under the glass”
Have you looked at new gear? If you love vintage synths, you have a point, but new gear always has USB (if not 100% of the new gear, at least 98%). Go to your local music store and look in the back of every single keyboard/workstation. You might find something that does not have USB, but that is the exception, not the rule.
What are you calling “new”? I don’t think 2008 is ancient history — http://media.soundonsound.com/sos/aug09/images/DaveSmithInstrumentsMopho_05.jpg
http://www.soundonsound.com/sos/aug09/articles/davesmithmopho.htm
most all boutique analog gear – and i’m talking new shit – has DIN connectors…
let’s take a look at the Elektron stuff… all DIN connectors
i really do like the Virus-style approach of bundling a VST frontend to talk to the synth over USB, but unless you are using beastly digital workstations, the primary MIDI interface that is most used, EVEN TODAY… is the DIN connector
that also doesn’t address the issue of using multiple pieces of hardware – say you have 8 devices, you’re gonna have to use a USB hub, and I PROMISE you that adds jitter as well
seriously, MIDI is the problem, not the hardware connectivity… i still love it, but it should have been updated long ago… i was hoping mLAN would take off, but oh well
maybe sometime in the future, but it doesn’t seem to be anytime soon
This article and discussion hurts my brain in a good way:)
Robot likes apples…..Worm likes apples…Ergo sum ………
Ableton is fine: successkid.jpg
I AM VALIDATED!!!! I have been saying for over a decade that my old System 7 Mac SE/30 with an Opcode Studio 4 interface was better at MIDI timing than any of the programs out there. In fact, it got so bad at one point I switched to using an MPC. Now I can tell all those I worked with at Numark that they were wrong and I was right.
That is exactly what I felt when I switched from Logic 5.5 to Cubase some years ago.
In Logic I could paint 16th notes in a row with the Waldorf Attack VST and get a groovy timing.
Very funky if you mix it down to an audio file and let it repeat every quarter.
In Cubase 4 my MIDI-driven Akai sampler became so tight, I had to shift single notes by hand to get the same feel. So the losers may be winners?
Can’t we have a “Jitter VST” to do it each way?
Am I speaking or just thinking? 🙂
I will build it if you will buy it.
I also felt this when I moved from Logic Pro 8 to Reaper. It was just one of those things that was really difficult to pin down, but my compositions and songs just felt a lot more ‘solid’. But yeah, even rendering straight from an AudioUnit in Logic introduces noticeable jitter if you line the waveform and MIDI notes up. Can’t say I was too impressed when I found out.
Of course Max has Jitter.
It’s been in Max since version 3 or so.
WAKA WAKA WAKA!
Are there any opinions on jitter on the iPad? It really seems to me that BM2 is very tight when sequencing external synths.
Uninformed guess: all of this might be because of the huge number of processes running on any modern computer.
This is indeed an interesting thread. Unfortunately, there are some serious flaws with the test configuration that the author set up that invalidate the conclusions.
1) The source of the observed timing inconsistencies was certainly at least partially due to MIDI timing inaccuracies, but there are many steps in the test signal flow that could also contribute timing fluctuations.
2) We don’t know if there is any difference in how the VST used for generating the audio hit behaves in the different hosts. It would be interesting to see if the behavior is consistent between VST instruments. Is the issue one of MIDI or is it one of the implementation of the VST API layer?
3) I’m unclear on how the author aligned the audio samples, and to what degree it exacerbates the results as we observe them.
4) This was all done with the MIDI signal chain “in the box”, so again was it an issue with how the different software components interacted within the host OS as much as how each handles MIDI?
I’m not saying that this isn’t a valid conclusion, and we all can “feel” when things are just a bit “off”, but at the same time I would like to see someone arrive at a more robust test methodology to explore this concept further.
Peace!
-Bill C, Austin TX
Thanks for asking more detailed questions. I purposely left some details out to prevent the article from getting too long. Here are some answers:
1. The tests are perfectly reproducible; I ran them at least three times for each host and the results were always exactly the same.
2. The behavior is totally consistent when using other VST or AudioUnit instruments.
3. I manually zoomed in to sample precision, snipped at the first non-zero sample, zoomed back out, and then used bar-boundary snapping in the DAW to align the start of the snipped clip with the beginning of the bar.
4. The signal chain was exactly the same for each test, just switching out the final host/DAW. I also performed the same tests over Ethernet MIDI to another box, and using iConnectMIDI for MIDI over USB. The behavior is exactly the same.
Actually, the behavior has been confirmed by plugin developers for Logic (see the comments on my article). The MIDI sample offsets in Logic are not set when processing live MIDI; they are only set when using the built-in sequencer.
Thanks for the reply and the exposition on my questions and comments. It all helps greatly to validate your approach and results. To be clear, my assumption was that you were correct; I was just nervous about some of the details.
what does this mean?
“The MIDI sample offsets in Logic are not set when processing live MIDI; they are only set when using the built-in sequencer.”
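Roughly this: a plugin host hands a plugin one buffer of audio to render, plus the MIDI events that fall inside it, each tagged with an offset in samples into that buffer. If the host fills in those offsets for live input, notes sound at the right sample; if it leaves them at zero, every live note snaps to the start of the buffer. A generic sketch of the idea (in Python for readability, not Logic’s actual API):

```python
BUFFER_FRAMES = 512

def events_for_block(events, block_start, set_offsets=True):
    """events: absolute MIDI event times in samples. Returns the sample
    offset within this audio block at which each event should sound."""
    offsets = []
    for t in events:
        if block_start <= t < block_start + BUFFER_FRAMES:
            # A careful host passes the true offset into the block; a
            # buffer-quantizing host zeroes it, snapping the note to the
            # block boundary.
            offsets.append(t - block_start if set_offsets else 0)
    return offsets

events = [100, 300, 700]
print(events_for_block(events, 0))         # [100, 300] -- timing kept
print(events_for_block(events, 0, False))  # [0, 0]     -- quantized
```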
Bullshit!
FL Studio has had jitters every day I have used it
WTF dude
Reaper is simply the most Pro Audio Recording Software available.
REAPER is swell. I have a pal who lives by it and praises it often. It’s like Cubase minus the BS that sent me over to Logic. If you’re not on a Mac, read up on REAPER and be impressed. I am.
Yes, but Reaper is on the Mac as well as PC.
Maybe when all the DAWs fix their jitter, we humans will be looking for micro quantize/groove functions to emulate their previous flawed timing. Kinda like analog drift emulation in VSTs…
Like the way people reduce the number of ticks per beat in their DAW to emulate knacky old sequencers
What about FL Studio?
For those interested, I amended the article with some clarifications, based on what has been said here and on other websites: http://www.eigenzone.org/2012/12/04/midi-jitter#clarifications
Max/MSP MIDI timing is horribly inaccurate by itself, but if you tie the timing to audio events (like a slow LFO cycle), the timing becomes MUCH more accurate.
I’ve been building a step sequencer in Max for a couple of months and I’ve noticed horrible timing issues. I expected better from it, to be honest; it came so highly recommended.
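The audio-tied approach mentioned above works because the audio callback advances in exact sample counts, while timer threads drift and get preempted. The same idea sketched outside Max (hypothetical Python, just to show the structure):

```python
SAMPLE_RATE = 44100
BPM = 120
SAMPLES_PER_STEP = SAMPLE_RATE * 60 / BPM / 4  # 16th notes = 5512.5 samples

def audio_callback(block_start, block_len, next_step):
    """Fire sequencer steps at exact sample positions inside each audio
    block, instead of from a (drifting) timer thread."""
    fired = []
    while next_step < block_start + block_len:
        fired.append(int(next_step) - block_start)  # sample offset in block
        next_step += SAMPLES_PER_STEP
    return fired, next_step

step = 0.0
for block in range(12):
    fired, step = audio_callback(block * 512, 512, step)
    if fired:
        print(f"block {block}: step at sample offset {fired}")
# -> block 0: [0], block 10: [392] -- sample-exact, no timer drift
```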
The developers at Plogue got in touch with me, and Bidule seems fine with the right option set in the preferences. The ‘Apply’ button doesn’t change the running graph, though; you need to either re-create the MIDI nodes or reload your document. Having the option set permanently should solve this for all your use cases. This is why Bidule failed my testing initially: I simply applied the preferences, assuming it would change the active project.
Can the Synthtopia editor please change the list at the top here and include Bidule in the ‘correct’ section? Thanks!
NO JITTER IS MACHINE GUN TIMING
CUBASE is not without jitter
MASCHINE 1.8 has pad delay and it sounded stiffer
when jitter is good and stable – then it’s OK with the groove