Developer Rob Fielding has created a new synth for iOS, Cantor, that goes where other apps fear to tread.
Fielding, who has pioneered microtonal instruments for iOS, has this to say about Cantor:
This app has nothing at all to do with beat making, sequencing, creation, DAWs, or iPad-only workflows.
It’s a pocketable instrument that’s as playable as a guitar or a piano for a lot of uses. It should be reliable and simple, for you to plug into an effects processor or a computer *like* a guitar or hardware synth.
I tried initially to ship this without any audio engine at all, as a pure MIDI controller. But the current state of MIDI with respect to continuous-pitch instruments is still bad. I implemented an internal engine only to demonstrate the full range of the playability parameters.
Here are the details.
Features:
- Wavetable synthesis engine – Fielding notes that the synth engine doesn’t offer a lot of sound variety. His hope is that other developers will add the level of MIDI support that Cantor requires, but he’s included the basic synth engine so that the instrument is immediately playable.
- ‘Octave auto’, for very fast soloing. Fielding says Cantor is just as playable on the phone as it is on the iPad if you play it that way.
- MIDI with polyphonic bending (note ties, channel cycling, etc). Most synths won’t be able to deal with this, yet. But several synths, including Kontakt 4, Korg Karma, ThumbJam, SampleWiz & Arctic work with it, to some degree.
- Looper
- Moveable frets that let you define your own scales, including exact pitch locations and the number of frets per octave.
- Common microtonal scale shapes already set up: Diatonic, Pentatonic, 12ET chromatic, Pseudo Maqam Bayati, Pythagorean, Just, 19ET, 31ET, 53ET & just fretless.
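To make the "frets per octave" idea concrete, here is a minimal sketch of how an equal-tempered fret layout could be computed. This is illustrative only, with hypothetical function names; it is not Cantor's actual code, but it shows how scales like 19ET or 31ET reduce to a fret count and a cents step:

```python
# Sketch: compute fret positions for an equal-tempered scale with a
# configurable number of frets per octave (12ET, 19ET, 31ET, 53ET...).
# All names here are illustrative assumptions, not Cantor's API.

def et_fret_cents(frets_per_octave):
    """Cents offset of each fret within one octave."""
    step = 1200.0 / frets_per_octave
    return [i * step for i in range(frets_per_octave)]

def fret_frequency(base_hz, cents):
    """Frequency of a fret, given a base pitch and a cents offset."""
    return base_hz * 2.0 ** (cents / 1200.0)

# Example: a 19ET fretboard rooted on A3 (220 Hz)
cents_19 = et_fret_cents(19)
freqs_19 = [fret_frequency(220.0, c) for c in cents_19]
```

Non-equal scales (Just, Pythagorean, the maqam shapes) would replace `et_fret_cents` with an explicit list of cents values per fret.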
Cantor has been submitted to the App Store and will be $2.
Note: Fielding is pushing MIDI to its limits and exploring areas that other developers have ignored, and this means that he’s going to run into some obstacles along the way. He notes that MIDI is “horribly broken for almost every non-piano instrument scenario when it comes to pitch handling. This works around that, but it’s not pretty.”
So review the information available on his site before you buy it, to make sure that Cantor will meet your needs.
Hasn’t he heard of OSC?
You mean the OSC that would have to be converted to MIDI anyways to work with other apps?
I wish. :-). On iOS there is background MIDI, which meets the latency and jitter requirements, and most instruments understand only the subset of MIDI that piano controllers emit. You can use those as monosynths in most cases. But OSC has been around forever with very little traction. It has super-vague semantics, like XML vs. HTML, and very little understands it as plug-and-play input. You have to control both ends of an OSC connection, where this is less true for MIDI. The real problem with OSC on iOS right now is that you are expected to talk OSC over TCP/IP in every instance I know of. Setup is more complex than users will tolerate, and latency and jitter over WiFi are completely intolerable for this kind of app: you need 10-20ms from fingertip back into your ears, with jitter of 5ms or so. If there were a background OSC pipe, then there would be every motive for devs to migrate to OSC. But even once such a pipe exists, synths would have to take OSC input in almost every case for that to happen.
I think something more surprising will happen, though. AudioBus may be able to displace them all; you can mark audio data with metadata that describes the audio. I can only write my own app, and it is other people's apps that are the hardest part of the problem. Currently, emitting some form of MIDI is the only way to get them to understand at all, and OSC over WiFi is a totally useless solution for a realtime controller. MIDI HD is currently being specified, and would solve a lot of the problems that OSC does. The fact that every dev writes his own audio engine, recording, controller, synth, etc. has a lot to do with OSC/MIDI/mythical-protocol not simultaneously meeting the setup, performance, and audio-reproducibility requirements. In an ideal world, synths, controllers, DAWs, and effects are all written separately and trivial to wire together.
I wish it had a decent synth engine; it would just make it more slick. I don't like having all these interconnecting pieces going on when I want to make a track. Couldn't you get in touch with the DXi guy and collaborate on something awesome? I don't know why we all went down this road where MIDI controllers *must* be separate from a sound source. It doesn't serve a device like the iPad well.
Thanks for making it cheap,
Oh, and what do you suggest setting the pitch bend range on my external synth to for this and Geo? I'm using Zebra, which goes up to 24; some of my others go to 48.
Transmitting polyphonic microtonal data to synths doesn't really make sense to me at the moment. Synths like Zebra can load tuning files anyway. So it seems the best thing an app could do right now is either a) do as Rob has with Cantor, but focus on creating a good synth engine, or b) provide a good configurable MIDI control surface (like Cantor, too) which can be configured according to the scale/tuning you have loaded into your existing synths (like Zebra, for example).
I'm not sure if software like the rumored BitWig DAW, with its supposed polyphonic pitch bend capabilities, will make a difference to any of this either.
The simple synth engine looks like a great way to experiment with tunings, though. If the MIDI capabilities can also be configured to cater for a specific tuning, that would be ideal. For example: if I load a 5-note-per-octave slendro scale into Zebra, then configure Cantor to a 5-note-per-octave scale and specify the MIDI notes, that would be a great solution!
I’m not using the MIDI tuning spec. That would actually break the current functionality. The synth must be in normal 12-equidistant-notes-per-octave mode, and the MIDI bending is used to produce microtonality. The reason for this is that it’s just a special case of fretlessness, where the pitch snapping happens in the MIDI controller rather than in the synth. In my experience, this is the only way that microtonality can work properly anyway (very much like how you do it with OSC). The MIDI controller abuses the spec to send the actual frequencies to the synth, and the synth has no idea what the notes are supposed to be.
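The scheme described above (synth stays in plain 12-ET, controller sends a nearby note plus a pitch bend to land on the exact frequency) can be sketched roughly like this. The function name and the default bend range of ±2 semitones are my assumptions for illustration, not Cantor's actual code:

```python
import math

def freq_to_note_and_bend(freq_hz, bend_range_semitones=2.0):
    """Map a target frequency to (MIDI note, 14-bit pitch bend value),
    assuming the receiving synth is in standard 12-ET with the given
    bend range. Illustrative sketch, not Cantor's implementation."""
    # Fractional MIDI note number (69 = A4 = 440 Hz)
    exact = 69.0 + 12.0 * math.log2(freq_hz / 440.0)
    note = int(round(exact))
    # Bend needed, as a fraction of the configured bend range
    frac = (exact - note) / bend_range_semitones
    bend = 8192 + int(round(frac * 8192))   # 0..16383, 8192 = center
    return note, max(0, min(16383, bend))
```

Note that the bend value only comes out right if the controller and the synth agree on the bend range, which is why the question about Zebra's range setting matters.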
The issue boils down to using MIDI bend for tuning purposes, forcing all voices to live on their own channels, and ensuring that we don't exceed the bend width; and the technique has "normal" uses too. As an example: older versions of this instrument actually duplicated the MIDI voices and tuned them slightly away from each other to produce chorusing effects in MIDI instruments that had no chorus post-processing effect of their own. The result was *wonderful* in ThumbJam and SampleWiz, but it gave almost every other synth a problem of some kind: omni synths would just mash all the notes into one channel and create an out-of-tune horror, other synths would get occasional stuck notes, etc.
Not a bad little app at all. I think it does need a better synth engine, but that doesn’t knock off all the gold stars. My main comment: don’t dismiss alternate tunings. They can be very useful and gorgeous, as in some of Peter Namlook’s music. The natural brass overtone harmonic series is like finding a new kind of chocolate.
Great job Rob! Your constant pushing forward and trail blazing is making it easier and possible for the rest of us to accomplish our goals!
Well… another baby from Rob! I was there for Geo (just for the requirements; I wish I had known what even a UIButton was at the time, so I could have written some lines of code and let you sleep more), and I'm glad to see the microtonal movement gaining strong momentum and not just being a fad. The addition of just intonation, 19ET (Bach would have loved you) and 31ET was much needed IMO (compared to Geo, which is still a beast, no matter what the naysayers may say). Best wishes of success, dude, and no grudges held.
I like the feel of it, possibly better than Geosynth. It's easier to get the notes on the screen, and apart from a couple of samples I'm not bothered about the sound engine. The thing Geosynth has over you is the customisation of the buttons. So 4.5 stars from me!
Nice work. Sonically, so much of this stuff sounds quite fresh, and it is all so much of a blast to work with, that I spend too much time doing mini composition with it, and not enough integrating ideas into my DAW, which makes me dream of an architecture that openly supports all of these beautiful little “machines” into one loving parent with a great audio engine. Reason is a bit like this now, I guess, but it doesn’t allow anyone else in to play! Rewiring into Logic is awkward…