Ableton Link is a great technology for syncing iOS audio apps with each other as well as with Ableton Live. A Link session keeps the beat, bar, and phase aligned to a shared tempo so that apps can play in time with one another.
LinkKit is the iOS SDK for Link; it wraps the C++ Link library in a plain C API with functions for creating and joining a Link session, getting the beat at a given time, updating the tempo, and so on, callable from both the audio thread and the UI thread. The best practice is to create it once when you initialize your app along with your audio engine, and to track the beat and commit session changes in the audio render loop. Alternatively, you can set up a custom timer to act like an audio render loop, but that does not guarantee sample-accurate timing.
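As a minimal sketch of that one-time setup from Swift (assuming the LinkKit headers are bridged as shown later in this post; the LinkHolder class and its names are only illustrative):

final class LinkHolder {
    let link: ABLLinkRef

    init(initialBpm: Double) {
        // Create the Link instance once, alongside the audio engine.
        link = ABLLinkNew(initialBpm)
        // Activate it so the app can join (or advertise) a Link session.
        ABLLinkSetActive(link, true)
    }

    deinit {
        ABLLinkDelete(link)
    }
}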
I recommend reading the Link documentation for an in-depth overview before we start.
Getting LinkKit
You need to request LinkKit from Ableton; they will give you access to the private LinkKit repo, where you can download the latest release along with UI guidelines and a well-built example app called LinkHut. If you have no experience with Audio Units, audio DSP programming, or mixing Objective-C with C++ (Objective-C++), studying this example will help. Don’t worry if you are a Swift programmer; it works perfectly well with Swift apps. You can also test your implementation against LinkHut to check whether your app stays in sync with other Link apps.
AudioKit Integration
Ableton Link works great with AudioKit. If you have an Audio Unit whose render loop (the audio thread) you can hook into, you are basically ready to go, and it doesn’t hurt to add one if you don’t currently have one. Otherwise, you can set up a custom loop with an NSTimer, for example if you are using AudioKit just for the AKSequencer.
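Here is a rough sketch of that timer-based fallback in Swift (a sketch only: LinkTimerLoop and its names are made up, the 0.1 s interval is arbitrary, and ABLLinkCaptureAppSessionState, ABLLinkGetTempo, and ABLLinkBeatAtTime are the app-thread calls from ABLLink.h):

import Foundation

final class LinkTimerLoop {
    private let link: ABLLinkRef
    private var timer: Timer?
    private let quantum: Float64 = 4

    init(link: ABLLinkRef) {
        self.link = link
    }

    func start() {
        // Poll Link from the main thread instead of an audio render callback.
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            self?.tick()
        }
    }

    private func tick() {
        // App-thread session state; the audio-thread variant is used later in this post.
        let state = ABLLinkCaptureAppSessionState(link)
        let bpm = ABLLinkGetTempo(state)
        let beat = ABLLinkBeatAtTime(state, mach_absolute_time(), quantum)
        // Drive your sequencer (e.g. AKSequencer) from bpm and beat here.
        _ = (bpm, beat)
    }
}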
Implementation
There are two main data structures you will work with. The first is EngineData, which holds the audio-engine-related data: output latency, BPM, quantum, and so on.
typedef struct {
    UInt64 outputLatency;
    Float64 resetToBeatTime;
    BOOL requestStart;
    BOOL requestStop;
    Float64 proposeBpm;
    Float64 quantum;
} EngineData;
The second is LinkData, the data you need on the audio thread. It holds the ABLLinkRef (the Link instance itself) and two EngineData instances: one owned by the audio thread only, and one shared between the main and audio threads, so that changes to EngineData from different threads don’t block each other. It also carries time-related data for calculating beats as accurately as possible.
typedef struct {
    ABLLinkRef ablLink;
    // Shared between threads. Only write when engine not running.
    Float64 sampleRate;
    // Shared between threads. Only write when engine not running.
    Float64 secondsToHostTime;
    // Shared between threads. Written by the main thread and only
    // read by the audio thread when doing so will not block.
    EngineData sharedEngineData;
    // Copy of sharedEngineData owned by audio thread.
    EngineData localEngineData;
    // Owned by audio thread
    UInt64 timeAtLastClick;
    // Owned by audio thread
    BOOL isPlaying;
} LinkData;
In the audio render loop, we sync sharedEngineData to localEngineData with the static void pullEngineData(LinkData* linkData, EngineData* output) function, so that tempo and play/stop changes are applied on the audio thread.
/*
 * Pull data from the main thread to the audio thread if lock can be
 * obtained. Otherwise, just use the local copy of the data.
 */
static void pullEngineData(LinkData* linkData, EngineData* output) {
    // Always reset the signaling members to their default state
    output->resetToBeatTime = INVALID_BEAT_TIME;
    output->proposeBpm = INVALID_BPM;
    output->requestStart = NO;
    output->requestStop = NO;

    // Attempt to grab the lock guarding the shared engine data but
    // don't block if we can't get it.
    if (OSSpinLockTry(&lock)) {
        // Copy non-signaling members to the local thread cache
        linkData->localEngineData.outputLatency = linkData->sharedEngineData.outputLatency;
        linkData->localEngineData.quantum = linkData->sharedEngineData.quantum;

        // Copy signaling members directly to the output and reset
        output->resetToBeatTime = linkData->sharedEngineData.resetToBeatTime;
        linkData->sharedEngineData.resetToBeatTime = INVALID_BEAT_TIME;

        output->requestStart = linkData->sharedEngineData.requestStart;
        linkData->sharedEngineData.requestStart = NO;

        output->requestStop = linkData->sharedEngineData.requestStop;
        linkData->sharedEngineData.requestStop = NO;

        output->proposeBpm = linkData->sharedEngineData.proposeBpm;
        linkData->sharedEngineData.proposeBpm = INVALID_BPM;

        OSSpinLockUnlock(&lock);
    }

    // Copy from the thread local copy to the output. This happens
    // whether or not we were able to grab the lock.
    output->outputLatency = linkData->localEngineData.outputLatency;
    output->quantum = linkData->localEngineData.quantum;
}
Finally, in our DSP code (the audio render loop), we pull the Link data, capture the current session state, check for tempo and play/stop changes, and commit any changes requested by our client back to the session.
static OSStatus audioCallback(
    void *inRefCon,
    AudioUnitRenderActionFlags *flags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData) {
#pragma unused(inBusNumber, flags)

    // First clear buffers
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        memset(ioData->mBuffers[i].mData, 0, inNumberFrames * sizeof(SInt16));
    }

    LinkData *linkData = (LinkData *)inRefCon;

    // Get a copy of the current link session state.
    const ABLLinkSessionStateRef sessionState =
        ABLLinkCaptureAudioSessionState(linkData->ablLink);

    // Get a copy of relevant engine parameters.
    EngineData engineData;
    pullEngineData(linkData, &engineData);

    // The mHostTime member of the timestamp represents the time at
    // which the buffer is delivered to the audio hardware. The output
    // latency is the time from when the buffer is delivered to the
    // audio hardware to when the beginning of the buffer starts
    // reaching the output. We add those values to get the host time
    // at which the first sample of this buffer will reach the output.
    const UInt64 hostTimeAtBufferBegin =
        inTimeStamp->mHostTime + engineData.outputLatency;

    if (engineData.requestStart && !ABLLinkIsPlaying(sessionState)) {
        // Request starting playback at the beginning of this buffer.
        ABLLinkSetIsPlaying(sessionState, YES, hostTimeAtBufferBegin);
    }

    if (engineData.requestStop && ABLLinkIsPlaying(sessionState)) {
        // Request stopping playback at the beginning of this buffer.
        ABLLinkSetIsPlaying(sessionState, NO, hostTimeAtBufferBegin);
    }

    if (!linkData->isPlaying && ABLLinkIsPlaying(sessionState)) {
        // Reset the session state's beat timeline so that the requested
        // beat time corresponds to the time the transport will start playing.
        // The returned beat time is the actual beat time mapped to the time
        // playback will start, which therefore may be less than the requested
        // beat time by up to a quantum.
        ABLLinkRequestBeatAtStartPlayingTime(sessionState, 0., engineData.quantum);
        linkData->isPlaying = YES;
    } else if (linkData->isPlaying && !ABLLinkIsPlaying(sessionState)) {
        linkData->isPlaying = NO;
    }

    // Handle a tempo proposal
    if (engineData.proposeBpm != INVALID_BPM) {
        // Propose that the new tempo takes effect at the beginning of
        // this buffer.
        ABLLinkSetTempo(sessionState, engineData.proposeBpm, hostTimeAtBufferBegin);
    }

    ABLLinkCommitAudioSessionState(linkData->ablLink, sessionState);

    //
    // Other DSP stuff goes here
    //

    return noErr;
}
When we set our render callback on the audio unit, we pass our LinkData as the callback’s refCon, so we can pull it from the void *inRefCon parameter in our audioCallback function:
AURenderCallbackStruct ioRemoteInput;
ioRemoteInput.inputProc = audioCallback;
ioRemoteInput.inputProcRefCon = &_linkData;

result = AudioUnitSetProperty(
    _ioUnit,
    kAudioUnitProperty_SetRenderCallback,
    kAudioUnitScope_Input,
    0,
    &ioRemoteInput,
    sizeof(ioRemoteInput));
Passing custom data
You can pass a custom data type as well if you need to do something more. For example, you can define a custom callback block like:
typedef void (^AudioEngineRenderCallback)(double beat);
Then declare a property for it in your audio engine’s header file, so another class can implement the block:
@property (copy) AudioEngineRenderCallback renderCallbackBlock;
You also need a custom struct for passing the block, along with the LinkData, to your audio callback function:
typedef struct {
    LinkData *linkRef;
    AudioEngineRenderCallback callback;
} AudioEngineRenderCallbackData;
Create a private reference for it in your implementation file.
@interface AudioEngine() {
    AudioUnit _ioUnit;
    LinkData _linkData;
    AudioEngineRenderCallbackData _renderCallbackData;
}
@end
And pass it to the audio callback method.
_renderCallbackData = AudioEngineRenderCallbackData();
_renderCallbackData.linkRef = &_linkData;
_renderCallbackData.callback = self.renderCallbackBlock;

// Set Audio Callback
AURenderCallbackStruct ioRemoteInput;
ioRemoteInput.inputProc = audioCallback;
ioRemoteInput.inputProcRefCon = &_renderCallbackData;

result = AudioUnitSetProperty(
    _ioUnit,
    kAudioUnitProperty_SetRenderCallback,
    kAudioUnitScope_Input,
    0,
    &ioRemoteInput,
    sizeof(ioRemoteInput));
Then, in your audio callback function, pull it from void *inRefCon:
AudioEngineRenderCallbackData *data = (AudioEngineRenderCallbackData *)inRefCon;
LinkData *linkData = data->linkRef;
You can pass the current beat to your custom callback function now.
// Send beat callback
if (data->callback) {
    data->callback(ABLLinkBeatAtTime(sessionState, hostTimeAtBufferBegin, 4));
}
You can find the full example in this repo.
Integrating with Swift projects
For Swift projects, you need a bridging header that includes the LinkKit headers:
#ifndef Bridge_h
#define Bridge_h

#include "ABLLink.h"
#include "ABLLinkUtils.h"
#include "ABLLinkSettingsViewController.h"

#endif /* Bridge_h */
Also, you need to add libc++.tbd to the Linked Frameworks and Libraries section in the General tab of your project settings.
There is a Swift port of EngineData and LinkData as Swift structs that you can grab from here.
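If you want to see how those structs translate, here is a rough Swift mirror of the C definitions above (a sketch only; the sentinel values and defaults here are assumptions, and the actual port may differ):

let invalidBpm = -Float64.greatestFiniteMagnitude
let invalidBeatTime = -Float64.greatestFiniteMagnitude

struct EngineData {
    var outputLatency: UInt64 = 0
    var resetToBeatTime: Float64 = invalidBeatTime
    var requestStart = false
    var requestStop = false
    var proposeBpm: Float64 = invalidBpm
    var quantum: Float64 = 4
}

struct LinkData {
    var ablLink: ABLLinkRef?
    var sampleRate: Float64 = 44_100
    var secondsToHostTime: Float64 = 0
    var sharedEngineData = EngineData()
    var localEngineData = EngineData()
    var timeAtLastClick: UInt64 = 0
    var isPlaying = false
}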
In this example, we will set up a custom timer and use it as our audio render loop, so it also covers projects without audio units. The update function in the Swift port is the same code as the audio render loop above, so our custom timer should call it:
// Timer
private var timer: Timer = Timer()
private let timerSpeed: Double = 0.1

init() {
    timer = Timer.scheduledTimer(
        timeInterval: timerSpeed,
        target: ABLLinkManager.shared,
        selector: #selector(update),
        userInfo: nil,
        repeats: true)
}
If you subscribe to its listeners, you can react to tempo/start/stop changes. And of course, you can request those changes as well, simply by setting its properties:
// Subscribe tempo change events
ABLLinkManager.shared.add(listener: .tempo({ bpm, quantum in
    self.tempo.bpm = bpm
}))

// Update Link tempo
@IBAction func tempoDidChange(sender: UIControl) {
    ABLLinkManager.shared.bpm = tempo.bpm
}
For more information on Ableton Link, please visit their website.
(Blog photo courtesy of Ableton Link website)
I’m trying to install the full example, but getting this error:
…
Undefined symbols for architecture x86_64:
“_ABLLinkSetIsPlaying”, referenced from:
audioCallback(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) in AudioEngine.o
“_OBJC_CLASS_$_ABLLinkSettingsViewController”, referenced from:
objc-class-ref in AudioEngineManager.o
“_ABLLinkRequestBeatAtStartPlayingTime”, referenced from:
audioCallback(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) in AudioEngine.o
“_ABLLinkCommitAudioSessionState”, referenced from:
audioCallback(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) in AudioEngine.o
“_ABLLinkSetSessionTempoCallback”, referenced from:
-[AudioEngine initLinkData:] in AudioEngine.o
“_ABLLinkCaptureAudioSessionState”, referenced from:
audioCallback(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) in AudioEngine.o
“_ABLLinkIsPlaying”, referenced from:
-[AudioEngine isPlaying] in AudioEngine.o
audioCallback(void*, unsigned int*, AudioTimeStamp const*, unsigned int, unsigned int, AudioBufferList*) in AudioEngine.o
“_ABLLinkGetTempo”, referenced from:
-[AudioEngine bpm] in AudioEngine.o
…
…
Does this example work for non-Audio Unit projects?
Do we need to implement the audioCallback and the LinkData struct in Swift non-Audio Unit projects? I’m totally lost.
It’s for both, actually. If you have an Audio Unit, then you should use that audio unit’s render loop to implement Link. If you don’t, you need a custom timer and use it as a render loop. The tutorial covers both. You need the LinkData struct in both cases.
Hey @RICK,
It was a CocoaPods issue which is fixed now. You can build it without any errors.
Hey Cem,
I‘m trying to run AuSequencer but getting this error in Xcode:
“ld: library not found for -lABLLink
clang: error: linker command failed with exit code 1 (use -v to see invocation)“
Link documentation says:
“libABLLink.a: A static library containing the implementation of Link. This file is not in the repo – you must download a release to get it.„
I don‘t understand what release I have to download or where.
Thanks in advance for any help.
Hi Guido,
Yes, LinkKit is a private SDK and you need to request it from Ableton by filling out a form here:
https://ableton.github.io/linkkit/
They will give you access to their private GitHub repo where LinkKit lives. Then you should download the latest release and import it into your project.
Thanks. Now, in a non-Audio Unit project, how do I implement the render loop? The timer is triggering the update function, and that should drive the render loop… but it is in C… should it be located in another class? I’m lost here, could you give me some clue? An example would be really appreciated.
Ok, so… the #selector(update) on the timer is the function in ABLLinkManager, from your Swift port. But I’m getting two errors:
1.
timer = Timer.scheduledTimer(
timeInterval: timerSpeed,
target: ABLLinkManager.shared,
selector: #selector(update),
userInfo: nil,
repeats: true)
Error … Use of unresolved identifier ‘update’
2.
ABLLinkManager.shared.add(listener: .connection({ isConnected in
if isConnected {
ABLLinkManager.shared.start()
} else {
ABLLinkManager.shared.stop()
}
}))
Error … Value of type ‘ABLLinkManager’ has no member ‘start
Error …. Value of type ‘ABLLinkManager’ has no member ‘stop’
If anyone is having this issue: the start() and stop() functions were removed in a previous ABLLinkManager.swift revision. The example on GitHub is not updated. They just start or stop the timer.
How would you recommend synchronising AKSequencer to the beat time of Ableton Link (i.e. starting the sequencer on the beat)?
You should check the beatTime; if it’s less than zero, you need to wait until the next bar before starting/playing the sequence.
You can calculate the playing beat within the current bar with this code, as the LinkHut example also demonstrates:
floor(fmod(_quantum + beatTime, _quantum))
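Roughly, in Swift, it looks something like this (just a sketch; link, sequencer, and the quantum of 4 are placeholders for your own objects):

let quantum: Float64 = 4
let state = ABLLinkCaptureAppSessionState(link)
let beatTime = ABLLinkBeatAtTime(state, mach_absolute_time(), quantum)
let beatInBar = floor(fmod(quantum + beatTime, quantum))

// Start only once the Link timeline is running and we are on the first beat of the bar.
if beatTime >= 0 && beatInBar == 0 {
    sequencer.play() // e.g. your AKSequencer
}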
Thanks for the quick response – that was my approach. I set up a timer that checked the playing beat every 1 ms, and I also had a shouldPlay flag set when the sequencer was started in the UI. In the timer callback I checked if shouldPlay was true and, if so, checked whether we were at the start of the bar.
This worked in some cases but most of the time the sequencer would start a little off beat (noticeable by ear). I’m guessing this has to do with me using a timer instead of an audio render loop.
How do you recommend sending requestStart & requestStop from a Swift project?
Hello! I’m currently working on integrating Ableton Link with AKSequencer and having the same issue as you. Have you found any solution?
Here is how I did it. My general setup is quite similar to AudioKit Synth One, which means Swift -> Objective-C -> C++. I was confused about AudioUnitSetProperty as well and figured out one doesn’t need it. Instead you can also initialize Ableton Link directly in the init method of my core C++ audio file, like this:
void abletonSetup() {
    struct mach_timebase_info timeInfo = mach_timebase_info_data_t();
    mach_timebase_info(&timeInfo);

    ABLLinkRef linkRef = ABLLinkNew(bpm);
    EngineData sharedEngineData = EngineData();
    EngineData localEngineData = EngineData();

    _linkData = LinkData();
    _linkData.ablLink = linkRef;
    _linkData.sampleRate = sampleRate;
    _linkData.secondsToHostTime = (1.0e9 * Float64(timeInfo.denom)) / Float64(timeInfo.numer);
    _linkData.sharedEngineData = sharedEngineData;
    _linkData.localEngineData = localEngineData;
    _linkData.timeAtLastClick = 0;
    _linkData.isPlaying = false;
}
I copied pullEngineData() and audioCallback() and the structs from this tutorial into C++, then created a public variable of type LinkData and made sure that audioCallback() does not depend on any incoming data through parameters. For example:
UInt64 hostTimeAtBufferBegin = mach_absolute_time() + engineData.outputLatency;
Then just call audioCallback() from your main audio thread process function.
It took me a while to understand that audioCallback() is the place to put my own app logic in.
For counting in and starting in sync I use the solution from @cemolcay. Thanks!
In order to show Ableton’s settings (ABLLinkSettingsViewController) in the frontend, Swift now references the ABLLinkRef (_linkData.ablLink) from my C++ class via bridging functions.
Finally, there is this amazing tutorial for adding Audio Unit / Audiobus support to your app:
https://audiokit.io/audiobus/
Audiobus combined with Ableton Link works pretty well.
Sometimes there is a bit of latency though.
I am not sure yet where to add the latency, which is introduced by the software of my app.
Should it be added to _linkData.sharedEngineData.outputLatency or only to hostTimeAtBufferBegin inside the audioCallback() function? What is its format? I guess it’s samples? What’s with the buffer size? *confused