//shane, 2021/02/07//
====== On "Audio Kits" and the Future of AudioKit ======
This was originally …

===== Concept of an Audio Kit =====
I define an //Audio Kit// as a collection of software resources which allows novice and intermediate programmers to produce audio programs in a high-level language, without the need to write real-time DSP code, using the “conductor” principle. Let’s break this down into five parts.
==== A collection of software resources ====
This refers very specifically to a high-level language programming framework, supported by additional resources such as:
* Assets, such as GUI widgets, images, sound files or samples.
==== Audio programs ====
This term is intended to encompass applications and plug-ins for mobile and/or desktop platforms, where the primary emphasis is on interactive programs which generate and/or process audio in real time and can connect to audio-related I/O devices, including MIDI systems. Some audio kits may also include resources to create non-real-time programs, e.g., programs to generate and/or play audio files.
==== High-level (programming) language ====
AudioKit is the canonical “audio kit”. It is based entirely on the use of Swift, a modern programming language which is not suitable for real-time DSP development, …
The interface category deserves further explanation. It encompasses everything required for the audio program (which may be a plug-in) to connect to, and interoperate with, related software (e.g. a DAW) in support of real-time, interactive audio and GUI functions.
==== Real-time DSP code ====
This is any code which processes audio and related data (e.g. MIDI) with real-time responsiveness. No audio kit should require custom DSP coding, though some may support it to some degree.
==== The "conductor" principle ====
The most important aspect of an audio kit is its ability to serve as a scripting system for audio programs. I refer to this as the conductor principle, because it is embodied perfectly in the “conductor” portion of user-written code in AudioKit, Csound, etc.
===== What is, and is not, an audio kit? =====
As I said earlier, AudioKit is the canonical audio kit. It meets all five of the conditions listed above.
See https://
===== What about multi-platform targeting? =====
The ability to write code once and deploy it on multiple platforms (e.g. Macintosh, Windows, Linux, iOS, Android, RasPi, other embedded hardware, etc.), and/or with support for multiple interface standards (e.g. VST/VST3, Audio Units v2/v3, LV2, network protocols, etc.), however desirable and practical, is not a requirement for a programming system to be called an audio kit.
====== Expanding on the Conductor Principle in AudioKit ======
What I’m calling the “conductor principle” is the notion that a program written in a high-level language like Swift can script the construction of composite structures in a DSP library, which then process audio autonomously on a separate thread, and at the same time present a control/…
===== AudioKit architecture and its limitations =====
In AudioKit, the DSP library is the collection of “AK…” object classes, and everything else is based on the Audio Units mechanisms provided by Apple operating systems (Core Audio).
- The AU is too large and too limited to be a basic unit of DSP code.
===== AudioKit fails to accommodate significant use cases =====
Re #2: The Audio Units technology was designed around the needs of a DAW, whose plug-ins are complete audio processors (generators, …
* Dynamic voice allocation in a polyphonic instrument
===== AudioKit fails to accommodate significant "…" =====
Because of this limitation, in AudioKit SynthOne it was necessary to pull all such dynamic functionality into a single Audio Unit, just as in conventional DAW plug-ins. The result is regrettable for two reasons:
Key AudioKit Pro branded apps based around the new AKSampler (Digital D1, FM Player 2 and others) presented substantial programming challenges, as programmers tried using Swift code to compensate for key features (such as LFOs) which weren’t included in the original DSP implementation.
===== A failed experiment =====
Later, I tried to create a collection of C++ based synth building-block classes (e.g. oscillators, …
- Most significant of all: the new C++ objects were not scriptable at the Swift level. Hence the whole approach simply sidestepped the central principle of an audio kit.
===== A better approach? =====
I am now thinking that the best way around these issues will be to add a new, dynamic, scriptable DSP subsystem to AudioKit:
This is nothing more than wishful thinking right now. I don’t yet have any specific proposals for how it might be architected/…
===== This is only the beginning =====
I could go on and on, but I’ll restrain myself. I’ve hardly said anything about the importance of supporting standard interface technologies such as VST/VST3 …