====== On "Audio Kits" and the Future of AudioKit ======
This was originally written 2021-02-05 as an RTF file, but I decided to redo it using DokuWiki.
  
===== Concept of an Audio Kit =====
  
I define an //Audio Kit// as a collection of software resources which allow novice and intermediate programmers to produce audio programs using a high-level language, without the need to write real-time DSP code, using the “conductor” principle. Let’s break this down as five parts.
  
==== A collection of software resources ====
  
This refers very specifically to a high-level language programming framework, supported by additional resources such as:

  * Assets, such as GUI widgets, images, sound files or samples.
  
==== Audio programs ====
  
This term is intended to encompass applications and plug-ins, for mobile and/or desktop platforms, where the primary emphasis is on interactive programs which generate and/or process audio in real time and can connect to audio-related I/O devices, including MIDI systems. Some audio kits may also include resources to create non-real-time programs, e.g., programs to generate and/or play audio files.
  
==== High-level (programming) language ====
  
AudioKit is the canonical “audio kit”. It is based entirely on the use of Swift, a modern programming language which is not suitable for real-time DSP development, but excels at the non-DSP aspects of audio program development, which fall into three main categories:

The interface category deserves further explanation. It encompasses everything required for the audio program (which may be a plug-in) to connect to, and interoperate with, related software (e.g. a DAW) in support of real-time, interactive audio and GUI functions.
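
To make the interface category concrete, here is a minimal sketch of an AUv3 plug-in shell in Swift, using Apple’s AUAudioUnit API. The class name is illustrative, and a real plug-in would do its rendering in C/C++ rather than directly in the Swift closure shown:

<code swift>
import AudioToolbox
import AVFoundation

// Minimal sketch of the "interface" plumbing: a bare AUv3 plug-in shell.
// The class name is illustrative; only the AUAudioUnit API is Apple's.
class SilentAudioUnit: AUAudioUnit {
    private var outputBusArray: AUAudioUnitBusArray!

    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        let format = AVAudioFormat(standardFormatWithSampleRate: 44_100.0, channels: 2)!
        let bus = try AUAudioUnitBus(format: format)
        outputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .output, busses: [bus])
    }

    override var outputBusses: AUAudioUnitBusArray { outputBusArray }

    // The host (e.g. a DAW) calls this block on its real-time audio thread.
    override var internalRenderBlock: AUInternalRenderBlock {
        { actionFlags, timestamp, frameCount, outputBusNumber,
          outputData, realtimeEventListHead, pullInputBlock in
            // A real plug-in fills outputData here; this one renders silence.
            for buffer in UnsafeMutableAudioBufferListPointer(outputData) {
                if let data = buffer.mData { memset(data, 0, Int(buffer.mDataByteSize)) }
            }
            return noErr
        }
    }
}
</code>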
  
==== Real-time DSP code ====
  
This is any code which processes audio and related data (e.g. MIDI) with real-time responsiveness. No audio kit should require custom DSP coding, though some may support it, to some degree.
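
To show what even trivial real-time DSP code involves, here is a deliberately naive sine generator written against Apple’s AVAudioSourceNode render callback. Production DSP code would be C/C++, and this is exactly the layer an audio kit should let its users skip:

<code swift>
import AVFoundation

// Naive sine generator written directly in the audio render callback.
// Everything inside the block runs on the real-time audio thread.
let sampleRate = 44_100.0
let frequency = 440.0
var phase = 0.0

let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let phaseStep = 2.0 * Double.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += phaseStep
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers where buffer.mData != nil {
            buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

let engine = AVAudioEngine()
engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
try engine.start()
</code>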
  
-=== The "conductor" principle ===+==== The "conductor" principle ====
  
The most important aspect of an audio kit is its ability to serve as a scripting system for audio programs. I refer to this as the conductor principle, because it is embodied perfectly in the “conductor” portion of user-written code in AudioKit, Csound, etc.
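
For example, the conductor portion of a trivial AudioKit program is only a few lines of Swift. This sketch uses AudioKit 4-style names (exact class names differ across AudioKit versions):

<code swift>
import AudioKit

// The "conductor": script the DSP graph, start the engine, then
// interact with running DSP objects in real time.
let oscillator = AKOscillator()
let reverb = AKReverb(oscillator)
AudioKit.output = reverb
try AudioKit.start()

oscillator.start()
oscillator.frequency = 440.0  // live parameter change; no DSP code written
</code>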
  
===== What is, and is not, an audio kit? =====
  
As I said earlier, AudioKit is the canonical audio kit. It meets all five of the conditions listed above.

See https://en.wikipedia.org/wiki/Comparison_of_audio_synthesis_environments for an excellent overview of software audio synthesis environments, and if you’re interested, check each against the five conditions to see which might qualify as an audio kit.
  
===== What about multi-platform targeting? =====
  
The ability to write code once and deploy it on multiple platforms (e.g. Macintosh, Windows, Linux, iOS, Android, RasPi, other embedded hardware, etc.), and/or with support for multiple interface standards (e.g. VST/VST3, Audio Units v2/v3, LV2, network protocols, etc.), however desirable and practical, is not a requirement for a programming system to be called an audio kit.
  
===== Expanding on the Conductor Principle in AudioKit =====
  
What I’m calling the “conductor principle” is the notion that a program written in a high-level language like Swift can script the construction of composite structures in a DSP library, which then process audio autonomously on a separate thread, and at the same time present a control/parameters API through which the high-level program can interact with them in real-time (without threading issues).
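
In practice this usually takes the shape of a “Conductor” class, roughly as sketched below (AudioKit 4-style names; the class itself and its property names are illustrative):

<code swift>
import AudioKit

// Illustrative "conductor" class: Swift scripts the construction of the
// DSP graph once; the graph then renders autonomously on the audio
// thread, while these properties form the control/parameters API.
class Conductor {
    private let oscillator = AKOscillator()
    private let filter: AKLowPassFilter

    init() {
        filter = AKLowPassFilter(oscillator)
        AudioKit.output = filter
    }

    func start() throws {
        try AudioKit.start()
        oscillator.start()
    }

    // Called from the GUI thread; the underlying parameter mechanism
    // hands the new value to the audio thread safely.
    var cutoff: Double {
        get { filter.cutoffFrequency }
        set { filter.cutoffFrequency = newValue }
    }
}
</code>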
  
===== AudioKit architecture and its limitations =====
  
In AudioKit, the DSP library is the collection of “AK…” object classes, and everything else is based on the Audio Units mechanisms provided by Apple operating systems (Core Audio).

  - The AU is too large and too limited to be a basic unit of DSP code.
  
===== AudioKit fails to accommodate significant use cases =====
  
Re #2: The Audio Units technology was designed around the needs of a DAW, whose plug-ins are complete audio processors (generators, instruments, audio effects, and MIDI effects) that are usually joined in very simple linear chains. This “coarse-grained” approach is not suitable for important cases such as:

  * Dynamic voice allocation in a polyphonic instrument
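
For instance, dynamic voice allocation means creating, stealing, and retiring per-note DSP structures on the fly, which a fixed graph of coarse-grained Audio Units cannot express. A toy allocator conveys the idea (all names here are illustrative, not AudioKit API):

<code swift>
// Toy dynamic voice allocator. Real per-voice DSP state (oscillators,
// envelopes, filters) would live inside each Voice.
struct Voice {
    var noteNumber: Int
    var age: Int = 0
}

final class VoiceAllocator {
    private var voices: [Voice] = []
    private let maxVoices: Int

    init(maxVoices: Int = 8) { self.maxVoices = maxVoices }

    func noteOn(_ noteNumber: Int) {
        for i in voices.indices { voices[i].age += 1 }
        if voices.count >= maxVoices {
            // Steal the oldest voice rather than refuse the new note.
            if let oldest = voices.indices.max(by: { voices[$0].age < voices[$1].age }) {
                voices.remove(at: oldest)
            }
        }
        voices.append(Voice(noteNumber: noteNumber))
    }

    func noteOff(_ noteNumber: Int) {
        voices.removeAll { $0.noteNumber == noteNumber }
    }
}
</code>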
  
===== AudioKit fails to accommodate significant "AudioKit apps" =====
  
Because of this limitation, in AudioKit SynthOne it was necessary to pull all such dynamic functionality into a single Audio Unit, just as in conventional DAW plug-ins. The result is regrettable for two reasons:

Key AudioKit Pro branded apps based around the new AKSampler (Digital D1, FM Player 2, and others) presented substantial programming challenges, as programmers tried using Swift code to compensate for key features (such as LFOs) which weren’t included in the original DSP implementation.
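
Such workarounds typically amounted to something like the following sketch: a control-rate “LFO” running as a main-thread Swift timer, pushing parameter updates into the DSP layer because it had no LFO of its own. The names, including the setFilterCutoff callback, are illustrative:

<code swift>
import Foundation

// Sketch of the Swift-side workaround: a "fake" LFO implemented as a
// 60 Hz timer on the main thread, modulating a parameter the DSP code
// should really have modulated itself. setFilterCutoff stands in for
// whatever sampler parameter setter the app actually used.
final class SwiftLFO {
    private var timer: Timer?
    private var phase = 0.0
    let rateHz = 5.0      // LFO frequency
    let updateHz = 60.0   // control-rate updates only

    func start(setFilterCutoff: @escaping (Double) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / updateHz,
                                     repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.phase += 2.0 * Double.pi * self.rateHz / self.updateHz
            // 1000-3000 Hz sweep: coarse and jittery compared to a true
            // sample-rate LFO inside the DSP code.
            setFilterCutoff(2000.0 + 1000.0 * sin(self.phase))
        }
    }

    func stop() { timer?.invalidate() }
}
</code>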
  
===== A failed experiment =====
  
Later, I tried to create a collection of C++ based synth building-block classes (e.g. oscillators, dynamic voice management) in a now-defunct Core Synth branch of the AudioKit source tree. Although these components worked, I consider this a failed approach, for three reasons:

  - Most significant of all: the new C++ objects were not scriptable at the Swift level. Hence the whole approach simply sidestepped the central principle of an audio kit.
  
===== A better approach? =====
  
I am now thinking that the best way around these issues will be to add a new, dynamic, scriptable DSP subsystem to AudioKit:

This is nothing more than wishful thinking right now. I don’t yet have any specific proposals for how it might be architected/implemented, and I would expect a lot of careful research and experimentation will be needed before a workable design could be devised. I think it’s worth doing.
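
Purely as a thought experiment (and emphatically not a design proposal), the following sketch suggests what “scriptable at the Swift level” might feel like; every type and method here is hypothetical:

<code swift>
// Hypothetical illustration only: fine-grained DSP blocks assembled
// into a per-voice sub-graph at run time, the way a coarse-grained
// Audio Unit graph cannot be. None of this is an existing AudioKit API.
final class Block {
    let kind: String
    var parameters: [String: Double]
    init(kind: String, parameters: [String: Double]) {
        self.kind = kind
        self.parameters = parameters
    }
}

final class DSPGraph {
    private(set) var blocks: [Block] = []
    private(set) var connections: [(from: Block, to: Block)] = []

    func add(_ block: Block) -> Block {
        blocks.append(block)
        return block
    }
    func connect(_ from: Block, to: Block) {
        connections.append((from, to))
    }
}

// Scripting a per-voice sub-graph from fine-grained building blocks:
let voice = DSPGraph()
let osc = voice.add(Block(kind: "oscillator", parameters: ["frequency": 440.0]))
let filter = voice.add(Block(kind: "lowpass", parameters: ["cutoff": 1200.0]))
voice.connect(osc, to: filter)
filter.parameters["cutoff"] = 400.0  // individually addressable in real time
</code>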
  
===== This is only the beginning =====
  
I could go on and on, but I’ll restrain myself. I’ve hardly said anything about the importance of supporting standard interface technologies such as VST/VST3/AU/AUv3, and I haven’t talked about how nothing in the proposed new approach is at all specific to Swift, so it’s straightforward to imagine adding bindings to other high-level languages, thus extending the audio kit concept to non-Apple platforms.