GetDunne Wiki

Notes from the desk of Shane Dunne, software development consultant

 ====== On "Audio Kits" and the Future of AudioKit ====== ====== On "Audio Kits" and the Future of AudioKit ======
This was originally written 2021-02-05 as an RTF file, but I decided to redo it using DokuWiki.
  
===== Concept of an Audio Kit =====
  
I define an //Audio Kit// as a collection of software resources which allow novice and intermediate programmers to produce audio programs in a high-level language, without the need to write real-time DSP code, using the “conductor” principle. Let’s break this down into five parts.
The ability to write code once and deploy it on multiple platforms (e.g. Macintosh, Windows, Linux, iOS, Android, RasPi, or other embedded hardware), and/or with support for multiple interface standards (e.g. VST/VST3, Audio Units v2/v3, LV2, or network protocols), however desirable and practical, is not a requirement for a programming system to be called an audio kit.
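
To make the definition concrete, here is a minimal sketch of what “high-level code, no DSP” looks like in practice. It assumes an AudioKit-5-style API (AudioEngine, Oscillator, Reverb); depending on the AudioKit version, Oscillator may live in the separate SoundpipeAudioKit package, so treat this as an illustration of the idea rather than a definitive recipe.

<code swift>
import AudioKit
// Assumption: AudioKit 5-era API. In newer releases, Oscillator is provided
// by the SoundpipeAudioKit package rather than the core AudioKit module.

let engine = AudioEngine()
let oscillator = Oscillator()      // the DSP runs inside this node, not in our code
let reverb = Reverb(oscillator)    // wrap the oscillator in a stock effect

engine.output = reverb

do {
    try engine.start()             // audio now runs on its own real-time thread
    oscillator.start()
    oscillator.frequency = 220.0   // real-time control via ordinary property access
    reverb.dryWetMix = 0.5
} catch {
    print("Could not start audio engine: \(error)")
}
</code>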
  
====== Expanding on the Conductor Principle in AudioKit ======
  
What I’m calling the “conductor principle” is the notion that a program written in a high-level language like Swift can script the construction of composite structures in a DSP library. Those structures then process audio autonomously on a separate thread, while presenting a control/parameters API through which the high-level program can interact with them in real time, without threading issues.
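
As a rough illustration of the conductor principle, here is a hedged sketch of the kind of Conductor class that AudioKit example projects tend to use. The name Conductor and the particular node types (Oscillator, LowPassFilter) are my own illustrative choices, not a fixed AudioKit API; the point is that the Swift code only scripts the graph and exposes parameters, while the nodes do all the sample processing on the audio thread.

<code swift>
import AudioKit
import AudioToolbox   // for AUValue (a Float typealias)

// "Conductor" is a naming convention, not a type defined by AudioKit itself.
class Conductor {
    let engine = AudioEngine()
    let osc = Oscillator()          // may come from SoundpipeAudioKit in newer versions
    let filter: LowPassFilter

    init() {
        // The high-level code only *scripts* the graph; the nodes do the DSP.
        filter = LowPassFilter(osc, cutoffFrequency: 1_000)
        engine.output = filter
    }

    func start() throws {
        try engine.start()          // processing begins on the audio thread
        osc.start()
    }

    // Control/parameter API for the UI (or any other high-level code).
    // Setting these properties adjusts the running DSP in real time,
    // without our code ever touching the audio thread directly.
    var frequency: AUValue {
        get { osc.frequency }
        set { osc.frequency = newValue }
    }

    var cutoff: AUValue {
        get { filter.cutoffFrequency }
        set { filter.cutoffFrequency = newValue }
    }
}
</code>

A SwiftUI or UIKit view would then hold a single Conductor instance and bind its sliders to frequency and cutoff, which is exactly the separation of roles described above.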