GetDunne Wiki

Notes from the desk of Shane Dunne, software development consultant

Overview of the VanillaJuce code

In the following, whenever I want to refer to a pair of files, e.g. PluginProcessor.h/.cpp, I'll use the typewriter font, but leave off the file extension, e.g. PluginProcessor.

The VanillaJuce code consists of three groups of files:

  1. PluginProcessor and PluginEditor represent the VanillaJuce plugin, as seen from the outside, i.e. by a DAW or other plugin host program.
  2. All the files starting with Synth represent the synthesizer (DSP) aspect.
  3. All the files starting with Gui represent the GUI aspect.

The "processor" object

The PluginProcessor files are the most important. These define a new C++ class, VanillaJuceAudioProcessor, derived from juce::AudioProcessor. Every plugin needs a juce::AudioProcessor-derived object (that object instance is the plugin). The GUI, which is defined by the PluginEditor files, is actually optional; the fact that VanillaJuceAudioProcessor::hasEditor() returns true is what tells the plugin host that this particular plugin also has a custom GUI.
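
Here is a minimal sketch of the two juce::AudioProcessor overrides involved (not the full VanillaJuce declaration, which has many more members):

class VanillaJuceAudioProcessor : public AudioProcessor
{
public:
    // returning true tells the host that this plugin provides its own editor GUI
    bool hasEditor() const override { return true; }

    // the host calls this when it wants the GUI created
    AudioProcessorEditor* createEditor() override
    {
        return new VanillaJuceAudioProcessorEditor (*this);
    }

    ...
};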

The processor needs to be able to notify the GUI editor when it changes one or more synth parameters (e.g. when a new preset is selected), so the editor can update the GUI display. This can be done in any number of ways, but I chose to have the VanillaJuceAudioProcessor class also derive from juce::ChangeBroadcaster, and the VanillaJuceAudioProcessorEditor inherit from juce::ChangeListener. The processor calls its sendChangeMessage() function to notify the editor, which results in a call to the editor's changeListenerCallback() function.
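
Here is a sketch of the listener side of that arrangement (simplified; the constructor argument and the processor member name are assumptions, not the exact VanillaJuce code):

class VanillaJuceAudioProcessorEditor : public AudioProcessorEditor,
                                        public ChangeListener
{
public:
    VanillaJuceAudioProcessorEditor (VanillaJuceAudioProcessor& p)
        : AudioProcessorEditor (&p), processor (p)
    {
        // register to receive the processor's change notifications
        processor.addChangeListener (this);
    }

    ~VanillaJuceAudioProcessorEditor()
    {
        processor.removeChangeListener (this);
    }

    void changeListenerCallback (ChangeBroadcaster*) override
    {
        // refresh all GUI controls from the processor's current parameter values
    }

private:
    VanillaJuceAudioProcessor& processor;
};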

To understand how parameter changes are propagated in the reverse direction—from GUI to synthesizer—we need the following overview of some of the objects which make up the DSP aspect of VanillaJuce.

The "Synth" objects

The DSP aspect of VanillaJuce is represented by four main classes as follows:

  • Synth (derived from juce::Synthesiser) represents the synthesizer itself
    • There is exactly one Synth instance, which is a member variable of VanillaJuceAudioProcessor.
  • SynthVoice (derived from juce::SynthesiserVoice) represents the complete sound-generating apparatus for a single voice, i.e., one sounding note.
    • SynthVoice encapsulates two SynthOscillator objects and one SynthEnvelopeGenerator object, which it uses to render incoming MIDI to output audio
    • The VanillaJuceAudioProcessor constructor creates 16 SynthVoice objects and adds them to the Synth instance (see the constructor sketch just after this list).
  • SynthParameters (not derived from any JUCE class) is basically a struct full of member variables representing, e.g., oscillator waveforms, ADSR settings, etc.—all the details which collectively define one synth preset (or “program” in plugin parlance).
    • The VanillaJuceAudioProcessor object has a programBank member variable, which is an array of 128 SynthParameters objects.
  • SynthSound (derived from juce::SynthesiserSound) serves to link the other three classes.
    • The VanillaJuceAudioProcessor constructor creates exactly one SynthSound object and adds it to the Synth instance, but retains a pointer to it in its pSound member variable.
    • The SynthSound object contains a reference to the Synth object (which never changes), and a pointer to the currently-selected preset (a SynthParameters object, one of the elements of the processor's programBank array).
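
To make those relationships concrete, here is a sketch of how the processor's constructor might wire everything together (the synth member name is an assumption; pSound and programBank are described above):

VanillaJuceAudioProcessor::VanillaJuceAudioProcessor()
{
    // 16 voices => 16-note polyphony
    for (int i = 0; i < 16; ++i)
        synth.addVoice (new SynthVoice());

    // one shared sound, applying to all notes and all channels
    pSound = new SynthSound (synth);
    pSound->pParams = &programBank[0];   // start on the first of the 128 presets
    synth.addSound (pSound);

    ...
}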

The SynthSound object and class juce::SynthesiserSound

The JUCE documentation says very little about the SynthesiserSound class. The class itself is almost trivial:

class JUCE_API  SynthesiserSound    : public ReferenceCountedObject
{
protected:
    //==============================================================================
    SynthesiserSound();
 
public:
    /** Destructor. */
    virtual ~SynthesiserSound();
 
    //==============================================================================
    /** Returns true if this sound should be played when a given midi note is pressed.
 
        The Synthesiser will use this information when deciding which sounds to trigger
        for a given note.
    */
    virtual bool appliesToNote (int midiNoteNumber) = 0;
 
    /** Returns true if the sound should be triggered by midi events on a given channel.
 
        The Synthesiser will use this information when deciding which sounds to trigger
        for a given note.
    */
    virtual bool appliesToChannel (int midiChannel) = 0;
 
    /** The class is reference-counted, so this is a handy pointer class for it. */
    typedef ReferenceCountedObjectPtr<SynthesiserSound> Ptr;
 
 
private:
    //==============================================================================
    JUCE_LEAK_DETECTOR (SynthesiserSound)
};

The constructor and destructor are empty, and the two pure-virtual member functions appliesToNote() and appliesToChannel() are very simple. appliesToNote() is clearly there to support things like keyboard splits, where different sounds are used for different note ranges, and appliesToChannel() would appear to work similarly to support multi-timbral synths, where different MIDI channels trigger different sounds. But what is this mysterious “sound” thing, and why does this class even exist?
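
Before answering that, here is a hypothetical illustration (not part of VanillaJuce) of what these two functions make possible: a sound restricted to the lower half of the keyboard, responding only on MIDI channel 1.

class LowerSplitSound : public SynthesiserSound
{
public:
    // only notes below middle C trigger this sound (a simple keyboard split)
    bool appliesToNote (int midiNoteNumber) override    { return midiNoteNumber < 60; }

    // only MIDI channel 1 triggers this sound (a simple multi-timbral setup)
    bool appliesToChannel (int midiChannel) override    { return midiChannel == 1; }
};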

The answer can be found in class juce::SynthesiserVoice, specifically SynthesiserVoice::startNote(). Have a look at this collection of override functions in class SynthVoice. (The ellipses … indicate where other code has been omitted for clarity.)

class SynthVoice : public SynthesiserVoice
{
    ...
 
    bool canPlaySound(SynthesiserSound* sound) override
    { return dynamic_cast<SynthSound*> (sound) != nullptr; }
 
    ...
 
    void startNote(int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition) override;
    void stopNote(float velocity, bool allowTailOff) override;
    void pitchWheelMoved(int newValue) override;
    void controllerMoved(int controllerNumber, int newValue) override;
 
    void renderNextBlock(AudioSampleBuffer& outputBuffer, int startSample, int numSamples) override;
 
    ...
};

Just by looking at this, without delving into the source for class juce::Synthesiser, you can see that its voice-assignment code most likely calls canPlaySound() to ensure that a given voice can actually play the given sound, and if so, calls startNote() with the current MIDI note number, key-down velocity and pitch-wheel position, plus a pointer to the sound object. Hence, unless we choose to add a lot of extra member variables to our SynthVoice class, the only way our voice objects can know what sound to make is via the SynthesiserSound* sound parameter to startNote().
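
A simplified sketch of what that looks like in practice (not the full VanillaJuce implementation of startNote()):

void SynthVoice::startNote (int midiNoteNumber, float velocity,
                            SynthesiserSound* sound, int /*currentPitchWheelPosition*/)
{
    // safe, because canPlaySound() has already confirmed the dynamic type
    SynthSound* pSound = dynamic_cast<SynthSound*> (sound);

    // ... use pSound to look up the current preset, set up the oscillators and
    // envelope generator, and start sounding the note ...
}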

So, here is the VanillaJuce SynthSound class declaration:

class SynthSound : public SynthesiserSound
{
private:
    Synth& synth;
 
public:
    SynthSound(Synth& ownerSynth);
 
    // our sound applies to all notes, all channels
    bool appliesToNote(int /*midiNoteNumber*/) override { return true; }
    bool appliesToChannel(int /*midiChannel*/) override { return true; }
 
    // pointer to currently-used parameters bundle
    SynthParameters* pParams;
 
    // call to notify the owner Synth that parameters have changed
    void parameterChanged();
};

The synth member variable is a reference to the owning Synth object (which never changes). pParams is a pointer to the currently-selected preset. I've made pParams public so that the VanillaJuceAudioProcessor object (which creates and “owns” the one SynthSound object) can change it whenever a different preset is selected, so that it points to the appropriate entry in the programBank array.
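
For example, the processor's setCurrentProgram() override might repoint pParams along these lines (a sketch only, not the actual VanillaJuce code; the currentProgram member is an assumption):

void VanillaJuceAudioProcessor::setCurrentProgram (int index)
{
    currentProgram = index;                   // assumed member tracking the selected program
    pSound->pParams = &programBank[index];    // point the shared sound at the new preset
    pSound->parameterChanged();               // let any active voices pick up the new settings
    sendChangeMessage();                      // let the editor refresh its display
}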

All the Gui… class constructors take a SynthSound* argument, so they can use the pParams member to access the current parameter values, in order to display and modify them. Furthermore, whenever any part of the GUI changes a parameter value, it calls the parameterChanged() function, which is just this:

void SynthSound::parameterChanged()
{
    synth.soundParameterChanged();
}
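
So, for example, a slider handler in one of the Gui classes might look something like this (a hypothetical sketch; the class, control, and parameter names are made up for illustration):

void GuiOscillatorTab::sliderValueChanged (Slider* slider)
{
    if (slider == &detuneSlider)
    {
        // write the new value into the shared parameter block...
        pSound->pParams->osc2DetuneSemitones = (float) detuneSlider.getValue();

        // ...then tell the sound, so that any active voices get updated
        pSound->parameterChanged();
    }
}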

Synth::soundParameterChanged() simply iterates over all active (currently-sounding) voices, and calls their soundParameterChanged() function. (I looked at the code for juce::Synthesiser to see how it handles iterating over all voices.)

void Synth::soundParameterChanged()
{
    // Some sound parameter has been changed. Notify all active voices.
    const ScopedLock sl(lock);
 
    for (int i = 0; i < voices.size(); ++i)
    {
        SynthVoice* const voice = dynamic_cast<SynthVoice*>(voices.getUnchecked(i));
        if (voice->isVoiceActive())
            voice->soundParameterChanged();
    }
}

The code for SynthVoice::soundParameterChanged() is not so trivial, but all it really does is re-initialize the currently sounding note so that the sound changes to reflect whatever was changed in the GUI.
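
Conceptually it amounts to something like the following sketch (the member and setter names here are assumptions, not the actual VanillaJuce code):

void SynthVoice::soundParameterChanged()
{
    // re-apply the (possibly changed) parameters to the currently sounding note
    SynthParameters* pParams = pSound->pParams;    // assumes the voice kept a pointer to its SynthSound

    osc1.setWaveform (pParams->osc1Waveform);                  // hypothetical setters
    osc2.setWaveform (pParams->osc2Waveform);
    eg.setAttackSeconds (pParams->attackTimeSeconds);
    ...
}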
