With non-linear DAWs such as Ableton Live, Bitwig, and Tracktion Waveform, your computer can play an active role in your workflow. Whether it’s a creative task like writing bass lines, or something as specific as choosing where to boost or attenuate frequencies, your software and hardware can step up to the plate and start making suggestions.
Over the course of two articles, we’ll be looking at several ways you can use your devices to generate ideas and create a machine-generated path through several processes. While we’ve used Ableton Live for the purpose of this article, the following applies to any non-linear DAW.
What is a non-linear DAW?
In a non-linear DAW, you are freed from the constraints of a timeline and able to play any number of clips—aka regions, for all you linear DAW users out there—in any succession at any given point in time. There is no beginning, middle, or end in a non-linear DAW. There is only now, then, and later.
Linear music is essentially an experience we have as listeners, not as users. As people who engage with music, whether as technicians, creatives, or performers, the majority of the time we're all over the place. When we compose and produce, we don't start with the first note and end with the last—ideas come to us out of sequence and we put them together accordingly. When we mix, there's a lot of starting, stopping, isolating, and tweaking that goes on, and when we practice, we only play things through from beginning to end once we've worked out everything leading up to that point.
Non-linear DAWs offer us environments that mimic this timeline-free approach to experiencing music. Of course, you can also create an exportable, linear version of your idea to stick into a timeline, so in effect, non-linear DAWs have linear functionality as well. For a quick hands-on experience, check out the first page of Ableton's web-based learning modules.
What are Follow Actions?
Follow Actions are powerful things that will do your bidding. From the Ableton manual, “Follow Actions allow creating chains of clips that can trigger each other in an orderly or random way (or both).”
In a linear sense, Follow Actions are the crux of how playback works for live bands playing to tracks. One set of clips plays back at a certain tempo and once that song is done, it triggers a BPM change and launches the next set of clips, which continues to cascade until the end of the show. The band is playing along to a click/grid, and all is well. There is much more depth and flexibility to playback than this paragraph can explain, but many people primarily use Follow Actions for this specific and very linear live purpose.
Follow Actions also have the ability to create randomness and self-generate their playback order based upon parameters you set. You can tell a clip to stop, play again, play the previous clip, play the next clip, play the first clip in an adjacent grouping, play the last clip in an adjacent grouping, play any clip in the group including the one that's currently playing, or play any other clip.
In addition to setting parameters around what sort of action should take place, we can set the length of time after which the action occurs. For instance, I can tell a clip to launch any other clip after three bars and three beats. To take this a step further, I can create two separate Follow Actions for a single clip and set a ratio for how likely each one is to happen. So let's say I have two Follow Actions: one instructs the clip to play itself again after one beat, the other is set to play any other clip after one beat, and I have set this to a 2:1 ratio. That means that after one beat, two out of every three times the clip will start over, and one out of three times it will launch another clip in the group.
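As a rough sketch of that probability logic—an illustrative model in Python, not Ableton's actual implementation—the 2:1 "play again vs. play any other clip" decision at the end of each beat could look like this:

```python
import random

def next_clip(current, group, rng=random.random):
    """Pick the next clip to launch, given a 2:1 Play Again / Other ratio."""
    if rng() < 2 / 3:  # two out of every three times: replay the same clip
        return current
    others = [c for c in group if c != current]
    return random.choice(others)  # one out of three: any other clip

# Simulate 16 one-beat steps through a group of four clips
group = [0, 1, 2, 3]
clip = 0
sequence = [clip]
for _ in range(16):
    clip = next_clip(clip, group)
    sequence.append(clip)
print(sequence)
```

Run this a few times and you'll see the same character the video demonstrates: long runs of a repeating clip, punctuated by unpredictable jumps to another one.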
In this video, you’ll see this exact scenario in action, with a breakbeat audio file. Watch and listen as the computer generates drum patterns.
Use random and scale MIDI processing devices together
All DAWs have native audio effects for processing audio, and many also come with MIDI effects for processing MIDI, like an arpeggiator. MIDI data gets processed before it's sent to a virtual instrument, so by giving the software specific instructions we can randomize and manipulate the output of a MIDI effect while placing certain restrictions on it.
An arpeggiator takes incoming MIDI information, such as a chord being played, and then arpeggiates it so that only one note is being played at a time in a certain order at a certain rhythmic subdivision. At a very basic level, setting the sequence of an arpeggiator to randomize is itself a form of letting the computer dictate the terms of an instrument's output. But let's take this a step further.
In addition to the ubiquitous arpeggiator, there are several other interesting MIDI devices we can play with.
Random: a random MIDI device takes in a MIDI note, which carries a specific pitch, and spits out another. C4 comes in, A#6 comes out. Left to its own devices, random will create atonal results. Certain parameters on the device restrict the output, such as whether the outgoing note lands above or below the incoming pitch.
Scale: a scale MIDI device takes incoming MIDI information and pitch quantizes it so that it fits inside a specific scale. If a bunch of MIDI notes in the key of C major are sent to a scale device set to A# minor, something in the key of A# minor comes out.
To connect these two devices, let’s say that randomized MIDI notes come into a scale device set to A# minor. The device then quantizes all the pitches of those notes to fit the A# minor scale. And voila, you have yourself something tonal.
Let’s have the computer generate a bass line for us using these two devices. In this next example, there are three regions, each with a different MIDI sequence which consists of a single note played with three different rhythms. That note gets sent to the random device which randomizes it to another note above it up to 12 semitones or one octave away. That new note then gets sent to the scale device which shifts the pitch to its closest degree inside the A# minor scale.
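To make the chain concrete, here is a minimal Python model of those two devices working together, under a few assumptions: a natural minor scale for A# minor, a root note of A#2 (MIDI 46) as the single note in the sequences, and nearest-pitch quantization. This is an illustration of the idea, not Ableton's internals.

```python
import random

# Pitch classes of A# natural minor: A#, C, C#, D#, F, F#, G#
ASHARP_MINOR = {10, 0, 1, 3, 5, 6, 8}

def random_device(note, max_up=12):
    """Shift the incoming note up by a random interval, up to one octave."""
    return note + random.randint(0, max_up)

def scale_device(note, scale=ASHARP_MINOR):
    """Quantize a MIDI note to the nearest pitch that belongs to the scale."""
    candidates = range(note - 6, note + 7)
    in_scale = [n for n in candidates if n % 12 in scale]
    return min(in_scale, key=lambda n: abs(n - note))

root = 46  # A#2, the single note in each MIDI sequence (hypothetical choice)
bass_line = [scale_device(random_device(root)) for _ in range(8)]
print(bass_line)
```

Every note the random device produces gets pulled to its closest A# minor scale degree, so the printed bass line is always tonal no matter what the randomizer picks.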
To combine these two concepts, I'll take the three rhythmic MIDI sequences and give them the exact same Follow Actions as the first example: a 2:1 ratio between the clip playing itself again and launching any other clip. The time parameter for the bass clips, however, will be set to one bar as opposed to one beat. Global record will be activated, and then a set of clips will be launched to start the Follow Actions.
From there the software is running the show based upon instructions we provided it, and the results are computer generated, algorithmic music.
Are they coming for our jobs?
Before you start feeling threatened by the software's ability to compose, remember that this technique only yields listenable results because a human is making musical decisions on its behalf. Computers only do what we tell them to, and with these techniques, you can tell yours to play an active role in your process.