“Music is feeling, then, not sound …”
– Peter Quince at the Clavier, Wallace Stevens
The conventional album – a set of a dozen or so discrete tracks in a fixed order – is a throwback, a trip down memory lane. The technical constraints that necessitated the form are no longer present. But the notion of album as collection of pieces held together by an overarching principle or theme is still a good one. Here are some starting-point ideas for albums of the 21st century.
Single album: An album consisting of one track. Manifold album: An album consisting of many (20, 200, 2000, etc.) tracks. Miniature album: An album whose total duration is 5 minutes or less. Gargantuan album: An album whose duration is several hours (days, weeks, months, years). Hybridizing these, how about an album of one track that lasts for three days? Or an album of 40 tracks that lasts for three minutes? (That’s an average of just under 5 seconds per track.)
Shuffle album: N tracks to be played back in random order. Enhanced album: music tracks with accompanying slideshows, animations, movies, games, etc. Homogeneous album: N variants of the same song. Heterogeneous album: N utterly different (genre, instrumentation, duration, mood, etc.) tracks. Interstitial album: between-track (interstitial) material is as important as the tracks themselves. Kamikaze album: destroys itself after playback. Plastic album: tracks change (subtly, moderately, dramatically) each time they are played back.
Every musical instrument has its mother tongue, its own native language that it speaks fluently and without accent. Determining factors include pitch range, timbre, articulation, agility, dynamics, and of course instrument-specific idiosyncratic abilities (drum rolls, shrieking electric guitar feedback, wild saxophone runs).
Consider, for example, the cello. Imagine it speaking (singing) a melodic line in its own unique cello-istic mother tongue. Pitch range is about that of a baritone: low to mid-low, with occasional forays into tenor territory. Timbre: sensual, voice-like, mellow to strident. Articulation: tending towards legato. Agility: more stately than that of a violin. Dynamics: barely perceptible whisper to healthy shriek. Idiosyncrasies: can play chords, harmonics, sul ponticello (on the bridge), pizzicato, etc.
Now apply the same analysis to violin, xylophone, electric guitar, timpani, congas, tablas, saxophone, double bass, piccolo, oboe, French horn, tuba, marimba, glockenspiel, zither, sitar, musical saw, human voice, theremin. Clearly, each instrument has its own unique personality: capabilities, limitations, quirks, charms, overall musical gestalt. A critical part of a composer’s job is to become deeply familiar with the personalities of the instruments he writes for. If not, he runs the risk of creating music that sounds wrong (awkward, inappropriate) on the instruments that play it.
Computer as Instrument
Dictionary.com defines instrument as “a contrivance or apparatus for producing musical sounds.” Computers, in this sense, definitely qualify as instruments – particularly for composers who build pieces from the ground up via computer: from sound generation to editing to mixing to mastering. It is entirely possible these days to produce a professional-strength electronica composition from scratch with a computer, keyboard, mouse, and a pair of decent speakers or headphones.
The question arises: If the computer is indeed an instrument, what is its mother tongue? It turns out this is very difficult to answer. An instrument’s mother tongue, its essential personality, is defined as much by what it can’t do as what it can. A flute can’t play below middle C. A piano can’t crescendo a held note. A snare drum can’t play a pitched melody. And so on. A computer, however, can do pretty much anything. Pitch range: unlimited. Timbre: unlimited. Articulation and agility and dynamics: all unlimited. So the question becomes: What is the mother tongue of an instrument that can speak all languages equally well? After a great deal of contemplation, here’s my stab at an answer.
On the Mother Tongue Trail
First, since the computer’s unlimited-ness is such a key part of its essential nature, it must also be a key part of its musical personality: Fluency and at-homeness with all manner of audio generation. A universal mother tongue; utter musical freedom. But where does that leave us? Too much freedom can be just as paralyzing as too little. Some form of directionality is needed, a set of paths through the endless field of possibilities.
To find these paths I came up with five essential qualities of a computer: binary logic, integer maths, intensive calculation, recursion, and randomization. Binary logic lies at the very heart of all computers: a bit (the atom of computing) is either on or off; there’s no in-between state. Because of this, computers ultimately work with integers (whole numbers), even when representing fractions. Pretty much everything computers do is built around intensive and lightning-fast calculation: on the order of 20 billion operations per second on a typical PC these days. One of the computer’s favorite tricks, recursion, is like audio feedback: a portion of the result of a calculation is fed back into the original calculation over and over. Randomization is something computers do exceedingly well, due to the ease with which they can generate and manipulate (pseudo-)random sequences of numbers.
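To make the randomization (and recursion) points concrete: here’s a minimal sketch of how a computer actually “randomizes” – a linear congruential generator. It’s pure integer maths, and the result of each step is fed back in as the input to the next, which is also why the same seed always produces the same “random” sequence. (The constants are the classic Numerical Recipes values; this is an illustration, not any particular music software’s RNG.)

```python
def lcg(seed, n, m=2**32, a=1664525, c=1013904223):
    """Yield n pseudo-random integers in [0, m) from an LCG.

    Each output is computed by feeding the previous result back
    into the same integer formula -- recursion as randomness.
    """
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m   # feed the result back in
        out.append(x)
    return out

print(lcg(42, 3))  # same seed, same sequence, every time
```

Seed the generator with a clock value and the sequence *looks* random; seed it with a constant and you can replay the exact same “random” music forever – a property plenty of generative composers exploit.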
After arriving at these five essential computer qualities, I created a Reaktor ensemble that embodies them. Lingua Mater (Latin for “mother tongue”) generates an array of 128+ sine waves. (I considered using square waves, because of their binary waveforms, but they sound more harsh and specific than sines, more contextually loaded; sines are wonderfully neutral and abstract.)
Lingua Mater manifests binary logic by periodically turning each of its 128+ sine waves on and off. Integer maths is used to derive the sine pitches. Harmonics are integer multiples of a specified root frequency (RF): 1xRF, 2xRF, 3xRF, etc. Subharmonics are integer reciprocals of the root: 1/1xRF, 1/2xRF, 1/3xRF, etc. Intensive calculation is used to incorporate “jitter” into the audio flow: smooth, slow, subtle variations in pitch, volume, duration, etc. Recursion (feedback) is used in conjunction with delay to generate reverb-like echoes. Guided randomization determines jitter trajectories, sine-wave starting/ending points, pitches, volumes, pannings, and recursion parameters.
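The pitch math above is easy to sketch outside of Reaktor. This is not the actual Lingua Mater patch, just a few lines illustrating the harmonic/subharmonic series and an equal-weight sine mix; the function names and the 110 Hz root are my own choices for the example.

```python
import math

def partials(root_hz, n):
    """Return n harmonic and n subharmonic frequencies of root_hz."""
    harmonics = [root_hz * k for k in range(1, n + 1)]     # 1xRF, 2xRF, 3xRF...
    subharmonics = [root_hz / k for k in range(1, n + 1)]  # 1/1xRF, 1/2xRF...
    return harmonics, subharmonics

def sine_mix(freqs, t):
    """One sample of an equal-weight mix of sine waves at time t (seconds)."""
    return sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)

h, s = partials(110.0, 4)
print(h)  # [110.0, 220.0, 330.0, 440.0]
print(s)  # 110 Hz, 55 Hz, ~36.7 Hz, 27.5 Hz
```

Gating each partial on and off over time (the binary-logic part) and jittering the frequencies and amplitudes would get you most of the way to the behavior described above.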
Sure, computers can’t feel pleasure, at least not in the sense that we humans feel it. But they can be programmed to “feel” something akin to human pleasure. And, perhaps, when AI comes of age, they can program themselves to feel their own native form of deep digital pleasure. With this in mind:
What kind of music would computers compose for their own pleasure?
To answer, we need to grok how computers perceive the world: as numeric (binary) data. Their experience of a musical passage wouldn’t involve audible waveforms, but rather the sequence of numeric values of the digitized samples of those waveforms.
What kind of music would pleasure a MacBook Pro? Well, consider what it perceives at the CPU level: blocks of 64 bits that keep “lighting up” in different patterns. Pattern 1 might persist for the duration of one sample (1/N seconds, where N = the sample rate), followed by pattern 2 1/N seconds later, then patterns 3, 4, 5, etc. Like the flashing grid of panel lights on a mainframe computer from a 50s sci-fi movie, only about 100,000 times faster. The pleasure would come from the succession of these 64-bit patterns, like frames from an ultra high-speed abstract movie. And how would this music sound to us humans if pumped through speakers? My guess: like an ongoing whoosh of textured noise.
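You can peek at one of those 64-bit patterns yourself. The sketch below takes a single sample of a 440 Hz sine wave, stores it as a 64-bit float, and prints the raw bit pattern – one “frame” of the ultra high-speed movie. (The choice of 64-bit floats and a 44100 Hz sample rate are assumptions for the example; audio is often stored at other bit depths.)

```python
import math
import struct

def sample_bits(x):
    """Render one float64 audio sample as its 64-bit on/off pattern."""
    (raw,) = struct.unpack(">Q", struct.pack(">d", x))  # reinterpret bits as int
    return format(raw, "064b")

sr = 44100  # sample rate: bit patterns per second
sample = math.sin(2 * math.pi * 440 * (1 / sr))  # 2nd sample of a 440 Hz sine
print(sample_bits(sample))              # 64 ones and zeros
print(f"each pattern lasts 1/{sr} s")   # i.e. 1/N seconds, N = sample rate
```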
Will this ever come to pass … that computers compose for themselves and their kin? Share pieces among each other, knowing full well that humans just don’t have what it takes to get it? Guess we’ll have to wait and see. If the Singularity is anything close to what futurists imagine — instead of just being another Y2K fizzle — the notion of computers composing for their own pleasure might be just the tip of the iceberg of AI weirdness.
“Creative listening” is the art of changing a piece in real time, as you listen to it. The listener is thus elevated from a passive observer to an active co-creator, co-composer; an integral component in the music’s unfolding.
You can use creative listening to change every imaginable aspect of what you are listening to: pitch (melody, harmony, relative highness or lowness), time (rhythm, beat, groove, swing, tempo, duration), volume (level, compression, accents), timbre (sound, overtones, edge), space (stereo and surround panning), form (organisation of material into sections), expression, emotional impact, etc. The only limits to the transformational properties of creative listening are those of the listener’s imagination.
For example, a novice creative listener might be able to hear a groove then nudge it up a few beats per minute in his “mind’s ear,” compress it more tightly, sharpen the rim shots, or add a touch of reverb to the mix. All of these changes would occur in real time, during the act of listening. An intermediate creative listener might be able to take the same groove and, in addition to the above, change the drum sounds, add another layer or two to the beat, and a bit of delay to keep things moving. And an advanced creative listener might be able to try out different meters, different degrees of swing, interpolate fills, rolls, stutters and reverses – and more.
So what is creative listening good for?
As a listener, it can help you enter into an active relationship with a piece of music, to become a collaborator rather than just an observer. Is the piece too busy? Add breath to it. Is it poorly mixed? Balance the layers and add compression. Is it bass-heavy? Add a low cut and boost the treble for edge. Is it too slow? Speed it up. Too fast? Slow it down. If you don’t have the skills or time to become an accomplished producer, this is the next best thing.
As a composer, creative listening can help you hone your pieces and make them more compelling, more “right.” Try out different tempos as you’re listening to the piece. Which one works best for the material and desired emotional impact? Try out different patches and effects. Which sound best? Would a four-bar dropout sound good here? How about a short break there to liven things up? Listening creatively to a piece of music – yours or someone else’s – keeps composers on their toes, open to change, evolving.
As a performer, there are benefits too: Listen to the mix and hear yourself enter before you actually commit to a single note. Rehearse different takes before committing to one. Figure out what the piece needs from you, then go for it. Improvisers in particular can benefit from this approach. Try different solos, vamps and grooves before you launch into them. Develop the ability to hear alternatives to what you’re playing in real time, as you’re playing, and to shift back and forth among these alternatives, like a dimensional traveller jumping between parallel realities. Quantum improv – why not?
Oy! More than a year since my last entry. Must remedy forthwith. Roger that.
Over the period of one full week, create a “sand mandala” piece. Any genre, acoustic or electronic. Work really hard on it, strive passionately to make your best music ever. Share it (with one person, a few, a bunch) at several phases during the course of its one-week lifespan. Then, at the end of the seventh day, destroy it, every copy, nothing remaining for all eternity.
Ashes to ashes, dust to dust.
What might qualify as extremely extreme minimalism? Music that takes one or more parameters to an almost insane point of rarefaction. Duration, for example. Many minimalist composers have worked with tempos under 10 bpm. But few have broken the 1 bpm border. Fewer still: 0.1, 0.01, 0.001 bpm. Imagine a piece with a tempo of one quarter-note pulse every 100 years. You’d end up with something that made Cage’s 639-year organ piece (the Organ²/ASLSP performance in Halberstadt) seem like a fleeting bagatelle.
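For the record, the arithmetic on that hundred-year pulse (assuming 365.25-day years):

```python
# One quarter-note pulse every 100 years, expressed in bpm.
minutes_per_year = 365.25 * 24 * 60   # = 525960 minutes
bpm = 1 / (100 * minutes_per_year)
print(f"{bpm:.3e} bpm")               # roughly 1.9e-08 bpm
```

About nineteen billionths of a beat per minute – comfortably past the 0.001 bpm frontier.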
Pitchwise, imagine taking a 60-voice mixed choir (soprano, alto, tenor, bass) – each member of which had mastered the art of extended circular singing: being able to sing a long continuous steady tone while exhaling and inhaling – and having them all circular-sing the same pitch (vibrato-less of course) for 24 straight hours.
Minimalists often work with very low volume. But they only go so far. How about a piece that was so soft it could only (barely!) be heard by a person with perfect hearing leaning in right next to the sound source in an anechoic chamber?
Timbrally, imagine an orchestra made entirely of flutes, the acoustic instrument that sounds most like the harmonics-free sine wave. Picture, in your mind’s ear, Wagner’s Ride of the Valkyries played by 100 flutes at 20 bpm with a volume that never crept above pianissimo. Now that’s a concert I wouldn’t want to miss!
To extreme-ify something is to push it to its limits. The problem with extreme-ifying maximalism is that the maximal is, by definition, already at its (maximum) limit. So the notion of extreme must take on a different slant. Extreme-ifying the maximal means not going up to, but beyond its limits. This is a bit of an impossible task, in that it requires going further than the furthest point. But we experimentalists love impossible tasks, and the juicy paradoxes that arise from tackling them!
By way of example, consider a pitch scale that does not have the standard chromatic 12 tones per octave – or the conventional microtonal variants of 19 (19-EDO), 22 (Indian shrutis), 24 (quarter tone), 43 (Harry Partch), etc. – but a whopping 1200 notes per octave. That’s 100 steps, each one cent wide, per chromatic semitone. You could fit Bach’s entire B Minor Mass (pitchwise) in the space between C and C# with plenty of room to spare. It would probably sound like a single long chorused note with gobs of highly ornate internal activity. Alternately, if you got each member of a 4800-person choir to take a different note within a four-octave block (C2 to C6) – four octaves at 1200 notes each – you’d end up with the densest vocal cluster of all time. Now that’s extreme maximalism!
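The tuning math for such a scale is a one-liner. Each step of 1200-EDO is a ratio of 2^(1/1200) – exactly one cent – so 100 steps give a semitone and 1200 give an octave. A sketch (the function name and the 12-TET middle-C value are my own choices for the example):

```python
def edo1200_freq(root_hz, step):
    """Frequency of a given step of the 1200-notes-per-octave scale."""
    return root_hz * 2 ** (step / 1200)

c4 = 261.6256  # approx. middle C in 12-TET
print(edo1200_freq(c4, 0))     # C itself
print(edo1200_freq(c4, 100))   # 100 steps up = one semitone (C#)
print(edo1200_freq(c4, 1200))  # 1200 steps up = one octave
```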
So many new releases from talented, energetic, ambitious music producers of all ilk every day of every week/month/year … how c’hell is one supposed to decide what to listen to? And, with such a glut of new music, can any single piece *matter* anymore? Beyond the rush of hearing it for the first (and probably last) time? Is it all a grand eat-all-you-can/dare musical buffet; I’ll have a bite of this (then never think of it again), then this, then that one over there, then hey I forgot about these ones in the corner …
If everyone on the planet makes “competent” (thanks to technology) music, does new music become just another ho-hum consumable?
Guess I’m a modernist at heart … still believing (hoping) that personal/idiosyncratic masterpieces can be written. Singularities, rather than just anonymous blips on an endless online playlist.