Gareth Davis

Soundlings & Gareth Davis – Interview about code & code & clarinet

Next week, on Sunday November 10th during Coded Matter(s) #2: Sound Hackers, we’ll be hosting a very special performance by Soundlings (represented by Tijs Ham (TH) and Roald van Dillewijn (RvD) on code) and Gareth Davis (GD) (on bass clarinet). Two live coders will be receiving input from and responding to a live clarinetist, in a fresh improvisational hybrid performance.

For those who are interested in how such a performance is conceived and developed, we did an interview with the performers, picking their brains about the collaboration, their workflow and some technical details. Read on, dear readers (and don’t forget to buy your tickets in time if you want to see this performance!).

For those interested in this kind of collaboration and working musically with code, why not check out STEIM’s Creative Music Coding Lab #7 next week on Tuesday November 5th?


Can you tell us a bit about how this piece was developed?

GD – Last summer I was working with Duane Pitre, a composer/guitarist/sound artist from New Orleans. As part of the sketches we were working on for a new album, we also wanted to play around with some different software ideas. We spent two days working in Utrecht and Tijs came along, originally, to help with some Max/MSP programming. In the end we drifted, first a bit, then completely, from the original plans and it became more of a jam session with myself playing acoustic instruments while Duane and Tijs used software.

The concept for the piece is simple. The bass clarinet functioned as a live instrument, but at the same time as an input device sending information to Duane and Tijs for them to use to create their own parts of the piece. These were, of course, just rough beginnings, but the idea was born. The ways in which the live and manipulated sounds from the same instrument fit together create an obvious sense of homogeneity. In a way, it builds on the idea of a guitarist using a foot pedal, creating layers of sound from a single instrument. The fundamental difference is that unlike in the case of the guitarist, the sounds are divided among three people. The coders react to my playing and I, in turn, react to their interpretations, their deconstructions of what I have already played.

This is, of course, only the very basic concept, but it highlights the way in which the idea attempts to create a very organic process. The analogue source material allows for spontaneity while the software increases and extends the possibilities of the acoustic instrument.

TH – When FIBER asked for a performance reflecting the idea of the hybrid instrument I immediately thought of the project with Duane and Gareth. I really like the idea of taking just one source of sonic input out of which three distinct artistic voices emerge through complex sound manipulations. Each voice is musically equal and able to stand its ground, so to speak. One of the big challenges is to find the balance in the performance itself. Unlike the typical static laptop performance, the bass clarinet is quite a physical instrument to play. Roald (who will be taking Duane’s place for this performance) and I will have to put in all our efforts to reach a balance in that regard. We have collaborated a lot in the past few years, mostly as part of our live electronics band ‘The Void*’, but also as members of Soundlings.

How did you approach development of the piece itself? Was it composed, improvised, both?

TH – There is this wonderful grey area between fully composed pieces and complete improvisations. Some rules and limitations are definitely necessary to make the interactions among the players work out in a good way, but at the same time it’s important to keep the structure loose enough to keep it all playful and fresh. We are fortunate to be able to work on this collaboration during a short residency at STEIM. Soundlings and STEIM have already been working together during several ‘Think Tank Meetups’ in the past, so it feels natural to take this collaboration to the next level. Sharing a studio space like this forces Gareth, Roald and me to communicate as clearly as possible and work towards a common goal.

GD – The way in which the piece works, how the interaction functions, mostly depends on the coders. To create a live balance is also tricky, so some original ideas may work, but become problematic to develop later on and will thus need to be rethought as we take the piece further. In a way it’s not dissimilar to using a group of acoustic instruments. One may have an idea of the various sounds desired, as well as an idea of how they can be produced and could work in the whole. But as they are put together things change and become fluid, so balance is difficult. An aspect that made perfect sense as a concept can turn out to be difficult to navigate.

RvD – When you start working on a project like this there are always ideas in mind and small pieces of software that might be used in some way. But when the players meet each other and start playing, some of these ideas might work and others might not. My approach to these kinds of projects is always based on improvising. However, during the short residency we’ll be doing at STEIM before the performance, some rules will be defined for the improvisations in order to make the piece a bit more structured.

Can you tell us a bit about the technical details of your workflow, both in development and performance?

TH – The software I mostly use for live electronics is SuperCollider, an open source audio programming language. For this particular project I will have to be as versatile as possible, both to work with Gareth’s input and to create a distinctly different sonic experience from Roald’s work. I’m planning to make use of as many features of the laptop as I can. The keyboard and trackpad are not only useful as a way to enter code; they can also be used as controllers to steer the code itself. Things like the built-in camera and microphone can be used to control sound as well.
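For readers curious what ‘steering the code’ with a controller can look like in practice: a common trick (sketched here in Python rather than SuperCollider, with purely illustrative parameter ranges) is to map a normalized controller value, say a trackpad position, onto a frequency range exponentially, so that equal physical movements produce equal musical intervals:

```python
def map_exponential(x, low=40.0, high=8000.0):
    """Map a normalized controller value x in [0, 1] onto a frequency
    range exponentially, so equal controller movements correspond to
    equal pitch intervals. The 40-8000 Hz range is hypothetical."""
    x = min(max(x, 0.0), 1.0)       # clamp to the unit range
    return low * (high / low) ** x  # exponential interpolation

# x = 0.0 gives the low end, x = 1.0 the high end,
# and x = 0.5 the geometric mean of the two.
```

This is only the mapping concept, not any performer’s actual patch; the same curve could just as well drive a filter cutoff or a grain rate.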

RvD – For this, and all my pieces, I use Max/MSP, a visual audio programming language. I will use some machine listening for this project to gather data from Gareth’s sound. With this data I will create new sounds, or manipulate the input sounds in such a way that they become something completely different from what Tijs or Gareth are making. I will also use my computer as a controller for the software: the keyboard and trackpad as the standard controllers, but I, too, will be using a couple of the built-in sensors.
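‘Machine listening’ here simply means extracting control data from incoming audio. A minimal sketch of the idea, written in Python rather than Max/MSP and using a synthesized sine wave as a stand-in for the clarinet signal, might track loudness via RMS and a rough pitch via zero crossings:

```python
import math

def rms(frame):
    """Root-mean-square amplitude of one audio frame: a loudness proxy."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_pitch(frame, sample_rate=44100):
    """Very crude pitch estimate: count sign changes and convert to Hz.
    Only reliable for clean, roughly sinusoidal input."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a < 0) != (b < 0))
    duration = len(frame) / sample_rate
    return crossings / (2 * duration)  # two zero crossings per cycle

# A 220 Hz sine over 0.1 s as a stand-in for the clarinet signal.
sr = 44100
frame = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr // 10)]
```

In a real patch these numbers would then drive synthesis or processing; Max/MSP offers far more robust analysis objects, so this is only the concept, not the performer’s implementation.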

Gareth, can you tell a bit about working with coders from the perspective of an ‘analogue’ instrumentalist?

GD – In the end, this is not so different from working in any electro-acoustic combination. I work equally in contemporary classical and improvised music, and there are similarities with both. The coding has a certain likeness to working with more academic (written) composers. The principal difference, though, is that an academic composer would most likely provide a predefined software patch, something designed to do certain things at given times.

In our performance however, what is happening from the software side can be constantly evolving, making it closer to working in a live improv situation. The actual differences that matter are mostly practical ones. There are plenty of ways in which acoustic instruments just don’t work well together, requiring one to create a sense of balance. The same is true of working with electronics. Some things work better than others, it’s a matter of finding out what works and how to improve that which does not.

How has working on this project influenced your style of playing in other, non-code related projects and pieces?

GD – I don’t think working on this has had any particular influence, at least not more than working on any other project. Generally, I find all projects influence others, at least indirectly. The more you play, the more you discover how things work (or don’t) and every project takes you somewhere slightly different.

That being said, working on the creation of the piece from the start, as opposed to just receiving a score and learning the notes, and with software possibilities that are constructed rather than prepackaged, gives a very different way of seeing how instrument and code can interact, which can definitely not be a bad thing!