Andrew’s idea was to create a palette of content that could support the composition process and had a meaningful relationship to the film’s main theme - earthquakes.
Over the next two months we homed in on a process and I developed a custom Kontakt instrument sourced completely from sonified earthquake data. Here are the details.
The SAESI Kontakt instrument was built from content generated by real-time seismic activity along the San Andreas fault in California. From May 1st until May 27th, 2014, seismic event data was recorded from the US Geological Survey and relayed to 6 sound synthesis engines. A collection of the generated content was then extracted and reprogrammed as a Kontakt instrument for use with a standard keyboard controller.
Data Sources and Parsing
Using a custom Max patch we poll the US Geological Survey's realtime feed of seismic activity around the globe. The data is refreshed approximately every 15 seconds. For this particular project we kept only events arriving between -124 and -117 degrees longitude and between 32 and 42 degrees latitude - a bounding box around the San Andreas fault line.
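The original filtering runs in a custom Max patch; as a rough Python sketch of the same idea, the snippet below polls the USGS realtime GeoJSON feed (a real public endpoint) and keeps only events inside the stated bounding box. The feed URL and GeoJSON field layout are USGS's; the function names are mine.

```python
import json
import time
import urllib.request

# USGS realtime feed: all events from the past hour, refreshed frequently.
FEED_URL = ("https://earthquake.usgs.gov/earthquakes/feed/v1.0/"
            "summary/all_hour.geojson")

def in_san_andreas_box(lon, lat):
    """True when an epicenter falls inside the filtered region."""
    return -124.0 <= lon <= -117.0 and 32.0 <= lat <= 42.0

def fetch_events():
    """Fetch the feed and return filtered (lon, lat, depth, mag) tuples."""
    with urllib.request.urlopen(FEED_URL) as resp:
        feed = json.load(resp)
    events = []
    for feature in feed["features"]:
        # GeoJSON order is [longitude, latitude, depth].
        lon, lat, depth = feature["geometry"]["coordinates"]
        mag = feature["properties"]["mag"]
        if in_san_andreas_box(lon, lat):
            events.append((lon, lat, depth, mag))
    return events

if __name__ == "__main__":
    while True:  # poll roughly as often as the feed refreshes
        for event in fetch_events():
            print(event)
        time.sleep(15)
```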
From each seismic event, we extract data that is then fed to the sound engine. The following parameters are employed: Density, Magnitude, Latitude, Longitude, Depth (elevation). The values for each parameter are scaled and then transmitted via OSC to the sound design computer. OSC is a protocol similar to MIDI but is superior in that it can, among other things, transmit values at much higher resolution and send arbitrary strings.
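The post doesn't specify the scaling curves, so here is a minimal sketch assuming a linear map of each parameter into a 0..1 range before OSC transmission. The OSC addresses, depth and magnitude ranges are assumptions, not documented values.

```python
def scale(value, lo, hi):
    """Linearly map value from [lo, hi] into [0, 1], clamped."""
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

def scale_event(lon, lat, depth_km, magnitude):
    """Return one scaled parameter per OSC address (addresses are hypothetical)."""
    return {
        "/saesi/longitude": scale(lon, -124.0, -117.0),
        "/saesi/latitude":  scale(lat, 32.0, 42.0),
        "/saesi/depth":     scale(depth_km, 0.0, 70.0),   # assumed depth range
        "/saesi/magnitude": scale(magnitude, 0.0, 8.0),   # assumed magnitude range
    }

if __name__ == "__main__":
    # Sending with the third-party python-osc library might look like:
    #   from pythonosc.udp_client import SimpleUDPClient
    #   client = SimpleUDPClient("192.168.0.10", 8000)  # hypothetical host/port
    #   for address, value in scale_event(-120.5, 36.1, 8.2, 3.4).items():
    #       client.send_message(address, value)
    print(scale_event(-120.5, 36.1, 8.2, 3.4))
```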
The Sound Computer
On the sound synthesis side of the project a Kyma sound computer has been used to build and run 6 independent sound engines. The Kyma environment excels not only in the quality of its sound output but also in its programmability. Each sound engine uses a different synthesis method, and all six were recorded simultaneously, per event, for 20 seconds (whenever something good arrived from California!). Each engine's output has been captured to an independent audio file for further use in Kontakt.
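The per-event capture bookkeeping can be pictured as follows: when a filtered event arrives, all six engines record the same 20-second window, each into its own file. The engine names come from the post; the file-naming scheme is an illustrative assumption.

```python
RECORD_SECONDS = 20  # capture length per event, per the post

# The six Kyma sound engines named in the post.
ENGINES = ["AtomicMachines", "CracklingWorld", "LiquidHum",
           "PizzicatoGrains", "RumbleSynths", "ResoDrills"]

def capture_plan(event_id):
    """Return one output file path per engine for a single seismic event
    (path layout is hypothetical)."""
    return {name: f"captures/{event_id}_{name}.wav" for name in ENGINES}
```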
The sound engines are as follows:
AtomicMachines - Uses resynthesis to read the partials from a string instrument, french horn and field recordings. The partials are then spectrally morphed based on the parameters received from the seismic data.
CracklingWorld - A cloud bank resynthesis engine. It generates granular, glassy sounds based on the input it receives.
LiquidHum - A hybrid resynthesis and vocoder filterbank engine.
PizzicatoGrains - An array of various pizzicato string samples, analyzed and resynthesized in realtime based on the data input.
RumbleSynths - Similar to AtomicMachines but different ranges of input and random seed values.
ResoDrills - Synthetic spectrums generated from the data and run through two parallel harmonic resonators. The earthquake data dictates which partials are constructed.
Each event instrument layers the output of all six engines into a single keymapped program containing one audio file per sound engine. The instrument user can then further adjust the balance, transposition and tuning of each engine.
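The layering described above can be sketched as a data structure: one zone per engine, each mapped across the full key range with user-adjustable balance and transposition. The field names are illustrative and are not Kontakt's actual scripting API.

```python
from dataclasses import dataclass

@dataclass
class EngineLayer:
    """One engine's capture, keymapped across the controller (illustrative)."""
    sample_file: str
    volume_db: float = 0.0          # user-adjustable balance
    transpose_semitones: int = 0    # user-adjustable transposition
    tune_cents: float = 0.0         # user-adjustable fine tuning
    low_key: int = 0                # MIDI note range of the zone
    high_key: int = 127

def build_program(event_id, engine_names):
    """Layer one capture per engine into a single keymapped program."""
    return [EngineLayer(f"captures/{event_id}_{name}.wav")
            for name in engine_names]
```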