The Sound of Squirrels // ICM W10
Nov 17, 2024
Introduction
For my ICM sound project, I worked on transforming the 2018 NYC Squirrel Census into a generative musical composition. This quirky dataset documents over 300 squirrel observations across New York City parks, capturing details such as fur colors, behaviors, locations, and even interactions with humans.
I initially started with the OpenWeather API, hoping to translate real-time weather into sound, but its slow-updating data produced monotonous results. Then I discovered the Squirrel Census, a quirky, data-rich project that promised much more to work with.
The Squirrel Census is a multimedia science, design, art, and storytelling project focusing on the Eastern gray (Sciurus carolinensis).
The core concept of my project was to generate a soundscape from the dataset. Instead of a straightforward one-to-one mapping of data to sound, I aimed to create a layered and complex soundscape in which each squirrel's attributes, such as fur color, behavior, and location, simultaneously influence various aspects of the musical output, from pitch and rhythm to timbre and spatial effects.
Dataset from the Squirrel Census
Using p5.js, I parsed the dataset and mapped its attributes to musical elements, allowing the generative system to "perform" the observations as a cohesive auditory experience. The result was a playful and dynamic composition, offering a new way to perceive and appreciate the lives of NYC's squirrels through sound.
Theory and Approach
Turning squirrel data into music required careful consideration of both data representation and musical aesthetics. I wanted to create a system that would faithfully represent the data while also producing engaging music and sound.
Musical Scale Selection and Root Notes
The choice of scales and root notes was crucial for creating a cohesive sound. I worked with Claude, having it analyse the dataset and suggest mappings from data points to specific sounds so that I could hear the data. Its recommendations are below, followed by a short code sketch of how they might be expressed.
Root Note Assignment:
Gray squirrels are rooted in G (MIDI note 55, G3)
Black squirrels in A (MIDI note 57, A3)
Cinnamon squirrels in C (MIDI note 60, middle C)
These root notes were chosen to create pleasant harmonies when different squirrel types appear together, as all three belong to the C major scale.
The available scales each serve a distinct musical purpose:
Minor Scale [0, 2, 3, 5, 7, 8, 10]: Creates a more contemplative, mysterious mood that works well with urban wildlife observation. The minor scale's emotional quality adds a sense of story and drama to the composition.
Major Scale [0, 2, 4, 5, 7, 9, 11]: Offers a brighter, more playful character that matches the often energetic nature of squirrel behavior. This scale works particularly well when representing active behaviors like running or playing.
Pentatonic Scale [0, 2, 4, 7, 9]: A five-note scale that creates naturally pleasing harmonies. This scale was included because it's highly forgiving when notes are played randomly – any combination of notes within the pentatonic scale tends to sound harmonious together.
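To make these recommendations concrete, here is a minimal sketch of how they might look in p5.js. It is an illustration rather than the project's exact code: the scale arrays are the semitone offsets listed above, the roots are MIDI note numbers, and p5.sound's midiToFreq() converts the result to a frequency.

```javascript
// Root MIDI notes per primary fur color (G3, A3, C4).
const ROOT_NOTES = {
  Gray: 55,
  Black: 57,
  Cinnamon: 60,
};

// Scales expressed as semitone offsets from the root.
const SCALES = {
  minor:      [0, 2, 3, 5, 7, 8, 10],
  major:      [0, 2, 4, 5, 7, 9, 11],
  pentatonic: [0, 2, 4, 7, 9],
};

// Turn a fur color, scale, scale degree, and octave shift into a frequency.
function noteForSquirrel(furColor, scaleName, degree, octaveShift = 0) {
  const root = ROOT_NOTES[furColor] ?? ROOT_NOTES.Gray; // default unknown colors to gray
  const scale = SCALES[scaleName];
  const offset = scale[degree % scale.length];
  return midiToFreq(root + offset + 12 * octaveShift);  // p5.sound: MIDI note -> Hz
}
```

Because all three roots sit inside the C major scale, notes generated for different fur colors stay consonant even when scale degrees are picked at random.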
Data to Music Mapping Philosophy
The first step was identifying which squirrel characteristics would translate most naturally to musical elements. When I prompted Claude, three key aspects emerged as musical anchors:
Fur Color as Tonal Foundation: Each squirrel's primary fur color determines the root note of their "voice." This creates a consistent tonal identity for different squirrel types while maintaining musical coherence.
Height as Melodic Movement: A squirrel's height above ground influences the melodic progression. Ground-level squirrels produce lower pitches, while those high in trees create higher notes, mirroring their physical position in space. The height data maps to both octave shifts and scale position, creating more varied melodic possibilities.
Activities as Sound Character: Different squirrel behaviors shape the texture and dynamics of the sound, as sketched in code after this list:
Running produces short, energetic sounds with quick attack and decay
Eating creates softer, sustained tones with longer release times
Climbing generates medium-length notes with moderate attack and sustain
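As mentioned above, each behavior can map to its own amplitude envelope. The ADSR values below are illustrative placeholders rather than the final patch; the sketch uses p5.sound's p5.Envelope.

```javascript
// Illustrative ADSR settings per behavior (times in seconds, sustain as a 0-1 level).
const ACTIVITY_ENVELOPES = {
  Running:  { attack: 0.01, decay: 0.10, sustain: 0.2, release: 0.05 }, // short and punchy
  Eating:   { attack: 0.20, decay: 0.30, sustain: 0.6, release: 0.80 }, // soft and sustained
  Climbing: { attack: 0.05, decay: 0.20, sustain: 0.4, release: 0.30 }, // in between
};

function envelopeForActivity(activity) {
  const p = ACTIVITY_ENVELOPES[activity] ?? ACTIVITY_ENVELOPES.Climbing;
  const env = new p5.Envelope();
  env.setADSR(p.attack, p.decay, p.sustain, p.release);
  env.setRange(0.5, 0); // peak and resting amplitude
  return env;
}
```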
Location-Based Sound Design
Each squirrel's location influences the acoustic properties of its sound (a small code sketch follows the list):
Ground-level observations have minimal reverb and delay, creating a more immediate, "dry" sound
Above-ground locations trigger increased reverb and delay effects, suggesting the spaciousness of the tree canopy
The height also affects filter frequencies, with higher positions creating brighter timbres
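One plausible way to wire these location rules up with p5.sound, sketched with assumed parameter ranges (the 0-60 ft clamp and the specific effect values are guesses for illustration):

```javascript
// Map an observation's location and height to effect settings.
// heightFt is assumed to be the above-ground measurement in feet (0 on the ground plane).
function effectSettings(location, heightFt) {
  const aboveGround = location === 'Above Ground';
  const h = constrain(heightFt || 0, 0, 60); // clamp to an assumed 0-60 ft range

  return {
    reverbTime:    aboveGround ? map(h, 0, 60, 1, 4) : 0.5,   // longer tails in the canopy
    delayTime:     aboveGround ? map(h, 0, 60, 0.1, 0.4) : 0, // echoes only above ground
    delayFeedback: aboveGround ? 0.3 : 0,
    filterFreq:    map(h, 0, 60, 400, 4000),                  // higher position = brighter
  };
}

// Apply the settings to effect instances created once in setup()
// (assumed to be p5.LowPass, p5.Delay, and p5.Reverb objects).
function applyEffects(settings, filter, delay, reverb) {
  filter.freq(settings.filterFreq);
  delay.delayTime(settings.delayTime);
  delay.feedback(settings.delayFeedback);
  reverb.set(settings.reverbTime, 2); // tail length in seconds, decay rate
}
```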
Compositional Philosophy
The composition evolves through randomized selection of observations, an approach that mirrors the unpredictable nature of wildlife observation. While the data selection is random, the musical framework, condensed in the sketch after the list below, ensures that:
Notes always fall within harmonious scales
Rhythmic patterns emerge from behavior data
Spatial relationships are preserved through pitch and effects
Timbral variations maintain interest while reflecting squirrel activities
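Here is that condensed selection loop. It assumes the helpers sketched earlier (noteForSquirrel, envelopeForActivity) plus a squirrels array of processed observations; the field names are assumptions carried over from those sketches.

```javascript
// Every beat, pick a random observation and turn it into a note.
// Assumes a started p5.Oscillator `osc` whose amplitude is set to 0
// so the envelope controls it.
function playRandomObservation(squirrels, osc, scaleName) {
  const s = random(squirrels); // p5's random() returns a random array element

  // Height chooses the scale degree and octave, keeping every note in the scale.
  const degree = floor(map(s.heightFt, 0, 60, 0, 6, true));
  const octave = s.location === 'Above Ground' ? 1 : 0;

  osc.freq(noteForSquirrel(s.furColor, scaleName, degree, octave));
  envelopeForActivity(s.activity).play(osc); // trigger the behavior's ADSR on this voice
}
```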
By balancing data representation with musical aesthetics, the system transforms scientific observations into an engaging sonic experience. The result is a soundscape that tells the story of New York's squirrels through rhythm, melody, and harmony.
The combination of carefully chosen scales, root notes, and sound design parameters ensures that even random selection of data points produces a cohesive musical experience, while still accurately representing the underlying squirrel census data.
Technical Implementation & Iterative Development
Stage 1: Loading the Data and Basic Visualization
I first focused on establishing the foundation for the project by creating a robust data pipeline and a basic visual interface. The goal was to load the NYC Squirrel Census data from a CSV file, process it into a usable format, and create a simple but informative visualization of key squirrel characteristics. This stage was crucial, as it set up the data structure that would later drive the sound generation and provided an initial visual feedback system for debugging and development.
Pseudo-code Overview
Implementation
Key Challenges and Solutions
During this initial stage, I faced several key challenges:
Data Validation: The CSV contained inconsistent data formats, particularly in the height and activity fields. I implemented default values and null checks to handle these cases.
Performance Considerations: With over 300 squirrel records, I needed to ensure efficient data processing. I chose to process the data once during loading rather than repeatedly during playback.
Visual Feedback: Creating an informative but uncluttered display required careful consideration of layout and information hierarchy. I opted for a simple text-based display that would later serve as the foundation for more complex visualizations.
This initial stage provides a solid foundation for the subsequent development. The code is structured to allow easy expansion while maintaining clarity and performance.
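To ground this stage, here is a simplified sketch of what the loading and processing step can look like in p5.js. The column names ('Primary Fur Color', 'Location', 'Above Ground Sighter Measurement', and the activity flags) follow the public census CSV, and the defaults are illustrative rather than the exact ones in my code.

```javascript
let table;          // raw CSV
let squirrels = []; // processed observations

function preload() {
  // Load the census CSV (with a header row) before setup() runs.
  table = loadTable('squirrel_census.csv', 'csv', 'header');
}

function setup() {
  createCanvas(600, 400);

  for (const row of table.rows) {
    squirrels.push({
      furColor: row.getString('Primary Fur Color') || 'Gray',       // default for blanks
      location: row.getString('Location') || 'Ground Plane',
      heightFt: Number(row.getString('Above Ground Sighter Measurement')) || 0,
      activity: (row.getString('Running') || '').toLowerCase() === 'true' ? 'Running'
              : (row.getString('Eating')  || '').toLowerCase() === 'true' ? 'Eating'
              : 'Climbing',
    });
  }
}

function draw() {
  background(240);
  // Simple text readout of the first few records for debugging.
  fill(0);
  textSize(12);
  squirrels.slice(0, 10).forEach((s, i) => {
    text(`${s.furColor} | ${s.activity} | ${s.heightFt} ft | ${s.location}`, 20, 30 + i * 18);
  });
}
```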
Stage 2: Sound Engine
At this stage, I needed to establish how different squirrel characteristics would translate into sound. I decided to implement a SoundGenerator class that would handle multiple oscillators and basic effects, allowing me to create rich, layered sounds that could better represent the complexity of each squirrel's data. My goal was to create a basic but functional sound system that I could later expand with more sophisticated controls and mappings.
Pseudo-code Overview
Implementation
Key Challenges and Solutions
In developing the core sound engine, I encountered several challenges:
Sound Mapping Complexity: Translating squirrel characteristics into meaningful sound parameters was my first major hurdle. I decided to start with color-to-pitch mapping as my foundation, using specific root notes for each fur color. This created a consistent sound identity for different squirrel types.
Oscillator Management: Working with multiple oscillators simultaneously required careful amplitude management to prevent overwhelming the audio output. I implemented a scaling system that balanced the different oscillator volumes based on squirrel activities.
Timing System: Creating a reliable playback system that could maintain consistent timing while handling sound generation was crucial. I implemented a basic interval-based system using BPM calculations, which would later serve as the foundation for more complex rhythmic patterns.
At this stage, I had a functional sound engine that could generate basic musical output from my squirrel data. While still rudimentary, this implementation provided the foundation I needed to build more sophisticated sound design elements in the next stages. The combination of multiple oscillators already allowed for more interesting timbres than a single oscillator would have provided, and the basic envelope system gave me control over the sound's dynamic characteristics.
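A stripped-down sketch of this kind of SoundGenerator follows; the oscillator types, envelope values, and BPM are illustrative assumptions, not the settings from the finished sketch.

```javascript
class SoundGenerator {
  constructor() {
    // Two slightly detuned oscillators give a richer timbre than a single one.
    this.oscA = new p5.Oscillator('triangle');
    this.oscB = new p5.Oscillator('sine');

    this.env = new p5.Envelope();
    this.env.setADSR(0.01, 0.2, 0.3, 0.3); // attack, decay, sustain level, release
    this.env.setRange(0.4, 0);             // modest peak so two voices don't clip

    // The shared envelope drives both oscillators' amplitude.
    [this.oscA, this.oscB].forEach(o => { o.amp(this.env); o.start(); });
  }

  playNote(freq) {
    this.oscA.freq(freq);
    this.oscB.freq(freq * 1.005); // subtle detune for width
    this.env.play();
  }
}

const BPM = 90; // assumed tempo
let generator;
let lastBeat = 0;

function setup() {
  createCanvas(400, 200);
  generator = new SoundGenerator();
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can start
}

function draw() {
  background(240);
  const beatMs = 60000 / BPM; // milliseconds per beat
  if (millis() - lastBeat > beatMs) {
    lastBeat = millis();
    // Placeholder note choice: one of the three fur-color root notes.
    generator.playNote(midiToFreq(random([55, 57, 60])));
  }
}
```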
Stage 3: Advanced Sound Design and Mapping
With the basic sound engine in place, I focused on creating a more sophisticated and nuanced sound design system. My goal was to transform the somewhat basic synthesizer sounds into a rich, expressive musical instrument that could better represent the complexity of squirrel behaviors. I introduced a complete effects chain with reverb, delay, and filters, and created more detailed envelope mappings for different activities.
Pseudo-code Overview
Implementation
Key Challenges and Solutions
Effects Chain Management: I needed to prevent audio artifacts and feedback loops in the complex effects system. I solved this by creating a structured chain (oscillators → filter → delay → reverb) with careful gain staging and manual audio routing.
Activity-Specific Sound Design: Creating distinct sounds for each squirrel activity was challenging. I mapped different envelope shapes and oscillator combinations to each behavior: quick and punchy for running, sustained for eating, and moderate for climbing.
Performance Issues: The multiple effects and envelopes caused performance problems. I optimized by reusing effect instances and updating parameters only when necessary, rather than creating new instances for each sound.
Spatial Representation: Converting physical positions into meaningful sound was difficult. I used filter frequency for height, reverb/delay amounts for location, and subtle detuning to create more natural variations in the sound.
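A sketch of the routing described above (oscillators → filter → delay → reverb), built once and reused, with placeholder settings:

```javascript
// Build the effects chain once in setup() and reuse it for every note.
let filter, delay, reverb;

function buildEffectsChain(oscillators) {
  filter = new p5.LowPass();
  delay  = new p5.Delay();
  reverb = new p5.Reverb();

  // Route each oscillator into the filter instead of straight to the speakers.
  for (const osc of oscillators) {
    osc.disconnect();
    osc.connect(filter);
  }

  // Chain: filter -> delay -> reverb -> speakers, disconnecting each stage's
  // direct output so only the full chain is heard.
  filter.disconnect();
  filter.connect(delay);

  delay.disconnect();
  delay.connect(reverb);
  delay.delayTime(0.2); // seconds
  delay.feedback(0.3);  // kept low to avoid runaway feedback

  reverb.set(3, 2);     // tail length in seconds, decay rate
}
```

Per-note changes then update parameters on these shared instances, as in the earlier applyEffects() sketch, rather than constructing new effects for every sound.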
Stage 4: Interactivity and User Controls
In this final stage, I focused on making the project more interactive and user-friendly. I implemented a comprehensive control system that allows users to modify the playback in real-time, added random playback functionality to create more varied compositions, and enhanced the visual feedback system. My goal was to transform the project from a simple data sonification into an interactive musical instrument that could be "played" and explored by users.
Pseudo-code Overview
Final Implementation (Link to Prototype)
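To illustrate the kind of control layer this stage adds, the sketch below wires a few p5.js DOM controls (a tempo slider, a scale selector, and a play/pause button) into the playback loop. The specific controls and values are assumptions for illustration, not a record of the final prototype.

```javascript
let bpmSlider, scaleSelect, playButton;
let isPlaying = false;
let lastBeat = 0;
let osc; // single voice driven by the selection loop sketched earlier

function setup() {
  createCanvas(600, 400);

  osc = new p5.Oscillator('triangle');
  osc.amp(0);   // amplitude is envelope-controlled
  osc.start();

  bpmSlider = createSlider(40, 180, 90); // tempo control in BPM

  scaleSelect = createSelect();          // scale choice matching the earlier SCALES object
  ['minor', 'major', 'pentatonic'].forEach(s => scaleSelect.option(s));

  playButton = createButton('Play');     // play/pause, which also unlocks browser audio
  playButton.mousePressed(() => {
    userStartAudio();
    isPlaying = !isPlaying;
    playButton.html(isPlaying ? 'Pause' : 'Play');
  });
}

function draw() {
  background(240);
  if (!isPlaying) return;

  const beatMs = 60000 / bpmSlider.value();
  if (millis() - lastBeat > beatMs) {
    lastBeat = millis();
    // `squirrels` and playRandomObservation() come from the earlier sketches.
    playRandomObservation(squirrels, osc, scaleSelect.value());
  }
}
```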
Future Directions
This project has been a really fun and engaging exploration of sound generation and creative interaction. Reflecting on it, I think incorporating recorded squirrel sounds as source material would have added an exciting new layer. The organic, playful quality of those sounds could have elevated the experience and made the project even more dynamic and unique.
I also recently discovered NYC Open Data, a fantastic resource offering a wealth of free, open datasets about the city's operations, environment, infrastructure, and services. Exploring these datasets, I see an opportunity to integrate environmental and urban data into my project. For instance, I could use data on trees or green spaces to influence the rhythm or tone of the sounds, creating music that reflects the city's ecological landscape. This idea could be interesting to develop, and I'm eager to see how I can use NYC Open Data to create even more innovative and meaningful soundscapes.