
Data Sonification Research
My postdoctoral research at the Qualcomm Institute integrates environmental data science and acoustics. I focus on the development and implementation of innovative, multi-modal data rendering techniques to sonify ecosystems and extreme environmental events.

As part of the ALERTCalifornia program, which manages a network of over 1,000 cameras across both urban and remote areas of the state, our team processes and stores terabytes of visual and raw data daily. My work investigates how to interpret these datasets, using advanced sound synthesis and sound design methods to create immersive, synthesized audio representations of natural phenomena such as wind, rain, snowstorms, dust storms, birdsong, and wildfires.
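To make the mapping from data to sound concrete, below is a minimal parameter-mapping sketch in Python. It assumes a generic one-dimensional environmental time series (the wind-speed values, rate constants, and file name are placeholders, not ALERTCalifornia data) and drives the pitch and loudness of a single sine oscillator; it illustrates the general approach rather than the actual rendering pipeline.

```python
# Minimal parameter-mapping sonification sketch (illustrative, not the
# production pipeline). A 1-D environmental time series drives the frequency
# and amplitude of a sine oscillator, and the result is written to a WAV file.
import numpy as np
from scipy.io import wavfile

SR = 44100        # audio sample rate in Hz
SEG_DUR = 0.5     # seconds of audio generated per data point

def sonify(series, fmin=220.0, fmax=880.0):
    """Render a data series as a continuously gliding sine tone."""
    series = np.asarray(series, dtype=float)
    # Normalize the data to [0, 1] so it can drive synthesis parameters.
    span = series.max() - series.min()
    norm = (series - series.min()) / (span if span > 0 else 1.0)
    # Upsample the control values from data rate to audio rate.
    n_audio = int(len(series) * SEG_DUR * SR)
    ctrl = np.interp(np.linspace(0, len(series) - 1, n_audio),
                     np.arange(len(series)), norm)
    freq = fmin + ctrl * (fmax - fmin)        # data value -> pitch
    amp = 0.2 + 0.8 * ctrl                    # data value -> loudness
    phase = 2 * np.pi * np.cumsum(freq) / SR  # integrate frequency to phase
    return (amp * np.sin(phase)).astype(np.float32)

if __name__ == "__main__":
    # Placeholder wind-speed trace (m/s); real input would come from sensors.
    wind_speed = [2.1, 3.4, 7.8, 12.5, 9.0, 4.2, 2.0]
    wavfile.write("wind_sonification.wav", SR, sonify(wind_speed))
```

Richer renderings would substitute granular or physical-modeling synthesis for the single oscillator and map additional data channels to timbre and spatial position.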
Additionally, I am designing and deploying a custom array of acoustic sensors at these camera locations. These sensors will collect, store, and stream acoustic data, allowing for a deeper understanding of the soundscapes of distinct terrestrial environments and enabling new insights into the impact of extreme events on those environments.
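As a rough sketch of the capture side of such a sensor, the example below records fixed-length audio chunks from a microphone and stores them as timestamped WAV files. The sounddevice and soundfile libraries, the station identifier, and the chunk length are assumptions made for illustration; they do not describe the deployed hardware or firmware.

```python
# Illustrative acoustic-sensor capture loop (assumed design, not the deployed
# firmware): record fixed-length mono chunks and store them as timestamped
# WAV files for later soundscape analysis or streaming.
import datetime
import sounddevice as sd   # microphone capture
import soundfile as sf     # WAV output

SR = 48000        # sample rate in Hz
CHUNK_SEC = 60    # duration of each stored chunk in seconds

def record_chunks(station_id="station-01", n_chunks=3):
    for _ in range(n_chunks):
        # Blocking capture of one chunk from the default input device.
        audio = sd.rec(int(CHUNK_SEC * SR), samplerate=SR, channels=1)
        sd.wait()
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        sf.write(f"{station_id}_{stamp}.wav", audio, SR)

if __name__ == "__main__":
    record_chunks()
```

A field deployment would add buffering, compression, and network streaming on top of a loop like this.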

Hailstorm
Data from camera: GirardRidge1
Location: Shasta County, CA
Date: May 09, 2022
Timeframe: 1600-2000

Wildfire
Data from camera: ShirleyPeak2
Location: Shirley Peak, CA
Date: August 20, 2021
Timeframe: 1448-1510

Dust Storm
Data from camera: Monument
Location: Monument Peak, CA
Date: October 06, 2022
Timeframe: 1600-1900
