I just sat in the atrium and shaped this soundscape as people were walking through, trying to simulate some kind of response to motion. It’s still pretty muddy, but I think there’s something there. The trick will be getting it to stay unobtrusive and ambient while still conveying information…
I played with a few layers of modulating soundscapes – one is a humpback whale track, the other, samples from a Speak & Spell. Not sure how annoying this might get in a longer-playing session, but there are some cool effects that could usefully be connected to inputs such as location, speed, vector, group size, etc.
Trying out Soundscaper as a way to explore synthetic modulated ambient soundscapes, to see what kinds of sounds might work for algorithmic generation from spatial data…
I’ve been hacking together a prototype for a sonification experiment – the idea was to provide audio biofeedback, shaping the soundscape within a space in response to movement and activity. I put together a quick mockup using Python and imutils.
It started as a “skunkworks” project idea:
An atrium-sized theremin, so a person (or a few people, or a whole gaggle of people) could make sounds (or, hopefully, music) by moving throughout the atrium. A theremin works by sensing how a nearby body changes the capacitance around its antennas – normal-sized theremins respond to hand movements. An atrium-sized theremin might respond to where a person walks or stands in the atrium, or how they move. I have absolutely NO idea how to do this, but think it could be a fun way to gently nudge people to explore motion and position in a space. Bonus points for adding some form of synchronized visualization (light show? Digital image projection? Something else?)
So I started hacking stuff together to see what might work, and also to see if I could do it. I got the basic motion detection working great, using the imutils Python library. I then generated raw frequencies to approximate notes (based on the X/Y coordinates of an instance of motion).
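The note-generation code isn’t shown here, but a minimal sketch of one way to map motion coordinates to frequencies might look like this – equal temperament via the standard MIDI-note formula, with X picking a scale degree and Y picking the octave. The C-major scale, 640×480 frame size, and octave range are my own assumptions for illustration, not necessarily what the demo used:

```python
def note_frequency(midi_note):
    """Equal-temperament frequency for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def motion_to_note(x, y, frame_w=640, frame_h=480):
    """Map a motion centroid (x, y) to a pitch: X picks a degree in a C-major
    scale, Y picks the octave (higher in the frame = higher octave, 3 to 5)."""
    scale = [0, 2, 4, 5, 7, 9, 11]  # C-major intervals in semitones
    degree = int(x / frame_w * len(scale)) % len(scale)
    octave = 3 + min(2, int((frame_h - y) / frame_h * 3))
    midi = 12 * (octave + 1) + scale[degree]  # MIDI convention: C4 = 60
    return note_frequency(midi)
```

Snapping to a scale rather than using the raw X position directly is one way to make several simultaneous “players” sound less like noise and more like chords.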
Turn your volume WAY down. It sounds like crap and is horribly loud. But the concept worked. Motion tracking by a webcam overlooking the atrium of the Taylor Institute (the webcam was only there for the recording of this demo – it’s not a permanent installation), run through motion detection and an algorithm that calculates frequencies for notes played by each instance of movement during a cycle (the “players” count).
I updated the code after making this recording to refresh the motion detection buffer more frequently, so things like sunlight moving across a polished floor don’t trigger constant notes.
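I won’t reproduce the actual updated code, but here’s a minimal, stdlib-only sketch of the idea – periodically re-baselining the reference frame so slow changes (like sunlight creeping across a polished floor) get absorbed into the background instead of being flagged as motion. The refresh interval, threshold, and 1%-of-pixels cutoff are made-up values, and frames are simplified to flat lists of grayscale pixels:

```python
def detect_motion(frames, refresh_every=30, threshold=25):
    """Yield True/False per frame: compare each grayscale frame (a flat list of
    pixel values) against a reference frame that is re-captured every
    `refresh_every` frames, so gradual lighting drift never accumulates enough
    per-pixel difference to register as motion."""
    reference = None
    for i, frame in enumerate(frames):
        if reference is None or i % refresh_every == 0:
            reference = frame  # re-baseline: current frame becomes the background
            yield False
            continue
        changed = sum(abs(a - b) > threshold for a, b in zip(frame, reference))
        yield changed > len(frame) // 100  # motion if more than ~1% of pixels changed
```

With a short refresh interval, a scene brightening by one gray level per frame never accumulates enough difference to cross the threshold; with a stale reference, the same drift eventually triggers constant false notes.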
Next up: explore what soundscapes could be algorithmically generated or modified in response to the motion input. Possibly using Csound?
and an updated version with improved motion detection (and annoying audio stripped out):