Prototyping an atrium-sized theremin

I’ve been hacking on a prototype for a sonification experiment – the idea is to provide audio biofeedback, shaping the soundscape of a space in response to the movement and activity within it. I put together a quick mockup using Python and imutils.

It started as a “skunkworks” project idea:

An atrium-sized theremin, so a person (or a few people, or a whole gaggle of people) could make sounds (or, hopefully, music) by moving throughout the atrium. A theremin works by sensing how a player’s body disturbs the electromagnetic field around its antennas – normal-sized theremins respond to hand movements. An atrium-sized theremin might respond to where a person walks or stands in the atrium, or how they move. I have absolutely NO idea how to do this, but think it could be a fun way to gently nudge people to explore motion and position in a space. Bonus points for adding some form of synchronized visualization (light show? Digital image projection? Something else?)

So I started hacking stuff together to see what might work, and also to see if I could do it. I got the basic motion detection working great, using the imutils Python library. I then generated raw frequencies to approximate notes (based on the X/Y coordinates of each instance of motion).

Turn your volume WAY down. It sounds like crap and is horribly loud. But the concept worked. The demo uses a webcam overlooking the atrium of the Taylor Institute (the webcam was only there for recording this demo – it’s not a permanent installation), with frames run through motion detection and an algorithm that calculates a note frequency for each instance of movement during a cycle (the “players” count).
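The mapping from position to pitch can be sketched in a few lines. The prototype just generated raw frequencies from the X/Y coordinates; the version below quantizes the x coordinate onto a pentatonic scale instead – an illustrative choice of mine, since pentatonic notes stay roughly consonant no matter how many “players” sound at once. The equal-temperament formula itself is standard:

```python
A4 = 440.0
PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

def x_to_frequency(x, frame_width, octaves=2):
    """Map a horizontal pixel position to a note frequency in Hz."""
    steps = len(PENTATONIC) * octaves
    idx = min(int(x / frame_width * steps), steps - 1)
    octave, degree = divmod(idx, len(PENTATONIC))
    semitones = 12 * octave + PENTATONIC[degree]
    # Equal temperament: each semitone multiplies frequency by 2^(1/12).
    return A4 * 2 ** (semitones / 12)

print(round(x_to_frequency(0, 500), 1))    # leftmost edge of the frame
print(round(x_to_frequency(499, 500), 1))  # rightmost edge, two octaves up
```

The y coordinate could drive a second parameter – volume, timbre, or note duration – the same way.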

I updated the code after making this recording to refresh the motion detection buffer more frequently, so things like sunlight moving across a polished floor don’t trigger constant notes.

Next up: explore what soundscapes could be algorithmically generated or modified in response to the motion input. Possibly using Csound?

And an updated version with improved motion detection (and the annoying audio stripped out):


Last updated: March 30, 2023