Harmony.
auto-focused audio
Ideated and fully prototyped in the last three months of 2019, Harmony is a directional speaker that reimagines how sound can be used to shape and create environments. Harmony was designed and built as part of Making It, an undergraduate design course focused on product design and entrepreneurship held in the Center for Engineering Innovation and Design makerspace at Yale. It was the result of a collaboration among four undergraduate students: V. Patel, J. Payne, L. Wayland, and Valeria Villanueva.
Excerpt from the design brief:
Mission & Motivation
Harmony is a device that combines directional audio with motion tracking for a personalized and private sound experience that follows you as you move through space… Sharing space is difficult when people have varying auditory needs and preferences. While many products today are adept at sharing visual space (e.g. screens), current auditory solutions isolate individuals from one another and create barriers to communication… Allowing people to customize their auditory environment will enable spaces to accommodate new needs, broaden the range of people who can cohabit a space, and create uniquely enchanting moments within otherwise mundane situations.
In the home, Harmony addresses the problem of shared listening among friends or family when only some of them want music, enabling private listening without headphones. Public spaces, such as museums or exhibits, may employ Harmony to create personalized auditory connections between visitors and exhibits (e.g. by representing stories through sound). Private companies can similarly use Harmony to attract passersby with an intriguing sound experience, or install it indoors to guide customers to particular departments.
Design
The production of directional speakers for the average consumer is a recent and largely unexplored development. Harmony's unfair advantage is our ability to integrate autonomous tracking with directional audio to create a new, shapeable soundscape. We adapted the SoundLazer ultrasonic transducer array to produce the directional sound. The array emits two ultrasonic carrier waves that travel in a narrow cone over long distances. When these waves strike objects, they interact, producing an interference pattern whose frequency is the difference between the frequencies of the two carriers. Using digital signal processing, the carrier frequencies are calibrated so that the resulting interference pattern reproduces the desired audio wave.
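To make the difference-frequency idea concrete, here is a minimal NumPy sketch (not the production DSP; the specific frequencies and filter are illustrative assumptions). Two ultrasonic tones spaced 1 kHz apart pass through a quadratic nonlinearity, a crude stand-in for the nonlinear response of air, and a low-pass filter recovers an audible tone at the difference frequency.

import numpy as np

fs = 192_000                      # sample rate high enough for ultrasonic tones
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

f_carrier = 40_000                # base ultrasonic carrier (Hz), illustrative
f_audio = 1_000                   # desired audible tone (Hz)

# Two carriers spaced by the desired audio frequency
wave1 = np.sin(2 * np.pi * f_carrier * t)
wave2 = np.sin(2 * np.pi * (f_carrier + f_audio) * t)

# A quadratic nonlinearity creates sum- and difference-frequency
# components; the difference component lands exactly at f_audio.
mixed = (wave1 + wave2) ** 2

# Moving-average low-pass to discard the ultrasonic components
kernel = np.ones(64) / 64
audible = np.convolve(mixed, kernel, mode="same")
audible -= audible.mean()         # drop the DC offset left by squaring

# The dominant remaining component sits at the difference frequency
spectrum = np.abs(np.fft.rfft(audible))
freqs = np.fft.rfftfreq(len(audible), 1 / fs)
print(f"Recovered tone near {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~1000 Hz

In the real array, the spacing between the carriers is varied continuously to match the instantaneous frequency content of the source audio, which is the role the digital signal processing plays.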
To follow users as they move through an environment, we used an RGB camera with a face-detection algorithm from OpenCV (Open Computer Vision) that identifies and tracks facial features. We also developed a pan-tilt mechanism that allows 180º of rotation horizontally and 90º vertically. The coordinates from the face detector drove servos that center the aim of the directional audio on the user. These components were powered and controlled by a Raspberry Pi 3 Model B+ and enclosed in a minimally designed case.
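The control loop can be sketched as follows. This is a simplified reconstruction rather than the production code: the GPIO pins, the gpiozero servo wrapper, the Haar-cascade detector, and the proportional gain are all assumptions chosen for illustration. The structure mirrors the description above: detect a face, measure its offset from the frame center, and nudge the pan and tilt servos to re-center it.

import cv2
from gpiozero import AngularServo

# Pan covers 180º of horizontal travel, tilt 90º of vertical travel;
# pin numbers are hypothetical.
pan = AngularServo(17, min_angle=-90, max_angle=90)
tilt = AngularServo(18, min_angle=-45, max_angle=45)
pan.angle, tilt.angle = 0, 0

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)

GAIN = 0.05  # degrees of correction per pixel of error (tuning assumption)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        # Error between the face center and the frame center, in pixels
        err_x = (x + w / 2) - frame.shape[1] / 2
        err_y = (y + h / 2) - frame.shape[0] / 2
        # Step each servo proportionally, clamped to its travel limits;
        # the sign conventions depend on camera and servo orientation.
        pan.angle = max(-90, min(90, pan.angle - GAIN * err_x))
        tilt.angle = max(-45, min(45, tilt.angle + GAIN * err_y))

camera.release()

A proportional step like this converges smoothly on a slowly moving target; keeping the gain small trades response speed for stability, which suits a speaker that should glide rather than jitter.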