Hearing Augmentation and Music

People Involved: 
John Wawrzynek
Status: 
Active

High-performance signal processing in the right form factor will enable much-improved hearing aids (we are already working with a leading manufacturer, Starkey Labs); sound delivery systems for concert halls, conference calls, and home sound systems using large microphone and speaker arrays (with Meyer Sound Labs as an industrial partner); music information retrieval; composition; and gesture-driven live-performance systems that exploit new sensor technologies.

These compute-intensive audio and music applications share common software components, map onto the dwarfs, and require real-time, low-latency I/O (well under 5 ms) and high reliability. Real-time operation is needed to ensure that audio output samples are generated at a steady rate: if a sample is late, an audible artifact (a glitch or click) results. Reliability is critical, as failures cannot be tolerated in concert presentations or consumer applications.
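To make the latency requirement concrete, a short back-of-the-envelope sketch (illustrative numbers only; the sample rate and block size are assumptions, not the project's actual parameters) shows how a 5 ms deadline bounds the audio block size and the per-block compute budget:

```python
# Hypothetical latency-budget arithmetic for a real-time audio callback.
# At a 44.1 kHz sample rate, a 5 ms deadline bounds how many samples the
# processing chain may buffer before an output sample arrives late.

SAMPLE_RATE_HZ = 44_100
DEADLINE_MS = 5.0

# Largest whole block that still meets the deadline.
max_block = int(SAMPLE_RATE_HZ * DEADLINE_MS / 1000)   # 220 samples

# Compute time available per 64-sample block (a common callback size):
# miss this budget and the output glitches.
block = 64
budget_ms = 1000 * block / SAMPLE_RATE_HZ              # about 1.45 ms
```

Any processing stage whose worst-case time exceeds the per-block budget will eventually produce the audible click described above.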

Handsets for Hearing Augmentation (HA). This application will run on a many-core handset with wireless connections to earbud or hearing-aid devices equipped with microphones. We implement dynamics processing, multi-band compression, noise reduction, source separation, and beamforming adaptively. The result is a hearing aid that responds dynamically to the changing context of the listener, selecting optimal settings for music [Wessel, Fitz et al. 2007], speech, television, the car, and so forth.
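As a flavor of the dynamics processing involved, the following is a minimal sketch of single-band, memoryless dynamic range compression (the threshold and ratio are illustrative assumptions; a real hearing-aid compressor is multi-band and adds attack/release smoothing):

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Memoryless dynamic range compression (illustrative sketch).

    Samples whose level exceeds `threshold_db` are attenuated so that
    each dB over the threshold is reduced by the compression `ratio`.
    """
    eps = 1e-12  # avoid log of zero
    level_db = 20 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

# A loud 1 kHz tone (peak 0.9, about -0.9 dBFS) is pulled down toward
# the threshold; quiet material below the threshold passes unchanged.
t = np.arange(441) / 44_100
tone = 0.9 * np.sin(2 * np.pi * 1000 * t)
out = compress(tone)
```

In the adaptive system described above, parameters like the threshold and ratio would be selected per band and per listening context rather than fixed.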

Large Microphone and Speaker Arrays. Dense arrays of speakers and microphones will enhance the listener's experience in conference calls, living rooms, and concert halls. Wavefield synthesis allows us to place auditory images at fixed locations in a room. Beamforming microphone arrays aid in locating and separating sound sources. We have demonstrated success on a 120-channel speaker array and will provide a 400-channel speaker array with a matched microphone array as a testbed.
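The beamforming mentioned above can be illustrated with the classic delay-and-sum approach: each microphone channel is time-shifted by a steering delay so that sound from the target direction adds coherently. This sketch uses integer-sample delays for simplicity (real array systems use fractional-delay filters and many more channels):

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Delay-and-sum beamforming sketch: align each microphone channel
    by its integer steering delay, then average the aligned channels.
    """
    # Trim every channel to the length all of them can supply after shifting.
    n = min(len(s) - d for s, d in zip(mic_signals, delays_samples))
    aligned = [s[d:d + n] for s, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)

# Two mics hear the same unit pulse; it reaches the second mic 3 samples
# later. Steering with delays [0, 3] realigns the pulse so it sums coherently.
pulse = np.zeros(32)
pulse[5] = 1.0
mic0 = pulse
mic1 = np.roll(pulse, 3)
beam = delay_and_sum([mic0, mic1], delays_samples=[0, 3])
```

With the correct steering delays the pulse retains its full amplitude in the beamformer output, while sources arriving from other directions are attenuated by the averaging; sweeping the delays over candidate directions is one simple way to locate a source.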