Yesterday I attended a half-day seminar at Widex A/S on modeling and measuring sound quality, organized by the Audio Signal Processing Network in Denmark. There were four hour-long talks split between measuring sound quality and modeling sound quality, with a focus on hearing impairment. (And I arrived a bit late because the taxi took me to Lyngby instead of Lynge.)
The first talk was by Dr. Nick Zacharov of SenseLab at DELTA in Denmark. What they are trying to do in that project is reduce the cost of performing large-scale listening tests. Their flagship product is SenseLabOnline, which can be used to easily design standards-compliant listening tests and to increase the number of test participants via the WWW. He showed a very interesting study suggesting that the controlled laboratory environment, with calibrated everything and sound-treated anything, could be unnecessary for certain listening experiments: data acquired from people at home using uncalibrated headphones were just as reasonable as those acquired in controlled environments with calibrated headphones.
The second talk was by Dr. Mark Huckvale of the Centre for Law-Enforcement Audio Research (CLEAR), Department of Speech, Hearing and Phonetic Sciences, University College London, U.K. One of the tasks of CLEAR is testing the claims of commercial products that promote themselves as improving the intelligibility of speech in noise. He showed conclusively that some state-of-the-art denoising methods used in commercial products (e.g., MMSE short-time spectral amplitude estimation, or binary masking) in fact reduce the intelligibility of speech in noise, yet are still “preferred” by listeners. His group also tested (in quite novel ways!) whether denoised speech requires less cognitive effort, and found no evidence that it does. In the end we see the danger of conflating listener “preference” with listener “performance”, and that denoising with methods that destroy consonant information hurts intelligibility.
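To make the binary-masking idea concrete, here is a minimal sketch (not Huckvale's actual procedure) of an ideal binary mask: each time-frequency cell of the noisy spectrogram is kept when its local SNR exceeds a criterion and zeroed otherwise. The function name, the toy spectrograms, and the 0 dB threshold are all my own illustrative assumptions.

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Ideal binary mask (illustrative sketch): keep a time-frequency
    cell when its local SNR exceeds the local criterion lc_db (in dB),
    otherwise zero it out entirely."""
    snr_db = 20.0 * np.log10(speech_mag / np.maximum(noise_mag, 1e-12))
    return (snr_db > lc_db).astype(float)

# Toy magnitude spectrograms (frequency bins x time frames); values are made up.
speech = np.array([[1.0, 0.1],
                   [0.5, 2.0]])
noise = np.array([[0.2, 0.2],
                  [1.0, 0.2]])

mask = ideal_binary_mask(speech, noise)
# Cells with SNR below the criterion are discarded wholesale when the mask
# is applied to the noisy mixture -- including any weak speech energy
# (e.g., quiet consonants) they contained.
denoised = mask * (speech + noise)
```

The hard all-or-nothing decision is the point of the sketch: a low-energy consonant sitting in a below-criterion cell is removed along with the noise, which is one plausible way such processing can sound cleaner yet be less intelligible.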
The last two talks were about building models of sound quality, particularly for hearing-impaired listeners. Such models would be of great benefit for objectively comparing and contrasting any and all methods of sound processing without the high costs of extensive listening tests. The presentation by Dr. Volker Hohmann of the Medical Physics Section, Carl von Ossietzky Universität Oldenburg, Germany, showed how they are applying auditory models to create objective measures of sound quality. And in the talk by Dr. Lars Bramsløw of Oticon A/S, Denmark, we saw the results of a quite thorough set of experiments with standardized models of objective sound quality: PEAQ, PSQM, and PhAQM. The bad news is that none of them performed very well for speech, though they did somewhat better for music. I don’t know what the music class consisted of, but I wonder if its wider bandwidth played a role.