2020

James Dewey, Ph.D.

University of Southern California
Filtering of otoacoustic emissions: a window onto cochlear frequency tuning

Healthy ears emit sounds that can be measured in the ear canal with a sensitive microphone. These otoacoustic emissions (OAEs) offer a noninvasive window onto the mechanical processes within the cochlea that confer typical hearing, and are commonly measured in the clinic to detect hearing loss. Nevertheless, their interpretation remains limited by uncertainties regarding how they are generated within the cochlea and how they propagate out of it. Through experiments in mice, this project will test theoretical relationships that suggest that OAEs are strongly shaped (or “filtered”) as they travel through the cochlea, and that this filtering is related to how well the ear can discriminate sounds at different frequencies. This may lead to novel, noninvasive tests of human cochlear function, and specifically frequency discrimination, which is important for understanding speech.

Mishaela DiNino, Ph.D.

Carnegie Mellon University
Neural mechanisms of speech sound encoding in older adults

Many older adults have trouble understanding speech in noisy environments, often to a greater extent than their hearing thresholds would predict. Age-related changes in the central auditory system, not just hearing loss, are thought to contribute to this perceptual impairment, but the exact mechanisms by which this occurs are not yet known. As individuals age, auditory neurons become less able to synchronize to the timing information in sound. This project will examine the relationship between reduced neural processing of fine timing information and older adults’ ability to encode the acoustic building blocks of speech sounds. A limited capacity to encode and use these acoustic cues might impair speech perception, particularly in the presence of background noise, independent of hearing thresholds. The results of this study will provide a better understanding of how the neural mechanisms important for speech-in-noise recognition may be altered with age, laying the groundwork for the development of novel treatments for older adults who experience difficulty perceiving speech in noise.

Z. Ellen Peng, Ph.D.

University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants

Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes when tested in sound booths, many children have difficulty understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse classroom acoustics compound the difficulty of processing speech that is already degraded by the CI is not well understood. In this project, we examine speech understanding in classroom-like environments using immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing in classroom-like environments with adverse room acoustics by CI users who developed language through electrical hearing.

Pei-Ciao Tang, Ph.D.

University of Miami Miller School of Medicine
Elucidating the development of the otic lineage using stem cell-derived organoid systems

One of the main causes of hearing loss is damage to and/or loss of the specialized cochlear hair cells and neurons that are ultimately responsible for our sense of hearing. Stem cell–derived 3D inner ear organoids (lab-grown, simplified mini-organs) provide an opportunity to study hair cells and sensory neurons in a dish. However, the system is in its infancy, and hair cell–containing organoids are difficult to produce and maintain. This project will use a stem cell–derived 3D inner ear organoid system as a model to study mammalian inner ear development. The developmental knowledge gained will then be used to optimize the efficacy of the organoid system. As such, the results will advance our understanding of how the inner ear forms and functions, with the improved organoid system then allowing us to directly elucidate factors that cause congenital hearing loss.

Bryan Ward, M.D.

Johns Hopkins University School of Medicine
The effect of fluid volume on vestibular function and adaptation in patients with Ménière’s disease

Individuals with Ménière’s disease experience spontaneous attacks of spinning vertigo, ear fullness, tinnitus, and hearing loss. The pathophysiology of Ménière’s disease is not known. On some tests of the inner ear, individuals with Ménière’s have responses indicating that the inner ear balance organs are not functioning well (absent caloric responses), while other tests suggest that they are (head impulse testing). The reason for this discrepancy is debated. Strong magnetic resonance imaging (MRI) scanners cause dizziness and nystagmus (back-and-forth beating of the eyes from inner ear stimulation) in all healthy humans due to magnetic vestibular stimulation (MVS). The combination of MVS and MRI thus provides a unique opportunity to better understand the physiology of patients with Ménière’s disease. This project will assess nystagmus in strong MRI machines in individuals with Ménière’s and compare it with tests of vestibular function and with imaging of the inner ear.

Ross Williamson, Ph.D.

University of Pittsburgh
Characterizing tinnitus-induced changes in auditory corticofugal networks

The irrepressible perception of sound without an external sound source is a symptom present in a number of different auditory dysfunctions. It is the primary complaint of tinnitus sufferers, who report significant “ringing” in the ears, and it is one of the primary sensory symptoms in schizophrenia sufferers who “hear voices.” Tinnitus is thought to reflect a disorder of gain: a loss of input at the periphery shifts the balance of excitation and inhibition throughout the auditory hierarchy, producing hyperexcitability that in turn leads to the perception of phantom sounds. This project aims to quantify how such “phantom sound” signals are routed and broadcast across the entire brain, and to understand how these signals impact our ability to perceive sound. Identifying improper regulation of brain-wide neural circuits in this way will provide a foundation for the development of new treatments for tinnitus and other hearing disorders.

Calvin Wu, Ph.D.

University of Michigan
Development and transmission of the tinnitus neural code

Noise overexposure is a common risk factor for tinnitus and is thus widely used to induce tinnitus in animal research. However, noise exposure does not always cause tinnitus, so researchers must rely on behavioral testing to infer an animal’s subjective pathology. Such behavioral tests work only under the assumption that tinnitus remains unchanged over the long testing period, an assumption that ignores both the dynamic nature of tinnitus and its variability. This inability to measure tinnitus within a short time window impedes our understanding of its emergence and progression. This project addresses these limitations by bypassing behavioral testing and directly identifying and localizing an objective code for tinnitus in real-time neural spiking activity. Using a novel data-driven approach, we can pinpoint exactly when and where tinnitus emerges and examine how noise trauma triggers and transmits the tinnitus signal throughout the auditory pathway.