Sensory Neuroengineering

Tobias Reichenbach
Department of Bioengineering, Imperial College London

My group works on the biophysics of hearing and neuroscience, at the interface of science, technology, and medicine.

We use ideas from theoretical physics, mathematics, and computer science in combination with ear and brain imaging to investigate principles of human auditory signal detection and processing. Together with clinical collaborators we also investigate auditory and language impairments.

We aim to apply our findings in novel bio-inspired technology as well as in technology for diagnosing and rehabilitating hearing and communication impairments.

The group is part of the Department of Bioengineering at Imperial College London. We are funded by EPSRC, the Wellcome Trust, and the Royal Society.


Webpage on auditory illusions


Auditory illusions are fascinating because they show us how easily our sense of hearing can be deceived. A Shepard tone, for example, sounds as if it consisted of a single tone that, eerily, rises forever! With a group of students we have developed a webpage on auditory illusions. The webpage contains audio demonstrations of a variety of such illusions, explanations of their origins, and further background research. By understanding how our sense of hearing can go wrong, we can learn more about the way our ears and brains sense and perceive acoustic stimuli.
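For a flavour of how such an illusion arises, here is a minimal sketch that synthesizes a rising Shepard tone: octave-spaced sine components glide upward while a fixed, bell-shaped spectral envelope fades them in at the bottom of the register and out at the top. All parameters (sampling rate, base frequency, number of octaves) are illustrative choices, not taken from the webpage:

```python
import numpy as np

def shepard_tone(fs=8000, duration=4.0, base=40.0, n_octaves=5):
    """Synthesize a rising Shepard tone: octave-spaced components whose
    frequencies glide upward while a fixed spectral (Gaussian) envelope
    fades components in at low frequencies and out at high ones."""
    t = np.arange(int(fs * duration)) / fs
    # each component rises by one octave over the clip, then wraps,
    # so the glide can be looped seamlessly
    phase_cycle = (t / duration) % 1.0
    tone = np.zeros_like(t)
    for k in range(n_octaves):
        freq = base * 2 ** (k + phase_cycle)        # gliding frequency
        phase = 2 * np.pi * np.cumsum(freq) / fs    # integrate frequency to get phase
        # bell-shaped loudness envelope, centred mid-register on a log-frequency axis
        amp = np.exp(-0.5 * ((np.log2(freq / base) - n_octaves / 2) / 1.5) ** 2)
        tone += amp * np.sin(phase)
    return tone / np.max(np.abs(tone))              # normalize to [-1, 1]

tone = shepard_tone()   # write to a WAV file or play back to hear the effect
```

Because each component fades out just as it reaches the top of the spectrum while a new one fades in at the bottom, the ear hears a pitch that climbs without ever arriving.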

Enhancement of speech-in-noise comprehension through neurostimulation


Transcranial current stimulation can influence neuronal activity. As a striking example, current stimulation paired with a speech signal, with the current following the speech envelope, can influence the comprehension of speech in background noise. The speech envelope is a slowly varying signal whose frequency contributions lie mostly in the delta and theta frequency bands. Here we show that the modulation of speech comprehension results from the theta band, but not from the delta band. Moreover, we find that theta-band stimulation without an additional phase shift improves speech comprehension compared to a sham stimulus.

M. Keshavarzi, M. Kegler, S. Kadir, T. Reichenbach,
Transcranial alternating current stimulation in the theta band but not in the delta band modulates the comprehension of naturalistic speech in noise,
Neuroimage (2020) 210:116557. [pdf]
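As an illustrative sketch (not the paper's stimulation protocol), the following snippet shows how a speech envelope can be extracted via the Hilbert transform and restricted to the delta or theta band. The toy stimulus, filter order, and band edges are assumptions for demonstration only:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def band_envelope(speech, fs, band):
    """Extract the speech envelope (magnitude of the analytic signal)
    and band-pass it to a given frequency band, e.g. delta (1-4 Hz)
    or theta (4-8 Hz)."""
    envelope = np.abs(hilbert(speech))
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, envelope)   # zero-phase filtering

# toy stimulus: a 100 Hz carrier amplitude-modulated at a theta-band rate (6 Hz)
fs = 1000
t = np.arange(0, 4, 1 / fs)
speech = (1 + np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 100 * t)

theta_env = band_envelope(speech, fs, (4, 8))   # captures the 6 Hz modulation
delta_env = band_envelope(speech, fs, (1, 4))   # the modulation falls outside delta
```

For this stimulus nearly all of the envelope's modulation power lands in the theta band, which is the kind of band separation the stimulation experiment relies on.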

Measuring speech comprehension from EEG recordings


If hearing aids could measure how well a wearer understands speech, they might be able to optimize and adapt their signal processing to enable the best user experience. Hearing aids can potentially measure brain responses to speech from electrodes, but how these responses can inform on speech comprehension has remained unclear. Here we report significant progress on this issue. By combining machine learning with an experimental paradigm that allows us to disentangle lower-level acoustic brain responses from neural correlates of higher-level speech comprehension, we show that speech comprehension can be decoded from scalp recordings.

O. Etard and T. Reichenbach,
Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise,
J. Neurosci. 39:5750 (2019). [pdf]
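A common building block for this kind of analysis is a linear backward model that reconstructs the speech envelope from time-lagged EEG. The sketch below uses ridge regression on simulated data; it is not the exact analysis of the paper, and all parameters (lags, regularization, toy signal model) are illustrative assumptions:

```python
import numpy as np

def reconstruct_envelope(eeg, envelope, lags, reg=1e3):
    """Backward model: ridge-regress time-lagged EEG channels onto the
    speech envelope; the reconstruction accuracy (Pearson r) serves as
    a measure of neural envelope tracking."""
    # design matrix of lagged copies of every channel
    X = np.hstack([np.roll(eeg, lag, axis=0) for lag in lags])
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ envelope)
    return np.corrcoef(X @ w, envelope)[0, 1]

# toy data: 8 "EEG" channels that track the envelope at a 5-sample delay
rng = np.random.default_rng(0)
n = 64 * 300                                  # 5 minutes at 64 Hz
envelope = rng.standard_normal(n)
eeg = np.stack([0.8 * np.roll(envelope, 5) + rng.standard_normal(n)
                for _ in range(8)], axis=1)

# EEG follows the stimulus, so the decoder looks at later EEG samples
r_tracking = reconstruct_envelope(eeg, envelope, lags=range(-15, 1))
r_control = reconstruct_envelope(eeg, rng.standard_normal(n), lags=range(-15, 1))
```

Comparing the tracking score against a control with an unrelated envelope gives a simple sanity check that the decoder picks up a genuine stimulus-response relation rather than overfitting.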

Decoding attention to speech from the brainstem response to speech


We are often faced with high noise levels: in a busy pub or restaurant, for instance, many conversations occur simultaneously. People with hearing impairment typically find it difficult to follow a particular conversation, even when they use hearing aids: current aids amplify all the surrounding sounds, not only the target voice. If a hearing aid knew which speaker a user aims to listen to, it could amplify that voice in particular and reduce the background noise. Here we show that a hearing aid can potentially gain knowledge of a user's attentional focus by measuring the auditory brainstem response from surface electrodes. We show in particular that short recordings, down to a few seconds, and a few scalp electrodes suffice for a meaningful decoding of auditory attention.

O. Etard, M. Kegler, C. Braiman, A. E. Forte, T. Reichenbach,
Real-time decoding of selective attention from the human auditory brainstem response to continuous speech,
Neuroimage 200:1 (2019). [pdf] [bioRxiv]
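The decoding idea can be illustrated on simulated data: correlate the EEG with the fundamental waveform of each competing speaker at a fixed brainstem delay, and pick the speaker whose waveform is more strongly represented. The function, delay, and signal model below are illustrative assumptions, not the paper's real-time pipeline:

```python
import numpy as np

def decode_attention(eeg, f0_a, f0_b, delay):
    """Correlate an EEG channel with the fundamental waveforms of two
    competing speakers at a fixed brainstem delay; the more strongly
    represented waveform indicates the putatively attended speaker."""
    shifted = eeg[delay:]              # align EEG to the preceding stimulus
    r_a = np.corrcoef(shifted, f0_a[:len(shifted)])[0, 1]
    r_b = np.corrcoef(shifted, f0_b[:len(shifted)])[0, 1]
    return ("A" if abs(r_a) > abs(r_b) else "B"), r_a, r_b

# toy data: the "EEG" contains a delayed, attenuated copy of speaker A's
# fundamental waveform (the attended one) buried in noise
rng = np.random.default_rng(1)
n, delay = 10000, 9                    # ~9 ms neural delay at 1 kHz (illustrative)
f0_a = rng.standard_normal(n)
f0_b = rng.standard_normal(n)
eeg = 0.5 * np.roll(f0_a, delay) + rng.standard_normal(n)

speaker, r_a, r_b = decode_attention(eeg, f0_a, f0_b, delay)
```

In practice the neural delay is not known a priori and the response is far noisier, which is why short-but-reliable decoding from only a few electrodes is the notable result.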

Tobias Reichenbach joins eLife's Board of Reviewing Editors

I am excited to join the Board of Reviewing Editors of eLife. eLife is a non-profit journal that is led by scientists and is committed to publishing life-science research of high quality and importance in an open-access manner. I look forward to working with the eLife team and the academic community to further the journal's mission of providing a publication platform that helps scientists accelerate discovery.

Neural responses to speech can help to diagnose brain injury


Brain injury, such as from traffic or sports accidents, can lead to severe disorders, including disorders of consciousness. Disorders of consciousness are currently diagnosed through behavioural assessments, but this method fails when patients are unable to respond overtly. We investigated whether neural responses to speech, as measured with clinically applicable EEG, can aid the diagnosis of disorders of consciousness. We focussed on the neural tracking of the speech envelope, which can index attention to speech as well as speech comprehension. We found that the latency of the neural envelope tracking was related to the severity of the disorder of consciousness: patients in a vegetative state without signs of consciousness showed neural responses to the speech envelope that were significantly delayed compared to patients who exhibited consciousness.

C. Braiman, E. A. Fridman, M. M. Conte, C. S. Reichenbach, T. Reichenbach, N. D. Schiff,
Cortical Response to the Natural Speech Envelope Correlates with Neuroimaging Evidence of Cognition in Severe Brain Injury,
Curr. Biol. 28:1-7 (2018). [pdf]

How we can tune in to a voice in background noise


In order to follow a particular conversation, listeners need to be able to focus on the voice of the speaker they wish to listen to. This process is called selective attention and has been extensively studied in the auditory cortex. However, due to neural feedback from the cortex to lower auditory areas, the auditory brainstem as well as the inner ear, these structures may already actively participate in attending to a particular voice.

We have devised a mathematical method to measure the response of the auditory brainstem to the pitch of natural speech. In a controlled experiment on selective attention, we have then shown that the brainstem responds more strongly to the pitch of the voice that a person is listening to than to that of the ignored voice. Our findings demonstrate that the brainstem already contributes actively to selective attention. They also show that the pitch of a voice can be a powerful cue for focusing on that voice, which may inspire future speech-recognition technology.

A. E. Forte, O. Etard and T. Reichenbach,
The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention,
eLife 6:e27203 (2017). [pdf] [bioRxiv]
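One simple way to obtain a pitch-following waveform from speech, sketched here as an illustration rather than the paper's actual method, is to band-pass the audio around the speaker's fundamental-frequency range. The band edges and toy "vowel" below are assumed values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fundamental_waveform(speech, fs, f0_band=(100, 200)):
    """Approximate the fundamental waveform of voiced speech by
    band-pass filtering around the speaker's fundamental frequency,
    discarding the higher harmonics."""
    nyq = fs / 2
    b, a = butter(2, [f0_band[0] / nyq, f0_band[1] / nyq], btype="band")
    return filtfilt(b, a, speech)

# toy "vowel": a 150 Hz fundamental plus two weaker harmonics
fs = 8000
t = np.arange(fs) / fs                      # one second of signal
vowel = (np.sin(2 * np.pi * 150 * t)
         + 0.6 * np.sin(2 * np.pi * 300 * t)
         + 0.3 * np.sin(2 * np.pi * 450 * t))

fund = fundamental_waveform(vowel, fs)
peak_hz = np.argmax(np.abs(np.fft.rfft(fund)))   # 1 Hz bins for a 1 s signal
```

The filtered waveform oscillates at the fundamental frequency, i.e. the pitch, and it is this kind of pitch-locked signal whose representation in the brainstem response the study measures.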

Upcoming workshop on Speech and Hearing

I am excited to announce an upcoming workshop, Physics of Hearing: From Neurobiology to Information Theory and Back, at the Kavli Institute for Theoretical Physics (KITP), University of California Santa Barbara (UCSB), U.S.A. The workshop will run from May 30, 2017 to July 21, 2017. Coordinated by Hervé Bourlard, Maria Neimark Geffen, Jim Hudspeth, and myself, it will bring together researchers on the biophysics and neurobiology of hearing with those investigating the information theory of complex auditory signals. We expect that the combination of these two perspectives will foster novel and exciting collaborations between program participants and yield significant progress in the neurobiology of hearing and oral communication as well as in speech-recognition technology.