Decoding Brain Activity

Brain-computer interfaces (BCIs) enable communication between the human mind and machines by decoding brain activity, recorded with, for instance, electroencephalography (EEG). However, traditional BCIs and neuroscience applications often require users to concentrate on synthetic, repetitive stimuli such as flickering patterns or beep tones. These requirements block the widespread use of such BCIs in daily-life situations beyond a few niche clinical applications. In this research line, we envisage a shift toward natural BCIs, where the goal is, for example, to decode brain responses to natural audio-visual stimuli, enabling integration with daily-life activities and devices. We aim to design novel fundamental signal processing and machine learning algorithms that unlock BCI technology for much more widespread use. As such, we work on various high-impact applications, ranging from cognitively controlled hearing devices and consumer earphones, over attention tracking in learning environments, to understanding mental states and stress regulation.


State-of-the-art hearing aids and cochlear implants underperform in so-called ‘cocktail party’ scenarios, where multiple people talk simultaneously (think of a reception or a family dinner party). In such situations, the hearing device does not know which speaker the user intends to attend to and, thus, which speaker to enhance and which to suppress. Over the past years, we have developed various algorithms and technologies that decode the brain activity of listeners in such cocktail party scenarios to determine which speaker they are attending to. With this technology, we envisage a cognitively controlled hearing aid that lets users steer the device toward the speech source they want to listen to.
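
To give a concrete feel for how such attention decoding can work, the sketch below illustrates the widely used stimulus-reconstruction approach: a linear decoder is trained to reconstruct the attended speech envelope from time-lagged EEG, and at test time the attended speaker is identified as the one whose envelope correlates best with the reconstruction. This is a minimal illustrative sketch, not our exact implementation; the function names, the number of lags, and the ridge-regularization value are assumptions chosen for clarity.

```python
import numpy as np

def build_lagged_eeg(eeg, n_lags):
    """Stack time-lagged copies of the EEG (samples x channels) so the
    decoder can integrate information over a short post-stimulus window."""
    n_samples, n_channels = eeg.shape
    lagged = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return lagged

def train_decoder(eeg, attended_envelope, n_lags=32, ridge=1e3):
    """Ridge-regularized least-squares (backward) decoder that maps
    lagged EEG to the attended speech envelope."""
    X = build_lagged_eeg(eeg, n_lags)
    R = X.T @ X + ridge * np.eye(X.shape[1])   # regularized EEG covariance
    r = X.T @ attended_envelope                # cross-correlation with envelope
    return np.linalg.solve(R, r)

def decode_attention(eeg, envelope_a, envelope_b, decoder, n_lags=32):
    """Reconstruct the envelope from EEG and pick the speaker whose true
    envelope correlates best with the reconstruction."""
    reconstruction = build_lagged_eeg(eeg, n_lags) @ decoder
    corr_a = np.corrcoef(reconstruction, envelope_a)[0, 1]
    corr_b = np.corrcoef(reconstruction, envelope_b)[0, 1]
    return ("speaker A", corr_a) if corr_a >= corr_b else ("speaker B", corr_b)
```

In practice, the correlation is computed over a decision window of several seconds to tens of seconds, which creates a trade-off between decoding accuracy and how quickly the hearing device can react when the user switches attention.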