Our brain can track the sounds in its environment while we sleep, favoring the most relevant ones, according to a recent study.

Nothing groundbreaking there. Everyone has been woken from sleep by noise. But the mechanism that allows us (some better than others) to sleep in complete safety and wake up at the right moment has remained a mystery. Why do some people who fall asleep on a bus or train miss their stop, while others wake only at the sound of their own name and not someone else's?

Studies that concentrated on the sleeping brain's capacity to process isolated sounds say little about the real world, where we often sleep in environments in which various sounds are superimposed and mixed together.

In a recent study, researchers recorded the cerebral responses of sleeping participants who were simultaneously exposed to two voices that were highly similar in their acoustic properties but radically different in meaning: one pronounced excerpts from dialogues or articles, while the other pronounced a flow of words resembling French but devoid of meaning.

They then used a technique that can reconstruct what the sleepers hear based on their brain activity. They were subsequently able to confirm that during light sleep, participants favored the message that held meaning for them. As a result, even while sleeping and unconscious, the brain records surrounding sounds, separates various acoustic sources, and selects those that are the most comprehensible.

This capacity to focus on what is relevant is temporary, as it involves only slow-wave and light sleep. The brain appears capable of processing information from the outside world during this sleep stage, but only for short periods of time. That ability to sleep with 'one ear open' is why some people can sleep on a bus without missing their stop.

Citation: Guillaume Legendre, Thomas Andrillon, Matthieu Koroma and Sid Kouider, 'Sleepers track informative speech in a multitalker environment', Nature Human Behaviour, January 14, 2019. DOI: 10.1038/s41562-018-0502-5