Chapter 18. Attention and Higher Cognition
By Alexa Robles-Gil Having an imaginary friend, playing house or daydreaming about the future were long considered uniquely human abilities. Now, scientists have conducted the first study indicating that apes have the ability to play pretend as well. The findings, published Thursday in the journal Science, suggest that imagination is within the cognitive potential of an ape and can possibly be traced back to our common evolutionary ancestors. “This is one of those things that we assume is distinct about our species,” said Christopher Krupenye, a cognitive scientist at Johns Hopkins University and an author of the study. “This kind of finding really shows us that there’s much more richness to these animals’ minds than people give them credit for,” he said. Researchers knew that apes were capable of certain kinds of imagination. If an ape watches someone hide food in a cup, it can imagine that the food is there despite not seeing it. Because that perception is the reality — the food is actually there — it requires the ape to sustain only one view of the world, the one that it knows to be true. “This kind of work goes beyond it,” Dr. Krupenye said. “Because it suggests that they can, at the same time, consider multiple views of the world and really distinguish what’s real from what’s imaginary.” Bonobos, an endangered species found only in the Democratic Republic of Congo, are difficult to study in the wild. For this research, Dr. Krupenye and Amalia Bastos, a cognitive scientist at the University of St. Andrews, relied on an organization known as the Ape Initiative to study Kanzi, a male bonobo famous for demonstrating some understanding of spoken English. (Kanzi was an enculturated ape born in captivity; he died last year at age 44.) © 2026 The New York Times Company
Keyword: Consciousness; Evolution
Link ID: 30112 - Posted: 02.07.2026
Elizabeth Quill Think about your breakfast this morning. Can you imagine the pattern on your coffee mug? The sheen of the jam on your half-eaten toast? Most of us can call up such pictures in our minds. We can visualize the past and summon images of the future. But for an estimated 4% of people, this mental imagery is weak or absent. When researchers ask them to imagine something familiar, they might have a concept of what it is, and words and associations might come to mind, but they describe their mind’s eye as dark or even blank. Systems neuroscientist Mac Shine at the University of Sydney, Australia, first realized that his mental experience differed in this way in 2013. He and his colleagues were trying to understand how certain types of hallucination come about1, and were discussing the vividness of mental imagery. “When I close my eyes, there’s absolutely nothing there,” Shine recalls telling his colleagues. They immediately asked him what he was talking about. “Whoa. What’s going on?” Shine thought. Neither he nor his colleagues had realized how much variation there is in the experiences people have when they close their eyes. This moment of revelation is common to many people who don’t form mental images. They report that they might never have thought about this aspect of their inner life if not for a chance conversation, a high-school psychology class or an article they stumbled across (see ‘How do you imagine?’). Although scientists have known for more than a century that mental imagery varies between people, the topic received a surge of attention when, a decade ago, an influential paper coined the term aphantasia to describe the experience of people with no mental imagery2. © 2026 Springer Nature Limited
Keyword: Attention; Consciousness
Link ID: 30107 - Posted: 02.04.2026
By Amy X. Wang Alice, fumbling through Wonderland, comes across a mushroom. One bite of it shrinks her down in size. Chowing on the other side makes her swell up, huge, taller than the treetops. Urgently, Alice sets to work “nibbling first at one and then at the other, and growing sometimes taller and sometimes shorter,” until finally she succeeds in “bringing herself down to her usual height” — whereupon everything feels “quite strange.” Is this Lewis Carroll’s 1865 fantasy tale or … the average body-conscious, improvement-obsessed 2026 Whole Foods shopper? Mushrooms, long venerated in literature as dark transformative forces, have become Goopified. Nowadays, you can chug “adaptogenic mushroom coffee,” slurp “functional mushroom cocoa,” doze off with “mushroom sleep drops” or ingest/imbibe any number of other tinctures in the billion-dollar fungal supplements market that promise to fine-tune, or even totally recalibrate, the self. The latest and hottest items in this booming new retail category are mushroom gummies, gushed over by wellness influencers, spilling out from supermarket shelves right there next to your standard cough drops and protein bars. Fungi have aided medical advances like antibiotics and statins, it’s true, and certain species have shown promising results in fighting Parkinson’s or cancer — but what these pastel gumdrops proffer is a broader, more elliptical “cellular well-being.” The mystique feels intentional on product-makers’ part: Like Carroll’s baffled heroine, maybe you’re meant to be in a bit of thrall to the mysterious, almighty mushroom — lurching through Wonderland, charmed and confused by design. After all, you wonder, what are these ancient, alien creatures, growing in the secret dark? Hippocrates was supposedly using them to cauterize wounds around the 5th century B.C.E. In the Super Mario video games, mushrooms might give you extra lives; in HBO’s “The Last of Us,” they bring about the ruin of human civilization. 
© 2026 The New York Times Company
Keyword: Attention; Drug Abuse
Link ID: 30102 - Posted: 01.31.2026
By Allison Parshall Until half a billion years ago, life on Earth was slow. The seas were home to single-celled microbes and largely stationary soft-bodied creatures. But at the dawn of the Cambrian era, some 540 million years ago, everything exploded. Bodies diversified in all directions, and many organisms developed appendages that let them move quickly around their environment. These ecosystems became competitive places full of predators and prey. And our branch of the tree of life evolved an incredible structure to navigate it all: the brain. We don’t know whether this was the moment when consciousness first arose on Earth. But it might have been when living creatures began to really need something like it to combine a barrage of sensory information into one unified experience that could guide their actions. It’s because of this ability to experience that, eventually, we began to feel pain and pleasure. Eventually, we became guided not just by base needs but by curiosity, emotions and introspection. Over time we became aware of ourselves. This last step is what we have to thank for most of art, science and philosophy—and the millennia-long quest to understand consciousness itself. This state of awareness of ourselves and our environment comes with many mysteries. Why does being awake and alive, being yourself, feel like anything at all, and where does this singular sense of awareness come from in the brain? These questions may have objective answers, but because they are about private, subjective experiences that can’t be directly measured, they exist at the very boundaries of what the scientific method can reveal. Still, in the past 30 years neuroscientists scouring the brain for the so-called neural correlates of consciousness have learned a lot. Their search has revealed constellations of brain networks whose connections help to explain what happens when we lose consciousness. 
We now have troves of data and working theories, some with mind-bending implications. We have tools to help us detect consciousness in people with brain injuries. But we still don’t have easy answers—researchers can’t even agree on what consciousness is, let alone how best to reveal its secrets. The past few years have seen accusations of pseudoscience, results that challenge leading theories, and the uneasy feeling of a field at a crossroads. © 2025 SCIENTIFIC AMERICAN,
Keyword: Consciousness; Brain imaging
Link ID: 30090 - Posted: 01.21.2026
By Pria Anand I loved literature before I loved medicine, and as a medical student, I often found that my textbooks left me cold, their medical jargon somehow missing the point of profound diseases able to rewrite a person’s life and identity. I was born, I decided, a century too late: I found the stories I craved, not in contemporary textbooks, but in outdated case reports, 18th- and 19th-century descriptions of how the diseases I was studying might shape the life of a single patient. These reports were alive with vivid details: how someone’s vision loss affected their golf game or their smoking habit, their work or their love life. They were all tragedies: Each ended with an autopsy, a patient’s brain dissected to discover where, exactly, the problem lay, to inch closer to an understanding of the geography of the soul. To write these case studies, neurologists awaited the deaths and brains of living patients, robbing their subjects of the ability to choose what would become of their own bodies—the ability to write the endings of their own stories—after they had already been sapped of agency by their illnesses. Among these case reports was one from a forbidding state hospital in the north of Moscow: the story of a 19th-century Russian journalist referred to simply as “a learned man.” The journalist suffered a type of alcoholic dementia because of the brandy he often drank to cure his writer’s block, and he developed a profound amnesia. He could not remember where he was or why. He could win a game of checkers but would forget that he had even played the minute the game ended.
In the place of these lost memories, the journalist’s imagination spun elaborate narratives; he believed he had written an article when in fact he had barely begun to conceive it before he became sick, would describe the prior day’s visit to a far-off place when in actuality he had been too weak to get out of bed, and maintained that some of his possessions—kept in a hospital safe—had been taken from him as part of an elaborate heist. Sacks’ journals suggest he injected his own experiences into the stories of his patients. © 2026 NautilusNext Inc.,
Keyword: Attention; Learning & Memory
Link ID: 30089 - Posted: 01.21.2026
By Kristen French In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that within 25 years, scientists would discover the neural correlates of consciousness. He was certain that we were on the cusp of solving the so-called hard problem: how the physical flesh of the brain gives rise to the everyday streams of feelings, sensations, and thoughts that make up our waking experience. That bet didn’t go well for Koch: A couple of years ago, he paid up, delivering a case of fine wine to his opponent on a conference stage in New York City. But many scientists still believe that the scientific keys to the kingdom of consciousness are within reach. Lately, some are focusing their attention on a new technology called transcranial focused ultrasound, in which acoustic waves are transmitted through the skull deep into the interior tissues. These waves can be used to stimulate specific target areas as small as a few millimeters in size and to monitor the changes that result. Now, two researchers from MIT have mapped out specific ways to use the technology to chip away at the hard problem. Because transcranial focused ultrasound offers a powerful and noninvasive way to alter brain activity, it will allow scientists to track cause-and-effect for the first time, they argue. In a new paper, published in Neuroscience and Biobehavioral Reviews, they plot out a series of experiments that will aim to answer how consciousness arises in the brain—and where. “Transcranial focused ultrasound will let you stimulate different parts of the brain in healthy subjects, in ways you just couldn’t before,” Daniel Freeman, an MIT researcher and co-author of the paper, explained in a statement. “This is a tool that’s not just useful for medicine or even basic science, but could also help address the hard problem of consciousness. 
It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.” © 2026 NautilusNext Inc.,
Keyword: Consciousness
Link ID: 30086 - Posted: 01.17.2026
By Rachel Barr Philosophers and scientists have always kept close company. Look back far enough, and it’s hard to tell where one ends and the other begins. Before we had instruments to measure reality, we had to reason our way into it, but that intellectual lineage is what eventually gave us the scientific method. As technology advanced and the scope for observation expanded, specializations splintered off from philosophy to reconstitute as the sciences. Astronomy cleared the sky of deities and showed us a universe governed by gravity, not gods. Geography mapped a not-so-flat Earth, then geology dated it, stratifying earthly time in isotopes and sedimentary layers. Physics folded time into space, and with it, reimagined us not as beings apart from nature, but as a continuation of its energy and mass. We are not, as Pink Floyd suggested, “lost souls swimming in a fishbowl.” We are matter, muddling our way through life in relativistic motion. Now, in the 21st century, science is tracing a map through the other great unknown: the mind. Advances in biophotonics and neuroimaging have brought us closer than ever to a material picture of the mind, but the questions we’re now brushing up against aren’t melting away under empirical gaze. Instead, neuroscience has wandered back to philosophy’s front door, testing the limits of its most durable questions.
1. Free will
In the early 19th century, French physicist Pierre-Simon Laplace imagined the Universe as clockwork, each gear turning in obedience to natural law. He conceived of a demon who, knowing the position and momentum of every particle, could predict the future with perfect accuracy. This thought experiment crystallizes classical determinism: a world where there is no freedom, only inevitability.
Keyword: Consciousness
Link ID: 30068 - Posted: 01.07.2026
Luiz Pessoa When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society. Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers. These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works. The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. 
Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking. © Aeon Media Group Ltd. 2012-2026.
Keyword: Consciousness; Learning & Memory
Link ID: 30066 - Posted: 01.03.2026
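The local-rules picture in the Pessoa excerpt above (each starling adjusting to its nearest neighbours, with no choreographer) can be made concrete with a toy consensus model. This is a deliberately simplified, hypothetical sketch for illustration only, not the framework from the essay: each “bird” on a ring repeatedly nudges its heading toward the average of its two neighbours, and flock-wide alignment emerges from purely local updates.

```python
import statistics

def step(headings, rate=0.5):
    """One update round: each bird nudges its heading toward the
    average of its two nearest neighbours (ring adjacency).
    No bird sees the whole flock; there is no central controller."""
    n = len(headings)
    new = []
    for i, h in enumerate(headings):
        left = headings[(i - 1) % n]
        right = headings[(i + 1) % n]
        target = (left + right) / 2  # local average only
        new.append(h + rate * (target - h))
    return new

# Five birds starting with scattered headings (arbitrary numbers).
headings = [0.0, 1.0, -0.5, 2.0, 0.3]
for _ in range(50):
    headings = step(headings)

# Local averaging alone aligns the whole flock near the group mean:
spread = statistics.pvariance(headings)
```

After a few dozen rounds the spread of headings collapses toward zero while the group mean is preserved: global coordination with no global rule, which is the sense of “emergence” the excerpt describes.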
Jon Hamilton Scientists are updating their view of how drugs like Adderall and Ritalin help children with attention deficit hyperactivity disorder stay on task. The latest evidence is a study of thousands of brain scans of adolescents that confirms earlier hints that stimulant drugs have little direct impact on brain networks that control attention. Instead, the drugs appear to activate networks involved in alertness and the anticipation of pleasure, scientists report in the journal Cell. "We think it's a combination of both arousal and reward, that kind of one-two punch, that really helps kids with ADHD when they take this medication," says Dr. Benjamin Kay, a pediatric neurologist at Washington University School of Medicine in St. Louis and the study's lead author. The results, along with those of smaller studies, support a "mindset shift about what stimulants are doing for people," says Peter Manza, a neuroscientist at the University of Maryland who was not involved in the research. The new research analyzed data from the Adolescent Brain Cognitive Development Study, a federally funded effort that includes brain scans of nearly 12,000 children. About 4% of these kids had ADHD when they entered the study, and nearly half of those were on a prescription stimulant. About 3.5 million children in the U.S. take an ADHD medication, and the number is rising. © 2025 npr
Keyword: ADHD; Learning & Memory
Link ID: 30059 - Posted: 12.31.2025
Lynne Peeples Near the end of his first series of chess matches against IBM’s Deep Blue computer in 1996, the Russian grandmaster Garry Kasparov lamented what he saw as an unfair disadvantage: “I’m really tired. These games took a lot of energy. But if I play a normal human match, my opponent would also be exhausted.”
Why thinking hard makes us feel tired
Whereas machine intelligence can keep running as long as it has a power supply, a human brain will become fatigued — and you don’t have to be a chess grandmaster to understand the feeling. Anyone can end up drained after a long day of work, at school or juggling the countless decisions of daily life. This mental exhaustion can sap motivation, dull focus and erode judgement. It can raise the odds of careless mistakes. Especially when combined with sleep loss or circadian disruption, cognitive fatigue can also contribute to deadly medical errors and road traffic accidents. It was partly Kasparov’s weary comments that inspired Mathias Pessiglione, a cognitive neuroscientist and research director at the Paris Brain Institute, to study the tired brain. He wanted to know: “Why is this cognitive system prone to fatigue?” Researchers and clinicians have long struggled to define, measure and treat cognitive fatigue — relying mostly on self-reports of how tired someone says they feel. Now, however, scientists from across disciplines are enlisting innovative experimental approaches and biological markers to probe the metabolic roots and consequences of cognitive fatigue. The efforts are getting a boost in attention and funding in large part because of long COVID, which afflicts roughly 6 in every 100 people after infection with the coronavirus SARS-CoV-2, says Vikram Chib, a biomedical engineer at Johns Hopkins University in Baltimore, Maryland. “The primary symptom of long COVID is fatigue,” says Chib. “I think that has opened a lot of people’s eyes.” © 2025 Springer Nature Limited
Keyword: Neuroimmunology; Attention
Link ID: 30049 - Posted: 12.13.2025
By Sara Talpos It’s been more than a decade since scientists first started publishing papers on neural organoids, the small clusters of cells grown in labs and designed to mimic various parts of the human brain. Since then, organoids have been used to study everything from bipolar disorder and Alzheimer’s disease, to tumors and parasitic infections. Because these new tools have the potential to reduce the use of animals in research — a goal of the current Trump administration — the field’s future may be more financially secure than other areas of scientific research. In September, for example, the federal government announced an $87 million investment into organoid research broadly. Matthew Owen brings a unique perspective to this emerging field. As a philosopher of mind, he focuses on trying to understand both what the mind is and how it relates to the body and the brain. He draws on the work of historical philosophers and applies some of their ideas to modern-day science. In 2020, as a visiting scholar in a neuroscience lab at McGill University, he was introduced to researchers working with organoids. Owen, who also does research in bioethics, wanted to help them address a perhaps unsettling question: Could these miniature cell clusters ever develop consciousness? Some experts believe that organoid consciousness is not likely to happen anytime in the near future, if at all. Still, certain experiments are prompting the question. In 2022, for example, researchers, including Brett Kagan of the Australian start-up Cortical Labs, published a paper explaining how they had taught their lab-grown brain cells to play a ping-pong-like video game. (Because the cells were placed in a single layer, the structures were not technically organoids, though they are expected to have similar capabilities.) 
In the process, the authors wrote, the tiny cell clusters displayed “sentience.” Undark recently spoke with Owen about this particular experiment and about his own writing on organoids.
Keyword: Consciousness; Development of the Brain
Link ID: 30048 - Posted: 12.13.2025
By Claudia López Lloreda A new commentary calls into question a 2024 paper that described a universal pattern of cortical brain oscillations. But that team has provided a more expansive analysis in response and stands by its original conclusions. Both articles were published today in “Matters Arising” in Nature Neuroscience. Ultimately, the back-and-forth suggests that a frequency “motif” may exist, but it may not be as general as the original study proposed, says Aitor Morales-Gregorio, a postdoctoral researcher at Charles University, who was not involved with any of the work. “The [2024] conclusions are way too optimistic about how general and how universal this principle might be.” The 2024 study identified a brain-wave motif in 14 cortical areas in macaques: Alpha and beta rhythms predominated in the deeper layers, whereas gamma bands appeared in the more superficial layers. Because this motif also showed up in marmosets and humans, the researchers speculated that it may be a universal mechanism for cortical computation in primates. “Results typically come with a level of variability, of noise, of uncertainty,” says 2024 study investigator Diego Mendoza-Halliday, assistant professor of neuroscience at the University of Pittsburgh. But this pattern “was just there the whole time, at all times, in many, many of the recordings.” The team leveraged the findings to create an algorithm that detects Layer 4 of the cortex. But the pattern is “by no means universal,” according to the new commentary, which found the motif in about 60 percent of the recordings in an independent monkey dataset. Further, the algorithm trained to identify Layer 4 of the cortex is unreliable, the commentary shows. © 2025 Simons Foundation
Keyword: Attention
Link ID: 30044 - Posted: 12.13.2025
Helen Pearson In some parts of the world, record numbers of people are being diagnosed with attention deficit hyperactivity disorder (ADHD). In the United States, for example, government researchers last year reported that more than 11% of children had received an ADHD diagnosis at some point in their lives1 — a sharp increase from 2003, when around 8% of children had (see ‘ADHD among US boys and girls’). But now, top US health officials argue that diagnoses have spiralled out of control. In May, the Make America Healthy Again Commission — led by US health secretary Robert F. Kennedy Jr — said ADHD was part of a “crisis of overdiagnosis and overtreatment” and suggested that ADHD medications did not help children in the long term. One thing that’s clear is that several factors — including improved detection and greater awareness of ADHD — are causing people with symptoms to receive a diagnosis and treatment, whereas they wouldn’t have years earlier. Clinicians say this is especially true for women and girls, whose pattern of symptoms was often missed in the past. Although some specialists are concerned about the risks of overdiagnosis, many are more worried that too many people go undiagnosed and untreated. At the same time, the rise in awareness and diagnoses of ADHD has fuelled a public debate about how it should be viewed and how best to provide support, including when medication is required. The emergence of the neurodiversity movement is challenging the view of ADHD as a disorder that should be ‘treated’, and instead proposes that it’s a difference that should be better understood and supported — with more focus on adapting schools and workplaces, for instance. “I do have a big problem with ‘disorder’,” says Jeff Karp, a biomedical engineer at Brigham and Women’s Hospital in Boston, Massachusetts, who has ADHD. “It’s the school system that’s disordered. 
It’s not the kids.” But many clinicians and people with ADHD argue that it is associated with difficulties — ranging from academic struggles to an increased chance of injuries and substance misuse — that justify its label as a medical condition, and say that medication is an important and effective part of therapy for many people. © 2025 Springer Nature Limited
Keyword: ADHD
Link ID: 30034 - Posted: 11.29.2025
Mariana Lenharo The obesity drug tirzepatide, sold as Mounjaro or Zepbound, can suppress patterns of brain activity associated with food cravings, a study suggests. Researchers measured the changing electrical signals in the brain of a person with severe obesity who had experienced persistent ‘food noise’ — intrusive, compulsive thoughts about eating — shortly after the individual began taking the medication. The study is the first to use electrodes to directly measure how blockbuster obesity drugs that mimic the hormone GLP-1 affect brain activity in people, and to hint at how they curb extreme food cravings. “It’s a great strategy to try and find a neural signature of food noise, and then try to understand how drugs can manipulate it,” says Amber Alhadeff, a neuroscientist at the Monell Chemical Senses Center in Philadelphia, Pennsylvania. The findings were published today in Nature Medicine1. Casey Halpern, a neurosurgeon-scientist at the University of Pennsylvania in Philadelphia, and his colleagues did not set out to investigate the effects of obesity drugs on the brain. The team’s goal was to test whether a type of deep brain stimulation — a therapy that involves delivering a weak electrical current directly into the brain — can help to reduce compulsive eating in people with obesity for whom treatments such as bariatric surgery haven’t worked. The scientists set up a study in which participants had an electrode implanted into their nucleus accumbens, a region of the brain that is involved in feelings of reward. It also expresses the GLP-1 receptor, notes Christian Hölscher, a neuroscientist at the Henan Academy of Innovations in Medical Science in Zhengzhou, China, “so we know that GLP-1 plays a role in modulating reward here”. This type of electrode, which can both record electrical activity and deliver an electrical current when needed, is already used in people to treat some forms of epilepsy. © 2025 Springer Nature Limited
Keyword: Obesity; Attention
Link ID: 30016 - Posted: 11.19.2025
By Nora Bradford Here are three words: pine, crab, sauce. There’s a fourth word that combines with each of the others to create another common word. What is it? When the answer finally comes to you, it’ll likely feel instantaneous. You might even say “Aha!” This kind of sudden realization is known as insight, and a research team recently uncovered how the brain produces it, which suggests why insightful ideas tend to stick in our memory. Maxi Becker, a cognitive neuroscientist at Duke University, first got interested in insight after reading the landmark 1962 book The Structure of Scientific Revolutions by the historian and philosopher of science Thomas Kuhn. “He describes how some ideas are so powerful that they can completely shift the way an entire field thinks,” she said. “That got me wondering: How does the brain come up with those kinds of ideas? How can a single thought change how we see the world?” Such moments of insight are written across history. According to the Roman architect and engineer Vitruvius, in the third century BCE the Greek mathematician Archimedes suddenly exclaimed “Eureka!” after he slid into a bathtub and saw the water level rise by an amount equal to his submerged volume (although this tale may be apocryphal). In the 17th century, according to lore, Sir Isaac Newton had a breakthrough in understanding gravity after an apple fell on his head. In the early 1900s, Einstein came to a sudden realization that “if a man falls freely, he would not feel his weight,” which led him to his theory of relativity, as he later described in a lecture. Insights are not limited to geniuses: We have these cognitive experiences all the time when solving riddles or dealing with social or intellectual problems. 
They are distinct from analytical problem-solving, such as the process of doing formulaic algebra, in which you arrive at a solution slowly and gradually as if you’re getting warmer. Instead, insights often follow periods of confusion. You never feel as if you’re getting warmer; rather, you go from cold to hot, seemingly in an instant. Or, as the neuropsychologist Donald Hebb, known for his work building neurobiological models of learning, wrote in the 1940s, sometimes “learning occurs as a single jump, an all-or-none affair.” © 2025 Simons Foundation
Keyword: Attention; Learning & Memory
Link ID: 30004 - Posted: 11.08.2025
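The three-word puzzle that opens the Bradford excerpt (pine, crab, sauce) is the kind of item used in remote-associates tasks, and unlike the “Aha!” it produces in people, it can be solved by plain analytical search. The sketch below is hypothetical: the tiny word list and the function names are invented stand-ins for illustration; a real solver would check candidates against a full dictionary.

```python
# Tiny stand-in vocabulary of known compound words (a real solver
# would load a full dictionary instead).
VOCAB = {"pineapple", "crabapple", "applesauce", "pinecone", "saucepan"}

def solves(candidate, cues=("pine", "crab", "sauce")):
    """True if the candidate forms a known compound word with every
    cue, attached on either side (e.g. pine+apple, apple+sauce)."""
    return all(cue + candidate in VOCAB or candidate + cue in VOCAB
               for cue in cues)

# Brute-force search over a few candidate fourth words.
candidates = ["cone", "pan", "apple"]
answers = [word for word in candidates if solves(word)]
```

The brute-force loop reaches the answer by exhaustive checking, the “getting warmer” style of problem-solving the excerpt contrasts with insight, where the solution instead arrives all at once.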
Ian Sample Science editor It’s never a great look. The morning meeting is in full swing but thanks to a late night out your brain switches off at the precise moment a question comes your way. Such momentary lapses in attention are a common problem for the sleep deprived, but what happens in the brain in these spells of mental shutdown has proved hard to pin down. Now scientists have shed light on the process and found there is more to zoning out than meets the eye. The brief loss of focus coincides with a wave of fluid flowing out of the brain, which returns once attention recovers. “The moment somebody’s attention fails is the moment this wave of fluid starts to pulse,” said Dr Laura Lewis, a senior author on the study at MIT in Boston. “It’s not just that your neurons aren’t paying attention to the world, there’s this big change in fluid in the brain at the same time.” Lewis and her colleague Dr Zinong Yang investigated the sleep-deprived brain to understand the kinds of attention failures that lead drowsy drivers to crash and tired animals to become a predator’s lunch. In the study, 26 volunteers took turns to wear an EEG cap while lying in an fMRI scanner. This enabled the scientists to monitor the brain’s electrical activity and physiological changes during tests in which people had to respond as quickly as possible to hearing a tone or seeing crosshairs on a screen turn into a square. Each volunteer was scanned after a restful night’s sleep at home and after a night of total sleep deprivation supervised by scientists at the laboratory. Unsurprisingly, people performed far worse when sleep deprived, responding more slowly or not at all. © 2025 Guardian News & Media Limited
Keyword: Sleep; Attention
Link ID: 29993 - Posted: 11.01.2025
Imma Perfetto
Anyone who has ever struggled through the day following a poor night’s sleep has had to wrench their attention back to the task at hand after their mind drifted off unexpectedly. Now, researchers have pinpointed exactly what causes these momentary failures of attention. The new study in Nature Neuroscience found that the brains of sleep-deprived people initiate waves of cerebrospinal fluid (CSF), the liquid which cushions the brain, which dramatically impair attention. This process usually happens during sleep. The rhythmic flow of CSF into and out of the brain carries away protein waste which has built up over the course of the day. When this maintenance is interrupted due to lack of sleep, it seems the brain attempts to play catch-up during its waking hours. “If you don’t sleep, the CSF waves start to intrude into wakefulness where normally you wouldn’t see them,” says study senior author Laura Lewis of Massachusetts Institute of Technology’s (MIT) Institute for Medical Engineering and Science. “However, they come with an attentional trade-off, where attention fails during the moments that you have this wave of fluid flow.” “The results are suggesting that at the moment that attention fails, this fluid is actually being expelled outward away from the brain. And when attention recovers, it’s drawn back in.” © Copyright CSIRO
Keyword: Sleep; Attention
Link ID: 29992 - Posted: 11.01.2025
By Grigori Guitchounts
On a mellow spring night, I gazed at the setting desert sun in Joshua Tree National Park in California. The sun glowed a warm blood-orange and the sky shimmered pink and purple. I had just defended my Ph.D. in neuroscience, and my partner and I had flown west to celebrate and exhale. It was early March 2020, and we were hoping to quiet our minds in the desert. I was also hoping to change mine. I had been curious about psychedelics for years, but it wasn’t until I read How to Change Your Mind, Michael Pollan’s book about the new science of psychedelics, that I felt ready. The book made a compelling case that psychedelics provided a fascinating introspective experience. Still, I was nervous. I’d heard stories about bad trips and flashbacks. I knew enough neuroscience to know these were serious drugs—compounds that could temporarily dismantle how the brain makes sense of reality and potentially change it irreversibly. I also knew I was burned out. My Ph.D. had been hard in the way Ph.D.s often are: thrilling, lonely, disorienting. My advisor had left academia halfway through, and I’d spent years without much supervision, never quite sure whether I was on the right track and if I had a future in academia. But I didn’t take LSD seeking healing or clarity. I just wanted to see what the fuss was about. After years of hunkering down, I was craving a freeing experience. What followed was strange, intense, and beautiful. The wooden floorboards of our cabin turned into a bustling cityscape. The mirror in the bathroom showed my face aged beyond recognition: The natural lines in my skin became deep wrinkles, my eyes sunken, as if time had decided to give me a sneak peek of what would come. Later, absorbed with coloring pencils, I watched the marks I was making dissolve in real time, as if the paper were being erased by invisible rain. © 2025 NautilusNext Inc.
Keyword: Drug Abuse; Consciousness
Link ID: 29979 - Posted: 10.22.2025
Asif Ghazanfar
Picture someone washing their hands. The water running down the drain is a deep red. How you interpret this scene depends on its setting, and your history. If the person is in a gas station bathroom, and you just saw the latest true-crime series, these are the ablutions of a serial killer. If the person is at a kitchen sink, then perhaps they cut themselves while preparing a meal. If the person is in an art studio, you might find resonance with the struggle to get paint off your hands. If you are naive to crime story tropes, cooking or painting, you would have a different interpretation. If you are present, watching someone wash deep red off their hands into a sink, your response depends on even more variables. How we act in the world is also specific to our species; we all live in an ‘umwelt’, or self-centred world, in the words of the philosopher-biologist Jakob von Uexküll (1864-1944). It’s not as simple as just taking in all the sensory information and then making a decision. First, our particular eyes, ears, nose, tongue and skin already filter what we can see, hear, smell, taste and feel. We don’t take in everything. We don’t see ultraviolet light like a bird, we don’t hear infrasound like elephants and baleen whales do. Second, the size and shape of our bodies determine what possible actions we can take. Parkour athletes – those who run, vault, climb and jump in complex urban environments – are remarkable in their skills and daring, but sustain injuries that a cat doing the exact same thing would not. Every animal comes with a unique bag of tricks to exploit their environment; these tricks are also limitations under different conditions. Third, the world, our environment, changes. Seasons change, and what animals can eat therefore also changes. If it’s the rainy season, grass will be abundant. The amount of grass determines who is around to eat it and therefore who is around to eat the grass-eaters.
Ultimately, the challenge for each of us animals is how to act in this unstable world that we do not fully apprehend with our senses and our body’s limited degrees of freedom. There is a fourth constraint, one that isn’t typically recognised. Most of the time, our intuition tells us that what we are seeing (or hearing or feeling) is an accurate representation of what is out there, and that anyone else would see (or hear or feel) it the same way. But we all know that’s not true and yet are continually surprised by it. It is even more fundamental than that: you know that seemingly basic sensory information that we are able to take in with our eyes and ears? It’s inaccurate. How we perceive elementary colours, ‘red’ for example, always depends on the amount of light, surrounding colours and other factors. In low lighting, the deep red washing down the sink might appear black. A yellow sink will make it look more orange; a blue sink may make it look violet. © Aeon Media Group Ltd. 2012-2025.
Keyword: Vision; Attention
Link ID: 29961 - Posted: 10.08.2025
By Ellen Barry
Around the time of the pandemic, I began to notice something happening in my social circle. A close friend, then in her early 50s, got a diagnosis of attention deficit hyperactivity disorder. She described it as a profound relief, releasing her from years of self-blame — about missed deadlines and lost receipts, but also things that were deeper and more complicated, like her sensitivity to injustice. Something similar happened to a co-worker, and a cousin in his 30s, and an increasing number of people I met covering mental health. It wasn’t always A.D.H.D. For some of them, the revelation was a diagnosis of autism spectrum disorder: After years of inarticulate unease in social situations, they felt freed by the framework of neurodivergence, and embraced by the community that came along with it. Since then I’ve heard accounts from people who received midlife diagnoses of binge eating disorder, post-traumatic stress disorder, anxiety. Nearly all of them said the diagnosis provided relief. Sometimes it led to an effective treatment. But sometimes, simply identifying the problem — putting a name to it — seemed to help. Lately, it seems as if we never stop talking about the rising rates of chronic diseases, among them autism, A.D.H.D., depression, anxiety and PTSD. Health Secretary Robert F. Kennedy Jr. has pointed to these trends as evidence that Americans are “the sickest people in the world,” and has set about upending whole swaths of our public health system in search of causes, like vaccines or environmental toxins. But much of what we’re seeing is a change in diagnostic practices, as we apply medical labels to ever milder versions of disease. There are many reasons for this: The shame that once accompanied many disorders has lifted. Screening for mental health problems is now common in schools. Social media gives us the tools to diagnose ourselves.
And clinicians, in a time of mental health crisis, see an opportunity to treat illnesses early. © 2025 The New York Times Company



