Chapter 14. Attention and Higher Cognition
By Katie Engelhart The doctor told her that her husband was just a vegetable now. “And he’s always just going to be a vegetable.” Did he really say it like that? Vegetable? And, just? Well, that’s how she remembers it. In his notes, the doctor wrote that his patient’s prognosis was “Poor/Grave.” A few weeks earlier, on Oct. 4, 2024, while on a trip out of town, Aaron Williams said that his stomach hurt. Then he started vomiting and couldn’t stop, and then he started screaming. His wife, Tabitha, tried to drive him back home to Aiken, S.C. — and she was almost there, maybe 30 minutes away, when Aaron’s body stiffened and his limbs flung out and he went quiet. At the hospital, Aaron, who was 30, was found to be in cardiac arrest. Doctors performed CPR, and when it did not work, they did it again and again; Aaron’s small, lithe body — just 5-foot-8, 135 pounds — heaved under the force of it, until after five rounds of compressions his heart started beating again. Doctors inserted a breathing tube and attached it to a ventilator next to Aaron’s bed. Sitting at her comatose husband’s side, Tabitha could hear its quiet mechanical hiss. As it turned out, Aaron, who has Type 1 diabetes, had not been taking his insulin. Part of it, maybe, was hubris; he had been a diabetic since forever, and he thought he knew his body well enough to know when his glucose levels were really off-kilter. Also, he didn’t have a prescription; Aaron and Tabitha had recently moved, with five of their children, and he still hadn’t found a new family doctor who would take Medicaid. Doctors did a CT scan, an electroencephalogram (EEG) and later an M.R.I., and they saw evidence of a global anoxic brain injury and “severe cortical dysfunction.” There was cerebral swelling too: so much that his brain pushed outward against his skull, partly flattening the folds and ridges that covered its surface. When he was examined, Aaron had no blink reflex, and he didn’t respond to sound. © 2026 The New York Times Company
Keyword: Consciousness
Link ID: 30202 - Posted: 04.15.2026
By Diana Kwon The ability to conjure pictures in the mind’s eye enables us to remember the past and imagine the future. It also allows us to plan, navigate and create works of art. In a study published April 9 in Science, researchers report that imagining an object reactivates some of the same neurons involved in seeing it in the first place, providing new insight into how mental imagery is produced in the brain. Previous research had hinted that the neurons involved in perceiving and imagining images overlapped. These studies used various methods, such as asking participants to view and then imagine pictures while lying in a functional MRI scanner, to show that the same brain regions were involved in these processes. But whether the same individual neurons were involved remained an open question, says Ueli Rutishauser, a neuroscientist at Cedars-Sinai Medical Center in Los Angeles. Because measuring neuronal activity requires electrodes in the brain, Rutishauser and colleagues studied 16 adults with epilepsy who had already had electrodes temporarily implanted into their brains to identify the origin of their seizures. Participants viewed hundreds of images from five categories — faces, text, plants, animals and everyday objects — while researchers recorded activity from over 700 neurons in the ventral temporal cortex, a region involved in representing visual objects. Of those, about 450 selectively responded to individual categories. Machine learning then revealed that 80 percent of those category-responsive neurons were selective to specific visual features within the images. © Society for Science & the Public 2000–2026.
Keyword: Attention; Vision
Link ID: 30195 - Posted: 04.11.2026
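The study summarized above hinges on a cross-decoding analysis: train a category decoder on firing rates recorded during viewing, then test it on imagery trials to ask whether imagining reactivates the perceptual code. Here is a minimal sketch of that style of analysis; the toy dimensions, synthetic tuning curves and Poisson spike counts are all illustrative assumptions, not the study's actual data or pipeline.

```python
# Hedged sketch of cross-decoding between perception and imagery.
# All data below are synthetic stand-ins; the study recorded ~700 real
# neurons in the ventral temporal cortex of 16 patients with epilepsy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, n_trials, n_categories = 50, 400, 5   # assumed toy dimensions

# Synthetic tuning: each category drives its own subset of neurons harder.
tuning = rng.gamma(2.0, 1.0, size=(n_categories, n_neurons))
labels = rng.integers(0, n_categories, size=n_trials)
viewing = rng.poisson(tuning[labels])            # spike counts while viewing

# Imagery trials: same tuning, weaker and noisier (an assumed simplification).
imagery_labels = rng.integers(0, n_categories, size=100)
imagery = rng.poisson(0.6 * tuning[imagery_labels] + 0.5)

# Train the decoder on viewing trials, then test it on imagery trials.
decoder = LogisticRegression(max_iter=1000).fit(viewing, labels)
print(f"viewing accuracy: {decoder.score(viewing, labels):.2f}")
print(f"imagery accuracy (cross-decoding): "
      f"{decoder.score(imagery, imagery_labels):.2f}")  # chance is 0.20
```

If imagery accuracy lands well above chance, the same neuronal code is doing double duty for seeing and imagining, which is the logic behind the finding reported above.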
By Andrew Jacobs As researchers have sought to demonstrate the therapeutic benefits of mind-altering drugs like LSD and psilocybin “magic mushrooms,” many have struggled to explain exactly how these compounds work on the human brain. One way scientists have tried to show what these compounds do is by using functional M.R.I. machines to peer into the brains of research participants in the midst of a psychedelic experience. This has produced evocative color images that show a maelstrom of activity as the drugs disrupt patterns of connectivity between brain regions and networks. But the interpretations of those scans, published in scientific journals, have been inconsistent and even contradictory. Over the past five years, an international consortium of researchers has tried to make sense of the divergent results by bringing together the data from nearly a dozen brain imaging studies in five countries that have been published since 2012. The studies included more than 500 scans of 267 research participants on five substances: LSD, psilocybin, mescaline, DMT and ayahuasca. Their findings, published on Monday in the journal Nature Medicine, suggest that psychedelics prompt a welter of activity between regions of the brain that normally operate somewhat independently: the areas that process sensory information like vision, hearing and touch, and those involved with abstract thinking and self-reflection. The research suggests that psychedelic compounds temporarily reduce the separation between how we think and how we perceive, which could explain the neurological mechanics behind the sensory distortions, mystical experiences and ego dissolution that patients report during sessions. © 2026 The New York Times Company
Keyword: Drug Abuse; Consciousness
Link ID: 30193 - Posted: 04.08.2026
By Mac Shine The brain is arguably the most complex object in the known universe, and neuroscience—the discipline charged with understanding it—has grown to match that complexity. Today, the field spans everything from the molecular choreography of a single synapse to the large-scale network dynamics that give rise to conscious experience. It is simultaneously one of the most exciting and most disorienting fields to work in. The conceptual map that connects our different subfields hasn’t been written yet. But a new study published in Aperture Neuro in February takes a remarkable step toward drawing that map. Led by Mario Senden, a computational neuroscientist at Maastricht University, the work applies state-of-the-art text embedding and community detection algorithms to nearly half a million neuroscience abstracts published between 1999 and 2023. It carves the literature into 175 distinct research clusters, characterizing each one along dimensions ranging from spatial scale to theoretical orientation. What emerges is a portrait of a discipline that is, in many ways, healthier than it might appear from the inside. Despite its staggering diversity—clusters range from AMPA receptor trafficking to the neural underpinnings of consciousness—the field is remarkably well integrated; the vast majority of research communities actively draw on and feed into one another. The cluster of resting-state functional MRI dynamics and the molecular mechanisms of hippocampal plasticity emerge as some of the field’s great intellectual hubs, providing conceptual and methodological scaffolding for dozens of downstream communities. But the map also has its fault lines. Microscale and macroscale research communities operate in two largely separate epistemic worlds, divided by spatial scale and by the training trajectories that produce different kinds of neuroscientists. Temporal scales are integrated only pairwise, never holistically. And perhaps most provocatively: Not a single cluster in the entire 175-cluster solution is organized around a theoretical framework. The Bayesian brain, the free energy principle and predictive coding are common targets of empirical science, yet none of them anchor their own research community. Theory, it seems, is something neuroscience does around the edges of the phenomena it is really interested in. © 2026 Simons Foundation
Keyword: Attention
Link ID: 30192 - Posted: 04.08.2026
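A hedged sketch of the kind of pipeline the article above describes: embed each abstract as a dense vector, link strongly similar papers in a graph, and let community detection carve out research clusters. The embedding model, the similarity threshold and the three toy abstracts are stand-in assumptions; the actual study processed nearly half a million abstracts and arrived at 175 clusters.

```python
# Minimal sketch: text embedding + community detection over abstracts.
import networkx as nx
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "AMPA receptor trafficking regulates hippocampal synaptic plasticity...",
    "Resting-state fMRI reveals dynamic functional connectivity networks...",
    "Predictive coding accounts of perception in visual cortex...",
    # ...the study used ~500,000 abstracts published 1999-2023
]

# 1. Embed each abstract as a vector (model choice is an assumption).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(abstracts, normalize_embeddings=True)

# 2. Build a similarity graph: nodes are papers, edges join similar pairs.
sim = cosine_similarity(embeddings)
graph = nx.Graph()
graph.add_nodes_from(range(len(abstracts)))
threshold = 0.5  # assumed cutoff for "related" papers
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        if sim[i, j] > threshold:
            graph.add_edge(i, j, weight=float(sim[i, j]))

# 3. Community detection partitions the graph into research clusters.
clusters = nx.community.louvain_communities(graph, weight="weight", seed=0)
for k, members in enumerate(clusters):
    print(f"cluster {k}: papers {sorted(members)}")
```

Characterizing each cluster (spatial scale, theoretical orientation) would then be a separate labeling step over the papers inside it.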
Marielle Segarra When neurosurgeon and journalist Dr. Sanjay Gupta set out to write a book about pain, it wasn't because he felt like he had all the answers. It was because he was still so often mystified by it. "Most of my patients come to me for pain. Head pain, back pain, neck pain, whatever it might be," he says. "If that's what the majority of your professional life is, you should understand it as best you can." His 2025 book, It Doesn't Have to Hurt: Your Smart Guide to a Pain-Free Life, gathers the latest developments in pain science, based on his own experience with patients and conversations with researchers and doctors. What he found may challenge your own understanding of pain and even give you the tools to help you feel better. There's evidence, for example, that just learning about pain and how it works "seems to be pain relieving" for those with chronic pain conditions, he says. Gupta, who also serves as the chief medical correspondent for CNN, explains what we still don't know about pain and shares a few effective new treatments. This interview has been edited for length and clarity. In your book, you say that one of the most significant developments emerging in pain treatment is the fact that the brain is at the center of any pain experience. Can you tell us more about why that matters? What I think has become clear — and I'm not the first person to say this — is the idea that if the brain doesn't decide you have pain, then you don't have pain. © 2026 npr
Keyword: Pain & Touch; Attention
Link ID: 30189 - Posted: 04.04.2026
By Diana Kwon Human minds often wander. Whether we’re busy at work, doing chores or exercising, our thoughts frequently shift away from the task at hand. These spontaneous thoughts sometimes turn toward sensations in the body, such as our heartbeat or breath, and that could affect our immediate emotional state and long-term mental health, researchers report March 25 in Proceedings of the National Academy of Sciences. Many studies focus on thinking about memories, events and other people, what scientists consider the cognitive aspects of mind wandering, says Micah Allen, a neuroscientist at Aarhus University in Denmark. This research suggests that mind wandering plays an important role in planning, learning, creativity and other mental processes. It has also been linked to negative emotions, and some forms of it, such as obsessive rumination on past mistakes, may contribute to depression, attention-deficit/hyperactivity disorder and other mental illnesses. But how the mind might drift to bodily sensations, what some researchers call “body wandering,” and its effects have largely been overlooked, Allen says. He and colleagues had 536 people lie still in a magnetic resonance imaging scanner and then complete a questionnaire about what was on their minds during that time. In addition to the typical content of daydreams, such as memories, plans or social interactions, participants reported paying attention to sensations in their body, such as their breathing, heartbeats and bladder. The team also found evidence of this in the MRI scans: Body wandering appeared to have a distinct brain signature from that of “cognitive” mind wandering. © Society for Science & the Public 2000–2026.
Keyword: Stress; Attention
Link ID: 30188 - Posted: 04.04.2026
Anne-Laure Le Cunff It’s Monday morning at the lab and I have a team presentation due in two hours. I open my laptop intending to tweak a figure, then notice a paper I’d bookmarked. That paper cites another, which leads me to one of the authors’ new preprints. Soon I find myself with 27 tabs open, three half-formed ideas scribbled in my notebook, and a new app downloaded to prototype something that has nothing to do with my presentation. I know I should stop and I can feel the time pressure building, but the pull to wander is too strong – almost physical. Just five more minutes, I promise to myself, and I’ll return my attention to the ‘real’ work. Only when my anxiety becomes impossible to ignore do I force myself to come back to the slides. This little dance isn’t unusual for me and the millions of other people who can spend hours in deep, almost joyful focus when a question grabs our attention, but who can also derail ourselves completely when we hear about a shiny new idea. For a long time, I thought this was a personal failure of discipline, a quirk I needed to manage better. It’s only when I started working at the ADHD Research Lab at King’s College London that I came to believe it might be something else entirely. I’m a cognitive neuroscientist using behavioural experiments, eye-tracking and EEG to examine how attention is drawn toward some signals and away from others. In retrospect, the irony isn’t lost on me that I spent years studying attention without applying the same analytic lens to myself. To understand why I’d dismissed my own experience for so long, it helps to look at how ADHD is officially defined. ADHD, or attention deficit hyperactivity disorder, is characterised in the current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR) as ‘a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development.’ The emphasis is on impairment: something is not working as it should. © Aeon Media Group Ltd. 2012-2026.
Keyword: ADHD
Link ID: 30184 - Posted: 04.01.2026
By Jennie Erin Smith Can a “friendly” rivalry between two artificial intelligence (AI) agents help reveal how the brain supports consciousness? That’s the suggestion coma researcher Martin Monti and his colleagues at the University of California, Los Angeles make in a paper published today in Nature Neuroscience. One of their two AI models generated realistic imitations of electrical patterns seen in conscious and unconscious brain states, from wakefulness to deep comas. Its counterpart had to identify these states. The results largely support established ideas about how the brain behaves during comas, vegetative states, and other disorders of consciousness. But they also suggest roles for a brain structure and a pattern of cell signaling not previously known to be involved in such disorders—predictions the scientists were able to test. Monti spoke with Science about how the paper’s two models, which he calls the “black box” and the “glass brain,” could reveal new ways to restore consciousness after brain injury. This interview has been edited for clarity and length. Q: You built two AI models, with one designed to interrogate the other. Can you explain how they talk to each other? A: So here’s the game: We have two friends. One—let’s call it the black box—knows how to tell consciousness from unconsciousness. It’s been trained on 680,000 snippets of EEG [electroencephalography] data from animals and people in different states of consciousness. The other—think of it as a glass brain—is a real, biologically plausible simulation of the human brain. We tell it, “Your job is to move all of your knobs, every single parameter you’ve got, to trick the other guy—the black box—to think that you’re creating a real EEG of a conscious or unconscious state.” Now, we ask the glass brain, “Which brain parameters made the box think the EEG was unconscious?” © 2026 American Association for the Advancement of Science.
Keyword: Consciousness; Robotics
Link ID: 30176 - Posted: 03.25.2026
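The "two friends" game Monti describes above can be caricatured in a few lines: a frozen scorer (the black box) rates EEG-like snippets, while a search loop turns the knobs of a parameterized generator (the glass brain) until the scorer is convinced. Both functions below are invented stand-ins for illustration; the real black box was trained on 680,000 EEG snippets, and the real glass brain is a biologically plausible brain simulation, not a two-knob sine generator.

```python
# Toy sketch of the black-box / glass-brain game described in the interview.
import numpy as np

rng = np.random.default_rng(0)

def glass_brain(params, n_samples=256):
    """Stand-in simulator: slow-wave amplitude and noise level are the 'knobs'."""
    slow_amp, noise = params
    t = np.linspace(0, 2 * np.pi, n_samples)
    return slow_amp * np.sin(3 * t) + noise * rng.standard_normal(n_samples)

def black_box(snippet):
    """Stand-in classifier returning P(unconscious). As a crude EEG heuristic,
    heavy low-frequency power reads as 'unconscious'."""
    spectrum = np.abs(np.fft.rfft(snippet))
    low = spectrum[1:5].sum()
    return low / (low + spectrum[5:].sum() + 1e-9)

# Random search: move the knobs until the box is convinced it sees unconsciousness.
best_params, best_score = None, -np.inf
for _ in range(2000):
    candidate = rng.uniform([0.0, 0.1], [3.0, 2.0])  # (slow_amp, noise) ranges
    score = black_box(glass_brain(candidate))
    if score > best_score:
        best_params, best_score = candidate, score

print(f"knobs that read as unconscious: slow_amp={best_params[0]:.2f}, "
      f"noise={best_params[1]:.2f} (P={best_score:.2f})")
```

Reading off which parameter settings fooled the scorer is the step that, in the actual study, generated testable predictions about brain structures involved in unconsciousness.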
Nate Scharping Whether or not we have free will is a question philosophers have been debating for millennia. In the early 1980s, there was a brief moment when it appeared the debate may finally have been settled. The potential solution came not from philosophy, but neuroscience. The answer, somewhat depressingly, was that free will didn’t exist. Experiments carried out by the neuroscientist Benjamin Libet appeared to show decisions being made in the brain before people were even aware of them. It was as if science had finally revealed the strings of the puppet master controlling our thoughts and actions. To even casual observers of the history of inquiries into free will, this pronouncement felt premature. Thankfully, they were right. Scientists today are much more sceptical not only of the idea that free will doesn’t exist, but also of the notion that brain scans will ever definitively prove or disprove its existence. But why? Ultimately, the question of free will may be best left to philosophers, but that doesn’t mean it’s a topic neuroscientists should ignore. Experiments into how the human brain makes decisions have led to important insights into neurology and psychology, and have expanded our understanding of the brain’s inner workings. Those experiments include the ones Libet conducted in the 1980s, which, although viewed in a more critical light now, paved the way for decades of innovative research. The experiments were simple. Libet attached volunteers to an electroencephalogram (EEG) machine to monitor their brain activity, then placed a button in front of them and asked them to decide when they wanted to press it. While they were deciding, they had to watch a timer, consisting of a dot moving around the inside of a circle (like a second hand on a clock). Each volunteer had to note the dot’s position when they decided to press the button. With the EEGs, Libet was looking for something called a readiness potential, a build-up of activity in the brain’s motor cortex that precedes a muscle movement. He was hoping to see how a volunteer’s awareness of their decision to move (their noting of the dot’s position) lined up with their readiness potential. © Our Media 2026
Keyword: Consciousness; Attention
Link ID: 30171 - Posted: 03.21.2026
David Adam When neuroscientists gather in the Spanish city of Seville in May for the annual Dopamine Society meeting, one discussion could be unusually lively. Session 31 will feature a debate between researchers who fundamentally disagree about the role dopamine has in the brain. Dopamine is one of the most extensively studied neurotransmitters, chemicals that convey signals from cell to cell. It’s the one with the highest profile outside neuroscience: often known as the ‘pleasure chemical’, it’s depicted as the hit of reward that people get from recreational drugs or scrolling through social media. That’s a gross simplification of what dopamine does; on that, researchers agree. But beyond that, where once there was a simple model that explained how dopamine works in the brain, now there are challenges that seek to amend the theory — or even to overturn it. This could have implications not only for basic neuroscience, but also for clinicians trying to explain and treat conditions such as attention deficit hyperactivity disorder (ADHD) and addiction. If the model is wrong or needs modification, then so might some of the assumptions about what drives these disorders and the best way to treat them. The classic idea, known as the reward prediction error (RPE) hypothesis, is that bursts of dopamine in the brain link stimuli to rewards, helping to reinforce associations that fulfil a need for an animal or a person. The model has dominated and guided research in the field for decades, offering a mathematical framework to interpret data from animal experiments, and it does a good job of explaining behaviour. This was a valuable rarity for researchers struggling to overlay simple theories onto the intense complexity of the brain. “Dopamine was the one field of neuroscience where we had a computational model that explained what the signal was and what it was computing,” says Mark Humphries, a neuroscientist at the University of Nottingham, UK. People in the field knew that some of the assumptions involved in the RPE model were simplistic. But as a working understanding of part of the brain, it was seen as a major step forwards. © 2026 Springer Nature Limited
Keyword: Learning & Memory; Drug Abuse
Link ID: 30166 - Posted: 03.19.2026
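The computational model the article above refers to, the reward prediction error account, can be stated in one line of temporal-difference learning: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t), with the dopamine burst modeled as delta. The sketch below (task layout, learning rate and trial count are all illustrative choices) reproduces the model's best-known signature: early in training the prediction error fires at the reward, and after learning it transfers to the predictive cue.

```python
# Minimal TD(0) sketch of the reward prediction error (RPE) model.
import numpy as np

T, reward_t = 8, 6               # time steps per trial; reward arrives at t=6
alpha, gamma = 0.3, 1.0          # learning rate, discount factor (assumed)
V = np.zeros(T + 1)              # value of each time step; V[T]=0 is terminal

def run_trial():
    """One cue->reward trial; returns the cue-onset error and per-step errors."""
    # Cue onset is unpredictable, so pre-cue expectation is 0 and the error
    # at the cue is simply the learned value of the cue state.
    cue_delta = gamma * V[0]
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if t == reward_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
        V[t] += alpha * delta                 # update the value estimate
        deltas[t] = delta
    return cue_delta, deltas

first_cue, first = run_trial()               # trial 1
for _ in range(300):                         # trials 2..301
    last_cue, last = run_trial()

print(f"trial 1:   cue error {first_cue:.2f}, reward error {first[reward_t]:.2f}")
print(f"trial 301: cue error {last_cue:.2f}, reward error {last[reward_t]:.2f}")
```

After a few hundred trials the printout shows the error vanishing at the now-predicted reward and appearing at the cue, which is exactly the pattern the RPE hypothesis was built to explain, and the baseline against which the newer challenges described above are being argued.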
By Brianne Kane, Fonda Mwangi, Alex Sugiura, Kylie Murphy, Jeffery DelViscio & Kendra Pierre-Louis In this episode of Science Quickly, journalist Michael Pollan joins Scientific American’s Bri Kane to unpack why consciousness is so hard to define in a discussion that explores what brain science, artificial intelligence experiments and even psychedelics might reveal about how awareness works. Bri Kane: Just to get us going on something really easy I wanted to ask you, Michael Pollan: Are you conscious, do you know if I’m conscious, and are you 100 percent certain that this microphone is not conscious? Michael Pollan: I can’t be sure you’re conscious. I have to infer that from the evidence: that you’re the same species as me, and our species can be conscious, and we have something called philosophy of mind, which is an imaginative faculty that allows us to imagine what other people are thinking. I know I’m conscious, I think. That’s actually the thing we know with the greatest certainty. I mean, [René] Descartes told us that 400 years ago: The only thing we can be sure of is the fact that we exist, and we are conscious. Everything else is an inference. So I’m inferring you’re conscious, and I’m gonna operate on that basis, if it’s okay. And then the microphone, the microphone hasn’t shown me any evidence of consciousness. Kane: So I mean, like you’re saying, there’s only so much evidence to point to for consciousness; some of it is kind of just your gut understanding. And our February cover issue this year was about these 29 different theories of consciousness, which, as you’ve covered, is further evidence that science is really floundering in its search for solid ground on: What is consciousness, and how can we provide evidence to tackle this subject with science? But your work seems to really discuss when science and philosophy start rubbing up against each other, which I think is why you get into some really interesting questions in this book. So I wanted to ask you: What theory, out of those 29, do you find yourself leaning towards that seems like the most probable understanding of consciousness? © 2025 SCIENTIFIC AMERICAN,
Keyword: Consciousness
Link ID: 30153 - Posted: 03.07.2026
By Nick Hilden Since the start of the so-called psychedelic renaissance some 25 years ago, writers have tackled the subject from the vantages of science, politics, mental health, productivity, creativity, spirituality, how-to, and even cooking. With his new book On Drugs, Justin Smith-Ruiu explores these powerful drugs through a philosophical lens, analyzing their effects and implications via thinkers spanning Foucault to Freud, Spinoza to Sartre, and scores of others who over the past 2,000 years have sought to explain the mysteries of the human experience. While authors have applied philosophy to psychedelics before, they have typically done so through mental-health or otherwise medicinal frameworks, while Smith-Ruiu is more interested in treating psychedelics as philosophical objects worthy of examination in and of themselves. At the same time, he follows the drugs down the rabbit hole, sizing up what psychedelics taught him on a personal level, and delving into questions surrounding the scientific prohibition of auto-experimentation, whether the hallucinations conjured by psychedelics are real or imagined, and what they have to teach us about the nature of reality. A professor of the history and philosophy of science, Smith-Ruiu has previously applied philosophical analysis to some of the most pressing issues of our day. In The Internet Is Not What You Think It Is, he explored how the internet arose from some of our deepest philosophical yearnings. In Irrationality, he asserted that human irrationality is fundamental to the human experience rather than a contextual social aberration. And in Nature, Human Nature, and Human Difference, he argued that our contemporary conceptions of race are not innate but rather emerged from the modern scientific efforts to classify and systematize. (He also happens to have an asteroid named after him—it doesn’t get much “higher” than that.) © 2026 NautilusNext Inc.,
Keyword: Drug Abuse; Consciousness
Link ID: 30146 - Posted: 03.04.2026
By Tim Vernimmen Steve Fleming’s research is definitely “meta” — a Greek prefix indicating self-reference. He’s a cognitive neuroscientist at University College London who studies metacognition: what we know about what we know, think about what we think, believe about what we believe. While this may seem quite philosophical and well-nigh impossible to study in the lab, he has made it his mission to measure and model it and understand where in the brain it manifests itself. Fleming explored these issues in his 2021 book, Know Thyself: The Science of Self-Awareness. In the 2024 Annual Review of Psychology, he further examined the link between metacognition and confidence: our sense of whether we have made the right decision, whether we are successful at the tasks presented to us, and whether our worldview is likely correct. Fleming’s work is casting new light on why some people seem chronically underconfident even when they’re doing just fine, and why others are entirely convinced they’re right about everything, even when there is overwhelming evidence to the contrary. In the following discussion, which has been edited for length and clarity, Fleming shared his thoughts on some of the questions that inevitably come up when our brains assess their own activity. Metacognition is quite an uncommon research topic. How did you end up studying this? I studied experimental psychology in Oxford, where I had the opportunity to work with psychologist Paul Azzopardi. He studies blindsight, a condition where, due to certain types of brain damage, people are subjectively blind but still able to perform various tasks using visual information. This presents a fascinating dissociation between conscious experience and actual functionality.
Keyword: Consciousness; Attention
Link ID: 30142 - Posted: 02.28.2026
By Katherine Ellison For most of her adult life, Katherine Sanders had what she calls a typical career for someone with attention deficit hyperactivity disorder. After finishing her doctoral thesis on Bronze Age Syrian mythology, she bounced between unrelated jobs. She tutored university students. She sewed Victorian corsets for bridal outfits. She designed stained glass and sold picture frames. She enjoyed the work, but none of it felt like a calling. Life got harder when she found herself juggling part-time work with caring for a spirited five-year-old. Sanders, who lives in Edinburgh, Scotland, burned meals on the stove and forgot to pick up her daughter from school. Finally, she decided to work with a coach to help her cope with her ADHD, a bedeviling condition whose hallmark symptoms are distraction, forgetfulness, restlessness and impulsivity. She began by enrolling in a digital course called Your ADHD Brain is A-OK. Like most ADHD coaches, Tracy Otsuka, the course producer, has herself been diagnosed with the disorder. Otsuka, based in Northern California, says she focuses on helping her clients shed shame as a prelude to finding their purpose and living more fulfilled lives. Participants in the self-paced course watch 26 videos and fill out worksheets designed to identify their values and strengths. Sanders says working with Otsuka led to a lightbulb moment for her. “This woman is very smart, she’s very savvy,” she says. “And she still did stuff like forget to pick her kid up from school.… She still does the same things as me.” The experience made Sanders realize that she, too, was smart but had a specific challenge she needed to learn to manage.
Keyword: ADHD
Link ID: 30134 - Posted: 02.21.2026
Carlo Iacono Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span. This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance. I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong. The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers algorithmically curated rage-bait and also the complete works of Shakespeare is itself the problem rather than how we decide to use it. © Aeon Media Group Ltd. 2012-2026.
Keyword: Attention; Language
Link ID: 30132 - Posted: 02.21.2026
By Alexa Robles-Gil Having an imaginary friend, playing house or daydreaming about the future were long considered uniquely human abilities. Now, scientists have conducted the first study indicating that apes have the ability to play pretend as well. The findings, published Thursday in the journal Science, suggest that imagination is within the cognitive potential of an ape and can possibly be traced back to our common evolutionary ancestors. “This is one of those things that we assume is distinct about our species,” said Christopher Krupenye, a cognitive scientist at Johns Hopkins University and an author of the study. “This kind of finding really shows us that there’s much more richness to these animals’ minds than people give them credit for,” he said. Researchers knew that apes were capable of certain kinds of imagination. If an ape watches someone hide food in a cup, it can imagine that the food is there despite not seeing it. Because that perception is the reality — the food is actually there — it requires the ape to sustain only one view of the world, the one that it knows to be true. “This kind of work goes beyond it,” Dr. Krupenye said. “Because it suggests that they can, at the same time, consider multiple views of the world and really distinguish what’s real from what’s imaginary.” Bonobos, an endangered species found only in the Democratic Republic of Congo, are difficult to study in the wild. For this research, Dr. Krupenye and Amalia Bastos, a cognitive scientist at the University of St. Andrews, relied on an organization known as the Ape Initiative to study Kanzi, a male bonobo famous for demonstrating some understanding of spoken English. (Kanzi was an enculturated ape born in captivity; he died last year at age 44.) © 2026 The New York Times Company
Keyword: Consciousness; Evolution
Link ID: 30112 - Posted: 02.07.2026
Elizabeth Quill Think about your breakfast this morning. Can you imagine the pattern on your coffee mug? The sheen of the jam on your half-eaten toast? Most of us can call up such pictures in our minds. We can visualize the past and summon images of the future. But for an estimated 4% of people, this mental imagery is weak or absent. When researchers ask them to imagine something familiar, they might have a concept of what it is, and words and associations might come to mind, but they describe their mind’s eye as dark or even blank. Systems neuroscientist Mac Shine at the University of Sydney, Australia, first realized that his mental experience differed in this way in 2013. He and his colleagues were trying to understand how certain types of hallucination come about, and were discussing the vividness of mental imagery. “When I close my eyes, there’s absolutely nothing there,” Shine recalls telling his colleagues. They immediately asked him what he was talking about. “Whoa. What’s going on?” Shine thought. Neither he nor his colleagues had realized how much variation there is in the experiences people have when they close their eyes. This moment of revelation is common to many people who don’t form mental images. They report that they might never have thought about this aspect of their inner life if not for a chance conversation, a high-school psychology class or an article they stumbled across (see ‘How do you imagine?’). Although scientists have known for more than a century that mental imagery varies between people, the topic received a surge of attention when, a decade ago, an influential paper coined the term aphantasia to describe the experience of people with no mental imagery. © 2026 Springer Nature Limited
Keyword: Attention; Consciousness
Link ID: 30107 - Posted: 02.04.2026
By Amy X. Wang Alice, fumbling through Wonderland, comes across a mushroom. One bite of it shrinks her down in size. Chowing on the other side makes her swell up, huge, taller than the treetops. Urgently, Alice sets to work “nibbling first at one and then at the other, and growing sometimes taller and sometimes shorter,” until finally she succeeds in “bringing herself down to her usual height” — whereupon everything feels “quite strange.” Is this Lewis Carroll’s 1865 fantasy tale or … the average body-conscious, improvement-obsessed 2026 Whole Foods shopper? Mushrooms, long venerated in literature as dark transformative forces, have become Goopified. Nowadays, you can chug “adaptogenic mushroom coffee,” slurp “functional mushroom cocoa,” doze off with “mushroom sleep drops” or ingest/imbibe any number of other tinctures in the billion-dollar fungal supplements market that promise to fine-tune, or even totally recalibrate, the self. The latest and hottest items in this booming new retail category are mushroom gummies, gushed over by wellness influencers, spilling out from supermarket shelves right there next to your standard cough drops and protein bars. Fungi have aided medical advances like antibiotics and statins, it’s true, and certain species have shown promising results in fighting Parkinson’s or cancer — but what these pastel gumdrops proffer is a broader, more elliptical “cellular well-being.” The mystique feels intentional on product-makers’ part: Like Carroll’s baffled heroine, maybe you’re meant to be in a bit of thrall to the mysterious, almighty mushroom — lurching through Wonderland, charmed and confused by design. After all, you wonder, what are these ancient, alien creatures, growing in the secret dark? Hippocrates was supposedly using them to cauterize wounds around the 5th century B.C.E. In the Super Mario video games, mushrooms might give you extra lives; in HBO’s “The Last of Us,” they bring about the ruin of human civilization. © 2026 The New York Times Company
Keyword: Attention; Drug Abuse
Link ID: 30102 - Posted: 01.31.2026
By Allison Parshall Until half a billion years ago, life on Earth was slow. The seas were home to single-celled microbes and largely stationary soft-bodied creatures. But at the dawn of the Cambrian period, some 540 million years ago, everything exploded. Bodies diversified in all directions, and many organisms developed appendages that let them move quickly around their environment. These ecosystems became competitive places full of predators and prey. And our branch of the tree of life evolved an incredible structure to navigate it all: the brain. We don’t know whether this was the moment when consciousness first arose on Earth. But it might have been when living creatures began to really need something like it to combine a barrage of sensory information into one unified experience that could guide their actions. It’s because of this ability to experience that, eventually, we began to feel pain and pleasure. Eventually, we became guided not just by base needs but by curiosity, emotions and introspection. Over time we became aware of ourselves. This last step is what we have to thank for most of art, science and philosophy—and the millennia-long quest to understand consciousness itself. This state of awareness of ourselves and our environment comes with many mysteries. Why does being awake and alive, being yourself, feel like anything at all, and where does this singular sense of awareness come from in the brain? These questions may have objective answers, but because they are about private, subjective experiences that can’t be directly measured, they exist at the very boundaries of what the scientific method can reveal. Still, in the past 30 years neuroscientists scouring the brain for the so-called neural correlates of consciousness have learned a lot. Their search has revealed constellations of brain networks whose connections help to explain what happens when we lose consciousness. We now have troves of data and working theories, some with mind-bending implications. We have tools to help us detect consciousness in people with brain injuries. But we still don’t have easy answers—researchers can’t even agree on what consciousness is, let alone how best to reveal its secrets. The past few years have seen accusations of pseudoscience, results that challenge leading theories, and the uneasy feeling of a field at a crossroads. © 2025 SCIENTIFIC AMERICAN,
Keyword: Consciousness; Brain imaging
Link ID: 30090 - Posted: 01.21.2026
By Pria Anand I loved literature before I loved medicine, and as a medical student, I often found that my textbooks left me cold, their medical jargon somehow missing the point of profound diseases able to rewrite a person’s life and identity. I was born, I decided, a century too late: I found the stories I craved, not in contemporary textbooks, but in outdated case reports, 18th- and 19th-century descriptions of how the diseases I was studying might shape the life of a single patient. These reports were alive with vivid details: how someone’s vision loss affected their golf game or their smoking habit, their work or their love life. They were all tragedies: Each ended with an autopsy, a patient’s brain dissected to discover where, exactly, the problem lay, to inch closer to an understanding of the geography of the soul. To write these case studies, neurologists awaited the deaths and brains of living patients, robbing their subjects of the ability to choose what would become of their own bodies—the ability to write the endings of their own stories—after they had already been sapped of agency by their illnesses. Among these case reports was one from a forbidding state hospital in the north of Moscow: the story of a 19th-century Russian journalist referred to simply as “a learned man.” The journalist suffered a type of alcoholic dementia because of the brandy he often drank to cure his writer’s block, and he developed a profound amnesia. He could not remember where he was or why. He could win a game of checkers but would forget that he had even played the minute the game ended. In the place of these lost memories, the journalist’s imagination spun elaborate narratives; he believed he had written an article when in fact he had barely begun to conceive it before he became sick, would describe the prior day’s visit to a far-off place when in actuality he had been too weak to get out of bed, and maintained that some of his possessions—kept in a hospital safe—had been taken from him as part of an elaborate heist. © 2026 NautilusNext Inc.,
Keyword: Attention; Learning & Memory
Link ID: 30089 - Posted: 01.21.2026