Chapter 18. Attention and Higher Cognition




By Angie Voyles Askham
Each time we blink, our visual world is obscured for 100 to 300 milliseconds. Blinking is a necessary action that also, researchers long presumed, presents the brain with a problem: how to cobble together a cohesive picture of the before and after. “No one really thought about blinks as an act of looking or vision to begin with,” says Martin Rolfs, professor of experimental psychology at Humboldt University of Berlin. But blinking may be a more important component of vision than previously thought, according to a study published last month in the Proceedings of the National Academy of Sciences. Participants performed better on a visual task when they blinked while looking at the visual stimulus than when they blinked before it appeared. The blink, the team found, caused a change in visual input that improved participants’ perception. The finding suggests that blinking is a feature of seeing rather than a bug, says Rolfs, who was not involved with the study but wrote a commentary about it. And it could explain why adults blink more frequently than is seemingly necessary, the researchers say. “The brain capitalizes on things that are changing in the visual world—whether it’s blinks or eye movements, or any type of ocular-motor dynamics,” says Patrick Mayo, a neuroscientist in the ophthalmology department at the University of Pittsburgh, who was also not involved in the work. “That is … a point that’s still not well appreciated in visual neuroscience, generally.” The researchers started their investigation by simulating a blink. In the computational model they devised, a person staring at black and white stripes would suddenly see a dark, uniform gray before once again viewing the high-contrast pattern. The interruption would cause a brief change in the stimulus input to neurons in the retina, which in turn could increase the cells’ sensitivity to stimuli right after a blink, they hypothesized. © 2024 Simons Foundation
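The hypothesized mechanism is simple enough to sketch in a few lines of Python. The toy model below is illustrative only — the divisive adaptation rule and every parameter value are assumptions of this sketch, not the study authors’ code — but it captures the logic: a brief gray interruption lets contrast adaptation decay, so the response to the grating is transiently stronger right after the simulated blink.

    import numpy as np

    # Toy model of the blink simulation described above (illustrative only;
    # the adaptation rule and all parameters are assumptions, not the study's code).
    rate = 100                                # samples per second
    t = np.arange(0, 2.0, 1 / rate)           # 2 s of viewing a high-contrast grating
    contrast = np.ones_like(t)                # stimulus contrast over time
    contrast[(t >= 1.0) & (t < 1.15)] = 0.0   # ~150 ms "blink": uniform gray, zero contrast

    # Sensitivity adapts to a running average of recent contrast,
    # so a gap in contrast lets sensitivity recover.
    adapt = np.zeros_like(t)
    tau = 0.3                                 # adaptation time constant in seconds (assumed)
    for i in range(1, len(t)):
        adapt[i] = adapt[i - 1] + (contrast[i] - adapt[i - 1]) / (tau * rate)

    sensitivity = 1.0 / (0.5 + adapt)         # divisive gain control (assumed form)
    response = sensitivity * contrast         # response to the grating over time

    pre = response[(t > 0.85) & (t < 1.0)].mean()    # just before the blink
    post = response[(t > 1.15) & (t < 1.3)].mean()   # just after the blink
    print(f"mean response before blink: {pre:.2f}, after: {post:.2f}")

Run as written, the post-blink response exceeds the pre-blink response — the direction of the perceptual benefit the participants showed.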

Keyword: Vision; Attention
Link ID: 29303 - Posted: 05.14.2024

By Emily Cooke & LiveScience
Optical illusions play on the brain's biases, tricking it into perceiving images differently than how they really are. And now, in mice, scientists have harnessed an optical illusion to reveal hidden insights into how the brain processes visual information. The research focused on the neon-color-spreading illusion, which incorporates patterns of thin lines on a solid background. Parts of these lines are a different color — such as lime green, in the classic version of the illusion — and the brain perceives these lines as part of a solid shape with a distinct border — a circle, in this case. The closed shape also appears brighter than the lines surrounding it. It's well established that this illusion causes the human brain to falsely fill in and perceive a nonexistent outline and brightness — but there's been ongoing debate about what's going on in the brain when it happens. Now, for the first time, scientists have demonstrated that the illusion works on mice, and this allowed them to peer into the rodents' brains to see what's going on. Specifically, they zoomed in on part of the brain called the visual cortex. When light hits our eyes, electrical signals are sent via nerves to the visual cortex. This region processes that visual data and sends it on to other areas of the brain, allowing us to perceive the world around us. The visual cortex is organized into a hierarchy of areas that are progressively numbered V1, V2, V3 and so on. Each area is responsible for processing different features of images that hit the eyes, with V1 neurons handling the first and most basic stage of the data, while the other areas make up the "higher visual areas." These neurons are responsible for more complex visual processing than V1 neurons. © 2024 SCIENTIFIC AMERICAN

Keyword: Vision; Consciousness
Link ID: 29298 - Posted: 05.09.2024

By Dan Falk
Some years ago, when he was still living in southern California, neuroscientist Christof Koch drank a bottle of Barolo wine while watching The Highlander, and then, at midnight, ran up to the summit of Mount Wilson, the 5,710-foot peak that looms over Los Angeles. After an hour of “stumbling around with my headlamp and becoming nauseated,” as he later described the incident, he realized the nighttime adventure was probably not a smart idea, and climbed back down, though not before shouting into the darkness the last line of William Ernest Henley’s 1875 poem “Invictus”: “I am the master of my fate / I am the captain of my soul.” Koch, who first rose to prominence for his collaborative work with the late Nobel laureate Francis Crick, is hardly the only scientist to ponder the nature of the self—but he is perhaps the most adventurous, both in body and mind. He sees consciousness as the central mystery of our universe, and is willing to explore any reasonable idea in the search for an explanation. Over the years, Koch has toyed with a wide array of ideas, some of them distinctly speculative—like the idea that the Internet might become conscious, for example, or that with sufficient technology, multiple brains could be fused together, linking their accompanying minds along the way. (And yet, he does have his limits: He’s deeply skeptical both of the idea that we can “upload” our minds and of the “simulation hypothesis.”) In his new book, Then I Am Myself the World, Koch, currently the chief scientist at the Allen Institute for Brain Science in Seattle, ventures through the challenging landscape of integrated information theory (IIT), a framework that attempts to compute the amount of consciousness in a system based on the degree to which information is networked. Along the way, he struggles with what may be the most difficult question of all: How do our thoughts—seemingly ethereal and without mass or any other physical properties—have real-world consequences? © 2024 NautilusNext Inc.

Keyword: Consciousness
Link ID: 29294 - Posted: 05.07.2024

By Steve Paulson
These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness. O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.) When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind. © 2024 NautilusNext Inc.

Keyword: Consciousness; Robotics
Link ID: 29289 - Posted: 05.03.2024

By Dan Falk
Daniel Dennett, who died in April at the age of 82, was a towering figure in the philosophy of mind. Known for his staunch physicalist stance, he argued that minds, like bodies, are the product of evolution. He believed that we are, in a sense, machines—but astoundingly complex ones, the result of millions of years of natural selection. Dennett wrote more than a dozen books, some of them aimed at a scholarly audience but many of them directed squarely at the inquisitive non-specialist—including bestsellers like Consciousness Explained, Breaking the Spell, and Darwin’s Dangerous Idea. Reading his works, one gets the impression of a mind jammed to the rafters with ideas. As Richard Dawkins put it in a blurb for Dennett’s last book, a memoir titled I’ve Been Thinking: “How unfair for one man to be blessed with such a torrent of stimulating thoughts.” Dennett spent decades puzzling over the existence of minds. How does non-thinking matter arrange itself into matter that can think, and even ponder its own existence? A long-time academic nemesis of Dennett’s, the philosopher David Chalmers, dubbed this the “Hard Problem” of consciousness. But Dennett felt this label needlessly turned a series of potentially solvable problems into one giant unsolvable one: He was sure the so-called hard problem would evaporate once the various lesser (but still difficult) problems of understanding the brain’s mechanics were figured out. Because he viewed brains as miracle-free mechanisms, he saw no barrier to machine consciousness, at least in principle. Yet he had no fear of Terminator-style AI doomsday scenarios, either. (“The whole singularity stuff, that’s preposterous,” he once told an interviewer for The Guardian. “It distracts us from much more pressing problems.”) © 2024 NautilusNext Inc.

Keyword: Consciousness; Attention
Link ID: 29285 - Posted: 05.02.2024

By Lilly Tozer
How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found. In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer. The findings, published on 22 April in Nature Human Behaviour, could offer fresh insights into how people experience and keep track of time. “For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.” Research has shown that humans’ perception of time is intrinsically linked to our senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France. Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect. © 2024 Springer Nature Limited

Keyword: Attention; Vision
Link ID: 29269 - Posted: 04.24.2024

By John Horgan
Philosopher Daniel Dennett died a few days ago, on April 19. When he argued that we overrate consciousness, he demonstrated, paradoxically, how conscious he was, and he made his audience more conscious. Dennett’s death feels like the end of an era, the era of ultramaterialist, ultra-Darwinian, swaggering, know-it-all scientism. Who’s left, Richard Dawkins? Dennett wasn’t as smart as he thought he was, I liked to say, because no one is. He lacked the self-doubt gene, but he forced me to doubt myself. He made me rethink what I think, and what more can you ask of a philosopher? I first encountered Dennett’s in-your-face brilliance in 1981 when I read The Mind’s I, a collection of essays he co-edited. And his name popped up at a consciousness shindig I attended earlier this month. To honor Dennett, I’m posting a revision of my 2017 critique of his claim that consciousness is an “illusion.” I’m also coining a phrase, “the Dennett paradox,” which is explained below. Of all the odd notions to emerge from debates over consciousness, the oddest is that it doesn’t exist, at least not in the way we think it does. It is an illusion, like “Santa Claus” or “American democracy.” René Descartes said consciousness is the one undeniable fact of our existence, and I find it hard to disagree. I’m conscious right now, as I type this sentence, and you are presumably conscious as you read it (although I can’t be absolutely sure). The idea that consciousness isn’t real has always struck me as absurd, but smart people espouse it. One of the smartest is philosopher Daniel Dennett, who has been questioning consciousness for decades, notably in his 1991 bestseller Consciousness Explained. © 2024 SCIENTIFIC AMERICAN

Keyword: Consciousness
Link ID: 29266 - Posted: 04.24.2024

By Dan Falk
In 2022, researchers at the Bee Sensory and Behavioral Ecology Lab at Queen Mary University of London observed bumblebees doing something remarkable: The diminutive, fuzzy creatures were engaging in activity that could only be described as play. Given small wooden balls, the bees pushed them around and rotated them. The behavior had no obvious connection to mating or survival, nor was it rewarded by the scientists. It was, apparently, just for fun. The study on playful bees is part of a body of research that a group of prominent scholars of animal minds cited today, buttressing a new declaration that extends scientific support for consciousness to a wider suite of animals than has been formally acknowledged before. For decades, there’s been a broad agreement among scientists that animals similar to us — the great apes, for example — have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems. The new declaration, signed by biologists and philosophers, formally embraces that view. It reads, in part: “The empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including all reptiles, amphibians and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans and insects).” Inspired by recent research findings that describe complex cognitive behaviors in these and other animals, the document represents a new consensus and suggests that researchers may have overestimated the degree of neural complexity required for consciousness. © 2024 Simons Foundation

Keyword: Consciousness; Evolution
Link ID: 29264 - Posted: 04.20.2024

By Meghan Willcoxon
In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect. Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action. Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain. The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism and speech. Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries.

Keyword: Attention; Vision
Link ID: 29242 - Posted: 04.04.2024

By Markham Heid
The human hand is a marvel of nature. No other creature on Earth, not even our closest primate relatives, has hands structured quite like ours, capable of such precise grasping and manipulation. But we’re doing less intricate hands-on work than we used to. A lot of modern life involves simple movements, such as tapping screens and pushing buttons, and some experts believe our shift away from more complex hand activities could have consequences for how we think and feel. “When you look at the brain’s real estate — how it’s divided up, and where its resources are invested — a huge portion of it is devoted to movement, and especially to voluntary movement of the hands,” said Kelly Lambert, a professor of behavioral neuroscience at the University of Richmond in Virginia. Dr. Lambert, who studies effort-based rewards, said that she is interested in “the connection between the effort we put into something and the reward we get from it” and that she believes working with our hands might be uniquely gratifying. In some of her research on animals, Dr. Lambert and her colleagues found that rats that used their paws to dig up food had healthier stress hormone profiles and were better at problem solving compared with rats that were given food without having to dig. She sees some similarities in studies on people, which have found that a whole range of hands-on activities — such as knitting, gardening and coloring — are associated with cognitive and emotional benefits, including improvements in memory and attention, as well as reductions in anxiety and depression symptoms. These studies haven’t determined that hand involvement, specifically, deserves the credit. The researchers who looked at coloring, for example, speculated that it might promote mindfulness, which could be beneficial for mental health. Those who have studied knitting said something similar. “The rhythm and repetition of knitting a familiar or established pattern was calming, like meditation,” said Catherine Backman, a professor emeritus of occupational therapy at the University of British Columbia in Canada who has examined the link between knitting and well-being. © 2024 The New York Times Company

Keyword: Learning & Memory; Stress
Link ID: 29231 - Posted: 04.02.2024

By Ingrid Wickelgren
You see a woman on the street who looks familiar—but you can’t remember how you know her. Your brain cannot attach any previous experiences to this person. Hours later, you suddenly recall the party at a friend’s house where you met her, and you realize who she is. In a new study in mice, researchers have discovered the place in the brain that is responsible for both types of familiarity—vague recognition and complete recollection. Both, moreover, are represented by two distinct neural codes. The findings, which appeared on February 20 in Neuron, showcase the use of advanced computer algorithms to understand how the brain encodes concepts such as social novelty and individual identity, says study co-author Steven Siegelbaum, a neuroscientist at the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University. The brain’s signature for strangers turns out to be simpler than the one used for old friends—which makes sense, Siegelbaum says, given the vastly different memory requirements for the two relationships. “Where you were, what you were doing, when you were doing it, who else [was there]—the memory of a familiar individual is a much richer memory,” Siegelbaum says. “If you’re meeting a stranger, there’s nothing to recollect.” The action occurs in a small sliver of a brain region called the hippocampus, known for its importance in forming memories. The sliver in question, known as CA2, seems to specialize in a certain kind of memory used to recall relationships. “[The new work] really emphasizes the importance of this brain area to social processing,” at least in mice, says Serena Dudek, a neuroscientist at the National Institute of Environmental Health Sciences, who was not involved in the study. © 2024 SCIENTIFIC AMERICAN

Keyword: Attention; Learning & Memory
Link ID: 29222 - Posted: 03.28.2024

By Robert D. Hershey Jr.
Daniel Kahneman, who never took an economics course but who pioneered a psychologically based branch of economics that led to a Nobel in economic science in 2002, died on Wednesday. He was 90. His death was confirmed by his partner, Barbara Tversky. She declined to say where he died. Professor Kahneman, who was long associated with Princeton University and lived in Manhattan, employed his training as a psychologist to advance what came to be called behavioral economics. The work, done largely in the 1970s, led to a rethinking of issues as far-flung as medical malpractice, international political negotiations and the evaluation of baseball talent, all of which he analyzed, mostly in collaboration with Amos Tversky, a Stanford cognitive psychologist who did groundbreaking work on human judgment and decision-making. (Ms. Tversky, also a professor of psychology at Stanford, had been married to Professor Tversky, who died in 1996. She and Professor Kahneman became partners several years ago.) As opposed to traditional economics, which assumes that human beings generally act in fully rational ways and that any exceptions tend to disappear as the stakes are raised, the behavioral school is based on exposing hard-wired mental biases that can warp judgment, often with counterintuitive results. “His central message could not be more important,” the Harvard psychologist and author Steven Pinker told The Guardian in 2014, “namely, that human reason left to its own devices is apt to engage in a number of fallacies and systematic errors, so if we want to make better decisions in our personal lives and as a society, we ought to be aware of these biases and seek workarounds. That’s a powerful and important discovery.” © 2024 The New York Times Company

Keyword: Attention
Link ID: 29218 - Posted: 03.28.2024

By Jyoti Madhusoodanan
When the Philadelphia-based company Bioquark announced a plan in 2016 to regenerate neurons in brain-dead people, their proposal elicited skepticism and backlash. Researchers questioned the scientific merits of the planned study, which sought to inject stem cells and other materials into recently deceased subjects. Ethicists said it bordered on quackery and would exploit grieving families. Bioquark has since folded. But quietly, a physician who was involved in the controversial proposal, Himanshu Bansal, has continued the research. Bansal recently told Undark that he has been conducting work funded by him and his research team at a private hospital in Rudrapur, India, experimenting mostly with young adults who died in traffic accidents. He said he has data for 20 subjects for the first phase of the study and 11 for the second — some of whom showed glimmers of renewed electrical activity — and he plans to expand the study to include several more. Bansal said he has submitted his results to peer-reviewed journals over the past several years but has yet to find one that would publish them. Bansal may be among the more controversial figures conducting research with people who have been declared brain dead, but not by any stretch is he the only one. In recent years, high-profile experiments implanting non-human organs into human bodies, a procedure known as xenotransplantation, have fueled rising interest in using brain-dead subjects to study procedures that are too risky to perform on living people. With the support of a ventilator and other equipment, a person’s heart, kidneys, immune system, and other body parts can function for days, sometimes weeks or more, after brain death. For researchers who seek to understand drug delivery, organ transplantation, and other complexities of human physiology, these bodies can provide a more faithful simulacrum of a living human being than could be achieved with animals or lab-grown cells and tissues.

Keyword: Consciousness
Link ID: 29217 - Posted: 03.26.2024

By Anna Gibbs
Imagine a person’s face. Now imagine that whenever you looked at that face, there was a chance it would appear distorted. That’s what life is like for a person with prosopometamorphopsia, or PMO. Now, thanks to a new study, you can see through the eyes of someone with this rare condition. Relying on feedback from a 58-year-old man who has had PMO for nearly three years, researchers at Dartmouth College altered photos of faces to mimic the “demonic” distortions he experienced. This is believed to be the first time that images have been created to so closely replicate what a patient with the condition is seeing, psychologist Antônio Mello and colleagues report in the March 23 Lancet. “We hope this has a big impact in the way people think about PMO, especially for them to be able to understand how severe PMO can be,” Mello says. For instance, he says, this particular patient didn’t like to go to the store because fellow shoppers looked like “an army of demons.” PMO is poorly understood, with fewer than 100 cases cited since 1904. Patients report a wide variety of facial distortions. While the patient in this study sees extremely stretched features with deep grooves on the face, others may see distortions that cause features to move position or change size. Because of that, this visualization is patient-specific and wouldn’t apply for everyone with PMO, says Jason Barton, a neurologist at the University of British Columbia in Vancouver who has worked with the researchers before but was not involved in this study. Still, “I think it’s helpful for people to understand the kinds of distortions people can see.” © Society for Science & the Public 2000–2024.

Keyword: Attention
Link ID: 29211 - Posted: 03.23.2024

By Julian E. Barnes
New studies by the National Institutes of Health failed to find evidence of brain injury in scans or blood markers of the diplomats and spies who suffered symptoms of Havana syndrome, bolstering the conclusions of U.S. intelligence agencies about the strange health incidents. Spy agencies have concluded that the debilitating symptoms associated with Havana syndrome, including dizziness and migraines, are not the work of a hostile foreign power. They have not identified a weapon or device that caused the injuries, and intelligence analysts now believe the symptoms are most likely explained by environmental factors, existing medical conditions or stress. The lead scientist on one of the two new studies said that while the study was not designed to find a cause, the findings were consistent with those determinations. The authors said the studies are at odds with findings from researchers at the University of Pennsylvania, who found differences in brain scans of people with Havana syndrome symptoms and a control group. Dr. David Relman, a prominent scientist who has had access to the classified files involving the cases and representatives of people suffering from Havana syndrome, said the new studies were flawed. Many brain injuries are difficult to detect with scans or blood markers, he said. He added that the findings do not dispute that an external force, like a directed energy device, could have injured the current and former government workers. The studies were published in The Journal of the American Medical Association on Monday alongside an editorial by Dr. Relman that was critical of the findings. © 2024 The New York Times Company

Keyword: Learning & Memory; Depression
Link ID: 29196 - Posted: 03.19.2024

By Meghan Rosen
Leakiness in the brain could explain the memory and concentration problems linked to long COVID. In patients with brain fog, MRI scans revealed signs of damaged blood vessels in their brains, researchers reported February 22 in Nature Neuroscience. In these people, dye injected into the bloodstream leaked into their brains and pooled in regions that play roles in language, memory, mood and vision. It’s the first time anyone’s shown that long COVID patients can have leaky blood-brain barriers, says study coauthor Matthew Campbell, a geneticist at Trinity College Dublin in Ireland. That barrier, tightly knit cells lining blood vessels, typically keeps riffraff out of the brain, like bouncers guarding a nightclub. If the barrier breaks down, bloodborne viruses, cells and other interlopers can sneak into the brain’s tissues and wreak havoc, says Avindra Nath, a neurologist at the National Institutes of Health in Bethesda, Md. It’s too early to say definitively whether that’s happening in people with long COVID, but the new study provides evidence that “brain fog has a biological basis,” says Nath, who wasn’t involved with the work. That alone is important for patients, he says, because their symptoms may be otherwise discounted by physicians. For some people, brain fog can feel like a slowdown in thinking or difficulty recalling short-term memories, Campbell says. For example, “patients will go for a drive, and forget where they’re driving to.” That might sound trivial, he says, but it actually pushes people into panic mode. © Society for Science & the Public 2000–2024.

Keyword: Attention; Learning & Memory
Link ID: 29192 - Posted: 03.16.2024

By Meghan Bartels
No matter how much trouble your pet gets into when they’re awake, few sights are as peaceful as a dog curled up in their bed or a cat stretched out in the sun, snoring away. But their experience of sleep can feel impenetrable. What fills the dreams of a dog or cat? That’s a tricky question to answer. Snowball isn’t keeping a dream journal, and there’s no technology yet that can translate the brain activity of even a sleeping human into a secondhand experience of their dream world, much less a sleeping animal. “No one has done research on the content of animals’ dreams,” says Deirdre Barrett, a dream researcher at Harvard University and author of the book The Committee of Sleep. But Rover’s dreamscape isn’t entirely impenetrable, at least to educated guesses. First of all, Barrett says, only your furrier friends appear to dream. Fish, for example, don’t seem to display rapid eye movement (REM), the phase of sleep during which dreams are most common in humans. “I think it’s a really good guess that they don’t have dreams in the sense of anything like the cognitive activity that we call dreams,” she says. Whether birds experience REM sleep is less clear, Barrett says. And some marine mammals always keep one side of their brain awake even while the other sleeps, with no or very strange REM sleep involved. That means seals and dolphins likely don’t dream in anything like the way humans do. But the mammals we keep as pets are solidly REM sleepers. “I think it’s a very safe, strong guess that they are having some kind of cognitive brain activity that is as much like our dreams as their waking perceptions are like ours,” she says. That doesn’t mean that cats and dogs experience humanlike dreams. “It would be a mistake to assume that other animals dream in the same way that we do, just in their nonhuman minds and bodies,” says David Peña-Guzmán, a philosopher at San Francisco State University and author of the book When Animals Dream. For example, humans rarely report scents when recounting dreams; however, we should expect dogs to dream in smells, he says, given that olfaction is so central to their waking experience of the world. © 2024 SCIENTIFIC AMERICAN

Keyword: Sleep; Consciousness
Link ID: 29176 - Posted: 03.05.2024

By Pam Belluck
Long Covid may lead to measurable cognitive decline, especially in the ability to remember, reason and plan, a large new study suggests. Cognitive testing of nearly 113,000 people in England found that those with persistent post-Covid symptoms scored the equivalent of 6 I.Q. points lower than people who had never been infected with the coronavirus, according to the study, published Wednesday in The New England Journal of Medicine. People who had been infected and no longer had symptoms also scored slightly lower than people who had never been infected, by the equivalent of 3 I.Q. points, even if they were ill for only a short time. The differences in cognitive scores were relatively small, and neurological experts cautioned that the results did not imply that being infected with the coronavirus or developing long Covid caused profound deficits in thinking and function. But the experts said the findings are important because they provide numerical evidence for the brain fog, focus and memory problems that afflict many people with long Covid. “These emerging and coalescing findings are generally highlighting that yes, there is cognitive impairment in long Covid survivors — it’s a real phenomenon,” said James C. Jackson, a neuropsychologist at Vanderbilt Medical Center, who was not involved in the study. He and other experts noted that the results were consistent with smaller studies that have found signals of cognitive impairment. The new study also found reasons for optimism, suggesting that if people’s long Covid symptoms ease, the related cognitive impairment might, too: People who had experienced long Covid symptoms for months and eventually recovered had cognitive scores similar to those who had experienced a quick recovery, the study found. © 2024 The New York Times Company

Keyword: Attention; Learning & Memory
Link ID: 29171 - Posted: 02.29.2024

By Kevin Mitchell
It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations. The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it. More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene. Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain. © 2024 Simons Foundation
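Mitchell’s vertical-line example can be made concrete with a toy receptive field. The sketch below is a crude simple-cell model of my own devising — the kernel and the stimuli are illustrative assumptions, not anything from the essay — showing how a unit tuned to vertical edges responds selectively, so that its activity carries the information “vertical edge here.”

    import numpy as np

    # A toy "receptive field" tuned to vertical edges (Sobel-like kernel;
    # an illustrative assumption, not a model from Mitchell's essay).
    vertical_rf = np.array([[-1.0, 0.0, 1.0],
                            [-2.0, 0.0, 2.0],
                            [-1.0, 0.0, 1.0]])

    def response(patch, rf):
        """Rectified dot product: the unit fires only for its preferred feature."""
        return max(0.0, float(np.sum(patch * rf)))

    # A 3x3 image patch containing a vertical dark-to-light edge...
    vertical_edge = np.array([[0.0, 0.0, 1.0],
                              [0.0, 0.0, 1.0],
                              [0.0, 0.0, 1.0]])
    # ...and the same edge rotated 90 degrees (horizontal).
    horizontal_edge = vertical_edge.T

    print("response to vertical edge:  ", response(vertical_edge, vertical_rf))    # strong
    print("response to horizontal edge:", response(horizontal_edge, vertical_rf))  # zero

When such a unit is active, downstream areas can treat its output as carrying the information “vertical edge at this location” — the sense of “representation” the essay describes.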

Keyword: Consciousness; Vision
Link ID: 29148 - Posted: 02.13.2024

By Benjamin Breen
When I began researching Tripping on Utopia in 2018, I was aware that many midcentury scientists and psychiatrists had shown a keen interest in the promise of psychedelics. But what I didn’t realize was how remarkably broad-based this interest was. As I dug deeper into the archival record, I was struck by the public enthusiasm for the use of substances like LSD and mescaline in therapy—as manifested not just in scientific studies, but in newspaper articles and even television specials. (My favorite is this remarkable 1957 broadcast which shows a woman taking LSD on camera, then uttering memorable lines like “I’ve never seen such infinite beauty in my life” and “I wish I could talk in Technicolor.”) Above all, I was surprised by the public response to the Hollywood actor Cary Grant’s reveal that he was regularly using LSD in psychedelic therapy sessions. In a series of interviews starting in 1959—the same year he starred in North by Northwest—Grant went public as an unlikely advocate for psychedelic therapy. It was the surprisingly positive reaction to Grant’s endorsement that most struck me. As recounted in my book, the journalist who broke the story was overwhelmed by phone calls and letters. “Psychiatrists called, complaining that their patients were now begging them for LSD,” he remembered. “Every actor in town under analysis wanted it.” Nor was this first wave of legal psychedelic therapy restricted to Hollywood. Two other very prominent advocates of psychedelic therapy in the late 1950s were former Congresswoman Clare Boothe Luce and her husband Henry Luce, the founder of Time and Life magazines. It is not an exaggeration to say that this married couple dominated the media landscape of the 20th century. Nor is it an exaggeration to say that psychedelics profoundly influenced Clare Boothe Luce’s life in the late 1950s. She credited LSD with transformative insights that helped her to overcome lasting trauma associated with her abusive childhood and the death of her only daughter in a car accident. © 2024 NautilusNext Inc.

Keyword: Drug Abuse; Consciousness
Link ID: 29142 - Posted: 02.10.2024