Chapter 18. Attention and Higher Cognition

By Meghan Rosen Leakiness in the brain could explain the memory and concentration problems linked to long COVID. In patients with brain fog, MRI scans revealed signs of damaged blood vessels in their brains, researchers reported February 22 in Nature Neuroscience. In these people, dye injected into the bloodstream leaked into their brains and pooled in regions that play roles in language, memory, mood and vision. It’s the first time anyone’s shown that long COVID patients can have leaky blood-brain barriers, says study coauthor Matthew Campbell, a geneticist at Trinity College Dublin in Ireland. That barrier, tightly knit cells lining blood vessels, typically keeps riffraff out of the brain, like bouncers guarding a nightclub. If the barrier breaks down, bloodborne viruses, cells and other interlopers can sneak into the brain’s tissues and wreak havoc, says Avindra Nath, a neurologist at the National Institutes of Health in Bethesda, Md. It’s too early to say definitively whether that’s happening in people with long COVID, but the new study provides evidence that “brain fog has a biological basis,” says Nath, who wasn’t involved with the work. That alone is important for patients, he says, because their symptoms may be otherwise discounted by physicians. For some people, brain fog can feel like a slowdown in thinking or difficulty recalling short-term memories, Campbell says. For example, “patients will go for a drive, and forget where they’re driving to.” That might sound trivial, he says, but it actually pushes people into panic mode. © Society for Science & the Public 2000–2024.

Keyword: Attention; Learning & Memory
Link ID: 29192 - Posted: 03.16.2024

By Meghan Bartels No matter how much trouble your pet gets into when they’re awake, few sights are as peaceful as a dog curled up in their bed or a cat stretched out in the sun, snoring away. But their experience of sleep can feel impenetrable. What fills the dreams of a dog or cat? That’s a tricky question to answer. Snowball isn’t keeping a dream journal, and there’s no technology yet that can translate the brain activity of even a sleeping human into a secondhand experience of their dream world, much less a sleeping animal. “No one has done research on the content of animals’ dreams,” says Deirdre Barrett, a dream researcher at Harvard University and author of the book The Committee of Sleep. But Rover’s dreamscape isn’t entirely impenetrable, at least to educated guesses. First of all, Barrett says, only your furrier friends appear to dream. Fish, for example, don’t seem to display rapid eye movement (REM), the phase of sleep during which dreams are most common in humans. “I think it’s a really good guess that they don’t have dreams in the sense of anything like the cognitive activity that we call dreams,” she says. Whether birds experience REM sleep is less clear, Barrett says. And some marine mammals always keep one side of their brain awake even while the other sleeps, with no or very strange REM sleep involved. That means seals and dolphins likely don’t dream in anything like the way humans do. But the mammals we keep as pets are solidly REM sleepers. “I think it’s a very safe, strong guess that they are having some kind of cognitive brain activity that is as much like our dreams as their waking perceptions are like ours,” she says. That doesn’t mean that cats and dogs experience humanlike dreams. “It would be a mistake to assume that other animals dream in the same way that we do, just in their nonhuman minds and bodies,” says David Peña-Guzmán, a philosopher at San Francisco State University and author of the book When Animals Dream. For example, humans rarely report scents when recounting dreams; however, we should expect dogs to dream in smells, he says, given that olfaction is so central to their waking experience of the world. © 2024 SCIENTIFIC AMERICAN

Keyword: Sleep; Consciousness
Link ID: 29176 - Posted: 03.05.2024

By Pam Belluck Long Covid may lead to measurable cognitive decline, especially in the ability to remember, reason and plan, a large new study suggests. Cognitive testing of nearly 113,000 people in England found that those with persistent post-Covid symptoms scored the equivalent of 6 I.Q. points lower than people who had never been infected with the coronavirus, according to the study, published Wednesday in The New England Journal of Medicine. People who had been infected and no longer had symptoms also scored slightly lower than people who had never been infected, by the equivalent of 3 I.Q. points, even if they were ill for only a short time. The differences in cognitive scores were relatively small, and neurological experts cautioned that the results did not imply that being infected with the coronavirus or developing long Covid caused profound deficits in thinking and function. But the experts said the findings are important because they provide numerical evidence for the brain fog, focus and memory problems that afflict many people with long Covid. “These emerging and coalescing findings are generally highlighting that yes, there is cognitive impairment in long Covid survivors — it’s a real phenomenon,” said James C. Jackson, a neuropsychologist at Vanderbilt Medical Center, who was not involved in the study. He and other experts noted that the results were consistent with smaller studies that have found signals of cognitive impairment. The new study also found reasons for optimism, suggesting that if people’s long Covid symptoms ease, the related cognitive impairment might, too: People who had experienced long Covid symptoms for months and eventually recovered had cognitive scores similar to those who had experienced a quick recovery, the study found. © 2024 The New York Times Company

Keyword: Attention; Learning & Memory
Link ID: 29171 - Posted: 02.29.2024

By Kevin Mitchell It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations. The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it. More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene. Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain. © 2024 Simons Foundation
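
The center-surround receptive fields described here are commonly modeled as a difference of Gaussians: a narrow excitatory center minus a broader inhibitory surround. As a rough illustration of why such a cell signals edges rather than uniform illumination, here is a minimal sketch (Python with NumPy and SciPy assumed; the stimulus and filter widths are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy stimulus: a dark-to-light vertical edge, the kind of luminance
# boundary retinal ganglion cells are tuned to.
image = np.zeros((64, 64))
image[:, 32:] = 1.0

# Difference-of-Gaussians receptive field: a narrow excitatory center
# minus a broader inhibitory surround, applied across the whole image.
center = gaussian_filter(image, sigma=1.0)
surround = gaussian_filter(image, sigma=3.0)
response = center - surround

# Responses are near zero in the uniform regions and peak at the edge:
# the model cell signals change in illumination, not illumination itself.
edge_column = int(np.argmax(np.abs(response).sum(axis=0)))
print(f"strongest response near column {edge_column}")  # close to 32
```

The response peaking at the boundary, and only there, is the sense in which the cell carries edge information and makes it available to the next stage of the visual hierarchy.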

Keyword: Consciousness; Vision
Link ID: 29148 - Posted: 02.13.2024

By Benjamin Breen When I began researching Tripping on Utopia in 2018, I was aware that many midcentury scientists and psychiatrists had shown a keen interest in the promise of psychedelics. But what I didn’t realize was how remarkably broad-based this interest was. As I dug deeper into the archival record, I was struck by the public enthusiasm for the use of substances like LSD and mescaline in therapy—as manifested not just in scientific studies, but in newspaper articles and even television specials. (My favorite is this remarkable 1957 broadcast which shows a woman taking LSD on camera, then uttering memorable lines like “I’ve never seen such infinite beauty in my life” and “I wish I could talk in Technicolor.”) Above all, I was surprised by the public response to the Hollywood actor Cary Grant’s reveal that he was regularly using LSD in psychedelic therapy sessions. In a series of interviews starting in 1959—the same year he starred in North by Northwest—Grant went public as an unlikely advocate for psychedelic therapy. It was the surprisingly positive reaction to Grant’s endorsement that most struck me. As recounted in my book, the journalist who broke the story was overwhelmed by phone calls and letters. “Psychiatrists called, complaining that their patients were now begging them for LSD,” he remembered. “Every actor in town under analysis wanted it.” Nor was this first wave of legal psychedelic therapy restricted to Hollywood. Two other very prominent advocates of psychedelic therapy in the late 1950s were former Congresswoman Clare Boothe Luce and her husband Henry Luce, the founder of Time and Life magazines. It is not an exaggeration to say that this married couple dominated the media landscape of the 20th century. Nor is it an exaggeration to say that psychedelics profoundly influenced Clare Boothe Luce’s life in the late 1950s. She credited LSD with transformative insights that helped her to overcome lasting trauma associated with her abusive childhood and the death of her only daughter in a car accident. © 2024 NautilusNext Inc.,

Keyword: Drug Abuse; Consciousness
Link ID: 29142 - Posted: 02.10.2024

By Nora Bradford Whenever you’re actively performing a task — say, lifting weights at the gym or taking a hard exam — the parts of your brain required to carry it out become “active” when neurons step up their electrical activity. But is your brain active even when you’re zoning out on the couch? The answer, researchers have found, is yes. Over the past two decades they’ve defined what’s known as the default mode network, a collection of seemingly unrelated areas of the brain that activate when you’re not doing much at all. Its discovery has offered insights into how the brain functions outside of well-defined tasks and has also prompted research into the role of brain networks — not just brain regions — in managing our internal experience. In the late 20th century, neuroscientists began using new techniques to take images of people’s brains as they performed tasks in scanning machines. As expected, activity in certain brain areas increased during tasks — and to the researchers’ surprise, activity in other brain areas declined simultaneously. The neuroscientists were intrigued that during a wide variety of tasks, the very same brain areas consistently dialed back their activity. It was as if these areas had been active when the person wasn’t doing anything, and then turned off when the mind had to concentrate on something external. Researchers called these areas “task negative.” When they were first identified, Marcus Raichle, a neurologist at the Washington University School of Medicine in St. Louis, suspected that these task-negative areas play an important role in the resting mind. “This raised the question of ‘What’s baseline brain activity?’” Raichle recalled. In an experiment, he asked people in scanners to close their eyes and simply let their minds wander while he measured their brain activity.
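
In analyses like the ones described above, “task-negative” has a concrete operational meaning: a region whose average activity during task blocks is reliably lower than during rest. A toy sketch of that contrast on synthetic data (NumPy assumed; the block design, region count, and threshold are invented for illustration, and real fMRI analysis involves far more modeling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activity for five regions over 200 time points, alternating
# 20-point blocks of rest and task (a crude stand-in for an fMRI block design).
n_regions, n_time = 5, 200
task_on = (np.arange(n_time) // 20) % 2 == 1
activity = rng.normal(0.0, 1.0, size=(n_regions, n_time))
activity[0:2, task_on] += 1.0   # task-positive regions ramp up during task
activity[2:4, task_on] -= 1.0   # task-negative regions quiet down during task
# region 4 is left unmodulated

# Task-minus-rest contrast: reliably negative values mark "task-negative" regions.
contrast = activity[:, task_on].mean(axis=1) - activity[:, ~task_on].mean(axis=1)
for i, c in enumerate(contrast):
    label = "task-negative" if c < -0.5 else "task-positive" if c > 0.5 else "unmodulated"
    print(f"region {i}: task minus rest = {c:+.2f} ({label})")
```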

Keyword: Attention; Consciousness
Link ID: 29135 - Posted: 02.06.2024

By Ashley Juavinett In the 2010 award-winning film “Inception,” Leonardo DiCaprio’s character and others run around multiple layers of someone’s consciousness, trying to implant an idea in the person’s mind. If you can plant something deep enough, the film suggests, you can make them believe it is their own idea. The film was billed as science fiction, but three years later, in 2013, researchers actually did this — in a mouse, at least. The work focused on the hippocampus, along with its closely interconnected structures, long recognized by scientists to hold our dearest memories. If you damage significant portions of just one region of your hippocampus, the dentate gyrus, you’ll lose the ability to form new memories. How these memories are stored, however, is still up for debate. One early but persistent idea posits that enduring changes in our neural circuitry, or “engrams,” may represent the physical traces of specific memories. An engram is sometimes thought of as a group of cells, along with their synaptic weights and connections throughout the brain. In sum, the engram is what DiCaprio’s character would have had to discreetly manipulate in his target. In 2012, a team in Susumu Tonegawa’s lab at the Massachusetts Institute of Technology (MIT) showed that you could mark the cells of a real memory engram and reactivate them later. Taking that work one step further, Steve Ramirez, Xu Liu and others in Tonegawa’s lab demonstrated the following year that you can implant a memory of something that never even happened. In doing so, they turned science fiction into reality, one tiny foot shock at a time. Published in Science, Ramirez and Liu’s study is a breath of fresh air, scientifically speaking. The abstract starts with one of the shortest sentences you’ll ever find in a scientific manuscript: “Memories can be unreliable.” The entire paper is extremely readable, and there is no shortage of related papers and review articles that you could give your students to read for additional context. © 2024 Simons Foundation

Keyword: Learning & Memory
Link ID: 29131 - Posted: 02.06.2024

By Erin Garcia de Jesús Bruce the kea is missing his upper beak, giving the olive green parrot a look of perpetual surprise. But scientists are the astonished ones. The typical kea (Nestor notabilis) sports a long, sharp beak, perfect for digging insects out of rotten logs or ripping roots from the ground in New Zealand’s alpine forests. Bruce has been missing the upper part of his beak since at least 2012, when he was rescued as a fledgling and sent to live at the Willowbank Wildlife Reserve in Christchurch. The defect prevents Bruce from foraging on his own. Keeping his feathers clean should also be an impossible task. In 2021, when comparative psychologist Amalia Bastos arrived at the reserve with colleagues to study keas, the zookeepers reported something odd: Bruce had seemingly figured out how to use small stones to preen. “We were like, ‘Well that’s weird,’ ” says Bastos, of Johns Hopkins University. Over nine days, the team kept a close eye on Bruce, quickly taking videos if he started cleaning his feathers. Bruce, it turned out, had indeed invented his own work-around to preen, the researchers reported in 2021 in Scientific Reports. First, Bruce selects the proper tool, rolling pebbles around in his mouth with his tongue and spitting out candidates until he finds one that he likes, usually something pointy. Next, he holds the pebble between his tongue and lower beak. Then, he picks through his feathers. “It’s crazy because the behavior was not there from the wild,” Bastos says. When Bruce arrived at Willowbank, he was too young to have learned how to preen. And no other bird in the aviary uses pebbles in this way. “It seems like he just innovated this tool use for himself,” she says. © Society for Science & the Public 2000–2024.

Keyword: Intelligence; Evolution
Link ID: 29117 - Posted: 01.27.2024

By Christian Guay & Emery Brown What does it mean to be conscious? People have been thinking and writing about this question for millennia. Yet many things about the conscious mind remain a mystery, including how to measure and assess it. What is a unit of consciousness? Are there different levels of consciousness? What happens to consciousness during sleep, coma and general anesthesia? As anesthesiologists, we think about these questions often. We make a promise to patients every day that they will be disconnected from the outside world and their inner thoughts during surgery, retain no memories of the experience and feel no pain. In this way, general anesthesia has enabled tremendous medical advances, from microscopic vascular repairs to solid organ transplants. In addition to their tremendous impact on clinical care, anesthetics have emerged as powerful scientific tools to probe questions about consciousness. They allow us to induce profound and reversible changes in conscious states—and study brain responses during these transitions. But one of the challenges that anesthesiologists face is measuring the transition from one state to another. That’s because many of the approaches that exist interrupt or disrupt what we are trying to study. Essentially, assessing the system affects the system. In studies of human consciousness, determining whether someone is conscious can arouse the person being studied—confounding that very assessment. To address this challenge, we adapted a simple approach we call the breathe-squeeze method. It offers us a way to study changes in conscious state without interrupting those shifts. To understand this approach, it helps to consider some insights from studies of consciousness that have used anesthetics. For decades researchers have used electroencephalography (EEG) to observe electrical activity in the brains of people receiving various anesthetics. They can then analyze that activity with EEG readings to characterize patterns that are specific to various anesthetics, so-called anesthetic signatures. © 2024 SCIENTIFIC AMERICAN
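
The EEG “anesthetic signatures” mentioned here are usually summarized by how power distributes across frequency bands; propofol, for instance, is known for producing strong frontal alpha (8-13 Hz) oscillations. A minimal band-power sketch on a synthetic trace (NumPy and SciPy assumed; the sampling rate, signal, and band edges are illustrative, and clinical pipelines involve much more preprocessing):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                       # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)   # 30 seconds of synthetic "EEG"

# Broadband noise plus a strong 10 Hz rhythm, loosely mimicking the frontal
# alpha oscillation commonly reported under propofol anesthesia.
eeg = rng.normal(0.0, 1.0, t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)

# Welch power spectrum; band-power summaries like these are one ingredient
# of the spectral "signatures" that distinguish anesthetics.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

for name, lo, hi in [("delta", 0.5, 4), ("theta", 4, 8), ("alpha", 8, 13), ("beta", 13, 30)]:
    print(f"{name} power: {band_power(lo, hi):.2f}")   # alpha should dominate
```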

Keyword: Consciousness; Sleep
Link ID: 29116 - Posted: 01.27.2024

By Kenna Hughes-Castleberry Crows, ravens and other birds in the Corvidae family have a head for numbers. Not only can they make quantity estimations (as can many other animal species), but they can learn to associate number values with abstract symbols, such as “3.” The biological basis of this latter talent stems from specific number-associated neurons in a brain region called the nidopallium caudolaterale (NCL), a new study shows. The region also supports long-term memory, goal-oriented thinking and number processing. Discovery of the specialized neurons in the NCL “helps us understand the origins of our counting and math capabilities,” says study investigator Andreas Nieder, professor of animal physiology at the University of Tübingen. Until now, number-associated neurons — cells that fire especially frequently in response to an animal seeing a specific number — had been found only in the prefrontal cortex of primates, which shared a common ancestor with corvids some 300 million years ago. The new findings imply that the ability to form number-sign associations evolved independently and convergently in the two lineages. “Studying whether animals have similar concepts or represent numerosity in ways that are similar to what humans do helps us establish when in our evolutionary history these abilities may have emerged and whether these abilities emerge only in species with particular ecologies or social structures,” says Jennifer Vonk, professor of psychology at Oakland University, who was not involved in the new study. Corvids are considered especially intelligent birds, with previous studies showing that they can create and use tools, and may even experience self-recognition. Nieder has studied corvids’ and other animals’ “number sense,” or the ability to understand numerical values, for more than a decade. His previous work revealed specialized neurons in the NCL that recognize and respond to different quantities of items — including the number zero. But he tested the neurons only with simple pictures and signs that have inherent meaning for the crows, such as size. © 2023 Simons Foundation.
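
A “number-associated neuron” in this sense is a cell whose firing rate peaks for one preferred numerosity and falls off for neighboring values. A toy simulation of how such a preference can be read out from spike counts (NumPy assumed; the rates, trial counts, and preferred numbers are invented for illustration, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
numbers = np.arange(1, 6)    # numerosities 1-5 presented to the "bird"
preferred = [2, 4]           # ground-truth preferences of two toy neurons

for i, pref in enumerate(preferred):
    # Firing rate peaks at the preferred number and falls off for neighbors,
    # the classic bell-shaped tuning reported for number neurons.
    rates = 1.0 + 10.0 * np.exp(-((numbers - pref) ** 2) / 2.0)
    counts = rng.poisson(np.tile(rates, (50, 1)))    # 50 trials per numerosity
    tuning = counts.mean(axis=0)
    print(f"neuron {i}: mean spike counts per number = {np.round(tuning, 1)}, "
          f"decoded preference = {numbers[np.argmax(tuning)]}")
```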

Keyword: Intelligence; Evolution
Link ID: 29111 - Posted: 01.23.2024

By Mariana Lenharo Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything. The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration. Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between. The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects. The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room. Their struggle to get the collaboration off the ground is mirrored in wider fractures in the field. © 2024 Springer Nature Limited

Keyword: Consciousness
Link ID: 29106 - Posted: 01.18.2024

By Conor Feehly A decade ago, when I was starting my first year of university in New Zealand, I attended a stage hypnosis show. It was one of a number of events the university offered to incoming students during orientation week. From the stage of a campus auditorium, the hypnotist-for-hire asked an audience of some 200 students to close their eyes and listen to his voice. Then he directed us to clasp our hands tightly together, and to imagine an invisible thread wrapping around them—over and under, over and under—until it was impossible to pull them apart. After a few minutes of this, he told us to try to separate our hands. Those who could not, he said, should come on down to the stage. I instantly pulled my hands apart, but to my surprise, a close friend sitting next to me made his way to the front of the auditorium with roughly 20 others from the audience. Once on stage, the hypnotist tried to bring them deeper into a hypnotic trance, directing them to focus on his calm, authoritative voice. He then asked a few of them to role-play scenarios for our entertainment: a supermarket checkout clerk ringing up shopping items, a lifeguard scanning for lives to save. After a short time, I saw the hypnotist whisper something into the ear of my friend. He sheepishly made his way back to the seat next to me. “What did he say to you?” I asked. He replied, “I can tell you’re acting, mate, get off the stage.” In the more than 200 years since the practice of contemporary hypnosis was described by German physician Franz Mesmer, public perception of it has see-sawed between skepticism and credulity. Today hypnotherapy is used to provide therapeutic remedy for depression, pain, substance use disorders, and certain traumas, uses that are supported to a certain extent by research evidence. But many still consider hypnosis more of a cheap magician’s trick than legitimate clinical medicine. © 2024 NautilusNext Inc.

Keyword: Attention
Link ID: 29094 - Posted: 01.13.2024

By Regina G. Barber Human brains aren't built to comprehend large numbers, like the national debt or how much to save for retirement. But with a few tools — analogies, metaphors and visualizations — we can get better at it. Imagine a horizontal line. The very left is marked one thousand and the very right is marked one billion. On this line, where would you add a marker to represent one million? If you said somewhere in the middle, you answered the same as the roughly 50 percent of people who have done this exercise in a number line study. But the answer is actually much closer to one thousand since there are one thousand millions in one billion. This error makes sense because "our human brains are pretty bad at comprehending large numbers," says Elizabeth Toomarian, an educational neuroscientist at Stanford University. She studies how the brain makes sense of numbers. Or doesn't. "Our brains are evolutionarily very old and we are pushing them to do things that we've only just recently conceptualized," says Toomarian. Instead, the human brain is built to understand how much of something is in its environment. For example, which bush has more berries or how many predators are in that clearing? But comprehending the national debt or imagining the size of our universe? "We certainly can use our brains in that way, but we're recycling these sort of evolutionarily old brain architectures to do something really new," she says. In other words, it's not our fault that we have trouble wrapping our heads around big numbers. © 2024 npr
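
The arithmetic behind the correct answer is easy to verify: on a linear line from one thousand to one billion, one million sits about 0.1 percent of the way along, while the intuitive “middle” answer is exactly right on a logarithmic line. A quick check:

```python
import math

# On a linear number line from one thousand to one billion, where is one million?
lo, hi, x = 1_000, 1_000_000_000, 1_000_000
linear_position = (x - lo) / (hi - lo)
print(f"linear: {linear_position:.4%} of the way along")   # about 0.1%

# The intuitive "middle" answer is correct on a logarithmic line, where each
# step multiplies by 1,000: thousand -> million -> billion.
log_position = (math.log10(x) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))
print(f"log scale: {log_position:.0%} of the way along")   # 50%
```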

Keyword: Attention
Link ID: 29074 - Posted: 01.03.2024

By Henkjan Honing In 2009, my research group found that newborns possess the ability to discern a regular pulse—the beat—in music. It’s a skill that might seem trivial to most of us but that’s fundamental to the creation and appreciation of music. The discovery sparked a profound curiosity in me, leading to an exploration of the biological underpinnings of our innate capacity for music, commonly referred to as “musicality.” In a nutshell, the experiment involved playing drum rhythms, occasionally omitting a beat, and observing the newborns’ responses. Astonishingly, these tiny participants displayed an anticipation of the missing beat, as their brains exhibited a distinct spike, signaling a violation of their expectations when a note was omitted. Yet, as with any discovery, skepticism emerged (as it should). Some colleagues challenged our interpretation of the results, suggesting alternate explanations rooted in the acoustic nature of the stimuli we employed. Others argued that the observed reactions were a result of statistical learning, questioning the validity of beat perception being a separate mechanism essential to our musical capacity. Infants actively engage in statistical learning as they acquire a new language, enabling them to grasp elements such as word order and common accent structures in their native language. Why would music perception be any different? To address these challenges, in 2015, our group decided to revisit and overhaul our earlier beat perception study, expanding its scope, method and scale, and, once more, decided to include, alongside newborns, adults (musicians and non-musicians) and macaque monkeys. The results, recently published in Cognition, confirm that beat perception is a distinct mechanism, separate from statistical learning. The study provides converging evidence on newborns’ beat perception capabilities. In other words, the study was not simply a replication but utilized an alternative paradigm leading to the same conclusion. © 2023 NautilusNext Inc., All rights reserved.
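
The omitted-beat design described here is a variant of an oddball paradigm: a steady, isochronous sequence in which a beat is occasionally replaced by silence. A minimal sketch of how such a stimulus sequence might be generated (NumPy assumed; the sequence length and omission rate are illustrative, not the study's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Isochronous drum sequence: 1 = hit, 0 = silence, at a fixed tempo.
n_beats = 64
sequence = np.ones(n_beats, dtype=int)

# Omit a small number of beats at random, sparing the opening beats so a
# sense of the pulse is established before any "violation" occurs. The brain's
# response to these silent gaps is what the newborn experiment measured.
candidates = np.arange(8, n_beats)
omitted = rng.choice(candidates, size=6, replace=False)
sequence[omitted] = 0

print("".join("x" if s else "." for s in sequence))
print("omissions at positions:", sorted(omitted.tolist()))
```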

Keyword: Hearing; Language
Link ID: 29067 - Posted: 12.27.2023

By Mariana Lenharo Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question. In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use? Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes. “With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them. The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany. © 2023 Springer Nature Limited

Keyword: Consciousness; Robotics
Link ID: 29065 - Posted: 12.27.2023

By Ann Gibbons Louise hadn’t seen her sister or nephew for 26 years. Yet the moment she spotted them on a computer screen, she recognized them, staring hard at their faces. The feat might have been impressive enough for a human, but Louise is a bonobo—one who had spent most of her life at a separate sanctuary from these relatives. The discovery, published today in the Proceedings of the National Academy of Sciences, reveals that our closest primate cousins can remember the faces of friends and family for years, and sometimes even decades. The study, experts say, shows that the capability for long-term social memory is not unique to people, as was long believed. “It’s a remarkable finding,” says Frans de Waal, a primatologist at Emory University who was not involved with the work. “I’m not even sure we humans remember most individuals we haven’t seen for 2 decades.” The research, he says, raises the possibility that other animals can also do this and may remember far more than we give them credit for. Trying to figure out whether nonhuman primates remember a face isn’t simple. You can’t just ask them. So in the new study, comparative psychologist Christopher Krupenye at Johns Hopkins University and colleagues used eye trackers, infrared cameras that noninvasively map a subject’s gaze as they look at images of people or objects. The scientists worked with 26 chimpanzees and bonobos living in three zoos or sanctuaries in Europe and Japan. The team showed the animals photos of the faces of two apes placed side by side on the screen at the same time for 3 seconds. Some images were of complete strangers; some were of close friends, foes, or family members who had once lived in their same social groups, but whom they hadn’t seen in years.

Keyword: Attention; Learning & Memory
Link ID: 29058 - Posted: 12.19.2023

By Jaimie Seaton It’s not uncommon for Veronica Smith to be looking at her partner’s face when suddenly she sees his features changing—his eyes moving closer together and then farther apart, his jawline getting wider and narrower, and his skin moving and shimmering. Smith, age 32, has experienced this phenomenon when looking at faces since she was four or five years old, and while it’s intermittent when she’s viewing another person’s face, it’s more constant when she views her own. “I almost always experience it when I look at my own face in the mirror, which makes it really hard to get ready because I’ll think that I look weird,” Smith explains. “I can more easily tell that I’m experiencing distortions when I’m looking at other people because I know what they look like.” Smith has a rare condition called prosopometamorphopsia (PMO), in which faces appear distorted in shape, texture, position or color. (PMO is related to Alice in Wonderland syndrome, or AIWS, which distorts the size perception of objects or one’s own body.) PMO has fascinated many scientists. The late neurologist and writer Oliver Sacks co-wrote a paper on the condition that was published in 2014, the year before he died. Brad Duchaine, a professor of psychological and brain sciences at Dartmouth College, explains that some people with it see distortions that affect the whole face (bilateral PMO) while others see only the left or right half of a face as distorted (hemi-PMO). “Not surprisingly, people with PMO find the distortions extremely distressing. Over the last century, approximately 75 cases have been reported in the literature. However, little is known about the condition because cases with face distortions have usually been documented by neurologists who don’t have expertise in visual neuroscience or the time to study the cases in depth,” Duchaine says. For 25 years Duchaine’s work has focused on prosopagnosia (face blindness), but after co-authoring a study on hemi-PMO that was published in 2020, Duchaine shifted much of his lab’s work to PMO. © 2023 SCIENTIFIC AMERICAN,

Keyword: Attention; Vision
Link ID: 29051 - Posted: 12.16.2023

By Oshan Jarow Sometimes when I’m looking out across the northern meadow of Brooklyn’s Prospect Park, or even the concrete parking lot outside my office window, I wonder if someone like Shakespeare or Emily Dickinson could have taken in the same view and seen more. I don’t mean making out blurry details or more objects in the scene. But through the lens of their minds, could they encounter the exact same world as me and yet have a richer experience? One way to answer that question, at least as a thought experiment, could be to compare the electrical activity inside our brains while gazing out upon the same scene, and running some statistical analysis designed to actually tell us whose brain activity indicates more richness. But that’s just a loopy thought experiment, right? Not exactly. One of the newest frontiers in the science of the mind is the attempt to measure consciousness’s “complexity,” or how diverse and integrated electrical activity is across the brain. Philosophers and neuroscientists alike hypothesize that more complex brain activity signifies “richer” experiences. The idea of measuring complexity stems from information theory — a mathematical approach to understanding how information is stored, communicated, and processed —which doesn’t provide wonderfully intuitive examples of what more richness actually means. Unless you’re a computer person. “If you tried to upload the content onto a hard drive, it’s how much memory you’d need to be able to store the experience you’re having,” Adam Barrett, a professor of machine learning and data science at the University of Sussex, told me. Another approach to understanding richness is to look at how it changes in different mental states. Recent studies have found that measures of complexity are lowest in patients under general anesthesia, higher in ordinary wakefulness, and higher still in psychedelic trips, which can notoriously turn even the most mundane experiences — say, my view of the parking lot outside my office window — into profound and meaningful encounters.
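
Barrett's hard-drive analogy maps onto compressibility: the more memory a signal needs after compression, the more complex it is. One widely used proxy in this literature is Lempel-Ziv complexity computed on a binarized brain signal, where diverse, noise-like activity yields many distinct phrases and flat or rhythmic activity yields few. A simple sketch (NumPy assumed; this is a simplified LZ76-style count on synthetic stand-ins for recordings, not a published pipeline):

```python
import numpy as np

def lempel_ziv_complexity(bits: str) -> int:
    """Count the distinct phrases in a greedy left-to-right parse,
    a simplified LZ76-style measure of sequence diversity."""
    phrases, i = set(), 0
    while i < len(bits):
        j = i + 1
        # grow the current phrase until it is one we have not seen before
        while j < len(bits) and bits[i:j] in phrases:
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

rng = np.random.default_rng(4)
n = 2000

# Three toy "signals", binarized around their median as is commonly done
# with EEG/MEG before computing complexity.
signals = {
    "flat": np.zeros(n),                              # constant, minimal diversity
    "rhythm": np.sin(np.linspace(0, 40 * np.pi, n)),  # regular, low complexity
    "noise": rng.normal(size=n),                      # diverse, high complexity
}
for name, sig in signals.items():
    med = np.median(sig)
    bits = "".join("1" if v > med else "0" for v in sig)
    print(f"{name}: LZ complexity = {lempel_ziv_complexity(bits)}")
```

On this kind of measure, the ordering matches the studies the article cites: anesthesia-like regular or suppressed activity scores low, waking activity higher, and more diverse activity higher still.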

Keyword: Consciousness
Link ID: 29049 - Posted: 12.16.2023

By Amitha Kalaichandran In May, I was invited to take part in a survey by the National Academies of Sciences, Engineering, and Medicine to better delineate how long Covid is described and diagnosed as part of The National Research Action Plan on Long Covid. The survey had several questions around definitions and criteria to include, such as the “brain fog” often experienced by those with long Covid. My intuition was piqued, and I began to wonder about the similarities between these neurological symptoms and those experienced by people with attention-deficit/hyperactivity disorder, or ADHD. As a medical journalist with clinical and epidemiological experience, I found the possible connection and its implications impossible to ignore. We know that three years of potential exposure to SARS-CoV-2, in combination with the shift in social patterns (including work-from-home and social isolation), has impacted several aspects of neurocognition, as detailed in a recent report from the Substance Abuse and Mental Health Services Administration. A 2021 systematic review found persistent neuropsychiatric symptoms in Covid-19 survivors, and a 2021 paper in the journal JAMA Network Open found that executive functioning, processing speed, memory, and recall were impacted in patients hospitalized with Covid-19. Long Covid may indeed be linked to developing chronic neurocognitive issues, and even dementia may be accelerated. The virus might impact the frontal lobe, the area that governs executive function — which involves how we make decisions and plan, use our working memory, and control impulses. In October, a paper in Cell reported that long Covid brain fog could be traced to serotonin depletion driven by immune system proteins called viral-associated interferons. Similarly, the symptoms of ADHD are believed to be rooted structurally in the frontal lobe and possibly in a naturally low level of the neurotransmitter dopamine, with contributions from norepinephrine, serotonin, and GABA. This helps explain why people with ADHD, who experience inattention, hyperactivity, and impulsivity, among other symptoms, may seek higher levels of stimulation: to activate the release of dopamine. However, a deficit in serotonin can also trigger ADHD. The same neurotransmitter, when depleted, may be responsible for brain fog in long Covid.

Keyword: ADHD
Link ID: 29038 - Posted: 12.09.2023

By Amanda Gefter On a February morning in 1935, a disoriented homing pigeon flew into the open window of an unoccupied room at the Hotel New Yorker. It had a band around its leg, but where it came from, or was meant to be headed, no one could say. While management debated what to do, a maid rushed to the 33rd floor and knocked at the door of the hotel’s most infamous denizen: Nikola Tesla. The 78-year-old inventor quickly volunteered to take in the homeless pigeon. “Dr. Tesla … dropped work on a new electrical project, lest his charge require some little attention,” reported The New York Times. “The man who recently announced the discovery of an electrical death-beam, powerful enough to destroy 10,000 airplanes at a swoop, carefully spread towels on his window ledge and set down a little cup of seed.” Nikola Tesla—the Serbian-American scientist famous for designing the alternating current motor and the Tesla coil—had, for years, regularly been spotted skulking through the nighttime streets of midtown Manhattan, feeding the birds at all hours. In the dark, he’d sound a low whistle, and from the gloom, hordes of pigeons would flock to the old man, perching on his outstretched arms. He was known to keep baskets in his room as nests, along with caches of homemade seed mix, and to leave his windows perpetually open so the birds could come and go. Once, he was arrested for trying to lasso an injured homing pigeon in the plaza of St. Patrick’s Cathedral, and, from his holding cell in the 34th Street precinct, had to convince the officers that he was—or had been—one of the most famous inventors in the world. It had been years since he’d produced a successful invention. He was gaunt and broke—living off of debt and good graces—having been kicked out of a string of hotels, a trail of pigeon droppings and unpaid rent in his wake. He had no family or close friends, except for the birds. © 2023 NautilusNext Inc.,

Keyword: Consciousness
Link ID: 29034 - Posted: 12.09.2023