Chapter 17. Learning and Memory



Links 1 - 20 of 1882

Nancy S. Jecker & Andrew Ko Putting a computer inside someone’s brain used to feel like the edge of science fiction. Today, it’s a reality. Academic and commercial groups are testing “brain-computer interface” devices to enable people with disabilities to function more independently. Yet Elon Musk’s company, Neuralink, has put this technology front and center in debates about safety, ethics and neuroscience. In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience. How does a brain chip work? Neuralink’s coin-size device, called N1, is designed to enable patients to carry out actions just by concentrating on them, without moving their bodies. Subjects in the company’s PRIME study – short for Precise Robotically Implanted Brain-Computer Interface – undergo surgery to place the device in a part of the brain that controls movement. The chip records and processes the brain’s electrical activity, then transmits this data to an external device, such as a phone or computer. The external device “decodes” the patient’s brain activity, learning to associate certain patterns with the patient’s goal: moving a computer cursor up a screen, for example. Over time, the software can recognize a pattern of neural firing that consistently occurs while the participant is imagining that task, and then execute the task for the person. © 2010–2024, The Conversation US, Inc.
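The “decoding” step described above, in which software learns to associate neural firing patterns with an intended action, can be sketched as a toy nearest-centroid classifier. Everything below is invented for illustration (synthetic firing rates, two intentions, 50 recording channels); it is not Neuralink's actual algorithm, whose details are not public.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 50-channel firing-rate vectors recorded while the
# user imagines "cursor up" vs. "cursor down" (synthetic, for illustration).
up_trials = rng.normal(loc=1.0, scale=0.5, size=(40, 50))
down_trials = rng.normal(loc=-1.0, scale=0.5, size=(40, 50))

# "Training" = learn the average neural pattern for each intention.
centroids = {
    "up": up_trials.mean(axis=0),
    "down": down_trials.mean(axis=0),
}

def decode(firing_rates):
    """Map a new firing-rate vector to the closest learned pattern."""
    return min(centroids, key=lambda k: np.linalg.norm(firing_rates - centroids[k]))

# A new trial resembling the "up" pattern decodes to "up".
print(decode(np.full(50, 1.0)))  # -> up
```

In a real interface the decoder is retrained continuously as the user practices, which is what "over time, the software can recognize a pattern" refers to.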

Keyword: Robotics; Learning & Memory
Link ID: 29151 - Posted: 02.20.2024

By David Marchese Our memories form the bedrock of who we are. Those recollections, in turn, are built on one very simple assumption: This happened. But things are not quite so simple. “We update our memories through the act of remembering,” says Charan Ranganath, a professor of psychology and neuroscience at the University of California, Davis, and the author of the illuminating new book “Why We Remember.” “So it creates all these weird biases and infiltrates our decision making. It affects our sense of who we are.” Rather than being photo-accurate repositories of past experience, Ranganath argues, our memories function more like active interpreters, working to help us navigate the present and future. The implication is that who we are, and the memories we draw on to determine that, are far less fixed than you might think. “Our identities,” Ranganath says, “are built on shifting sand.” What is the most common misconception about memory? People believe that memory should be effortless, but their expectations for how much they should remember are totally out of whack with how much they’re capable of remembering. Another misconception is that memory is supposed to be an archive of the past. We expect that we should be able to replay the past like a movie in our heads. The problem with that assumption is that we don’t replay the past as it happened; we do it through a lens of interpretation and imagination. How much are we capable of remembering, from a scientific standpoint? It’s exceptionally hard to answer the question of how much we can remember. What I’ll say is that we can remember an extraordinary amount of detail that would make you feel at times as if you have a photographic memory. We’re capable of these extraordinary feats. I would argue that we’re all everyday-memory experts, because we have this exceptional semantic memory, the memory for facts and knowledge about the world, which is the scaffold for episodic memory.
I know it sounds squirmy to say, “Well, I can’t answer the question of how much we remember,” but I don’t want readers to walk away thinking memory is all made up. © 2024 The New York Times Company

Keyword: Learning & Memory
Link ID: 29134 - Posted: 02.06.2024

By Sabrina Malhi Researchers have found a possible link between the common hormone disorder PCOS and cognitive decline later in life. PCOS, which stands for polycystic ovary syndrome, is the most common endocrine disorder among women ages 15 to 44. However, it is often underdiagnosed because many of its symptoms, including abnormal menstrual cycles and excess hair, can be attributed to other causes. The syndrome was first described in 1935 by American gynecologists Irving F. Stein and Michael L. Leventhal. They published a paper documenting a group of women with lack of periods, excess body hair and enlarged ovaries with multiple cysts. Their work helped identify and characterize PCOS as it is known today. Health experts hypothesize that genetic factors could contribute to the development of the condition, but the exact causes are still unknown. Here’s what to know about PCOS and its potential link to cognitive health. PCOS is a chronic hormonal disorder characterized by overproduction of androgens, which are typically considered male hormones. High androgen levels can lead to irregular menstrual cycles and fertility issues when excessively produced in women. In the United States, 6 to 12 percent of people assigned female at birth who are of reproductive age are affected by PCOS, according to data from the Centers for Disease Control and Prevention. The condition is associated with an increased risk of obesity, high blood pressure, high cholesterol and endometrial cancer. PCOS is also often linked to insulin resistance, which can result in elevated blood sugar levels and an escalated risk of Type 2 diabetes. The condition can contribute to various metabolic issues, including high blood pressure, excess abdominal fat, and abnormal cholesterol or triglyceride levels. People with PCOS face an elevated risk of developing cardiovascular problems, such as high blood pressure, high cholesterol levels and an increased risk of heart disease. 
A recent study in the journal Neurology found that people with PCOS scored lower than normal on a suite of cognitive tests.

Keyword: Hormones & Behavior; Learning & Memory
Link ID: 29132 - Posted: 02.06.2024

By Ben Guarino Billionaire technologist Elon Musk announced this week that his company Neuralink has implanted its brain-computer interface into a human for the first time. The recipient was “recovering well,” Musk wrote on his social media platform X (formerly Twitter) on Monday evening, adding that initial results showed “promising neuron spike detection”—a reference to brain cells’ electrical activity. Each wireless Neuralink device contains a chip and electrode arrays of more than 1,000 superthin, flexible conductors that a surgical robot threads into the cerebral cortex. There the electrodes are designed to register thoughts related to motion. In Musk’s vision, an app will eventually translate these signals to move a cursor or produce text—in short, it will enable computer control by thinking. “Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That is the goal,” Musk wrote of the first Neuralink product, which he said is named Telepathy. The U.S. Food and Drug Administration had approved human clinical trials for Neuralink in May 2023. And last September the company announced it was opening enrollment in its first study to people with quadriplegia. Monday’s announcement did not take neuroscientists by surprise. Musk, the world’s richest man, “said he was going to do it,” says John Donoghue, an expert in brain-computer interfaces at Brown University. “He had done the preliminary work, built on the shoulders of others, including what we did starting in the early 2000s.” Neuralink’s original ambitions, which Musk outlined when he founded the company in 2016, included meshing human brains with artificial intelligence. Its more immediate aims seem in line with the neural keyboards and other devices that people with paralysis already use to operate computers. 
The methods and speed with which Neuralink pursued those goals, however, have resulted in federal investigations into dead study animals and the transportation of hazardous material. © 2024 SCIENTIFIC AMERICAN

Keyword: Robotics
Link ID: 29124 - Posted: 01.31.2024

A new study shows male zebra finches must sing every day to keep their vocal muscles in shape. Females prefer the songs of males that did their daily vocal workout. ARI SHAPIRO, HOST: Why do songbirds sing so much? Well, a new study suggests they have to, to stay in shape. Here's NPR's Ari Daniel. ARI DANIEL, BYLINE: A few years ago, I was out at dawn in South Carolina low country, a mix of swamp and trees draped in Spanish moss. (SOUNDBITE OF BIRDS CHIRPING) DANIEL: The sound of birdsong filled the air. It's the same in lots of places. Once the light of day switches on, songbirds launch their serenade. IRIS ADAM: I mean, why birds sing is relatively well-answered. DANIEL: Iris Adam is a behavioral neuroscientist at the University of Southern Denmark. ADAM: For many songbirds, males sing to impress a female and attract them as mate. And also, birds sing to defend their territory. DANIEL: But Adam says these reasons don't explain why songbirds sing so darn much. ADAM: There's an insane drive to sing. DANIEL: For some, it's hours every day. That's a lot of energy. Plus, singing can be dangerous. ADAM: As soon as you sing, you reveal yourself - like, where you are, that you even exist, where your territory is. All of that immediately is out in the open for predators, for everybody. DANIEL: Why take that risk? Adam wondered whether the answer might lie in the muscles that produce birdsong and if those muscles require regular exercise. So she designed a series of experiments on zebra finches, little Australian songbirds with striped heads and a bloom of orange on their cheeks. One of Adam's first experiments involved taking males and severing the connection between their brains and their singing muscles. ADAM: Already after two days, they had lost some of their performance. And after three weeks, they were back to the same level when they were juveniles and never had sung before.
DANIEL: Next, she left the finches intact but prevented them from singing for a week by keeping them in the dark almost around the clock. ADAM: The first two or three days, it's quite easy. But the longer the experiment goes, the more they are like, I need to sing. And so then you need to tell them, like, stop. You can't sing. DANIEL: After a week, the birds' singing muscles lost half their strength. But does that impact what the resulting song sounds like? Here's a male before the seven days of darkness. © 2023 npr

Keyword: Animal Communication; Language
Link ID: 29042 - Posted: 12.13.2023

By Ellen Barry At the root of post-traumatic stress disorder, or PTSD, is a memory that cannot be controlled. It may intrude on everyday activity, thrusting a person into the middle of a horrifying event, or surface as night terrors or flashbacks. Decades of treatment of military veterans and sexual assault survivors have left little doubt that traumatic memories function differently from other memories. A group of researchers at Yale University and the Icahn School of Medicine at Mount Sinai set out to find empirical evidence of those differences. The team conducted brain scans of 28 people with PTSD while they listened to recorded narrations of their own memories. Some of the recorded memories were neutral, some were simply “sad,” and some were traumatic. The brain scans found clear differences, the researchers reported in a paper published on Thursday in the journal Nature Neuroscience. The people listening to the sad memories, which often involved the death of a family member, showed consistently high engagement of the hippocampus, part of the brain that organizes and contextualizes memories. When the same people listened to their traumatic memories — of sexual assaults, fires, school shootings and terrorist attacks — the hippocampus was not involved. “What it tells us is that the brain is in a different state in the two memories,” said Daniela Schiller, a neuroscientist at the Icahn School of Medicine at Mount Sinai and one of the authors of the study. She noted that therapies for PTSD often sought to help people organize their memory so they can view it as distant from the present. “Now we find something that potentially can explain it in the brain,” she said. “The brain doesn’t look like it’s in a state of memory; it looks like it is a state of present experience.” Indeed, the authors conclude in the paper, “traumatic memories are not experienced as …” © 2023 The New York Times Company

Keyword: Learning & Memory; Stress
Link ID: 29030 - Posted: 12.02.2023

By John Krakauer & Tamar Makin The human brain’s ability to adapt and change, known as neuroplasticity, has long captivated both the scientific community and the public imagination. It’s a concept that brings hope and fascination, especially when we hear extraordinary stories of, for example, blind individuals developing heightened senses that enable them to navigate through a cluttered room purely based on echolocation or stroke survivors miraculously regaining motor abilities once thought lost. For years, the notion that neurological challenges such as blindness, deafness, amputation or stroke lead to dramatic and significant changes in brain function has been widely accepted. These narratives paint a picture of a highly malleable brain that is capable of dramatic reorganization to compensate for lost functions. It’s an appealing notion: the brain, in response to injury or deficit, unlocks untapped potentials, rewires itself to achieve new capabilities and self-repurposes its regions to achieve new functions. This idea can also be linked with the widespread, though inherently false, myth that we only use 10 percent of our brain, suggesting that we have extensive neural reserves to lean on in times of need. But how accurate is this portrayal of the brain’s adaptive abilities to reorganize? Are we truly able to tap into reserves of unused brain potential following an injury, or have these captivating stories led to a misunderstanding of the brain’s true plastic nature? In a paper we wrote for the journal eLife, we delved into the heart of these questions, analyzing classical studies and reevaluating long-held beliefs about cortical reorganization and neuroplasticity. What we found offers a compelling new perspective on how the brain adapts to change and challenges some of the popularized notions about its flexible capacity for recovery. 
The roots of this fascination can be traced back to neuroscientist Michael Merzenich’s pioneering work, and it was popularized through books such as Norman Doidge’s The Brain That Changes Itself. Merzenich’s insights were built on the influential studies of Nobel Prize–winning neuroscientists David Hubel and Torsten Wiesel, who explored ocular dominance in kittens. © 2023 SCIENTIFIC AMERICAN,

Keyword: Learning & Memory; Regeneration
Link ID: 29019 - Posted: 11.22.2023

By Carl Zimmer If a troop of baboons encounters another troop on the savanna, they may keep a respectful distance or they may get into a fight. But human groups often do something else: They cooperate. Tribes of hunter-gatherers regularly come together for communal hunts or to form large-scale alliances. Villages and towns give rise to nations. Networks of trade span the planet. Human cooperation is so striking that anthropologists have long considered it a hallmark of our species. They have speculated that it emerged thanks to the evolution of our powerful brains, which enable us to use language, establish cultural traditions and perform other complex behaviors. But a new study, published in Science on Thursday, throws that uniqueness into doubt. It turns out that two groups of apes in Africa have regularly mingled and cooperated with each other for years. “To have extended, friendly, cooperative relationships between members of other groups who have no kinship ties is really quite extraordinary,” said Joan Silk, a primatologist at Arizona State University who was not involved in the study. The new research comes from long-term observations of bonobos, an ape species that lives in the forests of the Democratic Republic of Congo. A century ago, primatologists thought bonobos were a slender subspecies of chimpanzee. But the two species are genetically distinct and behave in some remarkably different ways. Among chimpanzees, males hold a dominant place in society. They can be extremely violent, even killing babies. In bonobo groups, however, females dominate, and males have never been observed to commit infanticide. Bonobos often defuse conflict with sex, a strategy that primatologists have not observed among chimpanzees. Scientists made most of their early observations of bonobos in zoos.
But in recent years they’ve conducted long-term studies of the apes in the wild. © 2023 The New York Times Company

Keyword: Evolution; Aggression
Link ID: 29011 - Posted: 11.18.2023

Max Kozlov Researchers have sifted through genomes from thousands of individuals in an effort to identify genes linked to Alzheimer’s disease. But these scientists have faced a serious obstacle: it’s hard to know for certain which of those people have Alzheimer’s. There’s no foolproof blood test for the disease, and dementia, a key symptom of Alzheimer’s, is also caused by other disorders. Early-stage Alzheimer’s might cause no symptoms at all. Now, researchers have developed artificial intelligence (AI)-based approaches that could help. One algorithm efficiently sorts through large numbers of brain images and picks out those that include characteristics of Alzheimer’s. A second machine-learning method identifies important structural features of the brain — an effort that could eventually help scientists to spot new signs of Alzheimer’s in brain scans. The goal is to use people’s brain images as visual ‘biomarkers’ of Alzheimer’s. Applying the method to large databases that also include medical information and genetic data, such as the UK Biobank, could allow scientists to pinpoint genes that contribute to the disease. In turn, this work could aid the creation of treatments and of models that predict who’s at risk of developing the disease. Combining genomics, brain imaging and AI is allowing researchers to “find brain measures that are tightly linked to a genomic driver”, says Paul Thompson, a neuroscientist at the University of Southern California in Los Angeles, who is spearheading efforts to develop these algorithms. Thompson and others described the new AI techniques on 4 November at the annual conference of the American Society of Human Genetics in Washington DC. © 2023 Springer Nature Limited

Keyword: Alzheimers; Robotics
Link ID: 29004 - Posted: 11.13.2023

By Catherine Offord Close your eyes and picture yourself running an errand across town. You can probably imagine the turns you’d need to take and the landmarks you’d encounter. This ability to conjure such scenarios in our minds is thought to be crucial to humans’ capacity to plan ahead. But it may not be uniquely human: Rats also seem to be able to “imagine” moving through mental environments, researchers report today in Science. Rodents trained to navigate within a virtual arena could, in return for a reward, activate the same neural patterns they’d shown while navigating—even when they were standing still. That suggests rodents can voluntarily access mental maps of places they’ve previously visited. “We know humans carry around inside their heads representations of all kinds of spaces: rooms in your house, your friends’ houses, shops, libraries, neighborhoods,” says Sean Polyn, a psychologist at Vanderbilt University who was not involved in the research. “Just by the simple act of reminiscing, we can place ourselves in these spaces—to think that we’ve got an animal analog of that very human imaginative act is very impressive.” Researchers think humans’ mental maps are encoded in the hippocampus, a brain region involved in memory. As we move through an environment, cells in this region fire in particular patterns depending on our location. When we later revisit—or simply think about visiting—those locations, the same hippocampal signatures are activated. Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai, who conducted the work while a graduate student and later a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. 
In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.

Keyword: Learning & Memory; Attention
Link ID: 28989 - Posted: 11.04.2023

By Jake Buehler A fruit bat hanging in the corner of a cave stirs; it is ready to move. It scans the space to look for a free perch and then takes flight, adjusting its membranous wings to angle an approach to a spot next to one of its fuzzy fellows. As it does so, neurological data lifted from its brain is broadcast to sensors installed in the cave’s walls. This is no balmy cave along the Mediterranean Sea. The group of Egyptian fruit bats is in Berkeley, California, navigating an artificial cave in a laboratory that researchers have set up to study the inner workings of the animals’ minds. The researchers had an idea: that as a bat navigates its physical environment, it’s also navigating a network of social relationships. They wanted to know whether the bats use the same or different parts of their brain to map these intersecting realities. In a new study published in Nature in August, the scientists revealed that these maps overlap. The brain cells informing a bat of its own location also encode details about other bats nearby — not only their location, but also their identities. The findings raise the intriguing possibility that evolution can program those neurons for multiple purposes to serve the needs of different species. The neurons in question are located in the hippocampus, a structure deep within the mammalian brain that is involved in the creation of long-term memories. A special population of hippocampal neurons, known as place cells, are thought to create an internal navigation system. First identified in the rat hippocampus in 1971 by the neuroscientist John O’Keefe, place cells fire when an animal is in a particular location; different place cells encode different places. This system helps animals determine where they are, where they need to go and how to get from here to there. 
In 2014, O’Keefe was awarded the Nobel Prize for his discovery of place cells, and over the last several decades they have been identified in multiple primate species, including humans. However, moving from place to place isn’t the only way an animal can experience a change in its surroundings. In your home, the walls and furniture mostly stay the same from day to day, said Michael Yartsev, who studies the neural basis of natural behavior at the University of California, Berkeley and co-led the new work. But the social context of your living space could change quite regularly. © 2023 An editorially independent publication supported by the Simons Foundation.
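The place-cell idea described above, where each hippocampal cell fires most strongly near its preferred location so that position can be read out from which cell is most active, can be sketched with a minimal one-dimensional model. The preferred positions and Gaussian tuning widths below are invented for illustration; they are not data from the bat study.

```python
import numpy as np

# Five toy "place cells", each with a preferred position on a track
# running from 0 to 1 (assumed values, illustration only).
preferred = np.array([0.1, 0.3, 0.5, 0.7, 0.9])

def firing_rates(position, width=0.1):
    """Population response: each cell fires with Gaussian tuning
    centered on its preferred position."""
    return np.exp(-((position - preferred) ** 2) / (2 * width ** 2))

def decode_position(rates):
    """Read out location as the preferred position of the most active cell."""
    return preferred[np.argmax(rates)]

print(decode_position(firing_rates(0.52)))  # -> 0.5
```

The bat result reported above amounts to showing that the same population also carries extra information (the identity and location of other bats), beyond the animal's own decoded position.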

Keyword: Learning & Memory
Link ID: 28982 - Posted: 11.01.2023

Anil Oza Scientists once considered sleep to be like a shade getting drawn over a window between the brain and the outside world: when the shade is closed, the brain stops reacting to outside stimuli. A study published on 12 October in Nature Neuroscience suggests that there might be periods during sleep when that shade is partially open. Depending on what researchers said to them, participants in the study would either smile or frown on cue in certain phases of sleep. “You’re not supposed to be able to do stuff while you sleep,” says Delphine Oudiette, a cognitive scientist at the Paris Brain Institute in France and a co-author of the study. Historically, the definition of sleep is that consciousness of your environment halts, she adds. “It means you don’t react to the external world.” A few years ago, however, Oudiette began questioning this definition after she and her team conducted an experiment in which they were able to communicate with people who are aware that they are dreaming while they sleep — otherwise known as lucid dreamers. During these people’s dreams, experimenters were able to ask questions and get responses through eye and facial-muscle movements. Karen Konkoly, who was a co-author on that study and a cognitive scientist at Northwestern University in Evanston, Illinois, says that after that paper came out, “it was a big open question in our minds whether communication would be possible with non-lucid dreamers”. So Oudiette continued with the work. In her latest study, she and her colleagues observed 27 people with narcolepsy — characterized by daytime sleepiness and a high frequency of lucid dreams — and 22 people without the condition. While they were sleeping, participants were repeatedly asked to frown or smile. All of them responded accurately to at least 70% of these prompts. © 2023 Springer Nature Limited

Keyword: Sleep; Learning & Memory
Link ID: 28968 - Posted: 10.25.2023

By Benjamin Mueller Once their scalpels reach the edge of a brain tumor, surgeons are faced with an agonizing decision: cut away some healthy brain tissue to ensure the entire tumor is removed, or give the healthy tissue a wide berth and risk leaving some of the menacing cells behind. Now scientists in the Netherlands report using artificial intelligence to arm surgeons with knowledge about the tumor that may help them make that choice. The method, described in a study published on Wednesday in the journal Nature, involves a computer scanning segments of a tumor’s DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hourslong surgery, can help surgeons decide how aggressively to operate, the researchers said. In the future, the method may also help steer doctors toward treatments tailored for a specific subtype of tumor. “It’s imperative that the tumor subtype is known at the time of surgery,” said Jeroen de Ridder, an associate professor in the Center for Molecular Medicine at UMC Utrecht, a Dutch hospital, who helped lead the study. “What we have now uniquely enabled is to allow this very fine-grained, robust, detailed diagnosis to be performed already during the surgery.”
© 2023 The New York Times Company

Keyword: Robotics; Intelligence
Link ID: 28958 - Posted: 10.12.2023

By Stephanie Pappas If you’ve ever awoken from a vivid dream only to find that you can’t remember the details by the end of breakfast, you’re not alone. People forget most of the dreams they have—though it is possible to train yourself to remember more of them. Dreaming happens mostly (though not always exclusively) during rapid eye movement (REM) sleep. During this sleep stage, brain activity looks similar to that in a waking brain, with some very important differences. Key among them: during REM sleep, the areas of the brain that transfer memories into long-term storage—as well as the long-term storage areas themselves—are relatively deactivated, says Deirdre Barrett, a dream researcher at Harvard Medical School and author of the book The Committee of Sleep (Oneiroi Press, 2001). This may be a side effect of REM’s role in memory consolidation, according to a 2019 study on mice in the journal Science. Short-term memory areas are active during REM sleep, but those only hang on to memories for about 30 seconds. “You have to wake up from REM sleep, generally, to recall a dream,” Barrett says. If, instead, you pass into the next stage of sleep without rousing, that dream will never enter long-term memory. REM sleep occurs about every 90 minutes, and it lengthens as the night drags on. The first REM cycle of the night is typically just a few minutes long, but by the end of an eight-hour night of sleep, a person has typically been in the REM stage for a good 20 minutes, Barrett says. That’s why the strongest correlation between any life circumstance and your memory of dreams is the number of hours you’ve slept. If you sleep only six hours, you’re getting less than half of the dream time of an eight-hour night, she says. Those final hours of sleep are the most important for dreaming. And people tend to remember the last dream of the night—the one just before waking. © 2023 Scientific American
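The arithmetic behind Barrett's point, that REM periods come roughly every 90 minutes and lengthen through the night, so the last hours of sleep carry a disproportionate share of dream time, can be made concrete. The per-cycle REM durations below are invented round numbers for the sketch, not clinical values.

```python
# Assumed REM minutes in each successive ~90-minute sleep cycle:
# early cycles contain little REM, later cycles much more
# (illustrative round numbers, not clinical data).
rem_per_cycle = [2, 4, 8, 16, 32]

def rem_minutes(hours_slept):
    """Total REM minutes from the sleep cycles completed in a night."""
    completed = int(hours_slept * 60 // 90)
    return sum(rem_per_cycle[:completed])

print(rem_minutes(6))  # 4 completed cycles -> 30 minutes
print(rem_minutes(8))  # 5 completed cycles -> 62 minutes
```

With these assumed numbers, a six-hour night yields less than half the REM of an eight-hour night, matching the "less than half of the dream time" comparison above: cutting the night short removes exactly the cycles in which most dreaming happens.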

Keyword: Sleep; Learning & Memory
Link ID: 28939 - Posted: 10.03.2023

By Clay Risen Endel Tulving, whose insights into the structure of human memory and the way we recall the past revolutionized the field of cognitive psychology, died on Sept. 11 in Mississauga, Ontario. He was 96. His daughters, Linda Tulving and Elo Tulving-Blais, said his death, at an assisted living home, was caused by complications of a stroke. Until Dr. Tulving began his pathbreaking work in the 1960s, most cognitive psychologists were more interested in understanding how people learn things than in how they retain and recall them. When they did think about memory, they often depicted it as one giant cerebral warehouse, packed higgledy-piggledy, with only a vague conception of how we retrieved those items. This, they asserted, was the realm of “the mind,” an untestable, almost philosophical construct. Dr. Tulving, who spent most of his career at the University of Toronto, first made his name with a series of clever experiments and papers, demonstrating how the mind organizes memories and how it uses contextual cues to retrieve them. Forgetting, he posited, was less about information loss than it was about the lack of cues to retrieve it. He established his legacy with a chapter in the 1972 book “Organization of Memory,” which he edited with Wayne Donaldson. In that chapter, he argued for a taxonomy of memory types. He started with two: procedural memory, which is largely unconscious and involves things like how to walk or ride a bicycle, and declarative memory, which is conscious and discrete. © 2023 The New York Times Company

Keyword: Learning & Memory
Link ID: 28934 - Posted: 09.29.2023

By Veronique Greenwood In the dappled sunlit waters of Caribbean mangrove forests, tiny box jellyfish bob in and out of the shade. Box jellies are distinguished from true jellyfish in part by their complex visual system — the grape-size predators have 24 eyes. But like other jellyfish, they are brainless, controlling their cube-shaped bodies with a distributed network of neurons. That network, it turns out, is more sophisticated than you might assume. On Friday, researchers published a report in the journal Current Biology indicating that the box jellyfish species Tripedalia cystophora have the ability to learn. Because box jellyfish diverged from our part of the animal kingdom long ago, understanding their cognitive abilities could help scientists trace the evolution of learning. The tricky part about studying learning in box jellies was finding an everyday behavior that scientists could train the creatures to perform in the lab. Anders Garm, a biologist at the University of Copenhagen and an author of the new paper, said his team decided to focus on a swift about-face that box jellies execute when they are about to hit a mangrove root. These roots rise through the water like black towers, while the water around them appears pale by comparison. But the contrast between the two can change from day to day, as silt clouds the water and makes it more difficult to tell how far away a root is. How do box jellies tell when they are getting too close? “The hypothesis was, they need to learn this,” Dr. Garm said. “When they come back to these habitats, they have to learn, how is today’s water quality? How is the contrast changing today?” In the lab, researchers produced images of alternating dark and light stripes, representing the mangrove roots and water, and used them to line the insides of buckets about six inches wide. When the stripes were a stark black and white, representing optimum water clarity, box jellies never got close to the bucket walls. With less contrast between the stripes, however, box jellies immediately began to run into them. This was the scientists’ chance to see if they would learn. © 2023 The New York Times Company

Keyword: Learning & Memory; Evolution
Link ID: 28925 - Posted: 09.23.2023

By Jim Davies Think of what you want to eat for dinner this weekend. What popped into mind? Pizza? Sushi? Clam chowder? Why did those foods (or whatever foods you imagined) appear in your consciousness and not something else? Psychologists have long held that when we are making a decision about a particular category of thing, we tend to bring to mind items that are typical or common in our culture or everyday lives, or ones we value the most. On this view, whatever foods you conjured up are likely ones that you eat often, or love to eat. Sounds intuitive. But a recent paper published in Cognition suggests it’s more complicated than that. Tracey Mills, a research assistant working at MIT, led the study along with Jonathan Phillips, a cognitive scientist and philosopher at Dartmouth College. They put over 2,000 subjects, recruited online, through a series of seven experiments that allowed them to test a novel approach for understanding which ideas within a category will pop into our consciousness—and which won’t. In this case, they had subjects think about zoo animals, holidays, jobs, kitchen appliances, chain restaurants, sports, and vegetables. What they found is that what makes a particular thing come to mind—such as a lion when one is considering zoo animals—is determined not by how valuable or familiar it is, but by where it lies in a multidimensional idea grid that could be said to resemble a kind of word cloud. “Under the hypothesis we argue for,” Mills and Phillips write, “the process of calling members of a category to mind might be modeled as a search through feature space, weighted toward certain features that are relevant for that category.” Historical “value” just happens to be one dimension that is particularly relevant when one is talking about dinner, but is less relevant for categories such as zoo animals or, say, crimes, they write. © 2023 NautilusNext Inc., All rights reserved.

Keyword: Attention; Learning & Memory
Link ID: 28910 - Posted: 09.16.2023
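The “search through feature space, weighted toward certain features” idea from the Mills and Phillips study can be sketched as a toy model. Everything here is invented for illustration — the animals, the two features (size and zoo-typicality), and the weights are hypothetical stand-ins, not values from the paper, which used richer, data-derived feature spaces:

```python
# Toy sketch of weighted feature-space search for "what comes to mind."
# Each zoo animal gets two hypothetical features on a 0-1 scale:
# (physical size, typicality as a zoo animal). Values are illustrative only.
animals = {
    "lion":     (0.8, 0.9),
    "elephant": (1.0, 0.9),
    "sparrow":  (0.1, 0.1),
    "penguin":  (0.3, 0.7),
}

def recall_order(items, weights):
    """Rank category members by a weighted sum over feature dimensions,
    modeling which ones 'come to mind' first for a given category."""
    def score(feats):
        return sum(w * f for w, f in zip(weights, feats))
    return sorted(items, key=lambda name: score(items[name]), reverse=True)

# For "zoo animals," typicality is the category-relevant feature,
# so it gets most of the weight; size matters less.
print(recall_order(animals, weights=(0.2, 0.8)))
```

Changing the weight vector models the paper's point that different categories (dinner foods versus zoo animals versus crimes) emphasize different dimensions: for dinner, a “value” dimension would carry more weight, while for zoo animals it carries little.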

By Joanna Thompson Like many people, Mary Ann Raghanti enjoys potatoes loaded with butter. Unlike most people, however, she actually asked the question of why we love stuffing ourselves with fatty carbohydrates. Raghanti, a biological anthropologist at Kent State University, has researched the neurochemical mechanism behind that savory craving. As it turns out, a specific brain chemical may be one of the things that not only shaped our tendency to overindulge in food, alcohol and drugs but also helped the human brain evolve to be distinct from the brains of closely related species. A new study, led by Raghanti and published on September 11 in the Proceedings of the National Academy of Sciences USA, examined the activity of a particular neurotransmitter in a region of the brain that is associated with reward and motivation across several species of primates. The researchers found higher levels of that brain chemical—neuropeptide Y (NPY)—in humans, compared with our closest living relatives. That boost in the reward peptide could explain our love of high-fat foods, from pizza to poutine. The impulse to stuff ourselves with fats and sugars may have given our ancestors an evolutionary edge, allowing them to develop a larger and more complex brain. “I think this is a first bit of neurobiological insight into one of the most interesting things about us as a species,” says Robert Sapolsky, a neuroendocrinology researcher at Stanford University, who was not directly involved in the research but helped review the new paper. Neuropeptide Y is associated with “hedonic eating”—consuming food strictly to experience pleasure rather than to satisfy hunger. It drives individuals to seek out high-calorie foods, especially those rich in fat. Historically, though, NPY has been overlooked in favor of flashier “feel good” chemicals such as dopamine and serotonin. © 2023 Scientific American

Keyword: Obesity; Intelligence
Link ID: 28905 - Posted: 09.13.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid-1990s. All Rights Reserved © 2023

Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023
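The complementary-learning-systems idea above — a hippocampus that rapidly encodes new information alongside a neocortex that gradually integrates it — can be sketched with two exponential-moving-average learners that differ only in learning rate. The rates and the input stream are invented for illustration; this is a caricature of the theory, not the model from the Nature Neuroscience paper:

```python
# Minimal sketch of two complementary learning systems: a fast learner
# (hippocampus-like) and a slow learner (neocortex-like) both track the
# same stream of experiences. Learning rates are illustrative assumptions.

def run(observations, lr):
    """Exponential-moving-average learner; returns the estimate after
    each observation."""
    estimate, history = 0.0, []
    for x in observations:
        estimate += lr * (x - estimate)  # step toward the new observation
        history.append(estimate)
    return history

# A predictable pattern (five 1.0s), then a one-off surprise (0.0).
stream = [1.0] * 5 + [0.0]

fast = run(stream, lr=0.9)  # captures the one-off event almost entirely
slow = run(stream, lr=0.1)  # barely moves; keeps the stable generalization
```

The fast learner ends up dominated by the unpredictable final event, while the slow learner's estimate stays close to the regularity in the stream — echoing the theory's claim that one-off, less generalizable experiences belong in the rapid hippocampal store while predictable structure accumulates in the neocortex.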