Chapter 17. Learning and Memory

By Claudia López Lloreda For a neuroscientist, the opportunity to record single neurons in people doesn’t knock every day. It is so rare, in fact, that after 14 years of waiting by the door, Florian Mormann says he has recruited just 110 participants—all with intractable epilepsy. All participants had electrodes temporarily implanted in their brains to monitor their seizures. But the slow work to build this cohort is starting to pay off for Mormann, a group leader at the University of Bonn, and for other researchers taking a similar approach, according to a flurry of studies published in the past year. For instance, certain neurons selectively respond not only to particular scents but also to the words and images associated with them, Mormann and his colleagues reported in October. Other neurons help to encode stimuli, form memories and construct representations of the world, recent work from other teams reveals. Cortical neurons encode specific information about the phonetics of speech, two independent teams reported last year. Hippocampal cells contribute to working memory and map out time in novel ways, two other teams discovered last year, and some cells in the region encode information related to a person’s changing knowledge about the world, a study published in August found. These studies offer the chance to answer questions about human brain function that remain difficult to address with animal models, says Ziv Williams, associate professor of neurosurgery at Harvard Medical School, who led one of the teams that worked on speech phonetics. “Concept cells” such as those Mormann identified, or the “Jennifer Aniston” neurons famously described in a 2005 study, have proved elusive in the monkey brain, he notes by way of example. © 2025 Simons Foundation

Keyword: Attention; Learning & Memory
Link ID: 29709 - Posted: 03.19.2025

By Angie Voyles Askham Synaptic plasticity in the hippocampus involves both strengthening relevant connections and weakening irrelevant ones. That sapping of synaptic links, called long-term depression (LTD), can occur through two distinct routes: the activity of either NMDA receptors or metabotropic glutamate receptors (mGluRs). The mGluR-dependent form of LTD, which requires immediate translation of mRNAs at the synapse, appears to go awry in fragile X syndrome, a genetic condition that stems from loss of the protein FMRP and is characterized by intellectual disability and often autism. Possibly as a result, mice that model fragile X exhibit altered protein synthesis regulation in the hippocampus, an increase in dendritic spines and overactive neurons. Treatments for fragile X that focus on dialing down the mGluR pathway and tamping down protein synthesis at the synapse have shown success in quelling those traits in mice, but they have repeatedly failed in human clinical trials. But the alternative pathway—via the NMDA receptor—may provide better results, according to a new study. Signaling through the NMDA receptor subunit GluN2B can also decrease spine density and alleviate fragile-X-linked traits in mice, the work shows. “You don’t have to modulate the protein synthesis directly,” says Lynn Raymond, professor of psychiatry and chair in neuroscience at the University of British Columbia, who was not involved in the work. Instead, activation of part of the GluN2B subunit can indirectly shift the balance of mRNAs that are translated at the synapse. “It’s just another piece of the puzzle, but I think it’s a very important piece,” she says. Whether this insight will advance fragile X treatments remains to be seen, says Wayne Sossin, professor of neurology and neurosurgery at Montreal Neurological Institute-Hospital, who was not involved in the study. Multiple groups have cured fragile-X-like traits in mice by altering what happens at the synapse, he says. “Altering translation in a number of ways seems to change the balance that is off when you lose FMRP. And it’s not really clear how specific that is for FMRP.” © 2025 Simons Foundation

Keyword: Development of the Brain; Learning & Memory
Link ID: 29700 - Posted: 03.12.2025

By Tim Vernimmen On a rainy day in July 2024, Tim Bliss and Terje Lømo are in the best of moods, chuckling and joking over brunch, occasionally pounding the table to make a point. They’re at Lømo’s house near Oslo, Norway, where they’ve met to write about the late neuroscientist Per Andersen, in whose lab they conducted groundbreaking experiments more than 50 years ago. The duo only ever wrote one research paper together, in 1973, but that work is now considered a turning point in the study of learning and memory. Published in the Journal of Physiology, it was the first demonstration that when a neuron — a cell that receives and sends signals throughout the nervous system — signals to another neuron frequently enough, the second neuron will later respond more strongly to new signals, not for just seconds or minutes, but for hours. It would take decades to fully understand the implications of their research, but Bliss and Lømo had discovered something momentous: a phenomenon called long-term potentiation, or LTP, which researchers now know is fundamental to the brain’s ability to learn and remember. Today, scientists agree that LTP plays a major role in the strengthening of neuronal connections, or synapses, that allow the brain to adjust in response to experience. And growing evidence suggests that LTP may also be crucially involved in a variety of problems, including memory deficits and pain disorders. Bliss and Lømo never wrote another research article together. In fact, they would soon stop working on LTP — Bliss for about a decade, Lømo for the rest of his life. Although the researchers knew they had discovered something important, at first the paper “didn’t make a big splash,” Bliss says. By the early 1970s, neuroscientist Eric Kandel had demonstrated that some simple forms of learning can be explained by chemical changes in synapses — at least in a species of sea slug. But scientists didn’t yet know if such findings applied to mammals, or if they could explain more complex and enduring types of learning, such as the formation of memories that may last for years.

Keyword: Learning & Memory
Link ID: 29694 - Posted: 03.05.2025

By Ingrid Wickelgren After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip. This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later. “You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.” Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime. These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found. And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember. © 2025 Simons Foundation

Keyword: Learning & Memory; Attention
Link ID: 29685 - Posted: 02.26.2025

By Michael S. Rosenwald In early February, Vishvaa Rajakumar, a 20-year-old Indian college student, won the Memory League World Championship, an online competition pitting people against one another with challenges like memorizing the order of 80 random numbers faster than most people can tie a shoelace. The renowned neuroscientist Eleanor Maguire, who died in January, studied mental athletes like Mr. Rajakumar and found that many of them used the ancient Roman “method of loci,” a memorization trick also known as the “memory palace.” The technique takes several forms, but it generally involves visualizing a large house and assigning memories to rooms. Mentally walking through the house fires up the hippocampus, the seahorse-shaped engine of memory deep in the brain that consumed Dr. Maguire’s career. We asked Mr. Rajakumar about his strategies of memorization. His answers, lightly edited and condensed for clarity, are below.

Q. How do you prepare for the Memory League World Championship?

Hydration is very important because it helps your brain. When you memorize things, you usually sub-vocalize, and it helps to have a clear throat. Let’s say you’re reading a book. You’re not reading it out loud, but you are vocalizing within yourself. If you don’t drink a lot of water, your speed will be a bit low. If you drink a lot of water, it will be more and more clear and you can read it faster.

Q. What does your memory palace look like?

Let’s say my first location is my room where I sleep. My second location is the kitchen. And the third location is my hall. The fourth location is my veranda. Another location is my bathroom. Let’s say I am memorizing a list of words. Let’s say 10 words. What I do is, I take a pair of words, make a story out of them and place them in a location. And I take the next two words. I make a story out of them. I place them in the second location. The memory palace will help you to remember the sequence. © 2025 The New York Times Company
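As a toy illustration of the chunking scheme Mr. Rajakumar describes (pairing words, binding each pair to a story, and placing the pairs at successive locations so the route preserves their order), here is a minimal sketch in Python. The locations and word list are invented for the example, and the story step, which does the real mnemonic work, is only a placeholder string.

```python
# Toy sketch of the memory-palace chunking described above: a word list is
# taken two at a time, each pair is bound to the next location along a fixed
# route, and the route's order preserves the original sequence.
# Locations and words are invented for illustration.

locations = ["bedroom", "kitchen", "hall", "veranda", "bathroom"]
words = ["anchor", "violin", "cactus", "lantern", "pigeon",
         "marble", "teapot", "rocket", "glacier", "drum"]

palace = []
for i, loc in enumerate(locations):
    pair = words[2 * i: 2 * i + 2]  # two words per location
    story = f"imagine a {pair[0]} and a {pair[1]} in the {loc}"  # placeholder story
    palace.append((loc, pair, story))

# Recall: walking the route in order recovers the original word sequence.
recalled = [word for _, pair, _ in palace for word in pair]
assert recalled == words
```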

Keyword: Learning & Memory; Attention
Link ID: 29673 - Posted: 02.15.2025

By Angie Voyles Askham Identifying what a particular neuromodulator does in the brain—let alone how such molecules interact—has vexed researchers for decades. Dopamine agonists increase reward-seeking, whereas serotonin agonists decrease it, for example, suggesting that the two neuromodulators act in opposition. And yet, neurons in the brain’s limbic regions release both chemicals in response to a reward (and also to a punishment), albeit on different timescales, electrophysiological recordings have revealed, pointing to a complementary relationship. This dual response suggests that the interplay between dopamine and serotonin may be important for learning. But no tools existed to simultaneously manipulate the neuromodulators and test their respective roles in a particular area of the brain—at least, not until now—says Robert Malenka, professor of psychiatry and behavioral sciences at Stanford University. As it turns out, serotonin and dopamine join forces in the nucleus accumbens during reinforcement learning, according to a new study Malenka led, yet they act in opposition: dopamine as a gas pedal and serotonin as a brake on signaling that a stimulus is rewarding. The mice he and his colleagues studied learned faster and performed more reliably when the team optogenetically pressed on the animals’ dopamine “gas” as they simultaneously eased off the serotonin “brake.” “It adds a very rich and beguiling picture of the interaction between dopamine and serotonin,” says Peter Dayan, director of computational neuroscience at the Max Planck Institute for Biological Cybernetics. In 2002, Dayan proposed a different framework for how dopamine and serotonin might work in opposition, but he was not involved in the new study. The new work “partially recapitulates” that 2002 proposal, Dayan adds, “but also poses many more questions.” © 2025 Simons Foundation

Keyword: Learning & Memory
Link ID: 29672 - Posted: 02.15.2025

By Michael S. Rosenwald Eleanor Maguire, a cognitive neuroscientist whose research on the human hippocampus — especially that of London taxi drivers — transformed the understanding of memory, revealing that a key structure in the brain can be strengthened like a muscle, died on Jan. 4 in London. She was 54. Her death, at a hospice facility, was confirmed by Cathy Price, her colleague at the U.C.L. Queen Square Institute of Neurology. Dr. Maguire was diagnosed with spinal cancer in 2022 and had recently developed pneumonia. Working for 30 years in a small, tight-knit lab, Dr. Maguire obsessed over the hippocampus — a seahorse-shaped engine of memory deep in the brain — like a meticulous, relentless detective trying to solve a cold case. An early pioneer of using functional magnetic resonance imaging (f.M.R.I.) on living subjects, Dr. Maguire was able to look inside human brains as they processed information. Her studies revealed that the hippocampus can grow, and that memory is not a replay of the past but rather an active reconstructive process that shapes how people imagine the future. “She was absolutely one of the leading researchers of her generation in the world on memory,” Chris Frith, an emeritus professor of neuropsychology at University College London, said in an interview. “She changed our understanding of memory, and I think she also gave us important new ways of studying it.” In 1995, while she was a postdoctoral fellow in Dr. Frith’s lab, she was watching television one evening when she stumbled on “The Knowledge,” a quirky film about prospective London taxi drivers memorizing the city’s 25,000 streets to prepare for a three-year-long series of licensing tests. Dr. Maguire, who said she rarely drove because she feared never arriving at her destination, was mesmerized. “I am absolutely appalling at finding my way around,” she once told The Daily Telegraph. “I wondered, ‘How are some people so bloody good and I am so terrible?’” In the first of a series of studies, Dr. Maguire and her colleagues scanned the brains of taxi drivers while quizzing them about the shortest routes between various destinations in London. © 2025 The New York Times Company

Keyword: Learning & Memory
Link ID: 29671 - Posted: 02.15.2025

By Felicity Nelson Mice immediately bolt for shelter when they see the looming shadow of a bird, just as humans jump when they see a spider. But these instinctive reactions, which are controlled by the brainstem, can be suppressed if animals learn that a scary stimulus is harmless. In Science today, neuroscientists reveal the precise regions of the brain that suppress fear responses in mice — a finding that might help scientists to develop strategies for treating post-traumatic stress disorder and anxiety in people. The study showed that two parts of the brain work together to learn to suppress fear. But, surprisingly, only one of these regions is involved in later recalling the learnt behaviour. “This is the first evidence of that mechanism,” says neuroscientist Pascal Carrive at the University of New South Wales in Sydney, Australia. In the study, an expanding dark circle was used to imitate a swooping bird, and caused naive mice to run to a shelter. To teach the mice that this looming stimulus was not dangerous, a barrier was added to prevent the animals from hiding. “I like their behavioural model,” says Christina Perry, a behavioural neuroscientist at Macquarie University in Sydney. “It’s very simple,” she adds. The mice “don’t get eaten, so they learn that this fake predator is not, in fact, a threat”. As the mice were learning to be bolder, the researchers switched specific types of neurons on or off using optogenetics — a well-established technique that allows neurons to be controlled with light. When researchers silenced the parts of the cerebral cortex that analyse visual stimuli (called the posterolateral higher visual areas), the mice did not learn to suppress fear and continued to try to escape from the fake bird — suggesting that this area of the brain is necessary for learning to suppress this fear reaction. © 2025 Springer Nature Limited

Keyword: Emotions; Stress
Link ID: 29664 - Posted: 02.08.2025

By Yasemin Saplakoglu Imagine you’re on a first date, sipping a martini at a bar. You eat an olive and patiently listen to your date tell you about his job at a bank. Your brain is processing this scene, in part, by breaking it down into concepts. Bar. Date. Martini. Olive. Bank. Deep in your brain, neurons known as concept cells are firing. You might have concept cells that fire for martinis but not for olives. Or ones that fire for bars — perhaps even that specific bar, if you’ve been there before. The idea of a “bank” also has its own set of concept cells, maybe millions of them. And there, in that dimly lit bar, you’re starting to form concept cells for your date, whether you like him or not. Those cells will fire when something reminds you of him. Concept neurons fire for their concept no matter how it is presented: in real life or a photo, in text or speech, on television or in a podcast. “It’s more abstract, really different from what you’re seeing,” said Elizabeth Buffalo, a neuroscientist at the University of Washington. For decades, neuroscientists mocked the idea that the brain could have such intense selectivity, down to the level of an individual neuron: How could there be one or more neurons for each of the seemingly countless concepts we engage with over a lifetime? “It’s inefficient. It’s not economic,” people broadly agreed, according to the neurobiologist Florian Mormann at the University of Bonn. But when researchers identified concept cells in the early 2000s, the laughter started to fade. Over the past 20 years, they have established that concept cells not only exist but are critical to the way the brain abstracts and stores information. New studies, including one recently published in Nature Communications, have suggested that they may be central to how we form and retrieve memory. © 2025 Simons Foundation

Keyword: Learning & Memory; Attention
Link ID: 29639 - Posted: 01.22.2025

By Rachael Elward and Lauren Ford Severance, which imagines a world where a person’s work and personal lives are surgically separated, will soon return to Apple TV+ for a second season. While the concept of this gripping piece of science fiction is far-fetched, it touches on some interesting neuroscience. Can a person’s mind really be surgically split in two? Remarkably, “split-brain” patients have existed since the 1940s. To control epilepsy symptoms, these patients underwent surgery to separate the left and right hemispheres. Similar surgeries still happen today. Later research on this type of surgery showed that the separated hemispheres of split-brain patients could process information independently. This raises the uncomfortable possibility that the procedure creates two separate minds living in one brain. In season one of Severance, Helly R (Britt Lower) experienced a conflict between her “innie” (the side of her mind that remembered her work life) and her “outie” (the side outside of work). Similarly, there is evidence of a conflict between the two hemispheres of real split-brain patients. When speaking with split-brain patients, you are usually communicating with the left hemisphere of the brain, which controls speech. However, some patients can communicate from their right hemisphere by writing, for example, or arranging Scrabble letters. A young patient was asked what job he would like in the future. His left hemisphere chose an office job making technical drawings. His right hemisphere, however, arranged letters to spell “automobile racer”. Split-brain patients have also reported “alien hand syndrome”, where one of their hands is perceived to be moving of its own volition. These observations suggest that two separate conscious “people” may coexist in one brain and may have conflicting goals. In Severance, however, both the innie and the outie have access to speech. This is one indicator that the fictional “severance procedure” must involve a more complex separation of the brain’s networks. © 2010–2025, The Conversation US, Inc.

Keyword: Learning & Memory; Consciousness
Link ID: 29635 - Posted: 01.18.2025

By Anna Victoria Molofsky Twenty years ago, a remarkable discovery upended our understanding of the range of elements that can shape neuronal function: A team in Europe demonstrated that enzymatic digestion of the extracellular matrix (ECM)—a latticework of proteins that surrounds all brain cells—could restore plasticity to the visual cortex even after the region’s “critical period” had ended. Other studies followed, showing that ECM digestion could also alter learning in the hippocampus and other brain circuits. These observations established that proteins outside neurons can control synaptic plasticity. We now know that up to 20 percent of the brain is extracellular space, filled with hundreds of ECM proteins—a “matrisome” that plays multiple roles, including modulating synaptic function and myelin formation. ECM genes in the human brain are different from those in other species, suggesting that the proteins they encode could be part of what makes our brains unique and keeps them healthy. In a large population study that examined blood protein biomarkers of organ aging, posted as a preprint on bioRxiv last year, for example, the presence of ECM proteins was most highly correlated with a youthful brain. Matrisome proteins are also dysregulated in astrocytes from people at high risk for Alzheimer’s disease, another study showed. Despite the influence of these proteins and the ongoing work of a few dedicated researchers, however, the ECM field has not caught on. I would challenge a room full of neuroscientists to name one protein in the extracellular matrix. To this day, the only ECM components most neuroscientists have heard of are “perineuronal nets”—structures that play an important role in stabilizing synapses but make up just a tiny fraction of the matrisome. A respectable scientific journal, covering its own paper that identified a critical impact of ECM, called it “brain goop.” © 2025 Simons Foundation

Keyword: Learning & Memory; Glia
Link ID: 29633 - Posted: 01.18.2025

By Roni Caryn Rabin Water fluoridation is widely seen as one of the great public health achievements of the 20th century, credited with substantially reducing tooth decay. But there has been growing controversy among scientists about whether fluoride may be linked to lower I.Q. scores in children. A comprehensive federal analysis of scores of previous studies, published this week in JAMA Pediatrics, has added to those concerns. It found a significant inverse relationship between exposure levels and cognitive function in children. Higher fluoride exposures were linked to lower I.Q. scores, concluded researchers working for the National Institute of Environmental Health Sciences. None of the studies included in the analysis were conducted in the United States, where recommended fluoridation levels in drinking water are very low. At those amounts, evidence was too limited to draw definitive conclusions. Observational studies cannot prove a cause-and-effect relationship. Yet in countries with much higher levels of fluoridation, the analysis also found evidence of what scientists call a dose-response relationship, with I.Q. scores falling in lock step with increasing fluoride exposure. Children are exposed to fluoride through many sources other than drinking water: toothpaste, dental treatments and some mouthwashes, as well as black tea, coffee and certain foods, such as shrimp and raisins. Some drugs and industrial emissions also contain fluoride. For every one part per million increase in fluoride in urinary samples, which reflect total exposures from water and other sources, I.Q. points in children decreased by 1.63, the analysis found. “There is concern that pregnant women and children are getting fluoride from many sources,” said Kyla Taylor, an epidemiologist at the institute and the report’s lead author, “and that their total fluoride exposure is too high and may affect fetal, infant and child neurodevelopment.” © 2025 The New York Times Company
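To make the reported slope concrete, here is a minimal back-of-the-envelope sketch of the linear dose-response relationship described above. The figure of 1.63 I.Q. points per part per million of urinary fluoride comes from the analysis as reported; the example exposure increments are hypothetical, and a straight-line extrapolation is an illustration rather than the study's actual statistical model.

```python
# Illustrative arithmetic only: applies the reported association of -1.63 IQ
# points per 1 ppm increase in urinary fluoride to hypothetical exposure
# increments. The analysis itself used study-level statistical models; this
# straight-line extrapolation is just a way to read the reported slope.

SLOPE_IQ_PER_PPM = -1.63  # reported change in IQ points per ppm urinary fluoride

def iq_change(delta_ppm: float) -> float:
    """Predicted IQ change for a given increase in urinary fluoride (ppm)."""
    return SLOPE_IQ_PER_PPM * delta_ppm

for delta_ppm in (0.5, 1.0, 2.0):  # hypothetical increases, not study data
    print(f"+{delta_ppm} ppm urinary fluoride -> {iq_change(delta_ppm):+.2f} IQ points")
```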

Keyword: Intelligence; Development of the Brain
Link ID: 29625 - Posted: 01.11.2025

By Laura Sanders Recovery from PTSD comes with key changes in the brain’s memory system, a new study finds. These differences were found in the brains of 19 people who developed post-traumatic stress disorder after the 2015 terrorist attacks in Paris — and then recovered over the following years. The results, published January 8 in Science Advances, point to the complexity of PTSD, but also to ways that brains can reshape themselves as they recover. With memory tasks and brain scans, the study provides a cohesive look at the recovering brain, says cognitive neuroscientist Vishnu Murty of the University of Oregon in Eugene. “It’s pulled together a lot of pieces that were floating around in the field.” On the night of November 13, 2015, terrorists attacked a crowded stadium, a theater and restaurants in Paris. In the years after, PTSD researchers were able to study some of the people who endured that trauma. Just over half the 100 people who volunteered for the study had PTSD initially. Of those, 34 still had the disorder two to three years later; 19 had recovered by two to three years. People who developed PTSD showed differences in how their brains handled intrusive memories, laboratory-based tests of memory revealed. Participants learned pairs of random words and pictures — a box of tissues with the word “work,” for example. PTSD involves pairs of associated stimuli too, though in much more complicated ways. A certain smell or sound, for instance, can be linked with the memory of trauma. © Society for Science & the Public 2000–2025.

Keyword: Learning & Memory; Stress
Link ID: 29622 - Posted: 01.11.2025

By McKenzie Prillaman A peek into living tissue from human hippocampi, a brain region crucial for memory and learning, revealed relatively few cell-to-cell connections for the vast number of nerve cells. But signals sent via those sparse connections proved extremely reliable and precise, researchers report December 11 in Cell. One seahorse-shaped hippocampus sits deep within each hemisphere of the mammalian brain. In each hippocampus’s CA3 area, humans have about 1.7 million nerve cells called pyramidal cells. This subregion is thought to be the most internally connected part of the brain in mammals. But much information about nerve cells in this structure has come from studies in mice, which have only 110,000 pyramidal cells in each CA3 subregion. Previously discovered differences between mouse and human hippocampi hinted that animals with more nerve cells may have fewer connections — or synapses — between them, says cellular neuroscientist Peter Jonas of the Institute of Science and Technology Austria in Klosterneuburg. To see if this held true, he and his colleagues examined tissue taken with consent from eight patients who underwent brain surgery to treat epilepsy. Recording electrical activity from human pyramidal cells in the CA3 area suggested that about 10 synapses existed for every 800 cell pairs tested. In mice, the connection rate was roughly three times as high. Despite the relatively scant nerve cell connections in humans, those cells showed steady and robust activity when sending signals to one another — unlike mouse pyramidal cells. © Society for Science & the Public 2000–2025
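The connection figures quoted above reduce to a simple probability comparison. The sketch below uses the numbers as reported (about 10 connected pairs out of roughly 800 pairs tested in human CA3, and a rate roughly three times higher in mice); the factor of three is an approximation taken from the article, not an exact value from the paper.

```python
# Back-of-the-envelope arithmetic from the figures quoted above: ~10 connected
# pairs out of ~800 tested pairs in human CA3, and a connection rate described
# as roughly three times higher in mice. The factor of 3 is an approximation.

human_connected_pairs = 10
human_tested_pairs = 800

human_rate = human_connected_pairs / human_tested_pairs  # ~0.0125, i.e. ~1.25%
mouse_rate = 3 * human_rate                              # "roughly tripled" -> ~3.75%

print(f"Human CA3 connection probability: {human_rate:.2%}")
print(f"Mouse CA3 connection probability (approx.): {mouse_rate:.2%}")
```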

Keyword: Learning & Memory
Link ID: 29616 - Posted: 01.08.2025

By Traci Watson New clues have emerged in the mystery of how the brain avoids ‘catastrophic forgetting’ — the distortion and overwriting of previously established memories when new ones are created. A research team has found that, at least in mice, the brain processes new and old memories in separate phases of sleep, which might prevent mixing between the two. Assuming that the finding is confirmed in other animals, “I put all my money that this segregation will also occur in humans”, says György Buzsáki, a systems neuroscientist at New York University in New York City. That’s because memory is an evolutionarily ancient system, says Buzsáki, who was not part of the research team but once supervised the work of some of its members. The work was published on Wednesday in Nature. Scientists have long known that, during sleep, the brain ‘replays’ recent experiences: the same neurons involved in an experience fire in the same order. This mechanism helps to solidify the experience as a memory and prepare it for long-term storage. To study brain function during sleep, the research team exploited a quirk of mice: their eyes are partially open during some stages of slumber. The team monitored one eye in each mouse as it slept. During a deep phase of sleep, the researchers observed the pupils shrink and then return to their original, larger size repeatedly, with each cycle lasting roughly one minute. Neuron recordings showed that most of the brain’s replay of experiences took place when the animals’ pupils were small. That led the scientists to wonder whether pupil size and memory processing are linked. To find out, they enlisted a technique called optogenetics, which uses light to either trigger or suppress the electrical activity of genetically engineered neurons in the brain. First, they trained engineered mice to find a sweet treat hidden on a platform. Immediately after these lessons, as the mice slept, the authors used optogenetics to reduce bursts of neuronal firing that have been linked to replay. They did so during both the small-pupil and large-pupil stages of sleep. © 2025 Springer Nature Limited

Keyword: Learning & Memory; Sleep
Link ID: 29615 - Posted: 01.04.2025

By Aswathy Ammothumkandy, Charles Liu and Michael A. Bonaguidi Your brain can still make new neurons when you’re an adult. But how does the rare birth of these new neurons contribute to cognitive function? Neurons are the cells that govern brain function, and you are born with most of the neurons you will ever have during your lifetime. While the brain undergoes most of its development during early life, specific regions of the brain continue to generate new neurons throughout adulthood, although at a much lower rate. Whether this process of neurogenesis actually happens in adults and what function it serves in the brain is still a subject of debate among scientists. Past research has shown that people with epilepsy or Alzheimer’s disease and other dementias develop fewer neurons as adults than people without these conditions. However, whether the absence of new neurons contributes to the cognitive challenges patients with these neurological disorders face is unknown. We are part of a team of stem cell researchers, neuroscientists, neurologists, neurosurgeons and neuropsychologists. Our newly published research reveals that the new neurons that form in adults’ brains are linked to how you learn from listening to other people. Researchers know that new neurons contribute to memory and learning in mice. But in humans, the technical challenges of identifying and analyzing new neurons in adult brains, combined with their rarity, had led scientists to doubt their significance to brain function. To uncover the relationship between neurogenesis in adults and cognitive function, we studied patients with drug-resistant epilepsy. These patients underwent cognitive assessments before surgical procedures to treat their seizures and donated brain tissue during those procedures. To see whether the number of new neurons a patient had was associated with specific cognitive functions, we looked under the microscope for markers of neurogenesis. © 2010–2024, The Conversation US, Inc.

Keyword: Neurogenesis; Learning & Memory
Link ID: 29590 - Posted: 12.07.2024

By Andrea Tamayo Kidney cells can make memories too. At least, in a metaphorical sense. Neurons have historically been the cell type most associated with memory. But far outside the brain, kidney cells can also store information and recognize patterns in a similar way to neurons, researchers report November 7 in Nature Communications. “We’re not saying that this kind of memory helps you learn trigonometry or remember how to ride a bike or stores your childhood memories,” says Nikolay Kukushkin, a neuroscientist at New York University. “This research adds to the idea of memory; it doesn’t challenge the existing conceptions of memory in the brain.” In experiments, the kidney cells showed signs of what’s called a “massed-spaced effect.” This well-known feature of how memory works in the brain facilitates storing information in small chunks over time, rather than a big chunk at once. Outside the brain, cells of all types need to keep track of stuff. One way they do that is through a protein central to memory processing, called CREB. It, and other molecular components of memory, are found in neurons and nonneuronal cells. While the cells have similar parts, the researchers weren’t sure if the parts worked the same way. In neurons, when a chemical signal passes through, the cell starts producing CREB. The protein then turns on more genes that further change the cell, kick-starting the molecular memory machine (SN: 2/3/04). Kukushkin and colleagues set out to determine whether CREB in nonneuronal cells responds to incoming signals the same way. © Society for Science & the Public 2000–2024.

Keyword: Learning & Memory
Link ID: 29576 - Posted: 11.27.2024

By Sofia Quaglia It’s amazing what chimpanzees will do for a snack. In Congolese rainforests, the apes have been known to poke a hole into the ground with a stout stick, then grab a long stem and strip it through their teeth, making a brush-like end. Into the hole that lure goes, helping the chimps fish out a meal of termites. How did the chimps figure out this sophisticated foraging technique and others? “It’s difficult to imagine that it can just have appeared out of the blue,” said Andrew Whiten, a cultural evolution expert from the University of St. Andrews in Scotland who has studied tool use and foraging in chimpanzees. Now Dr. Whiten’s team has set out to demonstrate that advanced uses of tools are an example of humanlike cultural transmission that has accumulated over time. Where bands of apes in Central and East Africa exhibit such complex behaviors, they say, there are also signs of genes flowing between groups. They describe this as evidence that such foraging techniques have been passed from generation to generation, and innovated over time across different interconnected communities. In a study published on Thursday in the journal Science, Dr. Whiten and colleagues go as far as arguing that chimpanzees have a “tiny degree of cumulative culture,” a capability long thought unique to humans. From mammals to birds to reptiles and even insects, many animals exhibit some evidence of culture, in which individuals socially learn something from a nearby individual and then start doing it. But culture becomes cumulative over time when individuals learn from others, each building on the technique so much that a single animal wouldn’t have been able to learn all of it on its own. For instance, some researchers interpret the use of rocks as a hammer and anvil to open a nut as something chimpanzees would not do spontaneously without learning it socially. Humans excel at this, with individual doctors practicing medicine each day, but medicine is no single person’s endeavor. Instead, it is an accumulation of knowledge over time. Most chimpanzee populations do not use a complex set of tools, in a specific sequence, to extract food. © 2024 The New York Times Company

Keyword: Evolution; Learning & Memory
Link ID: 29573 - Posted: 11.23.2024

By Claudia López Lloreda Fear memories serve a purpose: A mouse in the wild learns to fear the sound of footsteps, which helps it avoid predators. But in certain situations, those fear memories can also tinge neutral memories with fear, resulting in maladaptive behavior. A mouse or person, for instance, may learn to fear stimuli that should presumably be safe. This shift can occur when an existing fear memory broadens—either by recruiting inappropriate neurons into the cell ensemble that contains it or by linking up to a previously neutral memory, according to two new studies in mice, one published today and another last week. Memories are embodied in the brain through sparse ensembles of neurons, called engrams, that activate when an animal forms a new memory or recalls it later. These ensembles were thought to be “stable and permanent,” says Denise Cai, associate professor of neuroscience at the Icahn School of Medicine at Mount Sinai, who led one of the studies. But the new findings reveal how, during times of fear and stress, memories can become malleable, either as they are brought back online or as the neurons that encode them expand. There is “this really powerful ability of stress to look back and change memories for neutral experiences that have come before by pulling them into the same neural representation or by exciting them more during offline periods,” says Elizabeth Goldfarb, assistant professor of psychiatry at the Yale School of Medicine, who was not involved in the studies. That challenges the previous dogma, Cai says. “We’ve learned that these memory ensembles are actually quite dynamic.” © 2024 Simons Foundation

Keyword: Learning & Memory; Stress
Link ID: 29563 - Posted: 11.16.2024

By Angie Voyles Askham Engrams, the physical circuits of individual memories, consist of more than just neurons, according to a new study published today in Nature. Astrocytes, too, shape how some memories are stored and retrieved, the work shows. The results represent “a fundamental change” in how the neuroscience field should think about indexing memories, says lead researcher Benjamin Deneen, professor of neurosurgery at Baylor College of Medicine. “We need to reconsider the cellular, physical basis of how we store memories.” When mice form a new memory, a specific set of neurons becomes active and expresses the immediate early gene c-FOS, past work has found. Reactivating that ensemble of neurons, the engram, causes the mice to recall that memory. Interactions between neurons and astrocytes are critical for the formation of long-term memory, according to a spatial transcriptomics study from February, and both astrocytes and oligodendrocytes are involved in memory formation, other work has shown. Yet engram studies have largely ignored the activity of non-neuronal cells, says Sheena Josselyn, senior scientist at the Hospital for Sick Children, who was not involved in the new study. But astrocytes are also active alongside neurons as memories are formed and recalled, and disrupting the star-shaped cells’ function interferes with these processes, the new work reveals. The study does not dethrone neurons as the lead engram stars, according to Josselyn. “It really shows that, yes, neurons are important. But there are also other players that we’re just beginning to understand the importance of,” she says. “It’ll help broaden our focus.” © 2024 Simons Foundation

Keyword: Learning & Memory; Glia
Link ID: 29558 - Posted: 11.13.2024