Chapter 17. Learning and Memory




Tobi Thomas, health and inequalities correspondent. Scientists have linked living in an unequal society to structural changes in children’s brains – regardless of individual wealth – for the first time. A study of more than 10,000 young people in the US found altered brain development in children from both wealthy and lower-income families in areas with higher rates of inequality, and these alterations were also associated with poorer mental health. The data was gathered from the Adolescent Brain Cognitive Development study and published in the journal Nature Mental Health. Researchers at King’s College London, Harvard University and the University of York measured inequality within each US state by scoring how evenly income is distributed. States with higher levels of inequality included New York, Connecticut, California and Florida, while Utah, Wisconsin, Minnesota and Vermont were more equal. MRI scans were analysed to study the surface area and thickness of regions in the cortex, including those involved in higher cognitive functions such as memory, emotion, attention and language. The scans also captured connections between different regions of the brain, inferred from changes in blood flow that indicate brain activity. The research found that living in areas with higher levels of societal inequality, such as socioeconomic imbalance and deprivation, was linked to reduced surface area of the brain’s cortex and altered connections between multiple regions of the brain. The findings, the first to reveal the impact societal inequality has on the structure of the brain, also provided evidence that the affected neurodevelopment might relate to future mental health and cognitive function. Notably, these brain changes were seen in children regardless of their economic background. © 2025 Guardian News & Media Limited
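The entry describes scoring each US state by how evenly income is distributed. The standard single-number summary of that is the Gini coefficient; the article doesn’t name the exact index the researchers used, so the following is an illustrative sketch only (the income figures are invented):

```python
def gini(incomes):
    """Gini coefficient: 0 = perfectly equal, 1 = maximally unequal.
    Uses the sorted-form identity G = 2*sum((i+1)*x_i)/(n*total) - (n+1)/n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical household incomes for a more equal vs. a more unequal state
equal_state = [40_000, 45_000, 50_000, 55_000, 60_000]
unequal_state = [15_000, 20_000, 30_000, 60_000, 400_000]
print(f"equal:   {gini(equal_state):.3f}")    # low Gini
print(f"unequal: {gini(unequal_state):.3f}")  # much higher Gini
```

States where a large share of total income sits with a few high earners score closer to 1; states with broadly similar incomes score closer to 0.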

Keyword: Development of the Brain; Learning & Memory
Link ID: 29951 - Posted: 10.01.2025

By Calli McMurray Studying animal behavior in the wild often gets hairy, with little experimental control and an abundance of extraneous data. And when multiple animals get together, the way they look, act and smell all influence one another, making it difficult to parse complex social interactions, says Andres Bendesky, associate professor of ecology, evolution and environmental biology at Columbia University. Robotic or animated partners, however, can simplify that equation. Studying animal-robot interaction gives researchers complete control over one partner during any tête-à-tête, Bendesky says. It makes it possible to present the same stimulus to an animal repeatedly or compare how different individuals react. And the method complements observation-based research: Scientists can use a robot- or animation-based paradigm to test ideas gleaned from studies that use artificial-intelligence tools to track behavior. Bendesky is part of a growing cohort of neuroscientists turning to robots to help them decode social interactions. The quirks are still being ironed out, but the approach is already helping several groups tackle questions about schooling, fighting and chatting behaviors. The rigor of the results depends on whether a critter believes what it sees, says Tim Landgraf, professor of artificial and collective intelligence at Freie Universität Berlin, who uses robots to study group behavior in guppies. That can be hard to gauge; there’s no handbook that describes what traits make a robot believable, he says. But researchers can compare how animals act toward a real peer versus a counterfeit one, says Steve Chang, associate professor of psychology and neuroscience at Yale University, who doesn’t work with robots but studies the social behavior of macaques and marmosets. © 2025 Simons Foundation

Keyword: Robotics; Sexual Behavior
Link ID: 29936 - Posted: 09.20.2025

By Sujata Gupta Anne-Laure Le Cunff was something of a wild child. As a teenager, she repeatedly disabled the school fire alarm to sneak smoke breaks and helped launch a magazine filled with her teachers’ fictional love lives. Later, as a young adult studying neuroscience, Le Cunff would spend hours researching complex topics but struggled to complete simple administrative tasks. And she often obsessed over random projects before abruptly abandoning them. Then, three years ago, a colleague asked Le Cunff if she might have attention-deficit/hyperactivity disorder, or ADHD, a condition marked by distractibility, hyperactivity and impulsivity. Doctors confirmed her colleague’s suspicions. But fearing professional stigma, Le Cunff — by then a postdoctoral fellow in the ADHD Lab at King’s College London — kept her diagnosis secret until this year. Le Cunff knew all too well about the deficits associated with ADHD. But her research — and personal experience — hinted at an underappreciated upside. “I started seeing … breadcrumbs pointing at a potential association between curiosity and ADHD,” she says. People within the ADHD community have long recognized that the condition can be both harmful and helpful. Researchers, though, have largely focused on the harms. And those studying treatments tend to define success as a reduction in ADHD symptoms, with little regard to possible benefits. That’s starting to change. For instance, Norwegian researchers asked 50 individuals with ADHD to describe their positive experiences with the disorder as part of an effort to develop more holistic treatments. People cited their creativity, energy, adaptability, resilience and curiosity, researchers reported in BMJ Open in October 2023. © Society for Science & the Public 2000–2025.

Keyword: ADHD; Attention
Link ID: 29932 - Posted: 09.17.2025

Rachel Fieldhouse Deep in the rainforests of the Democratic Republic of the Congo, Mélissa Berthet found bonobos doing something thought to be uniquely human. During the six months that Berthet observed the primates, they combined calls in several ways to make complex phrases1. In one example, bonobos (Pan paniscus) that were building nests together added a yelp, meaning ‘let’s do this’, to a grunt that says ‘look at me’. “It’s really a way to say: ‘Look at what I’m doing, and let’s do this all together’,” says Berthet, who studies primates and linguistics at the University of Rennes, France. In another case, a peep that means ‘I would like to do this’ was followed by a whistle signalling ‘let’s stay together’. The bonobos combine the two calls in sensitive social contexts, says Berthet. “I think it’s to bring peace.” The study, reported in April, is one of several examples from the past few years that highlight just how sophisticated vocal communication in non-human animals can be. In some species of primate, whale2 and bird, researchers have identified features and patterns of vocalization that have long been considered defining characteristics of human language. These results challenge ideas about what makes human language special — and even how ‘language’ should be defined. Perhaps unsurprisingly, many scientists turn to artificial intelligence (AI) tools to speed up the detection and interpretation of animal sounds, and to probe aspects of communication that human listeners might miss. “It’s doing something that just wasn’t possible through traditional means,” says David Robinson, an AI researcher at the Earth Species Project, a non-profit organization based in Berkeley, California, that is developing AI systems to decode communication across the animal kingdom. As the research advances, there is increasing interest in using AI tools not only to listen in on animal speech, but also to potentially talk back. © 2025 Springer Nature Limited

Keyword: Animal Communication; Language
Link ID: 29931 - Posted: 09.17.2025

Jon Hamilton People who inherit two copies of a gene variant called APOE4 have a 60% chance of developing Alzheimer's by age 85. Only about 2% to 3% of people in the U.S. have this genetic profile, and most of them don't know it because they've never sought genetic testing. But three scientists are among those who did get tested, and learned that they are in the high-risk group. Now, each is making an effort to protect not only their own brain, but the brains of others with the genotype known as APOE4-4. "I just felt like the end of the world," says June, who asked to use only her first name out of fear that making her genetic status public could affect her job or health insurance. June was 57 when she found out. As someone with a doctorate in biochemistry, she quickly understood what the results meant. New tests of blood and spinal fluid could help doctors quickly identify patients who would most benefit from treatment. "People with our genotype are almost destined to get the disease," she says. "We tend to get symptoms 7 to 10 years earlier than the general population, which means that I had about seven years left before I may get the disease." At first, June spent sleepless nights online, reading academic papers about Alzheimer's and genetics. She even looked into physician-assisted suicide in an effort to make sure she would not become a burden to her adult son. © 2025 npr

Keyword: Alzheimers; Genes & Behavior
Link ID: 29913 - Posted: 09.03.2025

By Lauren Schenkman Microglia safeguard the proliferation and survival of young GABAergic interneurons by secreting insulin-like growth factor 1 (IGF-1), according to a new study of human brain tissue and organoids. The finding points to the potential origin of the brain signaling imbalance implicated in autism and other conditions. Microglia contribute to brain development, past findings show, but their exact function has been unclear. Some experiments showed that these cells prune neural circuits, but later work called that idea into question. The new research “identifies microglia as really an important source of IGF, and one that sets the supply of GABAergic interneurons in the developing brain,” says Damon Page, principal investigator at Seattle Children’s Research Institute. Page was not involved in this work but led an earlier investigation that showed IGF-1 prevents microcephaly in a mouse model of autism when administered during a critical window soon after birth. This new study “extends back that window into the embryonic period,” he says, with implications for understanding both typical development and conditions such as autism. The study was published 6 August in Nature. The investigators used staining techniques to pinpoint microglia in the medial ganglionic eminence, where interneurons form, in human brain tissue samples at various developmental stages. At early developmental stages, microglia were sprinkled throughout brain matter, but later on these cells arranged themselves around clusters of GABAergic neuroblasts, with their processes extending into the clusters. Microglia also aligned themselves with radial glia, the precursors to many brain cells. Based on existing data, IGF-1 emerged as the chemical most likely to mediate microglia’s effects on developing cell types, and in organoid models of the developing human brain, the cells secreted IGF-1, they found. © 2025 Simons Foundation

Keyword: Glia; Learning & Memory
Link ID: 29911 - Posted: 09.03.2025

By Nora Bradford During her training in anthropology, Dorsa Amir, now at Duke University, became fascinated with the Müller-Lyer illusion. The illusion is simple: one long horizontal line is flanked by arrowheads on either side. Whether the arrowheads are pointing inward or outward dramatically changes the perceived length of the line—people tend to see it as longer when the arrowheads point in and as shorter when they point out. Graphic shows how the Müller-Lyer illusion makes two equal-length lines seem to have different lengths because of arrowlike tips pointing inward or outward. Most intriguingly, psychologists in the 1960s had apparently discovered something remarkable about the illusion: only European and American urbanites fell for the trick. The illusion worked less well, or didn’t work at all, on groups surveyed across Africa and the Philippines. The idea that this simple illusion supposedly only worked in some cultures but not others compelled Amir, who now studies how culture shapes the mind. “I always thought it was so cool, right, that this basic thing that you think is just so obvious is the type of thing that might vary across cultures,” Amir says. But this foundational research—and the hypothesis that arose to explain it, called the “carpentered-world” hypothesis—is now widely disputed, including by Amir herself. This has left researchers like her questioning what we can truly know about how culture shapes how we see the world. When researcher Marshall Segall and his colleagues conducted the cross-cultural experiment on the Müller-Lyer illusion in the 1960s, they came up with a hypothesis to explain the strange results: difference in building styles. The researchers theorized that the prevalence of carpentry features, such as rectangular spaces and right angles, trained the visual systems of people in more wealthy, industrialized cultures to perceive these angles in a way that makes them more prone to the Müller-Lyer illusion. 
© 2025 SCIENTIFIC AMERICAN

Keyword: Vision; Attention
Link ID: 29899 - Posted: 08.23.2025

By Claire L. Evans In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?” McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out. Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes. In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals (opens a new tab) when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now. 
Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think. © 2025 Simons Foundation

Keyword: Learning & Memory
Link ID: 29872 - Posted: 08.02.2025

By Claudia López Lloreda When it comes to cognition and behavior, neurons usually take center stage. They famously drive everything from thoughts to movements by way of synaptic communication, with the help of neuromodulators such as dopamine, norepinephrine and certain immune molecules that regulate neuronal activity and plasticity. But astrocytes play essential roles in these processes behind the scenes, according to four independent studies published in the past two months. Rather than acting solely on neurons, neuromodulators also act on astrocytes to influence neuronal function and behavior—making astrocytes crucial intermediates in activities previously attributed to direct communication between neurons, the studies suggest. For instance, norepinephrine sensitizes astrocytes to neurotransmitters and prompts them to regulate circuit computations, synapse function and various behaviors across diverse animal models, three of the studies—all published last month in Science—show. “Do neurons actually signal through astrocytes in a meaningful way during normal behavior or normal circuit function?” asks Marc Freeman, senior scientist at Oregon Health & Science University and principal investigator on one of the Science studies. These new findings “argue very strongly the answer is yes.” Astrocytes can also detect peripheral inflammation and modify the neurons that drive a stress-induced fear behavior in mice, according to the fourth study, published in April in Nature. Although astrocytes are no longer thought of as simply support cells, they were still “not really considered for having a real plasticity and a real important role,” says Caroline Menard, associate professor of psychiatry and neurosciences at the University of Laval, who was not involved in any of the new studies. Now “there’s more consideration from the field that behavior is not only driven by neurons, but there’s other cell types involved.” © 2025 Simons Foundation

Keyword: Glia; Learning & Memory
Link ID: 29845 - Posted: 07.02.2025

Humberto Basilio Mindia Wichert has taken part in plenty of brain experiments as a cognitive-neuroscience graduate student at the Humboldt University of Berlin, but none was as challenging as one he faced in 2023. Inside a stark white room, he stared at a flickering screen that flashed a different image every 10 seconds. His task was to determine what familiar object appeared in each image. But, at least at first, the images looked like nothing more than a jumble of black and white patches. “I’m very competitive with myself,” says Wichert. “I felt really frustrated.” Cognitive neuroscientist Maxi Becker, now at Duke University in Durham, North Carolina, chose the images in an attempt to spark a fleeting mental phenomenon that people often experience but can’t control or fully explain. Study participants puzzling out what is depicted in the images — known as Mooney images, after a researcher who published a set of them in the 1950s1 — can’t rely on analytical thinking. Instead, the answer must arrive all at once, like a flash of lightning in the dark. Becker asked some of the participants to view the images while lying inside a functional magnetic resonance imaging (fMRI) scanner, so she could track tiny shifts in blood flow corresponding to brain activity. She hoped to determine which regions produce ‘aha!’ moments. Over the past two decades, scientists studying such moments of insight — also known as eureka moments — have used the tools of neuroscience to reveal which regions of the brain are active and how they interact when discovery strikes. They’ve refined the puzzles they use to trigger insight and the measurements they take, in an attempt to turn a self-reported, subjective experience into something that can be documented and rigorously studied. 
This foundational work has led to new questions, including why some people are more insightful than others, what mental states could encourage insight and how insight might boost memory. © 2025 Springer Nature Limited

Keyword: Attention; Learning & Memory
Link ID: 29844 - Posted: 06.28.2025

By Sydney Wyatt The shape and density of dendritic spines fluctuate in step with the estrous cycle in the hippocampus of living mice, a new study shows. And these structural changes coincide with shifts in the stability of place fields encoded by place cells. “You can literally see these oscillations in hippocampal spines, and they keep time with the endocrine rhythms being produced by the ovaries,” says study investigator Emily Jacobs, associate professor of psychological and brain sciences at the University of California, Santa Barbara. She and her colleagues used calcium imaging and surgically implanted microperiscopes to view the dynamics of the dendritic spines in real time. The findings, published in Neuron in May, replicate and expand upon a series of cross-sectional studies of rat brain tissue in the early 1990s that documented sex hormone receptors in the hippocampus and showed that changes in estradiol levels across the estrous cycle track with differences in dendritic spine density. “The field of neuroendocrinology was really changed in the early ’90s because of this discovery,” Jacobs says. The new work is a “very important advancement,” says John Morrison, professor of neurology at the University of California, Davis, who was not involved in the research. It shows that spines change across the natural cycle of living mice, supporting estradiol’s role in this process, and it links these changes to electrophysiological differences, he says. “The most surprising part of this study is that everything seems to follow each other. Usually biology doesn’t cooperate like this,” Morrison says. Before the early 1990s, estrogens were viewed only as reproductive hormones, and their effects in the brain were thought to be limited to the hypothalamus, says Catherine Woolley, professor of neurobiology at Northwestern University, who worked on the classic rat hippocampus studies when she was a graduate student in the lab of the late Bruce McEwen. 
For that reason, her rat hippocampus results were initially met with “resistance,” she adds. A leader in the field once told her to “get some better advice” from her adviser “because estrogens are reproductive hormones, and they don’t have effects in the hippocampus,” she recalls. © 2025 Simons Foundation

Keyword: Hormones & Behavior; Learning & Memory
Link ID: 29841 - Posted: 06.28.2025

By Marta Hill Every year, black-capped chickadees perform an impressive game of hide-and-seek. These highly visual birds cache tens of thousands of surplus food morsels and then recover them during leaner times. Place cells in the hippocampus may help the birds keep track of their hidden bounty, according to a study published 11 June in Nature. The cells activate not only when a bird visits a food stash but also when it looks at the stash from far away, the study shows. “What is really profound about the work is it’s trying to unpack how it is that we’re able to combine visual information, which is based on where we currently are in the world, with our understanding of the space around us and how we can navigate it,” says Nick Turk-Browne, professor of psychology and director of the Wu Tsai Institute at Yale University, who was not involved in the study. With each gaze shift, the hippocampus first predicts what the bird is about to see and then reacts to what it actually sees, the study shows. “It really fits beautifully into this picture of this dual role for the system in representing actual and representing possible,” says Loren Frank, professor of physiology and psychiatry at the University of California, San Francisco, who was not involved in the work. The findings help explain how the various functions of the hippocampus—navigation, perception, learning and memory—work together, Turk-Browne adds. “If we can have a smart, abstract representation of place that doesn’t depend on actually physically being there, then you can imagine how this can be used to construct memories.” © 2025 Simons Foundation

Keyword: Learning & Memory
Link ID: 29827 - Posted: 06.14.2025

By Laura Dattaro One of Clay Holroyd’s most highly cited papers is a null result. In 2005, he tested a theory he had proposed about a brain response to unexpected rewards and disappointments, but the findings—now cited more than 600 times—didn’t match his expectations, he says. In the years since, other researchers have run similar tests, many of which contradicted Holroyd’s results. But in 2021, EEGManyLabs announced that it would redo Holroyd’s original experiment across 13 labs. In their replication effort, the researchers increased the sample size from 17 to 370 people. The results—the first from EEGManyLabs—published in January in Cortex, failed to replicate the null result, effectively confirming Holroyd’s theory. “Fundamentally, I thought that maybe it was a power issue,” says Holroyd, a cognitive neuroscientist at Ghent University. “Now this replication paper quite nicely showed that it was a power issue.” The two-decade tale demonstrates why pursuing null findings and replications—the focus of this newsletter—is so important. Holroyd’s 2002 theory proposed that previously observed changes in dopamine associated with unexpectedly positive or negative results cause neural responses that can be measured with EEG. The more surprising a result, he posited, the larger the response. To test the idea, Holroyd and his colleagues used a gambling-like task in which they told participants the odds of correctly identifying which of four choices would lead to a 10-cent reward. In reality, the reward was random. When participants received no reward, their neural reaction to the negative result was equally strong regardless of which odds they had been given, contradicting the theory. © 2025 Simons Foundation
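Holroyd attributes his original null result to low statistical power. A quick back-of-the-envelope calculation shows why going from 17 to 370 participants matters so much. This sketch uses a normal approximation and an assumed effect size of d = 0.3; both are my illustrative assumptions, not figures reported by either study:

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample test for effect size d
    with n participants, using a normal approximation to the t-test."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    noncentrality = d * sqrt(n)         # expected z-statistic if d is real
    return 1 - z.cdf(z_crit - noncentrality)

for n in (17, 370):
    print(f"n = {n:3d}: power ≈ {approx_power(0.3, n):.2f}")
```

With 17 participants, a true medium-small effect would be detected only about a quarter of the time; with 370, detection is essentially certain. A null result from the smaller sample therefore says very little on its own.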

Keyword: Attention; Learning & Memory
Link ID: 29814 - Posted: 05.31.2025

By Sydney Wyatt Donald Hebb famously proposed in 1949 that when neurons fire together, the synaptic connections between them strengthen, forming the basis for long-term memories. That theory—which held up in experiments in rat hippocampal slice cultures—has shaped how researchers understand synaptic plasticity ever since. But a new computational modeling study adds to mounting evidence that Hebbian plasticity does not always explain how changing neuronal connections enable learning. Rather, behavioral timescale synaptic plasticity (BTSP), which can strengthen synapses even when neurons fire out of sync, better captures the changes seen in CA1 hippocampal cells as mice learn to navigate a new environment, the study suggests. Hebbian spike-timing-dependent plasticity occurs when a neuron fires just ahead of one it synapses onto, leading to a stronger connection between the two cells. BTSP, on the other hand, relies on a complex spike, or a burst of action potentials, in the postsynaptic cell, which triggers a calcium signal that travels across the dendritic arbor. The signal strengthens synaptic connections with the presynaptic cell that were active within seconds of that spike, causing larger changes in synaptic strength. BTSP helps hippocampal cells establish their place fields, the positions at which they fire, previous work suggests. But it was unclear whether it also contributes to learning, says Mark Sheffield, associate professor of neurobiology at the University of Chicago, who led the new study. The new findings suggest that it does—challenging how researchers traditionally think about plasticity mechanisms in the hippocampus, says Jason Shepherd, associate professor of neurobiology at the University of Utah, who was not involved in the research. “The classic rules of plasticity that we have been sort of thinking about for decades may not be actually how the brain works, and that’s a big deal.” © 2025 Simons Foundation
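The key contrast in the entry above is one of timescales: Hebbian spike-timing-dependent plasticity requires near-coincident pre- and postsynaptic firing, while BTSP reaches back seconds. That difference can be caricatured with a toy eligibility-trace model; the single-exponential form and the time constants below are illustrative assumptions, not parameters from the study:

```python
import math

def plasticity(pre_spikes, t_post, tau):
    """Synaptic change after a postsynaptic event at time t_post:
    proportional to an exponentially decaying eligibility trace left by
    each synapse's last presynaptic spike (decay constant tau, seconds)."""
    return {syn: math.exp(-(t_post - t) / tau) if t <= t_post else 0.0
            for syn, t in pre_spikes.items()}

# Synapses last active 0.02 s, 1 s and 3 s before a postsynaptic event at t = 5 s
pre_spikes = {"A": 4.98, "B": 4.0, "C": 2.0}

# STDP-like rule: only coincidence within tens of milliseconds counts
hebbian = plasticity(pre_spikes, t_post=5.0, tau=0.02)
# BTSP-like rule: a plateau/complex spike reaches back seconds
btsp = plasticity(pre_spikes, t_post=5.0, tau=2.0)

print({k: round(v, 3) for k, v in hebbian.items()})  # only A strengthens
print({k: round(v, 3) for k, v in btsp.items()})     # A, B and C all strengthen
```

Under the millisecond-scale rule, only the synapse active 20 ms before the event changes appreciably; under the seconds-scale rule, synapses active a second or more earlier are also potentiated, which is what lets BTSP bind together inputs spread across a traversal of an environment.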

Keyword: Learning & Memory
Link ID: 29810 - Posted: 05.28.2025

By Ajdina Halilovic When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. “An empty bedroom next to mine. A swing set with two seats instead of one,” he said, recalling the lingering traces of her presence in the house. “There was this missing person — never spoken of — for which I had only one memory.” That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: “Go ask your mother.” Sacktor glumly trudged up the stairs to the kitchen. It’s remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience. Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body’s molecules degrade and are replaced in a matter of days, weeks or, at most, months? In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances. The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation. 
As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond’s integrity and, therefore, the memory. © 2025 Simons Foundation

Keyword: Learning & Memory
Link ID: 29784 - Posted: 05.11.2025

By Giorgia Guglielmi Newly formed memories change over the course of a night’s sleep, a new study in rats suggests. The results reveal that memory processing and consolidation are more complex and prolonged than previously understood, says study investigator Jozsef Csicsvari, professor of systems neuroscience at the Institute of Science and Technology Austria. Sleep has long been known to help consolidate memories, though most studies have tracked only a few hours of this process. The new work monitored memory-related brain activity patterns across almost an entire day—representing a significant step forward, says Lisa Genzel, associate professor of neuroscience at Radboud University, who wasn’t involved in the research. That’s “a heroic effort,” she says. Csicsvari and his team implanted wireless electrodes into the hippocampus of three rats and recorded neuronal activity as the animals learned to navigate a maze in search of hidden pieces of food, rested or slept for 16 to 20 hours after, and then revisited the same food locations the following day. The neurons that fired during learning became active again throughout the rest period, especially during sleep, the team found. This reactivation is a key part of memory consolidation, and it doesn’t just happen immediately after learning; instead, it continues for hours, the study shows. And while the animals slept, their brain activity patterns gradually shifted to resemble the post-sleep recall patterns—a change known as “representational drift” that likely helps the brain weave new information into what it already knows, Csicsvari says. Some neuron groups may be more involved than others in updating memories, the work showed. Some cell types remained stable, whereas others changed their activity. For example, hippocampal neurons called CA1 pyramidal cells showed distinct firing patterns during memory reactivation. And interneurons, too, appeared to play a supporting role, mirroring the changes in pyramidal cells. 
The team published their findings in Neuron in March. © 2025 Simons Foundation
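Representational drift of this kind is typically quantified by comparing population firing-rate vectors over time. A minimal sketch of that comparison, using made-up data rather than the study's recordings (the 50-neuron vectors and the linear interpolation across sleep are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two population firing-rate vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
learning = rng.random(50)                    # firing rates of 50 neurons during learning
recall = learning + rng.normal(0, 0.4, 50)   # drifted pattern at next-day recall

# Mimic gradual drift across sleep: the population pattern interpolates
# from the learning representation toward the recall representation.
for frac in (0.0, 0.5, 1.0):
    pattern = (1 - frac) * learning + frac * recall
    print(f"fraction of sleep elapsed {frac:.1f}: "
          f"similarity to recall = {cosine_similarity(pattern, recall):.2f}")
```

As the pattern drifts toward the recall representation, its similarity to the recall pattern rises toward 1.0, which is the signature the study looked for in the real recordings.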

Keyword: Sleep; Learning & Memory
Link ID: 29777 - Posted: 05.07.2025

By Elise Cutts Food poisoning isn’t an experience you’re likely to forget — and now, scientists know why. A study published April 2 in Nature has unraveled neural circuitry in mice that makes food poisoning so memorable. “We’ve all experienced food poisoning at some point … And not only is it terrible in the moment, but it leads us to not eat those foods again,” says Christopher Zimmerman of Princeton University. Luckily, developing a distaste for foul food doesn’t take much practice — one ill-fated encounter with an undercooked enchilada or contaminated hamburger is enough, even if it takes hours or days for symptoms to set in. The same is true for other animals, making food poisoning one of the best ways to study how our brains connect events separated in time, says neuroscientist Richard Palmiter of the University of Washington in Seattle. Mice usually need an immediate reward or punishment to learn something, Palmiter says; even just a minute’s delay between cause (say, pulling a lever) and effect (getting a treat) is enough to prevent mice from learning. Not so for food poisoning. Despite substantial delays, their brains have no trouble associating an unfamiliar food in the past with tummy torment in the present. Researchers knew that a brain region called the amygdala represents flavors and decides whether or not they’re gross. Palmiter’s group had also shown that the gut tells the brain it’s feeling icky by activating specific “alarm” neurons, called CGRP neurons. “They respond to everything that’s bad,” Palmiter says. © Society for Science & the Public 2000–2025.
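One standard computational account of how a brain bridges such long delays is an eligibility trace: the flavor representation leaves a slowly decaying "tag" that a later aversive signal (such as one from CGRP alarm neurons) can still act on. This is a toy illustration of that general idea, not the circuit model from the Nature paper; the time constants and learning rate are arbitrary assumptions:

```python
import math

def learn_with_delay(tau, delay, aversive_signal=1.0, lr=0.5):
    """Update a flavor-aversion association despite a delayed sickness signal.

    The flavor leaves an eligibility trace that decays exponentially with
    time constant tau (minutes); when the sickness signal arrives after
    `delay` minutes, the weight change is proportional to the remaining trace.
    """
    trace = math.exp(-delay / tau)       # trace remaining at sickness onset
    return lr * aversive_signal * trace  # association-weight update

# A short trace (tau = 1 min) cannot span a 2-hour delay,
# but a long trace (tau = 3 h) still supports learning.
print(learn_with_delay(tau=1.0, delay=120.0))    # effectively zero update
print(learn_with_delay(tau=180.0, delay=120.0))  # sizeable update
```

The contrast between the two calls mirrors the puzzle in the article: ordinary reinforcement learning fails with even a minute of delay, so some longer-lived signal must carry the flavor's "tag" forward in time.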

Keyword: Learning & Memory; Emotions
Link ID: 29756 - Posted: 04.23.2025

William Wright & Takaki Komiyama Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades. But how does your brain achieve this incredible feat? In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn. Learning in the brain The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data. These electrical pulses are communicated to other neurons through connections called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, where the cell integrates all these signals to generate its own electrical pulses. It is the collective activity of these electrical pulses across specific groups of neurons that forms the representations of different information and experiences within the brain. For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain. In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.
© 2010–2025, The Conversation US, Inc.
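The kind of synapse-selection rule the authors describe can be illustrated with the classic Hebbian rule, under which only synapses whose presynaptic and postsynaptic neurons are active together are strengthened. This is the textbook rule, sketched for illustration; it is not the specific rules reported in the Science paper, and the network sizes and learning rate are arbitrary:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each synapse in proportion to the joint activity of its
    presynaptic and postsynaptic neurons (Hebb's rule)."""
    return weights + lr * np.outer(post, pre)

w = np.zeros((3, 4))                    # 4 input neurons -> 3 output neurons
pre = np.array([1.0, 0.0, 1.0, 0.0])    # which inputs fired
post = np.array([0.0, 1.0, 0.0])        # which outputs fired

w = hebbian_update(w, pre, post)
# Only synapses connecting co-active neurons changed: w[1, 0] and w[1, 2].
print(w)
```

A rule like this "solves" credit assignment locally: each synapse decides whether to change using only the activity of the two neurons it connects, without any global supervisor.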

Keyword: Learning & Memory
Link ID: 29754 - Posted: 04.23.2025

By Gayoung Lee edited by Allison Parshall Crows sometimes have a bad rap: they’re said to be loud and disruptive, and myths surrounding the birds tend to link them to death or misfortune. But crows deserve more love and charity, says Andreas Nieder, a neurophysiologist at the University of Tübingen in Germany. They not only can be incredibly cute, cuddly and social but also are extremely smart—especially when it comes to geometry, as Nieder has found. In a paper published on Friday in Science Advances, Nieder and his colleagues report that crows display an impressive aptitude at distinguishing shapes by using geometric irregularities as a cognitive cue. These crows could even discern quite subtle differences. For the experiment, the crows perched in front of a digital screen that, almost like a video game, displayed progressively more complex combinations of shapes. First, the crows were taught to peck at a certain shape for a reward. Then they were presented with that same shape among five others—for example, one star shape placed among five moon shapes—and were rewarded if they correctly picked the "outlier." “Initially [the outlier] was very obvious,” Nieder says. But once the crows appeared to have familiarized themselves with how the “game” worked, Nieder and his team introduced more similar quadrilateral shapes to see whether the crows could still identify outliers. “And they could tell us, for instance, if they saw a figure that was just not a square, slightly skewed, among all the other squares,” Nieder says. “They really could do this spontaneously [and] discriminate the outlier shapes based on the geometric differences without us needing to train them additionally.” Even when the researchers stopped rewarding them with treats, the crows continued to peck the outliers. © 2024 Scientific American

Keyword: Evolution; Intelligence
Link ID: 29741 - Posted: 04.12.2025

By Yasemin Saplakoglu We humans tend to put our own intelligence on a pedestal. Our brains can do math, employ logic, explore abstractions and think critically. But we can’t claim a monopoly on thought. Among a variety of nonhuman species known to display intelligent behavior, birds have been shown time and again to have advanced cognitive abilities. Ravens plan for the future, crows count and use tools, cockatoos open and pillage booby-trapped garbage cans, and chickadees keep track of tens of thousands of seeds cached across a landscape. Notably, birds achieve such feats with brains that look completely different from ours: They’re smaller and lack the highly organized structures that scientists associate with mammalian intelligence. “A bird with a 10-gram brain is doing pretty much the same as a chimp with a 400-gram brain,” said Onur Güntürkün, who studies brain structures at Ruhr University Bochum in Germany. “How is it possible?” Researchers have long debated the relationship between avian and mammalian intelligence. One possibility is that intelligence in vertebrates — animals with backbones, including mammals and birds — evolved once. In that case, both groups would have inherited the complex neural pathways that support cognition from a common ancestor: a lizardlike creature that lived 320 million years ago, when Earth’s continents were squished into one landmass. The other possibility is that the kinds of neural circuits that support vertebrate intelligence evolved independently in birds and mammals. It’s hard to track down which path evolution took, given that any trace of the ancient ancestor’s actual brain vanished in a geological blink. So biologists have taken other approaches — such as comparing brain structures in adult and developing animals today — to piece together how this kind of neurobiological complexity might have emerged. © 2025 Simons Foundation

Keyword: Intelligence; Evolution
Link ID: 29738 - Posted: 04.09.2025