Chapter 19. Language and Lateralization
By John Pavlus

Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine. That's because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process. But what if our neurobiological reality includes a system that behaves something like an LLM? Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain. The specialized system she has described, which she calls "the language network," maps the correspondences between words and their meanings. Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains.

"You can think of the language network as a set of pointers," Fedorenko said. "It's like a map, and it tells you where in the brain you can find different kinds of meaning. It's basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries."

Fedorenko has been gathering biological evidence of this language network for the past 15 years in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn't string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don't possess). Nor is the human language network particularly large: If all of its tissue were clumped together, it would be about the size of a strawberry. But when it is damaged, the effect is profound.
An injured language network can result in forms of aphasia in which sophisticated cognition remains intact but trapped within a brain unable to express it or to make sense of incoming words. © 2025 Simons Foundation
Keyword: Language
Link ID: 30043 - Posted: 12.06.2025
Liam Drew

Paradromics, a neurotechnology developer, announced today that the US Food and Drug Administration (FDA) has approved the first long-term clinical trial of its brain–computer interface (BCI). Early next year, the company — one of the closest rivals to Elon Musk's neurotechnology firm Neuralink — will implant its device in two volunteers who were left unable to speak owing to neurological diseases and injuries. The trial has two goals: to ensure the device is safe, and to restore a person's ability to communicate with real-time speech. "We're very excited about bringing this new hardware into a trial," says Matt Angle, chief executive of Paradromics, which is based in Austin, Texas.

Paradromics' BCI is an array, with an active area roughly 7.5 millimetres in diameter, of thin, stiff platinum–iridium electrodes that penetrate the surface of the cerebral cortex to record from individual neurons around 1.5 mm deep. The array is connected by wire to a power source and wireless transceiver implanted in the individual's chest.

Initially, the two volunteers will each have one electrode array implanted in the area of the motor cortex that controls the lips, tongue and larynx, Angle says. Neural activity will then be recorded from this region as the study participants imagine speaking sentences that are presented to them. Following previous work by researchers who are now collaborating with Paradromics1, the system learns which patterns of neural activity correspond to each intended speech sound. When participants imagine speaking, these neural patterns will be converted into text on a screen for participants to approve, or into a real-time voice output based on old recordings of participants' own voices. This is the first BCI clinical trial to formally target synthetic-voice generation. "Arguably, the greatest quality-of-life change you can deliver right now with BCI is communication," Angle says. © 2025 Springer Nature Limited
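The decoding step Angle describes, learning which patterns of neural activity correspond to each intended speech sound, is at heart a pattern-classification problem. A minimal illustrative sketch follows; Paradromics' actual pipeline is not described in the article, and the firing-rate features, phoneme labels and nearest-template classifier here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3-channel firing-rate vectors recorded while a
# participant imagines three speech sounds. The patterns, labels and
# noise level are invented; real arrays carry hundreds of channels.
PHONEMES = ["ah", "ee", "oo"]
TRUE_PATTERNS = {
    "ah": np.array([5.0, 1.0, 1.0]),
    "ee": np.array([1.0, 5.0, 1.0]),
    "oo": np.array([1.0, 1.0, 5.0]),
}

def record_trials(phoneme, n=50):
    """Simulate n noisy firing-rate vectors for one imagined phoneme."""
    return TRUE_PATTERNS[phoneme] + rng.normal(0.0, 0.5, size=(n, 3))

# "Training": average labelled trials into one template per phoneme.
templates = {p: record_trials(p).mean(axis=0) for p in PHONEMES}

def decode(activity):
    """Return the phoneme whose template is nearest to the activity vector."""
    return min(templates, key=lambda p: np.linalg.norm(activity - templates[p]))

# Decode one fresh trial of an imagined "ee".
print(decode(record_trials("ee", n=1)[0]))
```

Real systems use far richer sequence models, but the core idea, mapping labelled snapshots of neural activity to intended speech sounds, is the same.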
Keyword: Robotics
Link ID: 30019 - Posted: 11.22.2025
By Kate Graham-Shaw

A long time ago in a galaxy far, far away, R2-D2 beeped and booped — and now birds that copy the Star Wars character are giving scientists fresh insight into how different species imitate complex sounds. A study, published recently in Scientific Reports, analyzed the sounds of nine species of parrots, including budgies, as well as European starlings to see how accurately each bird mimicked R2-D2's robotic whirring. Researchers ran acoustic analyses on samples of birds imitating the plucky droid that were already available online, comparing how statistically similar each bird's noises were to a model of R2-D2's sounds.

The starlings, a type of songbird, emerged as star vocalists: their ability to produce "multiphonic" noises — in their case, two different notes or tones expressed simultaneously — allowed them to replicate R2-D2's complex chirps more accurately. The parrots and budgies, which produce only "monophonic" (single-tone) noises, imitated the droid's sounds with less accuracy and musicality. The differing abilities stem from physical variations in the birds' syrinx — a unique vocal organ that sits at the base of the avian windpipe. "Starlings can produce two sounds at once because they control both sides of the syrinx independently," says study co-author Nick Dam, an evolutionary biologist at Leiden University in the Netherlands. "Parrots are physically incapable of producing two tones simultaneously."

It isn't exactly known why different species developed differing control over their syrinx. "Likely, some ancestor of songbirds happened to evolve the ability to control the muscles on both sides of the syrinx, and this helped them in some way," says University of Northern Colorado biologist Lauryn Benedict, who wasn't involved in the study but sometimes works with its authors. One of the leading explanations involves mating: the better at singing a male songbird is, the more females he attracts. © 2025 SCIENTIFIC AMERICAN
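The "acoustic analyses" in the study amount to measuring how statistically similar each bird's sound is to a model of R2-D2's. One crude stand-in for that idea (not the authors' method, which the excerpt does not detail) is cosine similarity between magnitude spectra, which shows why a two-tone imitation matches a two-tone target far better than a single tone can:

```python
import numpy as np

SR = 44_100                     # sample rate, Hz
t = np.linspace(0, 0.5, int(SR * 0.5), endpoint=False)

def spectrum(x):
    """Normalized magnitude spectrum of a signal."""
    mag = np.abs(np.fft.rfft(x))
    return mag / np.linalg.norm(mag)

def similarity(a, b):
    """Cosine similarity between two signals' spectra (1.0 = identical)."""
    return float(spectrum(a) @ spectrum(b))

# Toy "R2-D2" target: two simultaneous tones (multiphonic).
target = np.sin(2 * np.pi * 800 * t) + np.sin(2 * np.pi * 1300 * t)
# A starling-like imitation reproduces both tones (amplitudes slightly off);
# a parrot-like imitation can produce only one tone at a time.
starling = 0.9 * np.sin(2 * np.pi * 800 * t) + 1.1 * np.sin(2 * np.pi * 1300 * t)
parrot = np.sin(2 * np.pi * 800 * t)

print(similarity(target, starling), similarity(target, parrot))
```

With these toy signals the two-tone imitation scores close to 1, while the single tone captures only one of the target's two spectral peaks and scores markedly lower.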
Keyword: Animal Communication; Language
Link ID: 30017 - Posted: 11.19.2025
By Kathryn Hulick

Dolphins whistle, humpback whales sing and sperm whales click. Now, a new analysis of sperm whale codas — a unique series of clicks — suggests a previously unrecognized acoustic pattern. The finding, reported November 12 in Open Mind, implies that the whales' clicking communications might be more complex — and meaningful — than previously realized. But the study faces sharp criticism from marine biologists who argue that these patterns are more likely to be recording artifacts or by-products of alertness than language-like signals.

For decades, biologists have known that both the number and timing of clicks in a coda matter and can even identify the clan of a sperm whale (Physeter macrocephalus). Sperm whales in the eastern Caribbean Sea off the coast of Dominica, for example, often use a series of two slow and three quick sounds: "click…click… click-click-click." Relying on artificial intelligence and linguistic analysis, the new study finds that sometimes this series sounds more like "clack…clack… clack-clack-clack," says Shane Gero, a marine biologist at Project CETI, a Dominica-based nonprofit studying sperm whale communication.

Project CETI linguist Gašper Beguš wonders about the meanings a coda might convey. "It sounds really alien," almost like Morse code, says Beguš, of the University of California, Berkeley. Based on his team's results, he now speculates that sperm whales might use clicks or clacks "in a similar way as we use our vowels to transmit meaning." Not everyone agrees with that assessment. The comparison to vowels is "completely nonsense," says Luke Rendell, a marine biologist at the University of St. Andrews in Scotland who has studied sperm whales for more than 30 years. "There's no evidence that the animals are responding in any way to this [new pattern]." © Society for Science & the Public 2000–2025
Keyword: Language; Animal Communication
Link ID: 30013 - Posted: 11.15.2025
Katie Kavanagh

Speaking multiple languages could slow down brain ageing and help to prevent cognitive decline, a study of more than 80,000 people has found. The work, published in Nature Aging on 10 November1, suggests that people who are multilingual are half as likely to show signs of accelerated biological ageing as are those who speak just one language. "We wanted to address one of the most persistent gaps in ageing research, which is if multilingualism can actually delay ageing," says study co-author Agustín Ibáñez, a neuroscientist at the Adolfo Ibáñez University in Santiago, Chile.

Previous research in this area has suggested that speaking multiple languages can improve cognitive functions such as memory and attention2, which boosts brain health as we get older. But many of these studies rely on small sample sizes and use unreliable methods of measuring ageing, which leads to results that are inconsistent and not generalizable. "The effects of multilingualism on ageing have always been controversial, but I don't think there has been a study of this scale before, which seems to demonstrate them quite decisively," says Christos Pliatsikas, a cognitive neuroscientist at the University of Reading, UK. The paper's results could "bring a step change to the field", he adds. They might also "encourage people to go out and try to learn a second language, or keep that second language active", says Susan Teubner-Rhodes, a cognitive psychologist at Auburn University in Alabama. © 2025 Springer Nature Limited
Keyword: Language; Alzheimers
Link ID: 30005 - Posted: 11.12.2025
By Meghie Rodrigues

Babies start processing language before they are born, a new study suggests. A research team in Montreal has found that newborns who had heard short stories in foreign languages while in the womb process those languages similarly to their native tongue. The study, published in August in Nature Communications Biology, is the first to use brain imaging to show what neuroscientists and psychologists had long suspected. Previous research had shown that fetuses and newborns can recognize familiar voices and rhythms and even that they prefer their native language soon after birth. But these findings come mostly from behavioral cues — sucking patterns, head turns or heart-rate changes — rather than direct evidence from the brain.

"We cannot say babies 'learn' a language prenatally," says Anne Gallagher, a neuropsychologist at the University of Montreal and senior author of the study. What we can say, she adds, is that neonates develop familiarity with one or more languages during gestation, which shapes their brain networks at birth.

The research team recruited 60 people for the experiment, all of them about 35 weeks into their pregnancy. Of those, 39 exposed their fetuses to 10 minutes of prerecorded stories in French (their native language) and another 10 minutes of the same stories in either Hebrew or German at least once every other day until birth. These languages were chosen because their acoustic and phonological properties are very distinct from French and from each other, explains co-lead author Andréanne René, a Ph.D. candidate in clinical neuropsychology at the University of Montreal. The other 21 participants were part of the control group; their fetuses were exposed to French in their natural environments, with no special input. © 2025 SCIENTIFIC AMERICAN
Keyword: Language; Development of the Brain
Link ID: 29959 - Posted: 10.08.2025
By Keith Schneider

Jane Goodall, one of the world's most revered conservationists, who earned scientific stature and global celebrity by chronicling the distinctive behavior of wild chimpanzees in East Africa — primates that made and used tools, ate meat, held rain dances and engaged in organized warfare — died on Wednesday in Los Angeles. She was 91. Her death, while on a speaking tour, was confirmed by the Jane Goodall Institute, whose U.S. headquarters are in Washington, D.C. When not traveling widely, she lived in Bournemouth, on the south coast of England, in her childhood home.

Dr. Goodall was 29 in the summer of 1963 when National Geographic magazine published her 7,500-word, 37-page account of the lives of primates she had observed in the Gombe Stream Chimpanzee Reserve in what is now Tanzania. The National Geographic Society had been financially supporting her field studies there. The article, with photographs by Hugo van Lawick, a Dutch wildlife photographer whom she later married, also described Dr. Goodall's struggles to overcome disease, predators and frustration as she tried to get close to the chimps, working from a primitive research station along the eastern shore of Lake Tanganyika.

On the scientific merits alone, her discoveries about how wild chimpanzees raised their young, established leadership, socialized and communicated broke new ground and attracted immense attention and respect among researchers. Stephen Jay Gould, the evolutionary biologist and science historian, said her work with chimpanzees "represents one of the Western world's great scientific achievements." On learning of Dr. Goodall's documented evidence that humans were not the only creatures capable of making and using tools, Louis Leakey, the paleoanthropologist and Dr. Goodall's mentor, famously remarked, "Now we must redefine 'tool,' redefine 'man,' or accept chimpanzees as humans." © 2025 The New York Times Company
Keyword: Evolution; Animal Communication
Link ID: 29953 - Posted: 10.04.2025
By Catherine Offord

As the National Football League's (NFL's) latest season gets underway, so, too, does the conversation about the risk of serious brain damage to its athletes. Multiple well-publicized studies in recent years have linked repetitive head impacts typical in football and other contact sports to an increased likelihood of chronic traumatic encephalopathy (CTE), a neurodegenerative condition characterized by a buildup of misfolded proteins in the brain. Now, a leading CTE research group reports evidence that regular sports-related impacts could cause brain damage before the condition's hallmark features appear.

An analysis of postmortem brain tissue from athletes and nonathletes who died before their early 50s, published today in Nature, identifies multiple cellular differences between the groups, regardless of whether CTE was present. The findings support the idea that contact sports are associated with specific cellular changes in the brain. The study also "helps us understand, or at least ask new questions about, the mechanisms that bridge that acute exposure to later neurodegeneration," says Gil Rabinovici, a neurologist and researcher at the University of California San Francisco who was not involved in the work. But not many brains were examined — fewer than 30 for most analyses. And the study doesn't show that the neuron loss and other brain changes affect a person's cognitive or mental health, cautions Colin Smith, a neuropathologist at the University of Edinburgh. "What does this mean clinically? … That is still the big question hanging here."

CTE recently hit the headlines again after a shooter killed four people and himself in the New York City building housing the NFL's headquarters this summer. In a note found by police, the former high school football player reportedly said he thought he had CTE, and asked that his brain be studied.
Keyword: Brain Injury/Concussion
Link ID: 29935 - Posted: 09.20.2025
Chris Simms

A wearable device could make saying 'Alexa, what time is it?' aloud a thing of the past. An artificial intelligence (AI) neural interface called AlterEgo promises to allow users to communicate silently just by internally articulating words. Sitting over the ear, the device facilitates daily life through live communication with the Internet. "It gives you the power of telepathy but only for the thoughts you want to share," says AlterEgo's chief executive Arnav Kapur, based in Cambridge, Massachusetts. Kapur unveiled the device on 8 September.

The device does not read brain activity, but predicts what a wearer wants to say from signals in the muscles used to speak, then sends audio information back into their ear. The researchers say that their non-invasive technology could help people with motor neuron disease (amyotrophic lateral sclerosis; ALS) and multiple sclerosis (MS) who have trouble speaking, but they also want to make the devices commercially available for general use. In a promotional video on the AlterEgo website, Kapur says that "it's a revolutionary breakthrough with the potential to change the way we interact with our technology, with one another and with the world around us".

"The big question about this is 'how likely is that potential to be realized?'," says Howard Chizeck, an electrical and computer engineer at the University of Washington in Seattle. Chizeck says that the technology seems workable and is less of a privacy risk than listening devices such as Amazon's Alexa, but he isn't convinced that the device will catch on for commercial use. © 2025 Springer Nature Limited
Keyword: Robotics; Language
Link ID: 29934 - Posted: 09.20.2025
Rachel Fieldhouse

Deep in the rainforests of the Democratic Republic of the Congo, Mélissa Berthet found bonobos doing something thought to be uniquely human. During the six months that Berthet observed the primates, they combined calls in several ways to make complex phrases1. In one example, bonobos (Pan paniscus) that were building nests together added a yelp, meaning 'let's do this', to a grunt that says 'look at me'. "It's really a way to say: 'Look at what I'm doing, and let's do this all together'," says Berthet, who studies primates and linguistics at the University of Rennes, France. In another case, a peep that means 'I would like to do this' was followed by a whistle signalling 'let's stay together'. The bonobos combine the two calls in sensitive social contexts, says Berthet. "I think it's to bring peace."

The study, reported in April, is one of several examples from the past few years that highlight just how sophisticated vocal communication in non-human animals can be. In some species of primate, whale2 and bird, researchers have identified features and patterns of vocalization that have long been considered defining characteristics of human language. These results challenge ideas about what makes human language special — and even how 'language' should be defined.

Perhaps unsurprisingly, many scientists turn to artificial intelligence (AI) tools to speed up the detection and interpretation of animal sounds, and to probe aspects of communication that human listeners might miss. "It's doing something that just wasn't possible through traditional means," says David Robinson, an AI researcher at the Earth Species Project, a non-profit organization based in Berkeley, California, that is developing AI systems to decode communication across the animal kingdom. As the research advances, there is increasing interest in using AI tools not only to listen in on animal speech, but also to potentially talk back. © 2025 Springer Nature Limited
Keyword: Animal Communication; Language
Link ID: 29931 - Posted: 09.17.2025
By Jake Buehler

All eight arms of an octopus can be used for whatever their cephalopod owner wishes, but some arms are favored for certain tasks. A new, detailed analysis of how octopuses wield their famously flexible appendages suggests that all eight arms share a skill set, but the front four spend more time on exploration and the back four on movement. The findings, published September 11 in Scientific Reports, provide a comprehensive accounting of how subtle arm movements coordinate the clever invertebrates' repertoire of behaviors.

Octopuses live their lives through their sucker-lined arms, which make up the bulk of their body mass and contain most of their nervous system. Marine biologist Chelsea Bennice wanted to understand how octopuses use the extreme flexibility of their boneless limbs to move, hunt and investigate their environment. Her colleagues had examined some of these behaviors in laboratory settings, but not in the wild. Bennice and her colleagues watched 25 videos, filmed from 2007 to 2015, of multiple species of wild octopuses in Spain and the Caribbean, cataloging their behaviors and arm movements. In all, the researchers logged nearly 4,000 arm actions, which could be broken down into 12 types, including raising, reaching and grasping. The arms could deform in four distinct ways: elongating, shortening, bending and twisting.

The team found that the octopuses were exceptionally ambidextrous. "Octopuses are ultimate multitaskers," says Bennice, of Florida Atlantic University in Boca Raton. "All arms are capable of all arm behaviors and all arm deformations. They can even use multiple arm actions on a single arm and on several arms at the same time." © Society for Science & the Public 2000–2025.
Keyword: Laterality; Evolution
Link ID: 29926 - Posted: 09.13.2025
By Rachel E. Gross

The first thing Debra McVean did when she woke up at the hospital in March 2024 was try to get to the bathroom. But her left arm wouldn't move; neither would her left leg. She was paralyzed all along her left side. She had suffered a stroke, her doctor soon explained. A few nights before, a blood clot had lodged in an artery in her neck, choking off oxygen to her brain cells. Now an M.R.I. showed a dark spot in her brain, an eerie absence directly behind her right eye. What that meant for her prognosis, however, the doctor couldn't say. "Something's missing there, but you don't know what," Ms. McVean's husband, Ian, recalled recently. "And you don't know how that will affect her recovery. It's that uncertainty, it eats away at you."

With a brain injury, unlike a broken bone, there is no clear road to recovery. Nor are there medical tools or therapies to help guide the brain toward healing. All doctors can do is encourage patients to work hard in rehab, and hope. That is why, for decades, the medical attitude toward survivors of brain injury has been largely one of neurological "nihilism," said Dr. Fernando Testai, a neurologist at the University of Illinois, Chicago, and the editor in chief of the Journal of Stroke and Cerebrovascular Diseases. Stroke, he said, "was often seen as a disease of 'diagnose and adios.'"

That may be about to change. A few days after Ms. McVean woke up in the Foothills Medical Center in Calgary, she was told about a clinical trial for a pill, called Maraviroc, that could help the brain recover from a stroke or traumatic injury. Given her level of physical disability, she was a good candidate for the study. She hesitated. The pills were large — horse pills, she called them. But she knew the study could help others, and there was a 50 percent chance that she would get a drug that could help her, too. © 2025 The New York Times Company
Keyword: Stroke; Regeneration
Link ID: 29921 - Posted: 09.06.2025
By Marta Hill

Most people flinch when a rat scurries into their path, but not one New York City-based research team: These researchers actively seek out urban rats to study their day-to-day behaviors and interactions. The work is part of a growing trend of neuroscientists studying animals in their natural environments rather than in the lab. "It's a classic neuroscience model organism, but we don't really know that much about their natural ecology," says team member Emily Mackevicius, senior research scientist at Basis Research Institute. The fact that urban rats are ubiquitous presents a convenient opportunity for naturalistic study, adds Ralph Peterson, a postdoctoral fellow at the institute, who is also part of the team.

Last year, Peterson, Mackevicius and their colleagues held a series of rat behavior stakeouts around New York City — in the Union Square subway station, in a wooded area of Central Park and on a street corner in Harlem. The team used thermal cameras to track the animals as they foraged in the dark and ultrasonic audio recorders to eavesdrop on rat vocalizations. Rats in the wild vocalize differently than laboratory rats, the team found. For example, lab rats typically emit calls at 22 kilohertz in negative contexts, such as when they sense danger, according to a 2021 review article. By contrast, the city rats used that frequency across more varied scenarios, including while they were foraging. The team posted their results on bioRxiv last month. "This creature that we see out at night all the time, running around, is actually vocalizing all the while, and we can't hear it," Peterson says. © 2025 Simons Foundation
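A detail worth unpacking: a 22-kilohertz call sits just above the roughly 20 kHz upper limit of human hearing, which is why ultrasonic recorders are needed and why we never hear the calls. One standard playback trick, time expansion, slows the recording so every frequency in it drops into the audible range. A quick sketch (the 22 kHz figure is from the article; the playback factor and hearing limit are assumed, illustrative values):

```python
HUMAN_HEARING_MAX_HZ = 20_000   # approximate adult upper limit
CALL_HZ = 22_000                # the 22 kHz rat calls from the article

# Nyquist: faithfully capturing a 22 kHz call requires sampling at more
# than twice its frequency.
MIN_SAMPLE_RATE_HZ = 2 * CALL_HZ

def time_expand(freq_hz, factor):
    """Frequency heard when a recording is played back `factor` times slower."""
    return freq_hz / factor

# Played 4x slower, the 22 kHz call drops to 5.5 kHz, well within hearing.
audible = time_expand(CALL_HZ, 4)
print(CALL_HZ > HUMAN_HEARING_MAX_HZ, audible < HUMAN_HEARING_MAX_HZ)
```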
Keyword: Animal Communication; Evolution
Link ID: 29893 - Posted: 08.20.2025
By Carl Zimmer

For decades, neuroengineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could instead record the brain's electric activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published on Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words. Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. "It's a fantastic advance," Dr. Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends. In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech. A computer recorded the electrical activity from the implants as Mr. Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer accurately predicted almost 6,000 words, with an accuracy of 97.5 percent. It could then synthesize those words using Mr. Harrell's voice, based on recordings made before he developed A.L.S. © 2025 The New York Times Company
Keyword: Language; Robotics
Link ID: 29892 - Posted: 08.16.2025
James Doubek

Researchers have some new evidence about what makes birds make so much noise early in the morning, and it's not for some of the reasons they previously thought. For decades, a dominant theory about why birds sing at dawn — called the "dawn chorus" — has been that they can be heard farther and more clearly at that time. Sound travels faster in humid air, and it's more humid early in the morning. It's less windy, too, which is thought to lessen any distortion of their vocalizations.

But scientists from the Cornell Lab of Ornithology's K. Lisa Yang Center for Conservation Bioacoustics and Project Dhvani in India combed through audio recordings of birds in the rainforest. They say they didn't find evidence to back up this "acoustic transmission hypothesis." It was among the hypotheses involving environmental factors. Another is that birds spend their time singing at dawn because there's low light and it's a bad time to look for food. "We basically didn't find much support for some of these environmental cues which have been purported in literature as hypotheses" for why birds sing more at dawn, says Vijay Ramesh, a postdoctoral research associate at Cornell and the study's lead author.

The study, called "Why is the early bird early? An evaluation of hypotheses for avian dawn-biased vocal activity," was published this month in the peer-reviewed journal Philosophical Transactions of the Royal Society B. The researchers didn't definitively point to one reason why the dawn chorus happens, but they found support for the ideas that the early morning racket relates to birds marking their territory after being inactive at night, and to communicating about finding food. © 2025 npr
Keyword: Animal Communication; Evolution
Link ID: 29839 - Posted: 06.21.2025
Associated Press

Prairie dogs bark to alert each other to the presence of predators, with different cries depending on whether the threat is airborne or approaching by land. But their warnings also seem to help a vulnerable grassland bird. Curlews have figured out that if they eavesdrop on alarms from US prairie dog colonies, they may get a jump on predators coming for them, too, according to research published on Thursday in the journal Animal Behavior. "Prairie dogs are on the menu for just about every predator you can think of – golden eagles, red-tailed hawks, foxes, badgers, even large snakes," said Andy Boyce, a research ecologist in Montana at the Smithsonian's National Zoo and Conservation Biology Institute.

Such animals also gladly snack on grassland nesting birds such as the long-billed curlew, so the birds have adapted. Previous research has shown birds frequently eavesdrop on other bird species to glean information about food sources or danger, said Georgetown University ornithologist Emily Williams, who was not involved in the study. But, so far, scientists have documented only a few instances of birds eavesdropping on mammals. "That doesn't necessarily mean it's rare in the wild," she said. "It just means we haven't studied it yet."

Prairie dogs, a type of ground squirrel, live in large colonies with a series of burrows that may stretch for miles underground, especially on the vast US plains. When they hear each other's barks, they either stand alert watching or dive into their burrows. "Those little barks are very loud; they can carry quite a long way," said research co-author Andrew Dreelin, who also works for the Smithsonian. © 2025 Guardian News & Media Limited
Keyword: Animal Communication; Language
Link ID: 29832 - Posted: 06.18.2025
David Farrier

Charles Darwin suggested that humans learned to speak by mimicking birdsong: our ancestors' first words may have been a kind of interspecies exchange. Perhaps it won't be long before we join the conversation once again. The race to translate what animals are saying is heating up, with riches as well as a place in history at stake. The Jeremy Coller Foundation has promised $10m to whichever researchers can crack the code. This is a race fuelled by generative AI; large language models can sort through millions of recorded animal vocalisations to find their hidden grammars.

Most projects focus on cetaceans because, like us, they learn through vocal imitation and, also like us, they communicate via complex arrangements of sound that appear to have structure and hierarchy. Sperm whales communicate in codas – rapid sequences of clicks, each as brief as 1,000th of a second. Project Ceti (the Cetacean Translation Initiative) is using AI to analyse codas in order to reveal the mysteries of sperm whale speech. There is evidence the animals take turns, use specific clicks to refer to one another, and even have distinct dialects. Ceti has already isolated a click that may be a form of punctuation, and they hope to speak whaleish as soon as 2026.

The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals' interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another's native vocabulary. The prospect of speaking dolphin or whale is irresistible. And it seems that they are just as enthusiastic.
In November last year, scientists in Alaska recorded an acoustic “conversation” with a humpback whale called Twain, in which they exchanged a call-and-response form known as “whup/throp” with the animal over a 20-minute period. In Florida, a dolphin named Zeus was found to have learned to mimic the vowel sounds A, E, O and U. © 2025 Guardian News & Media Limited
Keyword: Language; Evolution
Link ID: 29821 - Posted: 06.04.2025
Danielle Wilhour Cerebrospinal fluid, or CSF, is a clear, colorless liquid that plays a crucial role in maintaining the health and function of your central nervous system. It cushions the brain and spinal cord, provides nutrients and removes waste products. Despite its importance, problems related to CSF often go unnoticed until something goes wrong. Recently, cerebrospinal fluid disorders drew public attention with the announcement that musician Billy Joel had been diagnosed with normal pressure hydrocephalus. In this condition, excess CSF accumulates in the brain’s cavities, enlarging them and putting pressure on surrounding brain tissue even though diagnostic readings appear normal. Because normal pressure hydrocephalus typically develops gradually and can mimic symptoms of other neurodegenerative diseases, such as Alzheimer’s or Parkinson’s disease, it is often misdiagnosed. I am a neurologist and headache specialist. In my work treating patients with CSF pressure disorders, I have seen these conditions present in many different ways. Here’s what happens when your cerebrospinal fluid stops working. What is cerebrospinal fluid? CSF is made of water, proteins, sugars, ions and neurotransmitters. It is primarily produced by a network of cells called the choroid plexus, which is located in the brain’s ventricles, or cavities. The choroid plexus produces approximately 500 milliliters (17 ounces) of CSF daily, but only about 150 milliliters (5 ounces) are present within the central nervous system at any given time due to constant absorption and replenishment in the brain. This fluid circulates through the ventricles of the brain, the central canal of the spinal cord and the subarachnoid space surrounding the brain and spinal cord. © 2010–2025, The Conversation US, Inc.
Keyword: Biomechanics; Stroke
Link ID: 29812 - Posted: 05.31.2025
By Paula Span & KFF Health News Kristin Kramer woke up early on a Tuesday morning 10 years ago because one of her dogs needed to go out. Then, a couple of odd things happened. When she tried to call her other dog, “I couldn’t speak,” she said. As she walked downstairs to let them into the yard, “I noticed that my right hand wasn’t working.” But she went back to bed, “which was totally stupid,” said Kramer, now 54, an office manager in Muncie, Indiana. “It didn’t register that something major was happening,” especially because, reawakening an hour later, “I was perfectly fine.” So she “just kind of blew it off” and went to work. It’s a common response to the neurological symptoms that signal a TIA, a transient ischemic attack or ministroke. At least 240,000 Americans experience one each year, with the incidence increasing sharply with age. Because the symptoms disappear quickly, usually within minutes, people don’t seek immediate treatment, putting them at high risk for a bigger stroke. Kramer felt some arm tingling over the next couple of days and saw her doctor, who found nothing alarming on a CT scan. But then she started “jumbling” her words and finally had a relative drive her to an emergency room. By then, she could not sign her name. After an MRI, she recalled, “my doctor came in and said, ‘You’ve had a small stroke.’” Did those early-morning aberrations constitute a TIA? Might a 911 call and an earlier start on anticlotting drugs have prevented her stroke? “We don’t know,” Kramer said. She’s doing well now, but faced with such symptoms again, “I would seek medical attention.” © 2025 Scientific American
Keyword: Stroke
Link ID: 29808 - Posted: 05.28.2025
Sofia Marie Haley I approach a flock of mountain chickadees feasting on pine nuts. A cacophony of sounds, coming from the many different bird species that rely on the Sierra Nevada’s diverse pine cone crop, fills the crisp mountain air. The strong “chick-a-dee” call sticks out among the bird vocalizations. The chickadees are communicating to each other about food sources – and my approach. Mountain chickadees are members of the family Paridae, which is known for its complex vocal communication systems and cognitive abilities. Along with my advisers, behavioral ecologists Vladimir Pravosudov and Carrie Branch, I’m studying mountain chickadees at our study site in Sagehen Experimental Forest, outside of Truckee, California, for my doctoral research. I am focusing on how these birds convey a variety of information with their calls. The chilly autumn air on top of the mountain reminds me that it will soon be winter. It is time for the mountain chickadees to leave the socially monogamous partnerships they had while raising their chicks and form larger flocks. Forming social groups is not always simple; young chickadees are joining new flocks, and social dynamics need to be established before the winter storms arrive. I can hear them working this out vocally. There’s an unusual variety of complex calls, with melodic “gargle calls” at the forefront, coming from individuals announcing their dominance over other flock members. Examining and decoding bird calls is becoming an increasingly popular field of study, as scientists like me are discovering that many birds – including mountain chickadees – follow systematic rules to share important information, stringing together syllables like words in a sentence. © 2010–2025, The Conversation US, Inc.
Keyword: Language; Evolution
Link ID: 29807 - Posted: 05.28.2025



