Chapter 19. Language and Lateralization


Alejandra Marquez Janse & Christopher Intagliata Imagine you're moving to a new country on the other side of the world. Besides the geographical and cultural changes, a key difference will be the language. But will your pets notice the difference? It was a question that nagged at Laura Cuaya, a brain researcher at the Neuroethology of Communication Lab at Eötvös Loránd University in Budapest. "When I moved from Mexico to Hungary to start my post-doc research, all was new for me. Obviously, here, people in Budapest speak Hungarian. So you've had a different language, completely different for me," she said. The language was also new to her two dogs: Kun Kun and Odín. "People are super friendly with their dogs [in Budapest]. And my dogs, they are interested in interacting with people," Cuaya said. "But I wonder, did they also notice people here ... spoke a different language?" Cuaya set out to find the answer. She and her colleagues designed an experiment with 18 volunteer dogs — including her two border collies — to see if they could differentiate between two languages. Kun Kun and Odín were used to hearing Spanish; the other dogs, Hungarian. The dogs sat still within an MRI machine while listening to an excerpt from the story The Little Prince. They heard one version in Spanish, and another in Hungarian. Then the scientists analyzed the dogs' brain activity. © 2022 npr

Keyword: Language; Evolution
Link ID: 28145 - Posted: 01.08.2022

Jon Hamilton When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It's a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak. Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron. If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo. "This suggests that there is a hardwired network of neurons that is fundamental to speech," says Dr. Kevin Yackle, the study's senior author and a researcher at the University of California, San Francisco. Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological "building blocks" that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study. But "there is just a big difference between a mouse brain and a human brain," Poeppel says. So the human version of this building block may not look the same. © 2022 npr

Keyword: Language; Evolution
Link ID: 28144 - Posted: 01.08.2022

Chloe Tenn On October 4, physiologist David Julius and neurobiologist Ardem Patapoutian were awarded the Nobel Prize in Physiology or Medicine for their work on temperature, pain, and touch perception. Julius researched the burning sensation people experience from chilies, and identified an ion channel, TRPV1, that is activated by heat. Julius and Patapoutian then separately reported in 2002 on the TRPM8 ion channel that senses menthol’s cold. Patapoutian’s group went on to discover the PIEZO1 and PIEZO2 ion channels that are involved in sensing mechanical pressure. The Nobel Committee wrote that the pair’s work inspired further research into understanding how the nervous system senses temperature and mechanical stimuli and that the laureates “identified critical missing links in our understanding of the complex interplay between our senses and the environment.” This year saw innovations in augmenting the brain’s capabilities by plugging it in to advanced computing technology. For example, a biology teacher who lost her vision 16 years ago was able to distinguish shapes and letters with the help of special glasses that interfaced with electrodes implanted in her brain. In a similar vein, a computer connected to a brain-implant system discerned brain signals for handwriting in a paralyzed man, enabling him to type up to 90 characters per minute with an accuracy above 90 percent. Such studies are a step forward for technologies that marry cutting-edge neuroscience and computational innovation in an attempt to improve people’s lives. © 1986–2021 The Scientist.

Keyword: Pain & Touch; Language
Link ID: 28134 - Posted: 12.31.2021

Jeanne Paz Blocking an immune system molecule that accumulates after traumatic brain injury could significantly reduce the injury’s detrimental effects, according to a recent mouse study my neuroscience lab and I published in the journal Science. The cerebral cortex, the part of the brain involved in thinking, memory and language, is often the primary site of head injury because it sits directly beneath the skull. However, we found that another region near the center of the brain that regulates sleep and attention, the thalamus, was even more damaged than the cortex months after the injury. This may be due to increased levels of a molecule called C1q, which triggers a part of the immune system called the classical complement pathway. This pathway plays a key role in rapidly clearing pathogens and dead cells from the body and helps control the inflammatory immune response. C1q plays both helpful and harmful roles in the brain. On the one hand, accumulation of C1q in the brain can trigger abnormal elimination of synapses – the structures that allow neurons to communicate with one another – and contribute to neurodegenerative disease. On the other hand, C1q is also involved in normal brain development and protects the central nervous system from infection. In the case of traumatic brain injury, we found that C1q lingered in the thalamus at abnormally high levels for months after the initial injury and was associated with inflammation, dysfunctional brain circuits and neuronal death. This suggests that higher levels of C1q in the thalamus could contribute to several long-term effects of traumatic brain injury, such as sleep disruption and epilepsy. © 2010–2021, The Conversation US, Inc.

Keyword: Brain Injury/Concussion; Neuroimmunology
Link ID: 28112 - Posted: 12.15.2021

By Erin Blakemore Anger — such as road rage and the simmering displeasure of the ongoing pandemic — is the watchword for 2021. But be careful — those big emotions could trigger a stroke. Researchers in a global study devoted to figuring out stroke triggers found that about 1 in 11 stroke patients experience anger or emotional upset in the hour before their stroke symptoms begin. The study, published in the European Heart Journal, looked at data from 13,462 patients in 32 countries who had strokes. The patients completed extensive questionnaires during the first three days after they were hospitalized, answering questions about their medical history and what they had been doing and feeling before their stroke. Just over 8 percent of the patients surveyed said they had experienced anger or emotional upset within a day of symptom onset, which served as the control period. Just over 9 percent said they had been angry or upset within an hour of the first symptoms of their stroke, which was the test period. The risk of a stroke was higher in the test period when compared with the control period, the researchers said. “Our research found that anger or emotional upset was linked to an approximately 30% increase in risk of stroke during one hour after an episode — with a greater increase if the patient did not have a history of depression,” Andrew Smyth, a professor of clinical epidemiology at NUI Galway in Ireland who co-led the study, said in a statement. Lower education upped the odds of having a stroke linked with anger or emotional upset, as well.
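The comparison behind the "approximately 30%" figure can be sketched with simple case-crossover arithmetic. This is a toy illustration: only the cohort size (13,462) comes from the article; the exposure counts below are hypothetical, chosen to land near the reported estimate, and the published figure comes from a properly adjusted statistical model.

```python
# Toy case-crossover arithmetic (hypothetical exposure counts, not study data).
# Each patient serves as their own control: exposure (anger/emotional upset)
# in the hour before stroke onset is compared with exposure in an earlier
# control window.

patients = 13_462        # cohort size reported in the article
exposed_hazard = 1_233   # hypothetical: upset in the hour before onset
exposed_control = 975    # hypothetical: upset in the control window

def odds(exposed, total):
    """Odds of exposure: exposed vs. not exposed."""
    return exposed / (total - exposed)

odds_ratio = odds(exposed_hazard, patients) / odds(exposed_control, patients)
print(f"approximate odds ratio: {odds_ratio:.2f}")  # about 1.3, i.e. ~30% higher risk
```

The point of the sketch is only to show where a "30% increase" number comes from: the odds of having been angry in the hazard window divided by the odds in the control window.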

Keyword: Stroke; Emotions
Link ID: 28109 - Posted: 12.15.2021

Daisy Yuhas Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence. This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal how the neurological patterns that support this behavior are very similar to those seen in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages. Can you tell me a little bit about what drew you to this topic? I grew up in a bilingual household. My mother is from South Korea; my dad is African-American. So I grew up code switching a lot between Korean and English, as well as different varieties of English, such as African-American English and the more mainstream, standardized version. When you spend a lot of time code switching, and then you realize that this is something that is not well understood from a linguistic perspective, nor from a neurobiological perspective, you realize, “Oh, this is open territory.” © 2021 Scientific American

Keyword: Language
Link ID: 28095 - Posted: 12.01.2021

Andrew Gregory Health editor Drinking coffee or tea may be linked with a lower risk of stroke and dementia, according to the largest study of its kind. Strokes cause 10% of deaths globally, while dementia is one of the world’s biggest health challenges – 130 million are expected to be living with it by 2050. In the research, 365,000 people aged between 50 and 74 were followed for more than a decade. At the start, the participants, who were involved in the UK Biobank study, self-reported how much coffee and tea they drank. Over the research period, 5,079 of them developed dementia and 10,053 went on to have at least one stroke. Researchers found that people who drank two to three cups of coffee or three to five cups of tea a day, or a combination of four to six cups of coffee and tea, had the lowest risk of stroke or dementia. Those who drank two to three cups of coffee and two to three cups of tea daily had a 32% lower risk of stroke. These people had a 28% lower risk of dementia compared with those who did not drink tea or coffee. The research, by Yuan Zhang and colleagues from Tianjin Medical University, China, suggests drinking coffee alone or in combination with tea is also linked with lower risk of post-stroke dementia. Writing in the journal Plos Medicine, the authors said: “Our findings suggested that moderate consumption of coffee and tea separately or in combination were associated with lower risk of stroke and dementia.” © 2021 Guardian News & Media Limited

Keyword: Stroke; Drug Abuse
Link ID: 28082 - Posted: 11.20.2021

Kate Wild “The skull acts as a bastion of privacy; the brain is the last private part of ourselves,” Australian neurosurgeon Tom Oxley says from New York. Oxley is the CEO of Synchron, a neurotechnology company born in Melbourne that has successfully trialled hi-tech brain implants that allow people to send emails and texts purely by thought. In July this year, it became the first company in the world, ahead of competitors like Elon Musk’s Neuralink, to gain approval from the US Food and Drug Administration (FDA) to conduct clinical trials of brain computer interfaces (BCIs) in humans in the US. Synchron has already successfully fed electrodes into paralysed patients’ brains via their blood vessels. The electrodes record brain activity and feed the data wirelessly to a computer, where it is interpreted and used as a set of commands, allowing the patients to send emails and texts. BCIs, which allow a person to control a device via a connection between their brain and a computer, are seen as a gamechanger for people with certain disabilities. “No one can see inside your brain,” Oxley says. “It’s only our mouths and bodies moving that tells people what’s inside our brain … For people who can’t do that, it’s a horrific situation. What we’re doing is trying to help them get what’s inside their skull out. We are totally focused on solving medical problems.” BCIs are one of a range of developing technologies centred on the brain. Brain stimulation is another, which delivers targeted electrical pulses to the brain and is used to treat cognitive disorders. Others, like imaging techniques fMRI and EEG, can monitor the brain in real time. “The potential of neuroscience to improve our lives is almost unlimited,” says David Grant, a senior research fellow at the University of Melbourne. “However, the level of intrusion that would be needed to realise those benefits … is profound”. © 2021 Guardian News & Media Limited

Keyword: Brain imaging; Language
Link ID: 28070 - Posted: 11.09.2021

Jon Hamilton Headaches, nausea, dizziness, and confusion are among the most common symptoms of a concussion. But researchers say a blow to the head can also make it hard to understand speech in a noisy room. "Making sense of sound is one of the hardest jobs that we ask our brains to do," says Nina Kraus, a professor of neurobiology at Northwestern University. "So you can imagine that a concussion, getting hit in the head, really does disrupt sound processing." About 15% to 20% of concussions cause persistent sound-processing difficulties, Kraus says, which suggests that hundreds of thousands of people are affected each year in the U.S. The problem is even more common in the military, where many of the troops who saw combat in Iraq and Afghanistan sustained concussions from roadside bombs. From ear to brain Our perception of sound starts with nerve cells in the inner ear that transform pressure waves into electrical signals, Kraus says. But it takes a lot of brain power to transform those signals into the auditory world we perceive. The brain needs to compare the signals from two ears to determine the source of a sound. Then it needs to keep track of changes in volume, pitch, timing and other characteristics. Kraus's lab, called Brainvolts, is conducting a five-year study of 500 elite college athletes to learn how a concussion can affect the brain's ability to process the huge amount of auditory information it receives. And she devotes an entire chapter to concussion in her 2021 book, Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World. © 2021 npr

Keyword: Brain Injury/Concussion; Hearing
Link ID: 28064 - Posted: 11.06.2021

Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.

Keyword: Language; Hearing
Link ID: 28058 - Posted: 10.30.2021

Nicola Davis They have fluffy ears, a penetrating stare and a penchant for monogamy. But it turns out that indris – a large, critically endangered species of lemur – have an even more fascinating trait: an unexpected sense of rhythm. Indri indri are known for their distinctive singing, a sound not unlike a set of bagpipes being stepped on. The creatures often strike up a song with members of their family either in duets or choruses, featuring sounds from roars to wails. Now scientists say they have analysed the songs of 39 indris living in the rainforest of Madagascar, revealing that – like humans – the creatures employ what are known as categorical rhythms. These rhythms are essentially distinctive and predictable patterns of intervals between the onset of notes. For example in a 1:1 rhythm, all the intervals are of equal length, while a 1:2 rhythm has some twice as long as those before or after – like the opening bars of We Will Rock You by Queen. “They are quite predictable [patterns], because the next note is going to come either one unit or two whole units after the previous note,” said Dr Andrea Ravignani, co-author of the research from the Max Planck Institute for Psycholinguistics. While the 1:1 rhythms have previously been identified in certain songbirds, the team say their results are the first time categorical rhythms have been identified in a non-human mammal. “The evidence is even stronger than in birds,” said Ravignani. © 2021 Guardian News & Media Limited
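The interval-ratio idea described above can be sketched in a few lines (a hypothetical illustration of the general technique, not the authors' analysis code). For each pair of adjacent inter-onset intervals, the ratio r = i1 / (i1 + i2) is computed: an isochronous 1:1 rhythm clusters near 0.5, while a 1:2 rhythm clusters near 1/3 and 2/3.

```python
# Sketch of categorical-rhythm detection from note onset times (in ms).
# Example onsets are invented; the function illustrates the interval-ratio idea.

def interval_ratios(onsets):
    """For each pair of adjacent inter-onset intervals i1, i2, return
    r = i1 / (i1 + i2). A 1:1 rhythm clusters near 0.5; a 1:2 rhythm
    clusters near 1/3 and 2/3."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return [i1 / (i1 + i2) for i1, i2 in zip(intervals, intervals[1:])]

# Isochronous (1:1) sequence: a note onset every 400 ms.
print(interval_ratios([0, 400, 800, 1200]))        # [0.5, 0.5]

# Alternating short-long (1:2) sequence: 300 ms then 600 ms intervals.
print(interval_ratios([0, 300, 900, 1200, 1800]))  # values near 1/3 and 2/3
```

In a real recording the onsets are noisy, so the ratios form clusters around these categorical values rather than hitting them exactly.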

Keyword: Animal Communication; Language
Link ID: 28050 - Posted: 10.27.2021

By Rachel Fritts Across North America, hundreds of bird species waste time and energy raising chicks that aren’t their own. They’re the victims of a “brood parasite” called the cowbird, which adds its own egg to their clutch, tricking another species into raising its offspring. One target, the yellow warbler, has a special call to warn egg-warming females when cowbirds are casing the area. Now, researchers have found the females act on that warning 1 day later—suggesting their long-term memories might be much better than thought. “It’s a very sophisticated and subtle behavioral response,” says Erick Greene, a behavioral ecologist at the University of Montana, Missoula, who was not involved in the study. “Am I surprised? I guess I’m more in awe. It’s pretty dang cool.” Birds have been dazzling scientists with their intellects for decades. Western scrub jays, for instance, can remember where they’ve stored food for the winter—and can even keep track of when it will spoil. There’s evidence that other birds might have a similarly impressive ability to remember certain meaningful calls. “Animals are smart in the context in which they need to be smart,” says Mark Hauber, an animal behavior researcher at the University of Illinois, Urbana-Champaign (UIUC), and the Institute of Advanced Studies in Berlin, who co-authored the new study. He wanted to see whether yellow warblers had the capacity to remember their own important warning call known as a seet. The birds make the staccato sound of this call only when a cowbird is near. When yellow warbler females hear it, they go back to their nests and sit tight. (It could just as well be called a “seat” call.) But it’s been unclear whether they still remember the warning in the morning. © 2021 American Association for the Advancement of Science.

Keyword: Animal Communication; Learning & Memory
Link ID: 28039 - Posted: 10.16.2021

Linda Geddes Your dog might follow commands such as “sit”, or become uncontrollably excited at the mention of the word “walkies”, but when it comes to remembering the names of toys and other everyday items, most seem pretty absent-minded. Now a study of six “genius dogs” has advanced our understanding of dogs’ memories, suggesting some of them possess a remarkable grasp of the human language. Hungarian researchers spent more than two years scouring the globe for dogs who could recognise the names of their various toys. Although most can learn commands to some degree, learning the names of items appears to be a very different task, with most dogs unable to master this skill. Max (Hungary), Gaia (Brazil), Nalani (Netherlands), Squall (US), Whisky (Norway), and Rico (Spain) made the cut after proving they knew the names of more than 28 toys, with some knowing more than 100. They were then enlisted to take part in a series of livestreamed experiments known as the Genius Dog Challenge. “These gifted dogs can learn new names of toys in a remarkable speed,” said Dr Claudia Fugazza at Eötvös Loránd University in Budapest, who led the research team. “In our previous study we found that they could learn a new toy name after hearing it only four times. But, with such short exposure, they did not form a long-term memory of it.” To further push the dogs’ limits, their owners were tasked with teaching them the names of six, and then 12 new toys in a single week. © 2021 Guardian News & Media Limited

Keyword: Animal Communication; Language
Link ID: 28023 - Posted: 10.06.2021

By Jackie Rocheleau Elevated blood levels of a specific protein may help scientists predict who has a better chance of bouncing back from a traumatic brain injury. The protein, called neurofilament light or NfL for short, lends structural support to axons, the tendrils that send messages between brain cells. Levels of NfL peak on average at 10 times the typical level 20 days after injury and stay above normal a year later, researchers report September 29 in Science Translational Medicine. The higher the peak NfL blood concentrations after injury, the tougher the recovery for people with TBI six and 12 months later, shows the study of 197 people treated at eight trauma centers across Europe for moderate to severe TBI. Brain scans of 146 participants revealed that their peak NfL concentrations predicted the extent of brain shrinkage after six months, and axon damage at six and 12 months after injury, neurologist Neil Graham of Imperial College London and his colleagues found. These researchers also had a unique opportunity to check that the blood biomarker, which gives indirect clues about the brain injury, actually measured what was happening in the brain. In 18 of the participants who needed brain surgery, researchers sampled the fluid surrounding injured neurons. NfL concentrations there correlated with NfL concentrations in the blood. “The work shows that a new ultrasensitive blood test can be used to accurately diagnose traumatic brain injury,” says Graham. “This blood test can predict quite precisely who’s going to make a good recovery and who’s going to have more difficulties.” © Society for Science & the Public 2000–2021.

Keyword: Brain Injury/Concussion
Link ID: 28017 - Posted: 10.02.2021

By Sierra Carter Black women who have experienced more racism throughout their lives have stronger brain responses to threat, which may hurt their long-term health, according to a new study I conducted with clinical neuropsychologist Negar Fani and other colleagues. I am part of a research team that for more than 15 years has studied the ways stress related to trauma exposure can affect the mind and body. In our recent study, we took a closer look at a stressor that Black Americans disproportionately face in the United States: racism. My colleagues and I completed research with 55 Black women who reported how much they’d been exposed to traumatic experiences, such as childhood abuse and physical or sexual violence, and to racial discrimination, experiencing unfair treatment due to race or ethnicity. We asked them to focus on a task that required attention while simultaneously looking at stressful images. We used functional MRI to observe their brain activity during that time. We found that Black women who reported more experiences of racial discrimination had more response activity in brain regions that are associated with vigilance and watching out for threat — that is, the middle occipital cortex and ventromedial prefrontal cortex. Their reactions were above and beyond the response caused by traumatic experiences not related to racism. Our research suggests that racism had a traumalike effect on Black women’s health; being regularly attuned to the threat of racism can tax important body-regulation tools and worsen brain health.

Keyword: Stress; Brain Injury/Concussion
Link ID: 28015 - Posted: 10.02.2021

By Sam Roberts Washoe was 10 months old when her foster parents began teaching her to talk, and five months later they were already trumpeting her success. Not only had she learned words; she could also string them together, creating expressions like “water birds” when she saw a pair of swans and “open flower” to gain admittance to a garden. Washoe was a chimpanzee. She had been born in West Africa, probably orphaned when her mother was killed, sold to a dealer, flown to the United States for use in testing by the Air Force and adopted by R. Allen Gardner and his wife, Beatrix. She was raised as if she were a human child. She craved oatmeal with onions and pumpkin pudding. “The object of our research was to learn how much chimps are like humans,” Professor Gardner told Nevada Today, a University of Nevada publication, in 2007. “To measure this accurately, chimps would be needed to be raised as human children, and to do that, we needed to share a common language.” Washoe ultimately learned some 200 words, becoming what researchers said was the first nonhuman to communicate using sign language developed for the deaf. Professor Gardner, an ethologist who, with his wife, raised the chimpanzee for nearly five years, died on Aug. 20 at his ranch near Reno, Nev. He was 91. His death was announced by the University of Nevada, Reno, where he had joined the faculty in 1963 and conducted his research until he retired in 2010. When scientific journals reported in 1967 that Washoe (pronounced WA-sho), named after a county in Nevada, had learned to recognize and use multiple gestures and expressions in sign language, the news electrified the world of psychologists and ethologists who study animal behavior. © 2021 The New York Times Company

Keyword: Language; Evolution
Link ID: 28013 - Posted: 10.02.2021

By Jonathan Lambert Vampire bats may be bloodthirsty, but that doesn’t mean they can’t share a drink with friends. Fights can erupt among bats over gushing wounds bitten into unsuspecting animals. But bats that have bonded while roosting often team up to drink blood away from home, researchers report September 23 in PLOS Biology. Vampire bats (Desmodus rotundus) can form long-term social bonds with each other through grooming, sharing regurgitated blood meals and generally hanging out together at the roost (SN: 10/31/19). But whether these friendships, which occur between both kin and nonkin, extend to the bats’ nightly hunting had been unclear. “They’re flying around out there, but we didn’t know if they were still interacting with each other,” says Gerald Carter, an evolutionary biologist at Ohio State University in Columbus. To find out, Carter and his colleague Simon Ripperger of the Museum für Naturkunde in Berlin, built on previous research that uncovered a colony’s social network using bat backpacks. Tiny computer sensors glued to 50 female bats in Tolé, Panama, continuously registered proximity to other sensors both within the roost and outside, revealing when bats met up while foraging. It can take 10 to 40 minutes for a bat to bite a small, diamond-shaped wound into an animal’s flesh, and fights can sometimes break out over access to wounds. But researchers found that bats who are friendly back at the roost likely feed together in the field, potentially saving time and energy. © Society for Science & the Public 2000–2021

Keyword: Evolution
Link ID: 28005 - Posted: 09.25.2021

Jon Hamilton People who have had a stroke appear to regain more hand and arm function if intensive rehabilitation starts two to three months after the injury to their brain. A study of 72 stroke patients suggests this is a "critical period," when the brain has the greatest capacity to rewire, a team reports in this week's journal PNAS. The finding challenges the current practice of beginning rehabilitation as soon as possible after a stroke and suggests intensive rehabilitation should go on longer than most insurance coverage allows, says Elissa Newport, a co-author of the study and director of the Center for Brain Plasticity and Recovery at Georgetown University Medical Center. Newport was speaking in place of the study's lead author, Dr. Alexander Dromerick, who died after the study was accepted but before it was published. If the results are confirmed with other larger studies, "the clinical protocol for the timing of stroke rehabilitation would be changed," says Li-Ru Zhao, a professor of neurosurgery at Upstate Medical University in Syracuse, N.Y., who was not involved in the research. The study involved patients treated at Medstar National Rehabilitation Hospital in Washington, D.C., most in their 50s and 60s. One of the study participants was Anthony McEachern, who was 45 when he had a stroke in 2017. Just a few hours earlier, McEachern had been imitating Michael Jackson dance moves with his kids. But at home that night he found himself unable to stand up. © 2021 npr

Keyword: Stroke; Learning & Memory
Link ID: 28002 - Posted: 09.22.2021

Christie Wilcox If it walks like a duck and talks like a person, it’s probably a musk duck (Biziura lobata)—the only waterfowl species known that can learn sounds from other species. The Australian species’ facility for vocal learning had been mentioned anecdotally in the ornithological literature; now, a paper published September 6 in Philosophical Transactions of the Royal Society B reviews and discusses the evidence, which includes 34-year-old recordings made of a human-reared musk duck named Ripper engaging in an aggressive display while quacking “you bloody fool.” The Scientist spoke with the lead author on the paper, Leiden University animal behavior researcher Carel ten Cate, to learn more about these unique ducks and what their unexpected ability reveals about the evolution of vocal learning. The Scientist: What is vocal learning? Carel ten Cate: Vocal learning, as it is used in this case, is that animals and humans, they learn their sounds from experience. So they learn from what they hear around them, which will usually be the parents, but it can also be other individuals. And if they don’t get that sort of exposure, then they will be unable to produce species-specific vocalizations, or in the human case, speech sounds and proper spoken language. © 1986–2021 The Scientist.

Keyword: Language; Evolution
Link ID: 27987 - Posted: 09.13.2021