Chapter 15. Language and Lateralization
Ian Sample Science editor Five children who were born deaf now have hearing in both ears after taking part in an “astounding” gene therapy trial that raises hopes for further treatments. The children were unable to hear because of inherited genetic mutations that disrupt the body’s ability to make a protein needed to ensure auditory signals pass seamlessly from the ear to the brain. Doctors at Fudan University in Shanghai treated the children, aged between one and 11, in both ears in the hope they would gain sufficient 3D hearing to take part in conversations and work out which direction sounds were coming from. Within weeks of receiving the therapy, the children had gained hearing, could locate the sources of sounds, and recognised speech in noisy environments. Two of the children were recorded dancing to music, the researchers reported in Nature Medicine. Dr Zheng-Yi Chen, a scientist at Massachusetts Eye and Ear, a Harvard teaching hospital in Boston that co-led the trial, said the results were “astounding”, adding that researchers continued to see the children’s hearing ability “dramatically progress”. The therapy uses an inactive virus to smuggle working copies of the affected gene, Otof, into the inner ear. Once inside, cells in the ear use the new genetic material as a template to churn out working copies of the crucial protein, otoferlin. Video footage of the patients shows a two-year-old boy responding to his name three weeks after the treatment and dancing to music after 13 weeks, having shown no response to either before receiving the injections. © 2024 Guardian News & Media Limited
Keyword: Hearing; Genes & Behavior
Link ID: 29347 - Posted: 06.06.2024
By Gemma Conroy Researchers have developed biodegradable, wireless sensors that can monitor changes in the brain following a head injury or cancer treatment, without invasive surgery. In rats and pigs, the soft sensors performed just as well as conventional wired sensors for up to a month after being injected under the skull. The gel-based sensors measure key health markers, including temperature, pH and pressure. “It is quite likely this technology will be useful for people in medical settings,” says study co-author Yueying Yang, a biomedical engineer at Huazhong University of Science and Technology (HUST) in Wuhan, China. The findings were published today in Nature. “It’s a very comprehensive study,” says Christopher Reiche, who develops implantable microdevices at the University of Utah in Salt Lake City. For years, scientists have been developing brain sensors that can be implanted inside the skull. But many of these devices rely on wires to transmit data to clinicians. The wires are difficult to insert and remove, and create openings in the skin for viruses and bacteria to enter the body. Wireless sensors offer a solution to this problem, but are thwarted by their limited communication range and relatively large size. Developing sensors that can access and monitor the brain is “extremely difficult”, says Omid Kavehei, a biomedical engineer who specializes in neurotechnology at the University of Sydney in Australia. To overcome these challenges, Yang and her colleagues created a set of 2-millimetre cube-shaped sensors out of hydrogel, a soft, flexible material that’s often used in tissue regeneration and drug delivery. The gel sensors change shape under different temperatures, pressures and pH conditions, and respond to vibrations caused by variations in blood flow in the brain. When the sensors are implanted under the skull and scanned with an ultrasound probe — a tool that is already used to image the human brain in clinics — these changes are detectable in the form of ultrasonic waves that pass through the skull. The tiny gel-cubes completely dissolve in saline solution after around four months, and begin to break down in the brain after five weeks. © 2024 Springer Nature Limited
Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 29346 - Posted: 06.06.2024
By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure of creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation
Keyword: Consciousness; Language
Link ID: 29344 - Posted: 06.06.2024
By Amorina Kingdon Like most humans, I assumed that sound didn’t work well in water. After all, Jacques Cousteau himself called the ocean the “silent world.” I thought, beyond whales, aquatic animals must not use sound much. How wonderfully wrong I was. In water a sound wave travels four and a half times faster, and loses less energy, than in air. It moves farther and faster and carries information better. In the ocean, water exists in layers and swirling masses of slightly different densities, depending on depth, temperature, and saltiness. The physics-astute reader will know that the density of the medium in which sound travels influences its speed. So, as sound waves spread through the sea, their speed changes, causing complex reflection or refraction and bending of the sound waves into “ducts” and “channels.” Under the right circumstances, these ducts and channels can carry sound waves hundreds and even thousands of kilometers. What about other sensory phenomena? Touch and taste work about the same in water as in air. But the chemicals that tend to carry scent move slower in water than in air. And water absorbs light very easily, greatly diminishing visibility. Even away from murky coastal waters, in the clearest seas, light vanishes below several hundred meters and visibility below several dozen. So sound is often the best, if not only, way for ocean and freshwater creatures to signal friends, detect enemies, and monitor the world underwater. And there is much to monitor: Earthquakes, mudslides, and volcanic activity rumble through the oceans, beyond a human’s hearing range. Ice cracks, booms, and scrapes the seafloor. Waves hiss and roar. Raindrops plink. If you listen carefully, you can tell wind speed, rainfall, even drop size, by listening to the ocean as a storm passes. Even snowfall makes a sound. © 2024 NautilusNext Inc.,
Keyword: Animal Communication; Sexual Behavior
Link ID: 29341 - Posted: 06.04.2024
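The physics in the piece above lends itself to a quick calculation. The short Python sketch below compares the speed of sound in seawater with the speed in air, using Medwin's simplified empirical formula for seawater; the particular temperature, salinity, and depth values are assumptions chosen for illustration, not numbers from the article.

```python
# Illustrative only: compares sound speed in seawater with that in air.
# Uses Medwin's (1975) simplified empirical formula for seawater; the
# temperature/salinity/depth values below are assumptions, not article data.

def seawater_sound_speed(T_c: float, S_psu: float, depth_m: float) -> float:
    """Approximate sound speed in seawater (m/s), Medwin 1975."""
    return (1449.2
            + 4.6 * T_c
            - 0.055 * T_c ** 2
            + 0.00029 * T_c ** 3
            + (1.34 - 0.010 * T_c) * (S_psu - 35.0)
            + 0.016 * depth_m)

AIR_SOUND_SPEED = 343.0  # m/s in air at roughly 20 °C

c_water = seawater_sound_speed(T_c=10.0, S_psu=35.0, depth_m=100.0)
print(f"Seawater: {c_water:.0f} m/s")
print(f"Air:      {AIR_SOUND_SPEED:.0f} m/s")
print(f"Ratio:    {c_water / AIR_SOUND_SPEED:.1f}x")
```

With these assumed values the ratio comes out near 4.3, in line with the article's "four and a half times faster."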
By Sumeet Kulkarni As spring turns to summer in the United States, warming conditions have started to summon enormous numbers of red-eyed periodical cicadas out of their holes in the soil across the east of the country. This year sees an exceptionally rare joint emergence of two cicada broods: one that surfaces every 13 years and another with a 17-year cycle. They last emerged together in 1803, when Thomas Jefferson was US president. This year, billions or even trillions of cicadas from these two broods — each including multiple species of the genus Magicicada — are expected to swarm forests, fields and urban neighbourhoods. To answer readers’ cicada questions, Nature sought help from three researchers. Katie Dana is an entomologist affiliated with the Illinois Natural History Survey at the University of Illinois at Urbana-Champaign. John Lill is an insect ecologist at George Washington University in Washington DC. Fatima Husain is a cognitive neuroscientist at the University of Illinois at Urbana-Champaign. Their answers have been edited for length and clarity. Why do periodical cicadas have red eyes? JL: We’re not really sure. We do know that cicadas’ eyes turn red in the winter before the insects come out. The whole coloration pattern in periodical cicadas is very bright: red eyes, black and orange wings. They’re quite different from the annual cicadas, which are green and black, and more camouflaged. It’s a bit of an enigma why the periodical ones are so brightly coloured, given that it just makes them more obvious to predators. There are no associated defences with being brightly coloured — it kind of flies in the face of what we know about bright coloration in a lot of other animals, where usually it’s some kind of signal for toxicity. There also exist mutants with brown, orange, golden or even blue eyes. People hunt for blue-eyed ones; it’s like trying to find a four-leaf clover. © 2024 Springer Nature Limited
Keyword: Animal Communication; Sexual Behavior
Link ID: 29339 - Posted: 06.04.2024
Sacha Pfeiffer A few weeks ago, at about 6:45 in the morning, I was at home, waiting to talk live on the air with Morning Edition host Michel Martin about a story I'd done, when I suddenly heard a loud metallic hammering. It sounded like a machine was vibrating my house. It happened again about 15 seconds later. And again after that. This rhythmic clatter seemed to be coming from my basement utility closet. Was my furnace breaking? Or my water heater? I worried that it might happen while I was on the air. Luckily, the noise stopped while I spoke with Michel, but restarted later. This time I heard another sound, a warbling or trilling, possibly inside my chimney. Was there an animal in there? I ran outside, looked up at my roof — and saw a woodpecker drilling away at my metal chimney cap. I've seen and heard plenty of woodpeckers hammer on trees. But never on metal. So to find out why the bird was doing this, I called an expert: Kevin McGowan, an ornithologist at the Cornell Lab of Ornithology who recently created a course called "The Wonderful World of Woodpeckers." McGowan said woodpeckers batter wood to find food, make a home, mark territory and attract a mate. But when they bash away at metal, "what the birds are trying to do is make as big a noise as possible," he said, "and a number of these guys have found that — you know what? If you hammer on metal, it's really loud!" Woodpeckers primarily do this during the springtime breeding season, and their metallic racket has two purposes, "basically summarized as: All other guys stay away, all the girls come to me," McGowan said. "And the bigger the noise, the better." © 2024 npr
Keyword: Sexual Behavior; Animal Communication
Link ID: 29333 - Posted: 06.02.2024
By Liqun Luo The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory. Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity. So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today. Let’s look at some of these comparisons in numbers (Table 1). © 2024 NautilusNext Inc.,
Keyword: Stroke
Link ID: 29331 - Posted: 05.29.2024
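The excerpt above quotes roughly 100 billion neurons and 100 trillion connections, but its Table 1 is not reproduced here. The back-of-envelope sketch below works out connections per neuron from those two figures; the transistor count is purely an assumed order of magnitude added for comparison, not a number from the article.

```python
# Figures from the excerpt: ~1e11 neurons, ~1e14 connections.
# The transistor count is an assumed order of magnitude for a modern
# processor, included only for rough comparison (not from the article).

NEURONS = 1e11        # ~100 billion neurons
CONNECTIONS = 1e14    # ~100 trillion connections
TRANSISTORS = 1e10    # assumption: ~10 billion transistors on one chip

print(f"Connections per neuron: {CONNECTIONS / NEURONS:,.0f}")  # ~1,000
print(f"Brain connections vs. assumed chip transistors: "
      f"{CONNECTIONS / TRANSISTORS:,.0f}x")
```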
By Amanda Heidt For the first time, a brain implant has helped a bilingual person who is unable to articulate words to communicate in both of his languages. An artificial-intelligence (AI) system coupled to the brain implant decodes, in real time, what the individual is trying to say in either Spanish or English. The findings, published on 20 May in Nature Biomedical Engineering, provide insights into how our brains process language, and could one day lead to long-lasting devices capable of restoring multilingual speech to people who can’t communicate verbally. “This new study is an important contribution for the emerging field of speech-restoration neuroprostheses,” says Sergey Stavisky, a neuroscientist at the University of California, Davis, who was not involved in the study. Even though the study included only one participant and more work remains to be done, “there’s every reason to think that this strategy will work with higher accuracy in the future when combined with other recent advances”, Stavisky says. The person at the heart of the study, who goes by the nickname Pancho, had a stroke at age 20 that paralysed much of his body. As a result, he can moan and grunt but cannot speak clearly. In his thirties, Pancho partnered with Edward Chang, a neurosurgeon at the University of California, San Francisco, to investigate the stroke’s lasting effects on his brain. In a groundbreaking study published in 2021, Chang’s team surgically implanted electrodes on Pancho’s cortex to record neural activity, which was translated into words on a screen. Pancho’s first sentence — ‘My family is outside’ — was interpreted in English. But Pancho is a native Spanish speaker who learnt English only after his stroke. It’s Spanish that still evokes in him feelings of familiarity and belonging. “What languages someone speaks are actually very linked to their identity,” Chang says. “And so our long-term goal has never been just about replacing words, but about restoring connection for people.” © 2024 Springer Nature Limited
Keyword: Language; Robotics
Link ID: 29321 - Posted: 05.23.2024
By Emily Anthes Half a century ago, one of the hottest questions in science was whether humans could teach animals to talk. Scientists tried using sign language to converse with apes and trained parrots to deploy growing English vocabularies. The work quickly attracted media attention — and controversy. The research lacked rigor, critics argued, and what seemed like animal communication could simply have been wishful thinking, with researchers unconsciously cuing their animals to respond in certain ways. In the late 1970s and early 1980s, the research fell out of favor. “The whole field completely disintegrated,” said Irene Pepperberg, a comparative cognition researcher at Boston University, who became known for her work with an African gray parrot named Alex. Today, advances in technology and a growing appreciation for the sophistication of animal minds have renewed interest in finding ways to bridge the species divide. Pet owners are teaching their dogs to press “talking buttons” and zoos are training their apes to use touch screens. In a cautious new paper, a team of scientists outlines a framework for evaluating whether such tools might give animals new ways to express themselves. The research is designed “to rise above some of the things that have been controversial in the past,” said Jennifer Cunha, a visiting research associate at Indiana University. The paper, which is being presented at a science conference on Tuesday, focuses on Ms. Cunha’s parrot, an 11-year-old Goffin’s cockatoo named Ellie. Since 2019, Ms. Cunha has been teaching Ellie to use an interactive “speech board,” a tablet-based app that contains more than 200 illustrated icons, corresponding to words and phrases including “sunflower seeds,” “happy” and “I feel hot.” When Ellie presses on an icon with her tongue, a computerized voice speaks the word or phrase aloud. In the new study, Ms. Cunha and her colleagues did not set out to determine whether Ellie’s use of the speech board amounted to communication. Instead, they used quantitative, computational methods to analyze Ellie’s icon presses to learn more about whether the speech board had what they called “expressive and enrichment potential.” © 2024 The New York Times Company
Keyword: Language; Epilepsy
Link ID: 29306 - Posted: 05.14.2024
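The excerpt above says the team used "quantitative, computational methods" on Ellie's icon presses without detailing them. As a purely illustrative sketch of the kind of summary statistic such data can support (not the authors' actual analysis), the Python below tallies presses from a made-up log and computes the Shannon entropy of icon use as a simple diversity measure.

```python
# Purely illustrative: the study's actual methods are not described in the
# excerpt. This sketch tallies icon presses from a hypothetical log and
# computes Shannon entropy as one generic measure of icon-use diversity.

from collections import Counter
from math import log2

press_log = ["sunflower seeds", "happy", "sunflower seeds",
             "I feel hot", "happy", "sunflower seeds"]  # hypothetical data

counts = Counter(press_log)
total = sum(counts.values())
entropy = -sum((n / total) * log2(n / total) for n in counts.values())

print("Press frequencies:", dict(counts))
print(f"Icon-use diversity (Shannon entropy): {entropy:.2f} bits")
```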
By Miryam Naddaf Scientists have developed brain implants that can decode internal speech — identifying words that two people spoke in their minds without moving their lips or making a sound. Although the technology is at an early stage — it was shown to work with only a handful of words, and not phrases or sentences — it could have clinical applications in future. Similar brain–computer interface (BCI) devices, which translate signals in the brain into text, have reached speeds of 62–78 words per minute for some people. But these technologies were trained to interpret speech that is at least partly vocalized or mimed. The latest study — published in Nature Human Behaviour on 13 May — is the first to decode words spoken entirely internally, by recording signals from individual neurons in the brain in real time. “It's probably the most advanced study so far on decoding imagined speech,” says Silvia Marchesotti, a neuroengineer at the University of Geneva, Switzerland. “This technology would be particularly useful for people that have no means of movement any more,” says study co-author Sarah Wandelt, a neural engineer who was at the California Institute of Technology in Pasadena at the time the research was done. “For instance, we can think about a condition like locked-in syndrome.” The researchers implanted arrays of tiny electrodes in the brains of two people with spinal-cord injuries. They placed the devices in the supramarginal gyrus (SMG), a region of the brain that had not been previously explored in speech-decoding BCIs. © 2024 Springer Nature Limited
Keyword: Brain imaging; Language
Link ID: 29302 - Posted: 05.14.2024
By Elizabeth Anne Brown The beluga whale wears its heart on its sleeve — or rather, its forehead. Researchers have created a visual encyclopedia of the different expressions that belugas (Delphinapterus leucas) in captivity seem to make with their highly mobile “melon,” a squishy deposit of fat on the forehead that helps direct sound waves for echolocation. Using muscles and connective tissue, belugas can extend the melon forward until it juts over their lips like the bill of a cap; mush it down until it’s flattened against their skull; lift it vertically to create an impressive fleshy top hat; and shake it with such force that it jiggles like Jell-O. “If that doesn’t scream ‘pay attention to me,’ I don’t know what does,” says animal behaviorist Justin Richard of the University of Rhode Island in Kingston. “It’s like watching a peacock spread their feathers.” Before Richard became a scientist, he spent a decade as a beluga trainer at the Mystic Aquarium in Connecticut, working closely with the enigmatic animals. “Even as a trainer, I knew the shapes meant something,” Richard says. “But nobody had been able to put together enough observations to make sense of it.” Over the course of a year, from 2014 to 2015, Richard and colleagues recorded interactions between four belugas at the Mystic Aquarium. Analyzing the footage revealed that the belugas make five distinct melon shapes the scientists dubbed flat, lift, press, push and shake. The belugas sported an average of nearly two shapes per minute during social interaction, the team reports March 2 in Animal Cognition. © Society for Science & the Public 2000–2024
Keyword: Animal Communication; Evolution
Link ID: 29291 - Posted: 05.03.2024
By Claire Cameron On Aug. 19, 2021, a humpback whale named Twain whupped back. Specifically, Twain made a series of humpback whale calls known as “whups” in response to playback recordings of whups from a boat of researchers off the coast of Alaska. The whale and the playback exchanged calls 36 times. On the boat was naturalist Fred Sharpe of the Alaska Whale Foundation, who has been studying humpbacks for over two decades, and animal behavior researcher Brenda McCowan, a professor at the University of California, Davis. The exchange was groundbreaking, Sharpe says, because it brought two linguistic beings—humans and humpback whales—together. “You start getting the sense that there’s this mutual sense of being heard.” In their 2023 published results, McCowan, Sharpe, and their coauthors are careful not to characterize their exchange with Twain as a conversation. They write, “Twain was actively engaged in a type of vocal coordination” with the playback recordings. To the paper’s authors, the interspecies exchange could be a model for perhaps something even more remarkable: an exchange with an extraterrestrial intelligence. Sharpe and McCowan are members of Whale SETI, a team of scientists at the SETI Institute, which has been scanning the skies for decades, listening for signals that may be indicative of extraterrestrial life. The Whale SETI team seeks to show that animal communication, and particularly, complex animal vocalizations like those of humpback whales, can provide scientists with a model to help detect and decipher a message from an extraterrestrial intelligence. And, while they’ve been trying to communicate with whales for years, this latest reported encounter was the first time the whales talked back. It all might sound far-fetched. But then again, Laurance Doyle, an astrophysicist who founded the Whale SETI team and has been part of the SETI Institute since 1987, is accustomed to being doubted by the mainstream science community. © 2024 NautilusNext Inc.,
Keyword: Animal Communication; Language
Link ID: 29276 - Posted: 04.30.2024
By Saima May Sidik In 2010, Theresa Chaklos was diagnosed with chronic lymphocytic leukaemia — the first in a series of ailments that she has had to deal with since. She’d always been an independent person, living alone and supporting herself as a family-law facilitator in the Washington DC court system. But after illness hit, her independence turned into loneliness. Loneliness, in turn, exacerbated Chaklos’s physical condition. “I dropped 15 pounds in less than a week because I wasn’t eating,” she says. “I was so miserable, I just would not get up.” Fortunately a co-worker convinced her to ask her friends to help out, and her mood began to lift. “It’s a great feeling” to know that other people are willing to show up, she says. Many people can’t break out of a bout of loneliness so easily. And when acute loneliness becomes chronic, the health effects can be far-reaching. Chronic loneliness can be as detrimental as obesity, physical inactivity and smoking, according to a report by Vivek Murthy, the US surgeon general. Depression, dementia, cardiovascular disease and even early death have all been linked to the condition. Worldwide, around one-quarter of adults feel very or fairly lonely, according to a 2023 poll conducted by the social-media firm Meta, the polling company Gallup and a group of academic advisers (see go.nature.com/48xhu3p). That same year, the World Health Organization launched a campaign to address loneliness, which it called a “pressing health threat”. But why does feeling alone lead to poor health? Over the past few years, scientists have begun to reveal the neural mechanisms that cause the human body to unravel when social needs go unmet. The field “seems to be expanding quite significantly”, says cognitive neuroscientist Nathan Spreng at McGill University in Montreal, Canada. And although the picture is far from complete, early results suggest that loneliness might alter many aspects of the brain, from its volume to the connections between neurons.
Keyword: Stress
Link ID: 29245 - Posted: 04.06.2024
By Emily Makowski I spend my days surrounded by thousands of written words, and sometimes I feel as though there’s no escape. That may not seem particularly unusual. Plenty of people have similar feelings. But no, I’m not just talking about my job as a copy editor here at Scientific American, where I edit and fact-check an endless stream of science writing. This constant flow of text is all in my head. My brain automatically translates spoken words into written ones in my mind’s eye. I “see” subtitles that I can’t turn off whenever I talk or hear someone else talking. This same speech-to-text conversion even happens for the inner dialogue of my thoughts. This mental closed-captioning has accompanied me since late toddlerhood, almost as far back as my earliest childhood memories. And for a long time, I thought that everyone could “read” spoken words in their head the way I do. What I experience goes by the name of ticker-tape synesthesia. It is not a medical condition—it’s just a distinctive way of perceiving the surrounding world that relatively few people share. Not much is known about the neurophysiology or psychology of this phenomenon, sometimes called “ticker taping,” even though a reference to it first appeared in the scientific literature in the late 19th century. Ticker taping is considered a form of synesthesia, an experience in which the brain reroutes one kind of incoming sensory information so that it is processed as another. For example, sounds might be perceived as touch, allowing the affected person to “feel” them as tactile sensations. As synesthesia goes, ticker taping is relatively uncommon. “There are varieties of synesthesia which really have just been completely under the radar..., and ticker tape is really one of those,” says Mark Price, a cognitive psychologist at the University of Bergen in Norway. The name “ticker-tape synesthesia” itself evokes the concept’s late 19th-century origins. At that time stock prices transmitted by telegraph were printed on long paper strips, which would be torn into tiny bits and thrown from building windows during parades. © 2024 SCIENTIFIC AMERICAN,
Keyword: Attention; Language
Link ID: 29238 - Posted: 04.04.2024
Ian Sample Science editor Dogs understand what certain words stand for, according to researchers who monitored the brain activity of willing pooches while they were shown balls, slippers, leashes and other highlights of the domestic canine world. The finding suggests that the dog brain can reach beyond commands such as “sit” and “fetch”, and the frenzy-inducing “walkies”, to grasp the essence of nouns, or at least those that refer to items the animals care about. “I think the capacity is there in all dogs,” said Marianna Boros, who helped arrange the experiments at Eötvös Loránd University in Hungary. “This changes our understanding of language evolution and our sense of what is uniquely human.” Scientists have long been fascinated by whether dogs can truly learn the meanings of words and have built up some evidence to back the suspicion. A survey in 2022 found that dog owners believed their furry companions responded to between 15 and 215 words. More direct evidence for canine cognitive prowess came in 2011 when psychologists in South Carolina reported that after three years of intensive training, a border collie called Chaser had learned the names of more than 1,000 objects, including 800 cloth toys, 116 balls and 26 Frisbees. However, studies have said little about what is happening in the canine brain when it processes words. To delve into the mystery, Boros and her colleagues invited 18 dog owners to bring their pets to the laboratory along with five objects the animals knew well. These included balls, slippers, Frisbees, rubber toys, leads and other items. At the lab, the owners were instructed to say words for objects before showing their dog either the correct item or a different one. For example, an owner might say “Look, here’s the ball”, but hold up a Frisbee instead. The experiments were repeated multiple times with matching and non-matching objects. © 2024 Guardian News & Media Limited
Keyword: Language; Learning & Memory
Link ID: 29214 - Posted: 03.26.2024
By Darren Incorvaia Be it an arched eyebrow, a shaken head or a raised finger, humans wordlessly communicate complex ideas through gestures every day. This ability is rare in the animal kingdom, having been observed only in primates (SN: 8/10/10). Scientists now might be able to add a feathered friend to the club. Researchers have observed Japanese tits making what they call an “after you” gesture: A bird flutters its wings, cuing its mate to enter the nest first. The finding, reported in the March 25 Current Biology, “shows that Japanese tits not only use wing fluttering as a symbolic gesture, but also in a complex social context involving a sender, receiver and a specific goal, much like how humans communicate,” says biologist Toshitaka Suzuki of the University of Tokyo. Suzuki has been listening in on the calls of Japanese tits (Parus minor) for more than 17 years. During his extensive time in the field, he noticed that Japanese tits bringing food to the nest would sometimes perch on a branch and flutter their wings. At that point, their partners would enter the nest with the flutterer close behind. “This led me to investigate whether this behavior fulfills the criteria of gestures,” Suzuki says. Suzuki and Norimasa Sugita, a researcher at Tokyo’s National Museum of Nature and Science, observed eight mated pairs make 321 trips to their nests. A pattern quickly emerged: Females fluttered their wings far more often than males, with six females shaking it up while only one male did. Females almost always entered the nest first — unless they fluttered their wings. Then the males went first. © Society for Science & the Public 2000–2024.
Keyword: Animal Communication; Evolution
Link ID: 29213 - Posted: 03.26.2024
By Anthony Ham What is the meaning of a cat’s meow that grows louder and louder? Or your pet’s sudden flip from softly purring as you stroke its back to biting your hand? It turns out these misunderstood moments with your cat may be more common than not. A new study by French researchers, published last month in the journal Applied Animal Behaviour Science, found that people were significantly worse at reading the cues of an unhappy cat (nearly one third got it wrong) than those of a contented cat (closer to 10 percent). The study also suggested that a cat’s meows and other vocalizations are greatly misinterpreted and that people should consider both vocal and visual cues to try to determine what’s going on with their pets. The researchers drew these findings from the answers of 630 online participants; respondents were volunteers recruited through advertisements on social media. Each watched 24 videos of differing cat behaviors. One third depicted only vocal communication, another third just visual cues, and the remainder involved both. “Some studies have focused on how humans understand cat vocalizations,” said Charlotte de Mouzon, lead author of the study and a cat behavior expert at the Université Paris Nanterre. “Other studies studied how people understand cats’ visual cues. But studying both has never before been studied in human-cat communication.” Cats display a wide range of visual signals: tails swishing side to side, or raised high in the air; rubbing and curling around our legs; crouching; flattening ears or widening eyes. Their vocals can range from seductive to threatening: meowing, purring, growling, hissing and caterwauling. At last count, kittens were known to use nine different forms of vocalization, while adult cats uttered 16. That we could better understand what a cat wants by using visual and vocal cues may seem obvious. But we know far less than we think we do. © 2024 The New York Times Company
Keyword: Animal Communication; Evolution
Link ID: 29169 - Posted: 02.29.2024
Rob Stein Benjamin Franklin famously wrote: "In this world nothing can be said to be certain, except death and taxes." While that may still be true, there's a controversy simmering today about one of the ways doctors declare people to be dead. The debate is focused on the Uniform Determination of Death Act, a law that was adopted by most states in the 1980s. The law says that death can be declared if someone has experienced "irreversible cessation of all functions of the entire brain." But some parts of the brain can continue to function in people who have been declared brain dead, prompting calls to revise the statute. Many experts say the discrepancy needs to be resolved to protect patients and their families, maintain public trust and reconcile what some see as a troubling disconnect between the law and medical practice. The debate became so contentious, however, that the Uniform Law Commission, the group charged with rewriting model laws for states, paused its process last summer because participants couldn't reach a consensus. "I'm worried," says Thaddeus Pope, a bioethicist and lawyer at Mitchell Hamline School of Law in St. Paul, Minnesota. "There's a lot of conflict at the bedside over this at hospitals across the United States. Let's get in front of it and fix it before it becomes a crisis. It's such an important question that everyone needs to be on the same page." The second method, brain death, can be declared for people who have sustained catastrophic brain injury causing the permanent cessation of all brain function, such as from a massive traumatic brain injury or massive stroke, but whose hearts are still pumping through the use of ventilators or other artificial forms of life support. © 2024 npr
Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 29147 - Posted: 02.13.2024
By Fletcher Reveley One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen: “I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.” The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind. For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story: “Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.” The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.”
Keyword: Brain imaging; Language
Link ID: 29073 - Posted: 01.03.2024
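The excerpt above sketches the decoder only at a high level: a language model (GPT-1 in the study) proposes candidate wordings, and their fit to the recorded fMRI activity decides what is kept. The toy loop below illustrates that general propose-and-score idea; every function and all the data are stand-ins invented for the example, not the study's real models or signals.

```python
# A rough, hypothetical sketch of "propose with a language model, score
# against brain activity." None of these functions correspond to the study's
# real code; the language model, encoding model, and fMRI data are stand-ins
# so the loop runs end to end.

import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["i", "am", "not", "finished", "yet", "twenty", "license", "home"]

def lm_propose(prefix: list[str], k: int = 3) -> list[str]:
    """Stand-in language model: propose k candidate next words."""
    return list(rng.choice(VOCAB, size=k, replace=False))

def encoding_model(words: list[str]) -> np.ndarray:
    """Stand-in encoding model: predict a brain response for a word sequence."""
    seed = abs(hash(" ".join(words))) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=50)

def score(predicted: np.ndarray, measured: np.ndarray) -> float:
    """Correlation between predicted and measured responses."""
    return float(np.corrcoef(predicted, measured)[0, 1])

measured_fmri = rng.normal(size=50)   # stand-in for one fMRI time window
decoded: list[str] = []
for _ in range(6):                    # decode six words, greedily
    candidates = lm_propose(decoded)
    best = max(candidates, key=lambda w: score(encoding_model(decoded + [w]),
                                               measured_fmri))
    decoded.append(best)

print("Decoded (toy) sequence:", " ".join(decoded))
```

In the real system the proposal step comes from GPT-1 trained on natural language and the scoring step from an encoding model fit to each participant's fMRI responses; here both are replaced by random stand-ins purely to show the control flow.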



