Links for Keyword: Language



Links 1 - 20 of 655

By Bruce Bower An aptitude for mentally stringing together related items, often cited as a hallmark of human language, may have deep roots in primate evolution, a new study suggests. In lab experiments, monkeys demonstrated an ability akin to embedding phrases within other phrases, scientists report June 26 in Science Advances. Many linguists regard this skill, known as recursion, as fundamental to grammar (SN: 12/4/05) and thus peculiar to people. But “this work shows that the capacity to represent recursive sequences is present in an animal that will never learn language,” says Stephen Ferrigno, a Harvard University psychologist. Recursion allows one to elaborate a sentence such as “This pandemic is awful” into “This pandemic, which has put so many people out of work, is awful, not to mention a health risk.” Ferrigno and colleagues tested recursion in both monkeys and humans. Ten U.S. adults recognized recursive symbol sequences on a nonverbal task and quickly applied that knowledge to novel sequences of items. To a lesser but still substantial extent, so did 50 U.S. preschoolers and 37 adult Tsimane’ villagers from Bolivia, who had no schooling in math or reading. Those results imply that an ability to grasp recursion must emerge early in life and doesn’t require formal education. Three rhesus monkeys lacked humans’ ease on the task. But after receiving extra training, two of those monkeys displayed recursive learning, Ferrigno’s group says. One of the two animals ended up, on average, more likely to form novel recursive sequences than about three-quarters of the preschoolers and roughly half of the Bolivian villagers. © Society for Science & the Public 2000–2020.
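The recursive structure at issue here, a phrase nested inside another phrase, can be captured by a single self-referential rule. The sketch below is purely illustrative (it is not the study's task or materials): it generates center-embedded symbol sequences of the general kind the researchers probed, and applies the same rule to the article's example sentence.

```python
# Illustrative only: center-embedding as a recursive rule.
# Each new symbol pair is nested inside the previous one, yielding
# sequences like ( [ { } ] ) rather than flat or crossed orderings.

def center_embed(pairs):
    """Recursively nest symbol pairs: [(a1, b1), (a2, b2), ...] -> a1 a2 ... b2 b1."""
    if not pairs:
        return []
    (open_sym, close_sym), rest = pairs[0], pairs[1:]
    return [open_sym] + center_embed(rest) + [close_sym]

print(" ".join(center_embed([("(", ")"), ("[", "]"), ("{", "}")])))
# prints: ( [ { } ] )

# The same kind of rule elaborates a sentence by embedding a clause inside it:
def embed_clause(sentence, clause):
    subject, predicate = sentence
    return f"{subject}, {clause}, {predicate}"

print(embed_clause(("This pandemic", "is awful"),
                   "which has put so many people out of work"))
# prints: This pandemic, which has put so many people out of work, is awful
```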

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 27332 - Posted: 06.27.2020

By Julia Hollingsworth, CNN (CNN) Laura Molles is so attuned to birds that she can tell where birds of some species are from just by listening to their song. She's not a real-world Dr. Dolittle. She's an ecologist in Christchurch, New Zealand, who specializes in a little-known area of science: bird dialects. While some birds are born knowing how to sing, many need to be taught by adults -- just like humans. Those birds can develop regional dialects, meaning their songs sound slightly different depending on where they live. Think Boston and Georgia accents, but for birds. Just as speaking the local language can make it easier for humans to fit in, speaking the local bird dialect can increase a bird's chances of finding a mate. And, more ominously, just as human dialects can sometimes disappear as the world globalizes, bird dialects can be shaped or lost as cities grow. The similarities between human language and bird song aren't lost on Molles -- or on her fellow bird dialect experts. "There are wonderful parallels," said American ornithologist Donald Kroodsma, the author of "Birdsong for the Curious Naturalist: Your Guide to Listening." "Culture, oral traditions -- it's all the same." For centuries, bird song has inspired poets and musicians, but it wasn't until the 1950s that scientists really started paying attention to bird dialects. One of the pioneers of the field was a British-born behaviorist named Peter Marler, who became interested in the subject when he noticed that chaffinches in the United Kingdom sounded different from valley to valley. At first, he transcribed bird songs by hand, according to a profile of him in a Rockefeller University publication. Later, he used a sonagram, which Kroodsma describes on his website as "a musical score for birdsong." ("You really need to see these songs to believe them, our eyes are so much better than our ears," Kroodsma said.) © 2020 Cable News Network. Turner Broadcasting System, Inc.
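A sonagram (today usually called a spectrogram) is simply a picture of sound, with time along one axis and frequency along the other, which is what lets researchers compare song dialects by eye. The short sketch below, using an invented two-note "song" and assumed parameter values, shows how such a picture is computed; it is an illustration, not anyone's analysis code.

```python
# Illustrative sketch: turning audio into a sonagram/spectrogram.
import numpy as np
from scipy.signal import spectrogram

fs = 22050                                      # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
song = np.where(t < 0.5,
                np.sin(2 * np.pi * 3000 * t),   # first "note" at 3 kHz
                np.sin(2 * np.pi * 4500 * t))   # second "note" at 4.5 kHz

freqs, times, power = spectrogram(song, fs=fs, nperseg=512)
# Each column of `power` is the frequency content of one short time slice;
# plotting it (e.g., with matplotlib's pcolormesh) gives the "musical score"
# view of the song that Kroodsma describes.
print(power.shape)   # (number of frequency bins, number of time slices)
```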

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 27303 - Posted: 06.17.2020

Nicola Davis Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text. While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome. “We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco. Writing in the journal Nature Neuroscience, Makin and colleagues reveal how they developed their system by recruiting four participants who had electrode arrays implanted in their brains to monitor epileptic seizures. These participants were asked to read aloud from 50 set sentences multiple times, including “Tina Turner is a pop singer”, and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking. This data was then fed into a machine-learning algorithm, a type of artificial intelligence system that converted the brain activity data for each spoken sentence into a string of numbers. To make sure the numbers related only to aspects of speech, the system compared sounds predicted from small chunks of the brain activity data with actual recorded audio. The string of numbers was then fed into a second part of the system which converted it into a sequence of words. © 2020 Guardian News & Media Limited
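The pipeline described here (neural activity encoded into a string of numbers, an auxiliary check that those numbers can predict the recorded audio, and a second stage that decodes the numbers into words) maps naturally onto an encoder-decoder network. The sketch below is a rough, hypothetical rendering of that idea in PyTorch with made-up layer sizes; it is not the authors' published architecture or code.

```python
# Hypothetical sketch of an encoder-decoder for brain-to-text, loosely following
# the two-stage description in the article. All sizes are invented.
import torch
import torch.nn as nn

class BrainToText(nn.Module):
    def __init__(self, n_electrodes=256, latent_dim=128, vocab_size=250, n_audio=80):
        super().__init__()
        # Stage 1: encode each sentence's neural recording into a latent sequence
        # (the "string of numbers").
        self.encoder = nn.GRU(n_electrodes, latent_dim, batch_first=True)
        # Auxiliary head: predict audio features from the latent sequence, so the
        # numbers are pushed to reflect speech, as the article describes.
        self.audio_head = nn.Linear(latent_dim, n_audio)
        # Stage 2: decode the latent sequence into word scores.
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.word_head = nn.Linear(latent_dim, vocab_size)

    def forward(self, neural):                  # neural: (batch, time, electrodes)
        latent, _ = self.encoder(neural)        # (batch, time, latent_dim)
        audio_pred = self.audio_head(latent)    # compared against recorded audio
        decoded, _ = self.decoder(latent)
        word_logits = self.word_head(decoded)   # scores over a small vocabulary
        return word_logits, audio_pred

model = BrainToText()
fake_recording = torch.randn(1, 100, 256)       # 100 time steps, 256 electrodes
word_logits, audio_pred = model(fake_recording)
print(word_logits.shape, audio_pred.shape)      # (1, 100, 250) and (1, 100, 80)
```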

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 27155 - Posted: 03.31.2020

By James Gorman There’s something about a really smart dog that makes it seem as if there might be hope for the world. China is in the midst of a frightening disease outbreak and nobody knows how far it will spread. The warming of the planet shows no signs of stopping; it reached a record 70 degrees in Antarctica last week. Not to mention international tensions and domestic politics. But there’s a dog in Norway that knows not only the names of her toys, but also the names of different categories of toys, and she learned all this just by hanging out with her owners and playing her favorite game. So who knows what other good things could be possible? Right? This dog’s name is Whisky. She is a Border collie that lives with her owners and almost 100 toys, so it seems like things are going pretty well for her. Even though I don’t have that many toys myself, I’m happy for her. You can’t be jealous of a dog. Or at least you shouldn’t be. Whisky’s toys have names. Most are dog-appropriate like “the colorful rope” or “the small Frisbee.” However, her owner, Helge O. Svela, said on Thursday that since the research was done, her toys have grown in number from 59 to 91, and he has had to give some toys “people” names, like Daisy or Wenger. “That’s for the plushy toys that resemble animals like ducks or elephants (because the names Duck and Elephant were already taken),” he said. During the research, Whisky proved in tests that she knew the names for at least 54 of her 59 toys. That’s not just the claim of a proud owner, and Mr. Svela is quite proud of Whisky, but the finding of Claudia Fugazza, an animal behavior researcher from Eötvös Loránd University in Budapest, who tested her. That alone makes Whisky part of a very select group, although not a champion. You may recall Chaser, another Border collie that knew the names of more than 1,000 objects and also knew words for categories of objects. And there are a few other dogs with shockingly large vocabularies, Dr. Fugazza said, including mixed breeds, and a Yorkie. These canine verbal prodigies are, however, few and far between. “It is really, really unusual, and it is really difficult to teach object names to dogs,” Dr. Fugazza said. © 2020 The New York Times Company

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 27063 - Posted: 02.21.2020

Thomas R. Sawallis and Louis-Jean Boë Sound doesn’t fossilize. Language doesn’t either. Even when writing systems have developed, they’ve represented full-fledged and functional languages. Rather than preserving the first baby steps toward language, they’re fully formed, made up of words, sentences and grammar carried from one person to another by speech sounds, like any of the perhaps 6,000 languages spoken today. So if you believe, as we linguists do, that language is the foundational distinction between humans and other intelligent animals, how can we study its emergence in our ancestors? Happily, researchers do know a lot about language – words, sentences and grammar – and speech – the vocal sounds that carry language to the next person’s ear – in living people. So we should be able to compare language with less complex animal communication. And that’s what we and our colleagues have spent decades investigating: How do apes and monkeys use their mouth and throat to produce the vowel sounds in speech? Spoken language in humans is an intricately woven string of syllables with consonants appended to the syllables’ core vowels, so mastering vowels was a key to speech emergence. We believe that our multidisciplinary findings push back the date for that crucial step in language evolution by as much as 27 million years. The sounds of speech Say “but.” Now say “bet,” “bat,” “bought,” “boot.” The words all begin and end the same. It’s the differences among the vowel sounds that keep them distinct in speech. © 2010–2019, The Conversation US, Inc.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26893 - Posted: 12.12.2019

By Viorica Marian Psycholinguistics is a field at the intersection of psychology and linguistics, and one of its recent discoveries is that the languages we speak influence our eye movements. For example, English speakers who hear candle often look at a candy because the two words share their first syllable. Research with speakers of different languages revealed that bilingual speakers not only look at words that share sounds in one language but also at words that share sounds across their two languages. When Russian-English bilinguals hear the English word marker, they also look at a stamp, because the Russian word for stamp is marka. Even more stunning, speakers of different languages differ in their patterns of eye movements when no language is used at all. In a simple visual search task in which people had to find a previously seen object among other objects, their eyes moved differently depending on what languages they knew. For example, when looking for a clock, English speakers also looked at a cloud. Spanish speakers, on the other hand, when looking for the same clock, looked at a present, because the Spanish names for clock and present—reloj and regalo—overlap at their onset. The story doesn’t end there. Not only do the words we hear activate other, similar-sounding words—and not only do we look at objects whose names share sounds or letters even when no language is heard—but the translations of those names in other languages become activated as well in speakers of more than one language. For example, when Spanish-English bilinguals hear the word duck in English, they also look at a shovel, because the translations of duck and shovel—pato and pala, respectively—overlap in Spanish. © 2019 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26875 - Posted: 12.06.2019

By Virginia Morell Say “sit!” to your dog, and—if he’s a good boy—he’ll likely plant his rump on the floor. But would he respond correctly if the word were spoken by a stranger, or someone with a thick accent? A new study shows he will, suggesting dogs perceive spoken words in a sophisticated way long thought unique to humans. “It’s a very solid and interesting finding,” says Tecumseh Fitch, an expert on vertebrate communication at the University of Vienna who was not involved in the research. The way we pronounce words changes depending on our sex, age, and even social rank. Some as-yet-unknown neural mechanism enables us to filter out differences in accent and pronunciation, helping us understand spoken words regardless of the speaker. Animals like zebra finches, chinchillas, and macaques can be trained to do this, but until now only humans were shown to do this spontaneously. In the new study, Holly Root-Gutteridge, a cognitive biologist at the University of Sussex in Brighton, U.K., and her colleagues ran a test that others have used to show dogs can recognize other dogs from their barks. The researchers filmed 42 dogs of different breeds as they sat with their owners near an audio speaker that played six monosyllabic, noncommand words with similar sounds, such as “had,” “hid,” and “who’d.” The words were spoken—not by the dog’s owner—but by several strangers, men and women of different ages and with different accents. © 2019 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26866 - Posted: 12.04.2019

Jon Hamilton When we hear a sentence, or a line of poetry, our brains automatically transform the stream of sound into a sequence of syllables. But scientists haven't been sure exactly how the brain does this. Now, researchers from the University of California, San Francisco, think they've figured it out. The key is detecting a rapid increase in volume that occurs at the beginning of a vowel sound, they report Wednesday in Science Advances. "Our brain is basically listening for these time points and responding whenever they occur," says Yulia Oganian, a postdoctoral scholar at UCSF. The finding challenges a popular idea that the brain monitors speech volume continuously to detect syllables. Instead, it suggests that the brain periodically "samples" spoken language looking for specific changes in volume. The finding is "in line" with a computer model designed to simulate the way a human brain decodes speech, says Oded Ghitza, a research professor in the biomedical engineering department at Boston University who was not involved in the study. Detecting each rapid increase in volume associated with a syllable gives the brain, or a computer, an efficient way to deal with the "stream" of sound that is human speech, Ghitza says. And syllables, he adds, are "the basic Lego blocks of language." Oganian's study focused on a part of the brain called the superior temporal gyrus. "It's an area that has been known for about 150 years to be really important for speech comprehension," Oganian says. "So we knew if you can find syllables somewhere, it should be there." The team studied a dozen patients preparing for brain surgery to treat severe epilepsy. As part of the preparation, surgeons had placed electrodes over the area of the brain involved in speech. © 2019 npr
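The acoustic landmark described here, a rapid rise in loudness near each vowel onset, has a simple signal-processing counterpart: take the amplitude envelope of the speech and find the moments where it climbs fastest. The sketch below applies that idea to a synthetic three-"syllable" signal; it is only an illustration of the cue, with assumed parameter values, not the study's analysis.

```python
# Illustrative sketch: locate points of steepest rise in loudness,
# a rough stand-in for the vowel-onset landmarks described in the study.
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 16000                                         # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "speech": three bursts of a 200 Hz tone, like three syllables.
loudness = sum(np.exp(-((t - c) ** 2) / (2 * 0.03 ** 2)) for c in (0.2, 0.5, 0.8))
signal = loudness * np.sin(2 * np.pi * 200 * t)

envelope = np.abs(hilbert(signal))                 # amplitude envelope over time
rate_of_rise = np.gradient(envelope, 1 / fs)       # how fast loudness is increasing
landmarks, _ = find_peaks(rate_of_rise,
                          height=0.5 * rate_of_rise.max(),
                          distance=int(0.1 * fs))  # at most one landmark per ~100 ms

print(np.round(t[landmarks], 2))                   # roughly [0.17 0.47 0.77], just before each loudness peak
```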

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26841 - Posted: 11.21.2019

By Nicholas Bakalar People who never learned to read and write may be at increased risk for dementia. Researchers studied 983 adults 65 and older with four or fewer years of schooling. Ninety percent were immigrants from the Dominican Republic, where there were limited opportunities for schooling. Many had learned to read outside of school, but 237 could not read or write. Over an average of three and a half years, the participants periodically took tests of memory, language and reasoning. Illiterate men and women were 2.65 times as likely as the literate to have dementia at the start of the study, and twice as likely to have developed it by the end. Illiterate people, however, did not show a faster rate of decline in skills than those who could read and write. The analysis, in Neurology, controlled for sex, hypertension, diabetes, heart disease and other dementia risk factors. “Early life exposures and early life social opportunities have an impact on later life,” said the senior author, Jennifer J. Manly, a professor of neuropsychology at Columbia. “That’s the underlying theme here. There’s a life course of exposures and engagements and opportunities that lead to a healthy brain later in life.” “We would like to expand this research to other populations,” she added. “Our hypothesis is that this is relevant and consistent across populations of illiterate adults.” © 2019 The New York Times Company

Related chapters from BN8e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26838 - Posted: 11.21.2019

By Natalia Sylvester My parents refused to let my sister and me forget how to speak Spanish by pretending they didn’t understand when we spoke English. Spanish was the only language we were allowed to speak in our one-bedroom apartment in Miami in the late 1980s. We both graduated from English as a second language lessons in record time as kindergartners and first graders, and we longed to play and talk and live in English as if it were a shiny new toy. “No te entiendo,” my mother would say, shaking her head and shrugging in feigned confusion anytime we slipped into English. My sister and I would let out exasperated sighs at having to repeat ourselves in Spanish, only to be interrupted by a correction of our grammar and vocabulary after every other word. One day you’ll thank me, my mother retorted. That day has come to pass 30 years later in ordinary places like Goodwill, a Walmart parking lot, a Costco Tire Center. I’m most thankful that I can speak Spanish because it has allowed me to help others. There was the young mother who wanted to know whether she could leave a cumbersome diaper bin aside at the register at Goodwill while she shopped. The cashier shook her head dismissively and said she didn’t understand. It wasn’t difficult to read the woman’s gestures — she was struggling to push her baby’s carriage while lugging the large box around the store. Even after I told the cashier what the woman was saying, her irritation was palpable. The air of judgment is one I’ve come to recognize: How dare this woman not speak English, how dare this other woman speak both English and Spanish. It was a small moment, but it speaks to how easy it would have been for the cashier to ignore a young Latina mother struggling to care for her child had there not been someone around to interpret. “I don’t understand,” she kept saying, though the mother’s gestures transcended language. I choose not to understand is what she really meant. © 2019 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26629 - Posted: 09.21.2019

By Catherine Matacic Italians are some of the fastest speakers on the planet, chattering at up to nine syllables per second. Many Germans, on the other hand, are slow enunciators, delivering five to six syllables in the same amount of time. Yet in any given minute, Italians and Germans convey roughly the same amount of information, according to a new study. Indeed, no matter how fast or slowly languages are spoken, they tend to transmit information at about the same rate: 39 bits per second, about twice the speed of Morse code. “This is pretty solid stuff,” says Bart de Boer, an evolutionary linguist who studies speech production at the Free University of Brussels, but was not involved in the work. Language lovers have long suspected that information-heavy languages—those that pack more information about tense, gender, and speaker into smaller units, for example—move slowly to make up for their density of information, he says, whereas information-light languages such as Italian can gallop along at a much faster pace. But until now, no one had the data to prove it. Scientists started with written texts from 17 languages, including English, Italian, Japanese, and Vietnamese. They calculated the information density of each language in bits—the same unit that describes how quickly your cellphone, laptop, or computer modem transmits information. They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable. © 2019 American Association for the Advancement of Science
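The "bits per syllable" figures come from information theory: a language whose syllables are numerous and roughly equally probable carries more information in each syllable than one that leans on a few very common syllables, and multiplying that density by speaking rate gives bits per second. The arithmetic can be sketched as follows, with an invented five-syllable frequency table standing in for the corpus counts the researchers actually used.

```python
# Back-of-the-envelope illustration of information density and information rate.
# The toy syllable counts are made up; the study estimated syllable statistics
# from large written corpora in each language.
from math import log2

def bits_per_syllable(counts):
    """Shannon entropy, in bits, of a syllable frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

toy_counts = {"ka": 40, "to": 30, "shi": 15, "ru": 10, "n": 5}   # hypothetical
density = bits_per_syllable(toy_counts)     # about 2.0 bits per syllable here

for rate in (5.0, 9.0):                     # slower vs. faster speaker, syllables/sec
    print(f"{rate} syllables/s -> {density * rate:.1f} bits/s")

# With real densities of roughly 5 bits (Japanese) to 8 bits (Vietnamese) per
# syllable and each language's typical speaking rate, the products converge
# on an information rate of about 39 bits per second.
```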

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26576 - Posted: 09.05.2019

By Carolyn Wilke In learning to read, squiggles and lines transform into letters or characters that carry meaning and conjure sounds. A trio of cognitive neuroscientists has now mapped where that journey plays out inside the brain. As readers associate symbols with pronunciation and part of a word, a pecking order of brain areas processes the information, the researchers report August 19 in the Proceedings of the National Academy of Sciences. The finding unveils some of the mystery behind how the brain learns to tie visual cues with language (SN Online: 4/27/16). “We didn’t evolve to read,” says Jo Taylor, who is now at University College London but worked on the study while at Aston University in Birmingham, England. “So we don’t [start with] a bit of the brain that does reading.” Taylor — along with Kathy Rastle at Royal Holloway University of London in Egham and Matthew Davis at the University of Cambridge — zoomed in on a region at the back and bottom of the brain, called the ventral occipitotemporal cortex, that is associated with reading. Over two weeks, the scientists taught made-up words written in two unfamiliar, archaic scripts to 24 native English–speaking adults. The words were assigned the meanings of common nouns, such as lemon or truck. Then the researchers used functional MRI scans to track which tiny chunks of brain in that region became active when participants were shown the words learned in training. © Society for Science & the Public 2000–2019

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26548 - Posted: 08.27.2019

Researchers believe that stuttering — a potentially lifelong and debilitating speech disorder — stems from problems with the circuits in the brain that control speech, but precisely how and where these problems occur is unknown. Using a mouse model of stuttering, scientists report that a loss of cells in the brain called astrocytes is associated with stuttering. The mice had been engineered with a human gene mutation previously linked to stuttering. The study, which appeared online in the Proceedings of the National Academy of Sciences, offers insights into the neurological deficits associated with stuttering. The loss of astrocytes, supporting cells in the brain, was most prominent in the corpus callosum, a part of the brain that bridges the two hemispheres. Previous imaging studies have identified differences in the brains of people who stutter compared to those who do not. Furthermore, some of these studies in people have revealed structural and functional problems in the same brain region as the new mouse study. The study was led by Dennis Drayna, Ph.D., of the Section on Genetics of Communication Disorders at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health. Researchers at the Washington University School of Medicine in St. Louis and from NIH’s National Institute of Biomedical Imaging and Bioengineering and National Institute of Mental Health collaborated on the research. “The identification of genetic, molecular, and cellular changes that underlie stuttering has led us to understand persistent stuttering as a brain disorder,” said Andrew Griffith, M.D., Ph.D., NIDCD scientific director. “Perhaps even more importantly, pinpointing the brain region and cells that are involved opens opportunities for novel interventions for stuttering — and possibly other speech disorders.”

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 26513 - Posted: 08.19.2019

By Derrick Bryson Taylor Many owners struggle to teach their dogs to sit, fetch or even bark on command, but John W. Pilley, a professor emeritus of psychology at Wofford College, taught his Border collie to understand more than 1,000 nouns, a feat that earned them both worldwide recognition. For some time, Dr. Pilley had been conducting his own experiment teaching dogs the names of objects and was inspired by Border collie farmers to rethink his methods. Dr. Pilley was given a black-and-white Border collie as a gift by his wife, Sally. For three years, Dr. Pilley trained the dog, named Chaser, four to five hours a day: He showed her an object, said its name up to 40 times, then hid it and asked her to find it. He used 800 cloth animal toys, 116 balls, 26 Frisbees and an assortment of plastic items to ultimately teach Chaser 1,022 nouns. In 2013, Dr. Pilley published findings explaining that Chaser had been taught to understand sentences containing a prepositional object, verb and direct object. Chaser died on Tuesday at 15. She had been living with Dr. Pilley’s wife and their daughter Robin in Spartanburg. Dr. Pilley died last year at 89. Another daughter, Pilley Bianchi, said on Saturday that Chaser had been in declining health in recent weeks. “The vet really determined that she died of natural causes,” Ms. Bianchi said. “She went down very quickly.” Ms. Bianchi, who helped her father train Chaser, said the dog had been undergoing acupuncture for arthritis but had no other known illnesses. © 2019 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26458 - Posted: 07.29.2019

By Ryan Dalton In the dystopian world of George Orwell’s Nineteen Eighty-Four, the government of Oceania aims to achieve thought control through the restriction of language. As explained by the character ‘Syme’, a lexicologist who is working to replace the English language with the greatly-simplified ‘Newspeak’: “Don’t you see that the whole aim of Newspeak is to narrow the range of thought?” While Syme’s own reflections were short-lived, the merits of his argument were not: the words and structure of a language can influence the thoughts and decisions of its speakers. This holds for English and Greek, Inuktitut and Newspeak. It also may hold for the ‘neural code’, the basic electrical vocabulary of the neurons in the brain. Neural codes, like spoken languages, are tasked with conveying all manner of information. Some of this information is immediately required for survival; other information has a less acute use. To accommodate these different needs, a balance is struck between the richness of information being transferred and the speed or reliability with which it is transferred. Where the balance is set depends on context. In the example of language, the mention of the movie Jaws at a dinner party might result in a ranging and patient—if disconcerting—discussion around the emotional impact of the film. In contrast, the observation of a dorsal fin breaking through the surf at the beach would probably elicit a single word, screamed by many beachgoers at once: “shark!” In one context, the language used has been optimized for richness; in the other, for speed and reliability. © 2019 Scientific American

Related chapters from BN8e: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26383 - Posted: 07.03.2019

By Dan Falk Suppose I give you the name of a body part, and ask you to list its main uses: I say legs, you say walking and running; I say ears, you say hearing. And if I say the brain? Well, that’s a no-brainer (so to speak); obviously the brain is for thinking. Of course, it does a bunch of other things, too; after all, when the brain ceases to function, we die — but clearly it’s where cognition happens. Or is it? No one would argue that the brain isn’t vital for thinking — but quite a few 21st-century psychologists and cognitive scientists believe that the body, as well as the brain, is needed for thinking to actually happen. And it’s not just that the brain needs a body to keep it alive (that much is obvious), but rather, that the brain and the body somehow work together: it’s the combination of brain-plus-body that creates the mental world. The latest version of this proposition comes from Barbara Tversky, a professor emerita of psychology at Stanford University who also teaches at Columbia. Her new book, “Mind in Motion: How Action Shapes Thought,” is an extended argument for the interplay of mind and body in enabling cognition. She draws on many different lines of evidence, including the way we talk about movement and space, the way we use maps, the way we talk about and use numbers, and the way we gesture. Tversky argues that gesturing is more than just a by-product of speech: it literally helps us think. Copyright 2019 Undark

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 5: The Sensorimotor System
Link ID: 26364 - Posted: 06.28.2019

By Darcey Steinke The J in “juice” was the first letter-sound, according to my mother, that I repeated in staccato, going off like a skipping record. This was when I was 3, before my stutter was stigmatized as shameful. In those earliest years my relationship to language was uncomplicated: I assumed my voice was more like a bird’s or a squirrel’s than my playmates’. This seemed exciting. I imagined, unlike fluent children, I might be able to converse with wild creatures, I’d learn their secrets, tell them mine and forge friendships based on interspecies intimacy. School put an end to this fantasy. Throughout elementary school I stuttered every time a teacher called on me and whenever I was asked to read out loud. In the third grade the humiliation of being forced to read a few paragraphs about stewardesses in the Weekly Reader still burns. The ST is hard for stutterers. What would have taken a fluent child five minutes took me an excruciating 25. It was around this time that I started separating the alphabet into good letters, V as well as M, and bad letters, S, F and T, plus the terrible vowel sounds, open and mysterious and nearly impossible to wrangle. Each letter had a degree of difficulty that changed depending upon its position in the sentence. Much later when I read that Nabokov as a child assigned colors to letters, it made sense to me that the hard G looked like “vulcanized rubber” and the R, “a sooty rag being ripped.” My beloved V, in the Nabokovian system, was a jewel-like “rose quartz.” My mother, knowing that kids ridiculed me — she once found a book, “The Mystery of the Stuttering Parrot,” that had been tossed onto our lawn — wanted to eradicate my speech impediment. She encouraged me to practice the strategies taught to me by a string of therapists, bouncing off an easy sound to a harder one and unclenching my throat, trying to slide out of a stammer. When I was 13 she got me a scholarship to a famous speech therapy program at a college near our house in Virginia. © 2019 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26313 - Posted: 06.10.2019

By Malin Fezehai Muazzez Kocek, 46, is considered one of the best whistlers in Kuşköy, a village tucked away in the picturesque Pontic Mountains in Turkey’s northern Giresun province. Her whistle can be heard over the area’s vast tea fields and hazelnut orchards, several miles farther than a person’s voice. When President Recep Tayyip Erdogan of Turkey visited Kuşköy in 2012, she greeted him and proudly whistled, “Welcome to our village!” She uses kuş dili, or “bird language,” which transforms the full Turkish vocabulary into varied-pitch frequencies and melodic lines. For hundreds of years, this whistled form of communication has been critical for the farming community in the region, allowing complex conversations over long distances and facilitating animal herding. Today, there are about 10,000 people in the larger region who speak it, but because of the increased use of cellphones, which remove the need for a voice to carry over great distances, that number is dwindling. The language is at risk of dying out. Ms. Kocek began learning bird language at six years old by working in the fields with her father. She has tried to pass the tradition on to her three daughters; even though they understand it, only her middle child, Kader Kocek, 14, knows how to speak it and can whistle Turkey’s national anthem. Turkey is one of a handful of countries in the world where whistling languages exist. Similar ways of communicating are known to have been used in the Canary Islands, Greece, Mexico, and Mozambique. They fascinate researchers and linguistic experts, because they suggest that the brain structures that process language are not as fixed as once thought. There is a long-held belief that language interpretation occurs mostly in the left hemisphere, and melody, rhythm and singing on the right. But a study that biopsychologist Onur Güntürkün conducted in Kuşköy suggests that whistling language is processed in both hemispheres. © 2019 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26279 - Posted: 05.30.2019

Laura Sanders Advantages of speaking a second language are obvious: easier logistics when traveling, wider access to great literature and, of course, more people to talk with. Some studies have also pointed to the idea that polyglots have stronger executive functioning skills, brain abilities such as switching between tasks and ignoring distractions. But a large study of bilingual children in the U.S. finds scant evidence of those extra bilingual brain benefits. Bilingual children performed no better in tests measuring such thinking skills than children who knew just one language, researchers report May 20 in Nature Human Behaviour. To look for a relationship between bilingualism and executive function, researchers relied on a survey of U.S. adolescents called the ABCD study. From data collected at 21 research sites across the country, researchers identified 4,524 kids ages 9 and 10. Of these children, 1,740 spoke English and a second language (mostly Spanish, though 40 second languages were represented). On three tests that measured executive function, such as the ability to ignore distractions or quickly switch between tasks with different rules, the bilingual children performed similarly to children who spoke only English, the researchers found. “We really looked,” says study coauthor Anthony Dick, a developmental cognitive neuroscientist at Florida International University in Miami. “We didn’t find anything.” © Society for Science & the Public 2000–2019.

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26265 - Posted: 05.24.2019

By Sayuri Hayakawa, Viorica Marian As Emperor Akihito steps down from the Chrysanthemum Throne in Japan’s first abdication in 200 years, Naruhito officially becomes the new Emperor on May 1, 2019, ushering in a new era called Reiwa (令和; “harmony”). Japan’s tradition of naming eras reflects the ancient belief in the divine spirit of language. Kotodama (言霊; “word spirit”) is the idea that words have an almost magical power to alter physical reality. Through its pervasive impact on society, including its influence on superstitions and social etiquette, traditional poetry and modern pop songs, the word kotodama has, in a way, provided proof of its own concept. For centuries, many cultures have believed in the spiritual force of language. Over time, these ideas have extended from the realm of magic and mythology to become a topic of scientific investigation—ultimately leading to the discovery that language can indeed affect the physical world, for example, by altering our physiology. Our bodies evolve to adapt to our environments, not only over millions of years but also over the days and years of an individual’s life. For instance, off the coast of Thailand, there are children who can “see like dolphins.” Cultural and environmental factors have shaped how these sea nomads of the Moken tribe conduct their daily lives, allowing them to adjust their pupils underwater in a way that most of us cannot. © 2019 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26190 - Posted: 05.01.2019