Chapter 15. Language and Lateralization




Jordana Cepelewicz We often think of memory as a rerun of the past — a mental duplication of events and sensations that we’ve experienced. In the brain, that would be akin to the same patterns of neural activity getting expressed again: Remembering a person’s face, for instance, might activate the same neural patterns as the ones for seeing their face. And indeed, in some memory processes, something like this does occur. But in recent years, researchers have repeatedly found subtle yet significant differences between visual and memory representations, with the latter showing up consistently in slightly different locations in the brain. Scientists weren’t sure what to make of this transformation: What function did it serve, and what did it mean for the nature of memory itself? Now, they may have found an answer — in research focused on language rather than memory. A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns. And those differences looked exactly like the ones reported in the studies on vision and memory. The finding, published last October in Nature Neuroscience, suggests that in many cases, a memory isn’t a facsimile of past perceptions that gets replayed. Instead, it is more like a reconstruction of the original experience, based on its semantic content. All Rights Reserved © 2022

Keyword: Learning & Memory; Language
Link ID: 28202 - Posted: 02.12.2022

By Benjamin Mueller It appeared to be an ordinary fall: Bob Saget, the actor and comedian, knocked his head on something and, perhaps thinking nothing of it, went to sleep, his family said on Wednesday. But the chilling consequences — Mr. Saget, 65, died some hours later on Jan. 9 from blunt head trauma, a medical examiner ruled — have underscored the dangers of traumatic brain injuries, even those that do not initially seem to be causes for alarm. Some 61,000 deaths in 2019 were related to traumatic brain injuries, according to the Centers for Disease Control and Prevention, and nearly half of head trauma-related hospitalizations result from falls. Brain injury experts said on Thursday that Mr. Saget’s case was relatively uncommon: People with serious head trauma would be expected to have noticeable symptoms, like a headache, nausea or confusion. And they can generally be saved by surgeons opening up their skull and relieving pressure on the brain from bleeding. But certain situations put people at higher risk for the sort of deterioration that Mr. Saget experienced, doctors said. As serious a risk factor as any, doctors said, is simply being alone. Someone with a head injury can lose touch with their usual decision-making capacities and become confused, agitated or unusually sleepy. Those symptoms, in turn, can stand in the way of getting help. And while there was no indication that Mr. Saget was taking blood thinners, experts said the medications can greatly accelerate the type of bleeding after a head injury that forces the brain downward and compresses the centers that regulate breathing and other vital functions. More Americans are being prescribed these drugs as the population ages. Mr. Saget had been in an Orlando hotel room during a weekend of stand-up comedy acts when he was found unresponsive. 
The local medical examiner’s office announced on Wednesday that his death resulted from “blunt head trauma,” and said that “his injuries were most likely incurred from an unwitnessed fall.” © 2022 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 28201 - Posted: 02.12.2022

Megan Lim Any parents out there will be familiar with the unique sort of misery that results when your kid has a new favorite song. They ask to hear it over and over, without regard for the rest of us. Well, it turns out that song sparrows might be better than children (and many adults, for that matter) when it comes to curating their playlists. Male sparrows, which attract females by singing, avoid tormenting their listeners with the same old tune. Instead they woo potential mates with a selection of 6 to 12 different songs. It might be hard to tell, but that audio clip contains three distinctive sparrow songs, each with a unique signature of trills and notes. Even more impressive than the execution, though, is the way sparrows string their songs together. William Searcy, an ornithologist at the University of Miami, recently published a study in a Royal Society journal that analyzed patterns of song sparrow serenades. He said it would be easy for the birds to sing the first song, then the second, then the third and fourth. "But that's not what song sparrows are doing. They're not going through in a set order. They're varying the order from cycle to cycle, and that's more complicated," he said. In other words, rather than sing the same playlist every time, they hit shuffle. "What we're arguing is what they do is keep in memory the whole past cycle so they know what to sing next," Searcy said. The researchers are not sure why male sparrows shuffle their songs. But past work has shown that females prefer hearing a wider range of tunes, so maybe a new setlist keeps females interested. © 2022 npr

Keyword: Animal Communication; Sexual Behavior
Link ID: 28181 - Posted: 02.02.2022

By Meeri Kim Kellie Carr and her 13-year-old son, Daniel, sat in the waiting room of a pediatric neurology clinic for yet another doctor’s appointment in 2012. For years, she struggled to find out what was causing his weakened right side. It wasn’t an obvious deficit, by any means, and anyone not paying close attention would see a normal, healthy teenage boy. At that point, no one had any idea that Daniel had suffered a massive stroke as a newborn and lost large parts of his brain as a result. “It was the largest stroke I’d ever seen in a child who hadn’t died or suffered extreme physical and mental disability,” said Nico Dosenbach, the pediatric neurologist at Washington University School of Medicine in St. Louis who finally diagnosed him using a magnetic resonance imaging (MRI) scan. "If I saw the MRI first, I would have assumed this kid's probably in a wheelchair, has a feeding tube and might be on a ventilator," Dosenbach said. "Because normally, when a child is missing that much brain, it's bad." But Daniel — as an active, athletic young man who did fine in school — defied all logic. Before the discovery of the stroke, his mother had noticed some odd mannerisms, such as zipping up his coat or eating a burger using only his left hand. When engaged, his right hand often served as club-like support instead of a dexterous appendage with fingers. Daniel excelled as a left-handed pitcher in competitive baseball, but his coach found it unusual that he would always switch the glove to his left hand when catching the ball. Medical professionals tried to help — first his pediatrician, followed by an orthopedic doctor who sent him to physical therapy — but no one could figure out the root cause. They tried constraint-induced movement therapy, which forces patients to use the weaker arm by immobilizing the other in a cast, but Daniel soon rebelled and broke himself free. © 1996-2022 The Washington Post

Keyword: Development of the Brain; Stroke
Link ID: 28174 - Posted: 01.26.2022

Alejandra Marquez Janse & Christopher Intagliata Imagine you're moving to a new country on the other side of the world. Besides the geographical and cultural changes, a key difference will be the language. But will your pets notice the difference? It was a question that nagged at Laura Cuaya, a brain researcher at the Neuroethology of Communication Lab at Eötvös Loránd University in Budapest. "When I moved from Mexico to Hungary to start my post-doc research, all was new for me. Obviously, here, people in Budapest speak Hungarian. So you've had a different language, completely different for me," she said. The language was also new to her two dogs: Kun Kun and Odín. "People are super friendly with their dogs [in Budapest]. And my dogs, they are interested in interacting with people," Cuaya said. "But I wonder, did they also notice people here ... spoke a different language?" Cuaya set out to find the answer. She and her colleagues designed an experiment with 18 volunteer dogs — including her two border collies — to see if they could differentiate between two languages. Kun Kun and Odín were used to hearing Spanish; the other dogs, Hungarian. The dogs sat still in an MRI machine while listening to an excerpt from the story The Little Prince. They heard one version in Spanish, and another in Hungarian. Then the scientists analyzed the dogs' brain activity. © 2022 npr

Keyword: Language; Evolution
Link ID: 28145 - Posted: 01.08.2022

Jon Hamilton When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It's a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak. Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron. If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo. "This suggests that there is a hardwired network of neurons that is fundamental to speech," says Dr. Kevin Yackle, the study's senior author and a researcher at the University of California, San Francisco. Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological "building blocks" that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study. But "there is just a big difference between a mouse brain and a human brain," Poeppel says. So the human version of this building block may not look the same. © 2022 npr

Keyword: Language; Evolution
Link ID: 28144 - Posted: 01.08.2022

Chloe Tenn On October 4, physiologist David Julius and neurobiologist Ardem Patapoutian were awarded the Nobel Prize in Physiology or Medicine for their work on temperature, pain, and touch perception. Julius researched the burning sensation people experience from chilies, and identified an ion channel, TRPV1, that is activated by heat. In 2002, Julius and Patapoutian separately reported on the TRPM8 ion channel, which senses menthol's cold. Patapoutian's group went on to discover the PIEZO1 and PIEZO2 ion channels that are involved in sensing mechanical pressure. The Nobel Committee wrote that the pair's work inspired further research into understanding how the nervous system senses temperature and mechanical stimuli and that the laureates "identified critical missing links in our understanding of the complex interplay between our senses and the environment." This year saw innovations in augmenting the brain's capabilities by plugging it into advanced computing technology. For example, a biology teacher who lost her vision 16 years ago was able to distinguish shapes and letters with the help of special glasses that interfaced with electrodes implanted in her brain. In a similar vein, a computer connected to a brain-implant system discerned brain signals for handwriting in a paralyzed man, enabling him to type up to 90 characters per minute with an accuracy above 90 percent. Such studies are a step forward for technologies that marry cutting-edge neuroscience and computational innovation in an attempt to improve people's lives. © 1986–2021 The Scientist.

Keyword: Pain & Touch; Language
Link ID: 28134 - Posted: 12.31.2021

Jeanne Paz Blocking an immune system molecule that accumulates after traumatic brain injury could significantly reduce the injury’s detrimental effects, according to a recent mouse study my neuroscience lab and I published in the journal Science. The cerebral cortex, the part of the brain involved in thinking, memory and language, is often the primary site of head injury because it sits directly beneath the skull. However, we found that another region near the center of the brain that regulates sleep and attention, the thalamus, was even more damaged than the cortex months after the injury. This may be due to increased levels of a molecule called C1q, which triggers a part of the immune system called the classical complement pathway. This pathway plays a key role in rapidly clearing pathogens and dead cells from the body and helps control the inflammatory immune response. C1q plays both helpful and harmful roles in the brain. On the one hand, accumulation of C1q in the brain can trigger abnormal elimination of synapses – the structures that allow neurons to communicate with one another – and contribute to neurodegenerative disease. On the other hand, C1q is also involved in normal brain development and protects the central nervous system from infection. In the case of traumatic brain injury, we found that C1q lingered in the thalamus at abnormally high levels for months after the initial injury and was associated with inflammation, dysfunctional brain circuits and neuronal death. This suggests that higher levels of C1q in the thalamus could contribute to several long-term effects of traumatic brain injury, such as sleep disruption and epilepsy. © 2010–2021, The Conversation US, Inc.

Keyword: Brain Injury/Concussion; Neuroimmunology
Link ID: 28112 - Posted: 12.15.2021

By Erin Blakemore Anger — such as road rage and the simmering displeasure of the ongoing pandemic — is the watchword for 2021. But be careful — those big emotions could trigger a stroke. Researchers in a global study devoted to figuring out stroke triggers found that about 1 in 11 stroke patients experience anger or emotional upset in the hour before their stroke symptoms begin. The study, published in the European Heart Journal, looked at data from 13,462 patients in 32 countries who had strokes. The patients completed extensive questionnaires during the first three days after they were hospitalized, answering questions about their medical history and what they had been doing and feeling before their stroke. Just over 8 percent of the patients surveyed said they had experienced anger or emotional upset within a day of symptom onset, which served as the control period. Just over 9 percent said they had been angry or upset within an hour of the first symptoms of their stroke, which was the test period. The risk of a stroke was higher in the test period when compared with the control period, the researchers said. “Our research found that anger or emotional upset was linked to an approximately 30% increase in risk of stroke during one hour after an episode — with a greater increase if the patient did not have a history of depression,” Andrew Smyth, a professor of clinical epidemiology at NUI Galway in Ireland who co-led the study, said in a statement. Lower education upped the odds of having a stroke linked with anger or emotional upset, as well.

Keyword: Stroke; Emotions
Link ID: 28109 - Posted: 12.15.2021

Daisy Yuhas Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence. This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal how the neurological patterns that support this behavior are very similar in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages. Can you tell me a little bit about what drew you to this topic? I grew up in a bilingual household. My mother is from South Korea; my dad is African-American. So I grew up code switching a lot between Korean and English, as well as different varieties of English, such as African-American English and the more mainstream, standardized version. When you spend a lot of time code switching, and then you realize that this is something that is not well understood from a linguistic perspective, nor from a neurobiological perspective, you realize, “Oh, this is open territory.” © 2021 Scientific American

Keyword: Language
Link ID: 28095 - Posted: 12.01.2021

Andrew Gregory Health editor Drinking coffee or tea may be linked with a lower risk of stroke and dementia, according to the largest study of its kind. Strokes cause 10% of deaths globally, while dementia is one of the world’s biggest health challenges – 130 million are expected to be living with it by 2050. In the research, 365,000 people aged between 50 and 74 were followed for more than a decade. At the start, the participants, who were involved in the UK Biobank study, self-reported how much coffee and tea they drank. Over the research period, 5,079 of them developed dementia and 10,053 went on to have at least one stroke. Researchers found that people who drank two to three cups of coffee or three to five cups of tea a day, or a combination of four to six cups of coffee and tea, had the lowest risk of stroke or dementia. Those who drank two to three cups of coffee and two to three cups of tea daily had a 32% lower risk of stroke. These people had a 28% lower risk of dementia compared with those who did not drink tea or coffee. The research, by Yuan Zhang and colleagues from Tianjin Medical University, China, suggests drinking coffee alone or in combination with tea is also linked with lower risk of post-stroke dementia. Writing in the journal Plos Medicine, the authors said: “Our findings suggested that moderate consumption of coffee and tea separately or in combination were associated with lower risk of stroke and dementia.” © 2021 Guardian News & Media Limited

Keyword: Stroke; Drug Abuse
Link ID: 28082 - Posted: 11.20.2021

Kate Wild “The skull acts as a bastion of privacy; the brain is the last private part of ourselves,” Australian neurosurgeon Tom Oxley says from New York. Oxley is the CEO of Synchron, a neurotechnology company born in Melbourne that has successfully trialled hi-tech brain implants that allow people to send emails and texts purely by thought. In July this year, it became the first company in the world, ahead of competitors like Elon Musk’s Neuralink, to gain approval from the US Food and Drug Administration (FDA) to conduct clinical trials of brain computer interfaces (BCIs) in humans in the US. Synchron has already successfully fed electrodes into paralysed patients’ brains via their blood vessels. The electrodes record brain activity and feed the data wirelessly to a computer, where it is interpreted and used as a set of commands, allowing the patients to send emails and texts. BCIs, which allow a person to control a device via a connection between their brain and a computer, are seen as a gamechanger for people with certain disabilities. “No one can see inside your brain,” Oxley says. “It’s only our mouths and bodies moving that tells people what’s inside our brain … For people who can’t do that, it’s a horrific situation. What we’re doing is trying to help them get what’s inside their skull out. We are totally focused on solving medical problems.” BCIs are one of a range of developing technologies centred on the brain. Brain stimulation is another, which delivers targeted electrical pulses to the brain and is used to treat cognitive disorders. Others, like imaging techniques fMRI and EEG, can monitor the brain in real time. “The potential of neuroscience to improve our lives is almost unlimited,” says David Grant, a senior research fellow at the University of Melbourne. “However, the level of intrusion that would be needed to realise those benefits … is profound”. © 2021 Guardian News & Media Limited

Keyword: Brain imaging; Language
Link ID: 28070 - Posted: 11.09.2021

Jon Hamilton Headaches, nausea, dizziness, and confusion are among the most common symptoms of a concussion. But researchers say a blow to the head can also make it hard to understand speech in a noisy room. "Making sense of sound is one of the hardest jobs that we ask our brains to do," says Nina Kraus, a professor of neurobiology at Northwestern University. "So you can imagine that a concussion, getting hit in the head, really does disrupt sound processing." About 15% to 20% of concussions cause persistent sound-processing difficulties, Kraus says, which suggests that hundreds of thousands of people are affected each year in the U.S. The problem is even more common in the military, where many of the troops who saw combat in Iraq and Afghanistan sustained concussions from roadside bombs. From ear to brain Our perception of sound starts with nerve cells in the inner ear that transform pressure waves into electrical signals, Kraus says. But it takes a lot of brain power to transform those signals into the auditory world we perceive. The brain needs to compare the signals from two ears to determine the source of a sound. Then it needs to keep track of changes in volume, pitch, timing and other characteristics. Kraus's lab, called Brainvolts, is conducting a five-year study of 500 elite college athletes to learn how a concussion can affect the brain's ability to process the huge amount of auditory information it receives. And she devotes an entire chapter to concussion in her 2021 book, Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World. © 2021 npr

Keyword: Brain Injury/Concussion; Hearing
Link ID: 28064 - Posted: 11.06.2021

Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.

Keyword: Language; Hearing
Link ID: 28058 - Posted: 10.30.2021

Nicola Davis They have fluffy ears, a penetrating stare and a penchant for monogamy. But it turns out that indris – a large, critically endangered species of lemur – have an even more fascinating trait: an unexpected sense of rhythm. Indri indri are known for their distinctive singing, a sound not unlike a set of bagpipes being stepped on. The creatures often strike up a song with members of their family either in duets or choruses, featuring sounds from roars to wails. Now scientists say they have analysed the songs of 39 indris living in the rainforest of Madagascar, revealing that – like humans – the creatures employ what are known as categorical rhythms. These rhythms are essentially distinctive and predictable patterns of intervals between the onset of notes. For example in a 1:1 rhythm, all the intervals are of equal length, while a 1:2 rhythm has some twice as long as those before or after – like the opening bars of We Will Rock You by Queen. “They are quite predictable [patterns], because the next note is going to come either one unit or two whole units after the previous note,” said Dr Andrea Ravignani, co-author of the research from the Max Planck Institute for Psycholinguistics. While the 1:1 rhythms have previously been identified in certain songbirds, the team say their results are the first time categorical rhythms have been identified in a non-human mammal. “The evidence is even stronger than in birds,” said Ravignani. © 2021 Guardian News & Media Limited

Keyword: Animal Communication; Language
Link ID: 28050 - Posted: 10.27.2021

By Rachel Fritts Across North America, hundreds of bird species waste time and energy raising chicks that aren’t their own. They’re the victims of a “brood parasite” called the cowbird, which adds its own egg to their clutch, tricking another species into raising its offspring. One target, the yellow warbler, has a special call to warn egg-warming females when cowbirds are casing the area. Now, researchers have found the females act on that warning 1 day later—suggesting their long-term memories might be much better than thought. “It’s a very sophisticated and subtle behavioral response,” says Erick Greene, a behavioral ecologist at the University of Montana, Missoula, who was not involved in the study. “Am I surprised? I guess I’m more in awe. It’s pretty dang cool.” Birds have been dazzling scientists with their intellects for decades. Western scrub jays, for instance, can remember where they’ve stored food for the winter—and can even keep track of when it will spoil. There’s evidence that other birds might have a similarly impressive ability to remember certain meaningful calls. “Animals are smart in the context in which they need to be smart,” says Mark Hauber, an animal behavior researcher at the University of Illinois, Urbana-Champaign (UIUC), and the Institute of Advanced Studies in Berlin, who co-authored the new study. He wanted to see whether yellow warblers had the capacity to remember their own important warning call known as a seet. The birds make the staccato sound of this call only when a cowbird is near. When yellow warbler females hear it, they go back to their nests and sit tight. (It could just as well be called a “seat” call.) But it’s been unclear whether they still remember the warning in the morning. © 2021 American Association for the Advancement of Science.

Keyword: Animal Communication; Learning & Memory
Link ID: 28039 - Posted: 10.16.2021

Linda Geddes Your dog might follow commands such as “sit”, or become uncontrollably excited at the mention of the word “walkies”, but when it comes to remembering the names of toys and other everyday items, most seem pretty absent-minded. Now a study of six “genius dogs” has advanced our understanding of dogs’ memories, suggesting some of them possess a remarkable grasp of the human language. Hungarian researchers spent more than two years scouring the globe for dogs who could recognise the names of their various toys. Although most can learn commands to some degree, learning the names of items appears to be a very different task, with most dogs unable to master this skill. Max (Hungary), Gaia (Brazil), Nalani (Netherlands), Squall (US), Whisky (Norway), and Rico (Spain) made the cut after proving they knew the names of more than 28 toys, with some knowing more than 100. They were then enlisted to take part in a series of livestreamed experiments known as the Genius Dog Challenge. “These gifted dogs can learn new names of toys in a remarkable speed,” said Dr Claudia Fugazza at Eötvös Loránd University in Budapest, who led the research team. “In our previous study we found that they could learn a new toy name after hearing it only four times. But, with such short exposure, they did not form a long-term memory of it.” To further push the dogs’ limits, their owners were tasked with teaching them the names of six, and then 12 new toys in a single week. © 2021 Guardian News & Media Limited

Keyword: Animal Communication; Language
Link ID: 28023 - Posted: 10.06.2021

By Jackie Rocheleau Elevated blood levels of a specific protein may help scientists predict who has a better chance of bouncing back from a traumatic brain injury. The protein, called neurofilament light or NfL for short, lends structural support to axons, the tendrils that send messages between brain cells. Levels of NfL peak on average at 10 times the typical level 20 days after injury and stay above normal a year later, researchers report September 29 in Science Translational Medicine. The higher the peak NfL blood concentrations after injury, the tougher the recovery for people with TBI six and 12 months later, shows the study of 197 people treated at eight trauma centers across Europe for moderate to severe TBI. Brain scans of 146 participants revealed that their peak NfL concentrations predicted the extent of brain shrinkage after six months, and axon damage at six and 12 months after injury, neurologist Neil Graham of Imperial College London and his colleagues found. These researchers also had a unique opportunity to check that the blood biomarker, which gives indirect clues about the brain injury, actually measured what was happening in the brain. In 18 of the participants who needed brain surgery, researchers sampled the fluid surrounding injured neurons. NfL concentrations there correlated with NfL concentrations in the blood. “The work shows that a new ultrasensitive blood test can be used to accurately diagnose traumatic brain injury,” says Graham. “This blood test can predict quite precisely who’s going to make a good recovery and who’s going to have more difficulties.” © Society for Science & the Public 2000–2021.

Keyword: Brain Injury/Concussion
Link ID: 28017 - Posted: 10.02.2021

By Sierra Carter Black women who have experienced more racism throughout their lives have stronger brain responses to threat, which may hurt their long-term health, according to a new study I conducted with clinical neuropsychologist Negar Fani and other colleagues. I am part of a research team that for more than 15 years has studied the ways stress related to trauma exposure can affect the mind and body. In our recent study, we took a closer look at a stressor that Black Americans disproportionately face in the United States: racism. My colleagues and I completed research with 55 Black women who reported how much they’d been exposed to traumatic experiences, such as childhood abuse and physical or sexual violence, and to racial discrimination, experiencing unfair treatment due to race or ethnicity. We asked them to focus on a task that required attention while simultaneously looking at stressful images. We used functional MRI to observe their brain activity during that time. We found that Black women who reported more experiences of racial discrimination had more response activity in brain regions that are associated with vigilance and watching out for threat — that is, the middle occipital cortex and ventromedial prefrontal cortex. Their reactions were above and beyond the response caused by traumatic experiences not related to racism. Our research suggests that racism had a traumalike effect on Black women’s health; being regularly attuned to the threat of racism can tax important body-regulation tools and worsen brain health.

Keyword: Stress; Brain Injury/Concussion
Link ID: 28015 - Posted: 10.02.2021