Links for Keyword: Brain imaging
By Gary Stix This year was full of roiling debate and speculation about the prospect of machines with superhuman capabilities that might, sooner than expected, leave the human brain in the dust. The rise of ChatGPT and other so-called large language models (LLMs) dramatically expanded public awareness of artificial intelligence. In tandem, it raised the question of whether the human brain can keep up with the relentless pace of AI advances. The most benevolent answer posits that humans and machines need not be cutthroat competitors. Researchers found one example of potential cooperation by getting AI to probe the infinite complexity of the ancient game of Go—which, like chess, has seen a computer topple the highest-level human players. A study published in March showed how people might learn from machines with such superhuman skills. And understanding ChatGPT’s prodigious abilities offers some inkling of why an equivalence between the deep neural networks that underlie the famed chatbot and the trillions of connections in the human brain is so often invoked. Importantly, the machine learning incorporated into AI has not totally distracted mainstream neuroscience from avidly pursuing better insights into what has been called “the most complicated object in the known universe”: the brain. One of the grand challenges in science—understanding the nature of consciousness—received its due in June with the prominent showcasing of experiments that tested the validity of two competing theories, both of which purport to explain the underpinnings of the conscious self. The past 12 months provided lots of examples of impressive advances for you to store in your working memory. Now here’s a closer look at some of the standout mind and brain stories we covered in Scientific American in 2023. © 2023 SCIENTIFIC AMERICAN
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 29069 - Posted: 12.31.2023
Emily Baumgaertner This is not a work of art. It’s an image of microscopic blood flow in a rat’s brain, taken with one of many new tools that are yielding higher levels of detail in brain imaging. Here are seven more glorious images from neuroscience research. © 2023 The New York Times Company
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29059 - Posted: 12.22.2023
Liam Drew In a laboratory in San Francisco, California, a woman named Ann sits in front of a huge screen. On it is an avatar created to look like her. Thanks to a brain–computer interface (BCI), when Ann thinks of talking, the avatar speaks for her — and in her own voice, too. In 2005, a brainstem stroke left Ann almost completely paralysed and unable to speak. Last year, neurosurgeon Edward Chang, at the University of California, San Francisco, placed a grid of more than 250 electrodes on the surface of Ann’s brain, on top of the regions that once controlled her body, face and larynx. As Ann imagined speaking certain words, researchers recorded her neural activity. Then, using machine learning, they established the activity patterns corresponding to each word and to the facial movements Ann would, if she could, use to vocalize them. The system can convert speech to text at 78 words per minute: a huge improvement on previous BCI efforts and now approaching the 150 words per minute considered average for regular speech [1]. Compared with two years ago, Chang says, “it’s like night and day”. In an added feat, the team programmed the avatar to speak aloud in Ann’s voice, basing the output on a recording of a speech she made at her wedding. “It was extremely emotional for Ann because it was the first time that she really felt that she was speaking for almost 20 years,” says Chang. This work was one of several studies in 2023 that boosted excitement about implantable BCIs. Another study [2] also translated neural activity into text at unprecedented speed. And in May, scientists reported that they had created a digital bridge between the brain and spinal cord of a man paralysed in a cycling accident [3]. A BCI decoded his intentions to move and directed a spinal implant to stimulate the nerves of his legs, allowing him to walk. © 2023 Springer Nature Limited
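The pipeline behind such systems is elaborate, but its core step is a pattern classifier that maps multi-electrode activity to intended words. Below is a minimal, purely illustrative sketch of that step on simulated data; the electrode count echoes the article, while the vocabulary, noise model and classifier are our assumptions, not details of Chang's system.

```python
# Minimal sketch of the core idea behind speech BCIs: map patterns of
# multi-electrode neural activity to intended words. Purely illustrative;
# dimensions, noise model, and classifier are assumptions, not details of
# the UCSF system described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_electrodes = 253          # the article mentions "more than 250" electrodes
vocab = ["hello", "water", "thank", "you", "help"]  # toy vocabulary
n_trials_per_word = 200

# Pretend each word evokes a characteristic spatial activity pattern,
# observed through noise on every trial.
prototypes = rng.normal(size=(len(vocab), n_electrodes))
X = np.vstack([prototypes[i] + 0.8 * rng.normal(size=(n_trials_per_word, n_electrodes))
               for i in range(len(vocab))])
y = np.repeat(np.arange(len(vocab)), n_trials_per_word)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy on held-out trials: {clf.score(X_test, y_test):.2f}")
```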
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 28997 - Posted: 11.11.2023
by Giorgia Guglielmi / The ability to see inside the human brain has improved diagnostics and revealed how brain regions communicate, among other things. Yet questions remain about the replicability of neuroimaging studies that aim to connect structural or functional differences to complex traits or conditions, such as autism. Some neuroscientists call these studies ‘brain-wide association studies’ — a nod to the ‘genome-wide association studies,’ or GWAS, that link specific variants to particular traits. But unlike GWAS, which typically analyze hundreds of thousands of genomes at once, most published brain-wide association studies involve, on average, only about two dozen participants — far too few to yield reliable results, a March analysis suggests. Spectrum talked to Damien Fair, co-lead investigator on the study and director of the Masonic Institute for the Developing Brain at the University of Minnesota in Minneapolis, about solutions to the problem and reproducibility issues in neuroimaging studies in general.

Spectrum: How have neuroimaging studies changed over time, and what are the consequences?

Damien Fair: The realization that we could noninvasively peer inside the brain and look at how it’s reacting to certain types of stimuli blew open the doors on studies correlating imaging measurements with behaviors or phenotypes. But even though there was a shift in the type of question that was being asked, the study design stayed identical. That has caused a lot of the reproducibility issues we’re seeing today, because we didn’t change sample sizes. The opportunity is huge right now because we finally, as a community, are understanding how to use magnetic resonance imaging for highly reliable, highly reproducible, highly generalizable findings.

S: Where did the reproducibility issues in neuroimaging studies begin?

DF: The field got comfortable with a certain type of study that provided significant and exciting results, but without having the rigor to show how those findings reproduced. For brain-wide association studies, the importance of having large samples just wasn’t realized until more recently. It was the same problem in the early age of genome-wide association studies looking at common genetic variants and how they relate to complex traits. If you’re underpowered, highly significant results may not generalize to the population. © 2023 Simons Foundation
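Fair's point about statistical power is easy to demonstrate numerically. The sketch below (ours, not from the analysis he co-led) simulates a brain-behavior correlation with a small true effect and shows how unstable the estimate is at roughly two dozen participants compared with thousands:

```python
# Sketch of why ~25-participant brain-wide association studies are unreliable:
# with a small true effect (r = 0.1), tiny samples yield wildly variable,
# often sign-flipped or inflated, correlation estimates.
import numpy as np

rng = np.random.default_rng(42)
true_r = 0.1
n_simulations = 1000
cov = [[1.0, true_r], [true_r, 1.0]]   # bivariate normal with correlation true_r

for n in (25, 2000):
    estimates = []
    for _ in range(n_simulations):
        # Draw (brain measure, trait) pairs and estimate their correlation.
        data = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        estimates.append(np.corrcoef(data[:, 0], data[:, 1])[0, 1])
    estimates = np.array(estimates)
    print(f"n={n:5d}: mean r = {estimates.mean():+.3f}, "
          f"range = [{estimates.min():+.3f}, {estimates.max():+.3f}]")
```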
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28980 - Posted: 11.01.2023
By Carl Zimmer An international team of scientists has mapped the human brain in much finer resolution than ever before. The brain atlas, a $375 million effort started in 2017, has identified more than 3,300 types of brain cells, an order of magnitude more than was previously reported. The researchers have only a dim notion of what the newly discovered cells do. The results were described in 21 papers published on Thursday in Science and several other journals. Ed Lein, a neuroscientist at the Allen Institute for Brain Science in Seattle who led five of the studies, said that the findings were made possible by new technologies that allowed the researchers to probe millions of human brain cells collected from biopsied tissue or cadavers. “It really shows what can be done now,” Dr. Lein said. “It opens up a whole new era of human neuroscience.” Still, Dr. Lein said that the atlas was just a first draft. He and his colleagues have only sampled a tiny fraction of the 170 billion cells estimated to make up the human brain, and future surveys will certainly uncover more cell types, he said. Biologists first noticed in the 1800s that the brain was made up of different kinds of cells. In the 1830s, the Czech scientist Jan Purkinje discovered that some brain cells had remarkably dense explosions of branches. Purkinje cells, as they are now known, are essential for fine-tuning our muscle movements. Later generations developed techniques to make other cell types visible under a microscope. In the retina, for instance, researchers found cylindrical “cone cells” that capture light. By the early 2000s, researchers had found more than 60 types of neurons in the retina alone. They were left to wonder just how many kinds of cells were lurking in the deeper recesses of the brain, which are far harder to study. © 2023 The New York Times Company
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28963 - Posted: 10.14.2023
by Maris Fessenden A new lightweight device with a wisplike tether can record neural activity while mice jump, run and explore their environment. The open-source recording system, which its creators call ONIX, overcomes several of the limitations of previous systems and enables the rodents to move more freely during recording. The behavior that ONIX allows brings to mind children running around in a playground, says Jakob Voigts, a researcher at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, who helped build and test the system. He and his colleagues describe their work in a preprint posted on bioRxiv earlier this month. To understand how the brain creates complex behaviors — such as those found in social interaction, sensory processing and cognition, which are commonly affected in autism — researchers observe brain signals as these behaviors unfold. Head-mounted devices enable researchers to eavesdrop on the electrical chatter between brain cells in mice, rats and primates. But as the smallest of these animal models, mice present some significant challenges. Current neural recording systems are bulky and heavy, making the animals carry up to a fifth of their body weight on their skulls. Predictably, this slows the mice down and tires them out. And most neural recording systems use a tether to relay signals from the mouse’s brain to a computer. But this tether twists and tangles as the mouse turns its head and body, exerting torque that the mouse can feel. Researchers must therefore periodically replace or untangle the tether. Longer tethers allow for more time to elapse between changeouts, but the interruptions still affect natural behavior. And battery-powered, wireless systems add too much weight. Altogether, these challenges inhibit natural behaviors and limit the amount of time that recording can take place, preventing scientists from studying, for example, the complete process of learning a new task. © 2023 Simons Foundation
Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28930 - Posted: 09.27.2023
By Gina Kolata Tucker Marr’s life changed forever last October. He was on his way to a wedding reception when he fell down a steep flight of metal stairs, banging the right side of his head so hard he went into a coma. He’d fractured his skull, and a large blood clot formed on the left side of his head. Surgeons had to remove a large chunk of his skull to relieve pressure on his brain and to remove the clot. “Getting a piece of my skull taken out was crazy to me,” Mr. Marr said. “I almost felt like I’d lost a piece of me.” But what seemed even crazier to him was the way that piece was restored. Mr. Marr, a 27-year-old analyst at Deloitte, became part of a new development in neurosurgery. Instead of remaining without a piece of skull or getting the old bone put back, a procedure that is expensive and has a high rate of infection, he got a prosthetic piece of skull made with a 3-D printer. But it is not the typical prosthesis used in such cases. His prosthesis, which is covered by his skin, is embedded with an acrylic window that would let doctors peer into his brain with ultrasound. A few medical centers are offering such acrylic windows to patients who had to have a piece of skull removed to treat conditions like a brain injury, a tumor, a brain bleed or hydrocephalus. “It’s very cool,” Dr. Michael Lev, director of emergency radiology at Massachusetts General Hospital, said. But, “it is still early days,” he added. Advocates of the technique say that if a patient with such a window has a headache or a seizure or needs a scan to see if a tumor is growing, a doctor can slide an ultrasound probe on the patient’s head and look at the brain in the office. © 2023 The New York Times Company
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 28914 - Posted: 09.16.2023
Neurotransmitters are the words our brain cells use to communicate with one another. For years, researchers relied on tools that provided limited temporal and spatial resolution to track changes in the fast chemical chat between neurons. But that started to change about ten years ago for glutamate—the most abundant excitatory neurotransmitter in vertebrates that plays an essential role in learning, memory, and information processing—when scientists engineered the first glutamate fluorescent reporter, iGluSnFR, which provided a readout of neurons’ fast glutamate release. In 2013, researchers at the Howard Hughes Medical Institute collaborated with scientists from other institutions to develop the first generation of iGluSnFR [1]. To create the biosensor, the team combined a bacteria-derived glutamate binding protein, GltI, a wedged fluorescent GFP protein, and a membrane-targeting protein that anchors the reporter to the surface of the cell. Upon glutamate binding, the GltI protein changes its conformation, increasing the fluorescence intensity of GFP. In their first study, the team showcased the utility of the biosensor for monitoring glutamate levels by demonstrating selective activation by glutamate in cell cultures. By conducting experiments with brain cells from the C. elegans worm, zebrafish, and mice, they confirmed that the reporter also tracked glutamate in vivo, a finding that set iGluSnFR apart from existing glutamate sensors. The first iGluSnFR generation allowed researchers to study glutamate dynamics in different biological systems, but the indicator could not detect small amounts of the neurotransmitter or keep up with brain cells’ fast glutamate release bouts. © 1986–2023 The Scientist.
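The article stops short of describing how such a reporter's signal is quantified in practice. Fluorescence indicators like iGluSnFR are conventionally read out as a fractional change in fluorescence over a baseline window (dF/F); the sketch below illustrates that standard calculation on a simulated trace, with all numbers assumed for illustration only.

```python
# Standard dF/F quantification used for fluorescent indicators such as
# iGluSnFR: express fluorescence as a fractional change from a baseline
# window. The simulated trace below is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)                 # 5-second recording
baseline = 100.0                            # arbitrary fluorescence units
# Simulate a glutamate release event at t = 2 s with a fast decay.
event = 30.0 * np.exp(-(t - 2.0) / 0.3) * (t >= 2.0)
trace = baseline + event + rng.normal(0, 1.0, size=t.size)

f0 = trace[t < 2.0].mean()                  # baseline fluorescence F0
dff = (trace - f0) / f0                     # dF/F
print(f"peak dF/F: {dff.max():.2f} at t = {t[dff.argmax()]:.2f} s")
```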
Related chapters from BN: Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 28901 - Posted: 09.10.2023
By Miryam Naddaf It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer. During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. “When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels. Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac. HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments. “Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 28884 - Posted: 08.26.2023
Jon Hamilton Scientists have genetically engineered a squid that is almost as transparent as the water it's in. The squid will allow researchers to watch brain activity and biological processes in a living animal.

ARI SHAPIRO, HOST: For most of us, it would take magic to become invisible, but for some lucky, tiny squid, all it took was a little genetic tweaking. As part of our Weekly Dose of Wonder series, NPR's Jon Hamilton explains how scientists created a see-through squid.

JON HAMILTON, BYLINE: The squid come from the Marine Biological Laboratory in Woods Hole, Mass. Josh Rosenthal is a senior scientist there. He says even the animal's caretakers can't keep track of them.

JOSH ROSENTHAL: They're really hard to spot. We know we put it in this aquarium, but they might look for a half-hour before they can actually see it. They're that transparent.

HAMILTON: Almost invisible. Carrie Albertin, a fellow at the lab, says studying these creatures has been transformative.

CARRIE ALBERTIN: They are so strikingly see-through. It changes the way you interpret what's going on in this animal, being able to see completely through the body.

HAMILTON: Scientists can watch the squid's three hearts beating in synchrony or see its brain cells at work. And it's all thanks to a gene-editing technology called CRISPR. A few years ago, Rosenthal and Albertin decided they could use CRISPR to create a special octopus or squid for research.

ROSENTHAL: Carrie and I are highly biased. We both love cephalopods - right? - and we have for our entire careers.

HAMILTON: So they focused on the hummingbird bobtail squid. It's smaller than a thumb and shaped like a dumpling. Like other cephalopods, it has a relatively large and sophisticated brain. Rosenthal takes me to an aquarium to show me what the squid looks like before its genes are altered.

ROSENTHAL: Here is our hummingbird bobtail squid. You can see him right there in the bottom, just kind of sitting there hunkered down in the sand. At night, it'll come out and hunt and be much more mobile. © 2023 npr
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28883 - Posted: 08.26.2023
Diana Kwon Santiago Ramón y Cajal revolutionized neurobiology in the late nineteenth century with his exquisitely detailed illustrations of neural tissues. Created through years of meticulous microscopy work, the Spanish physician-scientist’s drawings revealed the unique cellular morphology of the brain. “With Cajal’s work, we saw that the cells of the brain don’t look like the cells of every other part of the body — they have incredible morphologies that you just don’t see elsewhere,” says Evan Macosko, a neuroscientist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. Ramón y Cajal’s drawings provided one of the first clues that the keys to understanding how the brain governs its many functions, from regulating blood pressure and sleep to controlling cognition and mood, might lie at the cellular level. Still, when it comes to the brain, crucial information remained — and indeed, remains — missing. “In order to have a fundamental understanding of the brain, we really need to know how many different types of cells there are, how are they organized, and how they interact with each other,” says Xiaowei Zhuang, a biophysicist at Harvard University in Cambridge. What neuroscientists require, Zhuang explains, is a way to systematically identify and map the many categories of brain cells. Now researchers are closing in on such a resource, at least in mice. By combining high-throughput single-cell RNA sequencing with spatial transcriptomics — methods for determining which genes are expressed in individual cells, and where those cells are located — they are creating some of the most comprehensive atlases of the mouse brain so far. The crucial next steps will be working out what these molecularly defined cell types do, and bringing the various brain maps together to create a unified resource that the broader neuroscience community can use. © 2023 Springer Nature Limited
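A core computational step behind such atlases is grouping cells into molecularly defined types by their expression profiles. The toy sketch below clusters simulated single-cell counts; real pipelines add normalization, dimensionality reduction and vastly more genes and cells, so treat this only as a schematic of the idea.

```python
# Toy sketch of the core computational step in cell-type atlases:
# cluster cells by their gene-expression profiles to define molecular
# "types". Everything here (counts, sizes, method) is illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_types, cells_per_type, n_genes = 4, 50, 30

# Simulate cells: each true type has its own mean expression signature,
# and observed counts are Poisson draws around that signature.
signatures = rng.gamma(2.0, 2.0, size=(n_types, n_genes))
expression = np.vstack([rng.poisson(signatures[t], size=(cells_per_type, n_genes))
                        for t in range(n_types)])

labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(expression)
print("cells per recovered cluster:", np.bincount(labels))
```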
Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28880 - Posted: 08.24.2023
By Lauren Leffer When a nematode wriggles around a petri dish, what’s going on inside a tiny roundworm’s even tinier brain? Neuroscientists now have a more detailed answer to that question than ever before. As with any experimental animal, from a mouse to a monkey, the answers may hold clues about the contents of more complex creatures’ noggins, including what resides in the neural circuitry of our own head. A new brain “atlas” and computer model, published in Cell on Monday, lays out the connections between the actions of the nematode species Caenorhabditis elegans and this model organism’s individual brain cells. With the findings, researchers can now observe a C. elegans worm feeding or moving in a particular way and infer activity patterns for many of the animal’s behaviors in its specific neurons. Through establishing those brain-behavior links in a humble roundworm, neuroscientists are one step closer to understanding how all sorts of animal brains, even potentially human ones, encode action. “I think this is really nice work,” says Andrew Leifer, a neuroscientist and physicist who studies nematode brains at Princeton University and was not involved in the new research. “One of the most exciting reasons to study how a worm brain works is because it holds the promise of being able to understand how any brain generates behavior,” he says. “What we find in the worm forms hypotheses to look for in other organisms.” Biologists have been drawn to the elegant simplicity of nematode biology for many decades. South African biologist Sydney Brenner received a Nobel Prize in Physiology or Medicine in 2002 for pioneering work that enabled C. elegans to become an experimental animal for the study of cell maturation and organ development. C. elegans was the first multicellular organism to have its entire genome and nervous system mapped. The first neural map, or “connectome,” of a C. elegans brain was published in 1986. In that research, scientists hand drew connections using colored pencils and charted each of the 302 neurons and approximately 5,000 synapses inside the one-millimeter-long animal’s transparent body. Since then a subdiscipline of neuroscience has emerged—one dedicated to plotting out the brains of increasingly complex organisms. Scientists have compiled many more nematode connectomes, as well as brain maps of a marine annelid worm, a tadpole, a maggot and an adult fruit fly. Yet these maps simply serve as a snapshot in time of a single animal. They can tell us a lot about brain structure but little about how behaviors relate to that structure. © 2023 Scientific American
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28879 - Posted: 08.24.2023
Liam Drew Scientific advances are rapidly making science-fiction concepts such as mind-reading a reality — and raising thorny questions for ethicists, who are considering how to regulate brain-reading techniques to protect human rights such as privacy. On 13 July, neuroscientists, ethicists and government ministers discussed the topic at a Paris meeting organized by UNESCO, the United Nations scientific and cultural agency. Delegates plotted the next steps in governing such ‘neurotechnologies’ — techniques and devices that directly interact with the brain to monitor or change its activity. The technologies often use electrical or imaging techniques, and run the gamut from medically approved devices, such as brain implants for treating Parkinson’s disease, to commercial products such as wearables used in virtual reality (VR) to gather brain data or to allow users to control software. How to regulate neurotechnology “is not a technological discussion — it’s a societal one, it’s a legal one”, Gabriela Ramos, UNESCO’s assistant director-general for social and human sciences, told the meeting. Advances in neurotechnology include a neuroimaging technique that can decode the contents of people’s thoughts, and implanted brain–computer interfaces (BCIs) that can convert people’s thoughts of handwriting into text [1]. The field is growing fast — UNESCO’s latest report on neurotechnology, released at the meeting, showed that, worldwide, the number of neurotechnology-related patents filed annually doubled between 2015 and 2020. Investment rose 22-fold between 2010 and 2020, the report says, and neurotechnology is now a US$33-billion industry. One area in need of regulation is the potential for neurotechnologies to be used for profiling individuals and the Orwellian idea of manipulating people’s thoughts and behaviour. Mass-market brain-monitoring devices would be a powerful addition to a digital world in which corporate and political actors already use personal data for political or commercial gain, says Nita Farahany, an ethicist at Duke University in Durham, North Carolina, who attended the meeting. © 2023 Springer Nature Limited
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28859 - Posted: 07.27.2023
by Holly Barker By bloating brain samples and imaging them with a powerful microscope, researchers can reconstruct neurons across the entire mouse brain, according to a new preprint. The technique could help scientists uncover the neural circuits responsible for complex behaviors, as well as the pathways that are altered in neurological conditions. Tracking axons can help scientists understand how individual neurons and brain areas communicate over long distances. But tracing their path through the brain is tricky, says study investigator Adam Glaser, senior scientist at the Allen Institute for Neural Dynamics in Seattle, Washington. Axons, which are capable of spanning the entire brain, can be less than a micrometer in diameter, so mapping their route requires detailed imaging, he says. One existing approach involves a microscope that slices off an ultra-thin section of the brain and then scans it, repeating the process about 20,000 times to capture the entire mouse brain. Scientists then blend the images together to form a 3D reconstruction of neuronal pathways. But the process takes several days and is therefore more prone to complications — bubbles forming on the lens, say — than faster techniques, Glaser says. And slicing can distort the edges of the image, making it “challenging or impossible” to stitch them back together, says Paul Tillberg, principal scientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, who was not involved in the study. “This is particularly an issue when reconstructing brain-wide axonal projections, where a single point of confusion can misalign an entire axonal arbor to the wrong neuron,” he says. © 2023 Simons Foundation
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28850 - Posted: 07.19.2023
Davide Castelvecchi The wrinkles that give the human brain its familiar walnut-like appearance have a large effect on brain activity, in much the same way that the shape of a bell determines the quality of its sound, a study suggests [1]. The findings run counter to a commonly held theory about which aspect of brain anatomy drives function. The study’s authors compared the influence of two components of the brain’s physical structure: the outer folds of the cerebral cortex — the area where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. The team found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. “We use concepts from physics and engineering to study how anatomy determines function,” says study co-author James Pang, a physicist at Monash University in Melbourne, Australia. The results were published in Nature on 31 May [1]. ‘Exciting’ a neuron makes it fire, which sends a message zipping to other neurons. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long filament called an axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The authors wanted to understand how brain activity is affected by each of the ways in which neuronal excitation can spread: across the brain’s surface or through distant interconnections. To do so, the researchers — who have backgrounds in physics and neuroscience — tapped into the mathematical theory of waves.
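The wave formulation can be summarized compactly, though the notation below is ours rather than the authors': natural resonant patterns (eigenmodes) of the cortical surface are computed first, and recorded activity is then expressed as a weighted sum of them, much as a bell's sound decomposes into its resonant tones.

```latex
% Sketch of the eigenmode formulation behind "wave" models of cortical
% activity: find the resonant spatial patterns of the cortical surface,
% then express recorded activity as a weighted sum of them.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Geometric eigenmodes $\psi_j$ are solutions of the Helmholtz equation on
the cortical surface, with $\nabla^2$ the surface Laplacian and $k_j$ the
spatial wavenumber of mode $j$:
\begin{equation}
  \nabla^2 \psi_j(\mathbf{r}) + k_j^2 \, \psi_j(\mathbf{r}) = 0 .
\end{equation}
Recorded activity $y(\mathbf{r}, t)$ is then decomposed as a weighted sum
of modes, and a structural model is judged by how well few modes
reconstruct the data:
\begin{equation}
  y(\mathbf{r}, t) \approx \sum_{j=1}^{N} a_j(t) \, \psi_j(\mathbf{r}) .
\end{equation}
\end{document}
```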
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28811 - Posted: 06.03.2023
By Matteo Wong If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind. The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings. The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT. Copyright (c) 2023 by The Atlantic Monthly Group.
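To make that "bunch of equations" concrete: each synthetic neuron takes a weighted sum of its inputs and passes it through a nonlinearity, and layers of such units feed one another. A minimal illustrative sketch follows; the sizes and random weights are arbitrary and do not correspond to any model named above.

```python
# A minimal "layer of synthetic neurons": each unit computes a weighted sum
# of its inputs plus a bias, then applies a nonlinearity. Stacking such
# layers is the basic recipe behind the models discussed above. Sizes and
# weights here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer: a nonlinearity (here, ReLU) applied to a weighted sum."""
    return np.maximum(0.0, weights @ x + bias)

x = rng.normal(size=8)                  # an 8-dimensional input
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

hidden = layer(x, w1, b1)               # 16 "neurons" receive the input
output = layer(hidden, w2, b2)          # 4 downstream "neurons"
print(output)
```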
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 28802 - Posted: 05.27.2023
By Marla Broadfoot In Alexandre Dumas’s classic novel The Count of Monte-Cristo, a character named Monsieur Noirtier de Villefort suffers a terrible stroke that leaves him paralyzed. Though he remains awake and aware, he is no longer able to move or speak, relying on his granddaughter Valentine to recite the alphabet and flip through a dictionary to find the letters and words he requires. With this rudimentary form of communication, the determined old man manages to save Valentine from being poisoned by her stepmother and thwart his son’s attempts to marry her off against her will. Dumas’s portrayal of this catastrophic condition — where, as he puts it, “the soul is trapped in a body that no longer obeys its commands” — is one of the earliest descriptions of locked-in syndrome. This form of profound paralysis occurs when the brain stem is damaged, usually because of a stroke but also as the result of tumors, traumatic brain injury, snakebite, substance abuse, infection or neurodegenerative diseases like amyotrophic lateral sclerosis (ALS). The condition is thought to be rare, though just how rare is hard to say. Many locked-in patients can communicate through purposeful eye movements and blinking, but others can become completely immobile, losing their ability even to move their eyeballs or eyelids, rendering the command “blink twice if you understand me” moot. As a result, patients can spend an average of 79 days imprisoned in a motionless body, conscious but unable to communicate, before they are properly diagnosed. The advent of brain-machine interfaces has fostered hopes of restoring communication to people in this locked-in state, enabling them to reconnect with the outside world. These technologies typically use an implanted device to record the brain waves associated with speech and then use computer algorithms to translate the intended messages. The most exciting advances require no blinking, eye tracking or attempted vocalizations, but instead capture and convey the letters or words a person says silently in their head. © 2023 Annual Reviews
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28791 - Posted: 05.21.2023
By Laura Sanders Like Dumbledore’s wand, a scan can pull long strings of stories straight out of a person’s brain — but only if that person cooperates. This “mind-reading” feat, described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the result could ultimately lead to seamless devices that help people who can’t talk or otherwise communicate easily. The research also raises privacy concerns about unwelcome neural eavesdropping (SN: 2/11/21). “I thought it was fascinating,” says Gopala Anumanchipalli, a neural engineer at the University of California, Berkeley who wasn’t involved in the study. “It’s like, ‘Wow, now we are here already,’” he says. “I was delighted to see this.” As opposed to implanted devices that have shown recent promise, the new system requires no surgery (SN: 11/15/22). And unlike other external approaches, it produces continuous streams of words instead of having a more constrained vocabulary. For the new study, three people lay inside a bulky MRI machine for at least 16 hours each. They listened to stories, mostly from The Moth podcast, while functional MRI scans detected changes in blood flow in the brain. These changes are proxies for brain activity, albeit slow and imperfect measures. With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and colleagues were able to match patterns of brain activity to certain words and ideas. The approach relied on a language model that was built with GPT, one of the forerunners that enabled today’s AI chatbots (SN: 4/12/23). © Society for Science & the Public 2000–2023.
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 28769 - Posted: 05.03.2023
By Oliver Whang Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in. On Monday, scientists from the University of Texas, Austin, made another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions in the brain. Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write while just thinking of writing. But the new language decoder is one of the first to not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen. “This isn’t just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.” The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard. © 2023 The New York Times Company
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 28768 - Posted: 05.03.2023
Sara Reardon The little voice inside your head can now be decoded by a brain scanner — at least some of the time. Researchers have developed the first non-invasive method of determining the gist of imagined speech, presenting a possible communication outlet for people who cannot talk. But how close is the technology — which is currently only moderately accurate — to achieving true mind-reading? And how can policymakers ensure that such developments are not misused? Most existing thought-to-speech technologies use brain implants that monitor activity in a person’s motor cortex and predict the words that the lips are trying to form. To understand the actual meaning behind the thought, computer scientists Alexander Huth and Jerry Tang at the University of Texas at Austin and their colleagues combined functional magnetic resonance imaging (fMRI), a non-invasive means of measuring brain activity, with artificial intelligence (AI) algorithms called large language models (LLMs), which underlie tools such as ChatGPT and are trained to predict the next word in a piece of text. In a study published in Nature Neuroscience on 1 May, the researchers had 3 volunteers lie in an fMRI scanner and recorded the individuals’ brain activity while they listened to 16 hours of podcasts each [1]. By measuring the blood flow through the volunteers’ brains and integrating this information with details of the stories they were listening to and the LLM’s ability to understand how words relate to one another, the researchers developed an encoded map of how each individual’s brain responds to different words and phrases. Next, the researchers recorded the participants’ fMRI activity while they listened to a story, imagined telling a story or watched a film that contained no dialogue. Using a combination of the patterns they had previously encoded for each individual and algorithms that determine how a sentence is likely to be constructed based on other words in it, the researchers attempted to decode this new brain activity. In the original article, a video shows the sentences produced from brain recordings taken while a study participant watched a clip from the animated film Sintel, about a girl caring for a baby dragon. © 2023 Springer Nature Limited
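Putting the pieces of the approach together: a language model proposes likely next words, a per-subject encoding model predicts the brain activity each candidate phrase would evoke, and the candidates whose predictions best match the recorded activity are kept. The sketch below illustrates that loop with stand-in functions; none of it is the study's actual code, and the similarity score and toy proposals are our assumptions.

```python
# Sketch of the decoding loop described above: a language model proposes
# candidate next words, a per-subject encoding model predicts the fMRI
# pattern each candidate would evoke, and the candidates whose predictions
# best match the recorded activity are kept (a beam search).
# All functions here are illustrative stand-ins, not the study's code.
import numpy as np

def lm_propose(prefix):
    """Stand-in for a GPT-style model: propose plausible next words."""
    return ["driver's", "license", "drive", "yet", "learn"]  # toy output

def encoding_model(text):
    """Stand-in for the per-subject map from text to a predicted fMRI pattern."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=100)          # 100 voxels, illustrative

def decode(observed, n_steps=5, beam_width=3):
    beams = [("", 0.0)]                  # (text so far, cumulative score)
    for _ in range(n_steps):
        candidates = []
        for text, score in beams:
            for word in lm_propose(text):
                extended = (text + " " + word).strip()
                predicted = encoding_model(extended)
                # Score: cosine similarity between predicted and observed activity.
                match = float(np.dot(predicted, observed) /
                              (np.linalg.norm(predicted) * np.linalg.norm(observed)))
                candidates.append((extended, score + match))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_width]
    return beams[0][0]

observed_activity = np.random.default_rng(7).normal(size=100)
print(decode(observed_activity))
```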
Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 28767 - Posted: 05.03.2023