Most Recent Links



Links 501 - 520 of 29242

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure for creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation

Keyword: Consciousness; Language
Link ID: 29344 - Posted: 06.06.2024

By Andrew Jacobs An independent advisory panel of the Food and Drug Administration rejected the use of MDMA-assisted therapy for post-traumatic stress disorder on Tuesday, highlighting the unparalleled regulatory challenges of a novel therapy using the drug commonly known as Ecstasy. Before the vote, members of the panel raised concerns about the designs of the two studies submitted by the drug’s sponsor, Lykos Therapeutics. Many questions focused on the fact that study participants were by and large able to correctly guess whether they had been given MDMA, also known as Ecstasy or molly. The panel voted 9-2 that the evidence did not show the MDMA-assisted therapy to be effective, and 10-1 that the proposed treatment’s benefits did not outweigh its risks. Other panelists expressed concerns over the drug’s potential cardiovascular effects, and possible bias among the therapists and facilitators who guided the sessions and may have positively influenced patient outcomes. A case of misconduct involving a patient and therapist in the study also weighed on some panelists’ minds. Many of the committee members said they were especially worried about the failure of Lykos to collect detailed data from participants on the potential for abuse of a drug that generates feelings of bliss and well-being. “I absolutely agree that we need new and better treatments for PTSD,” said Paul Holtzheimer, deputy director for research at the National Center for PTSD, a panelist who voted no on the question of whether the benefits of MDMA therapy outweighed the risks. “However, I also note that premature introduction of a treatment can actually stifle development, stifle implementation and lead to premature adoption of treatments that are either not completely known to be safe, not fully effective or not being used at their optimal efficacy,” he added. © 2024 The New York Times Company

Keyword: Stress; Drug Abuse
Link ID: 29343 - Posted: 06.06.2024

Leyland Cecco in Toronto A leading federal scientist in Canada has alleged he was barred from investigating a mystery brain illness in the province of New Brunswick and said he fears more than 200 people affected by the condition are experiencing unexplained neurological decline. The allegations, made in leaked emails to a colleague seen by the Guardian, have emerged two years after the eastern province closed its investigation into a possible “cluster” of cases. “All I will say is that my scientific opinion is that there is something real going on in [New Brunswick] that absolutely cannot be explained by the bias or personal agenda of an individual neurologist,” wrote Michael Coulthart, a prominent microbiologist. “A few cases might be best explained by the latter, but there are just too many (now over 200).” New Brunswick health officials warned in 2021 that more than 40 residents were suffering from a possible unknown neurological syndrome, with symptoms similar to those of the degenerative brain disorder Creutzfeldt-Jakob disease. Those symptoms were varied and dramatic: some patients started drooling and others felt as though bugs were crawling on their skin. A year later, however, an independent oversight committee created by the province determined that the group of patients had most likely been misdiagnosed and were suffering from known illnesses such as cancer and dementia. The committee and the New Brunswick government also cast doubt on the work of neurologist Alier Marrero, who was initially referred dozens of cases by baffled doctors in the region, and subsequently identified more cases. The doctor has since become a fierce advocate for patients he feels have been neglected by the province. © 2024 Guardian News & Media Limited

Keyword: Alzheimers; Depression
Link ID: 29342 - Posted: 06.04.2024

By Amorina Kingdon Like most humans, I assumed that sound didn’t work well in water. After all, Jacques Cousteau himself called the ocean the “silent world.” I thought, beyond whales, aquatic animals must not use sound much. How wonderfully wrong I was. In water a sound wave travels four and a half times faster, and loses less energy, than in air. It moves farther and faster and carries information better. In the ocean, water exists in layers and swirling masses of slightly different densities, depending on depth, temperature, and saltiness. The physics-astute reader will know that the density of the medium in which sound travels influences its speed. So, as sound waves spread through the sea, their speed changes, causing complex reflection or refraction and bending of the sound waves into “ducts” and “channels.” Under the right circumstances, these ducts and channels can carry sound waves hundreds and even thousands of kilometers. What about other sensory phenomena? Touch and taste work about the same in water as in air. But the chemicals that tend to carry scent move slower in water than in air. And water absorbs light very easily, greatly diminishing visibility. Even away from murky coastal waters, in the clearest seas, light vanishes below several hundred meters and visibility below several dozen. So sound is often the best, if not only, way for ocean and freshwater creatures to signal friends, detect enemies, and monitor the world underwater. And there is much to monitor: Earthquakes, mudslides, and volcanic activity rumble through the oceans, beyond a human’s hearing range. Ice cracks, booms, and scrapes the seafloor. Waves hiss and roar. Raindrops plink. If you listen carefully, you can tell wind speed, rainfall, even drop size, by listening to the ocean as a storm passes. Even snowfall makes a sound. © 2024 NautilusNext Inc.,

Keyword: Animal Communication; Sexual Behavior
Link ID: 29341 - Posted: 06.04.2024

By Andrea Muraski I had a nightmare last night. It began like many of my dreams do – I was on vacation with my extended family. This time, we were in Australia, visiting family friends in a big house. Things took a turn when — in some way that I can’t quite explain — I got mixed up in this Australian family’s jewelry theft and smuggling operation. And I lied about it in front of my relatives, to protect myself and my co-conspirators. Before I woke up, I was terrified I’d be sent to prison. The dream seems bizarre, but when I pick the narrative apart, there are clear connections to my waking life. For instance, I recently listened to a podcast where a pair of fancy hairpins suspiciously go missing during a family gathering. Moreover, I’m moving tomorrow and still have packing to do. When the movers arrive in the morning, if I haven't finished packing, I'll face the consequences of my lack of preparedness – a crime, at least to my subconscious. Dr. Rahul Jandial, neurosurgeon, neuroscientist and author of This is Why You Dream: What Your Sleeping Brain Reveals About Your Waking Life, says the major themes and images of vivid dreams like these are worth paying attention to, and trying to derive meaning from. (For me, I decided that the next time I have to move, I’m taking the day before off!) I spoke with Dr. Jandial about what else we can learn from our dreams, including some of modern science’s most remarkable findings, and theories, about the dreaming brain. 1. Dreams are not random From dream diaries recorded in ancient Egypt and China to reports from anthropologists in the Amazon, to surveys of modern Americans, evidence shows our dreams have a lot in common. For example, being chased and falling are pretty consistent. “Reports of nightmares and erotic dreams are nearly universal,” Jandial says, while people rarely report dreaming about math. Jandial says the lack of math makes sense because the part of your brain primarily responsible for logic — the prefrontal cortex — is typically not involved in dreaming. © 2024 npr

Keyword: Sleep
Link ID: 29340 - Posted: 06.04.2024

By Sumeet Kulkarni As spring turns to summer in the United States, warming conditions have started to summon enormous numbers of red-eyed periodical cicadas out of their holes in the soil across the east of the country. This year sees an exceptionally rare joint emergence of two cicada broods: one that surfaces every 13 years and another with a 17-year cycle. They last emerged together in 1803, when Thomas Jefferson was US president. This year, billions or even trillions of cicadas from these two broods — each including multiple species of the genus Magicicada — are expected to swarm forests, fields and urban neighbourhoods. To answer readers’ cicada questions, Nature sought help from three researchers. Katie Dana is an entomologist affiliated with the Illinois Natural History Survey at the University of Illinois at Urbana-Champaign. John Lill is an insect ecologist at George Washington University in Washington DC. Fatima Husain is a cognitive neuroscientist at the University of Illinois at Urbana-Champaign. Their answers have been edited for length and clarity. Why do periodical cicadas have red eyes? JL: We’re not really sure. We do know that cicadas’ eyes turn red in the winter before the insects come out. The whole coloration pattern in periodical cicadas is very bright: red eyes, black and orange wings. They’re quite different from the annual cicadas, which are green and black, and more camouflaged. It’s a bit of an enigma why the periodical ones are so brightly coloured, given that it just makes them more obvious to predators. There are no associated defences with being brightly coloured — it kind of flies in the face of what we know about bright coloration in a lot of other animals, where usually it’s some kind of signal for toxicity. There also exist mutants with brown, orange, golden or even blue eyes. People hunt for blue-eyed ones; it’s like trying to find a four-leaf clover. © 2024 Springer Nature Limited

Keyword: Animal Communication; Sexual Behavior
Link ID: 29339 - Posted: 06.04.2024

By Rebecca Horne The drawings and photographs of Santiago Ramón y Cajal are familiar to any neuroscientist—and probably anyone even remotely interested in the field. Most people who take a cursory look at his iconic images might assume that he created them using only direct observation. But that’s not the case, according to a paper published in March 2024 by Dawn Hunter, visual artist and associate professor of art at the University of South Carolina, and her colleagues. For instance, the Golgi-stained tissue Ramón y Cajal drew contained neurons that were cut in half—so he painstakingly reconstructed the cells by drawing from elements in multiple slides. And he also fleshed out his illustrations using educated guesses and classical drawing principles, such as contrast and occlusion. In this way, Ramón y Cajal’s art training was essential to his research, Hunter says. She came across Ramón y Cajal’s drawings while creating illustrations for a neuroscience textbook. “The first time I saw his work, out of pure inspiration, I decided to draw it,” she says. “It was in those moments of drawing that I realized his process was more profound and conceptually layered than merely retracing pencil lines with ink. Examining Ramón y Cajal’s work through the act of drawing is a more active experience than viewing his work as a gallery visitor or in a textbook.” In 2015, Hunter installed her drawings and paintings alongside original Ramón y Cajal works in an ongoing exhibition at the U.S. National Institutes of Health (NIH). That effort led to a Fulbright fellowship to Spain in 2017, providing her access to the Legado Cajal archives at the Instituto Cajal National Archives, which contain thousands of Ramón y Cajal artifacts. Hunter spoke to The Transmitter about her research in Spain and her realizations about how Ramón y Cajal worked as an artist and as a scientist. The Transmitter: What do you think your work contributes that is new? Dawn Hunter: It spells out the connection to [Ramón y Cajal’s] art training. There are some things that to me as a painter are obvious to zero in on that nobody’s really talked about. For example, Ramón y Cajal’s copying of the Renaissance painter Rafael’s entire portfolio. That in itself is a profound thing. © 2024 Simons Foundation

Keyword: Brain imaging
Link ID: 29338 - Posted: 06.04.2024

By Emily Underwood You’re driving somewhere, eyes on the road, when you start to feel a tingling sensation in your lower abdomen. That extra-large Coke you drank an hour ago has made its way through your kidneys into your bladder. “Time to pull over,” you think, scanning for an exit ramp. To most people, pulling into a highway rest stop is a profoundly mundane experience. But not to neuroscientist Rita Valentino, who has studied how the brain senses, interprets and acts on the bladder’s signals. She’s fascinated by the brain’s ability to take in sensations from the bladder, combine them with signals from outside of the body, like the sights and sounds of the road, then use that information to act — in this scenario, to find a safe, socially appropriate place to pee. “To me, it’s really an example of one of the beautiful things that the brain does,” she says. Scientists used to think that our bladders were ruled by a relatively straightforward reflex — an “on-off” switch between storing urine and letting it go. “Now we realize it’s much more complex than that,” says Valentino, now director of the division of neuroscience and behavior at the National Institute on Drug Abuse. An intricate network of brain regions that contribute to functions like decision-making, social interactions and awareness of our body’s internal state, also called interoception, participates in making the call. In addition to being mind-bogglingly complex, the system is also delicate. Scientists estimate, for example, that more than 1 in 10 adults have overactive bladder syndrome — a common constellation of symptoms that includes urinary urgency (the sensation of needing to pee even when the bladder isn’t full), nocturia (the need for frequent nightly bathroom visits) and incontinence. Although existing treatments can improve symptoms for some, they don’t work for many people, says Martin Michel, a pharmacologist at Johannes Gutenberg University in Mainz, Germany, who researches therapies for bladder disorders. Developing better drugs has proven so challenging that all major pharmaceutical companies have abandoned the effort, he adds.

Keyword: Miscellaneous
Link ID: 29337 - Posted: 06.02.2024

By Joanne Silberner Think for a minute about the little bumps on your tongue. You probably saw a diagram of those taste bud arrangements once in a biology textbook — sweet sensors at the tip, salty on either side, sour behind them, bitter in the back. But the idea that specific tastes are confined to certain areas of the tongue is a myth that “persists in the collective consciousness despite decades of research debunking it,” according to a review published this month in The New England Journal of Medicine. Also wrong: the notion that taste is limited to the mouth. The old diagram, which has been used in many textbooks over the years, originated in a study published by David Hanig, a German scientist, in 1901. But the scientist was not suggesting that various tastes are segregated on the tongue. He was actually measuring the sensitivity of different areas, said Paul Breslin, a researcher at Monell Chemical Senses Center in Philadelphia. “What he found was that you could detect things at a lower concentration in one part relative to another,” Dr. Breslin said. The tip of the tongue, for example, is dense with sweet sensors but contains the others as well. The map’s mistakes are easy to confirm. If you place a lemon wedge at the tip of your tongue, it will taste sour, and if you put a bit of honey toward the side, it will be sweet. The perception of taste is a remarkably complex process, starting from that first encounter with the tongue. Taste cells have a variety of sensors that signal the brain when they encounter nutrients or toxins. For some tastes, tiny pores in cell membranes let taste chemicals in. Such taste receptors aren’t limited to the tongue; they are also found in the gastrointestinal tract, liver, pancreas, fat cells, brain, muscle cells, thyroid and lungs. We don’t generally think of these organs as tasting anything, but they use the receptors to pick up the presence of various molecules and metabolize them, said Diego Bohórquez, a self-described gut-brain neuroscientist at Duke University. For example, when the gut notices sugar in food, it tells the brain to alert other organs to get ready for digestion. © 2024 The New York Times Company

Keyword: Chemical Senses (Smell & Taste)
Link ID: 29336 - Posted: 06.02.2024

By Elissa Welle The traditional story of Alzheimer’s disease casts two key proteins in starring roles—each with clear stage directions: Plaques of sticky amyloid beta protein accumulate outside neurons as the condition unfolds, and tangles of tau protein gum up the insides of the cells. But it may be time for a rewrite. Amyloid beta, too, coalesces inside neurons and seems to mark them for early death, according to research posted on a preprint server last November. In brain slices from people with Alzheimer’s, but not in those from age-matched controls, cells containing intracellular amyloid beta decreased in number as the disease progressed. At first, the result appeared to be a mistake, says study investigator Alessia Caramello, a postdoctoral researcher at the UK Dementia Research Institute. Intracellular amyloid beta is “nowhere to be found” in most discussions of Alzheimer’s disease, she says. “It’s never mentioned. Never ever.” Instead, the field has long focused on the buildup of amyloid beta outside the cell. But even before those plaques form, there seems to be another pathological event, she says—namely, intracellular amyloid. “Why not look at it?” The work from Caramello and her colleagues is not the first to suggest that amyloid beta, or Abeta for short, wreaks havoc inside neurons, not just in the extracellular space between them. This “inside-out” hypothesis, as it has been called, has implications for how scientists understand Alzheimer’s disease. In particular, it could help to account for some big mysteries around the condition—such as why the extent of amyloid beta plaques in the brain doesn’t always correlate with symptoms, why neurons die and why treatments to lessen plaques marginally slow down, but do not halt, the disease. “It just puts a totally different spin on how you need to address this,” says Gunnar Gouras, professor of experimental neurology at Lund University and a proponent of the inside-out hypothesis. “It’s really a cell biological, neurobiological issue that is a bit more complex. And we need to also study this instead of just saying, ‘Abeta is bad; we’ve got to get rid of it.’” © 2024 Simons Foundation

Keyword: Alzheimers
Link ID: 29335 - Posted: 06.02.2024

By Ben Casselman Long before people develop dementia, they often begin falling behind on mortgage payments, credit card bills and other financial obligations, new research shows. A team of economists and medical experts at the Federal Reserve Bank of New York and Georgetown University combined Medicare records with data from Equifax, the credit bureau, to study how people’s borrowing behavior changed in the years before and after a diagnosis of Alzheimer’s or a similar disorder. What they found was striking: Credit scores among people who later develop dementia begin falling sharply long before their disease is formally identified. A year before diagnosis, these people were 17.2 percent more likely to be delinquent on their mortgage payments than before the onset of the disease, and 34.3 percent more likely to be delinquent on their credit card bills. The issues start even earlier: The study finds evidence of people falling behind on their debts five years before diagnosis. “The results are striking in both their clarity and their consistency,” said Carole Roan Gresenz, a Georgetown University economist who was one of the study’s authors. Credit scores and delinquencies, she said, “consistently worsen over time as diagnosis approaches, and so it literally mirrors the changes in cognitive decline that we’re observing.” The research adds to a growing body of work documenting what many Alzheimer’s patients and their families already know: Decision-making, including on financial matters, can begin to deteriorate long before a diagnosis is made or even suspected. People who are starting to experience cognitive decline may miss payments, make impulsive purchases or put money into risky investments they would not have considered before the disease. “There’s not just getting forgetful, but our risk tolerance changes,” said Lauren Hersch Nicholas, a professor at the University of Colorado School of Medicine who has studied dementia’s impact on people’s finances. “It might seem suddenly like a good move to move a diversified financial portfolio into some stock that someone recommended.” © 2024 The New York Times Company

Keyword: Alzheimers
Link ID: 29334 - Posted: 06.02.2024

Sacha Pfeiffer A few weeks ago, at about 6:45 in the morning, I was at home, waiting to talk live on the air with Morning Edition host Michel Martin about a story I'd done, when I suddenly heard a loud metallic hammering. It sounded like a machine was vibrating my house. It happened again about 15 seconds later. And again after that. This rhythmic clatter seemed to be coming from my basement utility closet. Was my furnace breaking? Or my water heater? I worried that it might happen while I was on the air. Luckily, the noise stopped while I spoke with Michel, but restarted later. This time I heard another sound, a warbling or trilling, possibly inside my chimney. Was there an animal in there? I ran outside, looked up at my roof — and saw a woodpecker drilling away at my metal chimney cap. I've seen and heard plenty of woodpeckers hammer on trees. But never on metal. So to find out why the bird was doing this, I called an expert: Kevin McGowan, an ornithologist at the Cornell Lab of Ornithology who recently created a course called "The Wonderful World of Woodpeckers." McGowan said woodpeckers batter wood to find food, make a home, mark territory and attract a mate. But when they bash away at metal, "what the birds are trying to do is make as big a noise as possible," he said, "and a number of these guys have found that — you know what? If you hammer on metal, it's really loud!" Woodpeckers primarily do this during the springtime breeding season, and their metallic racket has two purposes, "basically summarized as: All other guys stay away, all the girls come to me," McGowan said. "And the bigger the noise, the better." © 2024 npr

Keyword: Sexual Behavior; Animal Communication
Link ID: 29333 - Posted: 06.02.2024

By Andrew Jacobs and Christina Jewett The Food and Drug Administration on Friday raised concerns about the health effects of MDMA as a treatment for post-traumatic stress disorder, citing flaws in a company’s studies that could pose major obstacles to approval of a treatment anticipated to help people struggling with the condition. The agency said that bias had seeped into the studies because participants and therapists were readily able to figure out who got MDMA versus a placebo. It also flagged “significant increases” in blood pressure and pulse rates that could “trigger cardiovascular events.” The staff analysis was conducted for an independent advisory panel that will meet Tuesday to consider an application by Lykos Therapeutics for the use of MDMA-assisted therapy. The agency’s concerns highlight the unique and complex issues facing regulators as they weigh the therapeutic value of an illegal drug commonly known as Ecstasy that has long been associated with all-night raves and cuddle puddles. Approval would mark a seismic change in the nation’s tortuous relationship with psychedelic compounds, most of which the Drug Enforcement Administration classifies as illegal substances that have “no currently accepted medical use and a high potential for abuse.” Research like the current studies on MDMA therapy have corralled the support of various groups and lawmakers from both parties for treatment of PTSD, a condition affecting millions of Americans, especially military veterans who face an outsize risk of suicide. No new therapy has been approved for PTSD in more than 20 years. “What’s happening is truly a paradigm shift for psychiatry,” said David Olson, director of the U.C. Davis Institute for Psychedelics and Neurotherapeutics. “MDMA is an important step for the field because we really lack effective treatments, period, and people need help now.” © 2024 The New York Times Company

Keyword: Drug Abuse; Depression
Link ID: 29332 - Posted: 06.02.2024

By Liqun Luo The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory [1]. Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity. So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today [2]. Let’s look at some of these comparisons in numbers (Table 1). © 2024 NautilusNext Inc.,

Keyword: Stroke
Link ID: 29331 - Posted: 05.29.2024
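The excerpt above quotes the brain's headline numbers but not the comparison table it points to (Table 1 is not reproduced here). As a rough back-of-the-envelope illustration only, the short Python sketch below combines the neuron and connection counts from the excerpt with ballpark figures for a desktop machine; the firing rate, power budgets, transistor count and clock speed are approximate assumptions made for this example, not values taken from the article or from von Neumann's book.

```python
# Back-of-the-envelope comparison of the brain and a desktop computer, using the
# neuron and connection counts quoted in the excerpt plus rough, commonly cited
# figures for the machine. The firing rate, power budgets and computer-side
# numbers are approximate assumptions for illustration only.

neurons     = 100e9    # ~100 billion neurons (from the excerpt)
connections = 100e12   # ~100 trillion connections (from the excerpt)
firing_hz   = 100      # rough upper bound on sustained neuron firing rate (assumed)
brain_watts = 20       # commonly cited estimate of the brain's power budget (assumed)

transistors = 10e9     # order of 10 billion transistors in a modern CPU (assumed)
clock_hz    = 3e9      # ~3 GHz clock (assumed)
cpu_watts   = 100      # typical desktop power draw under load (assumed)

print(f"average connections per neuron : {connections / neurons:,.0f}")
print(f"elementary units               : {neurons / transistors:.0f}x more neurons than transistors")
print(f"basic-operation speed          : {clock_hz / firing_hz:,.0f}x faster clock than firing rate")
print(f"power budget                   : ~{brain_watts} W (brain) vs ~{cpu_watts} W (desktop)")
```

Even at this crude level, the trade-off the article goes on to explore shows up: the machine is millions of times faster per elementary operation, while the brain has vastly more units and connections and runs on a fraction of the power.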

Rodrigo Duarte Around 8% of human DNA is made up of genetic sequences acquired from ancient viruses. These sequences, known as human endogenous retroviruses (or Hervs), date back hundreds of thousands to millions of years – with some even predating the emergence of Homo sapiens. Our latest research suggests that some ancient viral DNA sequences in the human genome play a role in susceptibility to psychiatric disorders such as schizophrenia, bipolar disorder and major depressive disorder. Hervs represent the remnants of these infections with ancient retroviruses. Retroviruses are viruses that insert a copy of their genetic material into the DNA of the cells they infect. Retroviruses probably infected us on multiple occasions during our evolutionary past. When these infections occurred in sperm or egg cells that generated offspring, the genetic material from these retroviruses was passed on to subsequent generations, becoming a permanent part of our lineage. Initially, scientists considered Hervs to be “junk DNA” – parts of our genome with no discernible function. But as our understanding of the human genome has advanced, it’s become evident that this so-called junk DNA is responsible for more functions than originally hypothesised. First, researchers found that Hervs can regulate the expression of other human genes. A genetic feature is said to be “expressed” if its DNA segment is used to produce RNA (ribonucleic acid) molecules. These RNA molecules can then serve as intermediaries leading to the production of specific proteins, or help to regulate other parts of the genome. Initial research suggested that Hervs regulate the expression of neighbouring genes with important biological functions. One example of this is a Herv that regulates the expression of a gene involved in modifying connections between brain cells. © 2010–2024, The Conversation US, Inc.

Keyword: Depression; Schizophrenia
Link ID: 29330 - Posted: 05.29.2024

By Elie Dolgin The COVID-19 pandemic didn’t just reshape how children learn and see the world. It transformed the shape of their eyeballs. As real-life classrooms and playgrounds gave way to virtual meetings and digital devices, the time that children spent focusing on screens and other nearby objects surged — and the time they spent outdoors dropped precipitously. This shift led to a notable change in children’s anatomy: their eyeballs lengthened to better accommodate short-vision tasks. Study after study, in regions ranging from Europe to Asia, documented this change. One analysis from Hong Kong even reported a near doubling in the incidence of pathologically stretched eyeballs among six-year-olds compared with pre-pandemic levels [1]. This elongation improves the clarity of close-up images on the retina, the light-sensitive layer at the back of the eye. But it also makes far-away objects appear blurry, leading to a condition known as myopia, or short-sightedness. And although corrective eyewear can usually address the issue — allowing children to, for example, see a blackboard or read from a distance — severe myopia can lead to more-serious complications, such as retinal detachment, macular degeneration, glaucoma and even permanent blindness. Rates of myopia were booming well before the COVID-19 pandemic. Widely cited projections in the mid-2010s suggested that myopia would affect half of the world’s population by mid-century (see ‘Rising prevalence’), which would effectively double the incidence rate in less than four decades [2] (see ‘Affecting every age’). Now, those alarming predictions seem much too modest, says Neelam Pawar, a paediatric ophthalmologist at the Aravind Eye Hospital in Tirunelveli, India. “I don’t think it will double,” she says. “It will triple.” © 2024 Springer Nature Limited

Keyword: Vision; Development of the Brain
Link ID: 29329 - Posted: 05.29.2024

By Matthew Hutson ChatGPT and other AI tools are upending our digital lives, but our AI interactions are about to get physical. Humanoid robots trained with a particular type of AI to sense and react to their world could lend a hand in factories, space stations, nursing homes and beyond. Two recent papers in Science Robotics highlight how that type of AI — called reinforcement learning — could make such robots a reality. “We’ve seen really wonderful progress in AI in the digital world with tools like GPT,” says Ilija Radosavovic, a computer scientist at the University of California, Berkeley. “But I think that AI in the physical world has the potential to be even more transformational.” The state-of-the-art software that controls the movements of bipedal bots often uses what’s called model-based predictive control. It’s led to very sophisticated systems, such as the parkour-performing Atlas robot from Boston Dynamics. But these robot brains require a fair amount of human expertise to program, and they don’t adapt well to unfamiliar situations. Reinforcement learning, or RL, in which AI learns through trial and error to perform sequences of actions, may prove a better approach. “We wanted to see how far we can push reinforcement learning in real robots,” says Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers. Haarnoja and colleagues chose to develop software for a 20-inch-tall toy robot called OP3, made by the company Robotis. The team not only wanted to teach OP3 to walk but also to play one-on-one soccer. “Soccer is a nice environment to study general reinforcement learning,” says Guy Lever of Google DeepMind, a coauthor of the paper. It requires planning, agility, exploration, cooperation and competition. © Society for Science & the Public 2000–2024.

Keyword: Robotics
Link ID: 29328 - Posted: 05.29.2024
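For readers unfamiliar with the technique named in the excerpt above, here is a minimal sketch of reinforcement learning in its simplest tabular Q-learning form, applied to a toy corridor task. It only illustrates the trial-and-error loop the article describes, not the deep-RL training pipelines used in the Science Robotics papers; the corridor environment, reward values and hyperparameters are invented for the example.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning) on a toy task.
# Illustrative only: the environment, rewards and hyperparameters are invented,
# and this is not the method used in the Science Robotics papers.

import random

N_STATES = 6           # positions 0..5 along a short corridor; state 5 is the goal
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q[state][action_index]: current estimate of long-term reward for that choice
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: move, stay inside the corridor, reward 1 only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    for _ in range(100):                 # cap episode length
        # Epsilon-greedy: usually exploit current estimates, sometimes explore;
        # break ties randomly so early episodes still wander toward the goal.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a_idx = random.randrange(2)
        else:
            a_idx = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, ACTIONS[a_idx])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][a_idx] += ALPHA * (target - Q[state][a_idx])
        state = nxt
        if done:
            break

# After training, the greedy policy should be "step right" from every non-goal state.
print(["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)])
```

Running the script prints the learned policy ("right" from every non-goal state); the same learn-from-reward loop, scaled up to neural-network policies and far richer environments, underlies the walking and soccer-playing results described above.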

By Elissa Welle A new study suggests that the brain clears less waste during sleep and under anesthesia than while in other states—directly contradicting prior results that suggest sleep initiates that process. The findings are stirring fresh debate on social media and elsewhere over the glymphatic system hypothesis, which contends that convective flow of cerebrospinal fluid clears the sleeping brain of toxins. The new work, published 13 May in Nature Neuroscience, proposes that fluid diffusion is responsible for moving waste throughout the brain. It uses a different method than the earlier studies—injecting tracers into mouse brain tissue instead of cerebrospinal fluid—which is likely a more reliable way to understand how the fluid moves through densely packed neurons, says Jason Rihel, professor of behavioral genetics at University College London, who was not involved in any of the studies on brain clearance. The findings have prompted some sleep researchers, including Rihel, to question the existence of a glymphatic system and whether brain clearance is tied to sleep-wake states, he says. But leading proponents of the sleep-induced clearance theory are pushing back against the study’s techniques. The new study is “misleading” and “extremely poorly done,” says Maiken Nedergaard, professor of neurology at the University of Rochester Medical Center, whose 2013 study on brain clearance led to the hypothesis of a glymphatic system. She says she plans to challenge the work in a proposed Matters Arising commentary for Nature Neuroscience. Inserting needles into the brain damages the tissue, and injecting fluid, as the team behind the new work did, increases intracranial pressure, says Jonathan Kipnis, professor of pathology and immunology at Washington University School of Medicine in St. Louis. Kipnis and his colleagues published a study in February in support of the glymphatic system hypothesis that suggests neural activity facilitates brain clearance. “You disturb the system when you inject into the brain,” Kipnis says, “and that’s why we were always injecting in the CSF.” © 2024 Simons Foundation

Keyword: Sleep
Link ID: 29327 - Posted: 05.25.2024

By Mariana Lenharo Crows know their numbers. An experiment has revealed that these birds can count their own calls, showcasing a numerical skill previously only seen in people. Investigating how animals understand numbers can help scientists to explore the biological origins of humanity’s numerical abilities, says Giorgio Vallortigara, a neuroscientist at the University of Trento in Rovereto, Italy. Being able to produce a deliberate number of vocalizations on cue, as the birds in the experiment did, “is actually a very impressive achievement”, he notes. Andreas Nieder, an animal physiologist at the University of Tübingen in Germany and a co-author of the study published 23 May in Science [1], says it was amazing to see how cognitively flexible these corvids are. “They have a reputation of being very smart and intelligent, and they proved this once again.” The researchers worked with three carrion crows (Corvus corone) that had already been trained to caw on command. Over the next several months, the birds were taught to associate visual cues — a screen showing the digits 1, 2, 3 or 4 — with the number of calls they were supposed to produce. They were later also introduced to four auditory cues that were each associated with a distinct number. During the experiment, the birds stood in front of the screen and were presented with a visual or auditory cue. They were expected to produce the number of vocalizations associated with the cue and to peck at an ‘enter key’ on the touchscreen monitor when they were done. If they got it right, an automated feeder delivered bird-seed pellets and mealworms as a reward. They were correct most of the time. “Their performance was way beyond chance and highly significant,” says Nieder. © 2024 Springer Nature Limited

Keyword: Attention; Evolution
Link ID: 29326 - Posted: 05.25.2024

By Steven Strogatz For decades, the best drug therapies for treating depression, like SSRIs, have been based on the idea that depressed brains don’t have enough of the neurotransmitter serotonin. Yet for almost as long, it’s been clear that this simplistic theory is wrong. Recent research into the true causes of depression is finding clues in other neurotransmitters and the realization that the brain is much more adaptable than scientists once imagined. Treatments for depression are being reinvented by drugs like ketamine that can help regrow synapses, which can in turn restore the right brain chemistry and improve whole body health. In this episode, John Krystal, a neuropharmacologist at the Yale School of Medicine, shares the new findings in mental health research that are revolutionizing psychiatric medication. STEVEN STROGATZ: According to the World Health Organization, 280 million people worldwide suffer from depression. For decades, people with chronic depression have been told their problem lies with a chemical imbalance in the brain, specifically a deficit in a neurotransmitter called serotonin. And based on this theory, many have been prescribed antidepressants known as selective serotonin reuptake inhibitors, or SSRIs, to correct this chemical imbalance. This theory has become the common narrative, yet almost from the beginning, researchers have questioned the role of serotonin in depression, even though SSRIs do seem to bring a lot of relief to many people. So, if bad brain chemistry isn’t at the root of chronic depression, what is? If the thinking behind SSRIs is wrong, why do they seem to help? And is it possible that as we get closer to the true cause of depression, we may find better treatments for other conditions as well? © 2024 the Simons Foundation.

Keyword: Depression
Link ID: 29325 - Posted: 05.25.2024