Most Recent Links
By Carl Zimmer

For decades, neuroengineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could instead record the brain’s electric activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published on Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words.

Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. “It’s a fantastic advance,” Dr. Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends.

In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech.

A computer recorded the electrical activity from the implants as Mr. Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer accurately predicted almost 6,000 words, with an accuracy of 97.5 percent. It could then synthesize those words using Mr. Harrell’s voice, based on recordings made before he developed A.L.S.

© 2025 The New York Times Company
Keyword: Language; Robotics
Link ID: 29892 - Posted: 08.16.2025
Heidi Ledford

Scientists are closing in on the ability to apply genome editing to a formidable new target: the human brain. In the past two years, a spate of technological advances and promising results in mice have been laying the groundwork for treating devastating brain disorders using techniques derived from CRISPR–Cas9 gene editing. Researchers hope that human trials are just a few years away.

“The data have never looked so good,” says Monica Coenraads, founder and chief executive of the Rett Syndrome Research Trust in Trumbull, Connecticut. “This is less and less science fiction, and closer to reality.”

Daunting challenge

Researchers have already developed gene-editing therapies to treat diseases of the blood, liver and eyes. In May, researchers reported [1] a stunning success using a bespoke gene-editing therapy to treat a baby boy named KJ with a deadly liver disease. But the brain poses special challenges. The molecular components needed to treat KJ were inserted into fatty particles that naturally accumulate in the liver. Researchers are searching for similar particles that can selectively target the brain, which is surrounded by a defensive barrier that can prevent many substances from entering.

Although KJ’s story was exciting, it was also frustrating for those whose family members have neurological diseases, says Coenraads, whose organization focuses on Rett syndrome, a rare disorder that affects brain development. “The question that I hear from our families is, ‘It was done so quickly for him. What’s taking us so long?’” she says.

That pool of concerned families is growing as physicians and families increasingly turn to genome sequencing to find the causes of once-mysterious brain disorders, says Cathleen Lutz, a geneticist at The Jackson Laboratory in Bar Harbor, Maine. “People are starting to now find out that their child’s seizures, for example, are related to particular genetic mutations,” she says.

© 2025 Springer Nature Limited
Keyword: Genes & Behavior
Link ID: 29891 - Posted: 08.16.2025
By Sofia Caetano Avritzer

Vomiting up a droplet of sugar might not seem like the most romantic gesture from a potential suitor. But for one fly species, males that spill their guts are quite a catch. Drosophila subobscura flies’ peculiar “romantic” barfing might have evolved by repurposing brain cells that usually control digestion for more romantic pursuits, researchers report August 14 in Science.

Most male fruit flies court by following the females around and vibrating their wings to serenade them with a species-specific love song, says Adriane Otopalik. But some fly species, like D. subobscura, spice things up a little. The males will vomit a bit of their last meal and offer it to females they are interested in, says Otopalik, a neuroscientist at Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Va.

Nuptial gifts like these are common in some animals, like male spiders attempting to win over their mates without getting their heads bitten off. Scientists think female flies, which can be “very choosy,” might use this romantic barf to pick suitable suitors, says Otopalik, who was not involved in the study.

The thousands of neurons that control most of male fruit flies’ courtship produce a male-specific version of a protein called fruitless. Artificially activating these neurons can make D. subobscura males go through the motions of their seduction dance — even when there aren’t any females around, says Daisuke Yamamoto, an evolutionary biologist at the National Institute of Information and Communications Technology in Kobe, Japan. Yamamoto and his collaborators wondered if somewhere in these courtship brain cells was the key to understanding how nuptial gift giving evolved.

© Society for Science & the Public 2000–2025
Keyword: Sexual Behavior; Genes & Behavior
Link ID: 29890 - Posted: 08.16.2025
By Pam Belluck

Sometimes the pain felt like lightning bolts. Or snakes biting. Or needles.

“Just imagine the worst burn you’ve ever had, all over your body, never going away,” said Ed Mowery, 55, describing his life with chronic pain. “I would wake up in the middle of the night, screaming at the top of my lungs.”

Beginning with a severe knee injury he got playing soccer at 15, he underwent about 30 major surgeries for various injuries over the decades, including procedures on his knees, spine and ankles. Doctors put in a spinal cord stimulator, which delivers electrical pulses to relieve pain, and prescribed morphine, oxycodone and other medications, 17 a day at one point. Nothing helped. Unable to walk or sit for more than 10 minutes, Mr. Mowery, of Rio Rancho, N.M., had to stop working at his job selling electronics to engineering companies and stop playing guitar with his death metal band.

Out of options four years ago, Mr. Mowery signed up for a cutting-edge experiment: a clinical trial involving personalized deep brain stimulation to try to ease chronic pain. The study, published on Wednesday, outlines a new approach for the most devastating cases of chronic pain, and could also provide insights to help drive invention of less invasive therapies, pain experts said.

“It’s highly innovative work, using the experience and technology they have developed and applying it to an underserved area of medicine,” said Dr. Andre Machado, chief of the Neurological Institute at Cleveland Clinic, who was not involved in the study.

Chronic pain, defined as lasting at least three months, afflicts about 20 percent of adults in the United States, an estimated 50 million people, according to the Centers for Disease Control and Prevention. In about a third of cases, the pain substantially limits daily activities, the C.D.C. reported.

© 2025 The New York Times Company
Keyword: Pain & Touch
Link ID: 29889 - Posted: 08.16.2025
Hannah Devlin, Science correspondent

Attention deficit hyperactivity disorder medication is linked to significantly lower risk of suicidal behaviours, substance misuse, transport accidents and criminality, according to a study of the wider outcomes of treatment.

The research, based on the medical records of nearly 150,000 people in Sweden, suggested that the drugs could have meaningful benefits beyond helping with the core symptoms of ADHD. Although the study was not a randomised trial – and so cannot definitively prove that medication caused improved outcomes – it adds to evidence of the substantial value of treatment.

“We found that ADHD medication was associated with significantly reduced rates of first occurrences of suicidal behaviours, substance misuse, transport accidents and criminality,” said Prof Samuele Cortese, a child and adolescent psychiatrist and researcher at the University of Southampton. “Our results should inform the debate on the effects and safety of ADHD medications.”

After accounting for factors including age, sex, education level, psychiatric diagnoses and medical history, ADHD medication was associated with reduced rates of a first occurrence of four of the five outcomes investigated: a 17% reduction for suicidal behaviour, 15% for substance misuse, 12% for transport accidents and 13% for criminality.

It is well established that ADHD, thought to affect about 5% of children and 2.5% of adults worldwide, is linked to higher rates of mental health problems including suicide, substance misuse and accidental injuries. People with ADHD are also disproportionately represented within the criminal justice system.

© 2025 Guardian News & Media Limited
Keyword: ADHD; Depression
Link ID: 29888 - Posted: 08.16.2025
By Dan Samorodnitsky

Water is the most fundamental need for all life on Earth. Not every organism needs oxygen, and many make their own food. But for all creatures, from deep-sea microbes and slime molds to trees and humans, water is nonnegotiable. “The first act of life was the capture of water within a cell membrane,” a pair of neurobiologists wrote in a recent review. Ever since, cells have had to stay wet enough to stay alive.

Water is the medium in which all chemical reactions in an organism take place, and those reactions are finely tuned to a narrow range of ratios between water and salt, another essential ingredient in life’s chemistry. The cells in your body are permeable to water, so if the water-salt balance of the surrounding fluid — blood, lymph or cerebrospinal fluid, for example — is outside its healthy range, cells can swell or shrink, shrivel or potentially burst. An imbalance can cause brain cells to malfunction, losing their ability to manage ion concentrations across their membranes and propagate action potentials.

Although these effects of insufficient water are felt by every cell in the body, cells themselves do not cry out in thirst. Instead, it’s the brain that monitors the body’s water levels and manifests the experience of thirst — a dry tongue, hot throat and rapid onset of malaise — which compels a behavior: acquire water.

“These neural circuits that control hunger and thirst are located deep in primitive brain structures like the hypothalamus and brainstem,” said Zachary Knight, a neuroscientist at the University of California, San Francisco, who recently co-authored a review paper in Neuron on the neurobiology of thirst. Because these brain areas are difficult to study — due not only to their location, but also to their composition, with many different cell types and crisscrossed circuitry — it’s only in the last decade or so that neuroscientists have begun to understand how thirst fundamentally works.

The body, researchers have found, is filled with sensors that feed clues to the brain about how much water or salt an organism needs to consume. How those sensors work, or what they even are, continues to elude scientists. Their existence offers a tantalizing insight: Water may be fundamental to life, but thirst is an educated guess.

© 2025 Simons Foundation
Keyword: Obesity
Link ID: 29887 - Posted: 08.13.2025
By Phie Jacobs

When it comes to telling males and females apart, many bird species reject subtlety altogether. Roosters stand out thanks to their big, bright comb and ear-splitting “cock-a-doodle-doo.” Bachelor birds-of-paradise flaunt their vibrant plumage to attract more subdued females. And the male peacock’s feathered train is so ostentatious it famously threw even Charles Darwin for a loop.

But that’s not the case for all bird species. When males and females look pretty much the same, scientists must try harder—often using DNA testing—to separate the sexes. According to a new study of wild Australian birds, these methods may be leading to misidentification in cases where an individual’s gonads and outward appearance don’t align with the genetic sex determined by its chromosomes. As scientists report today in Biology Letters, this phenomenon—known as sex reversal—may be more common than anyone expected.

The discovery is likely to “raise some eyebrows” (or is it ruffle some feathers?), says Blanche Capel, a biologist at Duke University who wasn’t involved in the new work. Although sex determination is often viewed as a straightforward process, she explains, the reality is much more complicated.

In humans, individuals with XX chromosomes typically develop as female, whereas those with XY chromosomes are usually male. But Judith Mank, a zoologist at the University of British Columbia, notes it’s the genes carried on those chromosomes—not the chromosomes—that are the main players. The SRY gene on the Y chromosome, for example, kick-starts male development in mammals. Anyone missing this key gene will end up developing as female, even if they have XY chromosomes. “We think of sex chromosomes as being sex determining,” says Mank, who also wasn’t involved in the new research. “That’s not true.”

What’s more, it can matter how these genes are expressed on a cell-by-cell basis. In some species such as fruit flies, zebrafish, and chickens, individual cells have their own sexual identity based on the genes they happen to contain or express, rather than being influenced by the body’s overall hormone levels. When different cells contain different sets of chromosomes, this process can give rise to individuals called gynandromorphs, which exhibit both male and female characteristics.

© 2025 American Association for the Advancement of Science.
Keyword: Sexual Behavior; Evolution
Link ID: 29886 - Posted: 08.13.2025
By Tim Vernimmen

Mexican tetras are a most peculiar fish species. They occur in many rivers and lakes across Mexico and southern Texas, where they look perfectly ordinary. But unlike most other fishes, tetras also live in caves. And there, in the absence of light, they look dramatically different: They’re very pale and, remarkably, they lack eyes.

Time and again, whenever a population was swept into a cave and survived long enough for natural selection to have its way, the eyes disappeared. “But it’s not that everything has been lost in cavefish,” says geneticist Jaya Krishnan of the Oklahoma Medical Research Foundation. “Many enhancements have also happened.”

Though the demise of their eyes continues to fascinate biologists, in recent years attention has shifted to other intriguing aspects of cavefish biology. It has become increasingly clear that they haven’t just lost sight, but also gained many adaptations that help them to thrive in their cave environment, including some that may hold clues to treatments for obesity and diabetes in people.

It has long been debated why the eyes were lost. Some biologists used to argue that they just withered away over generations because cave-dwelling animals with faulty eyes experienced no disadvantage. But another explanation is now considered more likely, says evolutionary physiologist Nicolas Rohner of the University of Münster in Germany: “Eyes are very expensive in terms of resources and energy. Most people now agree that there must be some advantage to losing them, if you don’t need them.”

Scientists have observed that mutations in different genes involved in eye formation have led to eye loss. In other words, says Krishnan, “different cavefish populations have lost their eyes in different ways.”
Keyword: Evolution; Vision
Link ID: 29885 - Posted: 08.13.2025
By Danielle Ivory, Julie Tate and Megan Twohey

Amy Enochs was texting with other parents, all wondering why their central Ohio elementary school had gone into lockdown, when the school called. Several fourth graders, including Ms. Enochs’s daughter, had eaten marijuana gummies and were being taken to the hospital with racing pulses, nausea and hallucinations. A classmate had found the gummies at home and mistaken them for Easter candy.

Ms. Enochs recalled hyperventilating that spring day three years ago. “I was scared to death,” she said, her voice breaking. “It was shock and panic.”

As legalization and commercialization of cannabis have spread across the United States, making marijuana edibles more readily available, the number of cannabis-related incidents reported to poison control centers has sharply increased: from about 930 cases in 2009 to more than 22,000 last year, data from America’s Poison Centers shows. Of those, more than 13,000 caused documented negative effects and were classified by the organization as nonlethal poisonings. These numbers are almost certainly an undercount, public health officials say, because hospitals are not required to report such cases.

More than 75 percent of the poisonings last year involved children or teenagers. In most instances of cannabis exposure, the physical effects were not severe, according to the poison control data. But a growing number of poisonings have led to breathing problems or other life-threatening consequences. In 2009, just 10 such cases were reported to poison centers; last year, there were more than 620 — a vast majority of them children or teens. More than 100 required ventilators.

© 2025 The New York Times Company
Keyword: Drug Abuse
Link ID: 29884 - Posted: 08.13.2025
By Holly Barker

Prairie voles do not need oxytocin receptors to bond with a mate or care for their pups, but the receptors are indispensable for forming robust friendships, according to a study published today in Current Biology. Female voles that lack the receptors struggle to make friends with other females, and when they do, they are not motivated to spend time with friends over strangers and quickly lose track of their friends in a group, the study found.

The findings suggest that oxytocin is required for nurturing specific relationships, rather than for general sociability, says principal investigator Annaliese Beery, associate professor of integrative biology and neuroscience at the University of California, Berkeley. That concept—known as selectivity—is a “really important component of human friendships,” she says.

Until now, prairie voles have typically been used to probe the neural basis of love: The animals are unusual among rodents for selecting a single partner to nest and raise pups with. Compared with non-monogamous vole species, prairie voles have a high density of oxytocin receptors in multiple brain regions, including the nucleus accumbens. Drugs that block the receptors there impair mate attachment in prairie voles, whereas brain infusions of oxytocin fast-track the animal’s choice of a lifelong partner.

Oxytocin appears to be especially important in the initial stages of bond formation, according to studies of transgenic prairie voles. Voles genetically engineered to carry loss-of-function mutations in both copies of the oxytocin receptor gene are less likely to bond with a littermate they have been housed with for less than a week, according to work reported in a 2024 preprint.

© 2025 Simons Foundation
Keyword: Hormones & Behavior; Evolution
Link ID: 29883 - Posted: 08.09.2025
By K. R. Callaway

Ever bite into something so bitter that you had to spit it out? An ages-old genetic mutation helps you and other animals perceive bitterness and thus avoid toxins associated with it. But while most creatures instinctively spit first and ask questions later, molecular biologists have been trying to get a taste of what bitterness can tell us about sensory evolution and human physiology.

A new study, published in the Journal of Agricultural and Food Chemistry, is the first analysis of how taste receptors respond to a mushroom’s bitter compounds—which include some of the most potently bitter flavors currently known to science.

The bitter bracket mushroom is nontoxic but considered inedible because of its taste. Researchers extracted its bitter compounds, finding two familiar ones—and three that were previously unknown. Instead of tasting these substances themselves, the scientists introduced them to an “artificial tongue” that they made by inserting human taste receptors into fast-growing embryonic kidney cells. One of the newfound bitter substances activated the taste receptors even at the lowest concentration measured, 63.3 micrograms per liter. That’s like sensing three quarters of a cup of sugar in an Olympic-sized swimming pool.

Humans have about 25 kinds of bitter taste receptors lining our mouths and throats, but these same receptors also grow throughout the body—in the lungs, digestive tract and even brain. Despite their ubiquity, they have been only partially explored. Four of our bitter receptors have no known natural activator. Finding activating compounds could illuminate the interactions that might have shaped those taste receptors’ evolution, says study lead author Maik Behrens, a molecular biologist at the Leibniz Institute for Food Systems Biology.

© 2025 Scientific American
Keyword: Chemical Senses (Smell & Taste)
Link ID: 29882 - Posted: 08.09.2025
By Annika Inampudi

Once placed in sodas as a mood enhancer and eventually made into a drug for bipolar disorder, lithium is probably best known for powering the batteries in our electronics. But a new study suggests yet another potential use for this versatile metal as a treatment for Alzheimer’s disease.

In a paper published today in Nature, researchers report reduced lithium levels in the brains of people with Alzheimer’s and mild cognitive impairment. They also found that a form of lithium improved memory when fed to mice with Alzheimer’s-like symptoms.

The paper is a “thorough and pioneering exploration” of lithium’s role in the brain during cognitive decline, says Ashley Bush, a psychiatrist at the Florey Institute of Neuroscience and Mental Health who was not involved in the study. The work offers a new route forward for a field still eager to find new treatments despite the recent approval of antiamyloid drugs, he says.

Found in extremely low concentrations in rocks and seawater, lithium enters the human body through foods such as cereals, cabbage, and tomatoes, or through drinking water that naturally flows through lithium-rich rocks. For reasons that remain unknown, lithium seems to stabilize mood, and the compound lithium carbonate has been used for decades to treat mania in bipolar disorder. Some previous studies have hinted that the metal might also have neuroprotective effects, leading researchers to propose it as a treatment for neurodegenerative conditions. But these findings come mostly from observational studies and haven’t proved lithium can change the course of Alzheimer’s. Small clinical trials in dementia patients have produced mixed results.

© 2025 American Association for the Advancement of Science.
Keyword: Alzheimers
Link ID: 29881 - Posted: 08.09.2025
By Shoshana Walter

In 2005, J. was a young pharmacist, in the middle of a divorce, when he decided he needed a change. He was outgoing, a former rugby player, and he had begun to feel out of place among his quiet co-workers. “Does a pharmacist ever come over to you and chitchat?” he says. “They’re very mousy and very introverted.”

For his new job, J. — who asked to be referred to by his first initial to protect his privacy — had in mind something a little more glamorous: pharmaceutical sales. He found a contract position at Reckitt Benckiser Pharmaceuticals, a U.S. subsidiary of a household-goods company based in Britain that was best known for Lysol and French’s mustard. The company had recently introduced Suboxone, a groundbreaking new medication in the United States that treats opioid addiction. Much like nicotine gum, Suboxone worked as a substitute, binding to the same receptors in the brain as illicit opioids, taking away withdrawal symptoms, quelling cravings and making it hard to continue misusing drugs.

At other companies — like Purdue Pharma, the maker of OxyContin — sales reps regularly trawled doctors’ offices and used company credit cards to treat physicians to expensive meals and lavish trips. At Reckitt, sales reps were told they had a different mandate. “You weren’t a credit card on legs,” says Chris Hassan, who oversaw Reckitt’s sales force at the time. Reps held the title “clinical liaisons,” and their job was not only to sell Suboxone but also to convince doctors that addiction was a disease, not a moral failing, and that it could be treated with medication instead of prison sentences.

Reckitt hired people of all backgrounds — counselors and behavioral-health clinicians as well as traditional salespeople, including ones they recruited from Purdue Pharma. Those who had sold OxyContin, Hassan notes, seemed especially motivated to sell the solution to the problem they had helped cause. “The people that had mirrors in their home and had to look at themselves, they didn’t like what they saw,” he says. “Purdue was a great source of hires for us.”

Almost right away, however, it became clear that most doctors were not lining up to care for addicted patients. Some were the same physicians who were driving the opioid crisis by overprescribing painkillers. Others felt ill equipped to treat substance users or dismissed such patients as untrustworthy. An addicted patient was “a liar or crook,” says George Agapios, an Indiana doctor who initially resisted offering treatment. He describes many physicians’ feelings in that era as: “The people associated with it were not exactly the cream of the crop — so let’s not waste our time.” Several doctors turned Reckitt reps away from their offices.

© 2025 The New York Times Company
Keyword: Drug Abuse
Link ID: 29880 - Posted: 08.09.2025
By Tina Hesman Saey

A snail may hold the key to restoring vision for people with some eye diseases.

Golden apple snails (Pomacea canaliculata) are freshwater snails from South America. Alice Accorsi became familiar with the species as a graduate student in Italy. “You could literally buy them in a pet store as snails that clean the bottom of the fish tanks,” she recalls. Turns out, the snails are among the most invasive species in the world. And that got Accorsi thinking: Why are they so resilient and able to thrive in new environments?

She began studying the snails’ immune systems and has now found they are not the only parts of the animals able to bounce back from adversity. These snails can completely regrow a functional eye within months of having one amputated, Accorsi and colleagues report August 6 in Nature Communications.

[Image: A snail’s eye was surgically removed, but it grew a new one. Two months after amputation the new eye (right) looks much like the uninjured one (left). Credit: Alice Accorsi]

Scientists have known for centuries that some snails can regrow their heads, and research has revealed other animals can regenerate bodies, tails or limbs. But this finding is exciting because apple snails have camera-like eyes similar to those of humans. Understanding how the snails re-create or repair their eyes might lead to therapies to heal people’s eye injuries or reverse diseases such as macular degeneration.

Accorsi, now a developmental biologist at the University of California, Davis, used the molecular scissors called CRISPR/Cas9 to genetically disable certain key genes involved in eye development and established lineages of snails carrying those mutations.

© Society for Science & the Public 2000–2025.
Keyword: Vision; Regeneration
Link ID: 29879 - Posted: 08.06.2025
By Andrew Iwaniuk, Georg Striedter

Sleep is the most obvious behavior that, in most animals, follows a circadian rhythm. But have you ever seen a bird asleep? Maybe you have, though they usually wake up before you get close enough to see whether they have their eyes closed. Moreover, just because an animal is still and has closed its eyes, does that really mean it is sleeping? Maybe it is just resting. Conversely, might some birds sleep with one or both eyes open? Indeed, it is difficult to tell whether an animal is sleeping just by observing it.

To overcome this problem, researchers may prod the animal to see whether it is less responsive at certain times of day. A more definitive method for demonstrating sleep in vertebrates is to record an animal’s brain waves (its electroencephalogram, or EEG), because these waves change significantly as an individual falls asleep and then progresses through several stages of sleep. In birds, the use of EEG recordings is essential because they can sleep with one or both eyes open, presumably so they can stay alert to threats. Ostriches, for example, tend to sleep while sitting on the ground, holding their head up high, and keeping both eyes open. They certainly look alert during this time, but EEG waves reveal that they are actually asleep.

Types and patterns of sleep

An EEG measures the activity of many neurons simultaneously. In mammals, it is usually recorded from multiple electrodes placed over the neocortex; in birds, the electrodes are typically placed on top of the hyperpallium (aka the Wulst; see Chapter 1). In addition to performing an EEG, sleep researchers typically record the animal’s eye movements and an electromyogram (EMG), which is a measure of muscle activity, often characterized as muscle “tone.”

These kinds of studies have revealed that, in mammals, the transition from the waking state to sleep is marked by a shift from EEG waves that are low in amplitude (i.e., small) and high in frequency (>20 Hz) to waves that are much larger but lower in frequency (1–4 Hz). Because the latter state is characterized by powerful low-frequency EEG waves (aka slow-wave activity), it is commonly called slow-wave sleep (SWS). The mechanisms that cause SWS are complicated and involve a variety of sleep-promoting processes. However, the large amplitude of these slow waves reflects that, during SWS, numerous neurons fire in rhythm with one another so that their electrical potentials sum when they are recorded through the EEG electrodes.

© 2025 Simons Foundation
Keyword: Sleep; Evolution
Link ID: 29878 - Posted: 08.06.2025
By Kamal Nahas

High-intensity yoga for less than 30 minutes, twice a week, may be the best workout routine for catching high-quality shut-eye, a new study shows. But before people jump on the yoga trend, researchers say more experiments are needed to confirm the study’s findings.

While exercise in general is known to improve sleep, a meta-analysis published July 11 in Sleep and Biological Rhythms presents a broad comparison of exercise routines and their influence on sleep quality. By indirectly comparing 30 trials from about a dozen countries, researchers at Harbin Sport University in China ranked how well different exercise methods influence sleep. Yoga won out, followed by walking, resistance training and aerobic exercise.

While sleep disorders can be treated with cognitive behavioral therapy or sleeping pills, these interventions don’t work for everyone. “Medications are helpful in the short-term, but some of them have negative effects on the elderly,” says Saurabh Thosar, a sleep researcher at the Oregon Institute of Occupational Health Sciences in Portland. Exercise offers an alternative, but it’s tough to tell which routine is best, making it unclear how best to prescribe it. Trials that investigate this question tend to include one or two types of exercise differing in factors such as how hard, how often or how long they were performed for.

Given the global prevalence of sleep problems such as insomnia, which recent estimates say affects about 16 percent of people worldwide, there is a pressing need to find the best exercise to prescribe for a good night’s snooze.

© Society for Science & the Public 2000–2025.
Keyword: Sleep
Link ID: 29877 - Posted: 08.06.2025
By Bridget Alex

More than 10 million years ago, ancestral apes in Africa rummaged through leaf litter for tasty morsels: fallen, fermenting fruit. Tapping this resource may have given some apes a nutritional boost, an advantage that could have paved the way for the evolution of our own alcohol tolerance. A study out today in BioScience adds support to this so-called “drunken monkey” hypothesis by examining just how often living apes indulge in fallen—presumably boozy—fruits. The research also gives this behavior a much-needed name: “scrumping.”

The work provides “a fresh and useful perspective on the importance of fallen fruit,” says Amanda Melin, a biological anthropologist at the University of Calgary who was not involved with the research. She adds that scrumping “is an efficient and evocative way to describe this behavior” that she will use in the future.

The form of alcohol we imbibe, ethanol, occurs naturally when yeast grows in fruits, saps, or nectars. Many animals, from elephants to songbirds, can get buzzed off these wild taps. Meanwhile, most human societies have invented ways to ferment food and drink. Biomolecular traces on artifacts show that by at least 8000 years ago, people in the Caucasus region were brewing alcoholic beverages from grapes, while people in China were sipping on boozy drinks made from many ingredients, including millet, rice, ginger, and yam. These beverages’ arrival coincides roughly with the start of farming. In fact, some scholars think cereals may have been domesticated for beer rather than bread.

The idea that our species’ ability to consume alcohol arose in our distant primate ancestors was formulated by evolutionary biologist Robert Dudley 25 years ago as he was studying monkeys—hence the name of the hypothesis—rather than the chimps and other apes analyzed in the new study. Rank, fermenting fruit is easy to sniff out, the idea goes, so being able to eat it would have given ancient apes an additional resource that other animals avoided.
Keyword: Drug Abuse; Evolution
Link ID: 29876 - Posted: 08.06.2025
Mariana Lenharo

In late 2005, five months after a car accident, a 23-year-old woman lay unresponsive in a hospital bed. She had a severe brain injury and showed no sign of awareness. But when researchers scanning her brain asked her to imagine playing tennis, something striking happened: brain areas linked to movement lit up on her scan [1].

The experiment, conceived by neuroscientist Adrian Owen and his colleagues, suggested that the woman understood the instructions and decided to cooperate — despite appearing to be unresponsive. Owen, now at Western University in London, Canada, and his colleagues had introduced a new way to test for consciousness. Whereas some previous tests relied on observing general brain activity, this strategy zeroed in on activity directly linked to a researcher’s verbal command.

The strategy has since been applied to hundreds of unresponsive people, revealing that many maintain an inner life and are aware of the world around them, at least to some extent. A 2024 study found that one in four people who were physically unresponsive had brain activity that suggested they could understand and follow commands to imagine specific activities, such as playing tennis or walking through a familiar space [2].

The tests rely on advanced neuroimaging techniques, so are mostly limited to research settings because of their high costs and the needed expertise. But since 2018, medical guidelines have started to recommend using these tests in clinical practice [3]. Since these methods emerged, scientists have been developing ways to probe layers of consciousness that are even more hidden.

The stakes are high. Tens of thousands of people worldwide are currently in a persistent unresponsive state. Assessing their consciousness can guide important treatment decisions, such as whether to keep them on life support. Studies also suggest that hospitalized, unresponsive people with hidden signs of awareness are more likely to recover than are those without such signs (see, for example, ref. 4).

© 2025 Springer Nature Limited
Keyword: Consciousness
Link ID: 29875 - Posted: 08.02.2025
By Tim Bayne

One of the key scientific questions about consciousness concerns its distribution. We know that adult humans have the capacity for consciousness, but what about human neonates, bees or artificial intelligence (AI) systems? Who else—other than ourselves—belongs in the “consciousness club,” and how might we figure this out?

It is tempting to assume, as many do, that we need a theory of consciousness to answer the distribution question. In the words of neuroscientists Giulio Tononi and Christof Koch, “we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it.” This is what philosopher Jonathan Birch has labeled the “theory-heavy” approach to the distribution problem.

But there are serious issues with the theory-heavy approach. One is that we don’t have a consensus theory of consciousness. In a highly selective review that Anil Seth and I published in 2022, we listed no fewer than 22 neurobiological theories of consciousness. This overabundance of theories could reasonably be ignored if most agreed on fundamental questions in the field, such as which systems have the capacity for consciousness or the question of when consciousness first emerges in human development, but they don’t.

A further problem with the theory-heavy approach is that in order to speak to the distribution problem, a theory cannot be restricted to consciousness as it occurs in adult humans, but must also apply to human infants, nonhuman animals, synthetic biological systems and AI. But because theories are largely based on data drawn from the study of adult humans, there will inevitably be a gap between the evidence base of a general theory and its scope. Why should we think that a theory developed in response to adult humans applies to different kinds of systems?

© 2025 Simons Foundation
Keyword: Consciousness
Link ID: 29874 - Posted: 08.02.2025
By Roni Caryn Rabin

The Food and Drug Administration on Wednesday approved a medical device that offers new hope to patients incapacitated by rheumatoid arthritis, a chronic condition that afflicts 1.5 million Americans and is often resistant to treatment. The condition is usually managed with medications.

The device represents a radical departure from standard care, tapping the power of the brain and nervous system to tamp down the uncontrolled inflammation that leads to the debilitating autoimmune disease. The SetPoint System is an inch-long device that is surgically implanted into the neck, where it sits in a pod wrapped around the vagus nerve, which some scientists believe is the longest nerve in the body. The device electrically stimulates the nerve for one minute each day. The stimulation can turn off crippling inflammation and “reset” the immune system, research has shown.

Most drugs used to treat rheumatoid arthritis suppress the immune system, leaving patients vulnerable to serious infections. On a recent episode of the American College of Rheumatology podcast, the SetPoint implant was described as representing a “true paradigm shift” in treatment of the disease, which until now has relied almost entirely on an evolving set of pharmaceutical interventions, from gold salts to powerful agents called biologics.

The F.D.A. designated the implant as a breakthrough last year in order to expedite its development and approval. It represents an early test of the promise of so-called bioelectronic medicine to modulate inflammation, which plays a key role in diseases including diabetes, heart disease and cancer. Clinical trials are already underway testing vagus nerve stimulation to manage inflammatory bowel disease in children, lupus and other conditions. Trials for patients with multiple sclerosis and Crohn’s disease are also planned.

In a yearlong randomized controlled trial of 242 patients that included a sham-treatment arm, over half of the participants using the SetPoint implant alone achieved remission or saw their disease recede. Measures of joint pain and swelling fell by 60 percent and 63 percent, respectively.

© 2025 The New York Times Company
Keyword: Pain & Touch; Neuroimmunology
Link ID: 29873 - Posted: 08.02.2025