Most Recent Links

Links 1601 - 1620 of 29322

Researchers have identified distinct differences among the cells comprising a tissue in the retina that is vital to human visual perception. The scientists from the National Eye Institute (NEI) discovered five subpopulations of retinal pigment epithelium (RPE)—a layer of tissue that nourishes and supports the retina’s light-sensing photoreceptors. Using artificial intelligence, the researchers analyzed images of RPE at single-cell resolution to create a reference map that locates each subpopulation within the eye. A report on the research was published in Proceedings of the National Academy of Sciences.

“These results provide a first-of-its-kind framework for understanding different RPE cell subpopulations and their vulnerability to retinal diseases, and for developing targeted therapies to treat them,” said Michael F. Chiang, M.D., director of the NEI, part of the National Institutes of Health. “The findings will help us develop more precise cell and gene therapies for specific degenerative eye diseases,” said the study’s lead investigator, Kapil Bharti, Ph.D., who directs the NEI Ocular and Stem Cell Translational Research Section.

Vision begins when light hits the rod and cone photoreceptors that line the retina in the back of the eye. Once activated, photoreceptors send signals through a complex network of other retinal neurons that converge at the optic nerve before traveling to various centers in the brain. The RPE sits beneath the photoreceptors as a monolayer, one cell deep. Age and disease can cause metabolic changes in RPE cells that can lead to photoreceptor degeneration. The impact on vision from these RPE changes varies dramatically by severity and by where the RPE cells reside within the retina. For example, late-onset retinal degeneration (L-ORD) affects mostly the peripheral retina and, therefore, peripheral vision. Age-related macular degeneration (AMD), a leading cause of vision loss, primarily affects RPE cells in the macula, which is crucial for central vision.

Keyword: Vision
Link ID: 28318 - Posted: 05.07.2022

By Rachel Zamzow

Inflammation may inflate or thin out brain regions tied to autism and schizophrenia, researchers report in a new study. The findings add nuance to the long-held hypothesis that immune activation elevates the risk for neurodevelopmental conditions. Autism, for example, is associated with prenatal exposure to infection, previous studies show.

Taking a different approach, the new work focuses on how a genetic predisposition to inflammation affects brain development in the general population, says John Williams, research fellow at the University of Birmingham in the United Kingdom, who conducted the work with lead researcher Rachel Upthegrove, professor of psychiatry and youth mental health at the university. By pinpointing where inflammation leaves its mark in the brain, the findings serve as a guidepost for future studies of people with neuropsychiatric conditions, he says. “We think that it points to something that’s fairly transdiagnostic.”

For their analyses, the team drew on brain imaging and genetic data from 10,828 women and 9,860 men in the general population who participated in the UK Biobank. They explored how 1,436 possible structural changes in the brain track with having single-nucleotide variants previously shown to increase circulating levels of five inflammatory molecules — interleukin 1 (IL-1), IL-2, IL-6, C-reactive protein and brain-derived neurotrophic factor. Three variants thought to boost IL-6 were associated with 33 structural changes, most notably increased volume in the middle temporal gyrus and fusiform gyrus, and decreased cortical thickness in the superior frontal gyrus — all brain areas implicated in autism. Variants associated with other inflammatory molecules did not track with brain changes, the researchers found. © 2022 Simons Foundation

Keyword: Autism; Genes & Behavior
Link ID: 28317 - Posted: 05.07.2022

Perspective by Susan Berger

As I faced a prophylactic double mastectomy in hopes of averting cancer, I had many questions for my surgeons — one of which was about pain. I was stunned when both my breast surgeon and plastic surgeon said that a nerve block would leave me pain-free for about three days, after which the worst of the pain would be over. (Pectoralis nerve, or PECS, blocks were developed to provide analgesia, or pain relief, for chest surgeries, including breast surgery.) That is what happened.

I went through the mastectomy Dec. 1 after learning I had the PALB2 gene mutation that carried a sharply elevated risk of breast cancer as well as a higher risk of ovarian and pancreatic cancers. I also had my fallopian tubes and ovaries removed in July. I had learned about the gene mutation in April 2021, when one of my daughters found out she was a carrier. As a 24-year breast cancer survivor and longtime health reporter, I was astonished that I had heard nothing about this mutation. I researched it and wrote “This Breast Cancer Gene Is Less Well Known, but Nearly as Dangerous” in August. After the double mastectomy, I also wrote about it for The Washington Post.

Just as my surgeons at NorthShore University HealthSystem predicted, I was released from the hospital the same day as my surgery and remarkably pain-free. I took one Tramadol (a step down from stronger medications containing codeine) when I got home — only because it was suggested I take one pill. As I recovered, I took only Advil and Tylenol.

The opioid epidemic is a major public health issue in the United States, and nerve blocks could be a solution. According to a study published in the Journal of Clinical Medicine in 2021, 1 in 20 surgical patients will continue to use opioids beyond 90 days. “There is no association with magnitude of surgery, major versus minor, and the strongest predictor of continued use is surgical exposure,” the study states. © 1996-2022 The Washington Post

Keyword: Pain & Touch; Drug Abuse
Link ID: 28316 - Posted: 05.07.2022

By Jim Robbins

TUCSON, Ariz. — In a small room in a building at the Arizona-Sonora Desert Museum, the invertebrate keeper, Emma Califf, lifts up a rock in a plastic box. “This is one of our desert hairies,” she said, exposing a three-inch-long scorpion, its tail arced over its back. “The largest scorpion in North America.” This captive hairy, along with a swarm of inch-long bark scorpions in another box, and two dozen rattlesnakes of varying species and subspecies across the hall, are kept here for the coin of the realm: their venom.

Efforts to tease apart the vast swarm of proteins in venom — a field called venomics — have burgeoned in recent years, and the growing catalog of compounds has led to a number of drug discoveries. As the components of these natural toxins continue to be assayed by evolving technologies, the number of promising molecules is also growing. “A century ago we thought venom had three or four components, and now we know just one type of venom can have thousands,” said Leslie V. Boyer, a professor emeritus of pathology at the University of Arizona. “Things are accelerating because a small number of very good laboratories have been pumping out information that everyone else can now use to make discoveries.” She added, “There’s a pharmacopoeia out there waiting to be explored.”

It is a striking case of modern-day scientific alchemy: The most highly evolved of natural poisons on the planet are creating a number of effective medicines with the potential for many more. One of the most promising venom-derived drugs to date comes from the deadly Fraser Island funnel web spider of Australia; it halts cell death after a heart attack. Blood flow to the heart is reduced after a heart attack, which makes the cell environment more acidic and leads to cell death. The drug, a protein called Hi1A, is scheduled for clinical trials next year. In the lab, it was tested on the cells of beating human hearts. It was found to block their ability to sense acid, “so the death message is blocked, cell death is reduced, and we see improved heart cell survival,” said Nathan Palpant, a researcher at the University of Queensland in Australia who helped make the discovery. © 2022 The New York Times Company

Keyword: Pain & Touch; Neurotoxins
Link ID: 28315 - Posted: 05.04.2022

If you’ve ever been put under anaesthesia, you might recall the disorienting feeling of blinking your eyes one moment and, the next, waking up hours later. Now, findings from a new study illustrate just how profoundly general anaesthesia alters the state of the brain as it induces and maintains unconsciousness. It’s the first paper to track travelling brain waves in subjects all the way from losing to regaining consciousness.

An interdisciplinary team has found that the commonly used anaesthetic propofol substantially alters how different frequencies of brain waves travel along the cortex – the surface of the brain – and the research has been published in the Journal of Cognitive Neuroscience. Unconsciousness induced by propofol may be due in part to an increase in the strength and direction of slow delta travelling brain waves that disrupt the higher-frequency waves associated with cognition.

“The rhythms that we associate with higher cognition are drastically altered by propofol,” explains senior author Earl Miller, professor of neuroscience with the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT) in the US. “The beta traveling waves seen during wakefulness are pushed aside, redirected by delta traveling waves that have been altered and made more powerful by the anaesthetic,” he says. “The deltas come through like a bull in a china shop.”

Conscious brains show a mixture of brain waves of different frequencies, which rotate or travel straight in various directions: you could think of them like the numerous waves on a choppy ocean.

Keyword: Sleep
Link ID: 28314 - Posted: 05.04.2022

By Lola Butcher

While Covid-19’s death toll grabbed the spotlight these past two years, another epidemic continued marching grimly onward in America: deaths from opioid overdose. A record 68,630 individuals died from opioid overdoses in 2020, partly as a result of the isolation and social distancing forced by the pandemic; early data suggest that death rates in many states were even worse in the first half of 2021. But the coronavirus pandemic may also have had a paradoxical benefit for those addicted to opioids: Because Covid-19 made in-person health care unsafe, US telehealth regulations were relaxed so that more services — including addiction treatment — could be provided online. As a result, people with opioid use disorder are accessing medication and support across the country in greater numbers than ever before. While it’s too soon to know for sure whether this helps more people kick their addiction, early signs are promising.

The federal government estimates that 2.7 million Americans — nearly 1 percent of the population — have opioid use disorder, also known as opioid addiction. It is a chronic brain disease that develops over time because of repeated use of prescription opioids such as hydrocodone, oxycodone and morphine or illicit fentanyl and heroin. A person with opioid use disorder has a 20 times greater risk of death from overdose, infectious diseases, trauma and suicide than one who does not.

Fortunately, two medications — methadone and buprenorphine, both approved by the US Food & Drug Administration — help individuals manage withdrawal symptoms and control or eliminate their compulsive opioid use. Patients who receive these medications fare better than those who do not on a long list of outcomes, says Eric Weintraub, who heads the Division of Alcohol and Drug Abuse at the University of Maryland School of Medicine. They have fewer overdoses; less injection drug use; reduced risk for disease transmission; decreased criminal activity; lower rates of illegal drug use; and better treatment-retention rates. Indeed, people with opioid use disorder receiving long-term treatment with methadone or buprenorphine are up to 50 percent less likely to die from an overdose. © 2022 Annual Reviews

Keyword: Drug Abuse
Link ID: 28313 - Posted: 05.04.2022

Ellen Phiddian

Tricyclic antidepressants have long been known to have more than one purpose: among other things, they can alleviate pain – particularly nerve pain. Recent research has finally established why tricyclic antidepressants (TCAs) can help with nerve pain, and the discovery could lead to the rapid development of pain-relief medications without the side effects of TCAs. Nerve pain comes from a variety of sources – including cancer, diabetes, trauma, multiple sclerosis and infections – and these treatments could address a range of different types of nerve pain.

It turns out the drugs inhibit a key protein in our nerves, called an N-type calcium channel. These N-type calcium channels are shaped like tiny gates, allowing positively charged calcium ions, or Ca2+, through them, which helps with the transmission of pain signals in the body. Researchers have long been keen to find things that “close” the gate of these calcium channels, because that is likely to have analgesic effects.

Adjunct Professor Peter Duggan, a researcher with the CSIRO and a senior collaborator on the project, says that he and his colleagues initially stumbled across TCAs from a very different direction: they were investigating the toxins of venomous marine cone snails. “A few of the components in that toxin are actually painkillers and they block these calcium ion channels very, very effectively,” says Duggan. The cone snail toxin has the potential to be very dangerous to people, and it would also need to be administered in an impractical way, so the researchers started looking at similar compounds that might have some of the same properties.

Keyword: Pain & Touch; Depression
Link ID: 28312 - Posted: 05.04.2022

By Gina Kolata

An experimental drug has enabled people who have obesity or are overweight to lose about 22.5 percent of their body weight, about 52 pounds on average, in a large trial, the drug’s maker announced on Thursday. The company, Eli Lilly, has not yet submitted the data for publication in a peer-reviewed medical journal or presented them in a public setting. But the claims nonetheless amazed medical experts.

“Wow (and a double Wow!)” Dr. Sekar Kathiresan, chief executive of Verve Therapeutics, a company focusing on heart disease drugs, wrote in a tweet. Drugs like Eli Lilly’s, he added, are “truly going to revolutionize the treatment of obesity!!!” Dr. Kathiresan has no ties to Eli Lilly or to the drug. Dr. Lee Kaplan, an obesity expert at the Massachusetts General Hospital, said that the drug’s effect “appears to be significantly better than any other anti-obesity medication that is currently available in the U.S.” The results, he added, are “very impressive.” Dr. Kaplan, who consults for a dozen pharmaceutical companies, including Eli Lilly, said he was not involved in the new trial or in the development of this drug.

On average, participants in the study weighed 231 pounds at the outset and had a body mass index, or B.M.I. — a commonly used measure of obesity — of 38. (Obesity is defined as a B.M.I. of 30 or higher.) At the end of the study, those taking the higher doses of the Eli Lilly drug, called tirzepatide, weighed about 180 pounds and had a B.M.I. just below 30, on average. The results far exceed those usually seen in trials of weight-loss medications and are typically seen only in surgical patients. Some trial participants lost enough weight to fall into the normal range, said Dr. Louis J. Aronne, director of the comprehensive weight control program at Weill Cornell Medical Center, who worked with Eli Lilly as the study’s principal investigator.

Most of the people in the trial did not qualify for bariatric surgery, which is reserved for people with a B.M.I. over 40, or those with a B.M.I. from 35 to 40 with sleep apnea or Type 2 diabetes. The risk of developing diabetes is many times higher for people with obesity than for people without it. © 2022 The New York Times Company
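For a sense of how the reported figures fit together, here is a quick arithmetic check (a minimal sketch, not from the article; the average height is an assumption back-calculated from the reported weight and B.M.I., using the standard formula B.M.I. = 703 × weight in pounds / height in inches squared):

```python
# Illustrative check that the reported trial averages are internally consistent.
start_lb = 231      # reported average starting weight (pounds)
start_bmi = 38      # reported average starting B.M.I.
loss_frac = 0.225   # reported average loss of 22.5 percent at the higher doses

# Infer the implied average height from the starting numbers (an assumption,
# not a figure from the article); since B.M.I. = 703 * lb / in**2:
height_in = (703 * start_lb / start_bmi) ** 0.5   # about 65.4 inches

end_lb = start_lb * (1 - loss_frac)               # about 179 lb, i.e. about 52 lb lost
end_bmi = 703 * end_lb / height_in ** 2           # about 29.4 -- "just below 30"

print(f"implied height ~{height_in:.1f} in; end weight ~{end_lb:.0f} lb; end B.M.I. ~{end_bmi:.1f}")
```

Note that the ending B.M.I. can also be read directly as 38 × (1 − 0.225) ≈ 29.4, since height cancels out; either way the numbers in the report are consistent with one another.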

Keyword: Obesity
Link ID: 28311 - Posted: 04.30.2022

By Elizabeth Preston

On dry nights, the San hunter-gatherers of Namibia often sleep under the stars. They have no electric lights or new Netflix releases keeping them awake. Yet when they rise in the morning, they haven’t gotten any more hours of sleep than a typical Western city-dweller who stayed up doom-scrolling on their smartphone. Research has shown that people in non-industrial societies — the closest thing to the kind of setting our species evolved in — average less than seven hours a night, says evolutionary anthropologist David Samson at the University of Toronto Mississauga.

That’s a surprising number when you consider our closest animal relatives. Humans sleep less than any ape, monkey or lemur that scientists have studied. Chimps sleep around 9.5 hours out of every 24. Cotton-top tamarins sleep around 13. Three-striped night monkeys are technically nocturnal, though really, they’re hardly ever awake — they sleep for 17 hours a day. Samson calls this discrepancy the human sleep paradox. “How is this possible, that we’re sleeping the least out of any primate?” he says. Sleep is known to be important for our memory, immune function and other aspects of health. A predictive model of primate sleep based on factors such as body mass, brain size and diet concluded that humans ought to sleep about 9.5 hours out of every 24, not seven. “Something weird is going on,” Samson says.

Research by Samson and others in primates and non-industrial human populations has revealed the various ways that human sleep is unusual. We spend fewer hours asleep than our nearest relatives, and more of our night in the phase of sleep known as rapid eye movement, or REM. The reasons for our strange sleep habits are still up for debate but can likely be found in the story of how we became human.

[Figure: average time spent sleeping across primate species. Humans sleep the least, at about seven hours per night; the three-striped night monkey sleeps the most, at nearly 17 hours.] © 2022 Annual Reviews

Keyword: Sleep; Evolution
Link ID: 28310 - Posted: 04.30.2022

By James Gorman

Don’t judge a book by its cover. Don’t judge a dog by its breed. After conducting owner surveys for 18,385 dogs and sequencing the genomes of 2,155 dogs, a group of researchers reported a variety of findings in the journal Science on Thursday, including that for predicting some dog behaviors, breed is essentially useless, and for most, not very good.

For instance, one of the clearest findings in the massive, multifaceted study is that breed has no discernible effect on a dog’s reactions to something it finds new or strange. This behavior is related to what the nonscientist might call aggression and would seem to cast doubt on breed stereotypes of aggressive dogs, like pit bulls. One thing pit bulls did score high on was human sociability, no surprise to anyone who has seen internet videos of lap-loving pit bulls. Labrador retriever ancestry, on the other hand, didn’t seem to have any significant correlation with human sociability.

This is not to say that there are no differences among breeds, or that breed can’t predict some things. If you adopt a Border collie, said Elinor Karlsson of the Broad Institute and the University of Massachusetts Chan Medical School, an expert in dog genomics and an author of the report, the probability that it will be easier to train and interested in toys “is going to be higher than if you adopt a Great Pyrenees.” But for any given dog you just don’t know — on average, breed accounts for only about 9 percent of the variation in any given dog’s behavior. And no behaviors were restricted to any one breed, even howling, though the study found that behavior was more strongly associated with breeds like Siberian huskies than with other dogs.

And yet, in what might seem paradoxical at first, the researchers also found that behavior patterns are strongly inherited. The behaviors they studied had a 25 percent heritability, a complex measure that indicates the influence of genes but that depends on the group of animals studied. But with enough dogs, heritability is a good measure of what’s inherited. In comparing whole genomes, they found several genes that clearly influence behavior, including one for how friendly dogs are. © 2022 The New York Times Company

Keyword: Genes & Behavior; Aggression
Link ID: 28309 - Posted: 04.30.2022

By Monique Brouillette

Neuroscientists have long aspired to understand the intangible properties of the mind. Our most treasured cerebral qualities, like the ability to think, write poetry, fall in love and even envision a higher spiritual realm, are all generated in the brain. But how the squishy, pinkish-gray, wrinkled mass of the physical brain gives rise to these impalpable experiences remains a mystery. Some neuroscientists think the key to cracking that mystery is a better map of the brain’s circuitry.

Nearly 40 years ago, scientists achieved a milestone by completing a wiring diagram that traced all the connections of the 302 neurons of the roundworm Caenorhabditis elegans. They were traced by hand on printed sheets of electron microscope images, a meticulous and herculean task that took years to complete. The project marked the first-ever complete connectome — a comprehensive map of the neuronal connections in an animal’s nervous system.

Today, thanks to advances in computing and image analysis algorithms, it can take less than a month to map a roundworm’s connectome. These technological improvements mean that scientists can set their sights on larger animals. They are closing in on the connectome of fruit fly larvae, with more than 9,000 cells, and adult flies, with 100,000 neurons. Next, they hope to map the brain of a developing fish and, perhaps within the next decade, a mouse, with roughly 70 million neurons — a project nearly a thousand times more ambitious than any done so far. And they have already started to map small pieces of the human brain, an unfathomable quest when the worm connectome was initially mapped. © 2022 Annual Reviews

Keyword: Brain imaging
Link ID: 28308 - Posted: 04.30.2022

By Laura Sanders

Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says.

But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says.

He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. © Society for Science & the Public 2000–2022.

Keyword: Language; Development of the Brain
Link ID: 28307 - Posted: 04.30.2022

By Lisa Feldman Barrett

Do your facial movements broadcast your emotions to other people? If you think the answer is yes, think again. This question is under contentious debate. Some experts maintain that people around the world make specific, recognizable faces that express certain emotions, such as smiling in happiness, scowling in anger and gasping with widened eyes in fear. They point to hundreds of studies that appear to demonstrate that smiles, frowns, and so on are universal facial expressions of emotion. They also often cite Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals to support the claim that universal expressions evolved by natural selection.

Other scientists point to a mountain of counterevidence showing that facial movements during emotions vary too widely to be universal beacons of emotional meaning. People may smile in hatred when plotting their enemy’s downfall and scowl in delight when they hear a bad pun. In Melanesian culture, a wide-eyed gasping face is a symbol of aggression, not fear. These experts say the alleged universal expressions just represent cultural stereotypes. To be clear, both sides in the debate acknowledge that facial movements vary for a given emotion; the disagreement is about whether there is enough uniformity to detect what someone is feeling.

This debate is not just academic; the outcome has serious consequences. Today you can be turned down for a job because a so-called emotion-reading system watching you on camera applied artificial intelligence to evaluate your facial movements unfavorably during an interview. In a U.S. court of law, a judge or jury may sometimes hand down a harsher sentence, even death, if they think a defendant’s face showed a lack of remorse. Children in preschools across the country are taught to recognize smiles as happiness, scowls as anger and other expressive stereotypes from books, games and posters of disembodied faces. And for children on the autism spectrum, some of whom have difficulty perceiving emotion in others, these teachings do not translate to better communication. © 2022 Scientific American

Keyword: Emotions; Evolution
Link ID: 28306 - Posted: 04.30.2022

Natalia Mesa

Cravings for sugary treats and other “wants” in humans are driven by the activity of dopamine-producing cells in our mesolimbic system. Experimental research now suggests that a similar system might also exist in honeybees (Apis mellifera), spurring them to “want” to search for sources of nectar. In a study published today (April 28) in Science, researchers found that bees’ dopamine levels were elevated during the search for food and dropped once the food was consumed. Dopamine may also help trigger a hedonic, or pleasant, “memory” of the sugary treat, the researchers say, as dopamine levels rose again when foragers danced to tell other foragers about the foods’ locations.

“The whole story is new. To show that there is a wanting system in insects is generally new,” says study coauthor Martin Giurfa, a neuroscientist at Paul Sabatier University in Toulouse, France. “Bees are truly amazing.”

In both humans and invertebrates, dopamine is known to be involved in learning and reward. Giurfa and his team have been studying the neurotransmitter in bees, and several years ago, they characterized many of the neural pathways that involved dopamine. “We found so many, so diverse pathways that we said, ‘There might be more than just representing reinforcement, representing punishment, representing reward.’” He began to look for other roles dopamine might play in honeybee behavior. © 1986–2022 The Scientist.

Keyword: Drug Abuse; Evolution
Link ID: 28305 - Posted: 04.30.2022

By Helen Ouyang

After an hour-and-a-half bus ride last November, Julia Monterroso arrived at a white Art Deco building in West Hollywood, just opposite a Chanel store and the Ivy, a restaurant famous for its celebrity sightings. Monterroso was there to see Brennan Spiegel, a gastroenterologist and researcher at Cedars-Sinai who runs one of the largest academic medical initiatives studying virtual reality as a health therapy. He started the program in 2015 after the hospital received a million-dollar donation from an investment banker on its board. Spiegel saw Monterroso in his clinic the week before and thought he might be able to help alleviate her symptoms.

Monterroso is 55 and petite, with youthful bangs and hair clipped back by tiny jeweled barrettes. Eighteen months earlier, pain seized her lower abdomen and never went away. After undergoing back surgery in September to treat a herniated disc — and after the constant ache in her abdomen worsened — she had to stop working as a housecleaner. Eventually, following a series of tests that failed to reveal any clear cause, she landed in Spiegel’s office. She rated her pain an 8 on a 10-point scale, with 10 being the most severe.

Chronic pain is generally defined as pain that has lasted three months or longer. It is one of the leading causes of long-term disability in the world. By some measures, 50 million Americans live with chronic pain, in part because the power of medicine to relieve pain remains woefully inadequate. As Daniel Clauw, who runs the Chronic Pain and Fatigue Research Center at the University of Michigan, put it in a 2019 lecture, there isn’t “any drug in any chronic-pain state that works in better than one out of three people.” He went on to say that nonpharmacological therapy should instead be “front and center in managing chronic pain — rather than opioids, or for that matter, any of our drugs.”

Virtual reality is emerging as an unlikely tool for solving this intractable problem. The V.R. segment in health care alone, which according to some estimates is already valued at billions of dollars, is expected to grow by multiples of that in the next few years, with researchers seeing potential for it to help with everything from anxiety and depression to rehabilitation after strokes to surgeons strategizing where they will cut and stitch. In November, the Food and Drug Administration gave authorization for the first V.R. product to be marketed for the treatment of chronic pain. © 2022 The New York Times Company

Keyword: Pain & Touch; Vision
Link ID: 28304 - Posted: 04.27.2022

By Michele Lent Hirsch

Sleep problems are a hallmark of modern American life — perhaps never more so than recently. In 2016, the Centers for Disease Control and Prevention found that a third of Americans were getting too little sleep at night. But then came the stressors of the pandemic, job losses, disrupted schedules and closed schools, which kept record numbers of Americans up at night or unable to wake up in the morning. As many as 2 in 3 Americans reported getting either too much or too little sleep in a survey from the American Psychological Association during the pandemic’s second year.

And the insomnia of the past two years may be stubbornly hanging on: Many people continue having more trouble falling asleep or staying asleep, or have seen unusual shifts in their sleep schedules. All of this is taking a toll. “These different types of sleep changes seem to be closely related to [problems with] mental health,” says Karianne Dion, a graduate student in clinical psychology at the University of Ottawa. Research she co-wrote, published in the Journal of Sleep Research in 2021, found “worse symptoms of stress, anxiety, and depression” among those who are sleeping less or going to bed later and waking up later than before.

Researchers have long known that anxiety and depression can lead to sleeplessness, while sleeping poorly can increase the likelihood of anxiety and depression. But a good night’s rest is also critical for a strong immune system, as well as for health overall. Insufficient sleep over time is associated with a greater risk of diabetes, high blood pressure and heart disease, according to the CDC. It can lead to memory and cognitive issues as well. So how can we get the sleep we need? Here’s how to solve seven common problems that can interfere with your rest and your health. © 1996-2022 The Washington Post

Keyword: Sleep
Link ID: 28303 - Posted: 04.27.2022

By Hope Reese

Can we do without love? For many years, the neuroscientist Stephanie Ortigue believed that the answer was yes. Even though she researched the science of human connections, Dr. Ortigue — an only child and, in her 20s and 30s, contentedly single — couldn’t completely grasp its importance in her own life. “I told myself that being unattached made me a more objective researcher: I could investigate love without being under its spell,” she writes in her new book, “Wired for Love: A Neuroscientist’s Journey Through Romance, Loss and the Essence of Human Connection.”

But then, in 2011, at age 37, she met John Cacioppo at a neuroscience conference in Shanghai. Dr. Cacioppo, who popularized the concept that prolonged loneliness can be as toxic to health as smoking, intrigued her. The two scientists fell hard for each other and married. She took his last name and they soon became colleagues at the University of Chicago’s Pritzker School of Medicine (where she now directs the Brain Dynamics Laboratory) — forming a team at home and in the lab.

“Wired for Love” is the neurobiological story of how love rewires the brain. It’s also a personal love story — one that took a sad turn when John died of cancer in March 2018. Here, Dr. Cacioppo discusses what exactly love does to the brain, how to fight loneliness and how love is, literally, a product of the imagination.

You went from being happily single, to coupled, to then losing your husband. How did meeting him bring your research on love to life?

When we first met, we spoke for three hours, but I couldn’t feel time go by. I felt euphoria — from the rush of dopamine. I blushed — a sign of adrenaline. We became closer, physically, and started imitating each other. This was from the activation of mirror neurons, a network of brain cells that are activated when you move or feel something, and when you see another person moving. When you have a strong connection with someone, the mirror neuron system is boosted. © 2022 The New York Times Company

Keyword: Sexual Behavior; Emotions
Link ID: 28302 - Posted: 04.27.2022

By Melinda Wenner Moyer

The more popular antidepressants become, the more questions they raise. The drugs are one of the most widely prescribed types of medications in the United States, with more than one out of eight Americans over 18 having recently taken them, according to a survey from the Centers for Disease Control and Prevention. Yet we know very little about how well antidepressants work over the long term, and especially how they affect overall quality of life, experts say.

Most clinical drug trials have followed people taking antidepressants for only eight to 12 weeks, so it’s unclear what happens when patients take them for longer than that, said Gemma Lewis, a research psychologist at University College London who studies the causes, treatment and prevention of depression and anxiety. “We definitely need longer follow-ups of people who are using or are not using antidepressants, to see what the long-term outcomes are,” Dr. Lewis said.

A study published yesterday in the journal PLoS One aimed to close this knowledge gap by comparing, over the course of two years, the changes in quality of life reported by Americans with depression who took antidepressants versus the changes reported by those with the same diagnosis who did not take the medications. The study included people who took all types of antidepressants, including selective serotonin reuptake inhibitors like Prozac, serotonin-norepinephrine reuptake inhibitors like Effexor and older antidepressants such as clomipramine and phenelzine. Researchers assessed both mental and physical quality of life with a survey that asked questions about subjects’ physical health, energy levels, mood, pain and ability to perform daily activities, among other things.

The paper found no significant differences in the changes in quality of life reported by the two groups, which suggests that antidepressant drugs may not improve long-term quality of life. Both groups reported slight increases in the mental aspects of quality of life over time, and slight drops in their physical quality of life. But the study is imperfect, researchers say, and it certainly doesn’t settle the debate over the effectiveness of these drugs. © 2022 The New York Times Company

Keyword: Depression
Link ID: 28301 - Posted: 04.27.2022

By Linda Searing

Already known to help ease depression, regular exercise may also help prevent it, with people who exercised just half the recommended weekly amount lowering their risk for depression by 18 percent, according to research published in the journal JAMA Psychiatry. Those who were even more active, meeting at least the minimum recommended physical activity level, reduced their risk for depression by 25 percent, compared with inactive people.

The findings stem from the analysis of data from 15 studies involving 191,130 adults who were tracked for at least three years. Those who met activity guidelines did at least 150 minutes a week of moderate-intensity activity, such as brisk walking, as recommended in the Physical Activity Guidelines for Americans.

Mental health experts note that nearly 10 percent of American adults struggle with some form of depression each year. Antidepressant medication and talk therapy are commonly prescribed treatments, but exercise is also considered an effective treatment. Exercise sparks the brain’s release of endorphins, sometimes referred to as feel-good hormones. It can also quiet the mind, quelling the cycle of negative thoughts that often accompany depression, and can help reduce stress, improve sleep and boost self-esteem.

Urging doctors to encourage their patients to increase their physical activity, the researchers wrote that the study’s findings suggest “significant mental health benefits from being physically active, even at levels below the public health recommendations.” If less-active participants in the study had exercised more, they say, 11.5 percent of depression cases could have been prevented.

Keyword: Depression
Link ID: 28300 - Posted: 04.27.2022

Carrie Arnold

Playing the mating game is risky. Organisms must cope with the existential risk that swiping right on the wrong choice could doom future generations to a lifetime of bad genes. They also have to contend with more immediate burdens and risks: Participants need to gather resources for courting and summon energy to pursue a potential partner. Animals engaged in amorous activities also make easy targets for predators.

Small wonder, then, that when times are good, the roundworm Caenorhabditis elegans doesn’t bother with the process. As a mostly hermaphroditic species (with a few males thrown in for variety), a C. elegans worm usually self-fertilizes its eggs until its sperm stash is depleted late in life; only then does it produce a pheromone to attract males and stay in the reproductive game. But when environmental conditions become stressful, the worms become sexually attractive much sooner. For them, sex is the equivalent of a Hail Mary pass — a desperate gamble that if their offspring are more genetically diverse, some will fare better under the new, rougher conditions.

Scientists thought this stress-induced shift was purely fleeting. But recently, when scientists at Tel Aviv University raised C. elegans in too-warm conditions for more than 10 generations, they discovered that the worms continued to be sexually attractive for several more generations after they were moved to cooler surroundings. It’s an observation that highlights how inheritance does not always reduce to a simple accounting of the genes in organisms, and it may point to a mechanism that works in tandem with traditional natural selection in shaping the evolution of some organisms.

As the new paper in Developmental Cell shows, the cause of this trait wasn’t a genetic change to the worm’s DNA but rather an inherited “epigenetic” change that influenced how the DNA was used. The researchers — senior author Oded Rechavi, a biologist at Tel Aviv University, first author Itai Toker (now a postdoctoral fellow at Columbia University) and their colleagues — identified a small RNA molecule that can be passed between generations to signal for production of the pheromone. In effect, this heritable RNA molecule improves the odds that the worms will evolve in stressful times.

Keyword: Sexual Behavior; Epigenetics
Link ID: 28299 - Posted: 04.23.2022