Most Recent Links

Links 11241 - 11260 of 29238

By BENJAMIN EWEN-CAMPEN

This may seem obvious. But in evolutionary terms, the benefits of sexual reproduction are not immediately clear. Male rhinoceros beetles grow huge, unwieldy horns half the length of their body that they use to fight for females. Ribbon-tailed birds of paradise produce outlandish plumage to attract a mate. Darwin was bothered by such traits, since his theory of evolution couldn’t completely explain them (“The sight of a feather in a peacock’s tail, whenever I gaze at it, makes me feel sick!” he wrote to a friend). Moreover, sex allows an unrelated, possibly inferior partner to insert half a genome into the next generation. So why is sex nearly universal across animals, plants and fungi? Shouldn’t natural selection favor animals that forgo draining displays and genetic roulette and simply clone themselves? Yes and no. Many animals do clone themselves; certain sea anemones can bud identical twins from the sides of their bodies. Aphids, bees and ants can reproduce asexually. Virgin births sometimes occur among hammerhead sharks, turkeys, boa constrictors and Komodo dragons. But nearly all animals engage in sex at some point in their lives. Biologists say that the benefits of sex come from the genetic rearrangements that occur during meiosis, the special cell division that produces eggs and sperm. During meiosis, combinations of the parents’ genes are broken up and reconfigured into novel arrangements in the resulting sperm and egg cells, creating new gene combinations that might be advantageous. © 2013 Salon Media Group, Inc.

Keyword: Sexual Behavior; Evolution
Link ID: 18571 - Posted: 08.28.2013

Beth Skwarecki

Be careful what you say around a pregnant woman. As a fetus grows inside a mother's belly, it can hear sounds from the outside world—and can understand them well enough to retain memories of them after birth, according to new research. It may seem implausible that fetuses can listen to speech within the womb, but the sound-processing parts of their brain become active in the last trimester of pregnancy, and sound carries fairly well through the mother's abdomen. "If you put your hand over your mouth and speak, that's very similar to the situation the fetus is in," says cognitive neuroscientist Eino Partanen of the University of Helsinki. "You can hear the rhythm of speech, rhythm of music, and so on." A 1988 study suggested that newborns recognize the theme song from their mother's favorite soap opera. More recent studies have expanded on the idea of fetal learning, indicating that newborns have already familiarized themselves with the sounds of their parents’ native language; one showed that American newborns seem to perceive Swedish vowel sounds as unfamiliar, sucking on a high-tech pacifier to hear more of the new sounds. Swedish infants showed the same response to English vowels. But those studies were based on babies' behaviors, which can be tricky to test. Partanen and his team decided instead to outfit babies with EEG sensors to look for neural traces of memories from the womb. "Once we learn a sound, if it's repeated to us often enough, we form a memory of it, which is activated when we hear the sound again," he explains. This memory speeds up recognition of sounds in the learner's native language and can be detected as a pattern of brain waves, even in a sleeping baby. © 2012 American Association for the Advancement of Science.

Keyword: Language; Development of the Brain
Link ID: 18570 - Posted: 08.27.2013

By Felicity Muth

Humans love their victory displays. You only have to watch a game of football (or soccer to US readers) to see some victory displays of the most ridiculous kind. Why do people do such things? If there were no crowd there, it is unlikely that they would perform such displays. But is it for the sake of the sex they are wishing to attract, or perhaps to put people they are competing with in no doubt of their accomplishment? Other animals, of course, also compete with each other, for food, resources and mates. And, like humans, how they behave once they win or lose a competition may depend on who’s around to see it. Male spring field crickets fight with other males for territories and females. The winners tend to do a lot better with the lady crickets, as the winners may gain the best territory, and because females of this species prefer dominant males. Now for the part that may surprise you: the males that win these fights will perform a victory display just like humans – after beating another male, the male winner performs an aggressive song and jerks his body in a particular way to show off that he’s won this fight. But, like with humans, the question arises: why do males do these victory displays? Is it to show the loser male that he has lost, or to show other males and females that he’s won? © 2013 Scientific American

Keyword: Sexual Behavior; Attention
Link ID: 18569 - Posted: 08.27.2013

Erika Check Hayden

US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today. The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.

The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.

 The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses.
 And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.
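As a rough illustration of the comparison described above, here is a minimal sketch of how a study's effect might be flagged as "extreme" relative to a pooled meta-analytic estimate. This is not the authors' actual method (the paper's statistics are more involved): the fixed-effect pooling, the 1.96 cutoff, and every number below are invented assumptions for the example.

```python
# Illustrative sketch only: flag studies whose effect size deviates
# sharply from the pooled (inverse-variance weighted) meta-analytic effect.
# All numbers are invented; the 1.96-standard-error cutoff is an assumption.

def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic estimate: inverse-variance weighted mean."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def is_extreme(effect, variance, pooled, z_cutoff=1.96):
    """Call a study 'extreme' if it sits more than z_cutoff standard
    errors away from the pooled effect."""
    z = (effect - pooled) / variance ** 0.5
    return abs(z) > z_cutoff

effects = [0.10, 0.15, 0.80, 0.12]      # hypothetical study effect sizes
variances = [0.01, 0.02, 0.02, 0.015]   # hypothetical sampling variances
pooled = pooled_effect(effects, variances)
flags = [is_extreme(e, v, pooled) for e, v in zip(effects, variances)]
# Only the third (outlying) study is flagged.
```

The paper's second finding would then amount to asking whether flagged deviations cluster on the side that favours the study's starting hypothesis.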

 © 2013 Nature Publishing Group

Keyword: Brain imaging; Genes & Behavior
Link ID: 18568 - Posted: 08.27.2013

By Clayton Aldern We’ve been here before. Two or three times a year, a team of neuroscientists comes along and tightropes over the chasm that is dystopian research. Across the valley lies some pinnacle of human achievement; below flows the dirty, coursing river of mind control and government-sponsored brainwashing and all things Nineteen Eighty-Four. Cliffside, maybe clutching our tinfoil caps, we bite our nails and try to keep our faith in the scientists. This time is no different. On July 26, a research team took its first step onto the tightrope. Working under Nobel laureate Susumu Tonegawa, the MIT group reported that they had created a false memory in the brain of a mouse. “Our data,” wrote the authors in Science, “demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.” While the sterility reserved for scientific research abstracts tends to diffuse the élan of the work, the gravity here is apparent. Which brings us to the cliff and the chasm. That devil-klaxon of a sound effect from Inception always seems appropriate for heralding reports with sci-fi undertones. In the case of the closest thing we have to an actual inception, it seems particularly apt. But the group’s work is not Inception per se, and it’s certainly not Total Recall. That’s not to say it isn’t unnerving. It’s also not to say the study isn’t remarkable. More than anything, the Science paper’s publication is a reminder that neuroscience is inching over some dangerous ethical waters, and from here, it is important to tread carefully. © 2013 Scientific American

Keyword: Learning & Memory
Link ID: 18567 - Posted: 08.27.2013

By Eleanor Bradford, BBC Scotland Health Correspondent

More than half of all teenagers may be sleep deprived, according to experts. A combination of natural hormone changes and greater use of screen-based technology means many are not getting enough sleep. Research has suggested teenagers need nine hours' sleep to function properly. "Sleep is fundamentally important but despite this it's been largely ignored as part of our biology," said Russell Foster, Professor of Circadian Neuroscience at Oxford University. "Within the context of teenagers, here we have a classic example where sleep could enhance enormously the quality of life and, indeed, the educational performance of our young people. Yet they're given no instruction about the importance of sleep and sleep is a victim to the many other demands that are being made of them." At One Level Up, an internet cafe and gaming centre in Glasgow, I found a group of young people who are used to very late nights. "There's things called 'grinds' which we have on Saturdays which are an all-nighter until 10 in the morning," said 17-year-old Jack Barclay. "We go home, sleep till 8pm at night and then do the exact same thing again. I like staying up." Fourteen-year-old Rachel admitted occasionally falling asleep in class because she stayed up late at night playing computer games. "If it's a game that will save easily I'll go to bed when my mum says, 'OK you should probably get some rest', but if it's a game where you have to go to a certain point to save I'll be like, 'five more minutes!' and then an hour later 'five more minutes!', and it does mess up your sleeping pattern." BBC © 2013

Keyword: Sleep; Development of the Brain
Link ID: 18566 - Posted: 08.27.2013

By Scicurious

Optogenetics likes to light up debate. Optogenetics is a hot technique in neuroscience research right now, involving taking a light-activated gene (called a channelrhodopsin), targeting it to a single neuron type, and inserting it into the genome of, say, a mouse (yes, we can do this now). When you then shine a light into the mouse’s brain, the channelrhodopsin responds, and the neurons that are now expressing the channelrhodopsin fire. This means that you can get a single type of neuron to fire (or not, there are ones that inhibit firing, too), whenever you want to, merely by turning on a light. I actually remember where I WAS when I first heard of optogenetics. I came into the lab in the morning, was going about my daily business, and hadn’t checked the daily Tables of Contents for journals yet (I get these delivered into my email). I remember the postdoc, normally a pretty phlegmatic person, actually putting a little excitement into their voice, “hey guys, look at this.” The paper was this one. We all crowded around. It took us all a few minutes to “get it”. As it began to sink in, I had two thoughts. The first? “WHOA, THAT IS AWESOME.” The second? “Great, I know what’s going to be the hot stuff now.” There are fashions in science. Not the kind where everyone dyes their lab coat plaid or creates cutoffs out of their Personal Protective Equipment (though that would be hilarious). There are experimental fashions. Lesions were once really “in”. Knockouts were hot stuff in the 90s. fMRI enjoyed (and still does enjoy) its moment in the sun, electrophysiology often adds a little je ne sais quoi to a paper. DREADDs, CLARITY. And when a new thing comes along and is going to be hot? You can sniff it out a mile away. For next year? I’m betting on GEVIs, myself. They’ll be all the rage. © 2013 Scientific American

Keyword: Miscellaneous
Link ID: 18565 - Posted: 08.27.2013

By Laura Sanders Despite the adage, there actually is such a thing as bad publicity, a fact that brain scientists have lately discovered. A couple of high-profile opinion pieces in the New York Times have questioned the usefulness of neuroscience, claiming, as columnist David Brooks did in June, that studying brain activity will never reveal the mind. Or that neuroscience is a pesky distraction from solving real social problems, as scholar Benjamin Fong wrote on August 11. Let’s start with Brooks. Some of his complaints about brain scans, with their colorful blobs lighting up active parts of the brain, are quite legitimate. Functional MRI studies are notoriously difficult to make sense of. In fact, this powerful technology has been used to find brain activity in a dead salmon. Dubious fMRI studies do trickle into the hands of sensationalistic journalists, medical hucksters and marketers, who twist the results into self-serving sound bites. All true. But Brooks’ essay conflates the entire field of neuroscience with some bad seeds. Some studies should never have been done, others mislead people, waste resources and sensationalize their results. But for every one of those studies, countless others tell us something important about how the human brain works. Serious scientists use a huge variety of techniques — yes, even fMRI — responsibly, and interpret their results cautiously. Judging the whole enterprise of neuroscience by its weakest studies is disingenuous. There is bad science, just like there’s bad food, bad music and bad TV. Trashing all brain research because a tiny bit of it stinks is like throwing your new flat screen off a balcony because you accidentally turned on Jersey Shore. © Society for Science & the Public 2000 - 2013

Keyword: Brain imaging; Consciousness
Link ID: 18564 - Posted: 08.27.2013

By Harvey Black The intelligence of the corvid family—a group of birds that includes crows, ravens, magpies, rooks and jackdaws—rivals that of apes and dolphins. Recent studies are revealing impressive details about crows' social reasoning, offering hints about how our own interpersonal intelligence may have evolved. One recent focus has been on how these birds respond to the sight of human faces. For example, crows take to the skies more quickly when an approaching person looks directly at them, as opposed to when an individual nears with an averted gaze, according to a report by biologist Barbara Clucas of Humboldt State University and her colleagues in the April issue of Ethology. The researchers walked toward groups of crows in three locations in the Seattle area, with their eyes either on the birds or on some point in the distance. The crows scattered earlier when the approaching person was looking at them, unlike other animals that avoid people no matter what a person is doing. Clucas speculates that ignoring a human with an averted gaze is a learned adaptation to life in the big city. Indeed, many studies have shown that crows are able to learn safety behaviors from one another. For example, John Marzluff of the University of Washington (who co-authored the aforementioned paper with Clucas) used masked researchers to test the learning abilities of crows. He and his colleagues ventured into Seattle parks wearing one of two kinds of masks. The people wearing one kind of mask trapped birds; the others simply walked by. Five years later the scientists returned to the parks with their masks. The birds present at the original trapping remembered which masks corresponded to capturing—and they passed this information to their young and other crows. All the crows responded to the sight of a researcher wearing a trapping mask by immediately mobbing the individual and shrieking. © 2013 Scientific American

Keyword: Intelligence; Evolution
Link ID: 18563 - Posted: 08.27.2013

By VASILIS K. POZIOS, PRAVEEN R. KAMBAM and H. ERIC BENDER EARLIER this summer the actor Jim Carrey, a star of the new superhero movie “Kick-Ass 2,” tweeted that he was distancing himself from the film because, in the wake of the Sandy Hook massacre, “in all good conscience I cannot support” the movie’s extensive and graphically violent scenes. Mark Millar, a creator of the “Kick-Ass” comic book series and one of the movie’s executive producers, responded that he has “never quite bought the notion that violence in fiction leads to violence in real life any more than Harry Potter casting a spell creates more boy wizards in real life.” While Mr. Carrey’s point of view has its adherents, most people reflexively agree with Mr. Millar. After all, the logic goes, millions of Americans see violent imagery in films and on TV every day, but vanishingly few become killers. But a growing body of research indicates that this reasoning may be off base. Exposure to violent imagery does not preordain violence, but it is a risk factor. We would never say: “I’ve smoked cigarettes for a long time, and I don’t have lung cancer. Therefore there’s no link between smoking cigarettes and lung cancer.” So why use such flawed reasoning when it comes to media violence? There is now consensus that exposure to media violence is linked to actual violent behavior — a link found by many scholars to be on par with the correlation of exposure to secondhand smoke and the risk of lung cancer. In a meta-analysis of 217 studies published between 1957 and 1990, the psychologists George Comstock and Haejung Paik found that the short-term effect of exposure to media violence on actual physical violence against a person was moderate to large in strength. Mr. Comstock and Ms. Paik also conducted a meta-analysis of studies that looked at the correlation between habitual viewing of violent media and aggressive behavior at a point in time. 
They found 200 studies showing a moderate, positive relationship between watching television violence and physical aggression against another person. © 2013 The New York Times Company

Keyword: Aggression; Learning & Memory
Link ID: 18562 - Posted: 08.26.2013

By James Gallagher, Health and science reporter, BBC News

Taking cocaine can change the structure of the brain within hours in what could be the first steps of drug addiction, according to US researchers. Animal tests, reported in the journal Nature Neuroscience, showed new structures linked to learning and memory began to grow soon after the drug was taken. Mice with the most brain changes showed a greater preference for cocaine. Experts described it as the brain "learning addiction". The team at University of California, Berkeley and UC San Francisco looked for tiny protrusions from brain cells called dendritic spines. They are heavily implicated in memory formation. The place or environment in which drugs are taken plays an important role in addiction. In the experiments, the mice were allowed to explore freely two very different chambers - each with a different smell and surface texture. Once they had picked a favourite they were injected with cocaine in the other chamber. A type of laser microscopy was used to look inside the brains of living mice to hunt for the dendritic spines. More new spines were produced when the mice were injected with cocaine than with water, suggesting new memories being formed around drug use. The difference could be detected two hours after the first dose. BBC © 2013

Keyword: Drug Abuse
Link ID: 18561 - Posted: 08.26.2013

By George Will, Washington Post

PRINCETON, N.J. — Fifty years from now, when Malia and Sasha are grandmothers, their father’s presidency might seem most consequential because of a small sum — $100 million — for studying something small. “As humans,” Barack Obama said when announcing the initiative to study the brain, “we can identify galaxies light-years away ... but we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears.” Actually, understanding the brain will be a resounding success without unlocking the essential mystery, which is: How does matter become conscious of itself? Or should we say, how does it become — or acquire — consciousness? Just trying to describe this subject takes scientists onto intellectual terrain long occupied by philosophers. Those whose field is the philosophy of mind will learn from scientists such as Princeton’s David Tank, a leader of the BRAIN Initiative, which aims at understanding how brain regions and cells work together, moment to moment, throughout our lives. If, as is said, a physicist is an atom’s way of knowing about atoms, then a neuroscientist like Tank is a brain cell’s way of knowing about brain cells. Each of us has about 100 billion of those, each of which communicates with an average of 10,000 other nerve cells. The goal of neuroscientists is to discover how these neural conversations give rise to a thought, a memory or a decision. And to understand how the brain functions, from which we may understand disorders such as autism, schizophrenia and epilepsy. © 2013 Forum Communications Co.

Keyword: Brain imaging
Link ID: 18560 - Posted: 08.26.2013

By Brian Mossop

A fine line separates creativity and madness. Bipolar disorder teeters along that line, with patients experiencing moments of impulsive thought, which can yield bold insights or quickly descend into confusion or rage. In her new book, Haldol and Hyacinths, Iranian-American author and activist Melody Moezzi presents a captivating autobiographical account of her struggle with bipolar disorder. Using a series of vignettes, she reconstructs her downward spiral into psychosis, which eventually led to a suicide attempt and multiple stays in mental health facilities. From seemingly innocuous bouts of insomnia to full-blown hallucinations, Moezzi describes how she descended into madness. Moezzi's medical issues first emerged in her sophomore year of college, when she began to experience severe abdominal pain, later diagnosed as pancreatitis. Doctors decided to remove her pancreas to save her life and prevent a cyst from festering. Everyone she knew rallied alongside her during this time. Things were much different when Moezzi's bipolar disorder took hold in the years following her physical illness. She soon discovered that mental illness has no heroes, no celebrity spokesperson, no champions. Relying solely on the support of her immediate family and a devoted husband, Moezzi saw that the disorder carries a stigma, exacerbated by inaccurate media portrayals. Even worse is the plight of patients in places such as Moezzi's homeland of Iran, where mental illness is simply ignored. Despite bipolar disorder being the sixth leading cause of disability in the world, there is not even a word for the disease in Farsi. © 2013 Scientific American

Keyword: Schizophrenia
Link ID: 18559 - Posted: 08.26.2013

By Susan Gaidos If you’re someone who enjoys being recognized, Julian Lim is your kind of waiter. Lim, who’s working his way through college waiting tables, remembers the face of everyone that walks through the door of the South Bend, Ind., restaurant where he works. His abilities go beyond making his customers feel special. This spring, when he cut his hand on broken glass, he pegged the emergency room nurse as a fellow student from his grade school days. Though they’d never spoken, and the girl had since undergone changes in appearance, Lim recognized her instantly. Carrie Shanafelt is good with faces, too. A professor of literature at Grinnell College in Iowa, Shanafelt can spot her students outside the classroom, whether it’s the first week of class or years later. And Ajay Jansari, an information technology specialist in London, often has to see a face only once to remember it, even those he meets thousands of miles from home. While some people say they never forget a face, these folks have scientific studies to back their claims. Called “super recognizers,” they’re among a small group of individuals being studied by scientists at Dartmouth College and in England to better understand how some people can recognize almost every face they have ever seen. Scientists are now putting super recognizers’ skills to the test to get a handle on how face-processing areas of the brain work to make a few people so adept at recalling faces. Findings from the studies may advance understanding of how most people categorize faces — a subject that is still poorly understood. © Society for Science & the Public 2000 - 2013

Keyword: Attention
Link ID: 18558 - Posted: 08.24.2013

By GRETCHEN REYNOLDS Our genes may have a more elevated moral sense than our minds do, according to a new study of the genetic effects of happiness. They can, it seems, reward us with healthy gene activity when we’re unselfish — and chastise us, at a microscopic level, when we put our own needs and desires first. To reach that slightly unsettling conclusion, researchers from the University of North Carolina and the University of California, Los Angeles, had 80 healthy volunteers complete an online questionnaire that asked why they felt satisfied with their lives. Then the researchers drew their blood and analyzed their white blood cells. Scientists have long surmised that moods affect health. But the underlying cellular mechanisms were murky until they began looking at gene-expression profiles inside white blood cells. Gene expression is the complex process by which genes direct the production of proteins. These proteins jump-start other processes, which in the case of white blood cells control much of the body’s immune response. It turned out that different forms of happiness were associated with quite different gene-expression profiles. Specifically, those volunteers whose happiness, according to their questionnaires, was primarily hedonic, to use the scientific term, or based on consuming things, had surprisingly unhealthy profiles, with relatively high levels of biological markers known to promote increased inflammation throughout the body. Such inflammation has been linked to the development of cancer, diabetes and cardiovascular disease. They also had relatively low levels of other markers that increase antibody production, to better fight off infections. Copyright 2013 The New York Times Company

Keyword: Emotions; Genes & Behavior
Link ID: 18557 - Posted: 08.24.2013

Virginia Morell

A wolf’s howl is one of the most iconic sounds of nature, yet biologists aren’t sure why the animals do it. They’re not even sure if wolves howl voluntarily or if it’s some sort of reflex, perhaps caused by stress. Now, scientists working with captive North American timber wolves in Austria report that they’ve solved part of the mystery. Almost 50 years ago, wildlife biologists suggested that a wolf’s howls were a way of reestablishing contact with other pack members after the animals became separated, which often happens during hunts. Yet, observers of captive wolves have also noted that the pattern of howls differs depending on the size of the pack and whether the dominant, breeding wolf is present, suggesting that the canids’ calls are not necessarily automatic responses. Friederike Range, a cognitive ethologist at the University of Veterinary Medicine in Vienna, was in a unique position to explore the conundrum. Since 2008, she and her colleagues have hand-raised nine wolves at the Wolf Science Center in Ernstbrunn, which she co-directs. “We started taking our wolves for walks when they were 6 weeks old, and as soon as we took one out, the others would start to howl,” she says. “So immediately we became interested in why they howl.” Although the center’s wolves don’t hunt, they do howl differently in different situations, Range says. “So we also wanted to understand these variations in their howling.” © 2012 American Association for the Advancement of Science.

Keyword: Animal Communication; Language
Link ID: 18556 - Posted: 08.24.2013

By CARL ZIMMER Evolutionary biologists have come to recognize humans as a tremendous evolutionary force. In hospitals, we drive the evolution of resistant bacteria by giving patients antibiotics. In the oceans, we drive the evolution of small-bodied fish by catching the big ones. In a new study, a University of Minnesota biologist, Emilie C. Snell-Rood, offers evidence suggesting we may be driving evolution in a more surprising way. As we alter the places where animals live, we may be fueling the evolution of bigger brains. Dr. Snell-Rood bases her conclusion on a collection of mammal skulls kept at the Bell Museum of Natural History at the University of Minnesota. Dr. Snell-Rood picked out 10 species to study, including mice, shrews, bats and gophers. She selected dozens of individual skulls that were collected as far back as a century ago. An undergraduate student named Naomi Wick measured the dimensions of the skulls, making it possible to estimate the size of their brains. Two important results emerged from their research. In two species — the white-footed mouse and the meadow vole — the brains of animals from cities or suburbs were about 6 percent bigger than the brains of animals collected from farms or other rural areas. Dr. Snell-Rood concludes that when these species moved to cities and towns, their brains became significantly bigger. Dr. Snell-Rood and Ms. Wick also found that in rural parts of Minnesota, two species of shrews and two species of bats experienced an increase in brain size as well. Dr. Snell-Rood proposes that the brains of all six species have gotten bigger because humans have radically changed Minnesota. Where there were once pristine forests and prairies, there are now cities and farms. In this disrupted environment, animals that were better at learning new things were more likely to survive and have offspring. © 2013 The New York Times Company

Keyword: Evolution; Brain imaging
Link ID: 18555 - Posted: 08.24.2013

JoNel Aleccia, TODAY

When doctors told Pete and Michelle Gallagher that they wanted to remove half of their 3-year-old son’s brain, the Attica, Ohio, parents were horrified. But a new study shows the extreme procedure may offer some kids their best shot at a normal life. “We panicked,” said Pete Gallagher, recalling their reaction seven years ago. The couple also knew that the dramatic surgery known as a hemispherectomy might be the only workable option to stop the severe seizures, more than a dozen a day, that were robbing Aiden of his ability to function – and to learn. “He had forgotten his alphabet. He had forgotten how to count. It was all slipping,” the father said. Today, Aiden is a healthy, red-haired fifth-grader who goes to regular school and loves to play baseball and basketball. He hasn’t had a seizure since the rare operation, making the boy a poster child for new research that finds the procedure offers real-world success for children suffering from devastating epilepsy. “The brain has an amazing capacity to work around the function that it has lost,” said Dr. Ajay Gupta, head of pediatric epilepsy at the Cleveland Clinic. In the first large-scale study to look at the everyday capabilities of kids who undergo hemispherectomy, Gupta and his colleagues reviewed 186 operations performed at their center between 1997 and 2009 and took a close look at 115 patients. They confirmed what doctors knew, but had little practical data to support: That removing the diseased hemisphere of a seizure-prone brain allows sufferers to learn and grow and, in some cases, lead normal lives.

Keyword: Epilepsy; Development of the Brain
Link ID: 18554 - Posted: 08.24.2013

By Melinda Wenner Moyer

Few phenomena have created as divisive a rift recently among health professionals as the so-called “obesity paradox,” the repeated finding that obese people with certain health conditions live longer than slender people with the same ailments. And when a January meta-analysis involving nearly three million research subjects suggested that overweight people in the general population also live longer than their slimmer counterparts, the head of Harvard University’s nutrition department, Walter Willett, called the work “a pile of rubbish.” A few new studies suggest that these paradoxes may largely be artifacts of flawed research designs, but some experts disagree, accusing the new studies of being inaccurate. Among the biggest questions raised by this new research is the impact of age: whether obesity becomes more or less deadly as people get older and why. The January meta-analysis, led by U.S. Centers for Disease Control and Prevention senior scientist Katherine Flegal, pooled data from 97 studies of the general global population and reported that, in sum, overweight individuals—those with a body mass index of 25 to 29.9—were 6 percent less likely to die over various short time periods than people of normal weight (with a BMI of 18.5 to 24.9) were. For people over the age of 65, however, being overweight conferred a 10 percent survival advantage. Flegal’s findings also suggest that obesity, which has always been considered a major health risk, is not always dangerous and that it becomes less so with age: Adults with grade 1 obesity (BMIs of 30 to 34.9), she found, were no more likely to die than were normal weight adults; for grade 2 obesity (BMI of 35 to 39.9), the increased death risk for adults of all ages was 29 percent, but restricting the analysis to adults over the age of 65, the increased death risk associated with grade 2 obesity was not statistically significant.
The older a person is, the analysis seemed to say, the safer extra pounds become. © 2013 Scientific American
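For readers keeping track of the categories the passage leans on, the BMI bands can be written out as a small lookup. This is an illustrative sketch of the standard cut-points named in the text; the underweight and "grade 3" (BMI 40 and above) bands follow the conventional classification rather than anything the article discusses.

```python
def bmi_category(bmi: float) -> str:
    """Map a body mass index to the categories used in the passage.
    Cut-points per the text: normal 18.5-24.9, overweight 25-29.9,
    grade 1 obesity 30-34.9, grade 2 obesity 35-39.9."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "grade 1 obesity"
    if bmi < 40.0:
        return "grade 2 obesity"
    return "grade 3 obesity"
```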

Keyword: Obesity
Link ID: 18553 - Posted: 08.24.2013

I HAVE been struggling with an addiction to opiates for the past three years. It started with prescription painkillers and progressed to full-blown heroin dependence. In an attempt to kick the habit I signed up for a traditional 30-step inpatient treatment that involved individual and group counselling, and which cost about $30,000. That was a year ago, and it didn't work. I felt unable to stay away from heroin. Now I am at a small clinic in Baja California, Mexico, where I am taking part in the first trial to investigate the effectiveness of treating heroin addiction with a single dose of ibogaine – a psychoactive substance derived from the rainforest shrub Tabernanthe iboga. "Ibogaine can take you many places, causing you to experience a range of emotions, memories and visions. If any of these images become too frightening, just open your eyes." I am reassured by the words of the director of the clinic, Jeff Israel, but the drug's history is not all rosy. Several clinical trials have shown that low doses of ibogaine taken over the course of a few weeks can greatly reduce cravings for heroin and other drugs. There was extensive research on it in the 1990s, with good evidence of safety in animals and a handful of studies in humans. The US National Institute on Drug Abuse invested over $1 million, but then abandoned the project in 1995. A study had shown that at high doses, ibogaine caused some brain cell degeneration in rats. Lower doses similar to those used in human addiction trials showed no such effect, however. © Copyright Reed Business Information Ltd.

Keyword: Drug Abuse
Link ID: 18552 - Posted: 08.24.2013