Most Recent Links



Links 11061 - 11080 of 29390

By ANDREW HIGGINS BRUSSELS — Facing a decision on whether to impose tight restrictions on a booming market for electronic cigarettes, members of the European Parliament received a pleading letter in September that was signed by thousands of former smokers worried that “the positive story of e-cigarettes may be about to come to an abrupt halt.” The signatures had been collected via a website, saveecigs.com, which proclaimed itself the voice of the “forgotten millions in this debate” — people who had taken up e-cigarettes to stop smoking, and their grateful families. The website, however, was not quite the grass-roots effort it claimed to be. The text of the letter it asked people to sign was drafted by a London lobbyist hired by Totally Wicked, an e-cigarette company. The website had been set up by a British woman living in Iceland who had previously worked for the owners of Totally Wicked. As the headquarters of the European Union, Brussels sets regulatory standards that resonate around the world. It rivals Washington as a focus for corporate lobbying, with an estimated 30,000 professional lobbyists working for registered lobbying firms and thousands more who operate beneath the radar. In this case, a determined lobbying campaign, marrying corporate interests in a fledgling but fast-growing industry with voices elicited from the general public, was aimed at a compelling public health issue: whether e-cigarettes, which deliver nicotine without burning tobacco, should be regulated as medicinal products, just as nicotine patches are. The stakes were substantial. Although e-cigarettes have not been linked to any serious health issues, they have been in widespread use for such a short time that researchers have no basis yet for determining if there are long-term risks. © 2013 The New York Times Company

Keyword: Drug Abuse
Link ID: 18903 - Posted: 11.10.2013

by Guest Writer, Emilie Reas! My first trip to a haunted house is as vivid today as when I was 5 years old. As I made my way past a taunting witch and a rattling skeleton, my eyes fell upon a blood-soaked zombie. My heart raced, my throat swelled, and the tears began to flow. Even now, as a mature (ahem) adult, the ghosts and goblins don’t faze me. But those vacant zombie eyes and pale skin? Oh, the horror! My rational brain knows how irrational my fear is, yet still I shudder, gripped by the same terror that first overwhelmed me decades ago. Unsettling experiences occur daily that we easily brush off – a creepy movie, a turbulent plane ride, or a nip at your ankle by the neighbor’s dog. But occasionally, the fear sticks, establishing a permanent memory that can haunt us for years. At their mildest, such fear memories cause discomfort or embarrassment, but at their worst, they can be downright debilitating. Do spiders make you scream? Are you unable to speak in public without a trembling voice and hands? Maybe you suffered a traumatic accident that’s made you terrified to get back behind the wheel of a car. We’ve all experienced the disruptive effects of a fearful experience we just can’t shake. Yet scientists don’t fully understand why some traumatic events are fleeting, while others are stored as lasting memories. Past research has shown that particular areas of the brain, such as the anterior cingulate and insula, are active during fearful experiences, but also during many other situations, including while monitoring surroundings and emotions or paying attention to important information. Other regions, including the amygdala, hippocampus and prefrontal cortex, are more specialized to support memory for emotional experiences, as they play important roles in emotional processing, memory and attention. 
While it’s clear that establishing fear memories relies on cross-talk between these regions, it’s not known how they solidify fears into memory and determine which particular ones will endure for the long-term. © Society for Science & the Public 2000 - 2013

Keyword: Learning & Memory; Stress
Link ID: 18902 - Posted: 11.09.2013

By JACKIE CALMES and ROBERT PEAR WASHINGTON — The Obama administration on Friday will complete a generation-long effort to require insurers to cover care for mental health and addiction just like physical illnesses when it issues long-awaited regulations defining parity in benefits and treatment. The rules, which will apply to almost all forms of insurance, will have far-reaching consequences for many Americans. In the White House, the regulations are also seen as critical to President Obama’s program for curbing gun violence by addressing an issue on which there is bipartisan agreement: Making treatment more available to those with mental illness could reduce killings, including mass murders. In issuing the regulations, senior officials said, the administration will have acted on all 23 executive actions that the president and Vice President Joseph R. Biden Jr. announced early this year to reduce gun crimes after the Newtown, Conn., school massacre. In planning those actions, the administration anticipated that gun control legislation would fail in Congress as pressure from the gun lobby proved longer-lasting than the national trauma over the killings of first graders and their caretakers last Dec. 14. “We feel actually like we’ve made a lot of progress on mental health as a result in this year, and this is kind of the big one,” said a senior administration official, one of several who described the outlines of the regulations that Kathleen Sebelius, the secretary of health and human services, will announce at a mental health conference on Friday in Atlanta with the former first lady Rosalynn Carter. While laws and regulations dating to 1996 took initial steps in requiring insurance parity for medical and mental health, “here we’re doing full parity, and we’ve also taken steps to extend it to the people covered in the Affordable Care Act,” the senior official said. “This is kind of the final word on parity.” © 2013 The New York Times Company

Keyword: Depression; Drug Abuse
Link ID: 18901 - Posted: 11.09.2013

by Sarah Zielinski If you put two birds together and gave them a problem, would they be any better at solving it than if they were alone? A study in Animal Behaviour of common mynas finds that not only are they no better at problem solving when in a pair than when on their own, the birds actually get a lot worse when put in a group. Andrea S. Griffin and her research team from the University of Newcastle in Callaghan, Australia, began by using dog food pellets as bait to capture common mynas (a.k.a. the Indian mynah, Acridotheres tristis) from around Newcastle. Then they gave each of the birds an innovation test, consisting of a box containing a couple of drawers and some Petri dishes. To get to the food hidden in spots in the box, the birds would have to get creative and figure out how to open one of the four containers by doing things like levering up a lid or pushing open a drawer. The scientists then ranked the birds by innovative ability before pairing them up. Half the pairs consisted of a high-innovation and a low-innovation myna, and the other half were pairs of medium-innovation birds. Then the pairs each received an innovation test similar to the one with boxes. Another experiment tested the birds in same-sex groups of five. On their own, 29 of 34 birds were able to access at least one container. But in pairs, only 15 of the 34 birds did so, and they took a lot longer. Performance dropped for both high- and medium-innovation birds, and it didn’t improve for the low-ranked ones, which had done so poorly the first time around that their results couldn’t get any worse. In groups of five, birds’ results fell even further: No mynas solved any of those tasks. © Society for Science & the Public 2000 - 2013

Keyword: Intelligence; Learning & Memory
Link ID: 18900 - Posted: 11.09.2013

Where, exactly, does the sand flea have sex? On the dusty ground, where it spends the first half of its life? Or already nestled snugly in its host—such as in a human foot—where it can suck the blood it needs to nourish its eggs? The answer to this question, which has long puzzled entomologists and tropical health experts, seems to be the latter. A new study, in which a researcher let a sand flea grow inside her skin, concludes that the parasites most likely copulate when the females are already inside their hosts. Tunga penetrans, also known as the chigger flea, sand flea, chigoe, jigger, nigua, pique, or bicho de pé, is widespread in the Caribbean, South America, and sub-Saharan Africa. The immature female burrows permanently into the skin of a warm-blooded host—it also attacks dogs, rats, cattle, and other mammals—where over 2 weeks it swells up to many times its original size, reaching a diameter of up to 10 mm. Through a small opening at the end of its abdominal cone, the insect breathes, defecates, and expels eggs. The female usually dies after 4 to 6 weeks, still embedded in the skin. Native to the Caribbean, sand fleas infected crewmen sailing with Columbus on the Santa Maria after they were shipwrecked on Haiti. They and others brought the parasite back to the Old World, where it eventually became endemic across sub-Saharan Africa. Even today it is an occasional stowaway, showing up in European and North American travel clinics in the feet of tourists who have gone barefoot on tropical beaches. For people living in infested regions, however, the flea is a serious public health issue. What starts as a pale circle in the skin turns red and then black, becoming painful, itchy—and often infected, a condition called tungiasis. One flea seems to attract others, and people can be infested with dozens at once. © 2013 American Association for the Advancement of Science

Keyword: Sexual Behavior; Evolution
Link ID: 18899 - Posted: 11.09.2013

by Laura Sanders Neonatal intensive care units are crammed full of life-saving equipment and people. The technology that fills these bustling hubs is responsible for saving the lives of fragile young babies. That technology is also responsible for quite a bit of noise. In the NICU, monitors beep, incubators whir and nurses, doctors and family members talk. This racket isn’t just annoying: NICU noise often exceeds acceptable levels set by the American Academy of Pediatrics, a 2009 analysis found. To dampen the din, many hospitals are shifting away from open wards to private rooms for preemies. Sounds like a no-brainer, right? Fragile babies get their own sanctuaries where they can recover and grow in peace. But in a surprising twist, a new study finds that this peace and quiet may actually be bad for some babies. Well aware of the noise problem in the NICU ward, Roberta Pineda of Washington University School of Medicine in St. Louis and colleagues went into their study of 136 preterm babies expecting to see benefits in babies who stayed in private rooms. Instead, the researchers found the exact opposite. By the time they left the hospital, babies who stayed in private rooms had less mature brains than those who stayed in an open ward. And two years later, babies who had stayed in private rooms performed worse on language tests. The results were not what the team expected. “It was extremely surprising,” Pineda told me. The researchers believe that the noise abatement effort made things too quiet for these babies. As distressing data from Romanian orphanages highlights, babies need stimulation to thrive. Children who grew up essentially staring at white walls with little contact from caregivers develop serious brain and behavioral problems, heartbreaking results from the Bucharest Early Intervention Project show. Hearing language early in life, even before birth, might be a crucial step in learning to talk later. 
And babies tucked away in private rooms might be missing out on some good stimulation. © Society for Science & the Public 2000 - 2013

Keyword: Hearing; Development of the Brain
Link ID: 18898 - Posted: 11.09.2013

Stanley Rachman. “Will these hands ne'er be clean?” In Shakespeare's play Macbeth, Lady Macbeth helps to plot the brutal murder of King Duncan. Afterwards she feels tainted by Duncan's blood and insists that “all the perfumes of Arabia” could not sweeten her polluted hands. Baffled by her compulsive washing, her doctor is forced to admit: “This disease is beyond my practise.” In the 400 years since Macbeth was first performed, other doctors, psychiatrists, neuroscientists and clinical psychologists — myself included — have also found the problem beyond the reach of their own expertise. We see compulsive washing a lot, mostly as a symptom of obsessive–compulsive disorder (OCD), but also in people who have suffered a physical or emotional trauma, for example in women who have suffered sexual assault. The events trigger a deep-seated psychological, and ultimately biological, response. We know that the driving force of compulsive washing is a fear of contamination by dirt and germs. An obsessive fear of contact with sexual fluids, for example, can drive compulsive washing in OCD and force people to restrict sexual activity to a specific room in the house. Compulsive washing fails to relieve the anxiety. Most patients with OCD continue to feel contaminated despite vigorous attempts to clean themselves. Why does repeated washing fail? There is much debate at present about the direction that psychiatric medicine and research should take. We should not underestimate what we can continue to learn from the careful observation of patients. Such observations have led my colleagues and me to diagnose a new cause of OCD and other types of compulsive washing: mental contamination. © 2013 Nature Publishing Group

Keyword: OCD - Obsessive Compulsive Disorder; Emotions
Link ID: 18897 - Posted: 11.08.2013

Roughly a year ago, I found myself at an elegant dinner party filled with celebrities and the very wealthy. I am a young professor at a major research university, and my wife and I were invited to mingle and chat with donors to the institution. To any outside observer, my career was ascendant. Having worked intensely and passionately at science for my entire adult life, I had secured my dream job directing an independent neuroscience research laboratory. I was talking to a businessman who had family members affected by a serious medical condition. He turned to me and said: “You're a neuroscientist. What do you know about Parkinson's disease?” My gaze darted to catch the eyes of my wife, but she was involved in another conversation. I was on my own, and I paused to gather my thoughts before responding. Because I had a secret. It was a secret that I hadn't yet told any of my colleagues: I have Parkinson's. I am still at the beginning of my fascinating, frightening and ultimately life-affirming journey as a brain scientist with a disabling disease of the brain. Already it has given me a new perspective on my work, it has made me appreciate life and it has allowed me to see myself as someone who can make a difference in ways that I never expected. But it took a bit of time to get here. The first signs: I remember the first time I noticed that something was wrong. Four years ago, I was filling out a mountain of order forms for new lab equipment. After a few pages, my hand became a quaking lump of flesh and bone, locked uselessly in a tense rigor. A few days later, I noticed my walk was changing: rather than swinging my arm at my side, I held it in front of me rigidly, even grabbing the bottom edge of my shirt. I also had an occasional twitch in the last two fingers of my hand. © 2013 Nature Publishing Group

Keyword: Parkinsons
Link ID: 18896 - Posted: 11.08.2013

Kenneth S. Kosik Twenty years of research and more than US$1-billion worth of clinical trials have failed to yield an effective drug treatment for Alzheimer's disease. Most neuroscientists, clinicians and drug developers now agree that people at risk of the condition will probably need to receive medication before the onset of any cognitive symptoms. Yet a major stumbling block for early intervention is the absence of tools that can reveal the first expression of the insidious disease. So far, researchers have tended to focus on macroscopic changes associated with the disease, such as the build-up of insoluble plaques of protein in certain areas of the brain, or on individual genes or molecular pathways that seem to be involved in disease progression. I contend that detecting the first disruptions to brain circuitry, and tracking the anatomical and physiological damage underlying the steady cognitive decline that is symptomatic of Alzheimer's, will require tools that operate at the 'mesoscopic' scale: techniques that probe the activity of thousands or millions of networked neurons. Although such tools are yet to be realized, several existing technologies indicate that they are within reach. Charted territory: All the current approaches that are used to diagnose Alzheimer's are crude and unreliable. Take the classic biomarkers of the disease: a build-up of plaques of the protein β-amyloid in a person's cerebral cortex, for instance, or elevated levels of the tau protein and dampened levels of β-amyloid in their cerebrospinal fluid. Although such markers are predictive of the disease, the interval between their appearance and the onset of cognitive problems is hugely variable, ranging from months to decades. © 2013 Nature Publishing Group

Keyword: Alzheimers; Brain imaging
Link ID: 18895 - Posted: 11.08.2013

Virginia Gewin Corey White felt pretty fortunate during his job search late last year. Over the course of 4 months, he found at least 25 posts to apply for — even after he had filtered the possibilities to places where his wife also had job prospects. Competition for the jobs was, as he expected, fierce, but he secured three interviews. In the end, he says, it was his skills in functional magnetic resonance imaging (fMRI) that helped him to clinch a post at Syracuse University in New York, where they were eager to elevate their neuroscience profile. The human brain is something of an enigma. Much is known about its physical structure, but quite how it manages to marshal its myriad components into a powerhouse capable of performing so many different tasks remains a mystery. Neuroimaging offers one way to help find out, and universities and government initiatives are betting on it. Already, an increasing number of universities across the United States and Europe are buying scanners dedicated to neuroimaging — a clear signal that the area is set for growth. “Institutions feel an imperative to develop an imaging programme because everybody's got to have one to be competitive,” says Mark Cohen, an imaging pioneer at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles. At the same time, a slew of major projects focusing on various aspects of the brain is seeking to paint the most comprehensive picture yet of the organ's organizing principles — from genes to high-level cognition. As a result, young scientists with computational expertise, a fluency in multiple imaging techniques and a willingness to engage in interdisciplinary collaborations could readily carve out a career in this dynamic landscape. © 2013 Nature Publishing Group

Keyword: Brain imaging
Link ID: 18894 - Posted: 11.08.2013

Helen Shen A mixture of excitement, hope and anxiety made for an electric atmosphere in the crowded hotel ballroom. On a Monday morning in early May, neuroscientists, physicists and engineers packed the room in Arlington, Virginia, to its 150-person capacity, while hundreds more followed by webcast. Only a month earlier, US President Barack Obama had unveiled the neuroscience equivalent of a Moon shot: a far-reaching programme that could rival Europe's 10-year, €1-billion (US$1.3-billion) Human Brain Project (see page 5). The US Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative would develop a host of tools to study brain activity, the president promised, and lead to huge breakthroughs in understanding the mind. But Obama's vague announcement on 2 April had left out key details, such as what the initiative's specific goals would be and how it would be implemented. So at their first opportunity — a workshop convened on 6 May by the National Science Foundation (NSF) and the Kavli Foundation of Oxnard, California — researchers from across the neuroscience spectrum swarmed to fill in the blanks and advocate for their favourite causes. The result was chaotic, acknowledges Van Wedeen, a neurobiologist at Harvard Medical School in Boston, Massachusetts, and one of the workshop's organizers. Everyone was afraid of being left out of 'the next big thing' in neuroscience — even though no one knew exactly what that might be. “The belief is we're ready for a leap forward,” says Wedeen. “Which leap, and in which direction, is still being debated.” © 2013 Nature Publishing Group

Keyword: Brain imaging; Development of the Brain
Link ID: 18893 - Posted: 11.08.2013

From supercomputing to imaging, technologies have developed far enough that it is now possible for us to imagine a day when we will understand the murky workings of our most complex organ: the brain. True, that day remains distant, but scientists are no longer considered crazy if they report a glimpse of it on the horizon. This turning point has been marked by the independent launches this year of two major brain projects: US President Barack Obama’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Commission’s Human Brain Project. Even if they fail to achieve the ambitions the research community sets for them, they are signals of a new confidence. Right now, the two projects are not equal. The BRAIN Initiative is in an early phase of development, and has so far been promised little new money. The impetus behind it was a brash proposal by a group of neuroscientists for a billion-dollar project to measure the activity of every neuron in the human brain. That ambition was lost on the starting block when peers, justifiably, deemed it scientifically inappropriate — but it is yet to be replaced by a single goal of equivalently Apollo-programme proportions (see page 26). This may make it hard to maintain the political support large projects always need. Conversely, the Human Brain Project — headquartered in Switzerland, where it will soon relocate from Lausanne to its new base in Geneva — has 135 partner institutes and is blessed with a plenitude of money and planning. And it has a romantic Moon-landing-level goal: to simulate the human brain in a computer within ten years, and provide it to scientists as a research resource. Programme leaders have committed €72 million (US$97 million) to the 30-month ramp-up stage; those monies started to flow into labs after the project’s launch last month. The project has a detailed ten-year road map, laden with explicit milestones. © 2013 Nature Publishing Group

Keyword: Brain imaging
Link ID: 18892 - Posted: 11.08.2013

M. Mitchell Waldrop Kwabena Boahen got his first computer in 1982, when he was a teenager living in Accra. “It was a really cool device,” he recalls. He just had to connect up a cassette player for storage and a television set for a monitor, and he could start writing programs. But Boahen wasn't so impressed when he found out how the guts of his computer worked. “I learned how the central processing unit is constantly shuffling data back and forth. And I thought to myself, 'Man! It really has to work like crazy!'” He instinctively felt that computers needed a little more 'Africa' in their design, “something more distributed, more fluid and less rigid”. Today, as a bioengineer at Stanford University in California, Boahen is among a small band of researchers trying to create this kind of computing by reverse-engineering the brain. The brain is remarkably energy efficient and can carry out computations that challenge the world's largest supercomputers, even though it relies on decidedly imperfect components: neurons that are a slow, variable, organic mess. Comprehending language, conducting abstract reasoning, controlling movement — the brain does all this and more in a package that is smaller than a shoebox, consumes less power than a household light bulb, and contains nothing remotely like a central processor. To achieve similar feats in silicon, researchers are building systems of non-digital chips that function as much as possible like networks of real neurons. Just a few years ago, Boahen completed a device called Neurogrid that emulates a million neurons — about as many as there are in a honeybee's brain. And now, after a quarter-century of development, applications for 'neuromorphic technology' are finally in sight. © 2013 Nature Publishing Group

Keyword: Robotics; Learning & Memory
Link ID: 18891 - Posted: 11.08.2013

Ewen Callaway Children with autism make less eye contact than others of the same age, an indicator that is used to diagnose the developmental disorder after the age of two years. But a paper published today in Nature reports that infants as young as two months can display signs of this condition, the earliest detection of autism symptoms yet. If the small study can be replicated in a larger population, it might provide a way of diagnosing autism in infants so that therapies can begin early, says Warren Jones, research director at the Marcus Autism Center in Atlanta, Georgia. Jones and colleague Ami Klin studied 110 infants from birth — 59 of whom had an increased risk of being diagnosed with autism because they had a sibling with the disorder, and 51 of whom were at lower risk. One in every 88 children has an autism spectrum disorder (ASD), according to the most recent survey by the US Centers for Disease Control and Prevention in Atlanta. At ten regular intervals over the course of two years, the researchers in the new study showed infants video images of their carers and used eye-tracking equipment and software to track where the babies gazed. “Babies come into the world with a lot of predispositions towards making eye contact,” says Jones. “Young babies look more at the eyes than at any part of the face, and they look more at the face than at any part of the body.” Twelve children from the high-risk group were diagnosed with an ASD — all but two of them boys — and one male from the low-risk group was similarly diagnosed. Between two and six months of age, these children tended to look at eyes less and less over time. However, when the study began, these infants tended to gaze at eyes just as often as children who would not later develop autism. © 2013 Nature Publishing Group

Keyword: Autism; Vision
Link ID: 18890 - Posted: 11.07.2013

By PAM BELLUCK In a study published Wednesday, researchers using eye-tracking technology found that children who were found to have autism at age 3 looked less at people’s eyes when they were babies than children who did not develop autism. But contrary to what the researchers expected, the difference was not apparent at birth. It emerged in the next few months and autism experts said that might suggest a window during which the progression toward autism can be halted or slowed. The study, published online in the journal Nature, found that infants who later developed autism began spending less time looking at people’s eyes between 2 and 6 months of age and paid less attention to eyes as they grew older. By contrast, babies who did not develop autism looked increasingly at people’s eyes until about 9 months old, and then kept their attention to eyes fairly constant into toddlerhood. “This paper is a major leap forward,” said Dr. Lonnie Zwaigenbaum, a pediatrician and autism researcher at the University of Alberta, who was not involved in the study. “Documenting that there’s a developmental difference between 2 and 6 months is a major, major finding.” The authors, Warren R. Jones and Ami Klin, both of the Marcus Autism Center and Emory University, also found that babies who showed the steepest decline in looking at people’s eyes over time developed the most severe autism. “Kids whose eye fixation falls off most rapidly are the ones who later on are the most socially disabled and show the most symptoms,” said Dr. Jones, director of research at the autism center. “These are the earliest known signs of social disability, and they are associated with outcome and with symptom severity. Our ultimate goal is to translate this discovery into a tool for early identification” of children with autism. Copyright 2013 The New York Times Company

Keyword: Autism
Link ID: 18889 - Posted: 11.07.2013

Most of us don’t think twice when we extend our arms to hug a friend or push a shopping cart—our limbs work together seamlessly to follow our mental commands. For researchers designing brain-controlled prosthetic limbs for people, however, this coordinated arm movement is a daunting technical challenge. A new study showing that monkeys can move two virtual limbs with only their brain activity is a major step toward achieving that goal, scientists say. The brain controls movement by sending electrical signals to our muscles through nerve cells. When limb-connecting nerve cells are damaged or a limb is amputated, the brain is still able to produce those motion-inducing signals, but the limb can't receive them or simply doesn’t exist. In recent years, scientists have worked to create devices called brain-machine interfaces (BMIs) that can pick up these interrupted electrical signals and control the movements of a computer cursor or a real or virtual prosthetic. So far, the success of BMIs in humans has been largely limited to moving single body parts, such as a hand or an arm. Last year, for example, a woman paralyzed from the neck down for 10 years commanded a robotic arm to pick up and lift a piece of chocolate to her mouth just by thinking about it. But, "no device will ever work for people unless it restores bimanual behaviors,” says neuroscientist Miguel Nicolelis at Duke University in Durham, North Carolina, senior author of the paper. "You need to use both arms and hands for the simplest tasks.” In 2011, Nicolelis made waves by announcing on The Daily Show that he is developing a robotic, thought-controlled "exoskeleton" that will allow paralyzed people to walk again. Further raising the stakes, he pledged that the robotic body suit will enable a paralyzed person to kick a soccer ball during the opening ceremony of the 2014 Brazil World Cup. (Nicolelis is Brazilian and his research is partly funded by the nation’s government.) 
© 2013 American Association for the Advancement of Science

Keyword: Robotics
Link ID: 18888 - Posted: 11.07.2013

Learning a musical instrument as a child gives the brain a boost that lasts long into adult life, say scientists. Adults who used to play an instrument, even if they have not done so in decades, have a faster brain response to speech sounds, research suggests. The more years of practice during childhood, the faster the brain response was, the small study found. The Journal of Neuroscience work looked at 44 people in their 50s, 60s and 70s. The volunteers listened to a synthesised speech syllable, "da", while researchers measured electrical activity in the region of the brain that processes sound information - the auditory brainstem. Despite none of the study participants having played an instrument in nearly 40 years, those who completed between four and 14 years of music training early in life had a faster response to the speech sound than those who had never been taught music. Lifelong skill: Researcher Michael Kilgard, of Northwestern University, said: "Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults." As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults show a slower response to fast-changing sounds, which is important for interpreting speech. Musical training may help offset this, according to Dr Kilgard's study. BBC © 2013

Keyword: Hearing; Alzheimers
Link ID: 18887 - Posted: 11.07.2013

by Catherine de Lange Speak more than one language? Bravo! It seems that being bilingual helps delay the onset of several forms of dementia. Previous studies of people with Alzheimer's disease in Canada showed that those who are fluent in two languages begin to exhibit symptoms four to five years later than people who are monolingual. Thomas Bak at the University of Edinburgh, UK, wanted to know whether this was truly down to language, or whether education or immigration status might be driving the delay, since most bilingual people living in Toronto, where the first studies were conducted, tended to come from an immigrant background. He also wondered whether people suffering from other forms of dementia might experience similar benefits. He teamed up with Suvarna Alladi, a neurologist working on memory disorders at Nizam's Institute of Medical Sciences (NIMS) in Hyderabad, India. "In India, bilingualism is part of everyday life," says Bak. The team compared the age at which dementia symptoms appeared in some 650 people who visited the NIMS over six years. About half spoke at least two languages. This group's symptoms started on average four and a half years later than those in people who were monolingual. "Incredibly, the number of years in delay of symptom onset they reported in the Indian sample is identical to our findings," says Ellen Bialystok, at Toronto's York University, who conducted the original Canadian studies. What's more, the same pattern appeared in three different types of dementia: Alzheimer's, frontotemporal and vascular. The results also held true for a group of people who were illiterate, suggesting that the benefits of being bilingual don't depend on education. © Copyright Reed Business Information Ltd.

Keyword: Alzheimers; Language
Link ID: 18886 - Posted: 11.07.2013

On Easter Sunday in 2008, the phantom noises in Robert De Mong’s head dropped in volume -- for about 15 minutes. For the first time in months, he experienced relief, enough at least to remember what silence was like. And then they returned, fierce as ever. It was six months earlier that the 66-year-old electrical engineer first awoke to a dissonant clamor in his head. There was a howling sound, a fingernails-on-a-chalkboard sound, “brain zaps” that hurt like a headache and a high frequency "tinkle" noise, like musicians hitting triangles in an orchestra. Many have since disappeared, but two especially stubborn noises remain. One he describes as monkeys banging on cymbals. Another resembles frying eggs and the hissing of high voltage power lines. He hears those sounds every moment of every day. De Mong was diagnosed in 2007 with tinnitus, a condition that causes a phantom ringing, buzzing or roaring in the ears, perceived as external noise. When the sounds first appeared, they did so as if from a void, he said. No loud noise trauma had preceded the tinnitus, as it does for some sufferers -- it was suddenly just there. And the noises haunted him, robbed him of sleep and fueled a deep depression. He lost interest in his favorite hobby: tinkering with his ’78 Trans Am and his two Corvettes. He stopped going into work. That month, De Mong visited an ear doctor, who told him he had high frequency hearing loss in both ears. Another doctor at the Stanford Ear, Nose and Throat clinic confirmed it, and suggested hearing aids as a possibility. They helped the hearing, but did nothing for the ringing. © 1996-2013 MacNeil/Lehrer Productions.

Keyword: Hearing
Link ID: 18885 - Posted: 11.07.2013

By NICHOLAS BAKALAR Children who do not sleep enough may be increasing their risk for obesity, according to a new study. Researchers randomly divided 37 children aged 8 to 11 into two groups. Each group increased their habitual time in bed by an hour and a half per night for one week, then decreased their time by the same amount the next week. They wore electronic devices to measure sleep time, were assessed for daily food intake three times a week, and had blood tests to measure leptin, a hormone that affects hunger, high levels of which correlate with fat tissue accumulation. Children consumed 134 fewer calories each day during the increased sleep week than during the week with less sleep. Fasting leptin levels were lower when the children slept more and, over all, the children’s weight averaged about a half pound less at the end of long sleep weeks than short ones. The study was published online in Pediatrics. The lead author, Chantelle N. Hart, an associate professor of public health at Temple University who was at Brown University when she did the study, cautioned that it was small, and looked only at acute changes in sleep and their effect on eating behaviors. Still, she said, “I think these findings suggest that getting a good night’s sleep in childhood could have important benefits for weight regulation through decreased food intake.” Copyright 2013 The New York Times Company

Keyword: Obesity; Sleep
Link ID: 18884 - Posted: 11.07.2013