Most Recent Links
by Bethany Brookshire One day when I came in to the office, my air conditioning unit was making a weird rattling sound. At first, I was slightly annoyed, but then I chose to ignore it and get to work. In another 30 minutes, I was completely oblivious to the noise. It wasn’t until my cubicle neighbor Meghan Rosen came in and asked about the racket that I realized the rattle was still there. My brain had habituated to the sound. Habituation, the ability to stop noticing or responding to an irrelevant signal, is one of the simplest forms of learning. But it turns out that at the level of a brain cell, it’s a far more complex process than scientists previously thought. In the June 18 Neuron, Mani Ramaswami of Trinity College Dublin proposes a new framework to describe how habituation might occur in our brains. The paper not only offers a new mechanism to help us understand one of our most basic behaviors, it also demonstrates how taking the time to integrate new findings into a novel framework can help push a field forward. Our ability to ignore the irrelevant and familiar has been a long-known feature of human learning. It’s so simple, even a sea slug can do it. Because the ability to habituate is so simple, scientists hypothesized that the mechanism behind it must also be simple. The previous framework for habituation has been synaptic depression, a decrease in chemical release. When one brain cell sends a signal to another, it releases chemical messengers into a synapse, the small gap between neurons. Receptors on the other side pick up this excitatory signal and send the message onward. But in habituation, neurons would release fewer chemicals, making the signal less likely to hit the other side. Fewer chemicals, fewer signals, and you’ve habituated. Simple. But, as David Glanzman, a neurobiologist at the University of California, Los Angeles points out, there are problems with this idea. © Society for Science & the Public 2000 - 2013
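The synaptic-depression account that the article describes lends itself to a quick illustration. Below is a minimal Python sketch of that older framework only (not the new model proposed in the Neuron paper): each repetition of the same stimulus depletes a pool of releasable transmitter, so the downstream response shrinks even though the stimulus never changes. The depletion and recovery rates are arbitrary values chosen for illustration.

# Toy model of habituation via synaptic depression (illustrative only).
# Each repeated stimulus depletes a "releasable" pool of transmitter,
# so the postsynaptic response shrinks; slow recovery pulls it back up.

def simulate(n_stimuli, depletion=0.3, recovery=0.05, pool=1.0):
    responses = []
    for _ in range(n_stimuli):
        response = pool                   # response tracks available transmitter
        responses.append(round(response, 3))
        pool -= depletion * pool          # each stimulus uses up part of the pool
        pool += recovery * (1.0 - pool)   # partial recovery toward baseline
    return responses

print(simulate(10))  # responses decline with repetition: the synapse "habituates"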
Keyword: Learning & Memory
Link ID: 19772 - Posted: 06.25.2014
|By Lisa Marshall Is Alzheimer's disease an acquired form of Down syndrome? When neurobiologist Huntington Potter first posed the question in 1991, Alzheimer's researchers were skeptical. They were just beginning to explore the causes of the memory-robbing neurological disease. Scientists already knew that by age 40, nearly 100 percent of patients with Down syndrome, who have an extra copy of chromosome 21, had brains full of beta-amyloid peptide—the neuron-strangling plaque that is a hallmark of Alzheimer's. They also knew that the gene that codes for that protein lives on chromosome 21, suggesting that people acquire more plaque because they get an extra dose of the peptide. Potter, though, suggested that if people with Down syndrome develop Alzheimer's because of an extra chromosome 21, healthy people may develop Alzheimer's for the same reason. A quarter of a century later mounting evidence supports the idea. “What we hypothesized in the 1990s and have begun to prove is that people with Alzheimer's begin to make molecular mistakes and generate cells with three copies of chromosome 21,” says Potter, who was recently appointed director of Alzheimer's disease research at the University of Colorado School of Medicine, with the express purpose of studying Alzheimer's through the lens of Down syndrome. He is no longer the only one exploring the link. In recent years dozens of studies have shown Alzheimer's patients possess an inordinate amount of Down syndrome–like cells. One 2009 study by Russian researchers found that up to 15 percent of the neurons in the brains of Alzheimer's patients contained an extra copy of chromosome 21. Others have shown Alzheimer's patients have 1.5 to two times as many skin and blood cells with the extra copy as healthy controls. Potter's own research in mice suggests a vicious cycle: when normal cells are exposed to the beta-amyloid peptide, they tend to make mistakes when dividing, producing more trisomy 21 cells, which, in turn, produce more plaque. In August, Potter and his team published a paper in the journal Neurobiology of Aging describing why those mistakes may occur: the inhibition of a specific enzyme. © 2014 Scientific American
Keyword: Alzheimers
Link ID: 19771 - Posted: 06.25.2014
By Jim Tankersley COLUMBUS, Ohio — First they screwed the end of the gray cord into the metal silo rising out of Ian Burkhart’s skull. Later they laid his right forearm across two foam cylinders, and they wrapped it with thin strips that looked like film from an old home movie camera. They ran him through some practice drills, and then it was time for him to try. If he succeeded at this next task, it would be science fiction come true: His thoughts would bypass his broken spinal cord. With the help of an algorithm and some electrodes, he would move his once-dead limb again — a scientific first. “Ready?” the young engineer, Nick Annetta, asked from the computer to his left. “Three. Two. One.” Burkhart, 23, marshaled every neuron he could muster, and he thought about his hand. The last time the hand obeyed him, it was 2010 and Burkhart was running into the Atlantic Ocean. The hand had gripped the steering wheel as he drove the van from Ohio University to North Carolina’s Outer Banks, where he and friends were celebrating the end of freshman year. The hand unclenched to drop his towel on the sand. Burkhart splashed into the waves, the hand flying above his head, the ocean warm around his feet, the sun roasting his arms, and he dived. In an instant, he felt nothing. Not his hand. Not his legs. Only the breeze drying the saltwater on his face.
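The "algorithm and some electrodes" at the heart of this story amount to a decoder that turns patterns of neural activity into a movement command. The sketch below is a generic, heavily simplified illustration of that idea, not the system used in the actual study: the channel count, the simulated firing rates, and the choice of a logistic-regression classifier are all assumptions made for the example.

# Minimal sketch of the idea behind a motor "neural bypass" decoder.
# Hypothetical, simulated data -- not the algorithm from the actual study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels = 96                      # assumed electrode-array channel count

# Simulated firing rates: higher on a subset of channels when movement is intended.
rest = rng.poisson(5, size=(200, n_channels))
move = rng.poisson(5, size=(200, n_channels))
move[:, :20] += rng.poisson(8, size=(200, 20))   # channels "tuned" to moving the hand

X = np.vstack([rest, move])
y = np.array([0] * 200 + [1] * 200)              # 0 = rest, 1 = intend to move

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# A new window of activity is classified; in the real system the "move"
# decision would drive electrical stimulation of the forearm muscles.
new_window = rng.poisson(5, size=(1, n_channels))
new_window[:, :20] += 8
print("stimulate forearm" if decoder.predict(new_window)[0] == 1 else "hold")

In the real device, the decoded intent drives patterned stimulation of many forearm muscles rather than a single yes/no decision; the sketch only shows the decoding step in miniature.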
Keyword: Robotics
Link ID: 19770 - Posted: 06.25.2014
By Gary Stix Tony Zador: The human brain has 100 billion neurons, a mouse brain has maybe 100 million. What we’d really like to understand is how we go from a bunch of neurons to thought, feelings, behavior. We think that the key is to understand how the different neurons are connected to one another. So traditionally there have been a lot of techniques for studying connectivity but at a fairly crude level. We can, for instance, tell that a bunch of neurons here tend to be connected to a bunch of neurons there. There are also techniques for looking at how single neurons are connected but only for individual links between those neurons. What we would love to be able to do is to tell how every single neuron in the brain is connected to every single other neuron in the brain. So if you wanted to navigate through the United States, one of the most useful things you could have is a roadmap. It wouldn’t tell you everything about the United States, but it would be very hard to get around without a complete roadmap of the country. We need something like that for the brain. Zador: Traditionally the way people study connectivity is as a branch of microscopy. Typically what people do is they use one method or another to label a neuron and then they observe that neuron at some level of resolution. But the challenge that’s at the core of all the microscopy techniques is that neurons can extend long distances. That might be millimeters in a mouse brain or, in fact, in a giraffe brain, there are neurons that go all the way from the brain to its foot, which can be over 15 feet. Brain cells are connected with one another at structures called synapses, which are below the resolution of light microscopy. That means that if you really want to understand how one neuron is connected to another, you need to resolve the synapse, which requires electron microscopy. You have to take incredibly thin sections of brain and then image them. © 2014 Scientific American
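The scale Zador is describing can be made concrete with simple arithmetic. The sketch below uses the neuron counts quoted in the interview plus an assumed figure of roughly 1,000 synapses per neuron (an assumption for illustration; estimates vary widely) to show why a complete "roadmap" is such a demanding object to build and store.

# Back-of-the-envelope arithmetic for the "roadmap" Zador describes.
# Neuron counts are from the interview; synapses per neuron is an
# assumed round figure for illustration only.

human_neurons = 100e9
mouse_neurons = 100e6
synapses_per_neuron = 1_000

# A dense "who-connects-to-whom" matrix needs one entry per neuron pair.
dense_pairs_human = human_neurons ** 2
print(f"dense human matrix entries: {dense_pairs_human:.1e}")   # ~1e22

# Connectivity is sparse, so listing actual synapses is far smaller -- but still enormous.
human_synapses = human_neurons * synapses_per_neuron
mouse_synapses = mouse_neurons * synapses_per_neuron
print(f"human synapses (approx.): {human_synapses:.1e}")        # ~1e14
print(f"mouse synapses (approx.): {mouse_synapses:.1e}")        # ~1e11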
Keyword: Autism; Genes & Behavior
Link ID: 19769 - Posted: 06.25.2014
Helen Shen As US science agencies firm up plans for a national ten-year neuroscience initiative, California is launching an ambitious project of its own. On 20 June, governor Jerry Brown signed into law a state budget that allocates US$2 million to establish the California Blueprint for Research to Advance Innovations in Neuroscience (Cal-BRAIN) project. Cal-BRAIN is the first state-wide programme to piggyback on the national Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative announced by US President Barack Obama in April 2013 (see Nature 503, 26–28; 2013). The national project is backed this year by $110 million in public funding from the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF). California researchers and lawmakers hope that the state’s relatively modest one-time outlay will pave the way for a larger multiyear endeavour that gives its scientists an edge in securing grants from the national initiative. “It’s a drop in the bucket, but it’s an important start,” says Zack Lynch, executive director of the Neurotechnology Industry Organization, an advocacy group in San Francisco, California. Cal-BRAIN sets itself apart from the national effort by explicitly seeking industry involvement. The proposal emphasizes the potential economic benefits of neuroscience research and calls for the formation of a programme to facilitate the translation of any discoveries into commercial applications. © 2014 Nature Publishing Group,
Keyword: Brain imaging
Link ID: 19768 - Posted: 06.25.2014
by Sarah Zielinski Would you recognize a stop sign if it was a different shape, though still red and white? Probably, though there might be a bit of a delay. After all, your brain has long been trained to expect a red-and-white octagon to mean “stop.” The animal and plant world also uses colorful signals. And it would make sense if a species always used the same pattern to signal the same thing — like how we can identify western black widows by the distinctive red hourglass found on the adult spiders’ backs. But that doesn’t always happen. Even with really important signals, such as the ones that tell a predator, “Don’t eat me — I’m poisonous.” Consider the dyeing dart frog (Dendrobates tinctorius), which is found in lowland forests of the Guianas and Brazil. The backs of the 5-centimeter-long frogs are covered with a yellow-and-black pattern, which warns of their poisonous nature. But that pattern isn’t the same from frog to frog. Some are decorated with an elongated pattern; others have more complex, sometimes interrupted patterns. The difference in patterns should make it harder for predators to recognize the warning signal. So why is there such variety? Because the patterns aren’t always viewed on a static frog, and the different ways that the frogs move affect how predators see the amphibians, according to a study published June 18 in Biology Letters. Bibiana Rojas of Deakin University in Geelong, Australia, and colleagues studied the frogs in a nature reserve in French Guiana from February to July 2011. They found 25 female and 14 male frogs, following each for two hours from about 2.5 meters away, where the frog wouldn’t notice a scientist. As a frog moved, a researcher would follow, recording how far it went and in what direction. Each frog was then photographed. © Society for Science & the Public 2000 - 2013.
Keyword: Vision; Neurotoxins
Link ID: 19767 - Posted: 06.25.2014
By HELENE STAPINSKI A few months ago, my 10-year-old daughter, Paulina, was suffering from a bad headache right before bedtime. She went to lie down and I sat beside her, stroking her head. After a few minutes, she looked up at me and said, “Everything in the room looks really small.” And I suddenly remembered: When I was young, I too would “see things far away,” as I once described it to my mother — as if everything in the room were at the wrong end of a telescope. The episodes could last anywhere from a few minutes to an hour, but they eventually faded as I grew older. I asked Paulina if this was the first time she had experienced such a thing. She shook her head and said it happened every now and then. When I was a little girl, I told her, it would happen to me when I had a fever or was nervous. I told her not to worry and that it would go away on its own. Soon she fell asleep, and I ran straight to my computer. Within minutes, I discovered that there was an actual name for what turns out to be a very rare affliction — Alice in Wonderland Syndrome. Episodes usually include micropsia (objects appear small) or macropsia (objects appear large). Some sufferers perceive their own body parts to be larger or smaller. For me, and Paulina, furniture a few feet away seemed small enough to fit inside a dollhouse. Dr. John Todd, a British psychiatrist, gave the disorder its name in a 1955 paper, noting that the misperceptions resemble Lewis Carroll’s descriptions of what happened to Alice. It’s also known as Todd’s Syndrome. Alice in Wonderland Syndrome is not an optical problem or a hallucination. Instead, it is most likely caused by a change in a portion of the brain, likely the parietal lobe, that processes perceptions of the environment. Some specialists consider it a type of aura, a sensory warning preceding a migraine. And the doctors confirmed that it usually goes away by adulthood. © 2014 The New York Times Company
Keyword: Vision; Attention
Link ID: 19766 - Posted: 06.24.2014
By Tanya Lewis and Live Science They say laughter is the best medicine. But what if laughter is the disease? For a 6-year-old girl in Bolivia who suffered from uncontrollable and inappropriate bouts of giggles, laughter was a symptom of a serious brain problem. But doctors initially diagnosed the child with “misbehavior.” “She was considered spoiled, crazy — even devil-possessed,” José Liders Burgos Zuleta of the Advanced Medical Image Centre in La Paz said in a statement. But Burgos Zuleta discovered that the true cause of the girl’s laughing seizures, medically called gelastic seizures, was a brain tumor. After the girl underwent a brain scan, the doctors discovered a hamartoma, a small, benign tumor that was pressing against her brain’s temporal lobe. Surgeons removed the tumor, the doctors said. She stopped having the uncontrollable attacks of laughter and now laughs only normally, they said. Gelastic seizures are a relatively rare form of epilepsy, said Solomon Moshé, a pediatric neurologist at Albert Einstein College of Medicine in New York. “It’s not necessarily ‘ha-ha-ha’ laughing,” Moshé said. “There’s no happiness in this. Some of the kids may be very scared,” he added. The seizures are most often caused by tumors in the hypothalamus, although they can also come from tumors in other parts of the brain, Moshé said. Although laughter is the main symptom, patients may also have outbursts of crying.
Keyword: Emotions; Epilepsy
Link ID: 19765 - Posted: 06.24.2014
|By Lindsey Konkel and Environmental Health News Babies whose moms lived within a mile of crops treated with widely used pesticides were more likely to develop autism, according to new research. The study of 970 children, born in farm-rich areas of Northern California, is part of the largest project to date that is exploring links between autism and environmental exposures. The University of California, Davis research – which used women’s addresses to determine their proximity to insecticide-treated fields – is the third project to link prenatal pesticide exposures to autism and related disorders. “The weight of evidence is beginning to suggest that mothers’ exposures during pregnancy may play a role in the development of autism spectrum disorders,” said Kim Harley, an environmental health researcher at the University of California, Berkeley who was not involved in the new study. One in every 68 U.S. children has been identified with an autism spectrum disorder—a group of neurodevelopmental disorders characterized by difficulties with social interactions, according to the Centers for Disease Control and Prevention. “This study does not show that pesticides are likely to cause autism, though it suggests that exposure to farming chemicals during pregnancy is probably not a good thing,” said Dr. Bennett Leventhal, a child psychiatrist at University of California, San Francisco who studies autistic children. He did not participate in the new study. The biggest known contributor to autism risk is having a family member with it. Siblings of a child with autism are 35 times more likely to develop it than those without an autistic brother or sister, according to the National Institutes of Health. © 2014 Scientific American
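The exposure measure described here boils down to a distance check between a mother's residence and treated fields. The sketch below shows that kind of calculation in minimal form; the coordinates are entirely hypothetical and are not taken from the study.

# Sketch of the kind of proximity check behind "lived within a mile of
# treated crops" -- hypothetical coordinates, not the study's actual data.
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))   # Earth radius ~3,959 miles

residence = (38.68, -121.62)                             # hypothetical address location
treated_fields = [(38.69, -121.62), (38.40, -121.90)]    # hypothetical field centroids

exposed = any(miles_between(*residence, *field) <= 1.0 for field in treated_fields)
print("classified as exposed" if exposed else "classified as unexposed")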
Keyword: Autism; Neurotoxins
Link ID: 19764 - Posted: 06.24.2014
By DOUGLAS QUENQUA When it comes to forming memories that involve recalling a personal experience, neuroscientists are of two minds. Some say that each memory is stored in a single neuron in a region of the brain called the hippocampus. But a new study is lending weight to the theory of neuroscientists who believe that every memory is spread out, or distributed, across many neurons in that part of the brain. By watching patients with electrodes in their brains play a memory game, researchers found that each such memory is committed to cells distributed across the hippocampus. Though the proportion of cells responsible for each memory is small (about 2 percent of the hippocampus), the absolute number is in the millions. So the loss of any one cell should not have a noticeable effect on memory or mental acuity, said Peter N. Steinmetz, a research neurologist at the Dignity Health Barrow Neurological Institute in Phoenix and senior author of the study. “The significance of losing one cell is substantially reduced because you’ve got this whole population that’s turning on” when you access a memory, he said. The findings also suggest that memory researchers “need to use techniques that allow us to look at the whole population of neurons” rather than focus on individual cells. The patients in the study, which is published in Proceedings of the National Academy of Sciences, first memorized a list of words on a computer screen, then viewed a second list that included those words and others. When asked to identify words they had seen earlier, the patients displayed cell-firing activity consistent with the distributed model of memory. © 2014 The New York Times Company
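The "about 2 percent of the hippocampus ... in the millions" figure is easy to work through. In the sketch below, the hippocampal neuron count is an assumed round number chosen only for illustration, not a value from the paper.

# Rough arithmetic behind "about 2 percent of the hippocampus ... in the millions".
hippocampal_neurons = 50_000_000      # assumption: a round ~5e7 neurons for illustration
fraction_per_memory = 0.02            # ~2 percent, per the study

cells_per_memory = hippocampal_neurons * fraction_per_memory
print(f"cells participating in one memory: ~{cells_per_memory:,.0f}")   # ~1,000,000

# Losing any single cell removes only a tiny share of that population:
print(f"share lost per cell: {1 / cells_per_memory:.8f}")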
Keyword: Learning & Memory
Link ID: 19763 - Posted: 06.24.2014
|By Tori Rodriguez One of the most devastating aspects of Alzheimer's is its effect on patients' ability to recall life events. Several studies have found that music helps to strengthen these individuals' autobiographical memories, and a paper in the November 2013 Journal of Neurolinguistics builds on these findings by exploring the linguistic quality of those recollections. Researchers instructed 18 patients with Alzheimer's and 18 healthy control subjects to tell stories from their lives in a silent room or while listening to the music of their choice. Among the Alzheimer's patients, the music-cued stories contained a greater number of meaningful words, were more grammatically complex and conveyed more information per number of words. Music may enhance narrative memories because “music and language processing share a common neural basis,” explains study co-author Mohamad El Haj of Lille University in France. © 2014 Scientific American
Keyword: Alzheimers
Link ID: 19762 - Posted: 06.24.2014
Sarah C. P. Williams There’s a reason people say “Calm down or you’re going to have a heart attack.” Chronic stress—such as that brought on by job, money, or relationship troubles—is suspected to increase the risk of a heart attack. Now, researchers studying harried medical residents and harassed rodents have offered an explanation for how, at a physiological level, long-term stress can endanger the cardiovascular system. It revolves around immune cells that circulate in the blood, they propose. The new finding is “surprising,” says physician and atherosclerosis researcher Alan Tall of Columbia University, who was not involved in the new study. “The idea has been out there that chronic psychosocial stress is associated with increased cardiovascular disease in humans, but what’s been lacking is a mechanism,” he notes. Epidemiological studies have shown that people who face many stressors—from those who survive natural disasters to those who work long hours—are more likely to develop atherosclerosis, the accumulation of fatty plaques inside blood vessels. In addition to fats and cholesterols, the plaques contain monocytes and neutrophils, immune cells that cause inflammation in the walls of blood vessels. And when the plaques break loose from the walls where they’re lodged, they can cause more extreme blockages elsewhere—leading to a stroke or heart attack. Studying the effect of stressful intensive care unit (ICU) shifts on medical residents, biologist Matthias Nahrendorf of Harvard Medical School in Boston recently found that blood samples taken when the doctors were most stressed out had the highest levels of neutrophils and monocytes. To probe whether these white blood cells, or leukocytes, are the missing link between stress and atherosclerosis, he and his colleagues turned to experiments on mice. © 2014 American Association for the Advancement of Science
Keyword: Stress
Link ID: 19761 - Posted: 06.23.2014
By Adam Carter, CBC News Women who take antidepressants when they’re pregnant could unknowingly predispose their kids to type 2 diabetes and obesity later on in life, new research out of McMaster University suggests. The study, conducted by associate professor of obstetrics and gynecology Alison Holloway and PhD student Nicole De Long, found a link between the antidepressant fluoxetine and increased risk of obesity and diabetes in children. Holloway cautions that this is not a warning for all pregnant women to stop taking antidepressants, but rather to start a conversation about prenatal care and what works best on an individual basis. “There are a lot of women who really need antidepressants to treat depression. This is what they need,” Holloway told CBC. “We’re not saying you should necessarily take patients off antidepressants because of this — but women should have this discussion with their caregiver.” “Obesity and Type 2 diabetes in children is on the rise and there is the argument that it is related to lifestyle and availability of high calorie foods and reduced physical activity, but our study has found that maternal antidepressant use may also be a contributing factor to the obesity and diabetes epidemic.” According to a study out of Memorial University in St. John's, obesity rates in Canada have tripled between 1985 and 2011. Canada also ranks poorly when it comes to its overall number of cases of diabetes, according to an international report from the Organization for Economic Co-operation and Development, released last year. © CBC 2014
Keyword: Depression; Obesity
Link ID: 19760 - Posted: 06.23.2014
Nicola Davis The old adage that we eat with our eyes appears to be correct, according to research that suggests diners rate an artistically arranged meal as more tasty – and are prepared to pay more for it. The team at Oxford University tested the idea by gauging the reactions of diners to food presented in different ways. Inspired by Wassily Kandinsky's "Painting Number 201", Franco-Colombian chef and one of the authors of the study, Charles Michel, designed a salad resembling the abstract artwork to explore how the presentation of food affects the dining experience. "A number of chefs now are realising that they are being judged by how their foods photograph – be it in the fancy cookbooks [or], more often than not, when diners instagram their friends," explains Professor Charles Spence, experimental psychologist at the University of Oxford and a co-author of the study. Thirty men and 30 women were each presented with one of three salads containing identical ingredients, arranged either to resemble the Kandinsky painting, a regular tossed salad, or a "neat" formation where each component was spaced away from the others. Seated alone at a table mimicking a restaurant setting, and unaware that other versions of the salad were on offer, each participant was given two questionnaires asking them to rate various aspects of the dish on a 10-point scale, before and after tucking into the salad. Before participants sampled their plateful, the Kandinsky-inspired dish was rated higher for complexity, artistic presentation and general liking. Participants were prepared to pay twice as much for the meal as for either the regular or "neat" arrangements. © 2014 Guardian News and Media Limited
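The underlying design is a between-groups comparison of ratings on a 10-point scale. The sketch below uses made-up scores, not the study's data, simply to show the shape of that comparison.

# Hypothetical data illustrating the between-groups design (not the study's ratings):
# 10-point "liking" scores for each plating condition.
from statistics import mean

ratings = {
    "Kandinsky-inspired": [8, 9, 7, 8, 9, 8, 7, 9, 8, 8],
    "regular tossed":     [6, 5, 7, 6, 6, 5, 7, 6, 6, 5],
    "neat arrangement":   [6, 6, 5, 7, 6, 6, 5, 6, 7, 6],
}

for plating, scores in ratings.items():
    print(f"{plating:20s} mean liking: {mean(scores):.1f}")
# A formal analysis would compare the groups with ANOVA or pairwise tests.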
Keyword: Chemical Senses (Smell & Taste); Attention
Link ID: 19759 - Posted: 06.23.2014
By ANDREW POLLACK It is a tantalizingly simple idea for losing weight: Before meals, swallow a capsule that temporarily swells up in the stomach, making you feel full. Now, some early results for such a pill are in. And they are only partly fulfilling. People who took the capsule lost 6.1 percent of their weight after 12 weeks, compared with 4.1 percent for those taking a placebo, according to results presented Sunday at an endocrinology meeting in Chicago. Gelesis, the company developing the capsule, declared the results a triumph and said it would start a larger study next year aimed at winning approval for the product, called Gelesis100. “I’m definitely impressed, absolutely,” Dr. Arne V. Astrup, head of the department of nutrition, exercise and sports at the University of Copenhagen in Denmark and the lead investigator in the study, said in an interview. He said the physical mode of action could make the product safer than many existing diet drugs, which act chemically on the brain to influence appetite. But Dr. Daniel H. Bessesen, an endocrinologist at the University of Colorado who was not involved in the study, said weight loss of 2 percent beyond that provided by a placebo was “very modest.” “It doesn’t look like a game changer,” he said. Gelesis, a privately held company based in Boston, is one of many trying to come up with a product that can provide significant weight loss without bariatric surgery. Two new drugs — Qsymia from Vivus, and Belviq from Arena Pharmaceuticals and Eisai — have had disappointing sales since their approvals in 2012. Reasons include modest effectiveness, safety concerns, lack of insurance reimbursement and a belief among some doctors and overweight people that obesity is not a disease. © 2014 The New York Times Company
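A quick worked example shows why some clinicians call the effect modest: the difference between capsule and placebo is about 2 percent of body weight over 12 weeks. The 100 kg starting weight below is an illustrative assumption.

# The reported effect, worked through for an illustrative 100 kg person.
start_weight_kg = 100.0                     # hypothetical starting weight

capsule_loss = start_weight_kg * 0.061      # 6.1 % over 12 weeks
placebo_loss = start_weight_kg * 0.041      # 4.1 % over 12 weeks

print(f"capsule: {capsule_loss:.1f} kg, placebo: {placebo_loss:.1f} kg, "
      f"difference: {capsule_loss - placebo_loss:.1f} kg")   # ~2 kg over 12 weeks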
Keyword: Obesity
Link ID: 19758 - Posted: 06.23.2014
by Frank Swain WHEN it comes to personal electronics, it's difficult to imagine iPhones and hearing aids in the same sentence. I use both and know that hearing aids have a well-deserved reputation as deeply uncool lumps of beige plastic worn mainly by the elderly. Apple, on the other hand, is the epitome of cool consumer electronics. But the two are getting a lot closer. The first "Made for iPhone" hearing aids have arrived, allowing users to stream audio and data between smartphones and the device. It means hearing aids might soon be desirable, even to those who don't need them. A Bluetooth wireless protocol developed by Apple last year lets the prostheses connect directly to Apple devices, streaming audio and data while using a fraction of the power consumption of conventional Bluetooth. LiNX, made by ReSound, and Halo hearing aids made by Starkey – both international firms – use the iPhone as a platform to offer users new features and added control over their hearing aids. "The main advantage of Bluetooth is that the devices are talking to each other, it's not just one way," says David Nygren, UK general manager of ReSound. This is useful as hearing aids have long suffered from a restricted user interface – there's not much room for buttons on a device the size of a kidney bean. This is a major challenge for hearing-aid users, because different environments require different audio settings. Some devices come with preset programmes, while others adjust automatically to what their programming suggests is the best configuration. This is difficult to get right, and often devices calibrated in the audiologist's clinic fall short in the real world. © Copyright Reed Business Information Ltd.
Keyword: Hearing
Link ID: 19757 - Posted: 06.23.2014
Carl Zimmer A novelist scrawling away in a notebook in seclusion may not seem to have much in common with an NBA player doing a reverse layup on a basketball court before a screaming crowd. But if you could peer inside their heads, you might see some striking similarities in how their brains were churning. That’s one of the implications of new research on the neuroscience of creative writing. For the first time, neuroscientists have used fMRI scanners to track the brain activity of both experienced and novice writers as they sat down — or, in this case, lay down — to turn out a piece of fiction. The researchers, led by Martin Lotze of the University of Greifswald in Germany, observed a broad network of regions in the brain working together as people produced their stories. But there were notable differences between the two groups of subjects. The inner workings of the professionally trained writers in the bunch, the scientists argue, showed some similarities to people who are skilled at other complex actions, like music or sports. The research is drawing strong reactions. Some experts praise it as an important advance in understanding writing and creativity, while others criticize the research as too crude to reveal anything meaningful about the mysteries of literature or inspiration. Dr. Lotze has long been intrigued by artistic expression. In previous studies, he has observed the brains of piano players and opera singers, using fMRI scanners to pinpoint regions that become unusually active in the brain. Needless to say, that can be challenging when a subject is singing an aria. Scanners are a lot like 19th-century cameras: They can take very sharp pictures, if their subject remains still. To get accurate data, Dr. Lotze has developed software that can take into account fluctuations caused by breathing or head movements. © 2014 The New York Times Company
Keyword: Language; Brain imaging
Link ID: 19756 - Posted: 06.21.2014
Karen Ravn To the west, the skies belong to the carrion crow. To the east, the hooded crow rules the roost. In between, in a narrow strip running roughly north to south through central Europe, the twain have met, and mated, for perhaps as long as 10,000 years. But although the crows still look very different — carrion crows are solid black, whereas hooded crows are grey — researchers have found that they are almost identical genetically. The taxonomic status of carrion crows (Corvus corone) and hooded crows (Corvus cornix) has been debated ever since Carl Linnaeus, the founding father of taxonomy, declared them to be separate species in 1758. A century later, Darwin called any such classification impossible until the term 'species' had been defined in a generally accepted way. But the definition is still contentious, and many believe it always will be. The crows are known to cross-breed and produce viable offspring, so lack the reproductive barriers that some biologists consider essential to the distinction of a species, leading to proposals that they are two subspecies of carrion crow. In fact, evolutionary biologist Jochen Wolf from Uppsala University in Sweden and his collaborators have now found that the populations living in the cross-breeding zone are so similar genetically that the carrion crows there are more closely related to hooded crows than to the carrion crows farther west. Only a small part of the genome — less than 0.28% — differs between the populations, the team reports in this week's Science. This section is located on chromosome 18, in an area associated with pigmentation, visual perception and hormonal regulation. It is no coincidence, the researchers suggest, that the main differences between carrion and hooded crows are in colouring, mating preferences (both choose mates whose colouring matches theirs), and hormone-influenced social behaviours (carrion crows lord it over hooded ones). © 2014 Nature Publishing Group,
Keyword: Sexual Behavior; Evolution
Link ID: 19755 - Posted: 06.21.2014
By Gary Stix James DiCarlo: We all have this intuitive feel for what object recognition is. It’s the ability to discriminate your face from other faces, a car from other cars, a dog from a camel, that ability we all intuitively feel. But making progress in understanding how our brains are able to accomplish that is a very challenging problem, and part of the reason is that it’s challenging to define what it is and isn’t. We take this problem for granted because it seems effortless to us. However, a computer vision person would tell you that this is an extremely challenging problem because each object presents an essentially infinite number of images to your retina, so you essentially never see the same image of each object twice. SA: It seems like object recognition is actually one of the big problems both in neuroscience and in the computational science of machine learning? DiCarlo: That’s right, not only in machine learning but also in psychology or cognitive science, because the objects that we see are the sources in the world of what we use to build higher cognition, things like memory and decision-making. Should I reach for this, should I avoid it? Our brains can’t do what you would call higher cognition without these foundational elements that we often take for granted. SA: Maybe you can talk about what’s actually happening in the brain during this process. DiCarlo: It’s been known for several decades that there’s a portion of the brain, the temporal lobe down the sides of our head, that, when lost or damaged in humans and non-human primates, leads to deficits of recognition. So we had clues that that’s where these algorithms for object recognition are living. But just saying that part of your brain solves the problem is not really specific. It’s still a very large piece of tissue. Anatomy tells us that there’s a whole network of areas that exist there, and now the tools of neurophysiology and still more advanced tools allow us to go in and look more closely at the neural activity, especially in non-human primates. We can then begin to decipher the actual computations to the level that an engineer might, for instance, in order to emulate what’s going on in our heads. © 2014 Scientific American
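DiCarlo's point that you "never see the same image of each object twice" can be demonstrated with a toy example. In the sketch below, shifting an object by a single pixel makes it look roughly as different, pixel for pixel, as removing the object altogether, which is why recognition cannot be solved by raw template matching. The tiny 8x8 "retina" and the square "object" are assumptions made purely for illustration.

# Why raw pixels make recognition hard: the same "object" shifted by one
# pixel looks very different to a pixel-by-pixel comparison.
import numpy as np

def make_image(offset):
    """An 8x8 'retina' containing a bright 3x3 square at a given offset."""
    img = np.zeros((8, 8))
    img[2 + offset:5 + offset, 2 + offset:5 + offset] = 1.0
    return img

original = make_image(0)
shifted = make_image(1)          # the same square, moved one pixel
different = np.zeros((8, 8))     # no object at all

def pixel_distance(a, b):
    return float(np.sum((a - b) ** 2))

print(pixel_distance(original, shifted))    # large, though it is the same object
print(pixel_distance(original, different))  # comparable scale, despite different content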
Keyword: Vision; Attention
Link ID: 19754 - Posted: 06.21.2014
—By Indre Viskontas He might be fictional. But the gigantic Hodor, a character in the blockbuster Game of Thrones series, nonetheless sheds light on something very much in the realm of fact: how our ability to speak emerges from a complex ball of neurons, and how certain brain-damaged patients can lose very specific aspects of that ability. According to George R.R. Martin, who wrote the epic books that inspired the HBO show, the 7-foot-tall Hodor could only say one word—"Hodor"—and everyone therefore tended to assume that was his name. Here's one passage about Hodor from the first novel in Martin's series: Theon Greyjoy had once commented that Hodor did not know much, but no one could doubt that he knew his name. Old Nan had cackled like a hen when Bran told her that, and confessed that Hodor's real name was Walder. No one knew where "Hodor" had come from, she said, but when he started saying it, they started calling him by it. It was the only word he had. Yet it's clear that Hodor can understand much more than he can say; he's able to follow instructions, anticipate who needed help, and behave in socially appropriate ways (mostly). Moreover, he says this one word in many different ways, implying very different meanings: So what might be going on in Hodor's brain? Hodor's combination of impoverished speech production with relatively normal comprehension is a classic, albeit particularly severe, presentation of expressive aphasia, a neurological condition usually caused by a localized stroke in the front of the brain, on the left side. Some patients, however, have damage to that part of the brain from other causes, such as a tumor, or a blow to the head. ©2014 Mother Jones
Keyword: Language
Link ID: 19753 - Posted: 06.21.2014