Most Recent Links
By Sunita Sah A popular remedy for a conflict of interest is disclosure — informing the buyer (or the patient, etc.) of the potential bias of the seller (or the doctor, etc.). Disclosure is supposed to act as a warning, alerting consumers to their adviser’s stake in the matter so they can process the advice accordingly. But as several recent studies I conducted show, there is an underappreciated problem with disclosure: It often has the opposite of its intended effect, not only increasing bias in advisers but also making advisees more likely to follow biased advice. When I worked as a physician, I witnessed how bias could arise from numerous sources: gifts or sponsorships from the pharmaceutical industry; compensation for performing particular procedures; viewing our own specialties as delivering more effective treatments than others’ specialties. Although most physicians, myself included, tend to believe that we are invulnerable to bias, thus making disclosures unnecessary, regulators insist on them, assuming that they work effectively. To some extent, they do work. Disclosing a conflict of interest — for example, a financial adviser’s commission or a physician’s referral fee for enrolling patients into clinical trials — often reduces trust in the advice. But my research has found that people are still more likely to follow this advice because the disclosure creates increased pressure to follow the adviser’s recommendation. It turns out that people don’t want to signal distrust to their adviser or insinuate that the adviser is biased, and they also feel pressure to help satisfy their adviser’s self-interest. Instead of functioning as a warning, disclosure can become a burden on advisees, increasing pressure to take advice they now trust less. © 2016 The New York Times Company
Keyword: Attention
Link ID: 22416 - Posted: 07.09.2016
By Adriana Heguy, molecular biologist and genomics researcher: Interestingly, tongue-curling ability is not solely genetic, and the genetic component may be very small. Monozygotic (identical) twins are not always concordant for tongue-curling ability, so if there is a genetic component, it’s clearly not Mendelian. In other words, it’s not a trait coded by one single gene, and it’s clearly influenced by the environment—in this case, practice. But for some reason this is one of the “myths” about genetics that gets spread around in high school, where it is used as an example of a simple Mendelian trait with dominant-recessive inheritance. It’s hard to comment on the evolutionary purpose of an ability so heavily influenced by the environment, and not obviously useful. There are many traits for which we do not have the faintest idea why they exist or what purpose they serve. In the case of tongue-curling, it’s possible that it’s a case of fine motor control of the tongue. We need to be able to move our tongues to not bite them when we eat, for example, and for swirling food around. For unknown reasons, some individuals are better than others at controlling tongue movement. And since the ability can be acquired by practicing (though not everybody apparently succeeds), it does seem likely that it is indeed a question of motor control. Most people are able to do it. It’s quite common. But it could be that evolution had nothing to do with it. Or it could be a spandrel; in other words, a side effect of evolution. Maybe the evolution of dexterity or finer motor control of other muscles resulted in tongue “dexterity.” It’s possible that it is an atavism: increased tongue muscle control may once have been useful for tasting or eating certain kinds of foods millions of years ago, and it has not disappeared because the developmental program for fine muscle control is still there.
Keyword: Genes & Behavior; Evolution
Link ID: 22415 - Posted: 07.09.2016
By Andy Coghlan It could be that romantic restaurant, or your favourite park bench. A specific part of the brain seems to be responsible for learning and remembering the precise locations of places that are special to us, research in mice has shown for the first time. Place cells are neurons that help us map our surroundings, and both mice and humans have such cells in the hippocampus – a brain region vital for learning, memory and navigation. Nathan Danielson at Columbia University in New York and his colleagues focused on a part of the hippocampus that feeds signals to the rest of the brain, called CA1. They found that in mice, the CA1 layer where general environment maps are learned and stored is different to the one for locations that have an important meaning. Treadmill test: They discovered this by recording brain activity in the two distinct layers of CA1, using mice placed on a treadmill. The treadmill rotated between six distinctive surface materials – including silky ribbons, green pom-pom fabric and silver glitter masking tape. At all times, the mice were able to lick a sensor to try to trigger the release of drinking water. During the first phase of the experiment, however, the sensor only worked at random times. The mice formed generalised maps of their experience on the multi-surfaced treadmill, and the team found that these were stored in the superficial layer of CA1. © Copyright Reed Business Information Ltd.
Keyword: Learning & Memory
Link ID: 22414 - Posted: 07.09.2016
The most sophisticated, widely adopted, and important tool for looking at living brain activity actually does no such thing. Called functional magnetic resonance imaging, what it really does is scan for the magnetic signatures of oxygen-rich blood. Blood indicates that the brain is doing something, but it’s not a direct measure of brain activity. Which is to say, there’s room for error. That’s why neuroscientists use special statistics to filter out noise in their fMRIs, verifying that the shaded blobs they see pulsing across their computer screens actually relate to blood flowing through the brain. If those filters don’t work, an fMRI scan is about as useful at detecting neuronal activity as your dad’s “brain sucking alien” hand trick. And a new paper suggests that might actually be the case for thousands of fMRI studies over the past 15 years. The paper, published June 29 in the Proceedings of the National Academy of Sciences, threw 40,000 fMRI studies into question. But many neuroscientists—including the study’s whistleblowing authors—are now saying the negative attention is overblown. Neuroscience has long struggled over just how useful fMRI data is at showing brain function. “In the early days these fMRI signals were very small, buried in a huge amount of noise,” says Elizabeth Hillman, a biomedical engineer at the Zuckerman Institute at Columbia University. A lot of this noise is literal: noise from the scanner, noise from the electrical components, noise from the person’s body as it breathes and pumps blood.
Keyword: Brain imaging
Link ID: 22413 - Posted: 07.09.2016
By Michael Price Doctors and soldiers could soon place their trust in an unusual ally: the mouse. Scientists have genetically engineered mice to be ultrasensitive to specific smells, paving the way for animals that are “tuned” to sniff out land mines or chemical signatures of diseases like Parkinson’s and Alzheimer’s. Trained rats and dogs have long been used to detect the telltale smell of TNT in land mines, and research suggests that dogs can smell the trace chemical signals of low blood sugar or certain types of cancer. Mice also have powerful sniffers: They sport about 1200 genes dedicated to odorant receptors, cellular sensors that react to a scent’s chemical signature. That’s a few hundred fewer than rats and about the same as dogs. (Humans have a paltry 350.) Paul Feinstein wants to upgrade the mouse’s already sensitive nose. For the last decade, the neurobiologist at Hunter College in New York City has been studying how odorant receptors form on the surface of neurons within the olfactory system. During development, each olfactory neuron specializes to express a single odorant receptor, which binds to chemicals in the air to detect a specific odor. In other words, each olfactory neuron has a singular receptor that senses a particular smell. Normally, there is an even distribution of receptors throughout the system, so each receptor can be found in about 0.1% of mouse neurons. Feinstein wondered if he could make the mouse’s nose pay more attention to particular scents by making certain odorant receptors more numerous. He and colleagues developed a string of DNA that, when injected into the nucleus of a fertilized mouse egg, appears to make olfactory neurons more likely to develop one particular odorant receptor than the others. This receptor, called M71, detects acetophenone, a chemical that smells like jasmine. When the team added four or more copies of the DNA sequence to a mouse egg, a full 1% of neurons carried it—10 times more than normal. © 2016 American Association for the Advancement of Science.
Keyword: Chemical Senses (Smell & Taste)
Link ID: 22412 - Posted: 07.08.2016
By Nicholas Bakalar A new study has identified a bacterial blueprint for chronic fatigue syndrome, offering further evidence that it is a physical disease with biological causes and not a psychological condition. Chronic fatigue syndrome is a condition that causes extreme and lasting fatigue, preventing people from taking part in even the most routine daily activities. There are no tests to confirm the diagnosis, which has prompted speculation that it is a psychological condition rather than a physical illness. In a study published in Microbiome, researchers recruited 48 people with C.F.S. and 39 healthy controls. Then they analyzed the quantity and variety of bacteria species in their stool. They also searched for markers of inflammation in their blood. The stool samples of those with C.F.S. had significantly lower diversity of species compared with the healthy people — a finding typical of inflammatory bowel disease as well. The scientists also discovered that people with C.F.S. had higher blood levels of lipopolysaccharides, inflammatory molecules that may indicate that bacteria have moved from the gut into the bloodstream, where they can produce various symptoms of disease. Using these criteria, the researchers were able to accurately identify more than 83 percent of C.F.S. cases based on the diversity of their gut bacteria and lipopolysaccharides in their blood. Finding a biomarker for C.F.S. has been an ongoing goal for researchers who hope to one day develop a diagnostic test for the condition. Still, the senior author of the study, Maureen R. Hanson, a professor of molecular biology at Cornell, said the bacteria blueprint in the new study is not yet a method of definitively diagnosing C.F.S. The importance of the finding, she said, is that it may offer new clues as to why people have these symptoms. © 2016 The New York Times Company
Keyword: Depression
Link ID: 22411 - Posted: 07.08.2016
Andrew Orlowski Special Report: If the fMRI brain-scanning fad is well and truly over, then many fashionable intellectual ideas look like collateral damage, too. What might generously be called the “British intelligentsia” – our chattering classes – fell particularly hard for the promise that “new discoveries in brain science” had revealed a new understanding of human behaviour, which shed new light on emotions, personality and decision making. But all they were looking at was statistical quirks. There was no science to speak of, the results of the experiments were effectively meaningless, and couldn’t support the (often contradictory) conclusions being touted. The fMRI machine was a very expensive way of legitimising an anecdote. This is an academic scandal that’s been waiting to explode for years, for plenty of warning signs were there. In 2005, Ed Vul, now a psychology professor at UCSD, and Hal Pashler – then and now at UCSD – were puzzled by a claim being made in a talk by a neuroscience researcher. He was explaining a study that purported to report a high correlation between a test subject’s brain activity and the speed with which they left the room after the study. “It seemed unbelievable to us that activity in this specific brain area could account for so much of the variance in walking speed,” explained Vul. “Especially so, because the fMRI activity was measured some two hours before the walking happened. So either activity in this area directly controlled motor action with a delay of two hours — something we found hard to believe — or there was something fishy going on.”
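The implausibly large brain–behaviour correlations that puzzled Vul and Pashler are easy to reproduce from pure noise. The short Python sketch below is illustrative only — the subject count, voxel count and noise model are assumptions for the example, not figures from any study discussed here — but it shows how searching thousands of noisy voxels for the strongest correlation and then reporting that same correlation inflates the apparent effect.

```python
import numpy as np

# Toy demonstration of selection bias ("non-independence") in brain-behaviour
# correlations. All numbers are invented for illustration.
rng = np.random.default_rng(0)

n_subjects = 20       # a small sample, as in many early fMRI studies
n_voxels = 10_000     # candidate voxels searched for an effect

behaviour = rng.normal(size=n_subjects)            # e.g. walking speed
voxels = rng.normal(size=(n_voxels, n_subjects))   # pure-noise "activity"

# Correlate every voxel with the behavioural measure, then keep the best one.
corrs = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])
print(f"strongest voxel-behaviour correlation: {corrs.max():.2f}")
# Typically prints r of roughly 0.7-0.8 even though every voxel is random noise:
# picking the winner and reporting its correlation overstates the true effect (zero).
```

One standard remedy is to select the region of interest with one half of the data and estimate the correlation on the other, independent half.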
Keyword: Brain imaging
Link ID: 22410 - Posted: 07.08.2016
By Louise Whiteley It’s an appealing idea: the notion that understanding the learning brain will tell us how to maximise children’s potential, bypassing the knotty complexities of education research. But promises to replace sociological complexity with biological certainty should always be treated with caution. Hilary and Steven Rose are deeply sceptical of claims that neuroscience can inform education and early intervention policy, and deeply concerned about the use of such claims to support neoliberal agendas. They argue that focusing on the brain encourages a focus on the individual divorced from their social context, and that this is easily aligned with a view of poor achievement as a personal moral failing, rather than a practical consequence of poverty and inequality. Whether or not you end up cheerleading for the book’s political agenda, its deconstruction of faulty claims about how neuroscience translates into the classroom is relevant to anyone interested in education. The authors tear apart the scientific logic of policy documents, interrogate brain-based interventions and dismantle prevalent neuro-myths. One of the book’s meatiest chapters deals with government reports advocating early intervention to increase “mental capital”, and thus reduce the future economic burden of deprived, underachieving brains. As we discover, the neuroscientific foundations of these reports are shaky. For instance, they tend to assume that the more synaptic connections between brain cells the better, and that poor environment in a critical early period permanently reduces the number of synapses. This makes early intervention focusing on the individual child and “poor parenting” seem like the obvious solution. But pruning of synapses is just as important to brain development, and learning involves the continual forming and reforming of synaptic connections. More is not necessarily better. And while an initial explosion in synapses can be irreversibly disrupted by extreme neglect, the evidence just isn’t there yet for extrapolating this to the more common kinds of childhood deprivation that such reports address.
Keyword: Development of the Brain; Learning & Memory
Link ID: 22409 - Posted: 07.08.2016
Tough love, interventions and 12-step programs are some of the most common methods of treating drug addiction, but journalist Maia Szalavitz says they're often counterproductive. "We have this idea that if we are just cruel enough and mean enough and tough enough to people with addiction, that they will suddenly wake up and stop, and that is not the case," she tells Fresh Air's Terry Gross. Szalavitz is the author of Unbroken Brain, a book that challenges traditional notions of addiction and treatment. Her work is based on research and experience; she was addicted to cocaine and heroin from the age of 17 until she was 23. Szalavitz is a proponent of "harm reduction" programs that take a nonpunitive approach to helping addicts and "treat people with addiction like human beings." In her own case, she says that getting "some kind of hope that I could change" enabled her to get the help she needed. On her criticism of 12-step programs: I think that 12-step programs are fabulous self-help. I think they can be absolutely wonderful as support groups. My issue with 12-step programs is that 80 percent of addiction treatment in this country consists primarily of indoctrinating people into 12-step programs, and no other medical care in the United States is like that. The data shows that cognitive behavioral therapy and motivational enhancement therapy are equally effective, and they have none of the issues around surrendering to a higher power, or prayer or confession. © 2016 npr
Keyword: Drug Abuse
Link ID: 22408 - Posted: 07.08.2016
By Jessica Hamzelou Teenage pregnancies have hit record lows in the Western world, largely thanks to increased use of contraceptives of all kinds. But strangely, we don’t really know what hormonal contraceptives – pills, patches and injections that contain synthetic sex hormones – are doing to the developing bodies and brains of teenage girls. You’d be forgiven for assuming that we do. After all, the pill has been around for more than 50 years. It has been through many large trials assessing its effectiveness and safety, as have the more recent patches and rings, and the longer-lasting implants and injections. But those studies were done in adult women – very few have been in teenage girls. And biologically, there is a big difference. At puberty, our bodies undergo an upheaval as our hormones go haywire. It isn’t until our 20s that things settle down and our brains and bones reach maturity. “If a drug is going to be given to 11 and 12-year-olds, it needs to be tested in 11 and 12-year-olds,” says Joe Brierley of the clinical ethics committee at Great Ormond Street Hospital in London. Legislation introduced in the US in 2003 and in Europe in 2007 was intended to make this happen, but a New Scientist investigation can reveal that there is still scant data on what contraceptives actually do to developing girls. The few studies that have been done suggest that tipping the balance of oestrogen and progesterone during this time may have far-reaching effects, although there is not yet enough data to say whether we should be alarmed. © Copyright Reed Business Information Ltd.
Keyword: Hormones & Behavior; Development of the Brain
Link ID: 22407 - Posted: 07.08.2016
Shefali Luthra Prescription drug prices continue to climb, putting the pinch on consumers. Some older Americans appear to be seeking an alternative to mainstream medicines that has become easier to get legally in many parts of the country. Just ask Cheech and Chong. Research published Wednesday found that states that legalized medical marijuana — which is sometimes recommended for symptoms like chronic pain, anxiety or depression — saw declines in the number of Medicare prescriptions for drugs used to treat those conditions and a dip in spending by Medicare Part D, which covers the cost on prescription medications. Because the prescriptions for drugs like opioid painkillers and antidepressants — and associated Medicare spending on those drugs — fell in states where marijuana could feasibly be used as a replacement, the researchers said it appears likely legalization led to a drop in prescriptions. That point, they said, is strengthened because prescriptions didn't drop for medicines such as blood-thinners, for which marijuana isn't an alternative. The study, which appears in Health Affairs, examined data from Medicare Part D from 2010 to 2013. It is the first study to examine whether legalization of marijuana changes doctors' clinical practice and whether it could curb public health costs. The findings add context to the debate as more lawmakers express interest in medical marijuana. This year, Ohio and Pennsylvania passed laws allowing the drug for therapeutic purposes, making it legal in 25 states, plus Washington, D.C. The approach could also come to a vote in Florida and Missouri this November. A federal agency is considering reclassifying medical marijuana under national drug policy to make it more readily available. © 2016 npr
Keyword: Drug Abuse
Link ID: 22406 - Posted: 07.07.2016
By ERICA GOODE Irving Gottesman, a pioneer in the field of behavioral genetics whose work on the role of heredity in schizophrenia helped transform the way people thought about the origins of serious mental illness, died on June 29 at his home in Edina, Minn., a suburb of Minneapolis. He was 85. His wife, Carol, said he died while taking an afternoon nap. Although Dr. Gottesman had some health problems, she said, his death was unexpected, and several of his colleagues said they received emails from him earlier that day. Dr. Gottesman was perhaps best known for a study of schizophrenia in British twins he conducted with another researcher, James Shields, at the Maudsley Hospital in London in the 1960s. The study, which found that identical twins were more likely than fraternal twins to share a diagnosis of schizophrenia, provided strong evidence for a genetic component to the illness and challenged the notion that it was caused by bad mothering, the prevailing view at the time. But the findings also underscored the contribution of a patient’s environment: If genes alone were responsible for schizophrenia, the disorder should afflict both members of every identical pair; instead, it appeared in both twins in only about half of the identical pairs in the study. This interaction between nature and nurture, Dr. Gottesman believed, was critical to understanding human behavior, and he warned against tilting too far in one direction or the other in explaining mental illness or in accounting for differences in personality or I.Q. © 2016 The New York Times Company
Keyword: Schizophrenia; Genes & Behavior
Link ID: 22405 - Posted: 07.07.2016
By Emily Rosenzweig Life deals most of us a consistent stream of ego blows, be they failures at work, social slights, or unrequited love. Social psychology has provided decades of insight into just how adept we are at defending ourselves against these psychic threats. We discount negative feedback, compare ourselves favorably to those who are worse off than us, attribute our failures to others, place undue value on our own strengths, and devalue opportunities denied to us–all in service of protecting and restoring our sense of self-worth. As a group, this array of motivated mental processes that support mood repair and ego defense has been called the “psychological immune system.” Particularly striking to social psychologists is our ability to remain blind to our use of these motivated strategies, even when it is apparent to others just how biased we are. However there are times when we either cannot remain blind to our own psychological immune processes, or where we may find ourselves consciously wanting to use them expressly for the purpose of restoring our ego or our mood. What then? Can we believe a conclusion we reach even when we know that we arrived at it in a biased way? For example, imagine you’ve recently gone through a breakup and want to get over your ex. You decide to make a mental list of all of their character flaws in an effort to feel better about the relationship ending. A number of prominent social psychologists have suggested you’re out of luck—knowing that you’re focusing only on your ex’s worst qualities prevents you from believing the conclusion you’ve come to that you’re better off without him or her. In essence, they argue that we must remain blind to our own biased mental processes in order to reap their ego-restoring benefits. And in many ways this closely echoes the position that philosophers like Mele have taken about the possibility of agentic self-deception. © 2016 Scientific American
Keyword: Attention; Consciousness
Link ID: 22404 - Posted: 07.07.2016
DAVID GREENE, HOST: Nearly one-quarter of all Americans reach for a bottle of acetaminophen every single week. Many of you might know this drug as Tylenol. It's a pain killer that can take the edge off a headache or treat you when you have a fever. It also might have another effect. And let's talk about this with NPR social science correspondent Shankar Vedantam. And, Shankar, straight out, is this going to make me not want to take Tylenol, what you're about to tell me? VEDANTAM: It might make you not want to take Tylenol when you're talking with me, David. GREENE: Oh, even more interesting. VEDANTAM: (Laughter) I was speaking with Dominik Mischkowski. He's currently a researcher at the National Institutes of Health. He recently conducted a couple of double blind experiments. These are experiments where the volunteers are given either sugar pills or Tylenol, but neither the volunteers nor the researchers know which volunteers are getting which pill. Mischkowski and his advisers at Ohio State University, Jennifer Crocker and Baldwin Way, they played loud noises for the volunteers. Not surprisingly, volunteers given Tylenol experienced less physical discomfort than volunteers given the placebo. © 2016 npr
Keyword: Pain & Touch
Link ID: 22403 - Posted: 07.07.2016
By Patrick Monahan Animals like cuttlefish and octopuses can rapidly change color to blend into the background and dazzle prospective mates. But there’s only one problem: As far as we know, they can’t see in color. Unlike our eyes, the eyes of cephalopods—cuttlefish, octopuses, and their relatives—contain just one kind of color-sensitive protein, apparently restricting them to a black and white view of the world. But a new study shows how they might make do. By rapidly focusing their eyes at different depths, cephalopods could be taking advantage of a lensing property called “chromatic blur.” Each color of light has a different wavelength—and because lenses bend some wavelengths more than others, one color of light shining through a lens can be in focus while another is still blurry. So with the right kind of eye, a quick sweep of focus would let the viewer figure out the actual color of an object based on when it blurs. The off-center pupils of many cephalopods—including the w-shaped pupils of cuttlefish—make this blurring effect more extreme, according to a study published this week in the Proceedings of the National Academy of Sciences. In that study, scientists built a computer model of an octopus eye and showed that—for an object at least one body length away—it could determine the object’s color just by changing focus. Because this is all still theoretical, the next step is testing whether live cephalopods actually see color this way—and whether any other “colorblind” animals might, too. © 2016 American Association for the Advancement of Science.
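The focus-sweep idea can be captured in a back-of-the-envelope model. The Python sketch below is a toy illustration only — the thin-lens geometry, dispersion coefficient, focal length and object distance are invented for the example, not taken from the published octopus-eye simulation — but it shows the principle: if each wavelength has its own focal plane, the accommodation setting that makes an object sharpest reveals its color, provided the viewing distance is known or assumed.

```python
import numpy as np

# Toy model of colour-from-focus (illustrative only; all parameters are invented).
def focal_length(wavelength_nm, f0_mm=10.0):
    # Hypothetical dispersion: longer (redder) wavelengths focus slightly farther away.
    return f0_mm * (1.0 + 2e-4 * (wavelength_nm - 550.0))

def image_distance(wavelength_nm, object_mm):
    # Thin-lens equation: 1/f = 1/d_object + 1/d_image
    f = focal_length(wavelength_nm)
    return 1.0 / (1.0 / f - 1.0 / object_mm)

object_mm = 200.0            # object roughly one body length away (assumed known)
true_wavelength = 620.0      # reddish light, unknown to the "animal"
sharpest_plane = image_distance(true_wavelength, object_mm)  # where best focus is found

# Sweep candidate wavelengths: the candidate whose predicted focal plane matches
# the accommodation setting that gave the sharpest image is the inferred colour.
candidates = np.arange(450.0, 651.0, 1.0)
mismatch = np.abs([image_distance(w, object_mm) - sharpest_plane for w in candidates])
print(f"inferred wavelength: {candidates[np.argmin(mismatch)]:.0f} nm")  # prints ~620 nm
```

The sketch recovers the 620 nm input exactly because it is noise-free; the published model asks the harder question of how well this works through a real pupil shape and retina.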
Keyword: Vision; Evolution
Link ID: 22402 - Posted: 07.07.2016
It's no secret that passwords aren't impenetrable. Even outside of major incidents like the celebrity nude photo hack, or when millions of passwords get released online, like what happened to Twitter recently, many of us may still be at risk of having our data compromised due to password-related security flaws. According to a June 2015 survey from mobile identity company TeleSign, two in five people were notified in the preceding year that their personal information was compromised or that they had been hacked or had their password stolen. But a new technology developed by the BioSense lab at the University of California, Berkeley could make all of that a thing of the past. Over the course of three years, the lab's co-director, John Chuang, and his graduate students have been working on a technology called passthoughts, which would use a person's brainwaves to identify them, according to CNET. The team has found that a passthought — something like a song that someone could sing in their mind — isn't easily forgotten and can achieve a 99-per-cent authentication accuracy rate. The device used to capture passthoughts resembles a telephone headset. It relies on EEG technology, detecting electrical activity in your brain via electrodes strapped to your head. And although Chuang's team say the technology has improved greatly in recent years, the awkwardness of the device might hinder it from being widely adopted. ©2016 CBC/Radio-Canada.
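For readers curious what authenticating with a thought might involve computationally, below is a minimal sketch of the generic enrol-then-verify pattern used in biometric template matching. It is an illustration built on stated assumptions, not the BioSense lab's actual algorithm: the 32-dimensional feature vectors, the cosine-similarity measure and the 0.95 threshold are all invented for the example.

```python
import numpy as np

# Illustrative template-matching sketch (not the BioSense lab's method):
# enrol a user by averaging feature vectors extracted from EEG recorded while
# they think their passthought, then verify new attempts by cosine similarity.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll(samples):
    # samples: list of 1-D feature vectors (e.g. band power per electrode)
    return np.mean(np.stack(samples), axis=0)

def verify(template, attempt, threshold=0.95):
    # threshold is an assumed value; a real system would tune it to balance
    # false accepts against false rejects
    return cosine_similarity(template, attempt) >= threshold

# Fake data standing in for extracted EEG features
rng = np.random.default_rng(1)
user_signature = rng.normal(size=32)
enrolment = [user_signature + rng.normal(scale=0.05, size=32) for _ in range(5)]
template = enroll(enrolment)

genuine = user_signature + rng.normal(scale=0.05, size=32)   # same user, same thought
impostor = rng.normal(size=32)                               # someone else entirely
print(verify(template, genuine))    # expected: True
print(verify(template, impostor))   # expected: False
```

Real systems face the problems glossed over here: extracting features that stay stable across sessions from noisy scalp EEG, and keeping both false accepts and false rejects low enough for everyday use.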
Keyword: Brain imaging
Link ID: 22401 - Posted: 07.06.2016
By Damian Garde A boy in Pakistan became a local legend as a street performer in recent years by traversing hot coals and lancing his arms with knives without so much as a wince. A thousand miles away, in China, lived a family wracked by excruciating bouts of inexplicable pain, passed down generation after generation. Scientists eventually determined what the boy and the family had in common: mutations in a gene that functions like an on-off switch for agony. Now, a bevy of biotech companies, including Genentech and Biogen, are staking big money on the idea that they can develop drugs that toggle that switch to relieve pain without the risk of addiction. The gene in question is SCN9A, which is responsible for producing a pain-related protein called Nav1.7. In patients who feel nothing, SCN9A is pretty much broken. In those who feel searing random pain, the gene is cranking out far too much Nav1.7. That discovery raises an obvious question: Can blocking Nav1.7 provide relief for many types of pain—and someday, perhaps, replace dangerous opioid therapies? “That’s the dream,” said David Hackos, a senior scientist at Genentech, which has two Nav1.7 treatments in the first stage of clinical development. It’s too early to make any sweeping predictions—and, indeed, a Pfizer pill targeting Nav1.7 has already stumbled—but the pharma industry clearly sees the potential for a blockbuster. © 2016 Scientific American
Keyword: Pain & Touch
Link ID: 22400 - Posted: 07.06.2016
Antidepressant use is at an all-time high in England, where the number of prescriptions filled for these drugs has doubled over the last decade. Figures from the Health and Social Care Information Centre show that in 2015, 61 million prescriptions were filled for antidepressant drugs, including citalopram and fluoxetine. This is up from 57.1 million in 2014, and 29.4 million back in 2005. “The reasons for this increase in antidepressant prescriptions could include a greater awareness of mental illness and more willingness to seek help,” says Gillian Connor of the charity Rethink Mental Illness. “However, with our overstretched and underfunded mental health services, too often antidepressants are the only treatment available.” UK guidelines suggest that people should be offered antidepressants as a first treatment option for moderate depression, but some critics argue that it would be better to steer people to talking therapies. In May, Andrew Green, a GP in East Riding and chairman of the British Medical Association’s Clinical and Prescribing Subcommittee, told a meeting of the UK’s All-Party Parliamentary Group for Prescribed Drug Dependence that one of the reasons doctors resort to prescribing antidepressants is because the waiting lists for talking therapies are so long. © Copyright Reed Business Information Ltd.
Keyword: Depression
Link ID: 22399 - Posted: 07.06.2016
By David Shultz Making eye contact for an appropriate length of time is a delicate social balancing act: too short, and we look shifty and untrustworthy; too long, and we seem awkward and overly intimate. To make this Goldilocks-like dilemma even trickier, it turns out that different people prefer to lock eyes for different amounts of time. So what’s too long or too short for one person might be just right for another. In a new study, published today in Royal Society Open Science, researchers asked a group of 498 volunteers to watch a video of an actor staring out from a screen and press a button if their gazes met for an uncomfortably long or short amount of time. During the test, the movement of their eyes and the size of their pupils were recorded with eye-tracking technology. On average, participants had a “preferred gaze duration” of 3.3 seconds, give or take 0.7 seconds. That’s a pretty narrow band for someone on their first date! Making things even harder, individual preferences can also be measured: Researchers found that how quickly people’s pupils dilated—an automatic reflex whenever someone looks into the eyes of another—was a good indicator of how long they wanted to gaze. The longer their preferred gaze, the faster their pupils expanded. The differences are so subtle, though, that they can only be seen with the eye-tracking software, so any attempt to game the system is likely to end up awkward rather than informative. © 2016 American Association for the Advancement of Science.
Keyword: Autism; Emotions
Link ID: 22398 - Posted: 07.06.2016
George Johnson A paper in The British Medical Journal in December reported that cognitive behavioral therapy — a means of coaxing people into changing the way they think — is as effective as Prozac or Zoloft in treating major depression. In ways no one understands, talk therapy reaches down into the biological plumbing and affects the flow of neurotransmitters in the brain. Other studies have found similar results for “mindfulness” — Buddhist-inspired meditation in which one’s thoughts are allowed to drift gently through the head like clouds reflected in still mountain water. Findings like these have become so commonplace that it’s easy to forget their strange implications. Depression can be treated in two radically different ways: by altering the brain with chemicals, or by altering the mind by talking to a therapist. But we still can’t explain how mind arises from matter or how, in turn, mind acts on the brain. This longstanding conundrum — the mind-body problem — was succinctly described by the philosopher David Chalmers at a recent symposium at The New York Academy of Sciences. “The scientific and philosophical consensus is that there is no nonphysical soul or ego, or at least no evidence for that,” he said. Descartes’s notion of dualism — mind and body as separate things — has long receded from science. The challenge now is to explain how the inner world of consciousness arises from the flesh of the brain. © 2016 The New York Times Company
Keyword: Consciousness
Link ID: 22397 - Posted: 07.05.2016