Chapter 14. Attention and Consciousness


By Sara Reardon

To many people's eyes, artist Mark Rothko's enormous paintings are little more than swaths of color. Yet a Rothko can fetch nearly $100 million. Meanwhile, Pablo Picasso's warped faces fascinate some viewers and terrify others. Why do our perceptions of beauty differ so widely? The answer may lie in our brain networks.

Researchers have now developed an algorithm that can predict art preferences by analyzing how a person's brain breaks down visual information and decides whether a painting is "good." The findings show for the first time how intrinsic features of a painting combine with human judgment to give art value in our minds.

Most people—including researchers—consider art preferences to be all over the map, says Anjan Chatterjee, a neurologist and cognitive neuroscientist at the University of Pennsylvania who was not involved in the study. Many preferences are rooted in biology: sugary foods, for instance, help us survive. And people tend to share similar standards of beauty when it comes to human faces and landscapes. But when it comes to art, "there are relatively arbitrary things we seem to care about and value," Chatterjee says.

To figure out how the brain forms value judgments about art, computational neuroscientist Kiyohito Iigaya and his colleagues at the California Institute of Technology first asked more than 1,300 volunteers on the crowdsourcing website Amazon Mechanical Turk to rate a selection of 825 paintings from four Western genres: impressionism, cubism, abstract art and color field painting. Volunteers were all over the age of 18, but the researchers did not record their familiarity with art or their ethnic or national origin. © 2020 American Association for the Advancement of Science

Keyword: Vision; Attention
Link ID: 27062 - Posted: 02.21.2020

By Richard Klasco, M.D.

A. The theory of the "sugar high" has been debunked, yet the myth persists. The notion that sugar might make children behave badly first appeared in the medical literature in 1922. But the idea did not capture the public's imagination until Dr. Ben Feingold's best-selling book, "Why Your Child Is Hyperactive," was published in 1975. In his book, Dr. Feingold describes the case of a boy who might well be "patient zero" for the putative connection between sugar and hyperactivity:

[The mother's] fair-haired, wiry son loved soft drinks, candy and cake — not exactly abnormal for any healthy child. He also seemed to go completely wild after birthday parties and during family gatherings around holidays.

In the mid-'70s, stimulant drugs such as Ritalin and amphetamine were becoming popular for the treatment of attention deficit hyperactivity disorder. For parents who were concerned about drug side effects, the possibility of controlling hyperactivity by eliminating sugar proved to be an enticing, almost irresistible, prospect.

Some studies supported the theory. They suggested that high-sugar diets caused spikes in insulin secretion, which triggered adrenaline production and hyperactivity. But the data were weak and were soon questioned by other scientists.

An extraordinarily rigorous study settled the question in 1994. Writing in The New England Journal of Medicine, a group of scientists tested normal preschoolers and children whose parents described them as being sensitive to sugar. Neither the parents, the children nor the research staff knew which of the children were getting sugary foods and which were getting a diet sweetened with aspartame and other artificial sweeteners. Urine was tested to verify compliance with the diets. Nine different measures of cognitive and behavioral performance were assessed, with measurements taken at five-second intervals. © 2020 The New York Times Company

Keyword: ADHD; Obesity
Link ID: 27060 - Posted: 02.21.2020

Maternal obesity may increase a child's risk for attention-deficit hyperactivity disorder (ADHD), according to an analysis by researchers from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), part of the National Institutes of Health. The researchers found that mothers — but not fathers — who were overweight or obese before pregnancy were more likely to report that their children had been diagnosed with ADHD or had symptoms of hyperactivity, inattentiveness or impulsiveness at ages 7 to 8 years. The study appears in The Journal of Pediatrics.

The study team analyzed data from the NICHD Upstate KIDS Study, which recruited mothers of young infants and followed the children through age 8. In this analysis of nearly 2,000 children, women who were obese before pregnancy were approximately twice as likely to report that their child had ADHD or symptoms of hyperactivity, inattention or impulsiveness, compared with women who were of normal weight before pregnancy.

The authors suggest that, if their findings are confirmed by additional studies, healthcare providers may want to screen children of obese mothers for ADHD so that the children can be offered earlier interventions. The authors also note that healthcare providers could use evidence-based strategies to counsel women considering pregnancy on diet and lifestyle. Resources for plus-size pregnant women and their healthcare providers are available as part of NICHD's Pregnancy for Every Body initiative.

Keyword: ADHD; Development of the Brain
Link ID: 27055 - Posted: 02.20.2020

By Laura Sanders

SEATTLE — Live bits of brain look like any other piece of meat — pinkish, solid chunks of neural tissue. But unlike other kinds of tissue or organs donated for research, they hold the memories, thoughts and feelings of a person. "It is identified with who we are," Karen Rommelfanger, a neuroethicist at Emory University in Atlanta, said February 13 in a news conference at the annual meeting of the American Association for the Advancement of Science. That uniqueness raises a whole new set of ethical quandaries when it comes to experimenting with living brain tissue, she explained.

Such donations are crucial to emerging research aimed at teasing out answers to what makes us human. For instance, researchers at the Seattle-based Allen Institute for Brain Science conduct experiments on live brain tissue to get clues about how the cells in the human brain operate (SN: 8/7/19). These precious samples, normally discarded as medical waste, are donated by patients undergoing brain surgery and raced to the lab while the nerve cells are still viable.

Other experiments rely on systems that are less sophisticated than a human brain, such as brain tissue from other animals and organoids. These clumps of neural tissue, grown from human stem cells, are still a long way from mimicking the complexities of the human brain (SN: 10/24/19). But with major advances, these systems might one day be capable of much more advanced behavior, perhaps ultimately even awareness, a possibility that raises its own ethical issues. © Society for Science & the Public 2000–2020

Keyword: Consciousness; Emotions
Link ID: 27047 - Posted: 02.18.2020

By Tam Hunt

Strangely, modern science was long dominated by the idea that to be scientific means to remove consciousness from our explanations, in order to be "objective." This was the rationale behind behaviorism, a now-dead theory of psychology that took this trend to a perverse extreme. Behaviorists like John Watson and B.F. Skinner scrupulously avoided any discussion of what their human or animal subjects thought, intended or wanted, and focused instead entirely on behavior. They thought that because thoughts in other people's heads, or in animals, are impossible to know with certainty, we should simply ignore them in our theories. We can only be truly scientific, they asserted, if we focus solely on what can be directly observed and measured: behavior.

Erwin Schrödinger, one of the key architects of quantum mechanics in the early part of the 20th century, labeled this approach the "principle of objectivation" in his philosophical 1958 book Mind and Matter and expressed it clearly:

"By [the principle of objectivation] I mean … a certain simplification which we adopt in order to master the infinitely intricate problem of nature. Without being aware of it and without being rigorously systematic about it, we exclude the Subject of Cognizance from the domain of nature that we endeavor to understand. We step with our own person back into the part of an onlooker who does not belong to the world, which by this very procedure becomes an objective world."

Schrödinger did, however, identify both the problem and the solution. He recognized that "objectivation" is just a simplification, a temporary step in the progress of science in understanding nature. © 2020 Scientific American

Keyword: Consciousness
Link ID: 27044 - Posted: 02.18.2020

By Bernardo Kastrup

At least since the Enlightenment, in the 18th century, one of the most central questions of human existence has been whether we have free will. In the late 20th century, some thought neuroscience had settled the question. However, as it has recently become clear, such was not the case. The elusive answer is nonetheless foundational to our moral codes, criminal justice system, religions and even to the very meaning of life itself—for if every event of life is merely the predictable outcome of mechanical laws, one may question the point of it all.

But before we ask ourselves whether we have free will, we must understand what exactly we mean by it. A common and straightforward view is that, if our choices are predetermined, then we don't have free will; otherwise we do. Yet, upon more careful reflection, this view proves surprisingly inappropriate.

To see why, notice first that the prefix "pre" in "predetermined choice" is entirely redundant. Not only are all predetermined choices determined by definition, all determined choices can be regarded as predetermined as well: they always result from dispositions or necessities that precede them. Therefore, what we are really asking is simply whether our choices are determined.

In this context, a free-willed choice would be an undetermined one. But what is an undetermined choice? It can only be a random one, for anything that isn't fundamentally random reflects some underlying disposition or necessity that determines it. There is no semantic space between determinism and randomness that could accommodate choices that are neither. This is a simple but important point, for we often think—incoherently—of free-willed choices as neither determined nor random. © 2020 Scientific American

Keyword: Consciousness
Link ID: 27024 - Posted: 02.07.2020

By Charles Zanor

We all know people who say they have "no sense of direction," and our tendency is almost always to minimize such claims rather than take them at full force. Yet for some people that description is literally true, and true in all circumstances: if they take a single wrong turn on an established route they often become totally lost. This happens even when they are just a few miles from where they live. Ellen Rose had been a patient of mine for years before I realized that she had this lifelong learning disability.

I was made aware of it not long after I moved my psychology office from Agawam, Massachusetts, to Suffield, Connecticut, just five miles away. I gave Ellen a fresh set of directions from the Springfield, Massachusetts, area that took her south on Interstate 91 to Exit 47W, then across the Connecticut River to Route 159 in Suffield. I thought it would pose no problem at all for her.

A few minutes past her scheduled appointment time she called to say that she was lost. She had come south on Interstate 91 and had taken the correct exit, but she got confused and almost immediately hooked a right onto a road going directly north, bringing her back over the Massachusetts line to the town of Longmeadow. She knew this was wrong but did not know how to correct it, so I repeated the directions to get on 91 South and so on. Minutes passed, and then more minutes passed, and she called again to say that somehow she had driven by the exit she was supposed to take and was in Windsor, Connecticut. I kept her on the phone and guided her turn by turn to my office.

When I asked her why she hadn't taken Exit 47W, she said that she saw it but it came up sooner than she expected so she kept going. This condition—developmental topographic disorientation—didn't even have a formal name until 2009, when Giuseppe Iaria reported his first case in the journal Neuropsychologia.

To understand DTD it is best to begin by saying that there are two main ways that successful travelers navigate their environment. © 2020 Scientific American

Keyword: Learning & Memory; Development of the Brain
Link ID: 27021 - Posted: 02.05.2020

By Sue Halpern

During the 2016 Presidential primary, SPARK Neuro, a company that uses brain waves and other physiological signals to delve into the subliminal mind, decided to assess people's reactions to the Democratic candidates. The company had not yet launched, but its C.E.O., Spencer Gerrol, was eager to refine its technology. In a test designed to uncover how people are actually feeling, as opposed to how they say they are feeling, SPARK Neuro observed, among other things, that the cadence of Bernie Sanders's voice grabbed people's attention, while Hillary Clinton's measured tones were a bore.

A few months later, Katz Media Group, a radio-and-television-ad representative firm, hired Gerrol's group to study a cohort of undecided voters in Florida and Pennsylvania. The company's chief marketing officer, Stacey Schulman, picked SPARK Neuro because its algorithm took into account an array of neurological and physiological signals. "Subconscious emotion underlies conscious decision-making, which is interesting for the marketing world but critically important in the political realm," Schulman told me. "This measures how the body is responding, and it happens before you articulate it."

Neuromarketing—gauging consumers' feelings and beliefs by observing and measuring spontaneous, unmediated physiological responses to an ad or a sales pitch—is not new. "For a while, using neuroscience to do marketing was something of a fad, but it has been applied to commerce for a good ten years now," Schulman said. Nielsen, the storied media-insight company, has a neuromarketing division. Google has been promoting what it calls "emotion analytics" to advertisers.

A company called Realeyes claims to have trained artificial intelligence to "read emotions" through Webcams; another, called Affectiva, says that it "provides deep insight into unfiltered and unbiased consumer emotional response to brand content" through what it calls "facial coding." Similarly, ZimGo Polling, a South Korean company that operates in the United States, has paired facial-recognition technology with "automated emotion understanding" and natural language processing to give "insights into how people feel about real-time issues," and "thereby enables a virtual 24/7 town hall meeting with citizens." This is crucial, according to the C.E.O. of ZimGo's parent company, because "people vote on emotion." © 2020 Condé Nast

Keyword: Attention; Emotions
Link ID: 27017 - Posted: 02.04.2020

Roger E. Beaty, Ph.D.

When we think about creativity, the arts often come to mind. Most people would agree that writers, painters, and actors are all creative. This is what psychologists who study the subject refer to as Big-C creativity: publicly recognizable, professional-level performance. But what about creativity on a smaller scale? This is what researchers refer to as little-c creativity, and it is something that we all possess and express in our daily lives, from inventing new recipes to performing a do-it-yourself project to thinking of clever jokes to entertain the kids.

One way psychologists measure creative thinking is by asking people to think of uncommon uses for common objects, such as a cup or a cardboard box. Their responses can be analyzed on different dimensions, such as fluency (the total number of ideas) and originality. Surprisingly, many people struggle with this seemingly simple task, suggesting only uses that closely resemble the typical uses for the object. The same happens in other tests that demand ideas that go beyond what we already know (i.e., "thinking outside the box"). Such innovation tasks assess just one aspect of creativity. Many new tests are being developed that tap into other creative skills, from visuospatial abilities essential for design (like drawing) to scientific abilities important for innovation and discovery.

But where do creative ideas come from, and what makes some people more creative than others? Contrary to romantic notions of a purely spontaneous process, increasing evidence from psychology and neuroscience experiments indicates that creativity requires cognitive effort—in part, to overcome the distraction and "stickiness" of prior knowledge (remember how people think of common uses when asked to devise creative ones). In light of these findings, we can consider general creative thinking as a dynamic interplay between the brain's memory and control systems. Without memory, our minds would be a blank slate—not conducive to creativity, which requires knowledge and expertise. But without mental control, we wouldn't be able to push thinking in new directions and avoid getting stuck on what we already know. © 2020 The Dana Foundation
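The fluency and originality dimensions described above can be made concrete. The sketch below is purely illustrative: the function name, the sample data, and the choice of scoring originality by statistical infrequency (how rare a response is across the whole sample, one common convention) are assumptions, not the scoring scheme of any particular lab.

```python
from collections import Counter

def score_alternate_uses(responses_by_person):
    """Toy scoring for an uncommon-uses task.

    responses_by_person maps participant id -> list of proposed uses.
    Fluency  = number of distinct ideas a person gave.
    Originality = mean rarity of those ideas, where an idea given by
    everyone scores 0 and an idea given by one person scores (n-1)/n.
    """
    # Count how many participants gave each idea (sets ignore repeats).
    counts = Counter()
    for uses in responses_by_person.values():
        counts.update(set(uses))
    n = len(responses_by_person)

    scores = {}
    for person, uses in responses_by_person.items():
        distinct = set(uses)
        fluency = len(distinct)
        originality = (
            sum(1 - counts[u] / n for u in distinct) / fluency
            if fluency else 0.0
        )
        scores[person] = {"fluency": fluency,
                          "originality": round(originality, 2)}
    return scores

# Hypothetical responses for "uses for a cardboard box".
sample = {
    "p1": ["storage box", "hat"],
    "p2": ["storage box"],
    "p3": ["storage box", "sled", "cat fort"],
}
results = score_alternate_uses(sample)
```

On this toy sample, "storage box" (given by everyone) contributes nothing to originality, while unique ideas like "cat fort" raise it, mirroring the observation above that responses resembling an object's typical use are the least creative.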

Keyword: Attention
Link ID: 26979 - Posted: 01.22.2020

Jennifer Rankin in Brussels

A pioneering Belgian neurologist has been awarded €1m to fund further work in helping diagnose the most severe brain injuries, as he seeks to battle "the silent epidemic" and help people written off as "vegetative" who, it is believed, will never recover. Steven Laureys, head of the coma science group at Liège University hospital, plans to use the £850,000 award – larger than the Nobel prize – to improve the diagnosis of coma survivors labelled as being in a "persistent vegetative state". That is "a horrible term", he says, although still one widely used by the general public and many clinicians.

Laureys, who has spent more than two decades exploring the boundaries of human consciousness, prefers the term "unresponsive wakefulness" to describe people who are unconscious but show signs of being awake, such as opening their eyes or moving. These patients are often wrongly described as being in a coma, a condition that lasts only a few weeks, in which people are completely unresponsive.

"The old view was to consider consciousness, which was one of the biggest mysteries for science to solve, as all or nothing," he told the Guardian, shortly after he was awarded the Generet prize by Belgium's King Baudouin Foundation this week. He said that a third of the patients he treats at the Liège coma centre had been wrongly diagnosed as being in a vegetative state, despite signs of consciousness.

As a young doctor in the 1990s he was frustrated by the questions that torture the families of coma survivors: can their loved ones see or hear them? Can they feel anything, including pain? Laureys and his 30-strong team of engineers and clinicians have shown that some of those with a "vegetative state" diagnosis are minimally conscious, showing signs of awareness such as responding to commands with their eyes. © 2020 Guardian News & Media Limited

Keyword: Consciousness
Link ID: 26970 - Posted: 01.20.2020

Matthew Schafer and Daniela Schiller

How do animals, from rats to humans, intuit shortcuts when moving from one place to another? Scientists have discovered mental maps in the brain that help animals picture the best routes from an internalized model of their environments. Physical space is not all that is tracked by the brain's mapmaking capacities. Cognitive models of the environment may be vital to mental processes, including memory, imagination, making inferences and engaging in abstract reasoning. Most intriguing is the emerging evidence that maps may be involved in tracking the dynamics of social relationships: how distant or close individuals are to one another and where they reside within group hierarchies.

We are often told that there are no shortcuts in life. But the brain—even the brain of a rat—is wired in a way that completely ignores this kind of advice. The organ, in fact, epitomizes a shortcut-finding machine. The first indication that the brain has a knack for finding alternative routes was described in 1948 by Edward Tolman of the University of California, Berkeley.

Tolman performed a curious experiment in which a hungry rat ran across an unpainted circular table into a dark, narrow corridor. The rat turned left, then right, and then took another right and scurried to the far end of a well-lit narrow strip, where, finally, a cup of food awaited. There were no choices to be made. The rat had to follow the one available winding path, and so it did, time and time again, for four days.

On the fifth day, as the rat once again ran straight across the table into the corridor, it hit a wall—the path was blocked. The animal went back to the table and started looking for alternatives. Overnight, the circular table had turned into a sunburst arena. Instead of one track, there were now 18 radial paths to explore, all branching off from the sides of the table. After venturing out a few inches on a few different paths, the rat finally chose to run all the way down path number six, the one leading directly to the food. © 2020 Scientific American

Keyword: Attention
Link ID: 26961 - Posted: 01.15.2020

By Gareth Cook

One of science's most challenging problems is a question that can be stated easily: Where does consciousness come from? In his new book Galileo's Error: Foundations for a New Science of Consciousness, philosopher Philip Goff considers a radical perspective: What if consciousness is not something special that the brain does but is instead a quality inherent to all matter? It is a theory known as "panpsychism," and Goff guides readers through the history of the idea, answers common objections (such as "That's just crazy!") and explains why he believes panpsychism represents the best path forward. He answered questions from Mind Matters editor Gareth Cook.

Can you explain, in simple terms, what you mean by panpsychism?

In our standard view of things, consciousness exists only in the brains of highly evolved organisms, and hence consciousness exists only in a tiny part of the universe and only in very recent history. According to panpsychism, in contrast, consciousness pervades the universe and is a fundamental feature of it. This doesn't mean that literally everything is conscious. The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain's most basic parts.

It might be important to clarify what I mean by "consciousness," as that word is actually quite ambiguous. Some people use it to mean something quite sophisticated, such as self-awareness or the capacity to reflect on one's own existence. This is something we might be reluctant to ascribe to many nonhuman animals, never mind fundamental particles. But when I use the word consciousness, I simply mean experience: pleasure, pain, visual or auditory experience, et cetera. © 2020 Scientific American

Keyword: Consciousness
Link ID: 26959 - Posted: 01.15.2020

By Joseph Stern, M.D.

The bullet hole in the teenager's forehead was so small, it belied the damage already done to his brain. The injury was fatal. We knew this the moment he arrived in the emergency room. Days later, his body was being kept alive in the intensive care unit despite an exam showing that he was brain-dead and no blood was flowing to his brain. Eventually, all his organs failed and his heart stopped beating. But the nurses continued to care for the boy and his family, knowing he was already dead but trying to help the family members with the agonizing process of accepting his death.

This scenario occurs all too frequently in the neurosurgical I.C.U. Doctors often delay the withdrawal of life-sustaining supports such as ventilators and IV drips, and nurses continue these treatments — adhering to protocols, yet feeling internal conflict. A lack of consensus or communication among doctors, nurses and families often makes these situations more difficult for all involved.

Brain death is stark and final. When the patient's brain function has ceased, bodily death inevitably follows, no matter what we do. Continued interventions, painful as they may be, are necessarily of limited duration. We can keep a brain-dead patient's body alive for a few days at the most before his heart stops for good.

Trickier and much more common is the middle ground of a neurologically devastating injury without brain death. Here, decisions can be more difficult, and electing to continue or to withdraw treatment much more problematic. Inconsistent communication and support between medical staff members and families plays a role. A new field, neuropalliative care, seeks to focus "on outcomes important to patients and families" and "to guide and support patients and families through complex choices involving immense uncertainty and intensely important outcomes of mind and body." © 2020 The New York Times Company

Keyword: Consciousness
Link ID: 26958 - Posted: 01.14.2020

By John Horgan

Last month I participated in a symposium hosted by the Center for Theory & Research at Esalen, a retreat center in Big Sur, California. Fifteen men and women representing physics, psychology and other fields attempted to make sense of mystical and paranormal experiences, which are generally ignored by conventional, materialist science. The organizers invited me because of my criticism of hard-core materialism and interest in mysticism, but in a recent column I pushed back against ideas advanced at the meeting. Below, other attendees push back against me. My fellow speaker Bjorn Ekeberg, whose response is below, took the photos of Esalen, including the one of me beside a stream (I'm the guy on the right). -- John Horgan

Jeffrey Kripal, philosopher of religion at Rice University and author, most recently, of The Flip: Epiphanies of Mind and the Future of Knowledge (see our online chats here and here):

Thank you, John, for reporting on your week with us all. As one of the moderators of "Physics, Experience and Metaphysics," let me try to reply, briefly (and too simplistically), to your various points. First, let me begin with something that was left out of your generous summary: the key role of the imagination in so many exceptional or anomalous experiences. As you yourself pointed out with respect to your own psychedelic opening, this is no ordinary or banal "imagination." This is a kind of "super-imagination" that projects fantastic visionary displays that none of us could possibly come up with in ordinary states: this is a flying caped Superman to our bespectacled Clark Kent.

None of this, of course, implies that anything seen in these super-imagined states is literally true (like astral travel or ghosts) or non-human, but it does tell us something important about why the near-death or psychedelic experiencers commonly report that these visionary events are "more real" than ordinary reality (which is also, please note, partially imagined, if our contemporary neuroscience of perception is correct). Put in terms of a common metaphor that goes back to Plato, the fictional movies on the screen can ALL be different and, yes, of course, humanly and historically constructed, but the Light projecting them can be quite Real and the Same. Fiction and reality are in no way exclusive of one another in these paradoxical states. © 2020 Scientific American

Keyword: Consciousness
Link ID: 26953 - Posted: 01.13.2020

By John Horgan

I just spent a week at a symposium on the mind-body problem, the deepest of all mysteries. The mind-body problem—which encompasses consciousness, free will and the meaning of life—concerns who we really are. Are we matter, which just happens to give rise to mind? Or could mind be the basis of reality, as many sages have insisted?

The week-long powwow, called "Physics, Experience and Metaphysics," took place at Esalen Institute, the legendary retreat center in Big Sur, California. Fifteen men and women representing physics, psychology, philosophy, religious studies and other fields sat in a room overlooking the Pacific and swapped mind-body ideas. What made the conference unusual, at least for me, was the emphasis on what were called "exceptional experiences," involving telepathy, telekinesis, astral projection, past-life recall and mysticism.

I've been obsessed with mysticism since I was a kid. As defined by William James in The Varieties of Religious Experience, mystical experiences are breaches in your ordinary life, during which you encounter absolute reality—or, if you prefer, God. You believe, you know, you are seeing things the way they really are. These experiences are usually brief, lasting only minutes or hours. They can be triggered by trauma, prayer, meditation or drugs, or they may strike you out of the blue.

I've had mild mystical intuitions while sober, for example during a Buddhist retreat last year. But my most intense experience, by far, happened in 1981 while I was under the influence of a potent hallucinogen. I tried to come to terms with my experiences in my book Rational Mysticism, but my obsession endures. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26924 - Posted: 12.30.2019

By Sarah Bate Alice is six years old. She struggles to make friends at school and often sits alone in the playground. She loses her parents in the supermarket and approaches strangers at pickup. Once she became separated from her family on a trip to the zoo, and she now has an intense fear of crowded places. Alice has a condition called face blindness, also known as prosopagnosia. This difficulty in recognising facial identity affects 2 percent of the population. Like Alice, most of these people are born with the condition, although a small number acquire face-recognition difficulties after brain injury or illness. Unfortunately, face blindness seems largely resistant to improvement. Yet a very recent study offers more promising findings: children’s face-recognition skills substantially improved after they played a modified version of the game Guess Who? over a two-week period. In the traditional version of Guess Who?, two players see an array of 24 cartoon faces, and each selects a target. Both then take turns asking yes/no questions about the appearance of their opponent’s chosen face, typically inquiring about eye color, hairstyle and accessories such as hats or spectacles. The players use the answers to eliminate faces in the array; when only one remains, they can guess the identity of their opponent’s character. The experimental version of the game preserved this basic setup but used lifelike faces that differed only in the size or spacing of the eyes, nose or mouth. That is, the hairstyle and outer face shape were identical, and children had to read the faces solely on the basis of small differences between the inner features. This manipulation is thought to reflect a key processing strategy that underlies human face recognition: the ability to account not only for the size and shape of features but also the spacing between them. Evidence suggests this ability to process faces “holistically” is impaired in face blindness. The Guess Who? training program aimed to capitalize on this link. Children progressed through 10 levels of the game, with differences between the inner features becoming progressively less obvious. Children played for half an hour per day on any 10 days over a two-week period, advancing to the next level when they won the game on two consecutive rounds. © 2019 Scientific American
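The elimination mechanic described above, where each yes/no question about a feature removes the inconsistent candidates, can be sketched in a few lines. This is a toy illustration, not the study's software; the face list and feature names are invented for the example.

```python
# Toy sketch of Guess Who? elimination: each yes/no answer about a
# feature removes every candidate face inconsistent with that answer.
candidates = [
    {"name": "A", "eye_spacing": "wide", "nose": "large"},
    {"name": "B", "eye_spacing": "narrow", "nose": "large"},
    {"name": "C", "eye_spacing": "narrow", "nose": "small"},
]
secret = candidates[2]  # the opponent's chosen face

def ask(feature, value):
    """Answer a yes/no question about the secret face."""
    return secret[feature] == value

for feature, value in [("eye_spacing", "narrow"), ("nose", "small")]:
    answer = ask(feature, value)
    # Keep only faces whose feature value is consistent with the answer.
    candidates = [f for f in candidates if (f[feature] == value) == answer]

print([f["name"] for f in candidates])  # one face remains: the guess
```

Each informative question shrinks the candidate set, which is why the training version could control difficulty by making the distinguishing feature differences progressively subtler.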

Keyword: Attention
Link ID: 26921 - Posted: 12.27.2019

By John Horgan Philosophy has taken a beating lately, even, or especially, from philosophers, who are compulsive critics, even, especially, of their own calling. But bright young women and men still aspire to be full-time truth-seekers in our corrupt, capitalist world. Over the past five years, I have met a bunch of impressive young philosophers while doing research on the mind-body problem. Hedda Hassel Mørch, for example. I first heard Mørch (pronounced murk) speak in 2015 at a New York University workshop on integrated information theory, and I ran into her at subsequent events at NYU and elsewhere. She makes a couple of appearances (one anonymous) in my book Mind-Body Problems. We recently crossed tracks in online chitchat about panpsychism, which proposes that consciousness is a property of all matter, not just brains. I’m a panpsychism critic, she’s a proponent. Below Mørch answers some questions.—John Horgan Horgan: Why philosophy? And especially philosophy of mind? Mørch: I remember thinking at some point that if I didn’t study philosophy I would always be curious about what philosophers know. And even if it turned out that they know nothing then at least I would know I wasn’t missing anything. One reason I was attracted to philosophy of mind in particular was that it seemed like an area where philosophy clearly has some real and useful work to do. In other areas of philosophy, it might seem that many central questions can either be deflated or taken over by science. For example, in ethics, one might think there are no moral facts and so all we can do is figure out what we mean by the words “right” and “wrong”. And in metaphysics, questions such as “is the universe infinite?” can now, at least arguably, be understood as scientific questions.
But consciousness is a phenomenon which is obviously real, and the question of how it arises from the brain is clearly a substantive, not merely verbal question, which does not seem tractable by science as we know it. As David Chalmers says, science as we know it can only tackle the so-called easy problems of consciousness, not the hard problem. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26904 - Posted: 12.19.2019

By Gretchen Reynolds Top athletes’ brains are not as noisy as yours and mine, according to a fascinating new study of elite competitors and how they process sound. The study finds that the brains of fit, young athletes dial down extraneous noise and attend to important sounds better than those of other young people, suggesting that playing sports may change brains in ways that alter how well people sense and respond to the world around them. For most of us with normal hearing, of course, listening to and processing sounds are such automatic mental activities that we take them for granted. But “making sense of sound is actually one of the most complex jobs we ask of our brains,” says Nina Kraus, a professor and director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Ill., who oversaw the new study. Sound processing also can be a reflection of broader brain health, she says, since it involves so many interconnected areas of the brain that must coordinate to decide whether any given sound is familiar, what it means, if the body should respond and how a particular sound fits into the broader orchestration of other noises that constantly bombard us. For some time, Dr. Kraus and her collaborators have been studying whether some people’s brains perform this intricate task more effectively than others. By attaching electrodes to people’s scalps and then playing a simple sound, usually the spoken syllable “da,” at irregular intervals, they have measured and graphed electrical brain wave activity in people’s sound-processing centers. © 2019 The New York Times Company

Keyword: Attention
Link ID: 26901 - Posted: 12.18.2019

By Virginia Morell Dogs may not be able to count to 10, but even the untrained ones have a rough sense of how many treats you put in their food bowl. That’s the finding of a new study, which reveals that our canine pals innately understand quantities in much the same way we do. The study is “compelling and exciting,” says Michael Beran, a psychologist at Georgia State University in Atlanta who was not involved in the research. “It further increases our confidence that [these representations of quantity in the brain] are ancient and widespread among species.” The ability to rapidly estimate the number of sheep in a flock or ripened fruits on a tree is known as the “approximate number system.” Previous studies have suggested monkeys, fish, bees, and dogs have this talent. But much of this research has used trained animals that receive multiple tests and rewards. That leaves open the question of whether the ability is innate in these species, as it is in humans. In the new study, Gregory Berns, a neuroscientist at Emory University in Atlanta, and colleagues recruited 11 dogs from various breeds, including border collies, pitbull mixes, and Labrador golden retriever mixes, to see whether they could find brain activity associated with a sensitivity to numbers. The team, which pioneered canine brain scanning (by getting dogs to voluntarily enter a functional magnetic resonance imaging scanner and remain motionless), had their subjects enter the scanner, rest their heads on a block, and fix their eyes on a screen at the opposite end (see video, above). On the screen was an array of light gray dots on a black background whose number changed every 300 milliseconds. If dogs, like humans and nonhuman primates, have a dedicated brain region for representing quantities, their brains should show more activity there when the number of dots was dissimilar (three small dots versus 10 large ones) than when they were constant (four small dots versus four large dots). 
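The contrast the researchers relied on, where 3 versus 10 dots reads as dissimilar but 4 versus 4 does not, reflects how the approximate number system is thought to compare quantities by ratio rather than absolute difference (Weber's law). Here is a hedged illustration, not the study's analysis code; the Weber fraction of 0.25 is an arbitrary value chosen for the example.

```python
# Sketch of ratio-based numerosity discrimination (Weber's law):
# two quantities are discriminable when their relative difference
# exceeds an assumed Weber fraction (0.25 here is illustrative).
def numerically_dissimilar(n1, n2, weber_fraction=0.25):
    """Return True if the quantities should be easy to tell apart."""
    if n1 == n2:
        return False
    return abs(n1 - n2) / max(n1, n2) > weber_fraction

print(numerically_dissimilar(3, 10))  # True: large ratio difference
print(numerically_dissimilar(4, 4))   # False: identical quantities
```

Under this scheme, 3 versus 10 is far above threshold while 4 versus 4 is at zero, matching the "dissimilar" and "constant" conditions in the scanner.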
© 2019 American Association for the Advancement of Science.

Keyword: Attention; Evolution
Link ID: 26900 - Posted: 12.18.2019

Christof Koch A future where the thinking capabilities of computers approach our own is quickly coming into view. We feel ever more powerful machine-learning (ML) algorithms breathing down our necks. Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft. The birth of true artificial intelligence will profoundly affect humankind’s future, including whether it has one. The following quotes provide a case in point: “From the time the last great artificial intelligence breakthrough was reached in the late 1940s, scientists around the world have looked for ways of harnessing this ‘artificial intelligence’ to improve technology beyond what even the most sophisticated of today’s artificial intelligence programs can achieve.” “Even now, research is ongoing to better understand what the new AI programs will be able to do, while remaining within the bounds of today’s intelligence. Most AI programs currently programmed have been limited primarily to making simple decisions or performing simple operations on relatively small amounts of data.” These two paragraphs were written by GPT-2, a language bot I tried last summer. Developed by OpenAI, a San Francisco–based institute that promotes beneficial AI, GPT-2 is an ML algorithm with a seemingly idiotic task: presented with some arbitrary starter text, it must predict the next word. The network isn’t taught to “understand” prose in any human sense. Instead, during its training phase, it adjusts the internal connections in its simulated neural networks to best anticipate the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the connecting points between neurons.
When I entered the first few sentences of the article you are reading, the algorithm spewed out two paragraphs that sounded like a freshman’s effort to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases—not bad, really! Primed with the same text a second time, the algorithm comes up with something different. © 2019 Scientific American
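The training objective Koch describes, predicting the next word from preceding text, can be shown in miniature. The bigram counter below is a toy stand-in, vastly simpler than GPT-2's billion-connection network, but the task has the same shape; the corpus is invented for the example.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent continuation. GPT-2 optimizes the
# same objective, but with a deep neural network instead of counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat"
```

Because the prediction depends only on observed statistics, re-sampling from the full probability distribution (rather than taking the top word) would yield different continuations on each run, which is why GPT-2 produces different text when primed twice with the same input.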

Keyword: Consciousness; Robotics
Link ID: 26894 - Posted: 12.12.2019