Chapter 14. Attention and Consciousness


By Gabriel Finkelstein Unlike Charles Darwin and Claude Bernard, who endure as heroes in England and France, Emil du Bois-Reymond is generally forgotten in Germany — no streets bear his name, no stamps portray his image, no celebrations are held in his honor, and no collections of his essays remain in print. Most Germans have never heard of him, and if they have, they generally assume that he was Swiss. But it wasn’t always this way. Du Bois-Reymond was once lauded as “the foremost naturalist of Europe,” “the last of the encyclopedists,” and “one of the greatest scientists Germany ever produced.” Contemporaries celebrated him for his research in neuroscience and his addresses on science and culture; in fact, the poet Jules Laforgue reported seeing his picture hanging for sale in German shop windows alongside those of the Prussian royal family. Those familiar with du Bois-Reymond generally recall his advocacy of understanding biology in terms of chemistry and physics, but during his lifetime he earned recognition for a host of other achievements. He pioneered the use of instruments in neuroscience, discovered the electrical transmission of nerve signals, linked structure to function in neural tissue, and posited the improvement of neural connections with use. He served as a professor, as dean, and as rector at the University of Berlin, directed the first institute of physiology in Prussia, was secretary of the Prussian Academy of Sciences, established the first society of physics in Germany, helped found the Berlin Society of Anthropology, oversaw the Berlin Physiological Society, edited the leading German journal of physiology, supervised dozens of researchers, and trained an army of physicians. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26811 - Posted: 11.11.2019

By Dan Falk At the moment, you’re reading these words and, presumably, thinking about what the words and sentences mean. Or perhaps your mind has wandered, and you’re thinking about dinner, or looking forward to bingeing the latest season of “The Good Place.” But you’re definitely experiencing something. How is that possible? Every part of you, including your brain, is made of atoms, and each atom is as lifeless as the next. Your atoms certainly don’t know or feel or experience anything, and yet you — a conglomeration of such atoms — have a rich mental life in which a parade of experiences unfolds one after another. The puzzle of consciousness has, of course, occupied the greatest minds for millennia. The philosopher David Chalmers has called the central mystery the “hard problem” of consciousness. Why, he asks, does looking at a red apple produce the experience of seeing red? And more generally: Why do certain arrangements of matter experience anything? Anyone who has followed the recent debates over the nature of consciousness will have been struck by the sheer variety of explanations on offer. Many prominent neuroscientists, cognitive scientists, philosophers, and physicists have put forward “solutions” to the puzzle — all of them wildly different from, and frequently contradicting, each other. “‘You,’ your joys and your sorrows, your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”

Keyword: Consciousness
Link ID: 26802 - Posted: 11.08.2019

By Erica Tennenhouse Live in the urban jungle long enough, and you might start to see things—in particular humanmade objects like cars and furniture. That’s what researchers found when they melded photos of artificial items with images of animals and asked 20 volunteers what they saw. The people, all of whom lived in cities, overwhelmingly noticed the manufactured objects whereas the animals faded into the background. To find out whether built environments can alter people’s perception, the researchers gathered hundreds of photos of animals and artificial objects such as bicycles, laptops, or benches. Then, they superimposed them to create hybrid images—like a horse combined with a table or a rhinoceros combined with a car. As volunteers watched the hybrids flash by on a screen, they categorized each as a small animal, a big animal, a small humanmade object, or a big humanmade object. Overall, volunteers showed a clear bias toward the humanmade objects, especially when they were big, the researchers report today in the Proceedings of the Royal Society B. The bias itself was a measure of how much the researchers had to visually “amp up” an image before participants saw it instead of its partner image. That bias suggests people’s perceptions are fundamentally altered by their environments, the researchers say. Humans often rely on past experiences to process new information—the classic example is mistaking a snake for a garden hose. But in this case, living in industrialized nations—where you are exposed to fewer “natural” objects—could change the way you view the world. © 2019 American Association for the Advancement of Science
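The excerpt doesn’t spell out how the hybrids were constructed, but the idea can be sketched with simple alpha blending, where a single mixing weight plays the role of the “amp up” knob used to measure bias. This is an illustration only; the grayscale blending, file names, and weights below are assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch only: blend an animal photo with a humanmade-object
# photo. Requires Pillow and NumPy; paths and weights are made-up examples.
import numpy as np
from PIL import Image

def make_hybrid(animal_path: str, object_path: str, w_object: float) -> Image.Image:
    """Return a grayscale hybrid; w_object in [0, 1] weights the object image."""
    animal = Image.open(animal_path).convert("L")
    obj = Image.open(object_path).convert("L").resize(animal.size)
    blend = ((1.0 - w_object) * np.asarray(animal, dtype=np.float32)
             + w_object * np.asarray(obj, dtype=np.float32))
    return Image.fromarray(blend.astype(np.uint8))

# Bias as a threshold: how far must w_object drop before a viewer reports
# the animal instead of the object? City dwellers needed a lower weight.
# make_hybrid("horse.jpg", "table.jpg", 0.50).save("hybrid_5050.png")
# make_hybrid("horse.jpg", "table.jpg", 0.35).save("hybrid_animal_boosted.png")
```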

Keyword: Vision; Attention
Link ID: 26793 - Posted: 11.06.2019

By Christof Koch “And death shall have no dominion”—Dylan Thomas, 1933 You will die, sooner or later. We all will. For everything that has a beginning has an end, an ineluctable consequence of the second law of thermodynamics. Few of us like to think about this troubling fact. But once birthed, the thought of oblivion can’t be completely erased. It lurks in the unconscious shadows, ready to burst forth. In my case, it was only as a mature man that I became fully mortal. I had wasted an entire evening playing an addictive, first-person shooter video game—running through subterranean halls, flooded corridors, nightmarishly turning tunnels, and empty plazas under a foreign sun, firing my weapons at hordes of aliens relentlessly pursuing me. I went to bed, easily falling asleep but awoke abruptly a few hours later. Abstract knowledge had turned to felt reality—I was going to die! Not right there and then but eventually. Evolution equipped our species with powerful defense mechanisms to deal with this foreknowledge—in particular, psychological suppression and religion. The former prevents us from consciously acknowledging or dwelling on such uncomfortable truths while the latter reassures us by promising never-ending life in a Christian heaven, an eternal cycle of Buddhist reincarnations or an uploading of our mind to the Cloud, the 21st-century equivalent of rapture for nerds. Death has no such dominion over nonhuman animals. Although they can grieve for dead offspring and companions, there is no credible evidence that apes, dogs, crows and bees have minds sufficiently self-aware to be troubled by the insight that one day they will be no more. Thus, these defense mechanisms must have arisen in recent hominin evolution, in less than 10 million years. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26780 - Posted: 11.01.2019

By Zeynep Tufekci More than a billion people around the world have smartphones, almost all of which come with some kind of navigation app such as Google or Apple Maps or Waze. This raises the age-old question we encounter with any technology: What skills are we losing? But also, crucially: What capabilities are we gaining? Talking with people who are good at finding their way around or adept at using paper maps, I often hear a lot of frustration with digital maps. North/south orientation gets messed up, and you can see only a small section at a time. And unlike with paper maps, one loses a lot of detail after zooming out. I can see all that and sympathize that it may be quite frustrating for the already skilled to be confined to a small phone screen. (Although map apps aren’t really meant to be replacements for paper maps, which appeal to our eyes, but are actually designed to be heard: “Turn left in 200 feet. Your destination will be on the right.”) But consider what digital navigation aids have meant for someone like me. Despite being a frequent traveler, I’m so terrible at finding my way that I still use Google Maps almost every day in the small town where I have lived for many years. What looks like an inferior product to some has been a significant expansion of my own capabilities. I’d even call it life-changing. Part of the problem is that reading paper maps requires a specific skill set. There is nothing natural about them. In many developed nations, including the U.S., one expects street names and house numbers to be meaningful referents, and instructions such as “go north for three blocks and then west” make sense to those familiar with these conventions. In Istanbul, in contrast, where I grew up, none of those hold true. For one thing, the locals rarely use street names. Why bother when a government or a military coup might change them—again. House and apartment numbers often aren’t sequential either because after buildings 1, 2 and 3 were built, someone squeezed in another house between 1 and 2, and now that’s 4. But then 5 will maybe get built after 3, and 6 will be between 2 and 3. Good luck with 1, 4, 2, 6, 5, and so on, sometimes into the hundreds, in jumbled order. Besides, the city is full of winding, ancient alleys that intersect with newer avenues at many angles. © 2019 Scientific American

Keyword: Attention; Learning & Memory
Link ID: 26768 - Posted: 10.30.2019

by Emily Anthes The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched by the U.S. National Institutes of Health (NIH) in 2013, has a lofty goal: to unravel the cellular basis of cognition and behavior. Since the initiative’s launch, the NIH has doled out about $1 billion to researchers who are developing tools and technologies to map, measure and observe the brain’s neural circuits. Along the way, the agency has also tried to explore the ethical implications of this research. Khara Ramos, who directs the neuroethics program at the NIH’s National Institute of Neurological Disorders and Stroke, described the emerging field of neuroethics today at the 2019 Society for Neuroscience annual meeting in Chicago, Illinois. Spectrum: Was discussion about ethics part of the BRAIN Initiative from the beginning? Khara Ramos: We knew that we needed to do something with neuroethics, but it took time for us to figure out what exactly, in part because neuroethics is a relatively new field. Bioethics is a broad field that covers all aspects of biomedicine, but there isn’t specialization of bioethics in kidney research or pulmonary research the way there is in neuroscience research, and that’s really because the brain is so intimately connected with who we are. Neuroscience research raises these unique ethical questions, such as: How might new neurotechnologies alter fundamental notions of agency or autonomy or identity? We’re starting to focus on data sharing and privacy from a philosophical, conceptual perspective: Is there something unique about brain data that is different from, for instance, genetic data? How do researchers themselves feel about data sharing and privacy? And how does the public view it? For instance, is my social security number more or less sensitive than the kinds of neural data that somebody might be able to get if I were participating in a clinical trial? © 2019 Simons Foundation

Keyword: Autism; Attention
Link ID: 26725 - Posted: 10.21.2019

Ian Sample, Science editor Warning: this story is about death. You might want to click away now. That’s because, researchers say, our brains do their best to keep us from dwelling on our inevitable demise. A study found that the brain shields us from existential fear by categorising death as an unfortunate event that only befalls other people. “The brain does not accept that death is related to us,” said Yair Dor-Ziderman, at Bar Ilan University in Israel. “We have this primal mechanism that means when the brain gets information that links self to death, something tells us it’s not reliable, so we shouldn’t believe it.” Being shielded from thoughts of our future death could be crucial for us to live in the present. The protection may switch on in early life as our minds develop and we realise death comes to us all. “The moment you have this ability to look into your own future, you realise that at some point you’re going to die and there’s nothing you can do about it,” said Dor-Ziderman. “That goes against the grain of our whole biology, which is helping us to stay alive.” To investigate how the brain handles thoughts of death, Dor-Ziderman and colleagues developed a test that involved producing signals of surprise in the brain. They asked volunteers to watch faces flash up on a screen while their brain activity was monitored. The person’s own face or that of a stranger flashed up on screen several times, followed by a different face. On seeing the final face, the brain flickered with surprise because the image clashed with what it had predicted. © 2019 Guardian News & Media Limited
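The surprise logic behind that face sequence can be made concrete with a toy predictor: a frequency-based model assigns rising probability to the repeated face, so the deviant final face carries high surprisal (-log2 p). The model and numbers below are illustrative assumptions, not the study’s analysis.

```python
# Toy prediction-error demo, not the study's analysis: surprisal falls as
# the same face repeats, then jumps for the deviant face.
import math
from collections import Counter

ALPHABET = ("self", "stranger")

def surprisal(counts: Counter, stimulus: str) -> float:
    """Shannon surprisal -log2(p) under add-one-smoothed frequency estimates."""
    total = sum(counts[a] + 1 for a in ALPHABET)
    return -math.log2((counts[stimulus] + 1) / total)

seen = Counter()
for _ in range(6):                     # the same face flashes repeatedly...
    print("self    ", round(surprisal(seen, "self"), 2))  # 1.0 falling to 0.22
    seen["self"] += 1
print("stranger", round(surprisal(seen, "stranger"), 2))  # deviant: 3.0 bits
```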

Keyword: Attention; Emotions
Link ID: 26721 - Posted: 10.19.2019

By Sara Reardon Brain scientists can watch neurons fire and communicate. They can map how brain regions light up during sensation, decision-making, and speech. What they can't explain is how all this activity gives rise to consciousness. Theories abound, but their advocates often talk past each other and interpret the same set of data differently. "Theories are very flexible," says Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington. "Like vampires, they're very difficult to slay." Now, the Templeton World Charity Foundation (TWCF), a nonprofit best known for funding research at the intersection of science and religion, hopes to narrow the debate with experiments that directly pit theories of consciousness against each other. The first phase of the $20 million project, launched this week at the Society for Neuroscience meeting in Chicago, Illinois, will compare two theories of consciousness by scanning the brains of participants during cleverly designed tests. Proponents of each theory have agreed to admit it is flawed if the outcomes go against them. Head-to-head contests are rare in basic science. "It's a really outlandish project," says principal investigator Lucia Melloni, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. But understanding consciousness has become increasingly important for researchers seeking to communicate with locked-in patients, determine whether artificial intelligence systems can become conscious, or explore whether animals experience consciousness the way humans do. To winnow the theories, TWCF took inspiration from a 1919 experiment in which physicist Arthur Eddington pitted Albert Einstein's theory of general relativity against Isaac Newton's gravitational theory. Eddington measured how the Sun's gravity caused light from nearby stars to shift during a solar eclipse—and Einstein won. © 2019 American Association for the Advancement of Science

Keyword: Consciousness
Link ID: 26715 - Posted: 10.17.2019

Subhash Kak Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption goes, then, that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that. So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain. Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data. Brains don’t operate like computers Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. © 2010–2019, The Conversation US, Inc.

Keyword: Consciousness
Link ID: 26714 - Posted: 10.17.2019

Patricia Churchland Three myths about morality remain alluring: only humans act on moral emotions, moral precepts are divine in origin, and learning to behave morally goes against our thoroughly selfish nature. Converging data from many sciences, including ethology, anthropology, genetics, and neuroscience, have challenged all three of these myths. First, self-sacrifice, given the pressing needs of close kin or conspecifics to whom they are attached, has been documented in many mammalian species—wolves, marmosets, dolphins, and even rodents. Birds display it too. In sharp contrast, reptiles show no hint of this impulse. Second, until very recently, hominins lived in small groups with robust social practices fostering well-being and survival in a wide range of ecologies. The idea of a divine lawgiver likely played no part in their moral practices for some two million years, emerging only with the advent of agriculture and larger communities where not everyone knew everyone else. The divine lawgiver idea is still absent from some large-scale religions, such as Confucianism and Buddhism. Third, it is part of our genetic heritage to care for kith and kin. Although self-sacrifice is common in termites and bees, the altruistic behavior of mammals and birds is vastly more flexible, variable, and farsighted. Attachment to others, mediated by powerful brain hormones, is the biological platform for morality. © 1986–2019 The Scientist.

Keyword: Consciousness; Emotions
Link ID: 26678 - Posted: 10.08.2019

Alex Smith When children are diagnosed with attention deficit hyperactivity disorder, stimulant medications like Ritalin or Adderall are usually the first line of treatment. The American Academy of Pediatrics issued new guidelines on Monday that uphold the central role of medication, accompanied by behavioral therapy, in ADHD treatment. However, some parents, doctors and researchers who study kids with ADHD say they are disappointed that the new guidelines don't recommend behavioral treatment first for more children, as some recent research has suggested might lead to better outcomes. When 6-year-old Brody Knapp of Kansas City, Mo., was diagnosed with ADHD last year, his father, Brett, was skeptical. Brett didn't want his son taking pills. "You hear of losing your child's personality, and they become a shell of themselves, and they're not that sparkling little kid that you love," Brett says. "I didn't want to lose that with Brody, because he's an amazing kid." Brody's mother, Ashley, had other ideas. She's a school principal and has ADHD herself. "I was all for stimulants at the very, very beginning," Ashley says, "just because I know what they can do to help a neurological issue such as ADHD." More and more families have been facing the same dilemma. The prevalence of diagnosed ADHD has shot up in the U.S. in the past two decades; 1 in 10 children now has that diagnosis. The updated guidelines from the AAP recommend that children with ADHD should also be screened for other conditions, and monitored closely. But the treatment recommendations regarding medication are essentially unchanged from the previous guidelines, which were published in 2011. © 2019 npr

Keyword: ADHD; Drug Abuse
Link ID: 26657 - Posted: 10.01.2019

Jon Hamilton Too much physical exertion appears to make the brain tired. That's the conclusion of a study of triathletes published Thursday in the journal Current Biology. Researchers found that after several weeks of overtraining, athletes became more likely to choose immediate gratification over long-term rewards. At the same time, brain scans showed the athletes had decreased activity in an area of the brain involved in decision-making. The finding could explain why some elite athletes see their performance decline when they work out too much — a phenomenon known as overtraining syndrome. The distance runner Alberto Salazar, for example, experienced a mysterious decline after winning the New York Marathon three times and the Boston Marathon once in the early 1980s. Salazar's times fell off even though he was still in his mid-20s and training more than ever. "Probably [it was] something linked to his brain and his cognitive capacities," says Bastien Blain, an author of the study and a postdoctoral fellow at University College London. (Salazar didn't respond to an interview request for this story.) Blain was part of a team that studied 37 male triathletes who volunteered to take part in a special training program. "They were strongly motivated to be part of this program, at least at the beginning," Blain says. Half of the triathletes were instructed to continue their usual workouts. The rest were told to increase their weekly training by 40%. The result was a training program so intense that these athletes began to perform worse on tests of maximal output. After three weeks, all the participants were put in a brain scanner and asked a series of questions designed to reveal whether a person is more inclined to choose immediate gratification or a long-term reward. "For example, we ask, 'Do you prefer $10 now or $60 in six months,' " Blain says. © 2019 npr
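Choices like “$10 now or $60 in six months” are commonly scored with a hyperbolic discounting model, V = A / (1 + kD), where a larger k means steeper devaluation of delayed rewards. The article doesn’t say which model the researchers fit, so the sketch below, with made-up discount rates, is purely illustrative.

```python
# Illustrative scoring of an intertemporal choice under hyperbolic
# discounting, V = A / (1 + k*D). The discount rates k are invented here;
# the study's actual model and parameters are not given in the excerpt.

def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Subjective present value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

def choose(immediate: float, delayed: float, delay_days: float, k: float) -> str:
    """Pick whichever option has the higher subjective value."""
    return "now" if immediate >= discounted_value(delayed, delay_days, k) else "wait"

# "$10 now or $60 in six months": a steep discounter takes the $10,
# a shallow discounter waits. Overtrained athletes leaned toward "now".
print(choose(10, 60, 180, k=0.05))    # -> "now"  (60 is worth only ~6.0 today)
print(choose(10, 60, 180, k=0.005))   # -> "wait" (60 is worth ~31.6 today)
```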

Keyword: Attention
Link ID: 26656 - Posted: 09.28.2019

Alison Abbott A prominent German neuroscientist committed scientific misconduct in research in which he claimed to have developed a brain-monitoring technique able to read certain thoughts of paralysed people, Germany’s main research agency, the DFG, has found. The DFG’s investigation into Niels Birbaumer’s high-profile work found that data in two papers were incomplete and that the scientific analysis was flawed — although it did not comment on whether the approach was valid. In a 19 September statement, the agency, which funded some of the work, said it was imposing some of its most severe sanctions on Birbaumer, who has positions at the University of Tübingen in Germany and the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The DFG has banned Birbaumer from applying for its grants and from serving as a DFG evaluator for five years. The agency has also recommended the retraction of the two papers published in PLoS Biology, and says that it will ask him to return the grant money that he used to generate the data underpinning the papers. “The DFG has found scientific misconduct on my part and has imposed sanctions. I must therefore accept that I was unable to refute the allegations made against me,” Birbaumer said in a statement e-mailed to Nature in response to the DFG’s findings. In a subsequent phone conversation with Nature, Birbaumer added that he could not comment further on the findings because the DFG has not yet provided him with specific details on the reasoning behind the decisions. Birbaumer says he stands by his studies, which he says, “show that it is possible to communicate with patients who are completely paralysed, through computer-based analysis of blood flow and brain currents”. © 2019 Springer Nature Limited

Keyword: Consciousness; Brain imaging
Link ID: 26636 - Posted: 09.23.2019

The mysterious ailments experienced by some 40 Canadian and U.S. diplomats and their families while stationed in Cuba may have had nothing to do with sonic "attacks" identified in earlier studies. According to a new Canadian study, obtained exclusively by Radio-Canada's investigative TV program Enquête, the cause could instead be neurotoxic agents used in pesticide fumigation. A number of Canadians and Americans living in Havana fell victim to an unexplained illness starting in late 2016, complaining of concussion-like symptoms, including headaches, dizziness, nausea and difficulty concentrating. Some described hearing a buzzing or high-pitched sounds before falling sick. In the wake of the health problems experienced over the past three years, Global Affairs Canada commissioned a clinical study by a team of multidisciplinary researchers in Halifax, affiliated with the Brain Repair Centre, Dalhousie University and the Nova Scotia Health Authority. "The working hypothesis actually came only after we had most of the results," Dr. Alon Friedman, the study's lead author, said in an interview. The researchers identified a damaged region of the brain that is responsible for memory, concentration and sleep-and-wake cycle, among other things, and then looked at how this region could come to be injured. "There are very specific types of toxins that affect these kinds of nervous systems ... and these are insecticides, pesticides, organophosphates — specific neurotoxins," said Friedman. "So that's why we generated the hypothesis that we then went to test in other ways." Twenty-six individuals participated in the study, including a control group of people who never lived in Havana. ©2019 CBC/Radio-Canada

Keyword: Neurotoxins; Attention
Link ID: 26627 - Posted: 09.20.2019

By Kenneth Shinozuka What is consciousness? In a sense, this is one of the greatest mysteries in the universe. Yet in another, it’s not an enigma at all. If we define consciousness as the feeling of what it’s like to subjectively experience something, then there is nothing more deeply familiar. Most of us know what it’s like to feel the pain of a headache, to empathize with another human being, to see the color blue, to hear the soaring melodies of a symphony, and so on. In fact, as philosopher Galen Strawson insightfully pointed out in a New York Times opinion piece, consciousness is “the only thing in the universe whose ultimate intrinsic nature we can claim to know.” This is a crucial point. We don’t have direct access to the outer world. Instead we experience it through the filter of our consciousness. We have no idea what the color blue really looks like “out there,” only how it appears to us “in here.” Furthermore, as some cognitive scientists like Donald Hoffman have argued in recent years, external reality is likely to be far different from our perceptions of it. The human brain has been optimized, through the process of evolution, to model reality in the way that’s most conducive to its survival, not in the way that most faithfully represents the world. Science has produced an outstandingly accurate description of the outer world, but it has told us very little, if anything, about our internal consciousness. With sufficient knowledge of physics, I can calculate all the forces acting on the chair in front of me, but I don’t know what “forces” or “laws” are giving rise to my subjective experience of the chair. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26607 - Posted: 09.13.2019

Salvatore Domenic Morgera The human brain sends hundreds of billions of neural signals each second. It’s an extraordinarily complex feat. A healthy brain must establish an enormous number of correct connections and ensure that they remain accurate for the entire period of the information transfer – that can take seconds, which in “brain time” is pretty long. How does each signal get to its intended destination? The challenge for your brain is similar to what you’re faced with when trying to engage in conversation at a noisy cocktail party. You’re able to focus on the person you’re talking to and “mute” the other discussions. This phenomenon is selective hearing – what’s called the cocktail party effect. When everyone at a large, crowded party talks at roughly the same loudness, the average sound level of the person you’re speaking with is about equal to the average level of all the other partygoers’ chatter combined. If it were a satellite TV system, this roughly equal balance of desired signal and background noise would result in poor reception. Nevertheless, this balance is good enough to let you understand conversation at a bustling party. How does the human brain do it, distinguishing among billions of ongoing “conversations” within itself and locking on to a specific signal for delivery? My team’s research into the neurological networks of the brain shows there are two activities that support its ability to establish reliable connections in the presence of significant biological background noise. Although the brain’s mechanisms are quite complex, these two activities act as what an electrical engineer calls a matched filter - a processing element used in high-performance radio systems, and now known to exist in nature. © 2010–2019, The Conversation US, Inc.
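The radio-engineering concept Morgera invokes is easy to demonstrate: a matched filter cross-correlates a noisy received signal with a known template and looks for the correlation peak, pulling out a signal buried in noise as loud as the signal itself. The sketch below illustrates the engineering idea only, with an arbitrary pulse and noise level; it is not a model of neural circuitry.

```python
# Matched-filter demo with an arbitrary pulse: correlate the noisy signal
# with the known template and find the peak. Engineering illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Known template: a short sinusoidal pulse (the "voice" we want to lock onto).
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))

# Received signal: the pulse buried at sample 400 in noise whose level
# roughly matches the pulse itself, like party chatter as loud as the speaker.
signal = rng.normal(0.0, 1.0, 1000)
signal[400:500] += template

# Matched filtering: convolve with the time-reversed template, which is
# the same as cross-correlating with the template itself.
response = np.correlate(signal, template, mode="valid")
print("pulse located near sample:", np.argmax(np.abs(response)))  # ~400
```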

Keyword: Attention
Link ID: 26604 - Posted: 09.12.2019

Bahar Gholipour The death of free will began with thousands of finger taps. In 1964, two German scientists monitored the electrical activity of a dozen people’s brains. Each day for several months, volunteers came into the scientists’ lab at the University of Freiburg to get wires fixed to their scalp from a showerhead-like contraption overhead. The participants sat in a chair, tucked neatly in a metal tollbooth, with only one task: to flex a finger on their right hand at whatever irregular intervals pleased them, over and over, up to 500 times a visit. The purpose of this experiment was to search for signals in the participants’ brains that preceded each finger tap. At the time, researchers knew how to measure brain activity that occurred in response to events out in the world—when a person hears a song, for instance, or looks at a photograph—but no one had figured out how to isolate the signs of someone’s brain actually initiating an action. The experiment’s results came in squiggly, dotted lines, a representation of changing brain waves. In the milliseconds leading up to the finger taps, the lines showed an almost undetectably faint uptick: a wave that rose for about a second, like a drumroll of firing neurons, then ended in an abrupt crash. This flurry of neuronal activity, which the scientists called the Bereitschaftspotential, or readiness potential, was like a gift of infinitesimal time travel. For the first time, they could see the brain readying itself to create a voluntary movement.
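The readiness potential was far too faint to see on any single trial; it emerged only when many movement-locked recordings were averaged, letting the noise cancel while the slow pre-movement buildup survived. Here is a sketch of that back-averaging on synthetic data; the amplitudes and timings are illustrative assumptions, not the 1964 recordings.

```python
# Synthetic back-averaging demo: a ~5 microvolt pre-movement ramp hidden in
# ~30 microvolt EEG noise becomes visible after averaging 500 finger taps.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 500, 1000          # 500 taps, 1 s of pre-movement EEG
t = np.linspace(-1.0, 0.0, n_samples)    # seconds before the finger tap

ramp = 5.0 * (t + 1.0)                   # slow buildup peaking at movement onset
epochs = ramp + rng.normal(0.0, 30.0, size=(n_trials, n_samples))

average = epochs.mean(axis=0)            # noise shrinks by ~sqrt(n_trials)
print("averaged noise level: ~%.1f uV" % (30.0 / np.sqrt(n_trials)))
print("recovered amplitude at onset: %.1f uV (true peak: 5.0 uV)" % average[-1])
# Plotting `average` against `t` shows the drumroll-like rise before t = 0.
```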

Keyword: Consciousness
Link ID: 26602 - Posted: 09.11.2019

By Tam Hunt How do you know your dog is conscious? Well, she wags her tail when she’s happy, bounces around like a young human child when excited, and yawns when sleepy—among many other examples of behaviors that convince us (most of us, at least) that dogs are quite conscious in ways that are similar to, but not the same as, human consciousness. Most of us are okay attributing emotions, desires, pain and pleasure—which is what I mean by consciousness in this context—to dogs and many other pets. What about further down the chain? Is a mouse conscious? We can apply similar tests for “behavioral correlates of consciousness” like those I’ve just mentioned, but, for some of us, the mice behaviors observed will be considerably less convincing than for dogs in terms of there being an inner life for the average mouse. What about an ant? What behaviors do ants engage in that might make us think an individual ant is at least a little bit conscious? Or is it not conscious at all? Let me now turn the questions around: how do I know you, my dear reader, are conscious? If we met, I’d probably introduce myself and hear you say your name and respond to my questions and various small talk. You might be happy to meet me and smile or shake my hand vigorously. Or you might get a little anxious at meeting someone new and behave awkwardly. All of these behaviors would convince me that you are in fact conscious much like I am, and not just faking it! Now here’s the broader question: How can we know anybody or any animal or any thing is actually conscious and not just faking it? The nature of consciousness makes it by necessity a wholly private affair. The only consciousness I can know with certainty is my own. Everything else is inference. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26557 - Posted: 08.31.2019

By Anil K. Seth On the 10th of April this year Pope Francis, President Salva Kiir of South Sudan and former rebel leader Riek Machar sat down together for dinner at the Vatican. They ate in silence, the start of a two-day retreat aimed at reconciliation from a civil war that has killed some 400,000 people since 2013. At about the same time in my laboratory at the University of Sussex in England, Ph.D. student Alberto Mariola was putting the finishing touches to a new experiment in which volunteers experience being in a room that they believe is there but that is not. In psychiatry clinics across the globe, people arrive complaining that things no longer seem “real” to them, whether it is the world around them or their own selves. In the fractured societies in which we live, what is real—and what is not—seems to be increasingly up for grabs. Warring sides may experience and believe in different realities. Perhaps eating together in silence can help because it offers a small slice of reality that can be agreed on, a stable platform on which to build further understanding. We need not look to war and psychosis to find radically different inner universes. In 2015 a badly exposed photograph of a dress tore across the Internet, dividing the world into those who saw it as blue and black (me included) and those who saw it as white and gold (half my lab). Those who saw it one way were so convinced they were right—that the dress truly was blue and black or white and gold—that they found it almost impossible to believe that others might perceive it differently. We all know that our perceptual systems are easy to fool. The popularity of visual illusions is testament to this phenomenon. Things seem to be one way, and they are revealed to be another: two lines appear to be different lengths, but when measured they are exactly the same; we see movement in an image we know to be still. The story usually told about illusions is that they exploit quirks in the circuitry of perception, so that what we perceive deviates from what is there. Implicit in this story, however, is the assumption that a properly functioning perceptual system will render to our consciousness things precisely as they are. © 2019 Scientific American

Keyword: Consciousness; Vision
Link ID: 26549 - Posted: 08.29.2019

By John Horgan At the beginning of my book Mind-Body Problems, I describe one of my earliest childhood memories: I am walking near a river on a hot summer day. My left hand grips a fishing rod, my right a can of worms. One friend walks in front of me, another behind. We’re headed to a spot on the river where we can catch perch, bullheads and large-mouth bass. Weeds bordering the path block my view of the river, but I can smell its dank breath and feel its chill on my skin. The seething of cicadas builds to a crescendo. I stop short. I’m me, I say. My friends don’t react, so I say, louder, I’m me. The friend before me glances over his shoulder and keeps walking, the friend behind pushes me. I resume walking, still thinking, I’m me, I’m me. I feel lonely, scared, exhilarated, bewildered. That moment was when I first became self-conscious, aware of myself as something weird, distinct from the rest of the world, demanding explanation. Or so I came to believe when I recalled the incident in subsequent decades. I never really talked about it, because it was hard to describe. It meant a lot to me, but I doubted it would mean much to anyone else. Then I learned that others have had similar experiences. One is Rebecca Goldstein, the philosopher and novelist, whom I profiled in Mind-Body Problems. Before interviewing Goldstein, I read her novel 36 Arguments for the Existence of God, and I came upon a passage in which the hero, Cass, a psychologist, recalls a recurrent “metaphysical seizure” or “vertigo” that struck him in childhood. Lying in bed, he was overcome by the improbability that he was just himself and no one else. “The more he tried to get a fix on the fact of being Cass here,” Goldstein writes, “the more the whole idea of it just got away from him.” Even as an adult, Cass kept asking himself, “How can it be that, of all things, one is this thing, so that one can say, astonishingly, ‘Here I am’”? © 2019 Scientific American

Keyword: Consciousness; Development of the Brain
Link ID: 26510 - Posted: 08.17.2019