Links for Keyword: Consciousness

Links 21 - 40 of 334

By George Musser They call it the hard problem of consciousness, but a better term might be the impossible problem of consciousness. The whole point is that the qualitative aspects of our conscious experience, or “qualia,” are inexplicable. They slip through the explanatory framework of science, which is reductive: It explains things by breaking them down into parts and describing how they fit together. Subjective experience has an intrinsic je ne sais quoi that can’t be decomposed into parts or explained by relating one thing to another. Qualia can’t be grasped intellectually. They can only be experienced firsthand. For the past five years or so, I’ve been trying to untangle the cluster of theories that attempt to explain consciousness, traveling the world to interview neuroscientists, philosophers, artificial-intelligence researchers, and physicists—all of whom have something to say on the matter. Most duck the hard problem, either bracketing it until neuroscientists explain brain function more fully or accepting that consciousness has no deeper explanation and must be wired into the base level of reality. Although I made it a point to maintain an outsider’s view of science in my reporting, staying out of academic debates and finding value in every approach, I find both positions defensible but dispiriting. I cling to the intuition that consciousness must have some scientific explanation that we can achieve. But how? It’s hard to imagine how science could possibly expand its framework to accommodate the redness of red or the awfulness of fingernails on a chalkboard. But there is another option: to suppose that we are misconstruing our experience in some way. We think that it has intrinsic qualities, but maybe on closer inspection it doesn’t. Not that this is an easy position to take. Two leading theories of consciousness take a stab at it. Integrated Information Theory (IIT) says that the neural networks in our head are conscious since neurons act together in harmony—they form collective structures with properties beyond those of the individual cells. If so, subjective experience isn’t primitive and unanalyzable; in principle, you could follow the network’s transitions and read its mind. “What IIT tries to do is completely avoid any intrinsic quality in the traditional sense,” the father of IIT, Giulio Tononi, told me. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28970 - Posted: 10.25.2023

By Hope Reese There is no free will, according to Robert Sapolsky, a biologist and neurologist at Stanford University and a recipient of the MacArthur Foundation “genius” grant. Dr. Sapolsky worked for decades as a field primatologist before turning to neuroscience, and he has spent his career investigating behavior across the animal kingdom and writing about it in books including “Behave: The Biology of Humans at Our Best and Worst” and “Monkeyluv, and Other Essays on Our Lives as Animals.” In his latest book, “Determined: A Science of Life Without Free Will,” Dr. Sapolsky confronts and refutes the biological and philosophical arguments for free will. He contends that we are not free agents, but that biology, hormones, childhood and life circumstances coalesce to produce actions that we merely feel were ours to choose. It’s a provocative claim, he concedes, but he would be content if readers simply began to question the belief, which is embedded in our cultural conversation. Getting rid of free will “completely strikes at our sense of identity and autonomy and where we get meaning from,” Dr. Sapolsky said, and this makes the idea particularly hard to shake. There are major implications, he notes: Absent free will, no one should be held responsible for their behavior, good or bad. Dr. Sapolsky sees this as “liberating” for most people, for whom “life has been about being blamed and punished and deprived and ignored for things they have no control over.” He spoke in a series of interviews about the challenges that free will presents and how he stays motivated without it. These conversations were edited and condensed for clarity. To most people, free will means being in charge of our actions. What’s wrong with that outlook? It’s a completely useless definition. When most people think they’re discerning free will, what they mean is somebody intended to do what they did: Something has just happened; somebody pulled the trigger. They understood the consequences and knew that alternative behaviors were available. But that doesn’t remotely begin to touch it, because you’ve got to ask: Where did that intent come from? That’s what happened a minute before, in the years before, and everything in between. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28967 - Posted: 10.17.2023

By Marco Giancotti I’m lying down in a white cylinder barely wider than my body, surrounded on all sides by a mass of sophisticated machinery the size of a small camper van. It’s an fMRI machine, one of the technological marvels of modern neuroscience. Two small inflatable cushions squeeze my temples, keeping my head still. “We are ready to begin the next batch of exercises,” I hear Dr. Horikawa’s gentle voice saying. We’re underground, in one of the laboratories of Tokyo University’s Faculty of Medicine, Hongo Campus. “Do you feel like proceeding?” “Yes, let’s go,” I answer. The machine sets in motion again. A powerful current grows inside the cryogenically cooled wires that coil around me, showering my head with radio waves, knocking the hydrogen atoms inside my head off their original spin axis, and measuring the rate at which the axis recovers afterward. To the sensors around me, I’m now as transparent as a glass of water. Every tiny change of blood flow anywhere inside my brain is being watched and recorded in 3-D. A few seconds pass, then a synthetic female voice speaks into my ears over the electronic clamor: “top hat.” I close my eyes and I imagine a top hat. A few seconds later a beep tells me I should rate the quality of my mental picture, which I do with a controller in my hand. The voice speaks again: “fire extinguisher,” and I repeat the routine. Next is “butterfly,” then “camel,” then “snowmobile,” and so on, for about 10 minutes, while the system monitors the activation of my brain synapses. For most people, this should be a rather simple exercise, perhaps even satisfying. For me, it’s a considerable strain, because I don’t “see” any of those things. For each and every one of the prompts, I rate my mental image “0” on a 0 to 5 scale, because as soon as I close my eyes, what I see are not everyday objects, animals, and vehicles, but the dark underside of my eyelids. I can’t willingly form the faintest of images in my mind. And, although it isn’t the subject of the current experiment, I also can’t conjure sounds, smells, or any other kind of sensory stimulation inside my head. I have what is called “aphantasia,” the absence of voluntary imagination of the senses. I know what a top hat is. I can describe its main characteristics. I can even draw an above-average impression of one on a piece of paper for you. But I can’t visualize it mentally. What’s wrong with me? © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28945 - Posted: 10.05.2023

By Anil Seth Earlier this month, the consciousness science community erupted into chaos. An open letter, signed by 124 researchers—some specializing in consciousness and others not—made the provocative claim that one of the most widely discussed theories in the field, Integrated Information Theory (IIT), should be considered “pseudoscience.” The uproar that followed sent consciousness social media into a doom spiral of accusation and recrimination, with the fallout covered in Nature, New Scientist, and elsewhere. Calling something pseudoscience is pretty much the strongest criticism one can make of a theory. It’s a move that should never be taken lightly, especially when more than 100 influential scientists and philosophers do it all at once. The open letter justified the charge primarily on the grounds that IIT has “commitments” to panpsychism—the idea that consciousness is fundamental and ubiquitous—and that the theory “as a whole” may not be empirically testable. A subsequent piece by one of the lead authors of the letter, Hakwan Lau, reframed the charge somewhat: that the claims made for IIT by its proponents and the wider media are not supported by empirical evidence. The brainchild of neuroscientist Giulio Tononi, IIT has been around for quite some time. Back in the late 1990s, Tononi published a paper in Science with the Nobel Laureate Gerald Edelman, linking consciousness to mathematical measures of complexity. This paper, which made a lasting impression on me, sowed the seeds of what later became IIT. Tononi published his first outline of the theory itself in 2004 and it has been evolving ever since, with the latest version—IIT 4.0—appearing earlier this year. The theory’s counterintuitive and deeply mathematical nature has always attracted controversy and criticism—including from myself and my colleagues—but it has certainly become prominent in consciousness science. A survey conducted at the main conference in the field—the annual meeting of the Association for the Scientific Study of Consciousness—found that nearly half of respondents considered it “definitely promising” or “probably promising,” and researchers in the field regularly identify it as one of four main theoretical approaches to consciousness. (The philosopher Tim Bayne did just this in our recent review paper on theories of consciousness for Nature Reviews Neuroscience.) © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28936 - Posted: 09.29.2023

By Dan Falk More than 400 years ago, Galileo showed that many everyday phenomena—such as a ball rolling down an incline or a chandelier gently swinging from a church ceiling—obey precise mathematical laws. For this insight, he is often hailed as the founder of modern science. But Galileo recognized that not everything was amenable to a quantitative approach. Such things as colors, tastes and smells “are no more than mere names,” Galileo declared, for “they reside only in consciousness.” These qualities aren’t really out there in the world, he asserted, but exist only in the minds of creatures that perceive them. “Hence if the living creature were removed,” he wrote, “all these qualities would be wiped away and annihilated.” Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only in consciousness”—the red of a sunset, say, or the bitter taste of a lemon—has proven far more difficult. Neuroscientists have identified a number of neural correlates of consciousness—brain states associated with specific mental states—but have not explained how matter forms minds in the first place. As philosopher David Chalmers asked: “How does the water of the brain turn into the wine of consciousness?” He famously dubbed this quandary the “hard problem” of consciousness. Scholars recently gathered to debate the problem at Marist College in Poughkeepsie, N.Y., during a two-day workshop focused on an idea known as panpsychism. The concept proposes that consciousness is a fundamental aspect of reality, like mass or electrical charge. The idea goes back to antiquity—Plato took it seriously—and has had some prominent supporters over the years, including psychologist William James and philosopher and mathematician Bertrand Russell. Lately it is seeing renewed interest, especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s Error, which argues forcefully for the idea. Goff, of the University of Durham in England, organized the recent event along with Marist philosopher Andrei Buckareff, and it was funded through a grant from the John Templeton Foundation. In a small lecture hall with floor-to-ceiling windows overlooking the Hudson River, roughly two dozen scholars probed the possibility that perhaps it’s consciousness all the way down.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28928 - Posted: 09.27.2023

Mariana Lenharo A letter, signed by 124 scholars and posted online last week, has caused an uproar in the consciousness research community. It claims that a prominent theory describing what makes someone or something conscious — called the integrated information theory (IIT) — should be labelled “pseudoscience”. Since its publication on 15 September in the preprint repository PsyArXiv, the letter has some researchers arguing over the label and others worried it will increase polarization in a field that has grappled with issues of credibility in the past. “I think it’s inflammatory to describe IIT as pseudoscience,” says neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, adding that he disagrees with the label. “IIT is a theory, of course, and therefore may be empirically wrong,” says neuroscientist Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, and a proponent of the theory. But he says that it makes its assumptions — for example, that consciousness has a physical basis and can be mathematically measured — very clear. There are dozens of theories that seek to understand consciousness — everything that a human or non-human experiences, including what they feel, see and hear — as well as its underlying neural foundations. IIT has often been described as one of the central theories, alongside others, such as global neuronal workspace theory (GNW), higher-order thought theory and recurrent processing theory. It proposes that consciousness emerges from the way information is processed within a ‘system’ (for instance, networks of neurons or computer circuits), and that systems that are more interconnected, or integrated, have higher levels of consciousness. Hakwan Lau, a neuroscientist at Riken Center for Brain Science in Wako, Japan, and one of the authors of the letter, says that some researchers in the consciousness field are uncomfortable with what they perceive as a discrepancy between IIT’s scientific merit and the considerable attention it receives from the popular media because of how it is promoted by advocates. “Has IIT become a leading theory because of academic acceptance first, or is it because of the popular noise that kind of forced the academics to give it acknowledgement?”, Lau asks. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28918 - Posted: 09.21.2023

Mariana Lenharo Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”. Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were? To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California. The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”. Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28893 - Posted: 08.30.2023

By Elizabeth Finkel Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now. Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner. The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain. Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it? All Rights Reserved © 2023

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28887 - Posted: 08.26.2023

By Elizabeth Finkel In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know? Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.” Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.” Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28881 - Posted: 08.24.2023

Max Kozlov Dead in California but alive in New Jersey: that was the status of 13-year-old Jahi McMath after physicians in Oakland, California, declared her brain dead in 2013, after complications from a tonsillectomy. Unhappy with the care that their daughter received and unwilling to remove life support, McMath’s family moved with her to New Jersey, where the law allowed them to lodge a religious objection to the declaration of brain death and keep McMath connected to life-support systems for another four and a half years. Prompted by such legal discrepancies and a growing number of lawsuits around the United States, a group of neurologists, physicians, lawyers and bioethicists is attempting to harmonize state laws surrounding the determination of death. They say that imprecise language in existing laws — as well as research done since the laws were passed — threatens to undermine public confidence in how death is defined worldwide. “It doesn’t really make a lot of sense,” says Ariane Lewis, a neurocritical care clinician at NYU Langone Health in New York City. “Death is something that should be a set, finite thing. It shouldn’t be something that’s left up to interpretation.” Since 2021, a committee in the Uniform Law Commission (ULC), a non-profit organization in Chicago, Illinois, that drafts model legislation for states to adopt, has been revising its recommendation for the legal determination of death. The drafting committee hopes to clarify the definition of brain death, determine whether consent is required to test for it, specify how to handle family objections and provide guidance on how to incorporate future changes to medical standards. The broader membership of the ULC will offer feedback on the first draft of the revised law at a meeting on 26 July. After members vote on it, the text could be ready for state legislatures to consider by the middle of next year. But as the ULC revision process has progressed, clinicians who were once eager to address these issues have become increasingly worried. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 14: Attention and Higher Cognition
Link ID: 28853 - Posted: 07.22.2023

By Anil Seth In 1870, Alfred Russel Wallace wagered £500—a huge sum in those days—that he could prove the flat-Earther John Hampden wrong. Wallace duly did so, but the aggrieved Hampden never paid up. Since then, a lively history of scientific wagers has ensued—many of them instigated by Stephen Hawking. Just last month in New York, the most famous recent wager was settled: a 25-year-old bet over one of the last great mysteries in science and philosophy. The bettors were neuroscientist Christof Koch and philosopher David Chalmers, both known for their pioneering work on the nature of consciousness. Chalmers won. Koch paid up. Back in the late 1990s, consciousness science was full of renewed promise. Koch—a natural optimist—believed that 25 years was more than enough time for scientists to uncover the neural correlates of consciousness: those patterns of brain activity that underlie each and every one of our conscious experiences. Chalmers, a philosopher and therefore something of a pessimist by profession, demurred. In 1998, the pair staked a crate of fine wine on the outcome. The bet was finally called at the annual meeting of the Association for the Scientific Study of Consciousness in New York a couple of weeks ago. Koch graciously handed Chalmers a bottle of Madeira on the conference stage. While much more is known about consciousness today than in the ’90s, its true neural correlates—and indeed a consensus theory of consciousness—still elude us. What helped resolve the wager was the outcome, or rather the lack of a decisive outcome, of an “adversarial collaboration” organized by a consortium called COGITATE. Adversarial collaborations encourage researchers from different theoretical camps to jointly design experiments that can distinguish between their theories. In this case, the theories in question were integrated information theory (IIT), the brainchild of Giulio Tononi, and the neuronal global workspace theory (GWT), championed by Stanislas Dehaene. The two scientists made predictions, based on their respective theories, about what kinds of brain activity would be recorded in an experiment in which participants looked at a series of images—but neither predicted outcome fully played out. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28845 - Posted: 07.06.2023

By Carl Zimmer On a muggy June night in Greenwich Village, more than 800 neuroscientists, philosophers and curious members of the public packed into an auditorium. They came for the first results of an ambitious investigation into a profound question: What is consciousness? To kick things off, two friends — David Chalmers, a philosopher, and Christof Koch, a neuroscientist — took the stage to recall an old bet. In June 1998, they had gone to a conference in Bremen, Germany, and ended up talking late one night at a local bar about the nature of consciousness. For years, Dr. Koch had collaborated with Francis Crick, a biologist who shared a Nobel Prize for uncovering the structure of DNA, on a quest for what they called the “neural correlate of consciousness.” They believed that every conscious experience we have — gazing at a painting, for example — is associated with the activity of certain neurons essential for the awareness that comes with it. Dr. Chalmers liked the concept, but he was skeptical that they could find such a neural marker any time soon. Scientists still had too much to learn about consciousness and the brain, he figured, before they could have a reasonable hope of finding it. Dr. Koch wagered his friend that scientists would find a neural correlate of consciousness within 25 years. Dr. Chalmers took the bet. The prize would be a few bottles of fine wine. Recalling the bet from the auditorium stage, Dr. Koch admitted that it had been fueled by drinks and enthusiasm. “When you’re young, you’ve got to believe things will be simple,” he said. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28839 - Posted: 07.01.2023

By John Horgan A neuroscientist clad in gold and red and a philosopher sheathed in black took the stage before a packed, murmuring auditorium at New York University on Friday night. The two men were grinning, especially the philosopher. They were here to settle a bet made in the late 1990s on one of science’s biggest questions: How does a brain, a lump of matter, generate subjective conscious states such as the blend of anticipation and nostalgia I felt watching these guys? Before I reveal their bet’s resolution, let me take you through its twisty backstory, which reveals why consciousness remains a topic of such fascination and frustration to anyone with even the slightest intellectual leaning. I first saw Christof Koch, the neuroscientist, and David Chalmers, the philosopher, butt heads in 1994 at a now legendary conference in Tucson, Ariz., called Toward a Scientific Basis for Consciousness. Koch was a star of the meeting. Together with biophysicist Francis Crick, he had been proclaiming in Scientific American and elsewhere that consciousness, which philosophers have wrestled with for millennia, was scientifically tractable. Just as Crick and geneticist James Watson solved heredity by decoding DNA’s double helix, scientists would crack consciousness by discovering its neural underpinnings, or “correlates.” Or so Crick and Koch claimed. They even identified a possible basis for consciousness: brain cells firing in synchrony 40 times per second. Not everyone in Tucson was convinced. Chalmers, younger and then far less well known than Koch, argued that neither 40-hertz oscillations nor any other strictly physical process could account for why perceptions are accompanied by conscious sensations, such as the crushing boredom evoked by a jargony lecture. I have a vivid memory of the audience perking up when Chalmers called consciousness “the hard problem.” That was the first time I heard that now famous phrase.

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Higher Cognition
Link ID: 28836 - Posted: 06.28.2023

By Steven Strogatz Neuroscience has made progress in deciphering how our brains think and perceive our surroundings, but a central feature of cognition is still deeply mysterious: namely, that many of our perceptions and thoughts are accompanied by the subjective experience of having them. Consciousness, the name we give to that experience, can’t yet be explained — but science is at least beginning to understand it. In this episode, the consciousness researcher Anil Seth and host Steven Strogatz discuss why our perceptions can be described as a “controlled hallucination,” how consciousness played into the internet sensation known as “the dress,” and how people at home can help researchers catalog the full range of ways that we experience the world. Steven Strogatz (00:03): I’m Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we’re going to be discussing the mystery of consciousness. The mystery being that when your brain cells fire in certain patterns, it actually feels like something. It might feel like jealousy, or a toothache, or the memory of your mother’s face, or the scent of her favorite perfume. But other patterns of brain activity don’t really feel like anything at all. Right now, for instance, I’m probably forming some memories somewhere deep in my brain. But the process of that memory formation is imperceptible to me. I can’t feel it. It doesn’t give rise to any sort of internal subjective experience at all. In other words, I’m not conscious of it. (00:54) So how does consciousness happen? How is it related to physics and biology? Are animals conscious? What about plants? Or computers, could they ever be conscious? And what is consciousness exactly? My guest today, Dr. Anil Seth, studies consciousness in his role as the co-director of the Sussex Center for Consciousness Science at the University of Sussex, near Brighton, England. The Center brings together all sorts of disciplinary specialists, from neuroscientists to mathematicians to experts in virtual reality, to study the conscious experience. Dr. Seth is also the author of the book Being You: A New Science of Consciousness. He joins us from studios in Brighton, England. Anil, thanks for being here. All Rights Reserved © 2023

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28812 - Posted: 06.03.2023

By Alessandra Buccella, Tomáš Dominik Imagine you are shopping online for a new pair of headphones. There is an array of colors, brands and features to look at. You feel that you can pick any model that you like and are in complete control of your decision. When you finally click the “add to shopping cart” button, you believe that you are doing so out of your own free will. But what if we told you that while you thought that you were still browsing, your brain activity had already highlighted the headphones you would pick? That idea may not be so far-fetched. Though neuroscientists likely could not predict your choice with 100 percent accuracy, research has demonstrated that some information about your upcoming action is present in brain activity several seconds before you even become conscious of your decision. As early as the 1960s, studies found that when people perform a simple, spontaneous movement, their brain exhibits a buildup in neural activity—what neuroscientists call a “readiness potential”—before they move. In the 1980s, neuroscientist Benjamin Libet reported this readiness potential even preceded a person’s reported intention to move, not just their movement. In 2008 a group of researchers found that some information about an upcoming decision is present in the brain up to 10 seconds in advance, long before people reported making the decision of when or how to act. These studies have sparked questions and debates. To many observers, these findings debunked the intuitive concept of free will. After all, if neuroscientists can infer the timing or choice of your movements long before you are consciously aware of your decision, perhaps people are merely puppets, pushed around by neural processes unfolding below the threshold of consciousness. But as researchers who study volition from both a neuroscientific and philosophical perspective, we believe that there’s still much more to this story. We work with a collaboration of philosophers and scientists to provide more nuanced interpretations—including a better understanding of the readiness potential—and a more fruitful theoretical framework in which to place them. The conclusions suggest “free will” remains a useful concept, although people may need to reexamine how they define it. © 2023 Scientific American
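
The readiness potential described above is, methodologically, an event-related average: EEG is cut into epochs time-locked to each self-initiated movement, and averaging across epochs cancels background noise to reveal the slow pre-movement buildup. Below is a minimal Python sketch of that averaging step on simulated data; the sampling rate, epoch count, and drift shape are illustrative assumptions, not values from the studies cited in the article.

    import numpy as np

    # Simulate 200 EEG epochs around self-initiated movements: a 2-second
    # window ending at movement onset, sampled at 250 Hz (assumed values).
    rng = np.random.default_rng(0)
    fs, n_epochs = 250, 200
    t = np.linspace(-2.0, 0.0, 2 * fs)  # seconds relative to movement onset

    # Each simulated epoch = noise + a slow negative drift beginning about
    # 1 s before movement, standing in for the readiness potential.
    drift_uv = np.where(t > -1.0, -8.0 * (t + 1.0), 0.0)
    epochs = rng.normal(0.0, 10.0, (n_epochs, t.size)) + drift_uv

    # Averaging the time-locked epochs cancels noise and reveals the buildup.
    rp = epochs.mean(axis=0)
    print(f"baseline (-2.0 to -1.5 s):  {rp[t < -1.5].mean():+.2f} uV")
    print(f"pre-movement (last 0.2 s):  {rp[t >= -0.2].mean():+.2f} uV")

The single-trial decoding studies the authors mention (such as the 2008 result) go a step further: rather than averaging, they apply pattern classifiers to individual trials, which is what allows information about an upcoming choice to be read out seconds in advance.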

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28635 - Posted: 01.18.2023

By Dennis Overbye If you could change the laws of nature, what would you change? Maybe it’s that pesky speed-of-light limit on cosmic travel — not to mention war, pestilence and the eventual asteroid that has Earth’s name on it. Maybe you would like the ability to go back in time — to tell your teenage self how to deal with your parents, or to buy Google stock. Couldn’t the universe use a few improvements? That was the question that David Anderson, a computer scientist, enthusiast of the Search for Extraterrestrial Intelligence (SETI), musician and mathematician at the University of California, Berkeley, recently asked his colleagues and friends. In recent years the idea that our universe, including ourselves and all of our innermost thoughts, is a computer simulation, running on a thinking machine of cosmic capacity, has permeated culture high and low. In an influential essay in 2003, Nick Bostrom, a philosopher at the University of Oxford and director of the Future of Humanity Institute, proposed the idea, adding that it was probably an easy accomplishment for “technologically mature” civilizations wanting to explore their histories or entertain their offspring. Elon Musk, who, for all we know, is the star of this simulation, seemed to echo this idea when he once declared that there was only a one-in-a-billion chance that we lived in “base reality.” It’s hard to prove, and not everyone agrees that such a drastic extrapolation of our computing power is possible or inevitable, or that civilization will last long enough to see it through. But we can’t disprove the idea either, so thinkers like Dr. Bostrom contend that we must take the possibility seriously. In some respects, the notion of a Great Simulator is redolent of a recent theory among cosmologists that the universe is a hologram, its margins lined with quantum codes that determine what is going on inside. A couple of years ago, pinned down by the coronavirus pandemic, Dr. Anderson began discussing the implications of this idea with his teenage son. If indeed everything was a simulation, then making improvements would simply be a matter of altering whatever software program was running everything. “Being a programmer, I thought about exactly what these changes might involve,” he said in an email. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28634 - Posted: 01.18.2023

By Oliver Whang Hod Lipson, a mechanical engineer who directs the Creative Machines Lab at Columbia University, has shaped most of his career around what some people in his industry have called the c-word. On a sunny morning this past October, the Israeli-born roboticist sat behind a table in his lab and explained himself. “This topic was taboo,” he said, a grin exposing a slight gap between his front teeth. “We were almost forbidden from talking about it — ‘Don’t talk about the c-word; you won’t get tenure’ — so in the beginning I had to disguise it, like it was something else.” That was back in the early 2000s, when Dr. Lipson was an assistant professor at Cornell University. He was working to create machines that could note when something was wrong with their own hardware — a broken part, or faulty wiring — and then change their behavior to compensate for that impairment without the guiding hand of a programmer. Just as when a dog loses a leg in an accident, it can teach itself to walk again in a different way. This sort of built-in adaptability, Dr. Lipson argued, would become more important as we became more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transportation; the applications for machines seemed pretty much endless, and any error in their functioning, as they became more integrated with our lives, could spell disaster. “We’re literally going to surrender our life to a robot,” he said. “You want these machines to be resilient.” One way to do this was to take inspiration from nature. Animals, and particularly humans, are good at adapting to changes. This ability might be a result of millions of years of evolution, as resilience in response to injury and changing environments typically increases the chances that an animal will survive and reproduce. Dr. Lipson wondered whether he could replicate this kind of natural selection in his code, creating a generalizable form of intelligence that could learn about its body and function no matter what that body looked like, and no matter what that function was. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28625 - Posted: 01.07.2023

By Gary Stix Can the human brain ever really understand itself? The problem of gaining a deep knowledge of the subjective depths of the conscious mind is such a hard problem that it has in fact been named the hard problem. The human brain is impressively powerful. Its 100 billion neurons are connected by 100 trillion wirelike fibers, all squeezed into three pounds of squishy flesh lodged below a helmet of skull. Yet we still don’t know whether this organ will ever be able to muster the requisite smarts to hack the physical processes that underlie the ineffable “quality of deep blue” or “the sensation of middle C,” as philosopher David Chalmers put it when giving examples of the “hard problem” of consciousness, a term he invented, in a 1995 paper. This past year did not uncover a solution to the hard problem, and one may not be forthcoming for decades, if ever. But 2022 did witness plenty of surprises and solutions to understanding the brain that do not require a complete explanation of consciousness. Such incrementalism could be seen in mid-November, when a crowd of more than 24,000 attendees of the annual Society for Neuroscience meeting gathered in San Diego, Calif. The event was a tribute of sorts to reductionism—the breaking down of hard problems into simpler knowable entities. At the event, there were reports of an animal study of a brain circuit that encodes social trauma and a brain-computer interface that lets a severely paralyzed person mentally spell out letters to form words.
Your Brain Has a Thumbs-Up–Thumbs-Down Switch
When neuroscientist Kay Tye was pursuing her Ph.D., she was told a chapter on emotion was inappropriate for her thesis. Emotion just wasn’t accepted as an integral, intrinsic part of behavioral neuroscience, her field of study. That didn’t make any sense to Tye. She decided to go her own way to become a leading researcher on feelings. This year Tye co-authored a Nature paper that reported on a kind of molecular switch in rodents that flags an experience as either good or bad. If human brains operate the same way as the brains of the mice in her lab, a malfunctioning thumbs-up–thumbs-down switch might explain some cases of depression, anxiety and addiction.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28601 - Posted: 12.17.2022

By Jan Claassen, Brian L. Edlow A medical team surrounded Maria Mazurkevich’s hospital bed, all eyes on her as she did … nothing. Mazurkevich was 30 years old and had been admitted to New York–Presbyterian Hospital at Columbia University on a blisteringly hot July day in New York City. A few days earlier, at home, she had suddenly fallen unconscious. She had suffered a ruptured blood vessel in her brain, and the bleeding area was putting tremendous pressure on critical brain regions. The team of nurses and physicians at the hospital’s neurological intensive care unit was looking for any sign that Mazurkevich could hear them. She was on a mechanical ventilator to help her breathe, and her vital signs were stable. But she showed no signs of consciousness. Mazurkevich’s parents, also at her bed, asked, “Can we talk to our daughter? Does she hear us?” She didn’t appear to be aware of anything. One of us (Claassen) was on her medical team, and when he asked Mazurkevich to open her eyes, hold up two fingers or wiggle her toes, she remained motionless. Her eyes did not follow visual cues. Yet her loved ones still thought she was “in there.” She was. The medical team gave her an EEG—placing sensors on her head to monitor her brain’s electrical activity—while they asked her to “keep opening and closing your right hand.” Then they asked her to “stop opening and closing your right hand.” Even though her hands themselves didn’t move, her brain’s activity patterns differed between the two commands. These brain reactions clearly indicated that she was aware of the requests and that those requests were different. And after about a week, her body began to follow her brain. Slowly, with minuscule responses, Mazurkevich started to wake up. Within a year she recovered fully without major limitations to her physical or cognitive abilities. She is now working as a pharmacist. © 2022 Scientific American,
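
The bedside test described here is, in signal-processing terms, a two-class decoding problem: EEG epochs recorded during the "keep opening and closing your right hand" instruction are compared against "stop" epochs, and consistently above-chance classification is taken as evidence of covert command-following. Below is a minimal Python sketch of that logic on simulated band-power features; the feature layout, classifier choice, and all numbers are illustrative assumptions, not the clinical team's published pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Simulated band-power features for EEG epochs recorded during the
    # "keep moving your hand" command (label 1) vs. "stop" (label 0).
    # Assumed shape: 120 epochs x 16 features (e.g., 4 bands x 4 channels).
    rng = np.random.default_rng(0)
    X_stop = rng.normal(0.0, 1.0, (60, 16))
    X_move = rng.normal(0.3, 1.0, (60, 16))  # small shift = command-related signal
    X = np.vstack([X_stop, X_move])
    y = np.array([0] * 60 + [1] * 60)

    # Cross-validated accuracy well above the 50% chance level would suggest
    # the brain distinguishes the two commands despite no overt movement.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

With real recordings, the crucial safeguard is comparing the observed accuracy against an empirical chance distribution (for example, from permuted labels), since noisy data from a patient who cannot respond will still hover near 50 percent on average.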

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 14: Attention and Higher Cognition; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28527 - Posted: 10.26.2022

By Hedda Hassel Mørch The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer—all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness. But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser known, but equally hard, problem of matter. What is physical matter in and of itself, behind the mathematical structure described by physics? This problem, too, seems to lie beyond the traditional methods of science, because all we can observe is what matter does, not what it is in itself—the “software” of the universe but not its ultimate “hardware.” On the surface, these problems seem entirely separate. But a closer look reveals that they might be deeply connected. Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information. They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. There is something that it’s like to be you, and no one else can ever know that as directly as you do. © 2022 NautilusThink Inc, All rights reserved.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 28489 - Posted: 09.24.2022