Links for Keyword: Consciousness



Links 1 - 20 of 347

By Carl Zimmer Consciousness may be a mystery, but that doesn’t mean that neuroscientists don’t have any explanations for it. Far from it. “In the field of consciousness, there are already so many theories that we don’t need more theories,” said Oscar Ferrante, a neuroscientist at the University of Birmingham. If you’re looking for a theory to explain how our brains give rise to subjective, inner experiences, you can check out Adaptive Resonance Theory. Or consider Dynamic Core Theory. Don’t forget First Order Representational Theory, not to mention semantic pointer competition theory. The list goes on: A 2021 survey identified 29 different theories of consciousness. Dr. Ferrante belongs to a group of scientists who want to lower that number, perhaps even down to just one. But they face a steep challenge, thanks to how scientists often study consciousness: Devise a theory, run experiments to build evidence for it, and argue that it’s better than the others. “We are not incentivized to kill our own ideas,” said Lucia Melloni, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Seven years ago, Dr. Melloni and 41 other scientists embarked on a major study on consciousness that she hoped would break this pattern. Their plan was to bring together two rival groups to design an experiment to see how well both theories did at predicting what happens in our brains during a conscious experience. The team, called the Cogitate Consortium, published its results on Wednesday in the journal Nature. But along the way, the study became subject to the same sharp-elbowed conflicts they had hoped to avoid. Dr. Melloni and a group of like-minded scientists began drawing up plans for their study in 2018. They wanted to try an approach known as adversarial collaboration, in which scientists with opposing theories join forces with neutral researchers. The team chose two theories to test. © 2025 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29773 - Posted: 05.03.2025

By Anil Seth On stage in New York a couple years ago, noted neuroscientist Christof Koch handed a very nice bottle of Madeira wine to philosopher David Chalmers. Chalmers had won a quarter-century-long bet about consciousness—or at least our understanding of it. The philosopher had challenged the neuroscientist in 1998—with a crate of fine wine on the line—that in 25 years, science would still not have located the seat of consciousness in the brain. The philosopher was right. But not without an extraordinary—and revealing—effort on the part of consciousness researchers and theorists. Backing up that concession were the results of a long and thorough “adversarial collaboration” that compared two leading theories about consciousness, testing each with rigorous experimental data. Now we finally learn more about the details of this work in a new paper in the journal Nature. Nicknamed COGITATE, the collaboration pitted “global neuronal workspace theory” (GNWT)—an idea advocated by cognitive neuroscientist Stanislas Dehaene, which associates consciousness with the broadcast of information throughout large swathes of the brain—against “integrated information theory” (IIT)—the idea from neuroscientist Giulio Tononi, which identifies consciousness with the intrinsic cause-and-effect power of brain networks. The adversarial collaboration involved the architects of both theories sitting down together, along with other researchers who would lead and execute the project (hats off to them), to decide on experiments that could potentially distinguish between the theories—ideally supporting one and challenging the other. Deciding on the theory-based predictions, and on experiments good enough to test them, was never going to be easy.
In consciousness research, it is especially hard since—as philosopher Tim Bayne and I noted—theories often make different assumptions, and attempt to explain different things even if, on the face of it, they are all theories of “consciousness.” © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29772 - Posted: 05.03.2025

By Allison Parshall Where in the brain does consciousness originate? Theories abound, but neuroscientists still haven’t coalesced around one explanation, largely because it’s such a hard question to probe with the scientific method. Unlike other phenomena studied by science, consciousness cannot be observed externally. “I observe your behavior. I observe your brain, if I do an intracranial EEG [electroencephalography] study. But I don’t ever observe your experience,” says Robert Chis-Ciure, a postdoctoral researcher studying consciousness at the University of Sussex in England. Scientists have landed on two leading theories to explain how consciousness emerges: integrated information theory, or IIT, and global neuronal workspace theory, or GNWT. These frameworks couldn’t be more different—they rest on different assumptions, draw from different fields of science and may even define consciousness in different ways, explains Anil K. Seth, a consciousness researcher at the University of Sussex. To compare them directly, researchers organized a group of 12 laboratories called the Cogitate Consortium to test the theories’ predictions against each other in a large brain-imaging study. The result, published in full on Wednesday in Nature, was effectively a draw and raised far more questions than it answered. The preliminary findings were posted to the preprint server bioRxiv in 2023. And only a few months later, a group of scholars publicly called IIT “pseudoscience” and attempted to excise it from the field. As the dust settles, leading consciousness researchers say that the Cogitate results point to a way forward for understanding how consciousness arises—no matter what theory eventually comes out on top. “We all are very good at constructing castles in the sky” with abstract ideas, says Chis-Ciure, who was not involved in the new study. “But with data, you make those more grounded.” © 2025 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29771 - Posted: 05.03.2025

By Yasemin Saplakoglu In 1943, a pair of neuroscientists were trying to describe how the human nervous system works when they accidentally laid the foundation for artificial intelligence. In their mathematical framework (opens a new tab) for how systems of cells can encode and process information, Warren McCulloch and Walter Pitts argued that each brain cell, or neuron, could be thought of as a logic device: It either turns on or it doesn’t. A network of such “all-or-none” neurons, they wrote, can perform simple calculations through true or false statements. “They were actually, in a sense, describing the very first artificial neural network,” said Tomaso Poggio (opens a new tab) of the Massachusetts Institute of Technology, who is one of the founders of computational neuroscience. McCulloch and Pitts’ framework laid the groundwork for many of the neural networks that underlie the most powerful AI systems. These algorithms, built to recognize patterns in data, have become so competent at complex tasks that their products can seem eerily human. ChatGPT’s text is so conversational and personal that some people are falling in love (opens a new tab). Image generators can create pictures so realistic that it can be hard to tell when they’re fake. And deep learning algorithms are solving scientific problems that have stumped humans for decades. These systems’ abilities are part of the reason the AI vocabulary is so rich in language from human thought, such as intelligence, learning and hallucination. But there is a problem: The initial McCulloch and Pitts framework is “complete rubbish,” said the science historian Matthew Cobb (opens a new tab) of the University of Manchester, who wrote the book The Idea of the Brain: The Past and Future of Neuroscience (opens a new tab). 
“Nervous systems aren’t wired up like that at all.” When you poke at even the most general comparison between biological and artificial intelligence — that both learn by processing information across layers of networked nodes — their similarities quickly crumble. © 2025 Simons Foundation
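The “all-or-none” logic device McCulloch and Pitts described can be sketched in a few lines of Python. This is an illustrative toy in modern notation, not their original formalism, and the function names are ours: a unit sums its binary inputs and fires only if the sum reaches a threshold, which is enough to perform simple true-or-false calculations.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: binary inputs, all-or-none output.

    The unit 'fires' (returns 1) exactly when the number of active
    inputs meets the threshold; otherwise it stays silent (returns 0).
    """
    return 1 if sum(inputs) >= threshold else 0

# True/false statements computed by single units, in the spirit of 1943:
def AND(a, b):
    return mp_neuron([a, b], threshold=2)  # fires only if both inputs fire

def OR(a, b):
    return mp_neuron([a, b], threshold=1)  # fires if either input fires
```

Chaining such units into networks yields the layered threshold architecture that later artificial neural networks generalized with weighted, continuous-valued connections.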

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 29770 - Posted: 05.03.2025

By Smriti Mallapaty Neuroscientists have observed for the first time how structures deep in the brain are activated when the brain becomes aware of its own thoughts, known as conscious perception. The brain is constantly bombarded with sights, sounds and other stimuli, but people are only ever aware of a sliver of the world around them — the taste of a piece of chocolate or the sound of someone’s voice, for example. Researchers have long known that the outer layer of the brain, called the cerebral cortex, plays a part in this experience of being aware of specific thoughts. The involvement of deeper brain structures has been much harder to elucidate, because they can be accessed only with invasive surgery. Designing experiments to test the concept in animals is also tricky. But studying these regions would allow researchers to broaden their theories of consciousness beyond the brain’s outer wrapping, say researchers. “The field of consciousness studies has evoked a lot of criticism and scepticism because this is a phenomenon that is so hard to study,” says Liad Mudrik, a neuroscientist at Tel Aviv University in Israel. But scientists have increasingly been using systematic and rigorous methods to investigate consciousness, she says. Aware or not In a study published in Science today, Mingsha Zhang, a neuroscientist at Beijing Normal University, focused on the thalamus. This region at the centre of the brain is involved in processing sensory information and working memory, and is thought to have a role in conscious perception. Participants were already undergoing therapy for severe and persistent headaches, for which they had thin electrodes implanted deep into their brains. This allowed Zhang and his colleagues to study their brain signals and measure conscious awareness. © 2025 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29731 - Posted: 04.05.2025

By Kelly Servick New York City—A recent meeting here on consciousness started from a relatively uncontroversial premise: A newly fertilized human egg isn’t conscious, and a preschooler is, so consciousness must emerge somewhere in between. But the gathering, sponsored by New York University (NYU), quickly veered into more unsettled territory. At the Infant Consciousness Conference from 28 February to 1 March, researchers explored when and how consciousness might arise, and how to find out. They also considered hints from recent brain imaging studies that the capacity for consciousness could emerge before birth, toward the end of gestation. “Fetal consciousness would have been a less central topic at a meeting like this a few years ago,” says Claudia Passos-Ferreira, a bioethicist at NYU who co-organized the gathering. The conversation has implications for how best to care for premature infants, she says, and intersects with thorny issues such as abortion. “Whatever you claim about this, there are some moral implications.” How to define consciousness is itself the subject of debate. “Each of us might have a slightly different definition,” neuroscientist Lorina Naci of Trinity College Dublin acknowledged at the meeting before describing how she views consciousness—as the capacity to have an experience or a subjective point of view. There’s also vigorous debate about where consciousness arises in the brain and what types of neural activity define it. That makes it hard to agree on specific markers of consciousness in beings—such as babies—that can’t talk about their experience. Further complicating the picture, the nature of consciousness could be different for infants than adults, researchers noted at the meeting. And it may emerge gradually versus all at once, on different timescales for different individuals.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29703 - Posted: 03.12.2025

By Mark Humphries There are many ways neuroscience could end. Prosaically, society may just lose interest. Of all the ways we can use our finite resources, studying the brain has only recently become one; it may one day return to dust. Other things may take precedence, like feeding the planet or preventing an asteroid strike. Or neuroscience may end as an incidental byproduct, one of the consequences of war or of thoughtlessly disassembling a government or of being sideswiped by a chunk of space rock. We would prefer it to end on our own terms. We would like neuroscience to end when we understand the brain. Which raises the obvious question: Is this possible? For the answer to be yes, three things need to be true: that there is a finite amount of stuff to know, that stuff is physically accessible and that we understand all the stuff we obtain. But each of these we can reasonably doubt. The existence of a finite amount of knowledge is not a given. Some arguments suggest that an infinite amount of knowledge is not only possible but inevitable. Physicist David Deutsch proposes the seemingly innocuous idea that knowledge grows when we find a good explanation for a phenomenon, an explanation whose details are hard to vary without changing its predictions and hence breaking it as an explanation. Bad explanations are those whose details can be varied without consequence. Ancient peoples attributing the changing seasons to the gods is a bad explanation, for those gods and their actions can be endlessly varied without altering the existence of four seasons occurring in strict order. Our attributing the changing seasons to the Earth’s tilt in its orbit of the sun is a good explanation, for if we omit the tilt, we lose the four seasons and the opposite patterns of seasons in the Northern and Southern hemispheres. A good explanation means we have nailed down some property of the universe sufficiently well that something can be built upon it. © 2025 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 1: Introduction: Scope and Outlook
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 29702 - Posted: 03.12.2025

By Alissa Wilkinson There’s a moment in “Theater of Thought” (in theaters) when Darío Gil, the director of research at IBM, is explaining quantum computing to Werner Herzog, the movie’s director. Standing before a whiteboard, Gil draws some points on spheres to illustrate how qubits work, then proceeds to define the Schrödinger equation. As he talks and writes, the audio grows quieter, and Herzog’s distinctive resonant German accent takes over. “I admit that I literally understand nothing of this, and I assume most of you don’t either,” he intones in voice-over. “But I found it fascinating that this mathematical formula explains the law that draws the subatomic world.” It’s a funny moment, a playful way to keep us from glazing over when presented with partial differential equations. Herzog may be a world-renowned filmmaker, but he’s hardly a scientist, and that makes him the perfect director for “Theater of Thought,” a documentary about, as he puts it, the “mysteries of our brain.” Emphasis on mysteries. Herzog interviews a dizzying array of scientists, researchers, and even a Nobel Prize winner or two. He asks them about everything: how the brain works, what consciousness means, what the tiniest organisms in the world are, whether parrots understand human speech, whether rogue governments can control thoughts, whether we’re living in an elaborate simulation, how telepathy and psychedelics work, and, at several points, what thinking even is. Near the end of the film he notes that not one of the scientists could explain what a thought is, or what consciousness is, but “they were all keenly alive to the ethical questions in neuroscience.” In other words, they’re immersed in both the mystery and what their field of study implies about the future of humanity. There’s a boring way to make this movie, with talking-head interviews that are arranged to form a coherent argument. 
Herzog goes another direction, starting off by narrating why he’s making it, then talking about his interviewees as we are introduced to them in their labs or in their favorite outdoor settings. (He also visits Philippe Petit, the Twin Towers tightrope walker, as he practices in his Catskills backyard.) Herzog’s constant verbal presence brings us into his own head space — his own brain, if you will — and gives us the sense that we’re following his patterns of thought. © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29598 - Posted: 12.14.2024

By Iris Berent Seeing the striking magenta of bougainvillea. Tasting a rich morning latte. Feeling the sharp pain of a needle prick going into your arm. These subjective experiences are the stuff of the mind. What is “doing the experiencing,” the 3-pound chunk of meat in our head, is a tangible object that works on electrochemical signals—physics, essentially. How do the two—our mental experiences and physical brains—interact? The puzzle of consciousness seems to be giving science a run for its money. The problem, to be clear, isn’t merely to pinpoint “where it all happens” in the brain (although this, too, is far from trivial). The real mystery is how to bridge the gap between the mental, first-person stuff of consciousness and the physical lump of matter inside the cranium. Some think the gap is unbreachable. The philosopher David Chalmers, for instance, has argued that consciousness is something special and distinct from the physical world. If so, it may never be possible to explain consciousness in terms of physical brain processes. No matter how deeply scientists understand the brain, for Chalmers, this would never explain how our neurons produce consciousness. Why should a hunk of flesh, teeming with chemical signals and electrical charges, experience a point of view? There seems to be no conceivable reason why meaty matter would have this light of subjectivity “on the inside.” Consciousness, then, is a “hard problem”—as Chalmers has labeled it—indeed. The possibility that consciousness itself isn’t anything physical raises burning questions about whether, for example, an AI can fall in love with its programmer. And since consciousness is a natural phenomenon, much like gravity or genes, these questions carry huge implications. Science explains the natural world by physical principles only. 
So if it turns out that one natural phenomenon transcends the laws of physics, then it is not only the science of consciousness that is in trouble—our entire understanding of the natural world would require serious revision. © 2024 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29581 - Posted: 11.30.2024

By Tamlyn Hunt The neuron, the specialized cell type that makes up much of our brains, is at the center of today’s neuroscience. Neuroscientists explain perception, memory, cognition and even consciousness itself as products of billions of these tiny neurons busily firing their tiny “spikes” of voltage inside our brain. These energetic spikes not only convey things like pain and other sensory information to our conscious mind, but they are also in theory able to explain every detail of our complex consciousness. At least in principle. The details of this “neural code” have yet to be worked out. While neuroscientists have long focused on spikes travelling throughout brain cells, “ephaptic” field effects may really be the primary mechanism for consciousness and cognition. These effects, resulting from the electric fields produced by neurons rather than their synaptic firings, may play a leading role in our mind’s workings. In 1943 American scientists first described what is known today as the neural code, or spike code. They fleshed out a detailed map of how logical operations can be completed with the “all or none” nature of neural firing—similar to how today’s computers work. Since then neuroscientists around the world have engaged in a vast endeavor to crack the neural code in order to understand the specifics of cognition and consciousness. To little avail. “The most obvious chasm in our understanding is in all the things we did not meet on our journey from your eye to your hand,” confessed neuroscientist Mark Humphries in 2020’s The Spike, a deep dive into this journey: “All the things of the mind I’ve not been able to tell you about, because we know so little of what spikes do to make them.” © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29546 - Posted: 11.09.2024

By Rachel Nuwer One person felt a sensation of “slowly floating into the air” as images flashed around. Another recalled “the most profound sense of love and peace,” unlike anything experienced before. Consciousness became a “foreign entity” to another whose “whole sense of reality disappeared.” These were some of the firsthand accounts shared in a small survey of people who belonged to an unusual cohort: They had all undergone a near-death experience and tried psychedelic drugs. The survey participants described their near-death and psychedelic experiences as being distinct, yet they also reported significant overlap. In a paper published on Thursday, researchers used these accounts to provide a comparison of the two phenomena. “For the first time, we have a quantitative study with personal testimony from people who have had both of these experiences,” said Charlotte Martial, a neuroscientist at the University of Liège in Belgium and an author of the findings, which were published in the journal Neuroscience of Consciousness. “Now we can say for sure that psychedelics can be a kind of window through which people can enter a rich, subjective state resembling a near-death experience.” Near-death experiences are surprisingly common — an estimated 5 to 10 percent of the general population has reported having one. For decades, scientists largely dismissed the fantastical stories of people who returned from the brink of death. But some researchers have started to take these accounts seriously. “In recent times, the science of consciousness has become interested in nonordinary states,” said Christopher Timmermann, a research fellow at the Center for Psychedelic Research at Imperial College London and an author of the article. “To get a comprehensive account of what it means to be a human being requires incorporating these experiences.” © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 29450 - Posted: 08.22.2024

By Carl Zimmer When people suffer severe brain damage — as a result of car crashes, for example, or falls or aneurysms — they may slip into a coma for weeks, their eyes closed, their bodies unresponsive. Some recover, but others enter a mysterious state: eyes open, yet without clear signs of consciousness. Hundreds of thousands of such patients in the United States alone are diagnosed in a vegetative state or as minimally conscious. They may survive for decades without regaining a connection to the outside world. These patients pose an agonizing mystery both for their families and for the medical professionals who care for them. Even if they can’t communicate, might they still be aware? A large study published on Wednesday suggests that a quarter of them are. Teams of neurologists at six research centers asked 241 unresponsive patients to spend several minutes at a time doing complex cognitive tasks, such as imagining themselves playing tennis. Twenty-five percent of them responded with the same patterns of brain activity seen in healthy people, suggesting that they were able to think and were at least somewhat aware. Dr. Nicholas Schiff, a neurologist at Weill Cornell Medicine and an author of the study, said the study shows that up to 100,000 patients in the United States alone might have some level of consciousness despite their devastating injuries. The results should lead to more sophisticated exams of people with so-called disorders of consciousness, and to more research into how these patients might communicate with the outside world, he said: “It’s not OK to know this and to do nothing.” When people lose consciousness after a brain injury, neurologists traditionally diagnose them with a bedside exam. They may ask patients to say something, to look to their left or right, or to give a thumbs-up. © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29436 - Posted: 08.15.2024

By Hartmut Neven & Christof Koch The brain is a mere piece of furniture in the vastness of the cosmos, subject to the same physical laws as asteroids, electrons or photons. On the surface, its three pounds of neural tissue seem to have little to do with quantum mechanics, the textbook theory that underlies all physical systems, since quantum effects are most pronounced on microscopic scales. Newly proposed experiments, however, promise to bridge this gap between microscopic and macroscopic systems, like the brain, and offer answers to the mystery of consciousness. Quantum mechanics explains a range of phenomena that cannot be understood using the intuitions formed by everyday experience. Recall the Schrödinger’s cat thought experiment, in which a cat exists in a superposition of states, both dead and alive. In our daily lives there seems to be no such uncertainty—a cat is either dead or alive. But the equations of quantum mechanics tell us that at any moment the world is composed of many such coexisting states, a tension that has long troubled physicists. Taking the bull by its horns, the cosmologist Roger Penrose in 1989 made the radical suggestion that a conscious moment occurs whenever a superimposed quantum state collapses. The idea that two fundamental scientific mysteries—the origin of consciousness and the collapse of what is called the wave function in quantum mechanics—are related, triggered enormous excitement. Penrose’s theory can be grounded in the intricacies of quantum computation. Consider a quantum bit, a qubit, the unit of information in quantum information theory that exists in a superposition of a logical 0 with a logical 1. According to Penrose, when this system collapses into either 0 or 1, a flicker of conscious experience is created, described by a single classical bit. © 2024 SCIENTIFIC AMERICAN,
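The qubit collapse that Penrose associates with a conscious moment turns a superposition of 0 and 1 into a single classical bit with probabilities fixed by the Born rule. As an illustrative sketch only (our own toy code, not anything from the article, and no claim about consciousness), those measurement statistics can be simulated classically:

```python
import random

def measure(alpha, beta):
    """Collapse a qubit in state alpha|0> + beta|1> to one classical bit.

    By the Born rule, outcome 0 occurs with probability |alpha|^2 and
    outcome 1 with probability |beta|^2 (amplitudes assumed normalized,
    so |alpha|^2 + |beta|^2 == 1).
    """
    return 0 if random.random() < abs(alpha) ** 2 else 1

# A definite state always yields the same bit; an equal superposition
# (alpha = beta = 1/sqrt(2)) yields 0 or 1 with 50/50 odds.
```

What this toy cannot capture is exactly what is at stake in Penrose's proposal: in quantum mechanics the collapse is not a hidden classical coin flip but a physical transition whose mechanism remains unexplained.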

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29427 - Posted: 08.11.2024

By Tijl Grootswagers, Genevieve L. Quek and Manuel Varlet You are standing in the cereal aisle, weighing up whether to buy a healthy bran or a sugary chocolate-flavoured alternative. Your hand hovers momentarily before you make the final grab. But did you know that during those last few seconds, while you’re reaching out, your brain is still evaluating the pros and cons – influenced by everything from your last meal, the health star rating, the catchy jingle in the ad, and the colours of the letters on the box? Our recently published research shows our brains do not just think first and then act. Even while you are reaching for a product on a supermarket shelf, your brain is still evaluating whether you are making the right choice. Further, we found measuring hand movements offers an accurate window into the brain’s ongoing evaluation of the decision – you don’t have to hook people up to expensive brain scanners. What does this say about our decision-making? And what does it mean for consumers and the people marketing to them? There has been debate within neuroscience on whether a person’s movements to enact a decision can be modified once the brain’s “motor plan” has been made. Our research revealed not only that movements can be changed after a decision – “in flight” – but also the changes matched incoming information from a person’s senses. To study how our decisions unfold over time, we tracked people’s hand movements as they reached for different options shown in pictures – for example, in response to the question “is this picture a face or an object?” Put simply, reaching movements are shaped by ongoing thinking and decision-making. © 2010–2024, The Conversation US, Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 29387 - Posted: 07.11.2024

By Simon Makin Most of us have an “inner voice,” and we tend to assume everybody does, but recent evidence suggests that people vary widely in the extent to which they experience inner speech, from an almost constant patter to a virtual absence of self-talk. “Until you start asking the right questions you don’t know there’s even variation,” says Gary Lupyan, a cognitive scientist at the University of Wisconsin–Madison. “People are really surprised because they’d assumed everyone is like them.” A new study, from Lupyan and his colleague Johanne Nedergaard, a cognitive scientist at the University of Copenhagen, shows that not only are these differences real but they also have consequences for our cognition. Participants with weak inner voices did worse at psychological tasks that measure, say, verbal memory than did those with strong inner voices. The researchers have even proposed calling a lack of inner speech “anendophasia” and hope that naming it will help facilitate further research. The study adds to growing evidence that our inner mental worlds can be profoundly different. “It speaks to the surprising diversity of our subjective experiences,” Lupyan says. Psychologists think we use inner speech to assist in various mental functions. “Past research suggests inner speech is key in self-regulation and executive functioning, like task-switching, memory and decision-making,” says Famira Racy, an independent scholar who co-founded the Inner Speech Research Lab at Mount Royal University in Calgary. “Some researchers have even suggested that not having an inner voice may impact these and other areas important for a sense of self, although this is not a certainty.” Inner speech researchers know that it varies from person to person, but studies have typically used subjective measures, like questionnaires, and it is difficult to know for sure if what people say goes on in their heads is what really happens. 
“It’s very difficult to reflect on one’s own inner experiences, and most people aren’t very good at it when they start out,” says Charles Fernyhough, a psychologist at Durham University in England, who was not involved in the study. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29382 - Posted: 07.06.2024

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure of creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture.

Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields.

These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 29344 - Posted: 06.06.2024

By Dan Falk Some years ago, when he was still living in southern California, neuroscientist Christof Koch drank a bottle of Barolo wine while watching The Highlander, and then, at midnight, ran up to the summit of Mount Wilson, the 5,710-foot peak that looms over Los Angeles. After an hour of “stumbling around with my headlamp and becoming nauseated,” as he later described the incident, he realized the nighttime adventure was probably not a smart idea, and climbed back down, though not before shouting into the darkness the last line of William Ernest Henley’s 1875 poem “Invictus”: “I am the master of my fate / I am the captain of my soul.”

Koch, who first rose to prominence for his collaborative work with the late Nobel Laureate Francis Crick, is hardly the only scientist to ponder the nature of the self—but he is perhaps the most adventurous, both in body and mind. He sees consciousness as the central mystery of our universe, and is willing to explore any reasonable idea in the search for an explanation. Over the years, Koch has toyed with a wide array of ideas, some of them distinctly speculative—like the idea that the Internet might become conscious, for example, or that with sufficient technology, multiple brains could be fused together, linking their accompanying minds along the way. (And yet, he does have his limits: He’s deeply skeptical both of the idea that we can “upload” our minds and of the “simulation hypothesis.”)

In his new book, Then I Am Myself the World, Koch, currently the chief scientist at the Allen Institute for Brain Science in Seattle, ventures through the challenging landscape of integrated information theory (IIT), a framework that attempts to compute the amount of consciousness in a system based on the degree to which information is networked.
Along the way, he struggles with what may be the most difficult question of all: How do our thoughts—seemingly ethereal and without mass or any other physical properties—have real-world consequences? © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29294 - Posted: 05.07.2024

By Steve Paulson These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness.

O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.)

When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind. © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29289 - Posted: 05.03.2024

By Dan Falk Daniel Dennett, who died in April at the age of 82, was a towering figure in the philosophy of mind. Known for his staunch physicalist stance, he argued that minds, like bodies, are the product of evolution. He believed that we are, in a sense, machines—but astoundingly complex ones, the result of millions of years of natural selection.

Dennett wrote more than a dozen books, some of them aimed at a scholarly audience but many of them directed squarely at the inquisitive non-specialist—including bestsellers like Consciousness Explained, Breaking the Spell, and Darwin’s Dangerous Idea. Reading his works, one gets the impression of a mind jammed to the rafters with ideas. As Richard Dawkins put it in a blurb for Dennett’s last book, a memoir titled I’ve Been Thinking: “How unfair for one man to be blessed with such a torrent of stimulating thoughts.”

Dennett spent decades puzzling over the existence of minds. How does non-thinking matter arrange itself into matter that can think, and even ponder its own existence? A long-time academic nemesis of Dennett’s, the philosopher David Chalmers, dubbed this the “Hard Problem” of consciousness. But Dennett felt this label needlessly turned a series of potentially solvable problems into one giant unsolvable one: He was sure the so-called hard problem would evaporate once the various lesser (but still difficult) problems of understanding the brain’s mechanics were figured out.

Because he viewed brains as miracle-free mechanisms, he saw no barrier to machine consciousness, at least in principle. Yet he had no fear of Terminator-style AI doomsday scenarios, either. (“The whole singularity stuff, that’s preposterous,” he once told an interviewer for The Guardian. “It distracts us from much more pressing problems.”) © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29285 - Posted: 05.02.2024

By John Horgan Philosopher Daniel Dennett died a few days ago, on April 19. When he argued that we overrate consciousness, he demonstrated, paradoxically, how conscious he was, and he made his audience more conscious. Dennett’s death feels like the end of an era, the era of ultramaterialist, ultra-Darwinian, swaggering, know-it-all scientism. Who’s left, Richard Dawkins?

Dennett wasn’t as smart as he thought he was, I liked to say, because no one is. He lacked the self-doubt gene, but he forced me to doubt myself. He made me rethink what I think, and what more can you ask of a philosopher? I first encountered Dennett’s in-your-face brilliance in 1981 when I read The Mind’s I, a collection of essays he co-edited. And his name popped up at a consciousness shindig I attended earlier this month. To honor Dennett, I’m posting a revision of my 2017 critique of his claim that consciousness is an “illusion.” I’m also coining a phrase, “the Dennett paradox,” which is explained below.

Of all the odd notions to emerge from debates over consciousness, the oddest is that it doesn’t exist, at least not in the way we think it does. It is an illusion, like “Santa Claus” or “American democracy.” René Descartes said consciousness is the one undeniable fact of our existence, and I find it hard to disagree. I’m conscious right now, as I type this sentence, and you are presumably conscious as you read it (although I can’t be absolutely sure). The idea that consciousness isn’t real has always struck me as absurd, but smart people espouse it. One of the smartest is philosopher Daniel Dennett, who has been questioning consciousness for decades, notably in his 1991 bestseller Consciousness Explained. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29266 - Posted: 04.24.2024