Chapter 14. Attention and Higher Cognition


By Brandon Keim. How We Think About Animals Has a Long, Complicated History. Back when I first started writing about scientific research on animal minds, I had internalized a straightforward historical narrative: The western intellectual tradition held animals to be unintelligent, but thanks to recent advances in the science, we were learning otherwise. The actual history is so much more complicated. The denial of animal intelligence does have deep roots, of course. You can trace a direct line from Aristotle, who considered animals capable of feeling only pain and hunger, to medieval Christian theologians fixated on their supposed lack of rationality, to Enlightenment intellectuals who likened the cries of beaten dogs to the squeaking of springs. But along the way, a great many thinkers, from the Greek philosopher Plutarch on through to Voltaire, pushed back. They saw animals as intelligent and therefore deserving of ethical regard, too. Those have always been the stakes of this debate: If animals are mindless, then we owe them nothing. Through that lens it’s no surprise that societies founded on exploitation—of other human beings, of animals, of the whole natural world—would yield knowledge systems that formally regarded animals as dumb. The Plutarchs and Voltaires of the world were cast to the side. The scientific pendulum did swing briefly in the other direction, thanks in no small part to the popularity of Charles Darwin. He saw humans as related to other animals not only in body but in mind, and recognized rich forms of consciousness even in earthworms. But the backlash to that way of thinking was fierce, culminating in a principle articulated in the 1890s and later enshrined as Morgan’s Canon: An animal’s behavior should not be interpreted as evidence of a higher psychological faculty until all other explanations can be ruled out. Stupidity by default. © 2024 NautilusNext Inc.

Keyword: Evolution; Attention
Link ID: 29399 - Posted: 07.23.2024

By Andrew Jacobs July 17, 2024 If you had to come up with a groovy visualization of the human brain on psychedelic drugs, it might look something like this. The image, as it happens, comes from dozens of brain scans produced by researchers at Washington University School of Medicine in St. Louis who gave psilocybin, the compound in “magic mushrooms,” to participants in a study before sending them into a functional M.R.I. scanner. The kaleidoscopic whirl of colors they recorded is essentially a heat map of brain changes, with the red, orange and yellow hues reflecting a significant departure from normal activity patterns. The blues and greens reflect normal brain activity that occurs in the so-called functional networks, the neural communication pathways that connect different regions of the brain. The scans, published Wednesday in the journal Nature, offer a rare glimpse into the wild neural storm associated with mind-altering drugs. Researchers say they could provide a potential road map for understanding how psychedelic compounds like psilocybin, LSD and MDMA can lead to lasting relief from depression, anxiety and other mental health disorders. “Psilocybin, in contrast to any other drug we’ve tested, has this massive effect on the whole brain that was pretty unexpected,” said Dr. Nico Dosenbach, a professor of neurology at Washington University and a senior author of the study. “It was quite shocking when we saw the effect size.” The study included seven healthy adults who were given either a single dose of psilocybin or a placebo in the form of methylphenidate, the generic version of the stimulant Ritalin. Each participant underwent a total of 18 brain scans, taken before, during and after the initial dosing. © 2024 The New York Times Company
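
The "departure from normal activity patterns" that the colors encode can be thought of as a change in functional connectivity, the correlation structure among brain regions. The study's actual pipeline is more sophisticated, but here is a minimal sketch, on synthetic data, of how one might score each region's departure from baseline connectivity; the region count, scan length and scoring rule are all assumptions made for illustration.

```python
import numpy as np

def connectivity(ts):
    """Functional connectivity as region-by-region correlations.
    ts: array of shape (n_regions, n_timepoints)."""
    return np.corrcoef(ts)

def deviation_map(baseline_ts, drug_ts):
    """Per-region departure from baseline connectivity (mean absolute change).
    Larger values correspond to the 'hotter' colors in a heat map."""
    diff = np.abs(connectivity(drug_ts) - connectivity(baseline_ts))
    np.fill_diagonal(diff, 0.0)        # ignore self-connections
    return diff.mean(axis=1)           # one deviation score per region

# Synthetic example: 100 regions, 300 time points per scan.
rng = np.random.default_rng(0)
baseline = rng.standard_normal((100, 300))
drug = rng.standard_normal((100, 300))
scores = deviation_map(baseline, drug)
print("most-changed region:", scores.argmax(), "score:", round(float(scores.max()), 3))
```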

Keyword: Drug Abuse; Depression
Link ID: 29398 - Posted: 07.18.2024

By Jack Goulder Late last summer, in the waiting room of a children’s mental health clinic, I found Daniel, a softly spoken 16-year-old boy, flanked by his parents. He had been referred to the clinic for an assessment for attention deficit hyperactivity disorder (ADHD). As we took our seats on the plastic sofas in the consulting room, I asked him to tell me about the difficulties he was having. Tentatively, his gaze not leaving the floor, he started talking about school, about how he was finding it impossible to focus and would daydream for hours at a time. His exam results were beginning to show it too, his parents explained, and ADHD seemed to run in the family. They wanted to know more about any medication that could help. I had just begun a six-month placement working as a junior doctor in the clinic’s ADHD team. Doctors often take a temporary post before they formally apply to train in a speciality. Since medical school I had always imagined I would become a psychiatrist, but I wanted to be sure I was making the right choice. Armed with a textbook and the memory of some distant lectures, I began my assessment, running through the questions listed in the diagnostic manual. Are you easily distracted? Do you often lose things? Do people say you talk excessively? He answered yes to many of them. Are you accident-prone? He and his parents exchanged a knowing laugh. With Daniel exhibiting so many of the symptoms, I told them, this sounded like ADHD. I felt a sense of relief fill the room. Later that afternoon, I took Daniel’s case to a meeting where the day’s new referrals were discussed. Half a dozen senior doctors, nurses, psychologists and psychotherapists sat around the table and listened as each case was presented, trying to piece together the story being told and decide what to do next. When it was my turn, I launched into my findings, laying out what Daniel had told me and what I had gleaned from his parents about his childhood. © 2024 Guardian News & Media Limited

Keyword: ADHD; Attention
Link ID: 29397 - Posted: 07.18.2024

By Tijl Grootswagers, Genevieve L. Quek and Manuel Varlet. You are standing in the cereal aisle, weighing up whether to buy a healthy bran or a sugary chocolate-flavoured alternative. Your hand hovers momentarily before you make the final grab. But did you know that during those last few seconds, while you’re reaching out, your brain is still evaluating the pros and cons – influenced by everything from your last meal, the health star rating, the catchy jingle in the ad, and the colours of the letters on the box? Our recently published research shows our brains do not just think first and then act. Even while you are reaching for a product on a supermarket shelf, your brain is still evaluating whether you are making the right choice. Further, we found that measuring hand movements offers an accurate window into the brain’s ongoing evaluation of the decision – you don’t have to hook people up to expensive brain scanners. What does this say about our decision-making? And what does it mean for consumers and the people marketing to them? There has been debate within neuroscience on whether a person’s movements to enact a decision can be modified once the brain’s “motor plan” has been made. Our research revealed not only that movements can be changed after a decision – “in flight” – but also that the changes matched incoming information from a person’s senses. To study how our decisions unfold over time, we tracked people’s hand movements as they reached for different options shown in pictures – for example, in response to the question “is this picture a face or an object?” Put simply, reaching movements are shaped by ongoing thinking and decision-making. © 2010–2024, The Conversation US, Inc.
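
The authors' own measures aren't reproduced in the excerpt, but a standard way that reach- and mouse-tracking studies quantify an "in flight" change of mind is to measure how far the hand's path bows away from the straight line to the chosen option. A minimal sketch with a made-up trajectory (the sampling, curvature and variable names are illustrative assumptions, not the study's data):

```python
import numpy as np

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a 2-D reach path from the straight
    line joining its start and end points; a common index of how strongly the
    unchosen option kept pulling on the movement."""
    start, end = trajectory[0], trajectory[-1]
    line = end - start
    rel = trajectory - start
    # Perpendicular distance of every sample from the start-to-end line.
    dist = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / np.linalg.norm(line)
    return float(dist.max())

# Toy example: a reach that bows toward a competing option before settling.
t = np.linspace(0.0, 1.0, 50)
path = np.column_stack([t - 0.3 * np.sin(np.pi * t), t])   # curved x, straight y
print("max deviation:", round(max_deviation(path), 3))
```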

Keyword: Consciousness
Link ID: 29387 - Posted: 07.11.2024

By Simon Makin Most of us have an “inner voice,” and we tend to assume everybody does, but recent evidence suggests that people vary widely in the extent to which they experience inner speech, from an almost constant patter to a virtual absence of self-talk. “Until you start asking the right questions you don’t know there’s even variation,” says Gary Lupyan, a cognitive scientist at the University of Wisconsin–Madison. “People are really surprised because they’d assumed everyone is like them.” A new study, from Lupyan and his colleague Johanne Nedergaard, a cognitive scientist at the University of Copenhagen, shows that not only are these differences real but they also have consequences for our cognition. Participants with weak inner voices did worse at psychological tasks that measure, say, verbal memory than did those with strong inner voices. The researchers have even proposed calling a lack of inner speech “anendophasia” and hope that naming it will help facilitate further research. The study adds to growing evidence that our inner mental worlds can be profoundly different. “It speaks to the surprising diversity of our subjective experiences,” Lupyan says. Psychologists think we use inner speech to assist in various mental functions. “Past research suggests inner speech is key in self-regulation and executive functioning, like task-switching, memory and decision-making,” says Famira Racy, an independent scholar who co-founded the Inner Speech Research Lab at Mount Royal University in Calgary. “Some researchers have even suggested that not having an inner voice may impact these and other areas important for a sense of self, although this is not a certainty.” Inner speech researchers know that it varies from person to person, but studies have typically used subjective measures, like questionnaires, and it is difficult to know for sure if what people say goes on in their heads is what really happens. “It’s very difficult to reflect on one’s own inner experiences, and most people aren’t very good at it when they start out,” says Charles Fernyhough, a psychologist at Durham University in England, who was not involved in the study. © 2024 SCIENTIFIC AMERICAN,

Keyword: Consciousness
Link ID: 29382 - Posted: 07.06.2024

By Adolfo Plasencia Recently, a group of Australian researchers demonstrated a “mind-reading” system called BrainGPT. The system can, according to its creators, convert thoughts (recorded with a non-invasive electrode helmet) into words that are displayed on a screen. Essentially, BrainGPT connects a multitasking EEG encoder to a large language model capable of decoding coherent and readable sentences from EEG signals. Is the mind, the last frontier of privacy, still a safe place to think one’s thoughts? I spoke with Harvard-based behavioral neurologist Alvaro Pascual-Leone, a leader in the study of neuroplasticity and noninvasive brain stimulation, about what it means and how we can protect ourselves. The reality is that the ability to read the brain and influence activity is already here. It’s no longer only in the realm of science fiction. Now, the question is, what exactly can we access and manipulate in the brain? Consider this example: If I instruct you to move a hand, I can tell if you are preparing to move, say, your right hand. I can even administer a precise “nudge” to your brain and make you move your right hand faster. And you would then claim, and fully believe, that you moved it yourself. However, I know that, in fact, it was me who moved it for you. I can even force you to move your left hand—which you were not going to move—and lead you to rationalize why you changed your mind when in fact, our intervention led to that action you perceive as your choice. We have done this experiment in our laboratory. In humans, we can modify brain activity by reading and writing in the brain, so to speak, though we can affect only very simple things right now. In animals, we can do much more complex things because we have much more precise control of the neurons and their timing. But the capacity for that modulation of smaller circuits progressively down to individual neurons in humans is going to come, including much more selective modification with optogenetic alternatives—that is, using light to control the activity of neurons. © 2024 NautilusNext Inc.,
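
The excerpt describes BrainGPT only in outline: an EEG encoder whose output a large language model can decode into sentences. The system's real architecture is not given here, so the following PyTorch sketch is purely illustrative; the channel count, layer types and embedding size are assumptions, and the language-model half of the pipeline is omitted.

```python
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Maps a window of multi-channel EEG into a sequence of embeddings that a
    pretrained text decoder could attend over (a generic sketch, not BrainGPT)."""
    def __init__(self, n_channels=64, d_model=512):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 128, kernel_size=7, stride=2, padding=3)
        self.gru = nn.GRU(128, d_model, batch_first=True)

    def forward(self, eeg):              # eeg: (batch, channels, time)
        x = torch.relu(self.conv(eeg))   # downsample in time, mix channels
        x = x.transpose(1, 2)            # (batch, time', features) for the GRU
        out, _ = self.gru(x)
        return out                       # (batch, time', d_model)

encoder = EEGEncoder()
fake_eeg = torch.randn(2, 64, 256)       # 2 trials, 64 channels, 256 samples
print(encoder(fake_eeg).shape)           # torch.Size([2, 128, 512])
```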

Keyword: Brain imaging
Link ID: 29377 - Posted: 07.03.2024

By Carl Zimmer For thousands of years, philosophers have argued about the purpose of language. Plato believed it was essential for thinking. Thought “is a silent inner conversation of the soul with itself,” he wrote. Many modern scholars have advanced similar views. Starting in the 1960s, Noam Chomsky, a linguist at M.I.T., argued that we use language for reasoning and other forms of thought. “If there is a severe deficit of language, there will be severe deficit of thought,” he wrote. As an undergraduate, Evelina Fedorenko took Dr. Chomsky’s class and heard him describe his theory. “I really liked the idea,” she recalled. But she was puzzled by the lack of evidence. “A lot of things he was saying were just stated as if they were facts — the truth,” she said. Dr. Fedorenko went on to become a cognitive neuroscientist at M.I.T., using brain scanning to investigate how the brain produces language. And after 15 years, her research has led her to a startling conclusion: We don’t need language to think. “When you start evaluating it, you just don’t find support for this role of language in thinking,” she said. When Dr. Fedorenko began this work in 2009, studies had found that the same brain regions required for language were also active when people reasoned or carried out arithmetic. But Dr. Fedorenko and other researchers discovered that this overlap was a mirage. Part of the trouble with the early results was that the scanners were relatively crude. Scientists made the most of their fuzzy scans by combining the results from all their volunteers, creating an overall average of brain activity. © 2024 The New York Times Company

Keyword: Language; Consciousness
Link ID: 29376 - Posted: 07.03.2024

By Olivia Gieger Three pioneers in face-perception research have won the 2024 Kavli Prize in Neuroscience. Nancy Kanwisher, professor of cognitive neuroscience at the Massachusetts Institute of Technology; Winrich Freiwald, professor of neurosciences and behavior at Rockefeller University; and Doris Tsao, professor of neurobiology at the University of California, Berkeley, will share the $1 million Kavli Prize for their discoveries of the regions—in both the human and monkey brains—responsible for identifying and recognizing faces. “This is work that’s very classic and very elegant, not only in face-processing and face-recognition work, but the impact it’s had on how we think about brain organization in general is huge,” says Alexander Cohen, assistant professor of neurology at Harvard Medical School, who studies face recognition in autistic people. The Norwegian Academy of Science and Letters awards the prize every two years. Kanwisher says she long suspected that something special happens in the brain when we look at faces, because people with prosopagnosia—the inability to recognize faces—maintain the ability to recognize nearly all other objects. What’s more, it is harder to recognize an upside-down face than most other inverted objects, studies have shown. To get to the root of face processing, Kanwisher spent hours as a young researcher lying still in an MRI machine as images of faces and objects flashed before her. A spot in the bottom right of the cerebral cortex lit up when she and others looked at faces, according to functional MRI (fMRI) scans, she and her colleagues reported in a seminal 1997 paper. They called the region the fusiform face area. © 2024 Simons Foundation
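
The 1997 result came from contrasting fMRI responses to faces against responses to other objects and asking where the difference is reliable. Real analyses add preprocessing, hemodynamic modeling and multiple-comparison correction, but the core contrast logic looks roughly like the sketch below, run on synthetic data; the voxel counts, trial numbers and threshold are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic localizer data: one response value per trial for each voxel.
rng = np.random.default_rng(1)
n_voxels = 1000
faces = rng.normal(size=(40, n_voxels))      # 40 face trials
objects = rng.normal(size=(40, n_voxels))    # 40 object trials
faces[:, 500:520] += 1.5                     # a small patch that "prefers" faces

# Per-voxel faces-versus-objects contrast.
t, p = stats.ttest_ind(faces, objects, axis=0)
face_selective = np.where((t > 0) & (p < 0.001))[0]   # uncorrected threshold
print("face-selective voxels:", face_selective)
```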

Keyword: Attention
Link ID: 29356 - Posted: 06.13.2024

By Betsy Mason To help pay for his undergraduate education, Elias Garcia-Pelegrin had an unusual summer job: cruise ship magician. “I was that guy who comes out at dinnertime and does random magic for you,” he says. But his latest magic gig is even more unusual: performing for Eurasian jays at Cambridge University’s Comparative Cognition Lab. Birds can be harder to fool than tourists. And to do magic for the jays, he had to learn to do sleight-of-hand tricks with a live, wriggling waxworm instead of the customary coin or ball. But performing in an aviary does have at least one advantage over performing on a cruise ship: The birds aren’t expecting to be entertained. “You don’t have to worry about impressing anybody, or tell a joke,” Garcia-Pelegrin says. “So you just do the magic.” In just the last few years, researchers have become interested in what they can learn about animal minds by studying what does and doesn’t fool them. “Magic effects can reveal blind spots in seeing and roadblocks in thinking,” says Nicky Clayton, who heads the Cambridge lab and, with Garcia-Pelegrin and others, cowrote an overview of the science of magic in the Annual Review of Psychology. What we visually perceive about the world is a product of how our brains interpret what our eyes see. Humans and other animals have evolved to handle the immense amount of visual information we’re exposed to by prioritizing some types of information, filtering out things that are usually less relevant and filling in gaps with assumptions. Many magic effects exploit these cognitive shortcuts in humans, and comparing how well these same tricks work on other species may reveal something about how their minds operate. Clayton and her colleagues have used magic tricks with both jays and monkeys to reveal differences in how these animals experience the world. Now they are hoping to expand to more species and inspire other researchers to try magic to explore big questions about complex mental abilities and how they evolved.

Keyword: Attention; Evolution
Link ID: 29345 - Posted: 06.06.2024

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure of creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation
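
In the AI setting, "ablation" usually means silencing part of a trained network and measuring how much its behavior changes, by direct analogy with lesion studies. The numpy sketch below shows only the bare logic on a made-up two-layer network; real work ablates neurons or attention heads in large models and tracks a task metric, and every weight and size here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny stand-in for a "trained" network: 4 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def forward(x, ablate=None):
    """Run the network, optionally 'lesioning' one hidden unit by zeroing it."""
    h = np.maximum(0.0, W1 @ x + b1)     # ReLU hidden layer
    if ablate is not None:
        h[ablate] = 0.0                  # the ablation
    return (W2 @ h + b2)[0]

x = rng.standard_normal(4)
baseline = forward(x)
for unit in range(8):
    change = abs(forward(x, ablate=unit) - baseline)
    print(f"unit {unit}: output changes by {change:.3f} when ablated")
```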

Keyword: Consciousness; Language
Link ID: 29344 - Posted: 06.06.2024

By Mariana Lenharo Crows know their numbers. An experiment has revealed that these birds can count their own calls, showcasing a numerical skill previously only seen in people. Investigating how animals understand numbers can help scientists to explore the biological origins of humanity’s numerical abilities, says Giorgio Vallortigara, a neuroscientist at the University of Trento in Rovereto, Italy. Being able to produce a deliberate number of vocalizations on cue, as the birds in the experiment did, “is actually a very impressive achievement”, he notes. Andreas Nieder, an animal physiologist at the University of Tübingen in Germany and a co-author of the study published 23 May in Science, says it was amazing to see how cognitively flexible these corvids are. “They have a reputation of being very smart and intelligent, and they proved this once again.” The researchers worked with three carrion crows (Corvus corone) that had already been trained to caw on command. Over the next several months, the birds were taught to associate visual cues — a screen showing the digits 1, 2, 3 or 4 — with the number of calls they were supposed to produce. They were later also introduced to four auditory cues that were each associated with a distinct number. During the experiment, the birds stood in front of the screen and were presented with a visual or auditory cue. They were expected to produce the number of vocalizations associated with the cue and to peck at an ‘enter key’ on the touchscreen monitor when they were done. If they got it right, an automated feeder delivered bird-seed pellets and mealworms as a reward. They were correct most of the time. “Their performance was way beyond chance and highly significant,” says Nieder. © 2024 Springer Nature Limited
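
A claim like "way beyond chance" is normally backed by a test against the chance rate. The paper's actual statistics are not in the excerpt, so the numbers below are purely hypothetical: if chance were 25 percent (one of four possible call counts) and a crow matched the cue on 84 of 120 trials, a one-sided binomial test would look like this.

```python
from scipy import stats

# Hypothetical numbers, not taken from the paper.
result = stats.binomtest(k=84, n=120, p=0.25, alternative="greater")
print(f"p-value: {result.pvalue:.2g}")   # tiny p-value: performance beyond chance
```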

Keyword: Attention; Evolution
Link ID: 29326 - Posted: 05.25.2024

By Christina Caron Just before Katie Marsh dropped out of college, she began to worry that she might have attention deficit hyperactivity disorder. “Boredom was like a burning sensation inside of me,” said Ms. Marsh, who is now 30 and lives in Portland, Ore. “I barely went to class. And when I did, I felt like I had a lot of pent-up energy. Like I had to just move around all the time.” So she asked for an A.D.H.D. evaluation — but the results, she was surprised to learn, were inconclusive. She never did return to school. And only after seeking help again four years later was she diagnosed by an A.D.H.D. specialist. “It was pretty frustrating,” she said. A.D.H.D. is one of the most common psychiatric disorders in adults. Yet many health care providers have uneven training on how to evaluate it, and there are no U.S. clinical practice guidelines for diagnosing and treating patients beyond childhood. Without clear rules, some providers, while well-intentioned, are just “making it up as they go along,” said Dr. David W. Goodman, an assistant professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. This lack of clarity leaves providers and adult patients in a bind. “We desperately need something to help guide the field,” said Dr. Wendi Waits, a psychiatrist with Talkiatry, an online mental health company. “When everyone’s practicing somewhat differently, it makes it hard to know how best to approach it.” Can A.D.H.D. symptoms emerge in adulthood? A.D.H.D. is defined as a neurodevelopmental disorder that begins in childhood and is typically characterized by inattention, disorganization, hyperactivity and impulsivity. Patients are generally categorized into three types: hyperactive and impulsive, inattentive, or a combination of the two. © 2024 The New York Times Company

Keyword: ADHD
Link ID: 29318 - Posted: 05.23.2024

By Meghan Willcoxon In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect. Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action. Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain. The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism, and speech. Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries. © 2024 NautilusNext Inc.,

Keyword: Attention; Vision
Link ID: 29316 - Posted: 05.21.2024

By Angie Voyles Askham Each blink obscures our visual world for 100 to 300 milliseconds. It’s a necessary action that also, researchers long presumed, presents the brain with a problem: how to cobble together a cohesive picture of the before and after. “No one really thought about blinks as an act of looking or vision to begin with,” says Martin Rolfs, professor of experimental psychology at Humboldt University of Berlin. But blinking may be a more important component of vision than previously thought, according to a study published last month in the Proceedings of the National Academy of Sciences. Participants performed better on a visual task when they blinked while looking at the visual stimulus than when they blinked before it appeared. The blink, the team found, caused a change in visual input that improved participants’ perception. The finding suggests that blinking is a feature of seeing rather than a bug, says Rolfs, who was not involved with the study but wrote a commentary about it. And it could explain why adults blink more frequently than is seemingly necessary, the researchers say. “The brain capitalizes on things that are changing in the visual world—whether it’s blinks or eye movements, or any type of ocular-motor dynamics,” says Patrick Mayo, a neuroscientist in the ophthalmology department at the University of Pittsburgh, who was also not involved in the work. “That is … a point that’s still not well appreciated in visual neuroscience, generally.” The researchers started their investigation by simulating a blink. In the computational model they devised, a person staring at black and white stripes would suddenly see a dark, uniform gray before once again viewing the high-contrast pattern. The interruption would cause a brief change in the stimulus input to neurons in the retina, which in turn could increase the cells’ sensitivity to stimuli right after a blink, they hypothesized. © 2024 Simons Foundation
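
The stimulus manipulation described, black-and-white stripes briefly replaced by a uniform gray, is simple to mock up. The sketch below only generates that sequence and tracks its contrast over time; the published model went further and simulated how retinal neurons respond to the transient, and the frame counts and sizes here are assumptions.

```python
import numpy as np

def grating_frame(size=128, cycles=8):
    """One frame of a high-contrast black-and-white striped pattern (values 0..1)."""
    x = np.linspace(0.0, 2 * np.pi * cycles, size)
    stripes = 0.5 + 0.5 * np.sign(np.sin(x))
    return np.tile(stripes, (size, 1))

def simulate_blink(n_frames=60, blink_start=25, blink_len=10):
    """Stripes, then a brief uniform gray 'blink', then stripes again.
    Returns per-frame RMS contrast, which collapses during the blink and
    jumps back afterwards: the transient the study focuses on."""
    contrast = []
    for f in range(n_frames):
        if blink_start <= f < blink_start + blink_len:
            frame = np.full((128, 128), 0.5)     # uniform gray
        else:
            frame = grating_frame()
        contrast.append(float(frame.std()))
    return contrast

print(simulate_blink()[20:40])   # contrast drops to 0 during the simulated blink
```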

Keyword: Vision; Attention
Link ID: 29303 - Posted: 05.14.2024

By Emily Cooke & LiveScience Optical illusions play on the brain's biases, tricking it into perceiving images differently than how they really are. And now, in mice, scientists have harnessed an optical illusion to reveal hidden insights into how the brain processes visual information. The research focused on the neon-color-spreading illusion, which incorporates patterns of thin lines on a solid background. Parts of these lines are a different color — such as lime green, in the example above — and the brain perceives these lines as part of a solid shape with a distinct border — a circle, in this case. The closed shape also appears brighter than the lines surrounding it. It's well established that this illusion causes the human brain to falsely fill in and perceive a nonexistent outline and brightness — but there's been ongoing debate about what's going on in the brain when it happens. Now, for the first time, scientists have demonstrated that the illusion works on mice, and this allowed them to peer into the rodents' brains to see what's going on. Specifically, they zoomed in on part of the brain called the visual cortex. When light hits our eyes, electrical signals are sent via nerves to the visual cortex. This region processes that visual data and sends it on to other areas of the brain, allowing us to perceive the world around us. The visual cortex is divided into a hierarchy of areas numbered V1, V2, V3 and so on, each containing six layers of neurons. Each area is responsible for processing different features of the images that hit the eyes, with V1 neurons handling the first and most basic stage of processing, while the other areas belong to the "higher visual areas." These neurons are responsible for more complex visual processing than V1 neurons. © 2024 SCIENTIFIC AMERICAN

Keyword: Vision; Consciousness
Link ID: 29298 - Posted: 05.09.2024

By Dan Falk Some years ago, when he was still living in southern California, neuroscientist Christof Koch drank a bottle of Barolo wine while watching The Highlander, and then, at midnight, ran up to the summit of Mount Wilson, the 5,710-foot peak that looms over Los Angeles. After an hour of “stumbling around with my headlamp and becoming nauseated,” as he later described the incident, he realized the nighttime adventure was probably not a smart idea, and climbed back down, though not before shouting into the darkness the last line of William Ernest Henley’s 1875 poem “Invictus”: “I am the master of my fate / I am the captain of my soul.” Koch, who first rose to prominence for his collaborative work with the late Nobel Laureate Francis Crick, is hardly the only scientist to ponder the nature of the self—but he is perhaps the most adventurous, both in body and mind. He sees consciousness as the central mystery of our universe, and is willing to explore any reasonable idea in the search for an explanation. Over the years, Koch has toyed with a wide array of ideas, some of them distinctly speculative—like the idea that the Internet might become conscious, for example, or that with sufficient technology, multiple brains could be fused together, linking their accompanying minds along the way. (And yet, he does have his limits: He’s deeply skeptical both of the idea that we can “upload” our minds and of the “simulation hypothesis.”) In his new book, Then I Am Myself The World, Koch, currently the chief scientist at the Allen Institute for Brain Science in Seattle, ventures through the challenging landscape of integrated information theory (IIT), a framework that attempts to compute the amount of consciousness in a system based on the degree to which information is networked. Along the way, he struggles with what may be the most difficult question of all: How do our thoughts—seemingly ethereal and without mass or any other physical properties—have real-world consequences? © 2024 NautilusNext Inc.,

Keyword: Consciousness
Link ID: 29294 - Posted: 05.07.2024

By Steve Paulson These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness. O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.) When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind. © 2024 NautilusNext Inc.,

Keyword: Consciousness; Robotics
Link ID: 29289 - Posted: 05.03.2024

By Dan Falk Daniel Dennett, who died in April at the age of 82, was a towering figure in the philosophy of mind. Known for his staunch physicalist stance, he argued that minds, like bodies, are the product of evolution. He believed that we are, in a sense, machines—but astoundingly complex ones, the result of millions of years of natural selection. Dennett wrote more than a dozen books, some of them aimed at a scholarly audience but many of them directed squarely at the inquisitive non-specialist—including bestsellers like Consciousness Explained, Breaking the Spell, and Darwin’s Dangerous Idea. Reading his works, one gets the impression of a mind jammed to the rafters with ideas. As Richard Dawkins put it in a blurb for Dennett’s last book, a memoir titled I’ve Been Thinking: “How unfair for one man to be blessed with such a torrent of stimulating thoughts.” Dennett spent decades puzzling over the existence of minds. How does non-thinking matter arrange itself into matter that can think, and even ponder its own existence? A long-time academic nemesis of Dennett’s, the philosopher David Chalmers, dubbed this the “Hard Problem” of consciousness. But Dennett felt this label needlessly turned a series of potentially-solvable problems into one giant unsolvable one: He was sure the so-called hard problem would evaporate once the various lesser (but still difficult) problems of understanding the brain’s mechanics were figured out. Because he viewed brains as miracle-free mechanisms, he saw no barrier to machine consciousness, at least in principle. Yet he had no fear of Terminator-style AI doomsday scenarios, either. (“The whole singularity stuff, that’s preposterous,” he once told an interviewer for The Guardian. “It distracts us from much more pressing problems.”) © 2024 NautilusNext Inc.,

Keyword: Consciousness; Attention
Link ID: 29285 - Posted: 05.02.2024

By Lilly Tozer How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found. In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer. The findings, published on 22 April in Nature Human Behaviour, could offer fresh insights into how people experience and keep track of time. “For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.” Research has shown that humans’ perception of time is intrinsically linked to our senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France. Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect. © 2024 Springer Nature Limited

Keyword: Attention; Vision
Link ID: 29269 - Posted: 04.24.2024

By John Horgan Philosopher Daniel Dennett died a few days ago, on April 19. When he argued that we overrate consciousness, he demonstrated, paradoxically, how conscious he was, and he made his audience more conscious. Dennett’s death feels like the end of an era, the era of ultramaterialist, ultra-Darwinian, swaggering, know-it-all scientism. Who’s left, Richard Dawkins? Dennett wasn’t as smart as he thought he was, I liked to say, because no one is. He lacked the self-doubt gene, but he forced me to doubt myself. He made me rethink what I think, and what more can you ask of a philosopher? I first encountered Dennett’s in-your-face brilliance in 1981 when I read The Mind’s I, a collection of essays he co-edited. And his name popped up at a consciousness shindig I attended earlier this month. To honor Dennett, I’m posting a revision of my 2017 critique of his claim that consciousness is an “illusion.” I’m also coining a phrase, “the Dennett paradox,” which is explained below. Of all the odd notions to emerge from debates over consciousness, the oddest is that it doesn’t exist, at least not in the way we think it does. It is an illusion, like “Santa Claus” or “American democracy.” René Descartes said consciousness is the one undeniable fact of our existence, and I find it hard to disagree. I’m conscious right now, as I type this sentence, and you are presumably conscious as you read it (although I can’t be absolutely sure). The idea that consciousness isn’t real has always struck me as absurd, but smart people espouse it. One of the smartest is philosopher Daniel Dennett, who has been questioning consciousness for decades, notably in his 1991 bestseller Consciousness Explained. © 2024 SCIENTIFIC AMERICAN

Keyword: Consciousness
Link ID: 29266 - Posted: 04.24.2024