Chapter 10. Vision: From Eye to Brain
By Katrina Miller Take a look at this video of a waiting room. Do you see anything strange? Perhaps you saw the rug disappear, or the couch pillows transform, or a few ceiling panels evaporate. Or maybe you didn’t. In fact, dozens of objects change in this video, which won second place in the Best Illusion of the Year Contest in 2021. Voting for the latest version of the contest opened on Monday. Illusions “are the phenomena in which the physical reality is divorced from perception,” said Stephen Macknik, a neuroscientist at SUNY Downstate Health Sciences University in Brooklyn. He runs the contest with his colleague and spouse, Susana Martinez-Conde. By studying the disconnect between perception and reality, scientists can better understand which brain regions and processes help us interpret the world around us. The illusion above highlights change blindness, the brain’s failure to notice shifts in the environment, especially when they occur gradually. To some extent, all sensory experience is illusory, Dr. Martinez-Conde asserts. “We are always constructing a simulation of reality,” she said. “We don’t have direct access to that reality. We live inside the simulation that we create.” She and Dr. Macknik have run the illusion contest since 2005. What began as a public outreach event at an academic conference has since blossomed into an annual competition open to anyone in the world. They initially worried that people would run out of illusions to submit. “But that actually never happened,” Dr. Martinez-Conde said. “What ended up happening instead is that people started developing illusions, actually, with an eye to competing in the contest.” © 2025 The New York Times Company
Keyword: Vision; Attention
Link ID: 29843 - Posted: 06.28.2025
By Nala Rogers Coffer illusion: What do you see when you stare at this grid of line segments: a series of rectangles, or a series of circles? The way you perceive this optical illusion, known as the Coffer illusion, may tie back to the visual environment that surrounds you, a recent preprint suggests. (Image: Anthony Norcia/Smith-Kettlewell Eye Research Institute) Himba people from rural Namibia can see right through optical illusions that trick people from the United States and United Kingdom. Even when there’s no “right” or “wrong” way to interpret an image, what Himba people see is often vastly different from what people see in industrialized societies, a new preprint suggests. That could mean people’s vision is fundamentally shaped by the environments they’re raised in—an old but controversial idea that runs counter to the way human perception is often studied. For example, when presented with a grid of line segments that can be seen as either rectangles or circles—an optical illusion known as the Coffer illusion—people from the U.S. and U.K. almost always see rectangles first, and they often struggle to see circles. The researchers suspect this is because they are surrounded by rectangular architecture, an idea known as the carpentered world hypothesis. In contrast, the traditional villages of Himba people are composed of round huts surrounding a circular livestock corral. People from these villages almost always see circles first, and about half don’t see rectangles even when prompted. “I’m surprised that you can’t see the round ones,” says Uapwanawa Muhenije, a Himba woman from a village in northern Namibia, speaking through an interpreter over a Zoom interview. “I wonder how you can’t see them.” Muhenije didn’t participate in the research because her village is less remote than those in the study, and it includes rectangular as well as circular buildings. She sees both shapes in the Coffer illusion easily. Although the study found dramatic differences in how people see four illusions, “the one experiment that’s going to overwhelm people is this Coffer,” says Jules Davidoff, a psychologist at the University of London who was not involved in the study. “There are other striking cultural differences in perception, but the one that they’ve produced here is a real humdinger.” The findings were published as a preprint on PsyArXiv in February and updated this week. © 2025 American Association for the Advancement of Science.
Keyword: Vision; Development of the Brain
Link ID: 29838 - Posted: 06.21.2025
Anna Bawden Health and social affairs correspondent Weight loss drugs could at least double the risk of diabetic patients developing age-related macular degeneration, a large-scale study has found. Originally developed for diabetes patients, glucagon-like peptide-1 receptor agonist (GLP-1 RA) medicines have transformed how obesity is treated and there is growing evidence of wider health benefits. They help reduce blood sugar levels, slow digestion and reduce appetite. But a study by Canadian scientists published in JAMA Ophthalmology has found that after six months of use, GLP-1 RAs are associated with double the risk of older people with diabetes developing neovascular age-related macular degeneration compared with similar patients not taking the drugs. Academics at the University of Toronto examined medical data for more than 1 million Ontario residents with a diagnosis of diabetes and identified 46,334 patients with an average age of 66 who were prescribed GLP-1 RAs. Nearly all (97.5%) were taking semaglutide, while 2.5% were on lixisenatide. The study did not exclude any specific brand of drugs, but since Wegovy was only approved in Canada in November 2021, primarily for weight loss, it is likely the bulk of semaglutide users in the study were taking Ozempic, which is prescribed for diabetes. Each patient on semaglutide or lixisenatide was matched with two patients who also had diabetes but were not taking the drugs, who shared similar characteristics such as age, gender and health conditions. The researchers then compared how many patients developed neovascular age-related macular degeneration over three years. © 2025 Guardian News & Media Limited
Ian Sample Science editor Researchers have given people a taste of superhuman vision after creating contact lenses that allow them to see infrared light, a band of the electromagnetic spectrum that is invisible to the naked eye. Unlike night vision goggles, the contact lenses need no power source, and because they are transparent, wearers can see infrared and all the normal visible colours of light at the same time. Prof Tian Xue, a neuroscientist at the University of Science and Technology of China, said the work paved the way for a range of contact lenses, glasses and other wearable devices that give people “super-vision”. The technology could also help people with colour blindness, he added. The lenses are the latest breakthrough driven by the team’s desire to extend human vision beyond its natural, narrow range. The wavelengths of light that humans can see make up less than one hundredth of a per cent of the electromagnetic spectrum. Dr Yuqian Ma, a researcher on the project, said: “Over half of the solar radiation energy, existing as infrared light, remains imperceptible to humans.” The rainbow of colours visible to humans spans wavelengths from 400 to 700 nanometres (a nanometre is a millionth of a millimetre). But many other animals sense the world differently. Birds, bees, reindeer and mice can see ultraviolet light, wavelengths too short for humans to perceive. Meanwhile, some snakes and vampire bats have organs that detect far-infrared, or thermal radiation, which helps them hunt for prey. To extend humans’ range of vision and enhance our experience of the world, the scientists developed what are called upconversion nanoparticles. The particles absorb infrared light and re-emit it as visible light. For the study, the scientists chose particles that absorb near-infrared light, comprising wavelengths that are just too long for humans to perceive, and converted it into visible red, green or blue light. © 2025 Guardian News & Media Limited
Keyword: Vision; Robotics
Link ID: 29804 - Posted: 05.24.2025
By Jacek Krywko edited by Allison Parshall There are only so many colors that the typical human eye can see; estimates put the number just below 10 million. But now, for the first time, scientists say they’ve broken out of that familiar spectrum and into a new world of color. In a paper published on Friday in Science Advances, researchers detail how they used a precise laser setup to stimulate the retinas of five participants, making them the first humans to see a color beyond our visual range: an impossibly saturated bluish green. Our retinas contain three types of cone cells, photoreceptors that detect the wavelengths of light. S cones pick up relatively short wavelengths, which we see as blue. M cones react to medium wavelengths, which we see as green. And L cones are triggered by long wavelengths, which we see as red. These red, green and blue signals travel to the brain, where they’re combined into the full-color vision we experience. But these three cone types handle overlapping ranges of light: the light that activates M cones will also activate either S cones or L cones. “There’s no light in the world that can activate only the M cone cells because, if they are being activated, for sure one or both other types get activated as well,” says Ren Ng, a professor of electrical engineering and computer science at the University of California, Berkeley. Ng and his research team wanted to try getting around that fundamental limitation, so they developed a technicolor technique they call “Oz.” “The name comes from the Wizard of Oz, where there’s a journey to the Emerald City, where things look the most dazzling green you’ve ever seen,” Ng explains. On their own expedition, the researchers used lasers to precisely deliver tiny doses of light to select cone cells in the human eye. First, they mapped a portion of the retina to identify each cone cell as either an S, M or L cone. Then, using the laser, they delivered light only to M cone cells. © 2025 SCIENTIFIC AMERICAN,
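The reasoning above, that no real light can drive the M cones alone, can be illustrated with a rough numerical sketch. The Gaussian curves, peak wavelengths and widths below are simplifying assumptions chosen only to show the overlap; they are not the cone-sensitivity functions used in the study.

```python
import numpy as np

# Very rough Gaussian stand-ins for human cone spectral sensitivities.
# Peak wavelengths (nm) are approximate textbook values; the shared width is an
# assumption chosen only to illustrate the overlap, not a fitted cone fundamental.
PEAKS = {"S": 440.0, "M": 535.0, "L": 565.0}
WIDTH = 40.0  # assumed standard deviation, in nm

def cone_response(wavelength_nm):
    """Relative response of each cone type to a single wavelength of light."""
    return {
        cone: np.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH ** 2))
        for cone, peak in PEAKS.items()
    }

# Scan the visible range: is there any wavelength where the M cones respond
# appreciably while both S and L cones stay essentially silent?
isolating = [
    wl for wl in range(400, 701)
    if (r := cone_response(wl))["M"] > 0.1 and r["S"] < 0.01 and r["L"] < 0.01
]
print("Wavelengths isolating M cones:", isolating or "none in this toy model")
```

Because the M curve sits between the S and L curves, any wavelength strong enough to excite the M cones also excites a neighbor, which is why the study steers a laser onto individual, pre-mapped cone cells instead of relying on wavelength alone.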
Keyword: Vision
Link ID: 29752 - Posted: 04.19.2025
By Catherine Offord Scientists say they have found a long-sought-after population of stem cells in the retina of human fetuses that could be used to develop therapies for one of the leading causes of blindness. The use of fetal tissue, a source of ethical debate and controversy in some countries, likely wouldn’t be necessary for an eventual therapy: Transplanting similar human cells generated in the lab into the eyes of mice with retinal disease protected the animals’ vision, the team reported this week in Science Translational Medicine. “I see this as potentially a very interesting advancement of this field, where we are really in need of a regenerative treatment for retinal diseases,” says Anders Kvanta, a retinal specialist at the Karolinska Institute who was not involved in the work. He and others note that more evidence is needed to show the therapeutic usefulness of the newly described cells. The retina, a layer of light-sensing tissue at the back of the eye, can degenerate with age or because of an inherited condition such as retinitis pigmentosa, a rare disease that causes gradual breakdown of retinal cells. Hundreds of millions of people worldwide are affected by retinal degeneration, and many suffer vision loss or blindness as a result. Most forms can’t be treated. Scientists have long seen a potential solution in stem cells, which can regenerate and repair injured tissue. Several early-stage clinical trials are already evaluating the safety and efficacy of transplanting stem cells derived from cell lines established from human embryos, for example, or adult human cells that have been reprogrammed to a stem-like state. Other approaches include transplanting so-called retinal progenitor cells (RPCs)—immature cells that give rise to photoreceptors and other sorts of retinal cells—from aborted human fetuses. Some researchers have argued that another type of cell, sometimes referred to as retinal stem cells (RSCs), could also treat retinal degeneration. These cells’ long lifespans and ability to undergo numerous cell divisions could make them better candidates to regenerate damaged tissue than RPCs. RSCs have been found in the eyes of zebrafish and some other vertebrates, but evidence for their existence in mammals has been controversial. Reports announcing their discovery in adult mice in the early 2000s were later discounted.
Keyword: Vision; Stem Cells
Link ID: 29719 - Posted: 03.27.2025
By Bill Newsome What paper changed your life?: Activity of superior colliculus in behaving monkey. II. Effect of attention on neuronal responses. M.E. Goldberg and R.H. Wurtz Journal of Neurophysiology (1972) In 1972, Mickey Goldberg and Bob Wurtz published a quadrilogy of papers in the Journal of Neurophysiology—yes, you could do that in those days—on the physiological activity of single superior colliculus neurons in alert monkeys trained to perform simple eye fixation and eye movement tasks. The experiments revealed a rich variety of sensory and motor signals: Some neurons fired at the onset of a visual stimulus; others showed bursts of activity immediately prior to the eye movement. The researchers found that visually evoked activity differed depending on whether the monkey ultimately used the stimulus as a target for a saccadic eye movement. The neural response to the visual stimulus was stronger and continued until the time of the eye movement, forming a sort of temporal bridge between stimulus and evoked behavioral response. This bridge was alluring because it hinted at intermediate processes—perhaps the stuff of cognition—between sensory input and behavioral output. But it was also mysterious, in that no models existed for how such activity might be initiated and maintained until the behavioral response. These papers were revelatory to me because they pointed toward a mechanistic physiological understanding of such complex cognitive functions as attention. I was particularly fascinated by the second paper in the series of four, which dug into that mystery. Goldberg and Wurtz explicitly made a suggestive leap from physiology to psychology: “[Because] we can infer that the monkey attended to the stimulus when he made a saccade to it, the enhancement can be viewed as a neurophysiological event related to the psychological phenomenon of attention.” They also issued appropriate caveats, noting that “the unitary behavioral concept” of attention “may not have a single physiological mechanism.” © 2025 Simons Foundation
Keyword: Vision; Attention
Link ID: 29679 - Posted: 02.22.2025
By Kristel Tjandra Close your eyes and picture an apple—what do you see? Most people will conjure up a vivid image of the fruit, but for the roughly one in 100 individuals with aphantasia, nothing will appear in the mind’s eye at all. Now, scientists have discovered that in people with this inability to form mental images, visual processing areas of the brain still light up when they try to do so. The study, published today in Current Biology, suggests aphantasia is not caused by a complete deficit in visual processing, as researchers have previously proposed. Visual brain areas are still active when aphantasic people are asked to imagine—but that activity doesn’t translate into conscious experience. The work offers new clues about the neurological differences underlying this little-explored condition. The study authors “take a very strong, mechanistic approach,” says Sarah Shomstein, a vision scientist at George Washington University who was not involved in the study. “It was asking the right questions and using the right methods.” Some scientists suspect aphantasia may be caused by a malfunction in the primary visual cortex, the first area in the brain to process images. “Typically, primary cortex is thought to be the engine of visual perception,” says Joel Pearson, a neuroscientist at the University of New South Wales Sydney who co-led the study. “If you don’t have activity there, you’re not going to have perceptual consciousness.” To see what was going on in this region in aphantasics, the team used functional magnetic resonance imaging to measure the brain activity of 14 people with aphantasia and 18 neurotypical controls as they repeatedly saw two simple patterns, made up of either green vertical lines or red horizontal lines. They then repeated the experiment, this time asking participants to simply imagine the two images.
Keyword: Attention; Vision
Link ID: 29624 - Posted: 01.11.2025
By Ann Gibbons As the parent of any teenager knows, humans need a long time to grow up: We take about twice as long as chimpanzees to reach adulthood. Anthropologists theorize that our long childhood and adolescence allow us to build comparatively bigger brains or learn skills that help us survive and reproduce. Now, a study of an ancient youth’s teeth suggests a slow pattern of growth appeared at least 1.8 million years ago, half a million years earlier than any previous evidence for delayed dental development. Researchers used state-of-the-art X-ray imaging methods to count growth lines in the molars of a member of our genus, Homo, who lived 1.77 million years ago in what today is Dmanisi, Georgia. Although the youth developed much faster than children today, its molars grew as slowly as a modern human’s during the first 5 years of life, the researchers report today in Nature. The finding, in a group whose brains are hardly larger than those of chimpanzees, could provide clues to why humans evolved such long childhoods. “One of the main questions in paleoanthropology is to understand when this pattern of slow development evolves in [our genus] Homo,” says Alessia Nava, a bioarchaeologist at the Sapienza University of Rome who is not part of the study. “Now, we have an important hint.” Others caution that although the teeth of this youngster grew slowly, other individuals, including our direct ancestors, might have developed faster. Researchers have known since the 1930s that humans stay immature longer than other apes. Some posit our ancestors evolved slow growth to allow more time and energy to build bigger brains, or to learn how to adapt to complex social interactions and environments before they had children. To pin down when this slow pattern of growth arose, researchers often turn to teeth, especially permanent molars, because they persist in the fossil record and contain growth lines like tree rings. What’s more, the dental growth rate in humans and other primates correlates with the development of the brain and body.
Keyword: Evolution; Sexual Behavior
Link ID: 29562 - Posted: 11.16.2024
By Elena Renken Small may be mightier than we think when it comes to brains. This is what neuroscientist Marcella Noorman is learning from her research into tiny animals like fruit flies, whose brains hold around 140,000 neurons each, compared to the roughly 86 billion in the human brain. In work published earlier this month in Nature Neuroscience, Noorman and colleagues showed that a small network of cells in the fruit fly brain was capable of completing a highly complex task with impressive accuracy: maintaining a consistent sense of direction. Smaller networks were thought to be capable of only discrete internal mental representations, not continuous ones. These networks can “perform more complex computations than we previously thought,” says Noorman, an associate at the Howard Hughes Medical Institute. The scientists monitored the brains of fruit flies as they walked on tiny rotating foam balls in the dark, and recorded the activity of a network of cells responsible for keeping track of head direction. This kind of brain network is called a ring attractor network, and it is present in both insects and humans. Ring attractor networks maintain variables like orientation or angular velocity—the rate at which an object rotates—over time as we navigate, integrating new information from the senses and making sure we don’t lose track of the original signal, even when there are no updates. You know which way you’re facing even if you close your eyes and stand still, for example. After finding that this small circuit in fruit fly brains—which contains only about 50 neurons in the core of the network—could accurately represent head direction, Noorman and her colleagues built models to identify the minimum size of a network that could still theoretically perform this task. Smaller networks, they found, required more precise signaling between neurons. But hundreds or thousands of cells weren’t necessary for this basic task. As few as four cells could form a ring attractor, they found. © 2024 NautilusNext Inc.,
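A minimal sketch of the ring-attractor idea described above, under toy assumptions: about 50 rate neurons arranged on a ring, local excitation between neurons with similar preferred headings, and a crude normalization standing in for global inhibition. This is not the fly circuit or the models fitted in the paper, just an illustration of how a bump of activity can go on representing a heading after all input is removed.

```python
import numpy as np

# Toy ring attractor with assumed parameters; not the fly connectome or the
# models from the Nature Neuroscience study.
N = 50
prefs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # preferred headings

# Local excitation between neurons with nearby preferred headings; the explicit
# normalization in step() is a crude stand-in for global inhibition.
W = np.exp(3.0 * np.cos(prefs[:, None] - prefs[None, :]))

def step(r, external_input=0.0):
    """One update: recurrent drive plus input, rectified, then normalized."""
    r = np.maximum(W @ r + external_input, 0.0)
    return r / (np.linalg.norm(r) + 1e-12)

def decode(r):
    """Population-vector estimate of the represented heading, in degrees."""
    return np.degrees(np.angle(np.sum(r * np.exp(1j * prefs)))) % 360.0

# Start with a bump of activity at heading 0 and nudge it briefly toward
# 90 degrees (a stand-in for angular-velocity input from turning).
r = step(np.maximum(np.cos(prefs), 0.0))
nudge = 0.3 * np.maximum(np.cos(prefs - np.pi / 2.0), 0.0)
for _ in range(50):
    r = step(r, nudge)
moved_to = decode(r)

# Now remove all input: a persistent bump should keep representing the heading.
for _ in range(250):
    r = step(r)
print(f"Heading after nudge: {moved_to:.0f} deg; after 250 silent steps: {decode(r):.0f} deg")
```

The design choice to normalize the activity each step keeps the bump from growing or dying out, which is the role the paper's analysis assigns to precisely tuned recurrent connections in much smaller networks.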
Keyword: Development of the Brain; Vision
Link ID: 29560 - Posted: 11.16.2024
By Phil Plait I remember watching the full moon rise one early evening a while back. It was when I still lived in Colorado, and I was standing outside in my yard. I first noticed a glow to the east lighting up the flat horizon in the darkening sky, and within moments the moon was cresting above it, yellow and swollen—like, really swollen. As it cleared the horizon, the moon looked huge! It also seemed so close that I could reach out and touch it; it was so “in my face” that I felt I could fall in. I gawped at it for a moment and then smiled. I knew what I was actually seeing: the moon illusion. Anyone who is capable of seeing the moon (or the sun) near the horizon has experienced this effect. The moon looks enormous there, far larger than it does when it’s overhead. I’m an astronomer, and I know the moon is no bigger on the horizon than at the zenith, yet I can’t not see it that way. It’s an overwhelming effect. But it’s not real. Simple measurements of the moon show it’s essentially the same size on the horizon as when it’s overhead. This really is an illusion. It’s been around awhile, too: the illusion is shown in cuneiform on a clay tablet from the ancient Assyrian city Nineveh that has been dated to the seventh century B.C.E. Attempts to explain it are as old as the illusion itself, and most come up short. Aristotle wrote about it, for example, attributing it to the effects of mist. This isn’t correct, obviously; the illusion manifests even in perfectly clear weather. A related idea, still common today, is that Earth’s air acts like a lens, refracting (bending) the light from the moon and magnifying it. But we know that’s not right because the moon is measurably the same size no matter where it is in the sky. Also, examining the physics of that explanation shows that it falls short as well. In fact, while the air near the horizon does indeed act like a lens, its actual effect is to make the sun and moon look squished, like flat ovals, not to simply magnify them. So that can’t be the cause either.
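The claim that the horizon moon is "measurably the same size" is easy to check with rough geometry. The figures below are approximate mean values (the Earth-moon distance actually varies by a few percent), so this is a back-of-the-envelope sketch, not a precise ephemeris calculation.

```python
import math

# Approximate mean values, in kilometres.
MOON_RADIUS = 1_737.4
EARTH_MOON_DISTANCE = 384_400      # centre of Earth to centre of moon
EARTH_RADIUS = 6_371.0

def angular_diameter_deg(distance_km):
    """Apparent angular size of the moon from a given observer distance."""
    return math.degrees(2.0 * math.atan(MOON_RADIUS / distance_km))

# Overhead, the observer sits roughly one Earth radius closer to the moon
# than when the moon is on the horizon.
overhead = angular_diameter_deg(EARTH_MOON_DISTANCE - EARTH_RADIUS)
horizon = angular_diameter_deg(EARTH_MOON_DISTANCE)

print(f"Overhead: {overhead:.3f} deg; on the horizon: {horizon:.3f} deg")
print(f"The horizon moon is about {100 * (1 - horizon / overhead):.1f}% smaller, not larger.")
```

If anything, the horizon moon subtends a slightly smaller angle than the overhead moon, which underlines the article's point that the dramatic size difference we perceive is constructed by the brain.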
Keyword: Vision; Attention
Link ID: 29522 - Posted: 10.19.2024
By Yasemin Saplakoglu Two years ago, Sarah Shomstein realized she didn’t have a mind’s eye. The vision scientist was sitting in a seminar room, listening to a scientific talk, when the presenter asked the audience to imagine an apple. Shomstein closed her eyes and did so. Then, the presenter asked the crowd to open their eyes and rate how vividly they saw the apple in their mind. Saw the apple? Shomstein was confused. She didn’t actually see an apple. She could think about an apple: its taste, its shape, its color, the way light might hit it. But she didn’t see it. Behind her eyes, “it was completely black,” Shomstein recalled. And yet, “I imagined an apple.” Most of her colleagues reacted differently. They reported actually seeing an apple, some vividly and some faintly, floating like a hologram in front of them. In that moment, Shomstein, who’s spent years researching perception at George Washington University, realized she experienced the world differently than others. She is part of a subset of people — thought to be about 1% to 4% of the general population — who lack mental imagery, a phenomenon known as aphantasia. Though it was described more than 140 years ago, the term “aphantasia” was coined only in 2015. It immediately drew the attention of anyone interested in how the imagination works. That included neuroscientists. So far, they’re finding that aphantasia is not a disorder — it’s a different way of experiencing the world. Early studies have suggested that differences in the connections between brain regions involved in vision, memory and decision-making could explain variations in people’s ability to form mental images. Because many people with aphantasia dream in images and can recognize objects and faces, it seems likely that their minds store visual information — they just can’t access it voluntarily or can’t use it to generate the experience of imagery. That’s just one explanation for aphantasia. In reality, people’s subjective experiences vary dramatically, and it’s possible that different subsets of aphantasics have their own neural explanations. Aphantasia and hyperphantasia, the opposite phenomenon in which people report mental imagery as vivid as reality, are in fact two ends of a spectrum, sandwiching an infinite range of internal experiences between them. © 2024 the Simons Foundation.
Keyword: Attention; Vision
Link ID: 29417 - Posted: 08.02.2024
By Abdullahi Tsanni Time takes its toll on the eyes. Now a funky, Hitchcockian video of 64 eyeballs, all rolling and blinking in different directions, is providing a novel visual of one way in which eyes age. A video display of 64 eyeballs, captured using eye trackers, helped researchers compare the size of younger and older study participants’ pupils under differing light conditions, confirming aging affects our eyes. Lab studies have previously shown that the eye’s pupil size shrinks as people get older, making the pupil less responsive to light. A new study that rigged volunteers up with eye-trackers and GoPro videos and sent them traipsing around a university campus has confirmed what happens in the lab happens in real life, too. While pupils remain sensitive to changing light conditions, pupil size can decrease by up to about 0.4 millimeters per decade, researchers report June 19 in Royal Society Open Science. “We see a big age effect,” says Manuel Spitschan, a neuroscientist at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. The change helps explain why it can be increasingly harder for people to see in dim light as they age. Light travels through the dark pupil in the center of the eye to the retina, a layer of cells in the back of the eyes that converts the light into images. The pupil’s size can vary from 2 to 8 millimeters in diameter depending on light conditions, getting smaller in bright light and larger in dim light. “With a small pupil, less light enters the eye,” Spitschan says. © Society for Science & the Public 2000–2024.
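The practical consequence of that shrinkage can be roughed out with simple arithmetic, since the light reaching the retina scales with pupil area, i.e. with the diameter squared. The 7 mm starting diameter below is an assumed illustrative value, not a number from the study; only the roughly 0.4 mm-per-decade figure comes from the article.

```python
# Illustrative arithmetic only: assume a 20-year-old's pupil opens to 7 mm in
# dim light, and apply the reported ~0.4 mm-per-decade shrinkage out to age 70.
YOUNG_DIAMETER_MM = 7.0        # assumed starting value
SHRINK_PER_DECADE_MM = 0.4     # figure reported in the article

def relative_light(diameter_mm, reference_mm=YOUNG_DIAMETER_MM):
    """Light admitted relative to the reference pupil (area goes as diameter squared)."""
    return (diameter_mm / reference_mm) ** 2

for decades in range(6):
    d = YOUNG_DIAMETER_MM - SHRINK_PER_DECADE_MM * decades
    print(f"age ~{20 + 10 * decades}: {d:.1f} mm pupil -> "
          f"{100 * relative_light(d):.0f}% of the dim-light intake at age 20")
```

Under these assumptions, a 70-year-old's fully dilated pupil admits only about half the light of a 20-year-old's, consistent with the article's point about dim-light vision becoming harder with age.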
Keyword: Vision; Development of the Brain
Link ID: 29375 - Posted: 07.03.2024
By Elie Dolgin The COVID-19 pandemic didn’t just reshape how children learn and see the world. It transformed the shape of their eyeballs. As real-life classrooms and playgrounds gave way to virtual meetings and digital devices, the time that children spent focusing on screens and other nearby objects surged — and the time they spent outdoors dropped precipitously. This shift led to a notable change in children’s anatomy: their eyeballs lengthened to better accommodate near-vision tasks. Study after study, in regions ranging from Europe to Asia, documented this change. One analysis from Hong Kong even reported a near doubling in the incidence of pathologically stretched eyeballs among six-year-olds compared with pre-pandemic levels [1]. This elongation improves the clarity of close-up images on the retina, the light-sensitive layer at the back of the eye. But it also makes far-away objects appear blurry, leading to a condition known as myopia, or short-sightedness. And although corrective eyewear can usually address the issue — allowing children to, for example, see a blackboard or read from a distance — severe myopia can lead to more-serious complications, such as retinal detachment, macular degeneration, glaucoma and even permanent blindness. Rates of myopia were booming well before the COVID-19 pandemic. Widely cited projections in the mid-2010s suggested that myopia would affect half of the world’s population by mid-century (see ‘Rising prevalence’), which would effectively double the incidence rate in less than four decades [2] (see ‘Affecting every age’). Now, those alarming predictions seem much too modest, says Neelam Pawar, a paediatric ophthalmologist at the Aravind Eye Hospital in Tirunelveli, India. “I don’t think it will double,” she says. “It will triple.” © 2024 Springer Nature Limited
Keyword: Vision; Development of the Brain
Link ID: 29329 - Posted: 05.29.2024
By Angie Voyles Askham Each time we blink, it obscures our visual world for 100 to 300 milliseconds. It’s a necessary action that also, researchers long presumed, presents the brain with a problem: how to cobble together a cohesive picture of the before and after. “No one really thought about blinks as an act of looking or vision to begin with,” says Martin Rolfs, professor of experimental psychology at Humboldt University of Berlin. But blinking may be a more important component of vision than previously thought, according to a study published last month in the Proceedings of the National Academy of Sciences. Participants performed better on a visual task when they blinked while looking at the visual stimulus than when they blinked before it appeared. The blink, the team found, caused a change in visual input that improved participants’ perception. The finding suggests that blinking is a feature of seeing rather than a bug, says Rolfs, who was not involved with the study but wrote a commentary about it. And it could explain why adults blink more frequently than is seemingly necessary, the researchers say. “The brain capitalizes on things that are changing in the visual world—whether it’s blinks or eye movements, or any type of ocular-motor dynamics,” says Patrick Mayo, a neuroscientist in the ophthalmology department at the University of Pittsburgh, who was also not involved in the work. “That is … a point that’s still not well appreciated in visual neuroscience, generally.” The researchers started their investigation by simulating a blink. In the computational model they devised, a person staring at black and white stripes would suddenly see a dark, uniform gray before once again viewing the high-contrast pattern. The interruption would cause a brief change in the stimulus input to neurons in the retina, which in turn could increase the cells’ sensitivity to stimuli right after a blink, they hypothesized. © 2024 Simons Foundation
Keyword: Vision; Attention
Link ID: 29303 - Posted: 05.14.2024
By Emily Cooke & LiveScience Optical illusions play on the brain's biases, tricking it into perceiving images differently than how they really are. And now, in mice, scientists have harnessed an optical illusion to reveal hidden insights into how the brain processes visual information. The research focused on the neon-color-spreading illusion, which incorporates patterns of thin lines on a solid background. Parts of these lines are a different color — such as lime green, in the example above — and the brain perceives these lines as part of a solid shape with a distinct border — a circle, in this case. The closed shape also appears brighter than the lines surrounding it. It's well established that this illusion causes the human brain to falsely fill in and perceive a nonexistent outline and brightness — but there's been ongoing debate about what's going on in the brain when it happens. Now, for the first time, scientists have demonstrated that the illusion works on mice, and this allowed them to peer into the rodents' brains to see what's going on. Specifically, they zoomed in on part of the brain called the visual cortex. When light hits our eyes, electrical signals are sent via nerves to the visual cortex. This region processes that visual data and sends it on to other areas of the brain, allowing us to perceive the world around us. The visual cortex is divided into a hierarchy of areas, progressively numbered V1, V2, V3 and so on; each area itself contains six layers of neurons. Each area is responsible for processing different features of the images that hit the eyes, with V1 neurons handling the first and most basic stage of processing, while the rest make up the "higher visual areas." These neurons are responsible for more complex visual processing than V1 neurons. © 2024 SCIENTIFIC AMERICAN,
Keyword: Vision; Consciousness
Link ID: 29298 - Posted: 05.09.2024
By Lilly Tozer How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found. In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer. The findings, published on 22 April in Nature Human Behaviour [1], could offer fresh insights into how people experience and keep track of time. “For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.” Research has shown that humans’ perception of time is intrinsically linked to our senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France. Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect. © 2024 Springer Nature Limited
Keyword: Attention; Vision
Link ID: 29269 - Posted: 04.24.2024
By Meghan Willcoxon In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect. Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action. Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain. The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism and speech. Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries.
Keyword: Attention; Vision
Link ID: 29242 - Posted: 04.04.2024
Linda Geddes Science correspondent If you have wondered why your partner always beats you at tennis or one child always crushes the other at Fortnite, it seems there is more to it than pure physical ability. Some people are effectively able to see more “images per second” than others, research suggests, meaning they’re innately better at spotting or tracking fast-moving objects such as tennis balls. The rate at which our brains can discriminate between different visual signals is known as temporal resolution, and influences the speed at which we are able to respond to changes in our environment. Previous studies have suggested that animals with high visual temporal resolution tend to be species with fast-paced lives, such as predators. Human research has also suggested that this trait tends to decrease as we get older, and dips temporarily after intense exercise. However, it was not clear how much it varies between people of similar ages. One way of measuring this trait is to identify the point at which someone stops perceiving a flickering light to flicker, and sees it as a constant or still light instead. Clinton Haarlem, a PhD candidate at Trinity College Dublin, and his colleagues tested this in 80 men and women between the ages of 18 and 35, and found wide variability in the threshold at which this happened. The research, published in Plos One, found that some people reported a light source as constant when it was in fact flashing about 35 times a second, while others could still detect flashes at rates of greater than 60 times a second. © 2024 Guardian News & Media Limited
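The threshold measurement described here, finding the rate at which a flashing light stops looking like it flickers, can be sketched as a toy psychophysics procedure: a simulated observer with a hidden critical flicker fusion frequency, probed by a simple up-down staircase. The observer model and every parameter below are assumptions for illustration; the published study's actual protocol is not reproduced.

```python
import random

# Toy simulation of estimating a critical flicker fusion (CFF) threshold with a
# simple up-down staircase. All values are assumed for illustration only.
random.seed(1)

TRUE_CFF_HZ = 47.0    # hidden threshold of the simulated observer
NOISE_HZ = 2.0        # trial-to-trial wobble in the observer's judgement

def sees_flicker(rate_hz):
    """Simulated response: flicker is reported below the (noisy) threshold."""
    return rate_hz < TRUE_CFF_HZ + random.gauss(0.0, NOISE_HZ)

rate, step_hz = 30.0, 4.0          # start well below any plausible threshold
reversals, last_seen = [], None

while len(reversals) < 8:
    seen = sees_flicker(rate)
    if last_seen is not None and seen != last_seen:
        reversals.append(rate)
        step_hz = max(step_hz / 2.0, 0.5)     # take smaller steps after each reversal
    rate += step_hz if seen else -step_hz     # climb while flicker is visible
    last_seen = seen

estimate = sum(reversals[-6:]) / 6.0
print(f"Estimated flicker-fusion threshold: {estimate:.1f} Hz (true value: {TRUE_CFF_HZ} Hz)")
```

An up-down staircase like this is a standard way to home in on a perceptual threshold in relatively few trials; whether it matches the procedure the researchers actually used is not something the excerpt specifies.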
Keyword: Vision
Link ID: 29233 - Posted: 04.02.2024
By Viviane Callier Biologists have often wondered what would happen if they could rewind the tape of life’s history and let evolution play out all over again. Would lineages of organisms evolve in radically different ways if given that opportunity? Or would they tend to evolve the same kinds of eyes, wings, and other adaptive traits because their previous evolutionary histories had already sent them down certain developmental pathways? A new paper published in Science this February describes a rare and important test case for that question, which is fundamental to understanding how evolution and development interact. A team of researchers at the University of California, Santa Barbara happened upon it while studying the evolution of vision in an obscure group of mollusks called chitons. In that group of animals, the researchers discovered that two types of eyes—eyespots and shell eyes—each evolved twice independently. A given lineage could evolve one type of eye or the other, but never both. Intriguingly, the type of eye that a lineage had was determined by a seemingly unrelated older feature: the number of slits in the chiton’s shell armor. This represents a real-world example of “path-dependent evolution,” in which a lineage’s history irrevocably shapes its future evolutionary trajectory. Critical junctures in a lineage act like one-way doors, opening up some possibilities while closing off other options for good. “This is one of the first cases [where] we’ve actually been able to see path-dependent evolution,” said Rebecca Varney, a postdoctoral fellow in Todd Oakley’s lab at UCSB and the lead author of the new paper. Although path-dependent evolution has been observed in some bacteria grown in labs, “showing that in a natural system was a really exciting thing to be able to do.” © 2024 NautilusNext Inc.,
Keyword: Vision; Evolution
Link ID: 29203 - Posted: 03.21.2024