Links for Keyword: Vision

Links 1 - 20 of 909

Betsy Mason With virtual reality finally hitting the consumer market this year, VR headsets are bound to make their way onto a lot of holiday shopping lists. But new research suggests these gifts could also give some of their recipients motion sickness — especially if they’re women. In a test of people playing one virtual reality game using an Oculus Rift headset, more than half felt sick within 15 minutes, a team of scientists at the University of Minnesota in Minneapolis reports online December 3 in Experimental Brain Research. Among women, nearly four out of five felt sick. So-called VR sickness, also known as simulator sickness or cybersickness, has been recognized since the 1980s, when the U.S. military noticed that flight simulators were nauseating its pilots. In recent years, anecdotal reports began trickling in about the new generation of head-mounted virtual reality displays making people sick. Now, with VR making its way into people’s homes, there’s a steady stream of claims of VR sickness. “It's a high rate of people that you put in [VR headsets] that are going to experience some level of symptoms,” says Eric Muth, an experimental psychologist at Clemson University in South Carolina with expertise in motion sickness. “It’s going to mute the ‘Wheee!’ factor.” Oculus, which Facebook bought for $2 billion in 2014, released its Rift headset in March. The company declined to comment on the new research but says it has made progress in making the virtual reality experience comfortable for most people, and that developers are getting better at creating VR content. All approved games and apps get a comfort rating based on things like the type of movements involved, and Oculus recommends starting slow and taking breaks. But still some users report getting sick. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 22962 - Posted: 12.07.2016

By JOANNA KLEIN [Photo: A honey bee gathering pollen on a white flower. Dagmar Sporck/EyeEm, via Getty Images] Set your meetings, phone calls and emails aside, at least for the next several minutes. That’s because today you’re a bee. It’s time to leave your hive, or your underground burrow, and forage for pollen. Pollen is the stuff that flowers use to reproduce. But it’s also essential grub for you, other bees in your hive and your larvae. Once you’ve gathered pollen to take home, you or another bee will mix it with water and flower nectar that other bees have gathered and stored in the hive. But how do you decide which flowers to approach? What draws you in? In a review published last week in the journal Functional Ecology, researchers asked: What is a flower like from a bee’s perspective, and what does the pollinator experience as it gathers pollen? And that’s why we’re talking to you in the second person: to help you understand how bees like you, while hunting for pollen, use all of your senses — taste, touch, smell and more — to decide what to pick up and bring home. Maybe you’re ready to go find some pollen. But do you even know where to look? © 2016 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 5: The Sensorimotor System
Link ID: 22943 - Posted: 12.03.2016

Hannah Devlin Science Correspondent Blind animals have had their vision partially restored using a revolutionary DNA editing technique that scientists say could in future be applied to a range of devastating genetic diseases. The study is the first to demonstrate that a gene editing tool, called Crispr, can be used to replace faulty genes with working versions in the cells of adults - in this case adult rats. Previously, the powerful procedure, in which strands of DNA are snipped out and replaced, had been used only in dividing cells - such as those in an embryo - and scientists had struggled to apply it to non-dividing cells that make up most adult tissue, including the brain, heart, kidneys and liver. The latest advance paves the way for Crispr to be used to treat a range of incurable illnesses, such as muscular dystrophy, haemophilia and cystic fibrosis, by overwriting aberrant genes with a healthy working version. Professor Juan Carlos Izpisua Belmonte, who led the work at the Salk Institute in California, said: “For the first time, we can enter into cells that do not divide and modify the DNA at will. The possible applications of this discovery are vast.” The technique could be trialled in humans in as little as one or two years, he predicted, adding that the team were already working on developing therapies for muscular dystrophy. Crispr, a tool sometimes referred to as “molecular scissors”, has already been hailed as a game-changer in genetics because it allows scientists to cut precise sections of DNA and replace them with synthetic, healthy replacements. © 2016 Guardian News and Media Limited

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22882 - Posted: 11.17.2016

By Simon Oxenham Isy Suttie has felt “head squeezing” since she was young. The comedian, best known for playing Dobbie in the BBC sitcom Peep Show, is one of many people who experience autonomous sensory meridian response (ASMR) – a tingly feeling often elicited by certain videos or particular mundane interactions. Growing up, Suttie says she had always assumed everyone felt it too. Not everyone feels it, but Suttie is by no means alone. On Reddit, a community of more than 100,000 members share videos designed to elicit the pleasurable sensation. The videos, often described as “whisper porn”, typically consist of people role-playing routine tasks, whispering softly into a microphone or making noises by crinkling objects such as crisp packets. The most popular ASMR YouTuber, “Gentle Whispering”, has over 250 million views. To most of us, the videos might seem strange or boring, but the clips frequently garner hundreds of thousands of views. These videos often mimic real-life situations that provoke ASMR in susceptible people. Suttie says her strongest real-world triggers occur during innocuous interactions with strangers, like talking about the weather – “it’s almost as if the more superficial the subject the better,” Suttie says. She feels the sensation particularly strongly when someone brushes past her. For Suttie, the feelings are so powerful that she often feels floored by them, and they even overcome pain and emotional distress. During a trip to the dentist, she still experiences the pleasurable tingles when the assistant brushes past her, she says. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 7: Vision: From Eye to Brain
Link ID: 22843 - Posted: 11.08.2016

By Jessica Boddy Glasses may be trendy now, but for centuries they were the stodgy accessories of the elderly, worn only for failing eyes. Now, new research suggests that aging bonobos might also benefit from a pair of specs—not for reading, but for grooming. Many older bonobos groom their partners at arm’s length instead of just centimeters away, in the same way that older humans often hold newspapers farther out to read. This made researchers think the apes might also be losing their close-up vision as they age. To see whether their hypothesis held, the researchers took photos of 14 different bonobos of varying ages as they groomed one another (above) and measured the distance between their hands and faces. By analyzing how this so-called grooming distance varied from ape to ape, the researchers found that grooming distance increased exponentially with age, they report today in Current Biology. And because both humans and bonobos show signs of farsightedness around age 40, deterioration in human eyes might not be the mere result of staring at screens and small text, the scientists say. Rather, it might be a deep-rooted natural trait reaching back to a common ancestor. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 13: Memory, Learning, and Development
Link ID: 22841 - Posted: 11.08.2016

By peering into the eyes of mice and tracking their ocular movements, researchers made an unexpected discovery: the visual cortex — a region of the brain known to process sensory information — plays a key role in promoting the plasticity of innate, spontaneous eye movements. The study, published in Nature, was led by researchers at the University of California, San Diego (UCSD) and the University of California, San Francisco (UCSF) and funded by the National Eye Institute (NEI), part of the National Institutes of Health. “This study elegantly shows how analysis of eye movement sheds more light on brain plasticity — an ability that is at the core of the brain’s capacity to adapt and function. More specifically, it shows how the visual cortex continues to surprise and to awe,” said Houmam Araj, Ph.D., a program director at NEI. Without our being aware of it, our eyes are in constant motion. As we rotate our heads and as the world around us moves, two ocular reflexes kick in to offset this movement and stabilize images projected onto our retinas, the light-sensitive tissue at the back of our eyes. The optokinetic reflex causes eyes to drift horizontally from side-to-side — for example, as we watch the scenery through a window of a moving train. The vestibulo-ocular reflex adjusts our eye position to offset head movements. Both reflexes are crucial to survival. These mechanisms allow us to see traffic while driving down a bumpy road, or a hawk in flight to see a mouse scurrying for cover.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22750 - Posted: 10.13.2016

By Colin Barras Subtract 8 from 52. Did you see the calculation in your head? While a leading theory suggests our visual experiences are linked to our understanding of numbers, a study of people who have been blind from birth suggests the opposite. The link between vision and number processing is strong. Sighted people can estimate the number of people in a crowd just by looking, for instance, while children who can mentally rotate an object and correctly imagine how it might look from a different angle often develop better mathematical skills. “It’s actually hard to think of a situation when you might process numbers through any modality other than vision,” says Shipra Kanjlia at Johns Hopkins University in Baltimore, Maryland. But blind people can do maths too. To understand how they might compensate for their lack of visual experience, Kanjlia and her colleagues asked 36 volunteers – 17 of whom had been blind since birth – to do simple mental arithmetic inside an fMRI scanner. To level the playing field, the sighted participants wore blindfolds. We know that a region of the brain called the intraparietal sulcus (IPS) is involved in number processing in sighted people, and brain scans revealed that the same area is similarly active in blind people too. “It’s really surprising,” says Kanjlia. “It turns out brain activity is remarkably similar, at least in terms of classic number processing.” This may mean we have a deep understanding of how to handle numbers that is entirely independent of visual experience, suggesting we are all born with a natural understanding of numbers – an idea many researchers find difficult to accept. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Consciousness
Link ID: 22664 - Posted: 09.17.2016

Tina Hesman Saey Color vision may actually work like a colorized version of a black-and-white movie, a new study suggests. Cone cells, which sense red, green or blue light, detect white more often than colors, researchers report September 14 in Science Advances. The textbook-rewriting discovery could change scientists’ thinking about how color vision works. For decades, researchers have known that three types of cone cells in the retina are responsible for color vision. Those cone cells were thought to send “red,” “green” and “blue” signals to the brain. The brain supposedly combines the colors, much the way a color printer does, to create a rainbow-hued picture of the world (including black and white). But the new findings indicate that “the retina is doing more of the work, and it’s doing it in a more simpleminded way,” says Jay Neitz, a color vision scientist at the University of Washington in Seattle who was not involved in the study. Red and green cone cells each come in two types: one type signals “white”; another signals color, vision researcher Ramkumar Sabesan and colleagues at the University of California, Berkeley, discovered. The large number of cells that detect white (and black — the absence of white) create a high-resolution black-and-white picture of a person’s surroundings, picking out edges and fine details. Red- and green-signaling cells fill in low-resolution color information. The process works much like filling in a coloring book or adding color to a black-and-white film, says Sabesan, who is now at the University of Washington. © Society for Science & the Public 2000 - 2016
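
The division of labor described here, fine detail carried by a high-resolution "white" signal with color filled in at much lower resolution, is loosely analogous to chroma subsampling in image and video compression, which exploits the same property of human vision. Below is a minimal Python sketch of that analogy; the weights, the downsampling factor, and the random test image are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def simulate_luma_chroma_retina(rgb, chroma_factor=8):
    """Toy analogy for the scheme described above: keep luminance
    (the 'white' signal) at full resolution, sample color on a much
    coarser grid, then recombine. `rgb` is an HxWx3 float array in
    [0, 1]; `chroma_factor` is a hypothetical color downsampling factor."""
    # Luminance: a standard weighted sum of R, G, B (Rec. 601 weights).
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    # Chroma: color minus luminance, kept only on a coarse grid.
    chroma = rgb - luma[..., None]
    coarse = chroma[::chroma_factor, ::chroma_factor]
    # "Fill in" the coarse color by nearest-neighbor repetition.
    up = coarse.repeat(chroma_factor, axis=0).repeat(chroma_factor, axis=1)
    up = up[:rgb.shape[0], :rgb.shape[1]]  # trim to original size
    # Recombine: fine detail comes from luma, hue from the coarse layer.
    return np.clip(luma[..., None] + up, 0.0, 1.0)

# Example with a random stand-in image: even 8x-subsampled color usually
# looks "right", because edges and detail live almost entirely in luminance.
img = np.random.rand(128, 128, 3)
recombined = simulate_luma_chroma_retina(img)
```

JPEG and most video codecs rely on the same trick, storing color at a fraction of the luminance resolution with little visible loss.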

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22660 - Posted: 09.15.2016

By Rachel Becker Optical illusions have a way of breaking the internet, and the latest visual trick looks like it’s well on its way. On Sunday afternoon, game developer Will Kerslake tweeted a picture of intersecting gray lines on a white background. Twelve black dots blink in and out of existence where the gray lines meet. In the six hours since he posted the photo to Twitter, it’s been shared more than 6,000 times, with commenters demanding to know why they can’t see all 12 dots at the same time. The optical illusion was first posted to Facebook about a day ago by Japanese psychology professor Akiyoshi Kitaoka, and it has been shared more than 4,600 times so far. But the origin of this bit of visual trickery is a scientific paper published in the journal Perception in 2000. To be clear, there really are 12 black dots in the image. But (most) people can’t see all 12 dots at the same time, which is driving people nuts. "They think, 'It’s an existential crisis,'" says Derek Arnold, a vision scientist at the University of Queensland in Australia. "'How can I ever know what the truth is?'" But, he adds, scientists who study the visual system know that perception doesn’t always equal reality. In this optical illusion, the black dot in the center of your vision should always appear. But the black dots around it seem to appear and disappear. That’s because humans have pretty bad peripheral vision. If you focus on a word in the center of this line you’ll probably see it clearly. But if you try to read the words at either end without moving your eyes, they most likely look blurry. As a result, the brain has to make its best guess about what’s most likely to be going on in the fuzzy periphery — and fill in the mental image accordingly. © 2016 Vox Media, Inc.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22652 - Posted: 09.15.2016

Laura Sanders Despite its name, the newly identified GluMI cell (pronounced “gloomy”) is no downer. It’s a nerve cell, spied in a mouse retina, that looks like one type of cell but behaves like another. Like neighboring retina nerve cells that subdue, or deaden, activity of other nerve cells, GluMI cells have a single arm extending from their body. But unlike those cells, GluMI cells actually seem to ramp up activity of nearby cells in a way that could aid vision. GluMIs don’t seem to detect light firsthand, but they respond to it, Luca Della Santina of the University of Washington in Seattle and colleagues found. GluMIs are among a growing list of unexpected and mysterious cells found in the retinas of vertebrates, the researchers write August 8 in Current Biology. Citation: L. Della Santina et al. Glutamatergic monopolar interneurons provide a novel pathway of excitation in the mouse retina. Current Biology, Vol. 26, August 8, 2016. doi:10.1016/j.cub.2016.06.016. © Society for Science & the Public 2000 - 2016

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22610 - Posted: 08.30.2016

Amy McDermott You’ve got to see it to be it. A heightened sense of red color vision arose in ancient reptiles before bright red skin, scales and feathers, a new study suggests. The finding bolsters evidence that dinosaurs probably saw red and perhaps displayed red color. The new finding, published in the Aug. 17 Proceedings of the Royal Society B, rests on the discovery that birds and turtles share a gene used both for red vision and red coloration. More bird and turtle species use the gene, called CYP2J19, for vision than for coloration, however, suggesting that its first job was in sight. “We have this single gene that has two very different functions,” says evolutionary biologist Nicholas Mundy of the University of Cambridge. Mundy’s team wondered which function came first: the red vision or the ornamentation. In evolution, what an animal can see is often linked with what others can display, says paleontologist Martin Sander of the University of Bonn in Germany, who did not work on the new study. “We’re always getting at color from these two sides,” he says, because the point of seeing a strong color is often reading visual signals. Scientists already knew that birds use CYP2J19 for vision and color. In bird eyes, the gene contains instructions for making bright red oil droplets that filter red light. Other forms of red color vision evolved earlier in other animals, but this form allows birds to see more shades of red than humans can. Elsewhere in the body, the same gene can code for pigments that stain feathers red. Turtles are the only other land vertebrates with bright red oil droplets in their eyes. But scientists weren’t sure if the same gene was responsible, Mundy says. © Society for Science & the Public 2000 - 2016

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22535 - Posted: 08.10.2016

Davide Castelvecchi People can detect flashes of light as feeble as a single photon, an experiment has demonstrated — a finding that seems to conclude a 70-year quest to test the limits of human vision. The study, published in Nature Communications on 19 July, “finally answers a long-standing question about whether humans can see single photons — they can!” says Paul Kwiat, a quantum optics researcher at the University of Illinois at Urbana–Champaign. The techniques used in the study also open up ways of testing how quantum properties — such as the ability of photons to be in two places at the same time — affect biology, he adds. “The most amazing thing is that it’s not like seeing light. It’s almost a feeling, at the threshold of imagination,” says Alipasha Vaziri, a physicist at the Rockefeller University in New York City, who led the work and tried out the experience himself. Experiments on cells from frogs have shown that sensitive light-detecting cells in vertebrate eyes, called rod cells, do fire in response to single photons. But, in part because the retina processes its information to reduce ‘noise’ from false alarms, researchers hadn’t been able to confirm whether the firing of one rod cell would trigger a signal that would be transmitted all the way to the brain. Nor was it clear whether people would be able to consciously sense such a signal if it did reach the brain. Experiments to test the limits of human vision have also had to wait for the arrival of quantum-optics technologies that can reliably produce one photon of light at a time. © 2016 Macmillan Publishers Limited

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22461 - Posted: 07.20.2016

Jon Hamilton Letting mice watch Orson Welles movies may help scientists explain human consciousness. At least that's one premise of the Allen Brain Observatory, which launched Wednesday and lets anyone with an Internet connection study a mouse brain as it responds to visual information. "Think of it as a telescope, but a telescope that is looking at the brain," says Christof Koch, chief scientific officer of the Allen Institute for Brain Science, which created the observatory. The hope is that thousands of scientists and would-be scientists will look through that telescope and help solve one of the great mysteries of human consciousness, Koch says. "You look out at the world and there's a picture in your head," he says. "You see faces, you see your wife, you see something on TV." But how does the brain create those images from the chaotic stream of visual information it receives? "That's the mystery," Koch says. There's no easy way to study a person's brain as it makes sense of visual information. So the observatory has been gathering huge amounts of data on mice, which have a visual system that is very similar to the one found in people. The data come from mice that run on a wheel as still images and movies appear on a screen in front of them. For the mice, it's a lot like watching TV on a treadmill at the gym. But these mice have been genetically altered in a way that allows a computer to monitor the activity of about 18,000 neurons as they respond to different images. "We can look at those neurons and from that decode literally what goes through the mind of the mouse," Koch says. Those neurons were pretty active when the mice watched the first few minutes of Orson Welles' film noir classic Touch of Evil. The film is good for mouse experiments because "It's black and white and it has nice contrasts and it has a long shot without having many interruptions," Koch says. © 2016 npr

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 22438 - Posted: 07.14.2016

By Karen Weintraub Researchers at Stanford University have coaxed brain cells involved in vision to regrow and make functional connections—helping to upend the conventional dogma that mammalian brain cells, once damaged, can never be restored. The work was carried out in visually impaired mice but suggests that human maladies including glaucoma, Alzheimer’s disease and spinal cord injuries might be more repairable than has long been believed. Frogs, fish and chickens are known to regrow brain cells, and previous research has offered clues that it might be possible in mammals. The Stanford scientists say their new study confirms this and shows that, although fewer than 5 percent of the damaged retinal ganglion cells grew back, it was still enough to make a difference in the mice’s vision. “The brain is very good at coping with deprived inputs,” says Andrew Huberman, the Stanford neurobiologist who led the work. “The study also supports the idea that we may not need to regenerate every neuron in a system to get meaningful recovery.” Other researchers praised the study, published Monday in Nature Neuroscience. “I think it’s a significant step forward toward getting to the point where we really can regenerate optic nerves,” says Don Zack, a professor of ophthalmology at Johns Hopkins University who was not involved in the research. He calls it “one more indication that it may be possible to bring that ability back in humans.” © 2016 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22428 - Posted: 07.12.2016

By Shayla Love In 2005, astronaut John Phillips took a break from his work on the International Space Station and looked out the window at Earth. He was about halfway through a mission that had begun in April and would end in October. When he gazed down at the planet, the Earth was blurry. He couldn’t focus on it clearly. That was strange — his vision had always been 20/20. He wondered: Was his eyesight getting worse? “I’m not sure if I reported that to the ground,” he said. “I think I didn’t. I thought it would be something that would just go away, and fix itself when I got to Earth.” It didn’t go away. During Phillips’ post-flight physical, NASA found that his vision had gone from 20/20 to 20/100 in six months. [Photo: John Phillips began experiencing sight issues during his time on the International Space Station in 2005, but was reluctant to say anything while in space. (NASA)] Rigorous testing followed. Phillips got MRIs, retinal scans, neurological tests and a spinal tap. The tests showed that not only had his vision changed, but his eyes had changed as well. The backs of his eyes had gotten flatter, pushing his retinas forward. He had choroidal folds, which are like stretch marks. His optic nerves were inflamed. Phillips’ case became the first widely recognized one of a mysterious syndrome that affects 80 percent of astronauts on long-duration missions in space. The syndrome could interfere with plans for future crewed space missions, including any trips to Mars.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22422 - Posted: 07.11.2016

By Patrick Monahan Animals like cuttlefish and octopuses can rapidly change color to blend into the background and dazzle prospective mates. But there’s only one problem: As far as we know, they can’t see in color. Unlike our eyes, the eyes of cephalopods—cuttlefish, octopuses, and their relatives—contain just one kind of color-sensitive protein, apparently restricting them to a black and white view of the world. But a new study shows how they might make do. By rapidly focusing their eyes at different depths, cephalopods could be taking advantage of a lensing property called “chromatic blur.” Each color of light has a different wavelength—and because lenses bend some wavelengths more than others, one color of light shining through a lens can be in focus while another is still blurry. So with the right kind of eye, a quick sweep of focus would let the viewer figure out the actual color of an object based on when it blurs. The off-center pupils of many cephalopods—including the w-shaped pupils of cuttlefish (above)—make this blurring effect more extreme, according to a study published this week in the Proceedings of the National Academy of Sciences. In that study, scientists built a computer model of an octopus eye and showed that—for an object at least one body length away—it could determine the object’s color just by changing focus. Because this is all still theoretical, the next step is testing whether live cephalopods actually see color this way—and whether any other “colorblind” animals might, too. © 2016 American Association for the Advancement of Science.
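
The physics behind chromatic blur is simple to sketch: a lens bends short (blue) wavelengths more strongly than long (red) ones, so each color reaches focus at a slightly different image distance. The minimal Python sketch below combines the lensmaker's and thin-lens equations; the refractive indices and lens geometry are illustrative assumptions for demonstration, not the octopus-eye parameters from the PNAS model.

```python
# Sketch of chromatic blur: where each wavelength focuses behind a thin lens.
# The dispersion values below are illustrative assumptions, not taken from
# the octopus-eye model in the PNAS paper.

def focal_length(n, r1_mm=10.0, r2_mm=-10.0):
    """Thin-lens lensmaker's equation: 1/f = (n - 1)(1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

def image_distance(f_mm, object_mm):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

# Hypothetical refractive indices for a dispersive lens material:
# shorter (blue) wavelengths are bent more strongly than longer (red) ones.
indices = {"blue (450 nm)": 1.530, "green (550 nm)": 1.520, "red (650 nm)": 1.513}

for color, n in indices.items():
    f = focal_length(n)
    d_i = image_distance(f, object_mm=500.0)  # object half a metre away
    print(f"{color}: f = {f:.2f} mm, image plane at {d_i:.2f} mm")

# Blue focuses closest to the lens, red farthest. An eye sweeping its focus
# through this range sees each color snap into focus at a different setting,
# which in principle encodes color with only one photoreceptor type.
```

An off-center pupil exaggerates this wavelength-dependent defocus, which is why the study singles out the unusual pupil shapes of cephalopods.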

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22402 - Posted: 07.07.2016

By Jessica Hamzelou Imagine if each of the words in this article had its own taste, or the music you’re listening to played out as a visual scene in your mind. For synaesthetes – a small proportion of people whose senses intertwine – this is the stuff of the everyday. “Most people describe it as a gift,” says Jamie Ward, a neuroscientist at the University of Sussex in the UK. Now, he and his colleagues have found a new form of synaesthesia – one that moves beyond written language to sign language. It is the first time the phenomenon has been observed. “People with synaesthesia experience the ordinary world in extraordinary ways,” says Ward. In theory, any two senses can overlap. Some synaesthetes connect textures with words, while others can taste them. More commonly, written letters seem to have corresponding colours. An individual synaesthete may always associate the letter A with the colour pink, for instance. This type of synaesthesia has been found across many written languages, prompting Ward’s team to wonder if it can also apply to sign language. They recruited 50 volunteers with the type of synaesthesia that means they experience colours with letters, around half of whom were fluent in sign language too. All the participants watched a video of sign language and were asked if it triggered any colours. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 7: Vision: From Eye to Brain
Link ID: 22390 - Posted: 07.02.2016

When you walk into a room, your eyes process your surroundings immediately: refrigerator, sink, table, chairs. “This is the kitchen,” you realize. Your brain has taken data and come to a clear conclusion about the world around you, in an instant. But how does this actually happen? Elissa Aminoff, a research scientist in the Department of Psychology and the Center for the Neural Basis of Cognition at Carnegie Mellon University, shares her insights on what computer modeling can tell us about human vision and memory.

What do you do?
What interests me is how the brain and the mind understand our visual environment. The visual world is really rich with information, and it’s extremely complex. So we have to find ways to break visual data down. What specific parts of our [visual] world is the brain using to give us what we see? In order to answer that question, we’re collaborating with computer scientists and using computer vision algorithms. The goal is to compare these digital methods with the brain. Perhaps they can help us find out what types of data the brain is working with.

Does that mean that our brains function like a computer? That’s something you hear a lot about these days.
No, I wouldn’t say that. It’s that computers are giving us the closest thing that we have right now to an analogous mechanism. The brain is really, really complex. It deals with massive amounts of data. We need help in organizing these data and computers can do that. Right now, there are algorithms that can identify an object as a phone or as a mug, just like the brain. But are they doing the same thing? Probably not. © 2016 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22379 - Posted: 06.30.2016

By Aviva Rutkin Machine minds are often described as black boxes, their decision-making processes all but inscrutable. But in the case of machine intelligence, researchers are cracking that black box open and peering inside. What they find is that humans and machines don’t pay attention to the same things when they look at pictures – not at all. Researchers at Facebook and Virginia Tech in Blacksburg got humans and machines to look at pictures and answer simple questions – a task that neural-network-based artificial intelligence can handle. But the researchers weren’t interested in the answers. They wanted to map human and AI attention, in order to shed a little light on the differences between us and them. “These attention maps are something we can measure in both humans and machines, which is pretty rare,” says Lawrence Zitnick at Facebook AI Research. Comparing the two could provide insight “into whether computers are looking in the right place”. First, Zitnick and his colleagues asked human workers on Amazon Mechanical Turk to answer simple questions about a set of pictures, such as “What is the man doing?” or “What number of cats are lying on the bed?” Each picture was blurred, and the worker would have to click around to sharpen it. A map of those clicks served as a guide to what part of the picture they were paying attention to. © Copyright Reed Business Information Ltd.
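
One straightforward way to quantify "whether computers are looking in the right place" is to turn the human clicks into a coarse attention grid and correlate it with the machine's attention over the same grid. The Python sketch below is a hypothetical illustration: rank correlation is a common metric for comparing attention maps, and none of the function names or data here come from the study itself.

```python
import numpy as np
from scipy.stats import spearmanr

def attention_map_from_clicks(clicks, height, width, grid=16):
    """Turn a list of (row, col) click coordinates into a coarse
    attention map: a grid of click counts, normalized to sum to 1."""
    counts = np.zeros((grid, grid))
    for r, c in clicks:
        counts[int(r * grid / height), int(c * grid / width)] += 1
    return counts / counts.sum()

def attention_similarity(map_a, map_b):
    """Spearman rank correlation between two flattened attention maps."""
    rho, _ = spearmanr(map_a.ravel(), map_b.ravel())
    return rho

# Hypothetical example: human sharpening clicks on a 480x640 image vs. a
# machine attention map over the same 16x16 grid (random stand-in data).
human_clicks = [(np.random.randint(480), np.random.randint(640)) for _ in range(200)]
human_map = attention_map_from_clicks(human_clicks, 480, 640)
machine_map = np.random.rand(16, 16)
machine_map /= machine_map.sum()

print(f"human-machine attention correlation: {attention_similarity(human_map, machine_map):.3f}")
```

A correlation near 1 would mean human and machine attention land in the same places; the study's point is that, for current models, it does not.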

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22378 - Posted: 06.30.2016

Worldwide voting for the BEST ILLUSION OF THE YEAR will take place online from 4pm EST on June 29th to 4pm EST on June 30th. The winning illusions will receive a $3,000 award for 1st place, a $2,000 award for 2nd place, and a $1,000 award for 3rd place. Anybody with an internet connection (that means YOU!) can vote to pick the Top 3 Winners from the current Top 10 List! The Best Illusion of the Year Contest is a celebration of the ingenuity and creativity of the world’s premier illusion research community. Contestants from all around the world submitted novel illusions (unpublished, or published no earlier than 2015), and an international panel of judges rated them and narrowed them to the TOP TEN.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 22375 - Posted: 06.29.2016