Links for Keyword: Robotics



Links 21 - 40 of 268

By Robert Martone We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner. This study opens the door to extraordinary new means of human collaboration while, at the same time, blurring fundamental notions about individual identity and autonomy in disconcerting ways. Direct brain-to-brain communication has been a subject of intense interest for many years, driven by motives as diverse as futurist enthusiasm and military exigency. In his book Beyond Boundaries one of the leaders in the field, Miguel Nicolelis, described the merging of human brain activity as the future of humanity, the next stage in our species’ evolution. (Nicolelis serves on Scientific American’s board of advisers.) He has already conducted a study in which he linked together the brains of several rats using complex implanted electrodes known as brain-to-brain interfaces. Nicolelis and his co-authors described this achievement as the first “organic computer” with living brains tethered together as if they were so many microprocessors. The animals in this network learned to synchronize the electrical activity of their nerve cells to the same extent as those in a single brain. The networked brains were tested for things such as their ability to discriminate between two different patterns of electrical stimuli, and they routinely outperformed individual animals. © 2019 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Language and Lateralization
Link ID: 26770 - Posted: 10.30.2019

By Kelly Servick CHICAGO, ILLINOIS—By harnessing the power of imagination, researchers have nearly doubled the speed at which completely paralyzed patients may be able to communicate with the outside world. People who are “locked in”—fully paralyzed by stroke or neurological disease—have trouble trying to communicate even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that’s still about three times slower than natural handwriting. In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters (above). Eventually, the computer could read out the volunteer’s imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience. The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements. © 2019 American Association for the Advancement of Science.
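
The pipeline described above can be illustrated with a minimal sketch. This is not the team's actual system (they trained a neural network on the volunteer's recordings); here a random linear map stands in for the trained decoder, and every array name, shape, and value is an invented placeholder. The point is only the general idea: map neural activity to pen-tip velocity, then integrate the velocities into a letter trajectory.

```python
# Minimal sketch, not the published decoder: a linear map (standing in for a
# trained model) converts binned firing rates into pen-tip velocity, and the
# velocities are integrated into the trajectory of the imagined letter.
# All shapes and values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_channels = 200, 96                       # assumed recording dimensions
firing_rates = rng.poisson(5, size=(n_bins, n_channels)).astype(float)

W = rng.normal(scale=0.01, size=(n_channels, 2))   # would be fit on training trials

velocities = firing_rates @ W                      # x/y pen-tip velocity per bin
trajectory = np.cumsum(velocities, axis=0)         # integrate velocity into a path

print(trajectory.shape)                            # (200, 2); the path would then be
                                                   # matched against letter shapes
```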

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 26745 - Posted: 10.24.2019

By James Gallagher, health and science correspondent. A man has been able to move all four of his paralysed limbs with a mind-controlled exoskeleton suit, French researchers report. Thibault, 30, said taking his first steps in the suit felt like being the "first man on the Moon". His movements, particularly walking, are far from perfect and the robo-suit is being used only in the lab. But researchers say the approach could one day improve patients' quality of life. And he can control each of the arms, manoeuvring them in three-dimensional space. How easy was it to use? Thibault, who does not want his surname revealed, was an optician before he fell 15m in an incident at a night club four years ago. The injury to his spinal cord left him paralysed and he spent the next two years in hospital. But in 2017, he took part in the exoskeleton trial with Clinatec and the University of Grenoble. Initially he practised using the brain implants to control a virtual character, or avatar, in a computer game, then he moved on to walking in the suit. "It was like [being the] first man on the Moon. I didn't walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room," he said. It took a lot longer to learn how to control the arms. © 2019 BBC.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26670 - Posted: 10.04.2019

Cassandra Willyard Rob Summers was flat on his back at a rehabilitation institute in Kentucky when he realized he could wiggle his big toe. Up, down, up, down. This was new — something he hadn’t been able to do since a hit-and-run driver left him paralysed from the chest down. When that happened four years earlier, doctors had told him that he would never move his lower body again. Now he was part of a pioneering experiment to test the power of electrical stimulation in people with spinal-cord injuries. “Susie, look, I can wiggle my toe,” Summers said. Susan Harkema, a neurophysiologist at the University of Louisville in Kentucky, sat nearby, absorbed in the data on her computer. She was incredulous. Summers’s toe might be moving, but he was not in control. Of that she was sure. Still, she decided to humour him. She asked him to close his eyes and move his right toe up, then down, and then up. She moved on to the left toe. He performed perfectly. “Holy shit,” Harkema said. She was paying attention now. “How is that happening?” he asked. “I have no idea,” she replied. Summers had been a university baseball player with major-league ambitions before the vehicle that struck him snapped all the ligaments and tendons in his neck, allowing one of his vertebrae to pound the delicate nerve tissue it was meant to protect. Doctors classified the injury as complete; the motor connections to his legs had been wiped out. When Harkema and her colleagues implanted a strip of tiny electrodes in his spine in 2009, they weren’t trying to restore Summers’s ability to move on his own. Instead, the researchers were hoping to demonstrate that the spine contains all the circuitry necessary for the body to stand and to step. They reasoned that such an approach might allow people with spinal-cord injuries to stand and walk, using electrical stimulation to replace the signals that once came from the brain.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26471 - Posted: 07.31.2019

By Dom Vukovic Robotic skeletons may sound like something out of a science fiction movie but they are now being used to help people with severe spinal cord injuries take their first steps. The device, known as a Rex bionic exoskeleton, is one of only a few in the country, and researchers in a trial have named their prototype HELLEN. In a joint initiative between the University of Newcastle and the Australian Institute of Neuro-Rehabilitation, the robot is being used as a therapy device to see if it can help improve health and mobility outcomes in people with conditions including stroke, multiple sclerosis and now quadriplegia. Chief investigator Jodie Marquez said the trial was one of the first in the world to capture data about physiological and neurological changes that might occur in patients who undergo therapy while wearing the robotic suit. “We're seeing whether exercising in the exoskeleton device can improve both real measures of strength and spasticity, but also bigger measures such as mood and quality of life and function,” Dr Marquez said. “I have no doubt that robotics will become a part of rehabilitation and a part of our lives in the future, I think that's unquestionable.” Lifesaver Jess Collins is the first person with severe spinal injuries to participate in the trial. She had a near-fatal surfing accident while on holidays with friends in May last year, leaving her paralysed from the chest down. “I've hit the board and then the sandbank and then instantly I didn't have any movement or feeling and I wasn't sure where I was placed in the water … I was face down, which was horrific and I was conscious the entire time,” she said. © 2019 ABC

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26460 - Posted: 07.29.2019

By: Karen Moxon, Ph.D., Ignacio Saez, Ph.D., and Jochen Ditterich, Ph.D. Technology that is sparking an entirely new field of neuroscience will soon let us simply think about something we want our computers to do and watch it instantaneously happen. In fact, some patients with severe neurological injury or disease are already reaping the benefits of initial advances by using their thoughts to signal and control robotic limbs. This brain-computer interface (BCI) idea is spawning a new area of neuroscience called cognitive neuroengineering that holds the promise of improving the quality of life for everyone on the planet in unimaginable ways. But the technology is not yet ready for prime time. There are three basic aspects of BCIs—recording, decoding, and operation—and progress will require refining all three. A BCI works because brain activity generates a signal—typically an electrical field—that can be recorded through a dedicated device, which feeds it to a computer whose analysis software (i.e., a decoding algorithm) “translates” the signal to a simple command. This command signal operates a computer or other machine. The resulting operation can be as simple as moving a cursor on a screen, for which the command need contain just X and Y coordinates, or as complex as controlling a robotic arm, which requires information about position, orientation, speed, rotation, and more. Recent work from the University of Pittsburgh has shown that subjects with amyotrophic lateral sclerosis (ALS) can control a complex robot arm—having it pick up a pitcher and pour water into a glass—just by thinking about it. The downside is that it is necessary to surgically implant recording microelectrodes into the brain and that, most importantly, such electrodes are not reliable for more than a few years. © 2019 The Dana Foundation.
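
The three stages named here chain together in a straightforward way. The sketch below is schematic only: the function names, channel counts, and the fixed projection used for "decoding" are placeholders, not any real device's interface, and a real decoder would be a trained model.

```python
# Schematic sketch of the three BCI stages (recording, decoding, operation).
# Everything here is a stand-in; no real hardware or library API is implied.
import numpy as np

def record_signal(n_channels=96, n_samples=1000):
    """Recording stage: one voltage trace per electrode channel (simulated)."""
    return np.random.randn(n_channels, n_samples)

def decode(signal):
    """Decoding stage: reduce the signal to a simple 2-D command (cursor X, Y).
    A real decoding algorithm is a trained model, not a fixed projection."""
    features = signal.mean(axis=1)        # one summary feature per channel
    return float(features[:48].sum()), float(features[48:].sum())

def operate(command):
    """Operation stage: apply the command to a device, here a simulated cursor."""
    dx, dy = command
    print(f"move cursor by ({dx:+.2f}, {dy:+.2f})")

operate(decode(record_signal()))
```

As the paragraph notes, a cursor needs only two numbers, while a robotic arm would need a much richer command (position, orientation, speed, rotation), which is one reason decoding remains the hard part.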

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26306 - Posted: 06.06.2019

Sandeep Ravindran In 2012, computer scientist Dharmendra Modha used a powerful supercomputer to simulate the activity of more than 500 billion neurons—more, even, than the 85 billion or so neurons in the human brain. It was the culmination of almost a decade of work, as Modha progressed from simulating the brains of rodents and cats to something on the scale of humans. The simulation consumed enormous computational resources—1.5 million processors and 1.5 petabytes (1.5 million gigabytes) of memory—and was still agonizingly slow, 1,500 times slower than the brain computes. Modha estimates that to run it in biological real time would have required 12 gigawatts of energy, about six times the maximum output capacity of the Hoover Dam. “And yet, it was just a cartoon of what the brain does,” says Modha, chief scientist for brain-inspired computing at IBM Almaden Research Center in northern California. The simulation came nowhere close to replicating the functionality of the human brain, which uses about the same amount of power as a 20-watt lightbulb. Since the early 2000s, improved hardware and advances in experimental and theoretical neuroscience have enabled researchers to create ever larger and more-detailed models of the brain. But the more complex these simulations get, the more they run into the limitations of conventional computer hardware, as illustrated by Modha’s power-hungry model. © 1986–2019 The Scientist
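
The scale of the mismatch is easy to verify from the figures quoted above. The quick check below uses the article's 12-gigawatt estimate, the roughly 20-watt brain figure it cites, and an assumed maximum output of about 2 gigawatts for the Hoover Dam to confirm the "six times" comparison.

```python
# Back-of-envelope check of the power figures quoted in the paragraph above.
sim_power_w   = 12e9   # article's estimate for running the simulation in real time
brain_power_w = 20.0   # the roughly 20-watt human brain it is compared with
hoover_dam_w  = 2e9    # assumed Hoover Dam maximum output, about 2 gigawatts

print(sim_power_w / brain_power_w)  # ~6e8: about 600 million brains' worth of power
print(sim_power_w / hoover_dam_w)   # ~6: consistent with "six Hoover Dams"
```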

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 20: ; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26269 - Posted: 05.28.2019

Siobhan Roberts In May 2013, the mathematician Carina Curto attended a workshop in Arlington, Virginia, on “Physical and Mathematical Principles of Brain Structure and Function” — a brainstorming session about the brain, essentially. The month before, President Obama had issued one of his “Grand Challenges” to the scientific community in announcing the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), aimed at spurring a long-overdue revolution in understanding our three-pound organ upstairs. In advance of the workshop, the hundred or so attendees each contributed to a white paper addressing the question of what they felt was the most significant obstacle to progress in brain science. Answers ran the gamut — some probed more generally, citing the brain’s “utter complexity,” while others delved into details about the experimental technology. Curto, an associate professor at Pennsylvania State University, took a different approach in her entry, offering an overview of the mathematical and theoretical technology: A major obstacle impeding progress in brain science is the lack of beautiful models. Let me explain. … Many will agree that the existing (and impending) deluge of data in neuroscience needs to be accompanied by advances in computational and theoretical approaches — for how else are we to “make sense” of these data? What such advances should look like, however, is very much up to debate. … How much detail should we be including in our models? … How well can we defend the biological realism of our theories? All Rights Reserved © 2018

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 25108 - Posted: 06.20.2018

By Robert F. Service Prosthetics may soon take on a whole new feel. That’s because researchers have created a new type of artificial nerve that can sense touch, process information, and communicate with other nerves much like those in our own bodies do. Future versions could add sensors to track changes in texture, position, and different types of pressure, leading to potentially dramatic improvements in how people with artificial limbs—and someday robots—sense and interact with their environments. “It’s a pretty nice advance,” says Robert Shepherd, an organic electronics expert at Cornell University. Not only are the soft, flexible, organic materials used to make the artificial nerve ideal for integrating with pliable human tissue, but they are also relatively cheap to manufacture in large arrays, Shepherd says. Modern prosthetics are already impressive: Some allow amputees to control arm movement with just their thoughts; others have pressure sensors in the fingertips that help wearers control their grip without the need to constantly monitor progress with their eyes. But our natural sense of touch is far more complex, integrating thousands of sensors that track different types of pressure, such as soft and forceful touch, along with the ability to sense heat and changes in position. This vast amount of information is ferried by a network that passes signals through local clusters of nerves to the spinal cord and ultimately the brain. Only when the signals combine to become strong enough do they make it up the next link in the chain. © 2018 American Association for the Advancement of Science.
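
The relay behaviour described in the last two sentences can be caricatured in a few lines: several local sensors feed a cluster, and only a combined signal strong enough to cross a threshold is passed up the chain. This models the signal flow only, not the organic electronic circuit the researchers actually built; the sensor values and threshold are invented.

```python
# Toy model of threshold-gated relaying in a touch-sensing network: a local
# cluster sums its sensor inputs and forwards them only if the total is strong
# enough. Values and threshold are arbitrary illustrations.
def cluster_output(sensor_pressures, threshold=1.0):
    """Sum a local cluster's inputs; relay the total only if it crosses threshold."""
    total = sum(sensor_pressures)
    return total if total >= threshold else 0.0

light_touch = [0.1, 0.2, 0.1]   # weak, diffuse contact: stays local
firm_grip   = [0.6, 0.7, 0.5]   # strong combined signal: passed upstream

print(cluster_output(light_touch))  # 0.0 (below threshold, not relayed)
print(cluster_output(firm_grip))    # 1.8 (relayed toward the next link in the chain)
```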

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 25048 - Posted: 06.01.2018

By Matthew Hutson As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing? Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated that we might expect an intelligent machine to suffer some of the same mental problems people do. Q: Why do you think AIs might get depressed and hallucinate? A: I’m drawing on the field of computational psychiatry, which assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients? Q: Might the mechanism be the same as it is in humans? A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong. © 2018 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 24843 - Posted: 04.10.2018

BCIs have deep roots. In the 18th century Luigi Galvani discovered the role of electricity in nerve activity when he found that applying voltage could cause a dead frog’s legs to twitch. In the 1920s Hans Berger used electroencephalography to record human brain waves. In the 1960s José Delgado theatrically used a brain implant to stop a charging bull in its tracks. One of the field’s father figures is still hard at work in the lab. Eberhard Fetz was a post-doctoral researcher at the University of Washington in Seattle when he decided to test whether a monkey could control the needle of a meter using only its mind. A paper based on that research, published in 1969, showed that it could. Dr Fetz tracked down the movement of the needle to the firing rate of a single neuron in the monkey’s brain. The animal learned to control the activity of that single cell within two minutes, and was also able to switch to control a different neuron. Dr Fetz disclaims any great insights in setting up the experiment. “I was just curious, and did not make the association with potential uses of robotic arms or the like,” he says. But the effect of his paper was profound. It showed both that volitional control of a BCI was possible, and that the brain was capable of learning how to operate one without any help. Some 48 years later, Dr Fetz is still at the University of Washington, still fizzing with energy and still enthralled by the brain’s plasticity. He is particularly interested in the possibility of artificially strengthening connections between cells, and perhaps forging entirely new ones.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 24839 - Posted: 04.09.2018

Sara Reardon Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. That achievement, described in Science Advances on 26 January, is a key benchmark in the development of advanced computing devices designed to mimic biological systems. And it could open the door to more natural machine-learning software, although many hurdles remain before it could be used commercially. Artificial intelligence software has increasingly begun to imitate the brain. Algorithms such as Google’s automatic image-classification and language-learning programs use networks of artificial neurons to perform complex tasks. But because conventional computer hardware was not designed to run brain-like algorithms, these machine-learning tasks require orders of magnitude more computing power than the human brain does. “There must be a better way to do this, because nature has figured out a better way to do this,” says Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study. NIST is one of a handful of groups trying to develop ‘neuromorphic’ hardware that mimics the human brain in the hope that it will run brain-like software more efficiently. In conventional electronic systems, transistors process information at regular intervals and in precise amounts — either 1 or 0 bits. But neuromorphic devices can accumulate small amounts of information from multiple sources, alter it to produce a different type of signal and fire a burst of electricity only when needed — just as biological neurons do. As a result, neuromorphic devices require less energy to run. © 2018 Macmillan Publishers Limited
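
The accumulate-and-fire behaviour described above is usually modelled in software as a leaky integrate-and-fire neuron. The sketch below is that standard textbook model with made-up constants; the superconducting devices in the study implement the same idea in hardware rather than in a Python loop.

```python
# Leaky integrate-and-fire neuron: accumulate small inputs, let them decay, and
# emit a spike only when the running total crosses a threshold. Constants are
# arbitrary; this is the software caricature of the hardware behaviour above.
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random(100) * 0.3           # small packets of input arriving over time

potential, leak, threshold = 0.0, 0.95, 1.0
spike_times = []
for t, x in enumerate(inputs):
    potential = potential * leak + x     # accumulate input, with a slow leak
    if potential >= threshold:           # fire a burst only when needed
        spike_times.append(t)
        potential = 0.0                  # reset after the spike

print(spike_times)                       # sparse, event-driven output
```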

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 24579 - Posted: 01.27.2018

Jules Montague Steve Thomas and I are talking about brain implants. Bonnie Tyler’s Holding Out For a Hero is playing in the background and for a moment I almost forget that a disease has robbed Steve of his speech. The conversation breaks briefly; now I see his wheelchair, his ventilator, his hospital bed. Steve, a software engineer, was diagnosed with ALS (amyotrophic lateral sclerosis, a type of motor neurone disease) aged 50. He knew it was progressive and incurable; that he would soon become unable to move and, in his case, speak. He is using eye-gaze technology to tell me this (and later to turn off the sound of Bonnie Tyler); cameras pick up light reflection from his eye as he scans a screen. Movements of his pupils are translated into movements of a cursor through infrared technology and the cursor chooses letters or symbols. A speech-generating device transforms these written words into spoken ones – and, in turn, sentences and stories form. Eye-gaze devices allow some people with limited speech or hand movements to communicate, use environmental controls, compose music, and paint. That includes patients with ALS (up to 80% have communication difficulties), cerebral palsy, strokes, multiple sclerosis and spinal cord injuries. It’s a far cry from Elle editor-in-chief Jean-Dominique Bauby, locked-in by a stroke in 1995, painstakingly blinking through letters on an alphabet board. His memoir, written at one word every two minutes, later became a film, The Diving Bell and the Butterfly. Although some still use low-tech options (not everyone can meet the physical or cognitive requirements for eye-gaze systems; occasionally, locked-in patients can blink but cannot move their eyes), speech-to-text and text-to-speech functionality on smartphones and tablets has revolutionised communication. © 2017 Guardian News and Media Limited
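
The pupil-to-cursor translation mentioned here rests on a calibration step: the system records pupil positions while the user looks at known on-screen targets, fits a mapping, and then applies it to new gaze samples. The sketch below is a bare-bones least-squares version of that idea with invented numbers; commercial eye-gaze systems use more elaborate models and infrared corneal reflections.

```python
# Bare-bones gaze calibration: fit an affine map from measured pupil coordinates
# to known screen targets, then apply it to a new gaze sample. All numbers are
# invented; real eye trackers are considerably more sophisticated.
import numpy as np

pupil_xy  = np.array([[0.1, 0.2], [0.8, 0.2], [0.1, 0.9], [0.8, 0.9]])   # measured
screen_xy = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080]], dtype=float)

A = np.hstack([pupil_xy, np.ones((4, 1))])          # add a constant column
M, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # least-squares affine fit

new_sample = np.array([0.45, 0.55, 1.0])            # a fresh pupil measurement
print(new_sample @ M)                               # estimated cursor position
```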

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 24200 - Posted: 10.16.2017

By Andrew Wagner Although it’s a far cry from the exosuits of science fiction, researchers have developed a robotic exoskeleton that can help stroke victims regain use of their legs. Nine out of 10 stroke patients are afflicted with partial paralysis, leaving some with an abnormal gait. The exosuit works by pulling cords attached to a shoe insole, providing torque to the ankle and correcting the abnormal walking motion. With the suit providing assistance to their joints, the stroke victims are able to maintain their balance, and walk similarly to the way they had prior to their paralysis, the team reports today in Science Translational Medicine. The exosuit is an adaptation of a previous design developed for the Defense Advanced Research Projects Agency Warrior Web program, a Department of Defense plan to develop assistive exosuits for military applications. Although similar mechanical devices have been built in the past to assist in gait therapy, these were bulky and had to be kept tethered to a power source. This new suit is light enough that with a decent battery, it could be used to help patients walk over terrain as well, not just on a treadmill. The researchers say that although the technology needs long-term testing, it could start to decrease the time it takes for stroke patients to recover in the near future. © 2017 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23881 - Posted: 07.27.2017

By Sam Wong People who have had amputations can control a virtual avatar using their imagination alone, thanks to a system that uses a brain scanner. Brain-computer interfaces, which translate neuron activity into computer signals, have been advancing rapidly, raising hopes that such technology can help people overcome disabilities such as paralysis or lost limbs. But it has been unclear how well this might work for people who have had limbs removed some time ago, as the brain areas that previously controlled these may become less active or repurposed for other uses over time. Ori Cohen at IDC Herzliya, in Israel, and colleagues have developed a system that uses an fMRI brain scanner to read the brain signals associated with imagining a movement. To see if it can work a while after someone has had a limb removed, they recruited three volunteers who had had an arm removed between 18 months and two years earlier, and four people who have not had an amputation. While lying in the fMRI scanner, the volunteers were shown an avatar on a screen with a path ahead of it, and instructed to move the avatar along this path by imagining moving their feet to move forward, or their hands to turn left or right. The people who had had arm amputations were able to do this just as well with their missing hand as they were with their intact hand. Their overall performance on the task was almost as good as that of the people who had not had an amputation. © Copyright New Scientist Ltd.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23770 - Posted: 06.24.2017

By Matthew Hutson Artificial neural networks, computer algorithms that take inspiration from the human brain, have demonstrated fancy feats such as detecting lies, recognizing faces, and predicting heart attacks. But most computers can’t run them efficiently. Now, a team of engineers has designed a computer chip that uses beams of light to mimic neurons. Such “optical neural networks” could make any application of so-called deep learning—from virtual assistants to language translators—many times faster and more efficient. “It works brilliantly,” says Daniel Brunner, a physicist at the FEMTO-ST Institute in Besançon, France, who was not involved in the work. “But I think the really interesting things are yet to come.” Most computers work by using a series of transistors, gates that allow electricity to pass or not pass. But decades ago, physicists realized that light might make certain processes more efficient—for example, building neural networks. That’s because light waves can travel and interact in parallel, allowing them to perform lots of functions simultaneously. Scientists have used optical equipment to build simple neural nets, but these setups required tabletops full of sensitive mirrors and lenses. For years, photonic processing was dismissed as impractical. Now, researchers at the Massachusetts Institute of Technology (MIT) in Cambridge have managed to condense much of that equipment to a microchip just a few millimeters across. © 2017 American Association for the Advancement of Science
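
The reason light helps is that the workhorse operation inside each neural-network layer is a matrix-vector multiplication, which a photonic mesh can carry out in effectively one pass as light propagates through it. The snippet below shows that same operation in ordinary numpy with arbitrary weights; it illustrates the computation being accelerated, not the MIT chip itself.

```python
# The multiply-accumulate step that photonic hardware parallelises, shown in
# plain numpy with arbitrary weights. This illustrates the operation, not the chip.
import numpy as np

x = np.array([0.2, 0.9, 0.4, 0.7])   # input to one layer (e.g. image features)
W = np.random.randn(3, 4)            # that layer's learned weights

z = W @ x                            # the matrix-vector multiply done optically
a = np.maximum(z, 0.0)               # nonlinearity, typically still electronic
print(a)
```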

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 23758 - Posted: 06.21.2017

By Edd Gent There’s been a lot of hype coming out of Silicon Valley about technology that can meld the human brain with machines. But how will this help society, and which companies are leading the charge? Elon Musk, chief executive of Tesla and SpaceX, made waves in March when he announced his latest venture, Neuralink, which would design what are called brain-computer interfaces. Initially, BCIs would be used for medical research, but the ultimate goal would be to prevent humans from becoming obsolete by enabling people to merge with artificial intelligence. Musk is not the only one who’s trying to bring humans closer to machines. Here are five organizations working hard on hacking the brain. According to Musk, the main barrier to human-machine cooperation is communication bandwidth. Because using a touch screen or a keyboard is a slow way to communicate with a computer, Musk’s new venture aims to create a “high-bandwidth” link between the brain and machines. What that system would look like is not entirely clear. Words such as “neural lace” and “neural dust” have been bandied about, but all that has really been revealed is a business model. Neuralink has been registered as a medical research company, and Musk said the firm will produce a product to help people with severe brain injuries within four years. This will lay the groundwork for developing BCIs for healthy people, enabling them to communicate by “consensual telepathy,” possibly within five years, Musk said. Some scientists, particularly those in neuroscience, are skeptical of Musk’s ambitious plans. © 1996-2017 The Washington Post

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23733 - Posted: 06.12.2017

Sarah Boseley, health editor. A man who was paralysed from below the neck after crashing his bike into a truck can once again drink a cup of coffee and eat mashed potato with a fork, after a world-first procedure to allow him to control his hand with the power of thought. Bill Kochevar, 53, has had electrical implants in the motor cortex of his brain and sensors inserted in his forearm, which allow the muscles of his arm and hand to be stimulated in response to signals from his brain, decoded by computer. After eight years, he is able to drink and feed himself without assistance. “I think about what I want to do and the system does it for me,” Kochevar told the Guardian. “It’s not a lot of thinking about it. When I want to do something, my brain does what it does.” The experimental technology, pioneered by Case Western Reserve University in Cleveland, Ohio, is the first in the world to restore brain-controlled reaching and grasping in a person with complete paralysis. For now, the process is relatively slow, but the scientists behind the breakthrough say this is proof of concept and that they hope to streamline the technology until it becomes a routine treatment for people with paralysis. In the future, they say, it will also be wireless and the electrical arrays and sensors will all be implanted under the skin and invisible.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23423 - Posted: 03.29.2017

By Jackie Snow Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to "look" at neural networks in action and see how they draw conclusions. Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren't hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it's unclear what the algorithm is using from those data to come to its conclusions. “Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at Fraunhofer Institute for Telecommunications at the Heinrich Hertz Institute in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.” © 2017 American Association for the Advancement of Science.
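
The layered picture sketched here (pixels feeding into edge-like features, features feeding into a decision) is easy to make concrete, and the intermediate activations in such a model are exactly what the new inspection methods try to explain. The sketch below uses random, untrained weights and a made-up 8x8 "image"; it shows where those hidden representations live, nothing more.

```python
# Tiny two-layer network with random weights: the hidden activations are the
# intermediate representation (pixels -> edge-like features -> class scores)
# that interpretability methods examine. Nothing here is trained or real data.
import numpy as np

rng = np.random.default_rng(2)
pixels = rng.random(64)                  # a flattened 8x8 "image"

W1 = rng.normal(size=(16, 64))           # layer 1: pixels -> hidden features
W2 = rng.normal(size=(2, 16))            # layer 2: hidden features -> two class scores

hidden = np.maximum(W1 @ pixels, 0.0)    # the hidden-layer activations
scores = W2 @ hidden                     # e.g. "cat" vs "dog" scores

print(hidden[:5])                        # inspecting activations like these is the
print(scores)                            # kind of look inside the article describes
```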

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 23329 - Posted: 03.08.2017

Sometimes the biggest gifts arrive in the most surprising ways. A couple in Singapore, Tianqiao Chen and Chrissy Luo, were watching the news and saw a Caltech scientist help a quadriplegic use his thoughts to control a robotic arm so that — for the first time in more than 10 years — he could sip a drink unaided. Inspired, Chen and Luo flew to Pasadena to meet the scientist, Richard Andersen, in person. Now they’ve given Caltech $115 million to shake up the way scientists study the brain in a new research complex. Construction of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech will begin as early as 2018 and bring together biology, engineering, chemistry, physics, computer science and the social sciences to tackle brain function in an integrated, comprehensive way, university officials announced Tuesday. The goal of connecting these traditionally separate departments is to make “transformational advances” that will lead to new scientific tools and medical treatments, the university said. Research in shared labs will include looking more deeply into fundamentals of the brain and exploring the complexities of sensation, perception, cognition and human behavior. Neuroscience research has advanced greatly in recent years, Caltech President Thomas Rosenbaum said. The field now has the tools to look at individual neurons, for example, as well as the computer power to analyze massive data sets and an entire system of neurons. Collaborating across traditional academic boundaries takes it to the next level, he said. “The tools are at a time and place where we think that the field is ready for that sort of combination.”

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 22960 - Posted: 12.07.2016