Links for Keyword: Robotics



Links 1 - 20 of 259

By Ferris Jabr On the evening of Oct. 10, 2006, Dennis DeGray’s mind was nearly severed from his body. After a day of fishing, he returned to his home in Pacific Grove, Calif., and realized he had not yet taken out the trash or recycling. It was raining fairly hard, so he decided to sprint from his doorstep to the garbage cans outside with a bag in each hand. As he was running, he slipped on a patch of black mold beneath some oak trees, landed hard on his chin, and snapped his neck between his second and third vertebrae. While recovering, DeGray, who was 53 at the time, learned from his doctors that he was permanently paralyzed from the collarbones down. With the exception of vestigial twitches, he cannot move his torso or limbs. “I’m about as hurt as you can get and not be on a ventilator,” he told me. For several years after his accident, he “simply laid there, watching the History Channel” as he struggled to accept the reality of his injury. Some time later, while at a fund-raising event for stem-cell research, he met Jaimie Henderson, a professor of neurosurgery at Stanford University. The pair got to talking about robots, a subject that had long interested DeGray, who grew up around his family’s machine shop. As DeGray remembers it, Henderson captivated him with a single question: Do you want to fly a drone? Henderson explained that he and his colleagues had been developing a brain-computer interface: an experimental connection between someone’s brain and an external device, like a computer, robotic limb or drone, which the person could control simply by thinking. DeGray was eager to participate, eventually moving to Menlo Park to be closer to Stanford as he waited for an opening in the study and the necessary permissions. In the summer of 2016, Henderson opened DeGray’s skull and exposed his cortex — the thin, wrinkled, outermost layer of the brain — into which he implanted two 4-millimeter-by-4-millimeter electrode arrays resembling miniature beds of nails. Each array had 100 tiny metal spikes that, collectively, recorded electric impulses surging along a couple of hundred neurons or so in the motor cortex, a brain region involved in voluntary movement. © 2022 The New York Times Company

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 28326 - Posted: 05.14.2022

By Pallab Ghosh A paralysed man with a severed spinal cord has been able to walk again, thanks to an implant developed by a team of Swiss researchers. It is the first time someone who has had a complete cut to their spinal cord has been able to walk freely. The same technology has improved the health of another paralysed patient to the extent that he has been able to become a father. The research has been published in the journal Nature Medicine. Michel Roccati was paralysed after a motorbike accident five years ago. His spinal cord was completely severed - and he has no feeling at all in his legs. But he can now walk - because of an electrical implant that has been surgically attached to his spine. Someone this injured has never been able to walk like this before. The researchers stress that it isn't a cure for spinal injury and that the technology is still too complicated to be used in everyday life, but hail it nonetheless as a major step to improving quality of life. I met Michel at the lab where the implant was created. He told me that the technology "is a gift to me". "I stand up, walk where I want to, I can walk the stairs - it's almost a normal life." It was not the technology alone that drove Michel's recovery. The young Italian has a steely resolve. He told me that from the moment of his accident, he was determined to make as much progress as he could. "I used to box, run and do fitness training in the gym. But after the accident, I could not do the things that I loved to do, but I did not let my mood go down. I never stopped my rehabilitation. I wanted to solve this problem." The speed of Michel's recovery amazed the neurosurgeon who inserted the implant and expertly attached electrodes to individual nerve fibres, Prof Jocelyne Bloch at Lausanne University Hospital. "I was extremely surprised," she told me. "Michel is absolutely incredible. He should be able to use this technology to progress and be better and better." © 2022 BBC.

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 13: Memory and Learning
Link ID: 28194 - Posted: 02.09.2022

By Nayef Al-Rodhan In Chile, the National Commission for Scientific and Technological Research has begun to debate a “neurorights” bill to be written into the country’s constitution. The world, and most importantly the OECD, UNESCO and the United Nations, should be watching closely. The Chilean bill sets out to protect the right to personal identity, free will, mental privacy, equitable access to technologies that augment human capacities, and the right to protection against bias and discrimination. The landmark bill would be the first of its kind to pioneer a regulatory framework which protects human rights from the manipulation of brain activity. The relatively nascent concept of neurorights follows a number of recent medical innovations, most notably brain-computer interface technology (BCI), which has the potential to revolutionize the field of neuroscience. BCI-based therapy may be useful for poststroke motor rehabilitation and may be a potential method for the accurate detection and treatment of neurological diseases such as Alzheimer’s. Advocates claim there is therefore a moral imperative to use the technology, given the benefits it could bring; others worry about its ethical, moral and societal consequences. Many (mistakenly) see this process as being potentially undermined by premature governance restrictions, or accuse any mention of brake mechanisms as an exaggerated reaction to an unlikely science-fiction scenario. © 2021 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 14: Attention and Higher Cognition
Link ID: 27841 - Posted: 06.02.2021

R. Douglas Fields The raging bull locked its legs mid-charge. Digging its hooves into the ground, the beast came to a halt just before it would have gored the man. Not a matador, the man in the bullring standing eye-to-eye with the panting toro was the Spanish neuroscientist José Manuel Rodriguez Delgado, in a death-defying public demonstration in 1963 of how violent behavior could be squelched by a radio-controlled brain implant. Delgado had pressed a switch on a hand-held radio transmitter to energize electrodes implanted in the bull’s brain. Remote-controlled brain implants, Delgado argued, could suppress deviant behavior to achieve a “psychocivilized society.” Unsurprisingly, the prospect of manipulating the human mind with brain implants and radio beams ignited public fears that curtailed this line of research for decades. But now there is a resurgence using even more advanced technology. Laser beams, ultrasound, electromagnetic pulses, mild alternating and direct current stimulation and other methods now allow access to, and manipulation of, electrical activity in the brain with far more sophistication than the needlelike electrodes Delgado stabbed into brains. Billionaires Elon Musk of Tesla and Mark Zuckerberg of Facebook are leading the charge, pouring millions of dollars into developing brain-computer interface (BCI) technology. Musk says he wants to provide a “superintelligence layer” in the human brain to help protect us from artificial intelligence, and Zuckerberg reportedly wants users to upload their thoughts and emotions over the internet without the bother of typing. But fact and fiction are easily blurred in these deliberations. How does this technology actually work, and what is it capable of? All Rights Reserved © 2021

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 13: Memory and Learning
Link ID: 27827 - Posted: 05.19.2021

By Christine Kenneally The first thing that Rita Leggett saw when she regained consciousness was a pair of piercing blue eyes peering curiously into hers. “I know you, don’t I?” she said. The man with the blue eyes replied, “Yes, you do.” But he didn’t say anything else, and for a while Leggett just wondered and stared. Then it came to her: “You’re my surgeon!” It was November, 2010, and Leggett had just undergone neurosurgery at the Royal Melbourne Hospital. She recalled a surge of loneliness as she waited alone in a hotel room the night before the operation and the fear she felt when she entered the operating room. She’d worried about the surgeon cutting off her waist-length hair. What am I doing in here? she’d thought. But just before the anesthetic took hold, she recalled, she had said to herself, “I deserve this.” Leggett was forty-nine years old and had suffered from epilepsy since she was born. During the operation, her surgeon, Andrew Morokoff, had placed an experimental device inside her skull, part of a brain-computer interface that, it was hoped, would be able to predict when she was about to have a seizure. The device, developed by a Seattle company called NeuroVista, had entered a trial stage known in medical research as “first in human.” A research team drawn from three prominent epilepsy centers based in Melbourne had selected fifteen patients to test the device. Leggett was Patient 14. © 2021 Condé Nast.

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 27791 - Posted: 04.28.2021

By Tanya Lewis During Musk’s demonstration, he strolled near a pen containing several pigs, some of which had Neuralink implants. One animal, named Gertrude, had hers for two months. The device’s electrodes were situated in a part of Gertrude’s cortex that connected to neurons in her snout. And for the purposes of the demo, her brain signals were converted to audible bleeps that became more frequent as she sniffed around the pen and enjoyed some tasty treats. Musk also showed off a pig whose implant had been successfully removed to show that the surgery was reversible. Some of the other displayed pigs had multiple implants. Neuralink, which was founded by Musk and a team of engineers and scientists in 2016, unveiled an earlier, wired version of its implant technology in 2019. It had several modules: the electrodes were connected to a USB port in the skull, which was intended to be wired to an external battery and a radio transmitter that were located behind the ear. The latest version consists of a single integrated implant that fits in a hole in the skull and relays data through the skin via a Bluetooth radio. The wireless design makes it seem much more practical for human use but limits the bandwidth of data that can be sent, compared with state-of-the-art brain-computer interfaces. The company’s goal, Musk said in the demo, is to “solve important spine and brain problems with a seamlessly implanted device”—a far cry from his previously stated, much more fantastic aim of allowing humans to merge with artificial intelligence. This time Musk seemed more circumspect about the device’s applications. As before, he insisted the demonstration was purely intended as a recruiting event to attract potential staff. Neuralink’s efforts build on decades of work from researchers in the field of brain-computer interfaces. Although technically impressive, this wireless brain implant is not the first to be tested in pigs or other large mammals. © 2020 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27457 - Posted: 09.07.2020

By Benjamin Powers On the 10th floor of a nondescript building at Columbia University, test subjects with electrodes attached to their heads watch a driver’s view of a car going down a street through a virtual reality headset. All the while, images of pianos and sailboats pop up to the left and right of each test subject’s field of vision, drawing their attention. The experiment, headed by Paul Sajda, a biomedical engineer and the director of Columbia’s Laboratory for Intelligent Imaging and Neural Computing, monitors the subjects’ brain activity through electroencephalography technology (EEG), while the VR headset tracks their eye movement to see where they’re looking — a setup in which a computer interacts directly with brain waves, called a brain computer interface (BCI). In the Columbia experiment, the goal is to use the information from the brain to train artificial intelligence in self-driving cars, so they can monitor when, or if, drivers are paying attention. BCIs are popping up in a range of fields, from soldiers piloting a swarm of drones at the Defense Advanced Research Projects Agency (DARPA) to a Chinese school monitoring students’ attention. The devices are also used in medicine, including versions that let people who have been paralyzed operate a tablet with their mind or that give epileptic patients advance warning of a seizure. And in July 2019, Elon Musk, the CEO and founder of Tesla and other technology companies, showed off the work of his venture Neuralink, which could implant BCIs in people’s brains to achieve “a symbiosis with artificial intelligence.”

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Higher Cognition
Link ID: 27209 - Posted: 04.22.2020

By Karen Weintraub At age 16, German Aldana was riding in the back seat of a car driven by a friend when another car headed straight for them. To avoid a collision, his friend swerved and hit a concrete pole. The others weren’t seriously injured, but Aldana, unbuckled, was tossed around enough to snap his spine just below his neck. For the next five years, he could move only his neck, and his arms a little. Right after he turned 21 and met the criteria, Aldana signed up for a research project at the University of Miami Miller School of Medicine near his home. Researchers with the Miami Project to Cure Paralysis carefully opened Aldana's skull and, at the surface of the brain, implanted electrodes. Then, in the lab, they trained a computer to interpret the pattern of signals from those electrodes as he imagines opening and closing his hand. The computer then transfers the signal to a prosthetic on Aldana's forearm, which then stimulates the appropriate muscles to cause his hand to close. The entire process takes 400 milliseconds from thought to grasp. A year after his surgery, Aldana can grab simple objects, like a block. He can bring a spoon to his mouth, feeding himself for the first time in six years. He can grasp a pen and scratch out some legible letters. He has begun experimenting with a treadmill that moves his limbs, allowing him to take steps forward or stop as he thinks about clenching or unclenching the fingers of his right hand. But only in the lab. Researchers had permission to test it only in their facility, but they’re now applying for federal permission to extend their study. The hope is that by the end of this year, Aldana will be able to bring his device home — improving his ability to feed himself, open doors and restoring some measure of independence.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 27107 - Posted: 03.09.2020

By Kelly Servick Building a beautiful robotic hand is one thing. Getting it to do your bidding is another. For all the hand-shaped prostheses designed to bend each intricate joint on cue, there’s still the problem of how to send that cue from the wearer’s brain. Now, by tapping into signals from nerves in the arm, researchers have enabled amputees to precisely control a robotic hand just by thinking about their intended finger movements. The interface, which relies on a set of tiny muscle grafts to amplify a user’s nerve signals, just passed its first test in people: It translated those signals into movements, and its accuracy stayed stable over time. “This is really quite a promising and lovely piece of work,” says Gregory Clark, a neural engineer at the University of Utah who was not involved in the research. It “opens up new opportunities for better control.” Most current robotic prostheses work by recording—from the surface of the skin—electrical signals from muscles left intact after an amputation. Some amputees can guide their artificial hand by contracting muscles remaining in the forearm that would have controlled their fingers. If those muscles are missing, people can learn to use less intuitive movements, such as flexing muscles in their upper arm. These setups can be finicky, however. The electrical signal changes when a person’s arm sweats, swells, or slips around in the socket of the prosthesis. As a result, the devices must be recalibrated over and over, and many people decide that wearing a heavy robotic arm all day just isn’t worth it, says Shriya Srinivasan, a biomedical engineer at the Massachusetts Institute of Technology. © 2020 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 27095 - Posted: 03.05.2020

By Matthew Cobb We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity. We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind. Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways. And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27084 - Posted: 02.28.2020

Ian Sample Science editor Scientists have created artificial neurons that could potentially be implanted into patients to overcome paralysis, restore failing brain circuits, and even connect their minds to machines. The bionic neurons can receive electrical signals from healthy nerve cells, and process them in a natural way, before sending fresh signals on to other neurons, or to muscles and organs elsewhere in the body. One of the first applications may be a treatment for a form of heart failure that develops when a particular neural circuit at the base of the brain deteriorates through age or disease and fails to send the right signals to make the heart pump properly. Rather than implanting directly into the brain, the artificial neurons are built into ultra-low power microchips a few millimetres wide. The chips form the basis for devices that would plug straight into the nervous system, for example by intercepting signals that pass between the brain and leg muscles. “Any area where you have some degenerative disease, such as Alzheimer’s, or where the neurons stop firing properly because of age, disease, or injury, then in theory you could replace the faulty biocircuit with a synthetic circuit,” said Alain Nogaret, a physicist who led the project at the University of Bath. The breakthrough came when researchers found they could model live neurons in a computer program and then recreate their firing patterns in silicon chips with more than 94% accuracy. The program allows the scientists to mimic the full variety of neurons found in the nervous system. © 2019 Guardian News & Media Limited
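The Guardian excerpt does not specify which neuron models the Bath team reproduced in silicon, so as a purely illustrative stand-in, the short Python sketch below simulates a single spiking neuron with the widely used Izhikevich model. It shows the kind of firing-pattern model that "modelling live neurons in a computer program" can involve; it is not the project's actual equations or chip design.

```python
import numpy as np

# Illustrative stand-in only: one regular-spiking Izhikevich neuron, simulated with
# simple Euler integration. Parameter values are the model's textbook defaults, not
# anything taken from the University of Bath work described above.

def izhikevich_spike_times(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                           dt=0.5, t_max=1000.0):
    """Simulate one neuron driven by constant input I and return its spike times (ms)."""
    v, u = -65.0, b * -65.0        # membrane potential and recovery variable
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: record the time, then reset the state
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

spike_times = izhikevich_spike_times()
print(f"{len(spike_times)} spikes in 1 s; first few at {spike_times[:3]} ms")
```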

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 4: Development of the Brain
Link ID: 26872 - Posted: 12.04.2019

By Robert Martone We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner. This study opens the door to extraordinary new means of human collaboration while, at the same time, blurring fundamental notions about individual identity and autonomy in disconcerting ways. Direct brain-to-brain communication has been a subject of intense interest for many years, driven by motives as diverse as futurist enthusiasm and military exigency. In his book Beyond Boundaries one of the leaders in the field, Miguel Nicolelis, described the merging of human brain activity as the future of humanity, the next stage in our species’ evolution. (Nicolelis serves on Scientific American’s board of advisers.) He has already conducted a study in which he linked together the brains of several rats using complex implanted electrodes known as brain-to-brain interfaces. Nicolelis and his co-authors described this achievement as the first “organic computer” with living brains tethered together as if they were so many microprocessors. The animals in this network learned to synchronize the electrical activity of their nerve cells to the same extent as those in a single brain. The networked brains were tested for things such as their ability to discriminate between two different patterns of electrical stimuli, and they routinely outperformed individual animals. © 2019 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Language and Lateralization
Link ID: 26770 - Posted: 10.30.2019

By Kelly Servick CHICAGO, ILLINOIS—By harnessing the power of imagination, researchers have nearly doubled the speed at which completely paralyzed patients may be able to communicate with the outside world. People who are “locked in”—fully paralyzed by stroke or neurological disease—have trouble trying to communicate even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that’s still about three times slower than natural handwriting. In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters. Eventually, the computer could read out the volunteer’s imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience. The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements. © 2019 American Association for the Advancement of Science.
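To make "training a computer model to interpret the commands" concrete, here is a deliberately toy Python sketch. The study itself used a recurrent neural network on real motor-cortex recordings; the nearest-centroid classifier, the synthetic 192-feature "recordings," and the 26-letter setup below are assumptions made purely for illustration.

```python
import numpy as np

# Toy sketch of the decoding step: map a window of neural features recorded while a
# letter is imagined to that letter's identity. All data here are synthetic.

rng = np.random.default_rng(1)
letters = list("abcdefghijklmnopqrstuvwxyz")
n_features = 192                     # e.g. spike-band features from two 96-channel arrays (assumed)

# Pretend each imagined letter evokes a characteristic neural pattern plus trial noise.
prototypes = {ch: rng.normal(size=n_features) for ch in letters}

def record_trial(ch):
    return prototypes[ch] + 0.3 * rng.normal(size=n_features)

# "Training": average a few trials per letter to estimate its neural template.
templates = {ch: np.mean([record_trial(ch) for _ in range(20)], axis=0) for ch in letters}

# "Decoding": classify a new trial by its closest template (nearest centroid).
def decode(trial):
    return min(letters, key=lambda ch: np.linalg.norm(trial - templates[ch]))

test = [(ch, record_trial(ch)) for ch in letters for _ in range(10)]
accuracy = np.mean([decode(x) == ch for ch, x in test])
print(f"toy decoding accuracy: {accuracy:.1%}")
```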

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 26745 - Posted: 10.24.2019

By James Gallagher Health and science correspondent A man has been able to move all four of his paralysed limbs with a mind-controlled exoskeleton suit, French researchers report. Thibault, 30, said taking his first steps in the suit felt like being the "first man on the Moon". His movements, particularly walking, are far from perfect and the robo-suit is being used only in the lab. But researchers say the approach could one day improve patients' quality of life. And he can control each of the arms, manoeuvring them in three-dimensional space. Thibault, who does not want his surname revealed, was an optician before he fell 15m in an incident at a night club four years ago. The injury to his spinal cord left him paralysed and he spent the next two years in hospital. But in 2017, he took part in the exoskeleton trial with Clinatec and the University of Grenoble. Initially he practised using the brain implants to control a virtual character, or avatar, in a computer game, then he moved on to walking in the suit. "It was like [being the] first man on the Moon. I didn't walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room," he said. It took a lot longer to learn how to control the arms. © 2019 BBC.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26670 - Posted: 10.04.2019

Cassandra Willyard Rob Summers was flat on his back at a rehabilitation institute in Kentucky when he realized he could wiggle his big toe. Up, down, up, down. This was new — something he hadn’t been able to do since a hit-and-run driver left him paralysed from the chest down. When that happened four years earlier, doctors had told him that he would never move his lower body again. Now he was part of a pioneering experiment to test the power of electrical stimulation in people with spinal-cord injuries. “Susie, look, I can wiggle my toe,” Summers said. Susan Harkema, a neurophysiologist at the University of Louisville in Kentucky, sat nearby, absorbed in the data on her computer. She was incredulous. Summers’s toe might be moving, but he was not in control. Of that she was sure. Still, she decided to humour him. She asked him to close his eyes and move his right toe up, then down, and then up. She moved on to the left toe. He performed perfectly. “Holy shit,” Harkema said. She was paying attention now. “How is that happening?” he asked. “I have no idea,” she replied. Summers had been a university baseball player with major-league ambitions before the vehicle that struck him snapped all the ligaments and tendons in his neck, allowing one of his vertebrae to pound the delicate nerve tissue it was meant to protect. Doctors classified the injury as complete; the motor connections to his legs had been wiped out. When Harkema and her colleagues implanted a strip of tiny electrodes in his spine in 2009, they weren’t trying to restore Summers’s ability to move on his own. Instead, the researchers were hoping to demonstrate that the spine contains all the circuitry necessary for the body to stand and to step. They reasoned that such an approach might allow people with spinal-cord injuries to stand and walk, using electrical stimulation to replace the signals that once came from the brain.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26471 - Posted: 07.31.2019

By Dom Vukovic Robotic skeletons may sound like something out of a science fiction movie but they are now being used to help people with severe spinal cord injuries take their first steps. The device, known as a Rex bionic exoskeleton, is one of only a few in the country, and researchers in a trial have named their prototype HELLEN. In a joint initiative between the University of Newcastle and the Australian Institute of Neuro-Rehabilitation, the robot is being used as a therapy device to see if it can help improve health and mobility outcomes in people with conditions including stroke, multiple sclerosis and now quadriplegia. Chief investigator Jodie Marquez said the trial was one of the first in the world to capture data about physiological and neurological changes that might occur in patients who undergo therapy while wearing the robotic suit. "We're seeing whether exercising in the exoskeleton device can improve both real measures of strength and spasticity, but also bigger measures such as mood and quality of life and function," Dr Marquez said. "I have no doubt that robotics will become a part of rehabilitation and a part of our lives in the future, I think that's unquestionable." Lifesaver Jess Collins is the first person with severe spinal injuries to participate in the trial. She had a near-fatal surfing accident while on holidays with friends in May last year, leaving her paralysed from the chest down. "I've hit the board and then the sandbank and then instantly I didn't have any movement or feeling and I wasn't sure where I was placed in the water … I was face down, which was horrific and I was conscious the entire time," she said. © 2019 ABC

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26460 - Posted: 07.29.2019

By: Karen Moxon, Ph.D., Ignacio Saez, Ph.D., and Jochen Ditterich, Ph.D. Technology that is sparking an entirely new field of neuroscience will soon let us simply think about something we want our computers to do and watch it instantaneously happen. In fact, some patients with severe neurological injury or disease are already reaping the benefits of initial advances by using their thoughts to signal and control robotic limbs. This brain-computer interface (BCI) idea is spawning a new area of neuroscience called cognitive neuroengineering that holds the promise of improving the quality of life for everyone on the planet in unimaginable ways. But the technology is not yet ready for prime time. There are three basic aspects of BCIs—recording, decoding, and operation. Progress will require refining all three. BCI works because brain activity generates a signal—typically an electrical field—that can be recorded through a dedicated device, which feeds it to a computer whose analysis software (i.e., a decoding algorithm) “translates” the signal to a simple command. This command signal operates a computer or other machine. The resulting operation can be as simple as moving a cursor on a screen, for which the command need contain just X and Y coordinates, or as complex as controlling a robotic arm, which requires information about position, orientation, speed, rotation, and more. Recent work from the University of Pittsburgh has shown that subjects with amyotrophic lateral sclerosis (ALS) can control a complex robot arm—having it pick up a pitcher and pour water into a glass—just by thinking about it. The downside is that it is necessary to surgically implant recording microelectrodes into the brain and that, most importantly, such electrodes are not reliable for more than a few years. © 2019 The Dana Foundation.
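To make the recording, decoding, and operation stages described above concrete, here is a minimal, hypothetical Python sketch. It is not any group's actual system: the simulated 96-channel firing rates, the least-squares linear decoder, and the cursor-velocity command are illustrative assumptions chosen only to show how a recorded signal can be translated into a simple command that operates a device.

```python
import numpy as np

# Hypothetical sketch of the three BCI stages: record -> decode -> operate.
# All quantities below are simulated; none come from a published pipeline.

rng = np.random.default_rng(0)

# "Recording": simulate binned firing rates from 96 electrodes over 500 time bins,
# generated from a hidden linear tuning to the intended 2-D cursor velocity plus noise.
n_channels, n_bins = 96, 500
tuning = rng.normal(size=(n_channels, 2))                  # each channel's (x, y) tuning
velocity = rng.normal(size=(n_bins, 2))                    # intended cursor velocity
rates = velocity @ tuning.T + 0.5 * rng.normal(size=(n_bins, n_channels))

# "Decoding": fit a least-squares linear map from neural activity back to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Operation": stream decoded velocity commands to move a cursor on a screen.
cursor = np.zeros(2)
for t in range(n_bins):
    cursor += rates[t] @ decoder                           # decoded (x, y) step at bin t
print("final decoded cursor position:", cursor)
```

A robotic-arm command would simply be a higher-dimensional version of the same decoded output (position, orientation, grasp), which is why decoder quality and electrode stability dominate the engineering effort.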

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26306 - Posted: 06.06.2019

Sandeep Ravindran In 2012, computer scientist Dharmendra Modha used a powerful supercomputer to simulate the activity of more than 500 billion neurons—more, even, than the 85 billion or so neurons in the human brain. It was the culmination of almost a decade of work, as Modha progressed from simulating the brains of rodents and cats to something on the scale of humans. The simulation consumed enormous computational resources—1.5 million processors and 1.5 petabytes (1.5 million gigabytes) of memory—and was still agonizingly slow, 1,500 times slower than the brain computes. Modha estimates that to run it in biological real time would have required 12 gigawatts of energy, about six times the maximum output capacity of the Hoover Dam. “And yet, it was just a cartoon of what the brain does,” says Modha, chief scientist for brain-inspired computing at IBM Almaden Research Center in northern California. The simulation came nowhere close to replicating the functionality of the human brain, which uses about the same amount of power as a 20-watt lightbulb. Since the early 2000s, improved hardware and advances in experimental and theoretical neuroscience have enabled researchers to create ever larger and more-detailed models of the brain. But the more complex these simulations get, the more they run into the limitations of conventional computer hardware, as illustrated by Modha’s power-hungry model. © 1986–2019 The Scientist

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 20; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26269 - Posted: 05.28.2019

Siobhan Roberts In May 2013, the mathematician Carina Curto attended a workshop in Arlington, Virginia, on “Physical and Mathematical Principles of Brain Structure and Function” — a brainstorming session about the brain, essentially. The month before, President Obama had issued one of his “Grand Challenges” to the scientific community in announcing the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), aimed at spurring a long-overdue revolution in understanding our three-pound organ upstairs. In advance of the workshop, the hundred or so attendees each contributed to a white paper addressing the question of what they felt was the most significant obstacle to progress in brain science. Answers ran the gamut — some probed more generally, citing the brain’s “utter complexity,” while others delved into details about the experimental technology. Curto, an associate professor at Pennsylvania State University, took a different approach in her entry, offering an overview of the mathematical and theoretical technology: A major obstacle impeding progress in brain science is the lack of beautiful models. Let me explain. … Many will agree that the existing (and impending) deluge of data in neuroscience needs to be accompanied by advances in computational and theoretical approaches — for how else are we to “make sense” of these data? What such advances should look like, however, is very much up to debate. … How much detail should we be including in our models? … How well can we defend the biological realism of our theories? All Rights Reserved © 2018

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 25108 - Posted: 06.20.2018

By Robert F. Service Prosthetics may soon take on a whole new feel. That’s because researchers have created a new type of artificial nerve that can sense touch, process information, and communicate with other nerves much like those in our own bodies do. Future versions could add sensors to track changes in texture, position, and different types of pressure, leading to potentially dramatic improvements in how people with artificial limbs—and someday robots—sense and interact with their environments. “It’s a pretty nice advance,” says Robert Shepherd, an organic electronics expert at Cornell University. Not only are the soft, flexible, organic materials used to make the artificial nerve ideal for integrating with pliable human tissue, but they are also relatively cheap to manufacture in large arrays, Shepherd says. Modern prosthetics are already impressive: Some allow amputees to control arm movement with just their thoughts; others have pressure sensors in the fingertips that help wearers control their grip without the need to constantly monitor progress with their eyes. But our natural sense of touch is far more complex, integrating thousands of sensors that track different types of pressure, such as soft and forceful touch, along with the ability to sense heat and changes in position. This vast amount of information is ferried by a network that passes signals through local clusters of nerves to the spinal cord and ultimately the brain. Only when the signals combine to become strong enough do they make it up the next link in the chain. © 2018 American Association for the Advancement of Science.

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 25048 - Posted: 06.01.2018