Links for Keyword: Robotics



Links 21 - 40 of 276

R. Douglas Fields The raging bull locked its legs mid-charge. Digging its hooves into the ground, the beast came to a halt just before it would have gored the man. Not a matador, the man in the bullring standing eye-to-eye with the panting toro was the Spanish neuroscientist José Manuel Rodriguez Delgado, in a death-defying public demonstration in 1963 of how violent behavior could be squelched by a radio-controlled brain implant. Delgado had pressed a switch on a hand-held radio transmitter to energize electrodes implanted in the bull’s brain. Remote-controlled brain implants, Delgado argued, could suppress deviant behavior to achieve a “psychocivilized society.” Unsurprisingly, the prospect of manipulating the human mind with brain implants and radio beams ignited public fears that curtailed this line of research for decades. But now there is a resurgence using even more advanced technology. Laser beams, ultrasound, electromagnetic pulses, mild alternating and direct current stimulation and other methods now allow access to, and manipulation of, electrical activity in the brain with far more sophistication than the needlelike electrodes Delgado stabbed into brains. Billionaires Elon Musk of Tesla and Mark Zuckerberg of Facebook are leading the charge, pouring millions of dollars into developing brain-computer interface (BCI) technology. Musk says he wants to provide a “superintelligence layer” in the human brain to help protect us from artificial intelligence, and Zuckerberg reportedly wants users to upload their thoughts and emotions over the internet without the bother of typing. But fact and fiction are easily blurred in these deliberations. How does this technology actually work, and what is it capable of? All Rights Reserved © 2021

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 13: Memory and Learning
Link ID: 27827 - Posted: 05.19.2021

By Christine Kenneally The first thing that Rita Leggett saw when she regained consciousness was a pair of piercing blue eyes peering curiously into hers. “I know you, don’t I?” she said. The man with the blue eyes replied, “Yes, you do.” But he didn’t say anything else, and for a while Leggett just wondered and stared. Then it came to her: “You’re my surgeon!” It was November, 2010, and Leggett had just undergone neurosurgery at the Royal Melbourne Hospital. She recalled a surge of loneliness as she waited alone in a hotel room the night before the operation and the fear she felt when she entered the operating room. She’d worried about the surgeon cutting off her waist-length hair. What am I doing in here? she’d thought. But just before the anesthetic took hold, she recalled, she had said to herself, “I deserve this.” Leggett was forty-nine years old and had suffered from epilepsy since she was born. During the operation, her surgeon, Andrew Morokoff, had placed an experimental device inside her skull, part of a brain-computer interface that, it was hoped, would be able to predict when she was about to have a seizure. The device, developed by a Seattle company called NeuroVista, had entered a trial stage known in medical research as “first in human.” A research team drawn from three prominent epilepsy centers based in Melbourne had selected fifteen patients to test the device. Leggett was Patient 14. © 2021 Condé Nast.

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 27791 - Posted: 04.28.2021

By Tanya Lewis During Musk’s demonstration, he strolled near a pen containing several pigs, some of which had Neuralink implants. One animal, named Gertrude, had hers for two months. The device’s electrodes were situated in a part of Gertrude’s cortex that connected to neurons in her snout. And for the purposes of the demo, her brain signals were converted to audible bleeps that became more frequent as she sniffed around the pen and enjoyed some tasty treats. Musk also showed off a pig whose implant had been successfully removed to show that the surgery was reversible. Some of the other displayed pigs had multiple implants. Neuralink, which was founded by Musk and a team of engineers and scientists in 2016, unveiled an earlier, wired version of its implant technology in 2019. It had several modules: the electrodes were connected to a USB port in the skull, which was intended to be wired to an external battery and a radio transmitter that were located behind the ear. The latest version consists of a single integrated implant that fits in a hole in the skull and relays data through the skin via a Bluetooth radio. The wireless design makes it seem much more practical for human use but limits the bandwidth of data that can be sent, compared with state-of-the-art brain-computer interfaces. The company’s goal, Musk said in the demo, is to “solve important spine and brain problems with a seamlessly implanted device”—a far cry from his previously stated, much more fantastic aim of allowing humans to merge with artificial intelligence. This time Musk seemed more circumspect about the device’s applications. As before, he insisted the demonstration was purely intended as a recruiting event to attract potential staff. Neuralink’s efforts build on decades of work from researchers in the field of brain-computer interfaces. Although technically impressive, this wireless brain implant is not the first to be tested in pigs or other large mammals. © 2020 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27457 - Posted: 09.07.2020

By Benjamin Powers On the 10th floor of a nondescript building at Columbia University, test subjects with electrodes attached to their heads watch a driver’s view of a car going down a street through a virtual reality headset. All the while, images of pianos and sailboats pop up to the left and right of each test subject’s field of vision, drawing their attention. The experiment, headed by Paul Sajda, a biomedical engineer and the director of Columbia’s Laboratory for Intelligent Imaging and Neural Computing, monitors the subjects’ brain activity through electroencephalography (EEG), while the VR headset tracks their eye movement to see where they’re looking — a setup in which a computer interacts directly with brain waves, called a brain-computer interface (BCI). In the Columbia experiment, the goal is to use the information from the brain to train artificial intelligence in self-driving cars, so they can monitor when, or if, drivers are paying attention. BCIs are popping up in a range of fields, from soldiers piloting a swarm of drones at the Defense Advanced Research Projects Agency (DARPA) to a Chinese school monitoring students’ attention. The devices are also used in medicine, including versions that let people who have been paralyzed operate a tablet with their mind or that give epileptic patients advance warning of a seizure. And in July 2019, Elon Musk, the CEO and founder of Tesla and other technology companies, showed off the work of his venture Neuralink, which could implant BCIs in people’s brains to achieve “a symbiosis with artificial intelligence.”

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Higher Cognition
Link ID: 27209 - Posted: 04.22.2020

By Karen Weintraub At age 16, German Aldana was riding in the back seat of a car driven by a friend when another car headed straight for them. To avoid a collision, his friend swerved and hit a concrete pole. The others weren’t seriously injured, but Aldana, unbuckled, was tossed around enough to snap his spine just below his neck. For the next five years, he could move only his neck, and his arms a little. Right after he turned 21 and met the criteria, Aldana signed up for a research project at the University of Miami Miller School of Medicine near his home. Researchers with the Miami Project to Cure Paralysis carefully opened Aldana's skull and, at the surface of the brain, implanted electrodes. Then, in the lab, they trained a computer to interpret the pattern of signals from those electrodes as he imagined opening and closing his hand. The computer then transfers the signal to a prosthetic on Aldana's forearm, which stimulates the appropriate muscles to cause his hand to close. The entire process takes 400 milliseconds from thought to grasp. A year after his surgery, Aldana can grab simple objects, like a block. He can bring a spoon to his mouth, feeding himself for the first time in six years. He can grasp a pen and scratch out some legible letters. He has begun experimenting with a treadmill that moves his limbs, allowing him to take steps forward or stop as he thinks about clenching or unclenching the fingers of his right hand. But only in the lab. Researchers had permission to test it only in their facility, but they’re now applying for federal permission to extend their study. The hope is that by the end of this year, Aldana will be able to bring his device home — improving his ability to feed himself and open doors, and restoring some measure of independence.
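The excerpt describes a complete thought-to-grasp loop: cortical signals recorded at the electrodes, a trained decoder that recognizes imagined hand closing, and a forearm prosthetic that stimulates the muscles, all within 400 milliseconds. The sketch below is a minimal illustration of that loop only, assuming a simple linear classifier over channel firing rates; the Miami team's actual decoder, channel count, and stimulator interface are not given in the excerpt, and every name and number here is hypothetical.

```python
import numpy as np

# Hypothetical decoder weights. A real system learns these from training
# sessions in the lab; here they are random placeholders.
rng = np.random.default_rng(0)
N_CHANNELS = 96                      # assumed electrode channel count
weights = rng.normal(size=N_CHANNELS)

def decode_grasp_intent(firing_rates: np.ndarray) -> bool:
    """Classify one window of firing rates as 'close hand' vs. 'rest'."""
    return float(firing_rates @ weights) > 0.0

def stimulate_forearm(close_hand: bool) -> None:
    """Stand-in for the forearm prosthetic that drives the hand muscles."""
    print("close" if close_hand else "rest")

# One pass of the ~400 ms thought-to-grasp loop, on simulated spike counts:
window = rng.poisson(lam=5.0, size=N_CHANNELS)
stimulate_forearm(decode_grasp_intent(window))
```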

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 27107 - Posted: 03.09.2020

By Kelly Servick Building a beautiful robotic hand is one thing. Getting it to do your bidding is another. For all the hand-shaped prostheses designed to bend each intricate joint on cue, there’s still the problem of how to send that cue from the wearer’s brain. Now, by tapping into signals from nerves in the arm, researchers have enabled amputees to precisely control a robotic hand just by thinking about their intended finger movements. The interface, which relies on a set of tiny muscle grafts to amplify a user’s nerve signals, just passed its first test in people: It translated those signals into movements, and its accuracy stayed stable over time. “This is really quite a promising and lovely piece of work,” says Gregory Clark, a neural engineer at the University of Utah who was not involved in the research. It “opens up new opportunities for better control.” Most current robotic prostheses work by recording—from the surface of the skin—electrical signals from muscles left intact after an amputation. Some amputees can guide their artificial hand by contracting muscles remaining in the forearm that would have controlled their fingers. If those muscles are missing, people can learn to use less intuitive movements, such as flexing muscles in their upper arm. These setups can be finicky, however. The electrical signal changes when a person’s arm sweats, swells, or slips around in the socket of the prosthesis. As a result, the devices must be recalibrated over and over, and many people decide that wearing a heavy robotic arm all day just isn’t worth it, says Shriya Srinivasan, a biomedical engineer at the Massachusetts Institute of Technology. © 2020 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 27095 - Posted: 03.05.2020

By Matthew Cobb We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity. We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind. Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways. And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27084 - Posted: 02.28.2020

Ian Sample Science editor Scientists have created artificial neurons that could potentially be implanted into patients to overcome paralysis, restore failing brain circuits, and even connect their minds to machines. The bionic neurons can receive electrical signals from healthy nerve cells, and process them in a natural way, before sending fresh signals on to other neurons, or to muscles and organs elsewhere in the body. One of the first applications may be a treatment for a form of heart failure that develops when a particular neural circuit at the base of the brain deteriorates through age or disease and fails to send the right signals to make the heart pump properly. Rather than being implanted directly into the brain, the artificial neurons are built into ultra-low-power microchips a few millimetres wide. The chips form the basis for devices that would plug straight into the nervous system, for example by intercepting signals that pass between the brain and leg muscles. “Any area where you have some degenerative disease, such as Alzheimer’s, or where the neurons stop firing properly because of age, disease, or injury, then in theory you could replace the faulty biocircuit with a synthetic circuit,” said Alain Nogaret, a physicist who led the project at the University of Bath. The breakthrough came when researchers found they could model live neurons in a computer program and then recreate their firing patterns in silicon chips with more than 94% accuracy. The program allows the scientists to mimic the full variety of neurons found in the nervous system. © 2019 Guardian News & Media Limited
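The step of modelling live neurons in software before recreating their firing in silicon can be illustrated with a standard spiking-neuron model. The sketch below uses the Izhikevich model purely as a stand-in; the Bath group's own fitted models are not specified in the excerpt, and the parameters are the textbook "regular spiking" values, not theirs.

```python
# Izhikevich (2003) spiking-neuron model, integrated with a simple Euler step.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # textbook "regular spiking" parameters
v, u = -65.0, b * -65.0              # membrane potential (mV), recovery variable
dt, I = 0.5, 10.0                    # time step (ms), injected current

spike_times = []
for step in range(2000):             # simulate one second
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                      # spike: record it and reset,
        spike_times.append(step * dt)  # as the biological neuron would
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```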

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 4: Development of the Brain
Link ID: 26872 - Posted: 12.04.2019

By Robert Martone We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner. This study opens the door to extraordinary new means of human collaboration while, at the same time, blurring fundamental notions about individual identity and autonomy in disconcerting ways. Direct brain-to-brain communication has been a subject of intense interest for many years, driven by motives as diverse as futurist enthusiasm and military exigency. In his book Beyond Boundaries, one of the leaders in the field, Miguel Nicolelis, described the merging of human brain activity as the future of humanity, the next stage in our species’ evolution. (Nicolelis serves on Scientific American’s board of advisers.) He has already conducted a study in which he linked together the brains of several rats using complex implanted electrodes known as brain-to-brain interfaces. Nicolelis and his co-authors described this achievement as the first “organic computer” with living brains tethered together as if they were so many microprocessors. The animals in this network learned to synchronize the electrical activity of their nerve cells to the same extent as those in a single brain. The networked brains were tested for things such as their ability to discriminate between two different patterns of electrical stimuli, and they routinely outperformed individual animals. © 2019 Scientific American

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Language and Lateralization
Link ID: 26770 - Posted: 10.30.2019

By Kelly Servick CHICAGO, ILLINOIS—By harnessing the power of imagination, researchers have nearly doubled the speed at which completely paralyzed patients may be able to communicate with the outside world. People who are “locked in”—fully paralyzed by stroke or neurological disease—have trouble trying to communicate even a single sentence. Electrodes implanted in a part of the brain involved in motion have allowed some paralyzed patients to move a cursor and select onscreen letters with their thoughts. Users have typed up to 39 characters per minute, but that’s still about three times slower than natural handwriting. In the new experiments, a volunteer paralyzed from the neck down instead imagined moving his arm to write each letter of the alphabet. That brain activity helped train a computer model known as a neural network to interpret the commands, tracing the intended trajectory of his imagined pen tip to create letters (above). Eventually, the computer could read out the volunteer’s imagined sentences with roughly 95% accuracy at a speed of about 66 characters per minute, the team reported here this week at the annual meeting of the Society for Neuroscience. The researchers expect the speed to increase with more practice. As they refine the technology, they will also use their neural recordings to better understand how the brain plans and orchestrates fine motor movements. © 2019 American Association for the Advancement of Science.
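The decoding scheme described — a neural network that turns recorded brain activity into the trajectory of an imagined pen tip — might look roughly like the sketch below. This is a hypothetical stand-in: a small GRU regressing binned firing rates to pen-tip velocity, run on random data. The study's actual architecture, channel counts, and training procedure are not given in the report.

```python
import torch
import torch.nn as nn

N_CHANNELS = 192     # assumed number of recorded channels, not the study's

class PenTipDecoder(nn.Module):
    """Map binned neural activity to (vx, vy) pen-tip velocity per time bin."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)

    def forward(self, rates: torch.Tensor) -> torch.Tensor:
        hidden_states, _ = self.rnn(rates)    # rates: (batch, time, channels)
        return self.readout(hidden_states)    # velocities: (batch, time, 2)

decoder = PenTipDecoder()
rates = torch.randn(1, 100, N_CHANNELS)       # 100 bins of fake firing rates
velocity = decoder(rates)
trajectory = torch.cumsum(velocity, dim=1)    # integrate velocity -> pen path
print(trajectory.shape)                       # torch.Size([1, 100, 2])
```

Integrating the decoded velocities traces out letter shapes, which a separate step (not sketched here) would classify into characters.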

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 26745 - Posted: 10.24.2019

By James Gallagher Health and science correspondent A man has been able to move all four of his paralysed limbs with a mind-controlled exoskeleton suit, French researchers report. Thibault, 30, said taking his first steps in the suit felt like being the "first man on the Moon". His movements, particularly walking, are far from perfect and the robo-suit is being used only in the lab. But researchers say the approach could one day improve patients' quality of life. And he can control each of the arms, manoeuvring them in three-dimensional space. Thibault, who does not want his surname revealed, was an optician before he fell 15m in an incident at a nightclub four years ago. The injury to his spinal cord left him paralysed and he spent the next two years in hospital. But in 2017, he took part in the exoskeleton trial with Clinatec and the University of Grenoble. Initially he practised using the brain implants to control a virtual character, or avatar, in a computer game, then he moved on to walking in the suit. "It was like [being the] first man on the Moon. I didn't walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room," he said. It took a lot longer to learn how to control the arms. © 2019 BBC.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26670 - Posted: 10.04.2019

Cassandra Willyard Rob Summers was flat on his back at a rehabilitation institute in Kentucky when he realized he could wiggle his big toe. Up, down, up, down. This was new — something he hadn’t been able to do since a hit-and-run driver left him paralysed from the chest down. When that happened four years earlier, doctors had told him that he would never move his lower body again. Now he was part of a pioneering experiment to test the power of electrical stimulation in people with spinal-cord injuries. “Susie, look, I can wiggle my toe,” Summers said. Susan Harkema, a neurophysiologist at the University of Louisville in Kentucky, sat nearby, absorbed in the data on her computer. She was incredulous. Summers’s toe might be moving, but he was not in control. Of that she was sure. Still, she decided to humour him. She asked him to close his eyes and move his right toe up, then down, and then up. She moved on to the left toe. He performed perfectly. “Holy shit,” Harkema said. She was paying attention now. “How is that happening?” he asked. “I have no idea,” she replied. Summers had been a university baseball player with major-league ambitions before the vehicle that struck him snapped all the ligaments and tendons in his neck, allowing one of his vertebrae to pound the delicate nerve tissue it was meant to protect. Doctors classified the injury as complete; the motor connections to his legs had been wiped out. When Harkema and her colleagues implanted a strip of tiny electrodes in his spine in 2009, they weren’t trying to restore Summers’s ability to move on his own. Instead, the researchers were hoping to demonstrate that the spine contains all the circuitry necessary for the body to stand and to step. They reasoned that such an approach might allow people with spinal-cord injuries to stand and walk, using electrical stimulation to replace the signals that once came from the brain.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26471 - Posted: 07.31.2019

By Dom Vukovic Robotic skeletons may sound like something out of a science fiction movie but they are now being used to help people with severe spinal cord injuries take their first steps. The device, known as a Rex bionic exoskeleton, is one of only a few in the country, and researchers in a trial have named their prototype HELLEN. In a joint initiative between the University of Newcastle and the Australian Institute of Neuro-Rehabilitation, the robot is being used as a therapy device to see if it can help improve health and mobility outcomes in people with conditions including stroke, multiple sclerosis and now quadriplegia. Chief investigator Jodie Marquez said the trial was one of the first in the world to capture data about physiological and neurological changes that might occur in patients who undergo therapy while wearing the robotic suit. "We're seeing whether exercising in the exoskeleton device can improve both real measures of strength and spasticity, but also bigger measures such as mood and quality of life and function," Dr Marquez said. "I have no doubt that robotics will become a part of rehabilitation and a part of our lives in the future, I think that's unquestionable." Lifesaver Jess Collins is the first person with severe spinal injuries to participate in the trial. She had a near-fatal surfing accident while on holidays with friends in May last year, leaving her paralysed from the chest down. "I've hit the board and then the sandbank and then instantly I didn't have any movement or feeling and I wasn't sure where I was placed in the water … I was face down, which was horrific and I was conscious the entire time," she said. © 2019 ABC

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26460 - Posted: 07.29.2019

By: Karen Moxon, Ph.D., Ignacio Saez, Ph.D., and Jochen Ditterich, Ph.D. Technology that is sparking an entirely new field of neuroscience will soon let us simply think about something we want our computers to do and watch it instantaneously happen. In fact, some patients with severe neurological injury or disease are already reaping the benefits of initial advances by using their thoughts to signal and control robotic limbs. This brain-computer interface (BCI) idea is spawning a new area of neuroscience called cognitive neuroengineering that holds the promise of improving the quality of life for everyone on the planet in unimaginable ways. But the technology is not yet ready for prime time. There are three basic aspects of BCIs—recording, decoding, and operation—and progress will require refining all three. BCI works because brain activity generates a signal—typically an electrical field—that can be recorded through a dedicated device, which feeds it to a computer whose analysis software (i.e., a decoding algorithm) “translates” the signal to a simple command. This command signal operates a computer or other machine. The resulting operation can be as simple as moving a cursor on a screen, for which the command need contain just X and Y coordinates, or as complex as controlling a robotic arm, which requires information about position, orientation, speed, rotation, and more. Recent work from the University of Pittsburgh has shown that subjects with amyotrophic lateral sclerosis (ALS) can control a complex robot arm—having it pick up a pitcher and pour water into a glass—just by thinking about it. The downside is that it is necessary to surgically implant recording microelectrodes into the brain and that, most importantly, such electrodes are not reliable for more than a few years. © 2019 The Dana Foundation.
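The recording–decoding–operation split the authors lay out can be reduced to a toy cursor example: record a window of channel firing rates, decode it into an X–Y velocity command, and apply that command to the cursor. The sketch below assumes an ordinary least-squares decoder on simulated calibration data; real systems such as the Pittsburgh robot arm use far more elaborate decoders, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, N_TRAIN = 64, 500

# Calibration data: known cursor velocities and the firing rates they evoked
# (simulated here with a random linear map plus noise).
true_map = rng.normal(size=(N_CHANNELS, 2))
velocities = rng.normal(size=(N_TRAIN, 2))
rates = velocities @ true_map.T + 0.1 * rng.normal(size=(N_TRAIN, N_CHANNELS))

# Decoding stage: fit rates -> (vx, vy) with least squares.
decoder, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

# Operation stage: each newly recorded window nudges the cursor.
cursor = np.zeros(2)
for _ in range(10):
    window = rng.normal(size=N_CHANNELS)   # one recorded window of rates
    cursor += window @ decoder             # command contains just X and Y
print(cursor)
```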

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 26306 - Posted: 06.06.2019

Sandeep Ravindran In 2012, computer scientist Dharmendra Modha used a powerful supercomputer to simulate the activity of more than 500 billion neurons—more, even, than the 85 billion or so neurons in the human brain. It was the culmination of almost a decade of work, as Modha progressed from simulating the brains of rodents and cats to something on the scale of humans. The simulation consumed enormous computational resources—1.5 million processors and 1.5 petabytes (1.5 million gigabytes) of memory—and was still agonizingly slow, 1,500 times slower than the brain computes. Modha estimates that to run it in biological real time would have required 12 gigawatts of power, about six times the maximum output capacity of the Hoover Dam. “And yet, it was just a cartoon of what the brain does,” says Modha, chief scientist for brain-inspired computing at IBM Almaden Research Center in northern California. The simulation came nowhere close to replicating the functionality of the human brain, which uses about the same amount of power as a 20-watt lightbulb. Since the early 2000s, improved hardware and advances in experimental and theoretical neuroscience have enabled researchers to create ever larger and more-detailed models of the brain. But the more complex these simulations get, the more they run into the limitations of conventional computer hardware, as illustrated by Modha’s power-hungry model. © 1986–2019 The Scientist

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 20: ; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26269 - Posted: 05.28.2019

Siobhan Roberts In May 2013, the mathematician Carina Curto attended a workshop in Arlington, Virginia, on “Physical and Mathematical Principles of Brain Structure and Function” — a brainstorming session about the brain, essentially. The month before, President Obama had issued one of his “Grand Challenges” to the scientific community in announcing the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), aimed at spurring a long-overdue revolution in understanding our three-pound organ upstairs. In advance of the workshop, the hundred or so attendees each contributed to a white paper addressing the question of what they felt was the most significant obstacle to progress in brain science. Answers ran the gamut — some probed more generally, citing the brain’s “utter complexity,” while others delved into details about the experimental technology. Curto, an associate professor at Pennsylvania State University, took a different approach in her entry, offering an overview of the mathematical and theoretical technology: A major obstacle impeding progress in brain science is the lack of beautiful models. Let me explain. … Many will agree that the existing (and impending) deluge of data in neuroscience needs to be accompanied by advances in computational and theoretical approaches — for how else are we to “make sense” of these data? What such advances should look like, however, is very much up to debate. … How much detail should we be including in our models? … How well can we defend the biological realism of our theories? All Rights Reserved © 2018

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 25108 - Posted: 06.20.2018

By Robert F. Service Prosthetics may soon take on a whole new feel. That’s because researchers have created a new type of artificial nerve that can sense touch, process information, and communicate with other nerves much like those in our own bodies do. Future versions could add sensors to track changes in texture, position, and different types of pressure, leading to potentially dramatic improvements in how people with artificial limbs—and someday robots—sense and interact with their environments. “It’s a pretty nice advance,” says Robert Shepherd, an organic electronics expert at Cornell University. Not only are the soft, flexible, organic materials used to make the artificial nerve ideal for integrating with pliable human tissue, but they are also relatively cheap to manufacture in large arrays, Shepherd says. Modern prosthetics are already impressive: Some allow amputees to control arm movement with just their thoughts; others have pressure sensors in the fingertips that help wearers control their grip without the need to constantly monitor progress with their eyes. But our natural sense of touch is far more complex, integrating thousands of sensors that track different types of pressure, such as soft and forceful touch, along with the ability to sense heat and changes in position. This vast amount of information is ferried by a network that passes signals through local clusters of nerves to the spinal cord and ultimately the brain. Only when the signals combine to become strong enough do they make it up the next link in the chain. © 2018 American Association for the Advancement of Science.

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 25048 - Posted: 06.01.2018

By Matthew Hutson As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing? Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated that we might expect an intelligent machine to suffer some of the same mental problems people do. Q: Why do you think AIs might get depressed and hallucinate? A: I’m drawing on the field of computational psychiatry, which assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients? Q: Might the mechanism be the same as it is in humans? A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong. © 2018 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 24843 - Posted: 04.10.2018

BCIs have deep roots. In the 18th century Luigi Galvani discovered the role of electricity in nerve activity when he found that applying voltage could cause a dead frog’s legs to twitch. In the 1920s Hans Berger used electroencephalography to record human brain waves. In the 1960s José Delgado theatrically used a brain implant to stop a charging bull in its tracks. One of the field’s father figures is still hard at work in the lab. Eberhard Fetz was a post-doctoral researcher at the University of Washington in Seattle when he decided to test whether a monkey could control the needle of a meter using only its mind. A paper based on that research, published in 1969, showed that it could. Dr Fetz tracked down the movement of the needle to the firing rate of a single neuron in the monkey’s brain. The animal learned to control the activity of that single cell within two minutes, and was also able to switch to control a different neuron. Dr Fetz disclaims any great insights in setting up the experiment. “I was just curious, and did not make the association with potential uses of robotic arms or the like,” he says. But the effect of his paper was profound. It showed both that volitional control of a BCI was possible, and that the brain was capable of learning how to operate one without any help. Some 48 years later, Dr Fetz is still at the University of Washington, still fizzing with energy and still enthralled by the brain’s plasticity. He is particularly interested in the possibility of artificially strengthening connections between cells, and perhaps forging entirely new ones.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 24839 - Posted: 04.09.2018

Sara Reardon Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. That achievement, described in Science Advances on 26 January, is a key benchmark in the development of advanced computing devices designed to mimic biological systems. And it could open the door to more natural machine-learning software, although many hurdles remain before it could be used commercially. Artificial intelligence software has increasingly begun to imitate the brain. Algorithms such as Google’s automatic image-classification and language-learning programs use networks of artificial neurons to perform complex tasks. But because conventional computer hardware was not designed to run brain-like algorithms, these machine-learning tasks require orders of magnitude more computing power than the human brain does. “There must be a better way to do this, because nature has figured out a better way to do this,” says Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study. NIST is one of a handful of groups trying to develop ‘neuromorphic’ hardware that mimics the human brain in the hope that it will run brain-like software more efficiently. In conventional electronic systems, transistors process information at regular intervals and in precise amounts — either 1 or 0 bits. But neuromorphic devices can accumulate small amounts of information from multiple sources, alter it to produce a different type of signal and fire a burst of electricity only when needed — just as biological neurons do. As a result, neuromorphic devices require less energy to run. © 2018 Macmillan Publishers Limited
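The accumulate-then-fire behaviour contrasted here with clocked transistors is captured by the classic leaky integrate-and-fire neuron. The sketch below is a generic textbook model with illustrative parameters, not NIST's superconducting implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, v_rest = 20.0, -70.0          # membrane time constant (ms), rest (mV)
v_thresh, v_reset = -54.0, -70.0   # spike threshold and post-spike reset (mV)
dt, v = 1.0, v_rest

for t in range(200):               # simulate 200 ms
    drive = rng.poisson(lam=2.0) * 1.5        # summed input from many sources
    v += dt * (v_rest - v) / tau + drive      # leak toward rest, accumulate
    if v >= v_thresh:                         # fire only when input suffices,
        print(f"spike at t={t} ms")           # rather than on a clock tick
        v = v_reset
```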

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 24579 - Posted: 01.27.2018