Two years ago, Robert “Buz” Chmielewski underwent a 10-hour operation, during which surgeons implanted six electrodes in his brain. Within a few months of the operation, Chmielewski could control two robotic arms using only his thoughts, transmitted through a brain-computer interface (BCI). Recently, researchers at Johns Hopkins Medicine and the Johns Hopkins University Applied Physics Laboratory announced that Chmielewski can now coordinate large and small movements between the arms, enabling him to perform simple tasks, such as eating with a knife and fork.

More than 30 years ago, when Chmielewski was in his teens, a surfing accident left him mostly paralyzed with minimal ability to use his hands and arms. He received the electrode surgery and subsequent training in controlling the prosthetic arms as part of an ongoing clinical trial sponsored by the Defense Advanced Research Projects Agency (DARPA).

After Chmielewski’s initial success, the Johns Hopkins teams intensified their efforts to refine neurally integrated motion control while simultaneously providing sensory feedback from both prosthetic hands. The recent announcement marks Chmielewski’s and the teams’ achievement of the first half of that goal.

The neural interface receives signals from both sides of the brain via the electrodes. A computer running AI software identifies the intended movements, then transmits the appropriate signals to the robotic arms. Previous trials achieved thought-driven control of a single arm; the Hopkins teams have broken new ground with technology that lets Chmielewski coordinate the actions of two prosthetics at the same time.
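The published reports do not describe the decoding software in detail, but the general shape of such a bimanual control loop can be sketched briefly. The snippet below is purely illustrative: the feature dimensions, the linear decoder, and the RoboticArm class are hypothetical stand-ins, not the Johns Hopkins implementation.

```python
import numpy as np

class RoboticArm:
    """Hypothetical stand-in for a prosthetic-arm controller."""
    def __init__(self, side):
        self.side = side

    def send_velocity(self, velocity):
        # In a real system this would dispatch a motion command
        # to the arm's low-level controller.
        print(f"{self.side} arm velocity command: {np.round(velocity, 3)}")

def decode_intent(features, weights):
    """Map a vector of neural features to an intended 3-D hand velocity.

    A simple linear decoder is used here only for illustration; real
    systems typically rely on calibrated statistical or neural-network models.
    """
    return weights @ features

# Hypothetical decoder weights, one set per hemisphere / arm
# (3 velocity outputs from 96 neural features each).
rng = np.random.default_rng(0)
weights = {"left": rng.normal(size=(3, 96)), "right": rng.normal(size=(3, 96))}
arms = {"left": RoboticArm("left"), "right": RoboticArm("right")}

def control_step(neural_features):
    """One cycle of the loop: decode each stream of neural features and
    command the corresponding arm, so both arms move in the same cycle."""
    for side, features in neural_features.items():
        velocity = decode_intent(features, weights[side])
        arms[side].send_velocity(velocity)

# Simulated feature vectors (e.g., activity recorded from each electrode array).
control_step({"left": rng.normal(size=96), "right": rng.normal(size=96)})
```

The key idea the sketch captures is that signals from both recording sites are decoded and dispatched within the same control cycle, which is what allows two arms to be coordinated rather than driven one at a time.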

Currently, Chmielewski needs time and concentration to perform even very simple tasks, such as cutting and feeding himself bites of sponge cake. That may change with practice and further refinement of the technology, as the researchers focus on expanding the number and types of everyday tasks the system can handle. Ongoing work toward robust sensory feedback from the prosthetic hands may also help Chmielewski make his movements quicker and more precise.

Ultimately, the teams hope to create an advanced sensory feedback loop that would allow users to “feel” the position of their robotic arms and hands, enabling them to perform tasks like tying shoelaces without looking. That level of capability would significantly transform the daily lives of people living with upper limb paralysis.