November 26, 2020

A new way to plug a human brain into a computer: Via veins



The hard part of connecting a gooey, thinking brain to a cold, one-ing and zero-ing computer is getting information through your thick skull—or mine, or anyone’s. The whole point of a skull, after all, is keeping a brain safely separate from [waves hands at everything].

So if that brain isn’t yours, the only way to tell what’s going on inside it is inference. People make very educated guesses based on what that brain tells a body to do—like, if the body makes some noises that you can understand (that’s speech) or moves around in a recognizable way. That’s a problem for people trying to understand how the brain works, and an even bigger problem for people who, because of injury or illness, can’t move or speak. Sophisticated imaging technologies like functional magnetic resonance imaging can give you some clues. But it’d be great to have something more direct. For decades, technologists have been trying to get brains to interface with computer keyboards or robot arms, to get meat to commune with silicon.

On Wednesday, a team of scientists and engineers showed results from a promising new approach. It involves mounting electrodes on an expandable, springy tube called a stent and threading it through a blood vessel that leads to the brain. In tests on two people, the researchers literally went for the jugular, running a stent-tipped wire up that vein in the throat and then into a vessel near the brain’s primary motor cortex, where they popped the spring. The electrodes snuggled into the vessel wall and started sensing when the people’s brains signaled their intention to move—and sent those signals wirelessly to a computer, via an infrared transmitter surgically inserted in the subjects’ chests. In an article published in the Journal of NeuroInterventional Surgery, the Australian and US researchers describe how two people with paralysis due to amyotrophic lateral sclerosis (better known as Lou Gehrig’s disease) used such a device to send texts and fool around online by brain control alone.

“Self-expanding stent technology has been well demonstrated in both cardiac and neurological applications to treat other diseases. We just use that feature and put electrodes on top of the stent,” says Thomas Oxley, an interventional neurologist and CEO of Synchron, the company hoping to commercialize the technology. “It’s fully implantable. Patients go home in a couple of days. And it’s plug-and-play.”

It took training once the subjects got home. The electrode-studded stent could pick up signals from the brain, but machine-learning algorithms had to figure out what those signals—imperfect reflections of a mind at work even under ideal conditions—actually represented. But after a few weeks of work, both patients could use an eye tracker to move a cursor and then click with a thought, using the implant. It doesn’t sound like much, but that was enough for both of them to send text messages, shop online, and otherwise perform activities of digital daily life.
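The paper doesn’t spell out the decoding pipeline, but the core problem—turning a noisy neural recording into a binary “click”—can be sketched with a toy threshold detector. Everything below (window sizes, the threshold rule, the simulated signal) is an illustrative assumption, not Synchron’s algorithm:

```python
# Toy sketch of binary intent decoding -- NOT Synchron's actual method.
# Idea: estimate signal power in a sliding window, calibrate a threshold
# from resting-state data, and call anything above it a "click."

import math
import random

def sliding_power(samples, window=50):
    """Mean squared amplitude over each sliding window (a crude power estimate)."""
    return [sum(x * x for x in samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def calibrate_threshold(rest_powers, k=3.0):
    """Threshold = resting mean power plus k standard deviations."""
    mean = sum(rest_powers) / len(rest_powers)
    var = sum((p - mean) ** 2 for p in rest_powers) / len(rest_powers)
    return mean + k * math.sqrt(var)

def detect_clicks(powers, threshold):
    """1 = 'click' intent, 0 = no click."""
    return [1 if p > threshold else 0 for p in powers]

# Simulated recording: unit-variance noise at rest, plus a high-amplitude
# burst during an attempted movement (samples 200-299).
random.seed(0)
rest = [random.gauss(0, 1) for _ in range(500)]
attempt = rest[:200] + [random.gauss(0, 5) for _ in range(100)] + rest[300:]

threshold = calibrate_threshold(sliding_power(rest))
decoded = detect_clicks(sliding_power(attempt), threshold)
print(any(decoded[150:300]))  # the burst should cross the threshold
```

The weeks of training the patients went through correspond, loosely, to the calibration step here: the system has to learn what each person’s “rest” and “attempted movement” look like before a threshold—or anything fancier—can separate them.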

The Food and Drug Administration hasn’t approved what Oxley calls a “stentrode” for widespread use yet, and the company is still chasing funding for more tests, but these preliminary results suggest that it’s a functioning brain-computer interface. The signal it receives isn’t packed full of information. For now, all the stentrode is picking up is one bit of information—either a telepathic mouse-click or the absence of that click. But for some applications, maybe that’s enough. “There’s been a lot of talk about data and channels, and really what should matter is, have you delivered a life-changing product to the patient?” Oxley says. “Just with a handful of outputs restored to the patient that they’re in control of, we’ve got them controlling Windows 10.”

Much more ambitious brain-computer interfaces and neural prosthetics have been in the news lately. Last month, Elon Musk’s company Neuralink demonstrated a wireless BCI with more than a thousand flexible electrodes, designed to be inserted directly into a brain by a specialized robot surgeon. (The company has so far only shown short-term use in pigs.) Inserting electrodes is tricky; while it’s true that brain surgery isn’t exactly rocket science, it has risks whether the surgeon is a robot or not. Even flexible, thin electrodes like those that Neuralink demonstrated are invasive enough that the brain tries to defend against them, coating them with glial cells that reduce their ability to conduct the electrical impulses they’re looking for. And while implanted electrodes like those of the more commonly used “Utah array” can get clear signals from individual neurons, understanding what those signals mean is still science in progress. Plus, the brain sloshes around like jelly in a donut; fixed-in-place electrodes can damage it. But get it right and they can do more than brain research. “Locked-in” patients with ALS have used them as successful brain-computer interfaces, though they require training, maintenance, surgery, and so on.

Meanwhile, electrodes placed directly onto the scalp can pick up brain waves—electroencephalograms, or EEGs—but those lack the spatial detail of implanted electrodes. Neuroscientists know, very roughly, which part of the brain does what, but the more you know about which neurons are firing, the better you can tell what they’re firing about.

A more recent innovation, electrocorticography, places a mesh of electrodes directly onto the surface of the brain. In combination with smart spectral processing of the signals those electrodes pick up, ECoG is good enough to translate action in the part of the motor cortex that controls the lips, jaw, and tongue into text or even speech. And there are other approaches. CTRL-labs, which Facebook bought for perhaps as much as $1 billion in 2019, tries to get motor signals from neurons in the wrist. Kernel uses functional near-infrared spectroscopy on the head to sense brain activity.
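As a rough illustration of what “spectral processing” means in this context: BCI decoders commonly work from band-power features—how much signal energy sits in particular frequency ranges. The sketch below uses a naive DFT and illustrative numbers (a 90 Hz tone, a 1 kHz sampling rate, the 70–110 Hz “high gamma” band often examined in ECoG studies); it is a generic signal-processing example, not the method of any system described above:

```python
# Generic band-power illustration, not any specific BCI's pipeline.
import cmath
import math

def dft_power(samples, fs):
    """(frequency, power) pairs from a naive DFT of a real signal."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        spectrum.append((k * fs / n, abs(coeff) ** 2 / n))
    return spectrum

def band_power(samples, fs, lo, hi):
    """Total spectral power between lo and hi Hz."""
    return sum(p for f, p in dft_power(samples, fs) if lo <= f <= hi)

# A pure 90 Hz tone sampled at 1 kHz: its energy lands in the 70-110 Hz
# band, not in the 10-30 Hz band.
fs = 1000
sig = [math.sin(2 * math.pi * 90 * t / fs) for t in range(200)]
print(band_power(sig, fs, 70, 110) > band_power(sig, fs, 10, 30))
```

Real decoders compute features like these over many electrodes and short time windows, then feed them to a classifier—which is what lets ECoG systems map motor-cortex activity to lips, jaw, and tongue movements.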

Oxley and his colleagues’ stentrode, if it keeps showing good results, will fit somewhere along the spectrum between implanted electrodes and EEG. Closer to the first thing than the second, its inventors hope. But it’s still early days. “The core technology and the core idea is super cool, but given where they’re accessing the signals from, my expectation would be that this is a relatively low-fidelity signal relative to other brain-machine interface strategies,” says Vikash Gilja, who runs the Translational Neural Engineering Lab at UC San Diego. “We at least know that high-density ECoG recording from the surface of the brain can convey information beyond what is being shown in this paper.”

A possible problem: Tissue conducts electrical impulses, but the electrodes in the stent are picking up signals from the brain through the cells of the blood vessel. That lowers signal content. “If we were to take those cortical surface recordings and compare them to Utah array experiments—the bulk of clinical experience with implanted electrodes—I would say the style of recording in ECoG is a rate limiter,” Gilja says. (Just for transparency, I should point out that Gilja has done for-pay work with BCI companies including Neuralink, with whom Synchron could theoretically compete someday.)

So it might not be good enough for neuroscience, but it could be plenty useful for a person with paralysis who wants a low-maintenance BCI that doesn’t require drilling through the skull. “There’s a trade-off between how invasive you want to be and at what level you collect information,” says Andrew Pruszynski, a neuroscientist at Western University in Canada. “This is trying to get to the middle ground, to insert a catheter close to the neural activity. It’s obviously invasive, but certainly not as invasive as putting electrodes into the brain.”

And there’s more work to come. Oxley’s team hopes to expand their study to more human subjects. They’ll be looking for possible side effects, like the chance that the stent could contribute to strokes (though this seems less likely as it embeds in the vessel walls, a process called endothelialization). They might find better locations for the stent, in blood vessels adjacent to other brain areas of interest; any brain region within 2 millimeters of a vessel big enough to accommodate the stentrode is fair game, Oxley says. The software could stand some improvement, in terms of figuring out what the brain actually means when it emits its electrical bells and whistles, and some of their tests suggest the system could pick up more informational detail—like which specific muscle the users were trying to contract. That could lead to more useful prosthetics or control of devices beyond Windows 10. “The motor system, right now, is what’s going to deliver therapy for people who are paralyzed,” Oxley says. “But when we start to engage with other areas of the brain, you begin to see how the technology is going to open up brain processing power.” It’s hard to predict what might happen when scientists actually figure out how to get inside someone’s head.

This story originally appeared on wired.com.
