
The Brain That Learned to Type Again

Key Takeaways

  • New research introduces a brain-computer interface typing system for people with paralysis, decoding intended movements from the motor cortex.
  • Participants typed using a QWERTY layout without physical finger movements, achieving speeds of up to 110 characters per minute.
  • The system demonstrated high accuracy and usability, allowing participants to operate it from home without frequent recalibration.
  • This advancement may lead to restoring more complex hand functions, not just typing capabilities.
  • Current limitations include the need for periodic recalibration and inapplicability for those with no motor cortex activity related to finger movements.

Something fires in the motor cortex. Not a hand movement, exactly. More like the ghost of one: a neural pattern that, in another life, would have extended a finger toward the letter Q. The hand stays still. The paralysis is complete. But the pattern is there, bright and readable, rippling through hundreds of electrodes in the precentral gyrus. And somewhere in the signal processing chain that follows, a letter appears on a screen.

This is typing, of a kind. Slower than you’d manage on a phone, at first. But getting faster.

A new study from Massachusetts General Brigham and Brown University describes what may be the most effective implantable typing system for people with paralysis yet tested. Two participants in the BrainGate2 clinical trial, one with ALS and one with a cervical spinal cord injury, used an intracortical brain-computer interface to type using nothing but intended finger movements. One of them reached 110 characters per minute. That’s 22 words per minute, with accuracy on par with able-bodied typing, and it’s faster than any previous hand-motor BCI system. The results appear today in Nature Neuroscience.

The system is, at its core, a familiar one. A QWERTY keyboard. Thirty possible movements, one per key (with space and a few punctuation marks), mapped to extensions and flexions of all ten fingers. Upper row: extend upward. Middle row: flex straight down. Bottom row: curl into the palm. What’s different is that none of this requires actual movement. The participants attempted the movements without being able to perform them. The electrodes picked up the motor cortex activity that would, in an uninjured brain, have commanded the muscles to act. A recurrent neural network decoded these signals in real time, feeding predictions into a language model that polished the output into coherent sentences.
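The 30-way mapping described above can be pictured as a simple lookup table from (finger, attempted movement) to character. The sketch below is illustrative only: the row strings follow a standard QWERTY layout, and the finger-to-column assignment is an assumption, not a detail from the paper.

```python
# Illustrative sketch of a 30-way mapping: ten fingers, each with three
# attempted movements (extend, flex, curl), one movement per keyboard row.
# Finger-to-column assignment here is hypothetical, not from the paper.
FINGERS = ["L5", "L4", "L3", "L2", "L1", "R1", "R2", "R3", "R4", "R5"]
ROWS = {
    "extend": "qwertyuiop",   # upper row: extend the finger upward
    "flex":   "asdfghjkl;",   # middle row: flex straight down
    "curl":   "zxcvbnm,. ",   # bottom row: curl into the palm
}

# Build the (finger, movement) -> character table: 10 fingers x 3 rows = 30.
KEYMAP = {
    (finger, movement): row[i]
    for movement, row in ROWS.items()
    for i, finger in enumerate(FINGERS)
}

# A decoded sequence of attempted movements becomes text directly.
intended = [("L4", "flex"), ("R2", "extend"), ("L3", "flex")]
print("".join(KEYMAP[m] for m in intended))  # -> "sud"
```

In the actual system, of course, the hard part is the decoding step that produces the (finger, movement) labels from neural activity; the table itself is the easy part.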

It works, it turns out, rather well. Which might seem obvious in retrospect, but wasn’t.

The challenge with any BCI that tries to decode finger movements is that adjacent fingers tend to produce overlapping neural signals. Confusion is inevitable at the single-character level. What the team found, though, is that QWERTY-specific errors are particularly forgiving. When a handwriting-based BCI makes mistakes, it tends to confuse visually similar letters, such as a, e, o and u, or r and n. Those letters are often interchangeable in English, so the language model struggles to correct them. Adjacent QWERTY keys, by contrast, rarely produce interchangeable words. Hitting n instead of m loses you a word, but the language model can see that and fix it. The error patterns and the language patterns, in a sense, cancel each other out.

The two participants, known in the paper as T17 and T18, represent quite different clinical situations. T17 is a 33-year-old man with ALS who has lost voluntary control of essentially everything except his eye muscles; he is ventilator-dependent and has anarthria, meaning he has lost the ability to produce speech sounds. T18 is a 48-year-old man with a complete cervical spinal cord injury, with no residual voluntary movement below the C4 level, though he doesn’t require a ventilator. Both had Utah microelectrode arrays implanted in the precentral gyrus. T18 had six arrays spanning both hemispheres; T17 had six arrays in the left hemisphere only.

This bilateral representation mattered. For T18, having electrodes in both left and right precentral cortex meant the system could cleanly decode movements for both hands without the two sides confusing each other. But even T17, with only left-hemisphere coverage, managed surprisingly clean decoding for both hands. The left motor cortex, it turns out, carries substantial information about movements on the same side of the body as well as the opposite side, a finding that has practical consequences for future device design.

Daniel Rubin, senior author and critical care neurologist at Mass General Brigham, describes the situation the device is trying to address: “For many people with paralysis, when losing use of both the hands and the muscles of speech, communication can become difficult or impossible. Often, people with severe speech and motor impairments end up relying on things like eye-gaze technology… Those systems take far too long for many users.” Eye-gaze trackers require the user to dwell on each letter individually, a laborious process that can produce perhaps a few words per minute on a good day. Some people abandon them altogether.

Both participants used the typing BCI from their own homes, not in a clinical setting, which is arguably as significant a detail as the speed numbers. A device that requires a hospital visit isn’t a communication tool; it’s a demonstration. The fact that T18’s neural decoder remained usable for about seven days without recalibration, and that new users can calibrate the whole system with as few as 30 practice sentences, suggests something closer to a daily-use tool might be within reach.

T18’s top speed of 110 characters per minute is about 81% of the average smartphone typing speed for his age group. Perhaps more striking: his accuracy didn’t decline as he sped up. The system was calibrated with a relatively simple procedure and then handed over; the participant regulated his own pace. There is no fixed decoding window, no beep-and-select rhythm. The neural network runs continuously, predicting characters as movement intentions occur, and ignores the gaps.

Justin Jude, the postdoctoral researcher who led the study, notes that decoding fine finger movements has implications well beyond communication. “Decoding these finger movements is also a big step toward being able to restore complex reach and grasp movements for people with upper extremity paralysis,” he says. The algorithms developed here for typing could, in principle, underpin future systems that restore functional hand control rather than just text output. “Our BCI is a great example of how modern neuroscience and artificial intelligence technology can combine to create something capable of restoring communication and independence for people with paralysis.”

The current system still needs work. The decoder drifts as neural signals change over days and weeks, requiring periodic recalibration. Neither T17 nor T18 was a touch typist before his injury or diagnosis, which means they’ve been learning the system’s spatial mapping from scratch, a somewhat unusual constraint. Future improvements might include personalised keyboard layouts, stenography shortcuts, or wrist gestures that switch between character sets. There are also people for whom no finger movement imagery is possible at all, a population this approach cannot reach.

But for two people who had, between them, lost the use of their hands and in one case also their voice, the numbers matter. A sentence on a screen. Words per minute, climbing. The motor cortex, still firing.

DOI / Source: https://doi.org/10.1038/s41593-026-02218-y


Frequently Asked Questions

How does this brain-computer interface typing system actually work?

Microelectrode arrays implanted in the motor cortex record the brain’s electrical activity when a person attempts to move their fingers, even if those movements can’t physically occur due to paralysis. These signals are decoded by a neural network that maps them to keys on a QWERTY keyboard, where each finger has three possible movements corresponding to the three rows of keys. A language model then refines the output into coherent words and sentences in real time.

How fast can someone type with this system?

The faster of the two study participants reached 110 characters per minute, equivalent to 22 words per minute, with a word error rate of 1.6%. That’s comparable to able-bodied typing accuracy and faster than any previously reported hand-motor brain-computer interface. The slower participant reached around 47 characters per minute, still substantially faster than eye-gaze spelling systems that many people with severe paralysis currently rely on.

Does the user need to be in a hospital to use this device?

No. Both participants in the study used the system from their own homes. The decoder can be calibrated with as few as 30 practice sentences, and in the better-performing participant, the system remained usable for about seven days without recalibration. The researchers see at-home use as central to the device’s practical value.

Could this technology eventually restore hand movement, not just typing?

Potentially, yes. The same neural decoding approach that identifies which key a person intends to press by reading motor cortex signals could in principle be extended to decode more complex hand movements. The researchers note that demonstrating clean decoding of all ten fingers’ individual movements is a step toward future systems that might restore functional reach and grasp rather than just typed communication.

What are the main limitations of the current system?

The decoder gradually drifts as the brain’s neural signals change over time, requiring periodic recalibration. The system is currently only suited for people who retain some motor cortex activity related to finger movements, excluding some individuals with the most severe forms of paralysis. Neither study participant was an experienced touch typist before their injury or diagnosis, so both were learning an unfamiliar spatial mapping during the study, which likely slowed initial performance compared to what experienced typists might achieve.



