Meta has a brain-typing AI, but it’s stuck in the lab

In 2017, Facebook revealed plans for a brain-reading hat that you could wear to send text messages just by thinking. “We’re working on a system that will let you type directly from your brain,” CEO Mark Zuckerberg shared in a blog post that year.

Now the company, renamed Meta, has actually done it. Only it weighs half a ton, costs $2 million, and will never leave the lab.

Still, it’s pretty cool that neuroscience and AI researchers working for Meta have been able to scan people’s brains while they type and determine which keys they’re pressing, just from their thoughts.

The research, described in two papers published by the company (here and here) as well as a blog post, is particularly impressive because the subjects’ thoughts were measured from outside their skulls using a magnetic scanner and then processed using a deep neural network.

“As we’ve seen time and time again, deep neural networks can reveal remarkable insights when combined with robust data,” says Sumner Norman, founder of Forest Neurotech, who was not involved in the research but credits Meta for going to “extensive lengths to collect high-quality data.”

According to Jean-Rémi King, leader of Meta’s “Brain and AI” research team, the system can determine which letter a skilled typist pressed up to 80 percent of the time, a high enough accuracy to reconstruct entire sentences from brain signals.

Facebook’s original plan for a consumer cap or headband that could read brains ran into technical hurdles, and after four years the company abandoned the idea.

But Meta has never stopped supporting basic neuroscience research, which it now sees as an important path toward more powerful AIs that learn and reason like humans.

King says his Paris-based group is specifically tasked with figuring out “the principles of intelligence” in the human brain. “Trying to understand the precise architecture or principles of the human brain could be a way to inform the development of machine intelligence,” King says. “That’s the way forward.”

The typing system is definitely not a commercial product, nor is it on its way to becoming one. The magnetoencephalography (MEG) scanner used in the new research collects magnetic signals produced in the cortex as brain neurons fire.

But it is large and expensive, and must be operated in a shielded room, since the Earth’s magnetic field is a trillion times stronger than that of your brain.

Norman likens the device to “an MRI machine tilted sideways and suspended above the user’s head.” Furthermore, adds King, the second a subject’s head moves, the signal is lost. “Our effort is not product-oriented at all,” he says. “In fact, my message is always that I don’t think there is a path to product because it’s too hard.”

The typing project was conducted with 35 volunteers at a research center in Spain, the Basque Center on Cognition, Brain, and Language. Each spent about 20 hours inside the scanner typing phrases like “el procesador ejecuta la instrucción” (the processor executes the instruction) while their brain signals were fed into a deep-learning system that Meta is calling Brain2Qwerty, in a reference to the layout of letters on a keyboard.

The job of this deep-learning system is to figure out which brain signals mean someone is typing an a, which mean a z, and so on. Eventually, after watching an individual volunteer type several thousand characters, the model can guess which keys that volunteer was actually pressing.
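To make the idea concrete, here is a toy sketch of that kind of signal-to-key mapping. Everything in it is invented for illustration: Meta's actual Brain2Qwerty model is a deep neural network trained on real MEG recordings, whereas this uses synthetic "sensor snapshots" and a simple nearest-centroid rule.

```python
# Toy decoder: learn one average sensor "signature" per key from labeled
# snapshots, then classify new snapshots by the nearest signature.
# All data here is synthetic; feature size and keys are arbitrary choices.
import random

KEYS = ["a", "z"]
FEATURES = 8  # pretend each keystroke yields an 8-number sensor snapshot

def make_sample(key, rng):
    # Give each key its own fake baseline signature, plus Gaussian noise.
    base = 1.0 if key == "a" else -1.0
    return [base + rng.gauss(0, 0.5) for _ in range(FEATURES)]

def train(samples):
    # Average the training snapshots for each key: one centroid per key.
    centroids = {}
    for key in KEYS:
        rows = [s for k, s in samples if k == key]
        centroids[key] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def decode(centroids, snapshot):
    # Guess the key whose centroid is closest (squared distance) to the snapshot.
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, snapshot))
    return min(KEYS, key=lambda k: dist(centroids[k]))

rng = random.Random(0)
train_set = [(k, make_sample(k, rng)) for k in KEYS for _ in range(200)]
model = train(train_set)
hits = sum(decode(model, make_sample(k, rng)) == k for k in KEYS for _ in range(50))
accuracy = hits / 100
```

The point of the sketch is the training loop the article describes: the system only becomes accurate after seeing many labeled keystrokes from one person, because each brain (here, each synthetic signature) has to be learned individually.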

In the first preprint, the Meta researchers report that the average error rate was about 32 percent, or roughly one wrong letter in every three.

Still, according to Meta, their results are the most accurate yet for brain typing using a full alphabetical keyboard and signals collected outside the skull. Research on brain scanning has advanced rapidly, though the most effective approaches use electrodes implanted in the brain, or directly on its surface. These are known as “invasive” brain-computer interfaces. Although they require brain surgery, they can collect electrical information from small groups of neurons with great precision.

In 2023, for example, a person who lost his voice to ALS was able to speak through brain-reading software connected to a speech synthesizer. Neuralink, founded by Elon Musk, is testing its own brain implant that gives paralyzed people control over a cursor.

Meta says its own efforts remain focused on basic research into the nature of intelligence.

That’s where the large magnetic scanner could help. While it’s not practical for patients and doesn’t measure individual neurons, it can look at the entire brain, broadly, all at once. Meta scientists say that in a second research effort using the same typing data, they have used this broader view to gather evidence that the brain produces language information from the top down, with an initial signal for a sentence initiating separate signals for words, syllables and, finally, typed letters.

“The central claim is that the brain structures language production hierarchically,” Norman says.

That’s not a new idea, but the Meta report highlights “how these different levels interact as a system,” Norman says. These kinds of insights could eventually shape the design of artificial intelligence systems. Some of them, like chatbots, already rely heavily on language to process information and reason, just as people do.

“Language has become a foundation of AI,” King says. “So the computational principles that enable the brain, or any system, to acquire that ability are a key motivation behind this work.”

(Source: MIT Technology Review)