Business

Meta Is Building AI That Reads Brainwaves

Researchers at Meta, the parent company of Facebook, are working on a new way to understand what’s happening in people’s minds. On August 31, the company announced that research scientists in its AI lab have developed AI that can “hear” what someone’s hearing, by studying their brainwaves.

While the research is still in very early stages, it’s intended to be a building block for tech that could help people with traumatic brain injuries who can’t communicate by talking or typing. The researchers are also trying to capture brain activity without relying on electrodes implanted in the brain, an invasive technique that requires surgery.

The Meta AI study looked at 169 healthy adult participants who heard stories and sentences read aloud, as scientists recorded their brain activity with various devices (think: electrodes stuck on participants’ heads).

In the hope of discovering patterns, the researchers fed this data into an AI model. They wanted the algorithm to “hear,” or determine, what participants were listening to, based on the electrical and magnetic activity in their brains.

TIME spoke with Jean Remi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) Lab, about the study’s goals, challenges, and ethical implications. The research has not yet been peer-reviewed.

The following interview was edited and condensed for clarity.

TIME: In layman’s terms, can you explain what your team set out to do with this research and what was accomplished?

Jean Remi King: A variety of conditions can lead to traumatic brain injury or anoxia [an oxygen deficiency] that leaves patients basically unable to communicate. Over the years, brain-computer interfaces have been identified as one possible route to help such patients. By putting an electrode on the motor areas of a patient’s brain, we can decode activity and help the patient communicate with the rest of the world…But it’s obviously extremely invasive to put an electrode inside someone’s brain. So we wanted to use noninvasive recordings to capture brain activity. The goal of the project was to create an AI system capable of decoding brain responses to spoken stories.

What were the biggest challenges you faced in conducting this research?

Two challenges are worth noting. On the one hand, the signals we pick up from brain activity are extremely “noisy.” The sensors sit pretty far from the brain, and the signal is distorted by the skull and the skin, so picking it up with a sensor is technically very difficult.

The other big problem is more conceptual: to a large extent, we actually don’t know how the brain represents language. So even if we had a very clear signal, without machine learning it would be very difficult to say, “OK, this brain activity means this word, or this phoneme, or an intent to act, or whatever.”

This is why the AI system has to learn to match representations of speech with representations of brain activity.

Where could this research go next? How far are we from an AI that could help people who have experienced traumatic neurological injuries communicate?

Patients will need a device that works at their bedside and works for the production of language. In our case, we only focus on speech perception. So I think one possible next step is to try to decode what people attend to in terms of speech, to see whether they can track what different people are telling them. Perhaps more important, ideally we would be able to decode what they want to communicate. That will be hard: when a healthy volunteer speaks, it creates a lot of facial movements that the sensors pick up very easily, so it will be very difficult to ensure we are decoding brain activity and not just muscle activity. So that’s the goal, but we already know it’s going to be very hard.

What else can this research be used for?

It’s difficult to judge, because we had a single objective here: to try to understand brain activity and decode what people heard. At this stage, colleagues and reviewers mainly ask, “How is this useful? There isn’t much use in decoding what we already know people have heard.” But I take this more as a proof of principle that there may be pretty rich representations in these signals, more than perhaps we would have thought.

Is there anything else you think it’s important for people to know about this study?

This research was conducted within FAIR, Meta’s AI lab, and it is not meant to be a product-oriented study.

Write to Megan McCluskey at megan.mccluskey@time.com.
