
DeepMind AI Lab Releases 200 Million 3D Images of Proteins

Matt Higgins and his Oxford University research team faced a challenge.

They had spent years studying the malaria parasite, which still kills hundreds of thousands of people every year. They had identified a key protein on the parasite’s outer surface that could be a point of entry for future vaccines, and they knew its chemical code. But the protein’s all-important 3D structure, the key to designing a vaccine that could stop the parasite from infecting human cells, eluded them.

The team’s best way of taking a “photograph” of the protein was using X-rays, an imprecise tool that returned only the fuzziest of images. Without a clear 3D image, their dream of creating an effective malaria vaccine was out of reach. “We were never able, despite many years of work, to see in sufficient detail what this molecule looked like,” Higgins told reporters on Tuesday.

Enter DeepMind. The artificial intelligence lab, a subsidiary of Google’s parent company Alphabet, had set its sights on solving a longstanding “grand challenge” in science: accurately predicting the 3D structures of proteins. DeepMind developed a program called AlphaFold that learned from the thousands of protein structures already determined experimentally, and can accurately predict the 3D shape of a new protein from its chemical code.

Higgins and his coworkers were delighted with the AlphaFold results that DeepMind provided. “The use of AlphaFold was really transformational, giving us a really sharp view of this malaria surface protein,” Higgins told reporters, adding that the new clarity had allowed his team to begin testing new vaccines that targeted the protein. “AlphaFold has provided the ability to transform the speed and capability of our research.”

On Thursday, DeepMind announced that it would make its predictions of the 3D structures of 200 million proteins—almost all that are known to science—available to the entire scientific community. Demis Hassabis, DeepMind’s CEO, said the release would boost biological research and accelerate work in diverse fields such as sustainability, food safety, and neglected diseases. “Now you can look up a 3D structure of a protein almost as easily as doing a keyword Google search,” Hassabis said. “It’s sort of like unlocking scientific exploration at digital speed.”
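Hassabis’s search-engine comparison is close to literal: the predictions are served through the public AlphaFold Protein Structure Database, hosted with EMBL-EBI, where a structure can be retrieved by its UniProt accession. Below is a minimal sketch of such a lookup in Python; the REST endpoint path and the JSON field names (pdbUrl, uniprotDescription) are assumptions based on the database’s public documentation and may change between releases.

```python
# Minimal sketch: look up an AlphaFold-predicted structure by UniProt
# accession via the public AlphaFold Protein Structure Database API.
# The endpoint path and JSON field names are assumptions taken from the
# database's public docs (https://alphafold.ebi.ac.uk) and may change.
import requests

ACCESSION = "P69905"  # example: human hemoglobin subunit alpha

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}",
    timeout=30,
)
resp.raise_for_status()

# The API returns a list of prediction records; most proteins have one.
entry = resp.json()[0]
print(entry.get("uniprotDescription"))

# Download the predicted 3D coordinates as a PDB file.
pdb = requests.get(entry["pdbUrl"], timeout=30)
pdb.raise_for_status()
with open(f"AF-{ACCESSION}.pdb", "wb") as f:
    f.write(pdb.content)
```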


The AlphaFold project is good publicity for DeepMind, whose stated end goal is to build “artificial general intelligence”: a theoretical computer that could carry out most imaginable tasks more competently and quickly than any human. Hassabis has cast AlphaFold as a step toward that goal, which, if achieved, he says would transform science and humanity’s prosperity.

The DeepMind CEO has described AlphaFold as a “gift to humanity.” A DeepMind spokesperson told TIME that the company is making AlphaFold’s code and data freely available for any use, commercial or academic, under irrevocable open-source licenses, in order to benefit humanity and the scientific community. Some AI experts and researchers have expressed concern that machine learning could concentrate wealth and power in a small number of companies, threatening equity and the participation of the wider community.

The allure of “artificial general intelligence” perhaps explains why DeepMind’s owner Alphabet (then known as Google), which paid more than $500 million for the lab in 2014, has historically allowed it to work on areas it sees as beneficial to humanity as a whole, even at great immediate cost to the company. DeepMind lost money steadily for years: Alphabet wrote off $1.1 billion of the lab’s debt in 2019, and DeepMind turned its first-ever profit, of $60 million, in 2020. That profit came entirely from selling its AI to other arms of the Alphabet empire, including tech that improves the efficiency of Google’s voice assistant, its Maps service, and the battery life of its Android phones.

AI’s complicated role in scientific discovery

The combination of massive datasets, cheap computing power, and powerful pattern-spotting methods known as neural networks, together often referred to as artificial intelligence, is fast changing the scientific landscape. These technologies are now helping scientists in many fields, from drug discovery to understanding the evolution of stars.

But this transformation isn’t without its risks. In a recent study, researchers for a drug discovery company said that with only small tweaks, their drug discovery algorithm could generate toxic molecules like the VX nerve agent—and others, unknown to science, that could be even more deadly. “We have spent decades using computers and AI to improve human health—not to degrade it,” the researchers wrote. “We were naive in thinking about the potential misuse of our trade.”

DeepMind says it weighed the potential risks before releasing AlphaFold to the public, consulting more than 30 experts in security and bioethics. “The assessment came back saying that [with] this release, the benefits far outweigh any risks,” Hassabis told TIME at a briefing with reporters on Tuesday.

Hassabis added that DeepMind had made some adjustments in response to the risk assessment, in order to be “careful” with the structures of viral proteins. A DeepMind spokesperson later clarified that viral proteins had been excluded from AlphaFold for technical reasons, and that the experts consulted agreed the tool would not significantly lower the barrier to entry for anyone seeking to cause harm with proteins.

Ewan Birney, director of the European Bioinformatics Institute, which collaborated with DeepMind on the research, said that the risks of making it easy for anyone to look up a protein’s 3D structure are much lower than those of giving anybody access to a drug-discovery algorithm. And the same public availability could prove an asset to efforts to develop antidotes and vaccines. “I think, like all risks, you have to think about the balance here and the positive side,” Birney told reporters Tuesday. “The accumulation of human knowledge is just a massive benefit. The entities that could pose a risk are unlikely to be many. So I think we are comfortable.”

DeepMind admits, however, that the project carries potential risks. Artificial intelligence research has long been characterized by an open culture, with researchers at different labs sharing their results and source code publicly. But as machine learning makes progress in other areas that could be more dangerous, Hassabis told reporters on Tuesday, that openness may have to change. “Future [systems], if they do carry risks, the whole community would need to consider different ways of giving access to that system—not necessarily open sourcing everything—because that could enable bad actors,” Hassabis said.

“Open-sourcing isn’t some sort of panacea,” Hassabis added. “It’s great when you can do it. But there are often cases where the risks may be too great.”

Write to Billy Perrigo at billy.perrigo@time.com.
