
Why Copycat AI Tools Will Be the Internet’s Next Big Problem

If you’ve spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoopy suing Snoop Dogg.

Those surreal images are the work of Dall-E Mini, a popular web tool. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you’ve asked for.

More than 200,000 people are now using Dall-E Mini every day, its creator says—a number that is only growing. A Twitter account called “Weird Dall-E Generations,” created in February, has more than 890,000 followers at the time of publication. One of its most popular tweets so far is a response to the prompt “CCTV footage of Jesus Christ stealing [a] bike.”

If Dall-E Mini seems revolutionary, it’s only a crude imitation of what’s possible with more powerful tools. As the “Mini” in its name suggests, the tool is effectively a copycat version of Dall-E—a much more powerful text-to-image tool created by one of the most advanced artificial intelligence labs in the world.

That lab, OpenAI, boasts online of (the real) Dall-E’s ability to generate photorealistic images. But OpenAI has not released Dall-E for public use, due to what it says are concerns that it “could be used to generate a wide range of deceptive and otherwise harmful content.” It’s not the only image-generation tool that’s been locked behind closed doors by its creator. Google is keeping its own similarly powerful image-generation tool, called Imagen, restricted while it studies the tool’s risks and limitations.

Google and OpenAI warn that text-to-image tools can be used for bullying and harassment, and to create images that are racist or reinforce gender stereotypes. These tools could also erode trust in authentic photographs that depict reality.

Text could prove even more complicated than images. OpenAI and Google each have their own text generators, which chatbots can be built on, but both have been reluctant to release them to the general public amid fears they could be used to spread misinformation or enable bullying.

Read More: How AI Will Completely Change the Way We Live in the Next 20 Years

Google and OpenAI both say they are committed to the safe development of AI, a commitment reflected in their decisions to restrict access to potentially harmful tools to a small number of people, at least temporarily. But that hasn’t stopped them from publicly hyping the tools, announcing their capabilities, and describing how they made them. That has sparked a wave of imitations with fewer ethical qualms. Increasingly, the tools developed by Google and OpenAI are being copied by knockoff applications that circulate online, contributing to a growing sense that the internet is on the cusp of a revolutionary change.

“Platforms are making it easier for people to create and share different types of technology without needing to have any strong background in computer science,” says Margaret Mitchell, a computer scientist and a former co-lead of Google’s Ethical Artificial Intelligence team. “By the end of 2022, the general public’s understanding of this technology and everything that can be done with it will fundamentally shift.”

Copycat Effect

The rise of Dall-E Mini is just one example of the “copycat effect”—a term used by defense analysts to understand the way adversaries take inspiration from one another in military research and development. “The copycat effect is when you see a capability demonstrated, and it lets you know, oh, that’s possible,” says Trey Herr, the director of the Atlantic Council’s cyber statecraft initiative. “What we’re seeing with Dall-E Mini right now is that it’s possible to recreate a system that can output these things based on what we know Dall-E is capable of. That reduces the amount of uncertainty. And so if I have resources and the technical chops to try and train a system in that direction, I know I could get there.”

That’s exactly what happened with Boris Dayma, a machine learning researcher based in Houston, Texas. When he saw OpenAI’s descriptions online of what Dall-E could do, he was inspired to create Dall-E Mini. “I was like, oh, that’s super cool,” Dayma told TIME. “I wanted to do the same.”

“The big groups like Google and OpenAI have to show that they are on the forefront of AI, so they will talk about what they can do as fast as they can,” Dayma says. “[OpenAI] published a paper that had a lot of interesting details on how they made [Dall-E]. They didn’t give the code, but they gave a lot of critical elements. I wouldn’t have been able to develop my program without the paper they published.”

In June, Dall-E Mini’s creators said the tool would be changing its name to Craiyon, in response to what they said was a request from OpenAI “to avoid confusion.”

Advocates of restraint, like Mitchell, say it’s inevitable that accessible image- and text-generation tools will open up a world of creative opportunity, but also a Pandora’s box of awful applications—like depicting people in compromising situations, or creating armies of hate-speech bots to relentlessly bully vulnerable people online.

Read More: An Artificial Intelligence Helped Write This Play. It May Contain Racism

Dayma believes Dall-E Mini poses no danger, since the images it generates are nowhere near photorealistic. “In a way it’s a big advantage,” he says. “I can let people discover that technology while still not posing a risk.”

Other copycat projects come with greater risks. In June, a program called GPT-4chan emerged. It was a text generator, or chatbot, trained on text posted to 4chan, a forum notorious as a hotbed of racism, sexism, homophobia and other hateful content. Unsurprisingly, each new sentence it produced sounded similarly hateful.

Like Dall-E Mini, the tool was built by an unaffiliated programmer, but it was inspired in part by OpenAI’s research. Its name, GPT-4chan, was a nod to GPT-3, OpenAI’s flagship text generator. Unlike its copycat, GPT-3 was trained on text scraped from large swaths of the internet, and access to it is granted only by OpenAI.

The future of online safety

In June, after GPT-4chan’s racist and vitriolic text outputs attracted widespread criticism online, the app was removed from Hugging Face, the website that hosted it, for violating its terms and conditions.

Hugging Face hosts machine-learning apps in a format that allows them to be used through any web browser, and the site has become the go-to destination for downloading open-source AI applications, including Dall-E Mini.
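For a sense of what that looks like in practice, here is a minimal sketch (not from this article) of how a developer might download and run an open-source model from the Hugging Face Hub using the company’s transformers library; the model name “gpt2” is just a placeholder example, not one of the tools discussed here.

# Minimal sketch: pull an open-source text-generation model from the
# Hugging Face Hub and run it locally. The model name is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator("The future of the internet will be", max_new_tokens=20)
print(result[0]["generated_text"])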

Hugging Face CEO Clement Delangue told TIME his company’s business is doing well, and said he believes machine learning represents a new era of computing, with more tech companies realizing the opportunities it can unlock.

The controversy over GPT-4chan foreshadows an emerging problem in online safety. Social media, the last online revolution, made billionaires out of platforms’ CEOs, and also put them in the position of deciding what content is (and is not) acceptable online. Questionable decisions have tarnished those CEOs’ once glossy reputations. Now, smaller machine-learning platforms like Hugging Face find themselves becoming a new kind of gatekeeper: as hosts of open-source tools like GPT-4chan and Dall-E Mini, they will have to decide what is acceptable.

Delangue believes that Hugging Face is well-equipped to take on this challenging role as a gatekeeper. “We’re super excited because we think there is a lot of potential to have a positive impact on the world,” he says. “But that means not making the mistakes that a lot of the older players made, like the social networks – meaning thinking that technology is value neutral, and removing yourself from the ethical discussions.”

Still, much like early social media CEOs, Delangue signals a preference for light-touch content moderation. He says the site’s current policy is to politely ask creators to fix their models, and that removing a model entirely is an “extreme” last resort.

But Hugging Face is also encouraging its creators to be transparent about their tools’ limitations and biases, informed by the latest research into AI harms. Mitchell, who now works at Hugging Face as an AI ethics researcher, is helping the platform envision what a new content moderation paradigm for machine learning might look like.

“There’s an art there, obviously, as you try to balance open source and all these ideas around public sharing of really powerful technology, with what malicious actors can do and what misuse looks like,” says Mitchell, speaking in her capacity as an independent machine learning researcher rather than as a Hugging Face employee. She adds that part of her role is to “shape AI in a way that the worst actors, and the easily-foreseeable terrible scenarios, don’t end up happening.”

Mitchell imagines a worst-case scenario in which a group of schoolchildren trains a text generator like GPT-4chan to bully a classmate through texts and direct messages, and the victim then takes their own life. “There’s going to be a reckoning,” Mitchell says. “We know something like this is going to happen. It’s foreseeable. But there’s such a breathless fandom around AI and modern technologies that really sidesteps the serious issues that are going to emerge and are already emerging.”

The dangers of AI hype

That “breathless fandom” was encapsulated in yet another AI project that caused controversy this month. In early June, Google engineer Blake Lemoine claimed that one of the company’s chatbots, called LaMDA, based on the company’s synthetic-text generation software, had become sentient. Google rejected his claims and placed him on administrative leave. Around the same time, Ilya Sutskever, a senior executive at OpenAI, suggested on Twitter that computer brains are beginning to mimic human ones. “Psychology should become more and more applicable to AI as it gets smarter,” he said.

In a statement, Google spokesperson Brian Gabriel said the company was “taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.” OpenAI declined to comment.

For some experts, the discussion over LaMDA’s supposed sentience was a distraction—at the worst possible time. Instead of arguing over whether the chatbot had feelings, they argued, AI’s most influential players should be rushing to educate people about the potential for such technology to do harm.

“This could be a moment to better educate the public as to what this technology is actually doing,” says Emily Bender, a linguistics professor at the University of Washington who studies machine learning technologies. “Or it could be a moment where more and more people get taken in, and go with the hype.” Bender adds that even the term “artificial intelligence” is a misnomer, because it is being used to describe technologies that are nowhere near “intelligent”—or indeed conscious.

Still, Bender believes image generators like Dall-E Mini may be able to educate the public about the limits of AI. It’s easier to fool people with a chatbot, because humans tend to look for meaning in language no matter where it comes from, she says. Our eyes are harder to trick: the images Dall-E Mini produces look sloppy and distorted, nowhere near photorealistic. “I don’t think anybody who is playing with Dall-E Mini believes that these images are actually a thing in the world that exists,” Bender says.

Crude as it is, Dall-E Mini offers a window into the biases baked into the data that big companies’ AI tools are built on. When you type in “CEO,” Dall-E Mini spits out nine images of a white man in a suit. When you type in “woman,” the images all depict white women. The results reflect the biases in the data that both Dall-E Mini and OpenAI’s Dall-E were trained on: images scraped from the internet, which include sexist and racist stereotypes along with large quantities of violence and porn. Dayma and OpenAI’s researchers say they remove the worst content, but subtler biases inevitably remain.

Read More: Why Timnit Gebru Isn’t Waiting for Big Tech to Fix AI’s Problems

Those weaknesses remain fundamental to much of machine learning, despite AI’s impressive capabilities, and they are a key reason Google and OpenAI have decided not to release their image- and text-generation tools to the public. “The big AI labs have a responsibility to cut it out with the hype and be very clear about what they’ve actually built,” Bender says. “And I’m seeing the opposite.”

Write to Billy Perrigo at billy.perrigo@time.com.
