When Tech Leaves No Space for Humans

We created technology to preserve humanity. Instead, technology will preserve itself.

Colin Horgan

--

Credit: Victor Habbick Visions/Science Photo Library via Getty Images

In mid-February, a new website made the rounds. Hit refresh on thispersondoesnotexist.com and you’ll see a human face, but not a face that actually belongs to a human. Instead, it’s a nearly perfect computer-generated image, at a glance indistinguishable from a genuine photograph.

Images generated from thispersondoesnotexist.com. These people do not exist.

Phillip Wang, a software engineer at Uber, built on work by researchers at Nvidia and created the site to “raise some public awareness” of the technology that creates these images: generative adversarial networks, or GANs. These are programs built from two neural networks. One generates an image; the other judges how realistic it is and challenges the first to improve its output. The goal is to create something that’s virtually indistinguishable from a real-life human face.
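
Nvidia’s face generator is far more elaborate than this, but the adversarial loop it rests on is compact enough to sketch. Below is a toy version in PyTorch; the model sizes, optimizer settings, and the random batch standing in for real photos are all placeholders for illustration, not anything from Wang’s or Nvidia’s code.

```python
# A minimal sketch of a GAN training loop, not Nvidia's actual model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes, far smaller than real face models

# The generator maps random noise to a fake "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# The discriminator scores how real an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, img_dim)  # placeholder batch standing in for real photos

for step in range(500):
    # 1. Train the discriminator to tell real images from generated ones.
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each pass makes the discriminator a slightly harsher critic and the generator a slightly better forger; run long enough on real photographs, the contest produces faces like the ones on Wang’s site.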

The people Wang’s site creates look real. They could be anyone. But they are no one.

A few days after the site launched, OpenAI revealed a tool that can write cohesive paragraphs of text given minimal human prompting. They call it “deepfakes for text,” referencing the technology that can be used to swap one person’s face for another’s in a video. As the Guardian explained, “the AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next.”
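
OpenAI withheld the full model, but a much smaller version was released publicly. As a rough sketch of what “predicting what should come next” looks like in practice, here is how that smaller model can be prompted; the Hugging Face transformers library, the prompt, and the sampling settings here are illustrative assumptions, not the Guardian’s setup.

```python
# A minimal sketch using the small, publicly released GPT-2 model
# via the Hugging Face transformers library (not OpenAI's withheld full model).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In mid-February, a new website made the rounds."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next token given everything written so far,
# appends it, and predicts again; sampling from the top candidates keeps the
# output varied rather than deterministic.
with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```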

The Guardian used the technology — it’s called GPT2, and OpenAI has not released it in full due to concerns about deceptive use “at scale” — to write an entire story about itself. The paper gave the program two paragraphs to begin with. Much like the faces generated by GANs, the story GPT2 wrote is nearly indistinguishable from a human version. GPT2 even fabricated quotes from its own creators (it did the same thing when the Guardian fed it the beginning of a piece on Brexit — it created fake…
