The corpus bride

I got my beta invitation to DALL-E 2, which creates art based on text prompts. You’ve probably seen them floating around the internet by now: surrealist, AI-drawn illustrations in a variety of styles.

Another tool, Craiyon (formerly DALL-E Mini), had been doing the rounds as a freely available toy. It’s fun too, but DALL-E’s fidelity is impressive enough to be almost indistinguishable from magic.

I can’t claim to fully understand its algorithm, but DALL-E is ultimately based on a huge corpus of information: OpenAI created a variation of GPT-3 that follows human-language instructions well enough to sift through collected data and create new works based on what it’s learned. OpenAI claims to have guarded against hateful or infringing use cases, but it can never be perfect at this, and will only ever be as sensitive to these issues as the team that builds it.

These images are attention-grabbing, but the technology has lots of different applications. Some are benign: OpenAI found that AI-generated critiques helped human writers find flaws in their work, for example. GitHub builds on OpenAI’s models to help engineers write code, through a feature called Copilot. There’s a Figma plugin that will mock up a website based on a text description. But it’s obvious that there are military and intelligence applications for this technology, too.

If I were a science fiction writer - and at night, I am! - I would ask myself what I could create if the corpus was everything. If an AI algorithm was fed with every decision made by every person in the world - our movements via our cellphones, our intentions via our searches, our actions via our purchases and interactions - what might it be able to say about us? Could it predict what we would do next? Could it determine how to influence us to take certain actions?

Yes - but “yes” wouldn’t make for a particularly compelling story in itself. Instead, I’d want to drill a level deeper and remind myself that any technology is a reflection of the people who built it. So even if all those datapoints were loaded into the system, a person who fell outside the parameters the designers thought to measure might not be predictable within it. The designers’ answer, in turn, might be to incentivize people to act within the frameworks they’d built: to make them conform to the data model. (Modern marketing already doesn’t stray too far from this idea.) The people who are not compliant, who resist those incentives, are the only ones who can bring down the system. In the end, only the non-conformists, in this story and in life, are truly free, and are the flag-bearers of freedom for everyone else.

The corpus of images used to power DALL-E 2 is scraped from the internet; the corpus of code for GitHub Copilot is scraped from open source software. There are usage implications here, of course: I did not grant permission for my code, my drawings, or my photographs to form the basis of someone else’s work. But a human artist also draws on everything they’ve encountered, and we tend not to worry about that (unless the re-use becomes undeniably centered on one work in particular). An engineer relies on “best practices” and “patterns” that were developed by others, and we actively encourage that (unless, again, it turns the corner and becomes plagiarism of a single source). Where should we draw the line, legally and conceptually?

I think there is a line, and it’s in part because OpenAI is building a commercial, proprietary platform. The corpus of work translates into profit for them; if OpenAI’s software does wind up powering military applications, or if my mini science fiction story partially becomes true, it could also translate into real harm. The ethical considerations there can’t be brushed away.

What I’m not worried about: I don’t think AI is coming for the jobs of creative people. The corpus requires a constant supply of new art. I do think we will see AI-produced news stories, which are a natural evolution of the content aggregators and cheap reblogging sites we see today, but there will always be a need for deeply-reported journalism. I don’t think we’ll see AI-produced novels and other similar content, although I can imagine writers using these tools to help with their first drafts before they revise. Mostly, for creatives, this will be a tool rather than a replacement. At least, for another generation or so.

In the meantime, here’s a raccoon in a cowboy hat singing karaoke:
