Ross: The fundamental problem with new artificial intelligence
I read Steven Johnson’s excellent article in the New York Times about a new system of artificial intelligence called GPT-3.
GPT-3 stands for Generative Pre-trained Transformer 3, an artificial intelligence program that mimics brain synapses and is housed in a supercomputer in Iowa. The machine reads the Internet 24/7, digesting its contents, mapping speech patterns, and teaching itself to write original prose in answer to any question.
It learns by teaching itself to complete partial sentences, in much the same way Microsoft Outlook offers to complete your e-mail replies.
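To make that "complete the partial sentence" idea concrete, here is a minimal toy sketch (my own illustration, not OpenAI's actual method, which uses a vastly larger neural network): count which word tends to follow which in some text, then extend a prompt by repeatedly predicting the most likely next word.

```python
# Toy next-word predictor: a crude stand-in for the "learn to complete
# partial sentences" training objective behind GPT-style models.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Map each word to a Counter of the words that follow it."""
    following = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def complete(following, prompt, length=3):
    """Extend a prompt by repeatedly picking the most common next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for the last word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(complete(model, "the cat", length=3))
```

GPT-3 does the same kind of next-word prediction, but with hundreds of billions of learned parameters instead of a word-pair tally, which is why its completions read like original prose rather than parroted phrases.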
But GPT-3 goes way beyond this. It can write original short stories. It’s even been trained to write movie scripts.
Imagine you could choose a subject, a style, and a tone – anything from Northwest Nice to Five Jalapeños – and in less than a second, a grammatically perfect paragraph pops out.
The excerpts in the Times article are stunning compared to what I was seeing a few years ago.
But there is a fundamental problem with this technology, and that's the temptation to use it. If you're just churning out fiction for entertainment, great. However, you know trouble is ahead when you read the stipulations in the software license: the designers specifically forbid using the technology to determine who gets a credit card, a payday loan, a job, or an apartment. It is also forbidden to use it to generate spam, promote "pseudo-pharmaceuticals," or influence the political process.
Except there's no mention of how to enforce that. Since computers cannot experience fear, shame, pain, poverty, loss, or death, they have no motivation to control themselves, so any control will have to be imposed by humans – humans who, I guarantee, have every intention of using these machines to do all those forbidden things.
GPT-3 will also be used to generate high school essays, eventually college essays, and ultimately, human experts will simply read an AI-generated script in their teleprompter glasses.
Of course, ultimately, this will be used to make decisions – like, do we stick with the conventional weapons, or is it time for the nukes?
I want to say here and now: that would be bad. I want to get that on the record before this segment is turned over to GPT-3.
Time may be running short. After reading the article, I also read some of the comments.
One of them from a reader in New Jersey named “Archer” read as follows:
“I have no objection to any of this. I am tired of reading the scribblings of carbon-based writers.”
Like I said, that was from a reader in New Jersey named Archer … unless it wasn’t.
Listen to Seattle’s Morning News with Dave Ross and Colleen O’Brien weekday mornings from 5 – 9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.