Ross: Tech companies need to be held liable for AI misinformation
Jun 5, 2023, 8:05 AM | Updated: 9:58 AM

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Jan. 5, 2023. A popular online chatbot powered by artificial intelligence is proving to be adept at creating disinformation and propaganda. When researchers asked the online AI chatbot ChatGPT to compose a blog post, news story or essay making the case for a widely debunked claim -- that COVID-19 vaccines are unsafe, for example -- the site often complied, with results that were regularly indistinguishable from similar claims that have bedeviled online content moderators for years. (AP Photo/Peter Morgan, File)
Based on the number of articles I’m seeing, there’s a new monster under the bed, and it’s AI.
There are warnings of a robot takeover, maybe even the extinction of civilization. If not, then at the very least (so the warning goes), there will be an attempt to use AI to subvert the election process.
I can see the temptation — chatbots are very good at spewing out made-up stuff.
I asked ChatGPT to write an arts and events calendar for today in Seattle. Simple task, and it did it. Instantly. Listing events at the Rep, the Crocodile, and an exhibit at a place called the “Seattle Art Gallery at 123 Main Street, Seattle.” Which doesn’t exist – because it was all made up!
And to the chatbot’s credit, there was a disclaimer at the bottom admitting it was all made up. But what it should have said is, “Sorry, I can’t do that, Dave, because I have no idea what’s going on in Seattle today.”
And yes, I’m sure the technology will get better with time, but the problem is that everything artificial, including intelligence, has one fundamental and incurable flaw: it’s artificial.
If it gets something wrong, it doesn’t care because it has no life. No pulse. No hunger. No fear. No sense of mortality or responsibility; no capacity to love or hate or feel pain. It has no stake in being right and faces no penalties for being wrong.
Which is why the responsibility has to be placed on any company that decides to unleash one of these things to flood the Internet with distorted news.
And if you say the First Amendment protects all speech, look at the case of Elizabeth Holmes – the entrepreneur who ran a company called Theranos. She lured investors in by making up stuff about her company’s accomplishments. She sold a false story, and she’s going to jail for 11 years. The First Amendment did not protect her.
The owners of AI companies should face similar consequences.
The FCC prohibits broadcasters like us from deliberately distorting a factual news report.
And since chatbots are known to do exactly that, any company that unleashes an online chatbot that starts distorting factual news reports should be held responsible. And in the case of an election, I would even say criminally liable.
And once a few AI CEOs find themselves going to jail for 11 years, once they learn that Artificial Intelligence can lead to Actual Incarceration – I imagine the industry will quickly start policing itself.
Listen to Seattle’s Morning News with Dave Ross and Colleen O’Brien weekday mornings from 5 – 9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.