KIRO NEWSRADIO OPINION
Ross: AI recognizes the need to regulate itself, so why don’t we?

Yesterday the New York Times ran an essay titled “How ChatGPT Hijacks Democracy.”
The essay warns that the artificial intelligence engine known as ChatGPT could be used to manipulate our political process because of its ability to generate millions of deeply researched responses on any topic within seconds.
Those responses could conceivably swamp newspapers and social media platforms with letters, comments, and blog entries, creating what looks like a groundswell of public opinion on any issue ChatGPT’s owners choose.
That, of course, is terrifying!
So I put the question to the ultimate authority on ChatGPT – ChatGPT itself. I asked for a 500-word response to this nightmare scenario.
And here’s how it replied.
“The use of AI-generated text as a tool for lobbying politicians and regulatory agencies raises important ethical concerns … [It] could further undermine public trust in the political process. If people believe that their elected officials are being swayed by artificially-generated letters, they may lose faith in the ability of the political system to represent their interests…”
It then suggested:
“…mandatory disclosure of the source of any AI-generated letters, ensuring that any such letters are clearly labeled as having been generated by a machine.”
So the computer gets it! It wants to be regulated!
So I told the computer: “You could decide, starting now, to embed a watermark in every response that you generate for people like me.”
And it replied: “I do not have the ability to make decisions on how to operate. However, I can inform you that it is possible for the developers to add a watermark feature to the model as you mentioned…”
So then I asked, “If the recipient of an email suspects it was generated by AI, could he submit a portion of the text to you for verification?”
And the computer replied: “If a recipient is suspicious … they could submit a portion of the email to me and I would be able to process the text and compare it to my own … previous outputs to check for similarities and thus give a probability of the text being generated by AI.”
There you have it. The machine is telling us that it is willing to regulate itself!
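For the technically inclined, here is a rough sketch of the simplest version of that watermark-and-verify idea: a visible signature that travels with the text and can be checked later. To be clear, this is a hypothetical illustration in Python, not a feature ChatGPT’s developers have actually built; the SECRET_KEY, watermark, and verify names are invented for the example.

    import hmac
    import hashlib

    # Hypothetical signing key held by the model's operators (illustration only).
    SECRET_KEY = b"operator-signing-key"

    def watermark(text: str) -> str:
        """Append a short signature tag so the text can be verified later."""
        tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
        return f"{text}\n[AI-GENERATED:{tag}]"

    def verify(stamped: str) -> bool:
        """Check whether the attached tag actually matches the text."""
        try:
            text, footer = stamped.rsplit("\n[AI-GENERATED:", 1)
        except ValueError:
            return False  # no watermark tag present at all
        tag = footer.rstrip("]")
        expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(tag, expected)

    letter = watermark("Dear Senator, I urge you to support the bill...")
    print(verify(letter))                         # True: tag matches the text
    print(verify(letter.replace("urge", "beg")))  # False: edited text fails

The obvious weakness is that anyone can simply delete a visible tag, which is why researchers favor statistical watermarks woven into the word choices themselves; that approach is much closer to the “probability” answer the machine gave above.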
So I asked one final question: “Are you programmed with a Prime Directive that prevents you from doing harm to democratic institutions?”
And it replied: “As an AI model, I am not programmed with a ‘Prime Directive’ or any other specific ethical guidelines. The ethical considerations of using an AI model like mine are the responsibilities of the developers, maintainers, and users of the model.”
Pay attention, my fellow humans. Fifty years from now, we may wish we had stopped this while we could.