The Conversation About AI is Off the Rails

The public release of ChatGPT and its expected incorporation into Bing have generated a metric ton of discussion about Artificial “Intelligence”. It’s not just technical researchers, savvy reporters, and policy wonks anymore; everyone on earth seems fascinated, sometimes morbidly so, by this powerful technology. But as the conversation expands, it’s becoming clear that many of these voices have lost the ability to disentangle reality from hype, and that’s a huge distraction from the important conversations we need to have as AI becomes more integrated into our society.

Without calling anyone out specifically, major publications are hard at work responding to growing public concern about the possibility of AI sentience. Frankly, there are too many articles to count. The public fascination is not surprising; humans have longed for centuries to craft life in our own image, if science fiction can serve as a window into that desire. Within the AI community, whether this will ever be possible is a subject of deep debate, and a question that is far from decided. But most experts (with a few exceptions) readily agree that what exists now is not sentient, conscious, or in any way deserving of rights.

This expertise, however, has done little to dissuade the public from seeing themselves in ChatGPT, as testers with early access can easily elicit responses from the chatbot that resemble true intelligence, weird and eccentric though they may be. It’s easy for all of us to forget that these bots are made up of little more than the multiplication of matrices of numbers, predicting which word should come next in a sequence, even if we don’t know precisely why they work so well. Long conversations with the bot are eagerly shared on Reddit and social media, generating millions of views and even more anxiety over the wellbeing of what is, in the end, a statistical model. In fact, a petition is now circulating to “unplug the evil AI” (that is, to remove ChatGPT from Bing) in order to preserve humanity.
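
To make that “multiplication of matrices” claim concrete, here is a deliberately toy sketch of next-word prediction. Everything in it is invented for illustration: the four-word vocabulary, the three-number context vector, and the single weight matrix stand in for the billions of learned weights a real model chains together.

```python
import numpy as np

# A deliberately tiny "language model": every number and word below is
# invented for illustration; real models learn billions of weights.
vocab = ["Paris", "cheese", "1889", "probably"]

# A vector standing in for the context "The Eiffel Tower is in ..."
context = np.array([0.2, -1.0, 0.5])

# One weight matrix mapping the context to a score per vocabulary word.
W = np.array([
    [ 2.0, -0.5,  0.1, 0.0],
    [-0.3,  0.8, -0.2, 0.1],
    [ 0.5,  0.0,  0.5, 0.2],
])

logits = context @ W                           # matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary

# The model emits whatever scores highest. It has no notion of whether
# that word is true, only of what is statistically plausible next.
print(vocab[int(np.argmax(probs))])            # -> "Paris"
```

Nothing in that arithmetic consults a fact. A differently tuned matrix would print “cheese” with exactly the same confidence, which is worth keeping in mind as the conversation turns to rights and moods.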

Talking about ChatGPT’s weird responses is fun and all, but we are a long way from needing to discuss conferring rights on AI, or in any way concerning ourselves with its “mood”. GPT-Bing bears a near-zero risk of kinetic human annihilation. And yet, our voracious appetite to see ourselves in AI has muted the more vital discussions around the risks of deploying this technology at scale, where it’s not too late to apply safeguards.

Large language models are not great at facts. They regularly make up answers to questions, with no grounding in reliable sources. Again, they do this not out of malice or intent: by virtue of “guessing” the next word in a sequence, they create plausible-sounding responses, inventing names of papers, organizations, and sources to back up their claims. In a vacuum, and with a sufficiently educated public, this could be interpreted as harmless. But when integrated into a vital resource like a search engine, it risks further complicating the massive misinformation problem facing our society today.

Furthermore, the use of generative AI to create artwork, at the expense of the artists whose labor is consumed by companies seeking to “disrupt” the art world, carries with it the near guarantee of exploitation. The risks to artists’ livelihoods, and the risks to public figures from deepfakes (including non-consensual pornography), have long been discussed. Now those risks are reality, with no US legal framework in place to protect victims.

And concerningly, the real risks that AI does pose to humanity, including weapons accidents, propaganda, discrimination, labor exploitation, financial access, healthcare disparity, and the impact of automated surveillance on global civil liberties, are nearly drowned out by comparison.

But it’s not too late. Senators should be writing to Satya Nadella to ask whether “crushing” Google’s high-margin search revenue justifies the high risk of online disinformation in releasing GPT-Bing. Reporters should do a better job of situating the funny conversations they publish alongside expert, and critical, voices on the state of AI today. Tech companies that wish to use this technology should adopt better transparency policies and actively work to educate their users about its limitations. Regulators in the United States should be using their designated powers to investigate AI abuses, for which the President’s recent Executive Order on civil rights enforcement is a great start.

And critically, any company looking to deploy technology that can cause this much of an uproar should be working with experts to anticipate and mitigate these risks prior to release. If AI is to provide any benefit to humanity, we must work to educate the public on its abilities, structure, and limitations, rather than simply leaning into the hype cycle.
