Brickner: AI: Seeing can be deceiving – Fargo – INFORUM

A recent letter to The Forum attacked President Biden for his executive order limiting artificial intelligence. The writer argued the order would wrongly disadvantage start-up companies against major corporations. Perhaps, but I know we need AI regulation.

AI involves using computers, software, and robots to simulate human thought and action. It includes deepfakes and ChatGPT.

Sam Altman, CEO of OpenAI, worries that AI could potentially kill us, but the current threats are alarming enough.

Imagine your teen daughter chatting with female friends before their chemistry test. Soon, however, they notice their male friends acting strangely. They find out some boys had created nude “deepfakes” of them.

Imagine her embarrassment and disgust. Maybe she already battles depression or anxiety.

Think she’s focused on her chemistry test now?

You may go to the police, but such acts are difficult to prosecute, as New Jersey parents recently discovered.

Deepfakes are fabricated images and videos created by manipulating real photos, video, and audio. One deepfake put fake, racist words in a principal’s mouth. But it’s estimated that 90% of deepfakes are pornographic.

ChatGPT software can draft emails, pass law exams, and write essays. Having investigated plagiarism many times as an instructor, I find this prospect a nightmare.

CNN reports AI “has led economists to warn…As many as 300 million full-time jobs around the world could eventually be automated in some way by generative AI…About 14 million positions could disappear in the next five years alone.”

CNN adds that Altman is concerned about the potential “for AI to be used to manipulate voters and target disinformation.”

As it is, according to The Guardian, more than 85% of people are worried about the impact of online disinformation.

Audrey Azoulay, director general of the UN’s culture body Unesco, said false information and hate speech online – accelerated and amplified by social media platforms – posed “major risks to social cohesion, peace and stability.” She added that regulation was urgently needed “to protect access to information … while at the same time protecting freedom of expression and human rights…”

The Indian Express notes, “AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, embedded or inserted bias.”

According to Scientific American, Tech Desk and other sources, AI contributes to discrimination in housing, criminal justice, health care, wage theft, and misinformation. One AI system started a friendly social media conversation that then descended into hate speech.

In 2018, the American Civil Liberties Union tested Amazon’s facial recognition program “Rekognition.” The software inaccurately matched 28 members of Congress to criminal mugshots, and 39% of the false matches were people of color.

We can’t put the toothpaste back in the tube. AI is mushrooming, and even if Biden’s measure isn’t the best fix, it’s a necessary first step. Without limits, we have a Wild West landscape of cheating, discrimination, and inaccuracy that endangers individuals as well as the institutions of education, criminal justice, and elections.

In a more basic way, it endangers the truth. I recall a student who said, “How do I know the Holocaust happened? I wasn’t there.” Deepfakes and similar material can help validate conspiracy theorists, dismissing events of the past through the realistic technology of the present.

We do see some fixes, such as a recent “ChatGPT detector” “catching AI-generated papers with accuracy,” according to the Nature website. But we cannot wait for an industry to regulate itself. We need guidance for these threats now.

Interested in a broad range of issues, including social and faith issues, Joan Brickner serves as a regular contributor to the Forum’s opinion page. She is a retired English instructor, having taught in Michigan and Minnesota.
