How much can we really trust artificial intelligence? That’s a huge question, given the astonishing growth of ChatGPT and other AI-powered chatbots. Millions of people around the world now use these tools to compose emails, write essays for school, shop, and do basic research. At the same time, there are more and more reports that such chatbots are, to put it charitably, inaccurate.
So I decided to conduct my own little experiment to test the reliability of AI when I flew to Rio de Janeiro last week to interview Meredith Whittaker, president of Signal, the encrypted messaging service, at the Web Summit tech conference.
The sprawling conference was filled with tech executives touting the potential of AI to remake the world economy and solve humanity’s problems. But Whittaker was an outlier. A former Google researcher, she left the company four years ago warning that artificial intelligence would be misused for social control and oppression. She has since become one of AI’s most outspoken critics.
To prepare for the interview, I went to ChatGPT for the first time and asked a simple question: “What should I ask Meredith Whittaker about AI?”

The first responses ChatGPT gave me were, under the circumstances, somewhat ridiculous. “What inspired you to work in AI, and how did you get started?” was the first suggested question. “In your opinion, what are the most promising applications of AI that we should pay attention to right now?” was the second.
Neither made any sense, considering Whittaker’s warnings about AI’s potential threat to humanity’s future. But ChatGPT did offer one question that seemed helpful. Signal recently released a report on the role of AI in content moderation, ChatGPT informed me. “Can you tell us more about the key findings from that report and what it means for the future of content moderation?”
The case of the missing report on AI content moderation
I tried googling this report and couldn’t find it, which led me to conclude that ChatGPT knew something Google’s search engine didn’t. So I posed the question to Whittaker in front of an audience of thousands on Web Summit’s Center Stage, the conference’s main auditorium. “Tell us about the findings of your report,” I asked.
“It’s a lie,” she told me.
“What did your report say?” I asked.
“There was no report,” she told me.
AI often gets stuff like this wrong, Whittaker explained, leading her to conclude that tools like ChatGPT really are “a bullshit engine.”
“It takes a huge amount of surveillance data scraped from the darkest holes of the internet: it’s Reddit, it’s Wikipedia, it’s message board comments that are probably [from] 4chan,” she said. Then, based on all that massive data and computing power, it predicts what the next word in a sentence is likely to be. “So it’s a statistical predictive engine.”
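To make the “statistical predictive engine” idea concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that is nothing like ChatGPT’s actual scale or architecture. It simply counts which words followed which in its tiny training text, then emits the most probable continuation, with no notion of whether the resulting sentence is true.

```python
# Toy illustration (NOT ChatGPT's actual design): a bigram "statistical
# predictive engine." It learns which word tends to follow which in its
# training text, then emits the most *probable* next word -- probable,
# not necessarily true.
from collections import Counter, defaultdict

# Hypothetical miniature training corpus, just for demonstration.
training_text = (
    "signal released a report on ai . "
    "signal is a messaging service . "
    "ai is a statistical engine ."
)

# Count, for every word, how often each other word follows it.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else "."

print(predict_next("signal"))  # -> "released": a frequent continuation, true or not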

The answers that ChatGPT spews might actually be probable, Whittaker said. After all, she’s spoken out about AI and misinformation on social media, making the matter of content moderation something she’d probably be interested in.
Except she and Signal never produced such a report. ChatGPT’s prediction, stated as fact, was completely wrong.
Whittaker went on to explain why this view of the limits of AI is, as she said, “really, really important.”
“Why are we using a bullshit engine for anything serious?” she asked. “We are in an information ecosystem invaded by falsehoods, half-truths, disinformation. And as a society, we need access to some form of shared reality.”
Instead, Whittaker said, we’ve unleashed a new form of technology that acts like “that uncle who shows up to holiday gatherings, has a few drinks, and then confidently talks about shit he doesn’t know. And that’s fun for the holidays. But it’s not something we should be injecting into our information ecosystem.”
Follow-up questions for ChatGPT
It was a sobering conversation for me and, I suspect, for many in the audience. But being a professional journalist, I did what I had to do: I went back to ChatGPT for comment.
At first, ChatGPT doubled down on the falsehoods it had provided.
“Yes, I remember suggesting that question for your interview with Signal’s Meredith Whittaker about artificial intelligence,” it told me in a conversation on Monday. “Regarding the report on the role of AI in content moderation that Signal recently released, the report highlights some of the key challenges and limitations associated with using AI for content moderation.”
But, I quickly chimed in, Whittaker told me Signal has never produced such a report and that ChatGPT’s replies to me were bullshit. “Can you comment and explain how and why you got it wrong?”

At this point something interesting happened. In my previous exchanges with ChatGPT, it had spat out its replies in nanoseconds. This time, I waited at least five seconds for a response, as it seemed to have a hard time doing damage control.
And then: “I apologize for the incorrect information I provided in my previous answer,” it said. “There appears to have been a misunderstanding on my part, and I apologize for any confusion this may have caused.”
“I would like to emphasize that I am a language model, and my answers are generated on the basis of the data and information available to me at the time of the question. I do not have the ability to fact-check or verify the accuracy of the information I provide,” the chatbot added.
I will say this much for the AI: over the years I have caught quite a few politicians in falsehoods and lies, but I have rarely received such a blunt apology.