People are trying to claim that real videos are deepfakes. The courts are not amused

Elon Musk spoke to journalists Kara Swisher and Walt Mossberg at a conference in 2016. Musk’s lawyers recently tried to argue in court that comments he made at that event could have been doctored.

Recode/Screenshot by NPR

In 2016, Elon Musk took the stage at a tech conference outside Los Angeles and made a bold statement about Tesla’s self-driving capabilities.

“A Model S and a Model X at this point can drive autonomously more safely than a person. Right now,” the CEO told the Code Conference audience during a question and answer session following his interview with tech reporters Walt Mossberg and Kara Swisher.

The video of the session has been on YouTube for almost seven years. But it recently returned to the spotlight as part of a lawsuit brought by the family of a man who died when his Tesla crashed while using the self-driving feature. The family’s lawyers cited that 2016 claim, along with others Musk made about Tesla’s self-driving software.

But the automaker’s lawyers objected.

Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings purporting to show him saying and doing things he never actually said or did,” they wrote in a court filing, describing several fake videos of the billionaire.

Thanks to advances in artificial intelligence, it’s easier than ever to create images and videos of things that don’t exist or events that never happened. This is prompting warnings about digital fakery being used to spread propaganda and disinformation, impersonate celebrities and politicians, rig elections and defraud people.

But the unleashing of powerful generative AI on the public is also raising concerns about another phenomenon: that as the technology becomes more widespread, it will become easier to dismiss real audio and video as fake.

“That’s exactly what we were concerned about: that as we enter this era of deepfakes, anyone can deny reality,” said Hany Farid, a digital forensics expert and professor at the University of California, Berkeley. “This is the classic liar’s dividend.”

The liar’s dividend is a term coined by law professors Bobby Chesney and Danielle Citron in a 2018 paper setting out the challenges deepfakes present to privacy, democracy, and national security.

The idea is that as people become more aware of how easy it is to fake audio and video, bad actors can weaponize that skepticism.

“Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence,” Chesney and Citron wrote.

So far, the courts aren’t buying complaints of deepfaked evidence

In Musk’s case, the judge did not accept his lawyers’ claims.

“What Tesla is contending is deeply troubling to the court,” Judge Evette Pennypacker wrote in a ruling ordering Musk to testify under oath.

“Their position is that because Mr. Musk is famous and might be more of a target for deepfakes, his public statements are immune,” she wrote. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deepfake to avoid taking ownership of what they did actually say and do. The Court is unwilling to set such a precedent by condoning Tesla’s approach here.”

The Tesla lawsuit isn’t the first time deepfakes have been invoked in an attempt to disprove evidence, and it likely won’t be the last.

Two defendants on trial over the January 6 riot attempted to argue that video showing them at the Capitol could have been created or manipulated by artificial intelligence. Lawyers for one cited a 2017 deepfake of former President Barack Obama, created by researchers, as a reason for the court to doubt footage of the riot taken from YouTube.

But ultimately, both men were found guilty.

“So far, those appear to be Hail Mary passes that haven’t panned out,” said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who warned of the looming impact of deepfakes in the courtroom in a 2020 law review article.

While the threat of deepfakes being offered as evidence is real, she said, “courts have built up hundreds of years of resilience against efforts to introduce falsified or tampered evidence, going all the way back to forged signatures on handwritten documents, and on through typed or mimeographed documents, film, video, Photoshopping, and digital photography and video.”

In fact, Pfefferkorn thinks that as AI technology proliferates, courts are more likely to face claims that real evidence is fake than attempts to introduce fake evidence.

In those cases, she said, lawyers’ ethical rules and professional norms have a role to play. Lawyers shouldn’t make frivolous arguments, for example.

“I think attorneys’ self-preservation instincts will hopefully incentivize them to do a little bit of due diligence on the front end,” Pfefferkorn said.

Juries may request additional evidence

But these standards may need to be updated to specifically address what Loyola Law School professor Rebecca Delfino calls “the deepfake defense.”

“Right now, it’s kind of a wild, wild west,” she said, where lawyers can say, “‘Well, let’s run it up the flagpole and see what happens.’”

Even if the courts adapt to these challenges, the reverberations of AI hoaxes will still be felt.

If claims of forged evidence become more common, jurors may come to expect even more proof that evidence is real, in what Pfefferkorn likens to the “CSI effect,” in which crime shows prime jurors to expect forensic analysis at trial.

“If attorneys start getting jurors to require all the bells and whistles to prove that evidence isn’t a forgery … that’s a way for attorneys and their clients who are trying to minimize or deny damning evidence against them to essentially run up the bill and make it more expensive, more time-consuming for the other side to get that damning evidence admitted,” she said.

That could disadvantage people who don’t have the resources to hire expert witnesses.

And both inside and outside the courtroom, denying that events really happened has a corrosive effect. Today, most people carry around devices that can record what’s happening around us at a moment’s notice, making it easy to share eyewitness accounts of protests, crimes, and public events.

Farid worries about a world where people no longer believe documented evidence of “police violence, human rights abuses, a politician saying something inappropriate or illegal.”

“Suddenly there’s no reality anymore,” he said. “And that’s really worrying, because I don’t know how we think about the world.”

