Thursday, August 29, 2019

Deepfakes Are Coming. What Happens When We Can No Longer Believe What We See?



The following is taken from a recent (6/10/19) New York Times op-ed, "Deepfakes Are Coming. We Can No Longer Believe What We See," by philosopher Regina Rini. Will the epistemological standards by which we separate fact from hearsay inevitably erode once we can no longer rank visual information as the most reliable kind? Rini explores.

************************************************

On June 1, 2019, the Daily Beast published a story exposing the creator of a now infamous fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words. The video was created by taking a genuine clip, slowing it down, and then adjusting the pitch of her voice to disguise the manipulation. Judging by social media comments, many people initially fell for the fake, believing that Ms. Pelosi really was drunk while speaking to the media. (If that seems an absurd thing to believe, remember Pizzagate; people are happy to believe absurd things about politicians they don’t like.)
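
[Ed. note: a manipulation like this takes almost no skill. Below is a minimal sketch, in Python driving ffmpeg, of slowing a clip while keeping the audio pitch natural so the slowdown is harder to hear; the file names are hypothetical, ffmpeg must be installed, and this is an illustration of the general technique, not a reconstruction of how the actual video was made.]

```python
# Sketch: slow a clip to 75% speed while keeping the audio pitch
# natural, masking the telltale "slowed-down" sound. (The Pelosi video
# reportedly achieved a similar effect by slowing the clip and then
# adjusting the pitch back.) Assumes ffmpeg is on the PATH.
import subprocess

SPEED = 0.75  # play at 75% of the original rate

subprocess.run([
    "ffmpeg", "-i", "original.mp4",  # hypothetical input file
    "-filter_complex",
    # setpts stretches the video timestamps; atempo slows the audio
    # while preserving its pitch.
    f"[0:v]setpts={1 / SPEED}*PTS[v];[0:a]atempo={SPEED}[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",                    # hypothetical output file
], check=True)
```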

The video was made by a private citizen named Shawn Brooks, who seems to have been a freelance political operative producing a wealth of pro-Trump web content. (Mr. Brooks denies creating the video, though according to the Daily Beast, Facebook confirmed he was the first to upload it.) Some commenters quickly suggested that the Daily Beast was wrong to expose Mr. Brooks. After all, they argued, he's only one person, not a Russian secret agent or a powerful public relations firm, and it feels like "punching down" for a major news organization to turn the spotlight on one rogue amateur. Seth Mandel, an editor at the Washington Examiner, asked, "Isn't this like the third Daily Beast doxxing for the hell of it?" It's a legitimate worry, but it misses an important point. There is good reason for journalists to expose the creators of fake web content, and it's not just the glee of watching provocateurs squirm. We live in a time when knowing the origin of an internet video is just as important as knowing what it shows.

Digital technology is making it much easier to fabricate convincing fakes. The video that Mr. Brooks created is pretty simple; you could probably do it yourself after watching a few YouTube clips about video editing. But more complicated fabrications, sometimes called “deepfakes,” use algorithmic techniques to depict people doing things they’ve never done — not just slowing them down or changing the pitch of their voice, but making them appear to say things that they’ve never said at all. A recent research article suggested a technique to generate full-body animations, which could effectively make digital action figures of any famous person.
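
[Ed. note: to give a flavor of the "algorithmic techniques" involved, many early face-swap tools were built on a surprisingly simple autoencoder trick: one shared encoder trained alongside a separate decoder per person. The sketch below (PyTorch, with made-up layer sizes) illustrates the general idea only, not any particular tool's implementation.]

```python
# Sketch of the shared-encoder / per-identity-decoder autoencoder
# behind many early face-swap tools. Layer sizes are illustrative.
import torch
import torch.nn as nn

class FaceSwapAE(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared encoder learns identity-agnostic structure
        # (pose, expression, lighting) from faces of BOTH people.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        # Each decoder learns to render ONE person's face.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    @staticmethod
    def _make_decoder():
        return nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# Training reconstructs each person through their own decoder; the
# swap happens at inference: encode a frame of person A, then decode
# it with person B's decoder to get B's face wearing A's expression.
model = FaceSwapAE()
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a video frame
fake_b = model(frame_of_a, identity="b")  # B's face, A's expression
```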

So far, this technology doesn’t seem to have been used in American politics, though it may have played some role in a political crisis in Gabon earlier this year. But it’s clear that current arguments about fake news are only a taste of what will happen when sounds and images, not just words, are open to manipulation by anyone with a decent computer.

Combine this point with an insight from epistemology — the branch of philosophy dealing with knowledge — and you’ll see why the Daily Beast was right to expose the creator of the fake video of Ms. Pelosi. Contemporary philosophers rank different types of evidence according to their reliability: How much confidence, they ask, can we reasonably have in a belief when it is supported by such-and-such information?

We ordinarily tend to think that perception — the evidence of your eyes and ears — provides pretty strong justification. If you see something with your own eyes, you should probably believe it. By comparison, the claims that other people make — which philosophers call "testimony" — provide some justification, but usually not quite as much as perception. Sometimes, of course, your senses can deceive you, but that's less likely than other people deceiving you.

Until recently, video evidence functioned more or less like perception. Most of the time, you could trust that a camera captured roughly what you would have seen with your own eyes. So if you trust your own perception, you have nearly as much reason to trust the video. We all know that Hollywood studios, with enormous amounts of time and money, can use CGI to depict almost anything, but what are the odds that a random internet video came from Hollywood?

Now, with the emergence of deepfake technology, the ability to produce convincing fake video will be almost as widespread as the ability to lie. And once that happens, we ought to think of images as more like testimony than perception. In other words, you should only trust a recording if you would trust the word of the person producing it.

Which means that it does matter where the fake Nancy Pelosi video, and others like it, come from. This time we knew the video was fake because we had access to the original. But with future deepfakes, there won't be any original to compare them to. To know whether a disputed video is real, we'll need to know who made it.

It's good for journalists to start getting in the habit of tracking down creators of mysterious web content. And it's good for the rest of us to start expecting as much from the media. When deepfakes fully arrive, we'll be glad we've prepared. For now, even if it's not ideal to have amateur political operatives exposed to the ire of the internet, it's better than carrying on as if we can still trust our lying videos.

Below is a 10-minute documentary on the problem of deepfakes (courtesy of the WSJ):




Comments/Thoughts:

In the tech sector, there have been attempts to combat the problem of deepfakes. For example, the photo and video verification company Truepic has designed technology that "fingerprints" videos and photographs at the moment they come off the image sensor. These fingerprints can be used later to establish the authenticity of the image. There is an interesting article about it in Fast Company:
“If I was a campaign manager for the next election, I would be videotaping every single one of my candidate’s speeches and putting them on Truepic,” says Hany Farid, a professor of computer science at Dartmouth College who has helped detect manipulation in preexisting images for organizations including DARPA and The New York Times. He now serves as a Truepic adviser. “That way, when the question of authenticity comes up, if somebody manipulates a video, [the user] has a provable original, and there’s no debate about what happened.” https://www.fastcompany.com...
Further, Truepic is partnering with Qualcomm, which manufactures a large portion of the smartphone processors in use today. The plan is to ensure that the Truepic tech is built into smartphones so that it isn't even necessary to download a photo/video-authenticating app.
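
The core of the fingerprinting idea is easy to illustrate. Below is a minimal sketch in Python using only the standard library; the device key and function names are hypothetical stand-ins, and Truepic's actual scheme presumably uses proper public-key signatures rather than the shared secret used here for simplicity.

```python
# Sketch of capture-time fingerprinting: hash the image bytes as they
# leave the sensor, tag the hash with a device-held secret, and verify
# later. HMAC stands in for the real public-key signature a production
# system would use; the key and function names are hypothetical.
import hashlib
import hmac

DEVICE_KEY = b"secret-baked-into-the-camera"  # illustrative only

def fingerprint_at_capture(image_bytes: bytes) -> str:
    """Run immediately after the sensor produces the image."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_later(image_bytes: bytes, claimed_tag: str) -> bool:
    """Any later edit changes the hash, so the tag no longer matches."""
    expected = fingerprint_at_capture(image_bytes)
    return hmac.compare_digest(expected, claimed_tag)

original = b"raw bytes straight off the sensor"
tag = fingerprint_at_capture(original)
assert verify_later(original, tag)                # untouched: passes
assert not verify_later(original + b"edit", tag)  # tampered: fails
```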

But tech solutions only get us so far here. Just as surely as partial antidotes hit the market, so will more advanced versions of deepfake tech. More to the point of Rini's argument, no technology ensures that it will actually be used. To combat the current problem, we would need to be, as Rini states, "at the point where a camera that does *not* use the tech is suspicious by default." We are far from being at that point. Why?

Our epistemological norms have served us relatively well for many centuries. Eyewitness accounts are intuitively accorded much greater credibility than secondhand testimony or than other sensory data, such as audition without vision (e.g., the claim to have recognized a voice without any accompanying visuals). This hierarchy of reliable evidence informs everything from our methods of education, criminal procedure, and journalism right down to plain old small talk and gossip. To practice vigilance in the era of deepfakes, we would have to learn to habitually set such implicit schemes aside and start from the premise that all perceptual evidence is deeply problematic in principle. Such a change in cognitive and perceptual MO, if it were to come about, would take far longer than the time in which ever more refined deepfake programs are being developed and disseminated widely and cheaply.
