Right before the 2016 election, Aviv Ovadya tried to warn us that something was very wrong with information on the internet. Now he's trying again, and this time what he sees coming is worse.


The MIT-educated engineer went on a crusade in mid-2016 to alert tech companies that the information universe they'd developed — one that rewarded click-based revenue over information accuracy and quality — was artificially weighted toward disinformation and propaganda, and there was going to be hell to pay. He circulated a presentation called "Infocalypse," which was largely ignored.


Ovadya was right, we know now, and the extent of his clairvoyance is still unfolding — most recently this week, with the revelations that Facebook enabled Cambridge Analytica to collect the personal data of 50 million users without their knowledge and use it to manipulate the 2016 election by spreading pro-Trump, anti-Clinton propaganda.


What does Aviv Ovadya think is coming next?

Now Ovadya has a projection about the next era of fake news, which will be supercharged by artificial intelligence. "We are so screwed it's beyond what most of us can imagine," he told BuzzFeed News. "We were utterly screwed a year and a half ago, and we're even more screwed now. And depending how far you look into the future, it just gets worse." 


Technology will soon be able to make it "appear as if anything has happened, regardless of whether or not it did," he says. Tools to manipulate video and audio already exist: people already use computer algorithms to digitally superimpose celebrities' faces onto porn stars' bodies. Soon, they'll be able to edit video to sync a speaker's lips to words they never actually said. Technologists at the University of Washington recently demonstrated this with an edited reel of world leaders doing just that.


Ovadya told BuzzFeed News that "fake news" will soon look positively low-tech, supplanted by more terrifying buzzwords: "diplomacy manipulation," in which someone uses technology to convince others that a geopolitical event occurred when it really didn't; and "polity simulation," in which bots create entire fake grassroots campaigns that can pressure politicians to act. (That's already happening: earlier this year, bots flooded the FCC's net neutrality comment system with fake comments submitted under the identities of dead people.)

Ultimately, when people discover that it's impossible to determine what is real and what is fake, they develop "reality apathy" — and just give up.

Is there anything we can do?

So what to do? Start thinking like a malicious digital actor, and encourage lawmakers to do the same. "I'm from the free and open source culture — the goal isn't to stop technology but to ensure we're in an equilibrium that's positive for people. So I'm not just shouting 'this is going to happen,' but instead saying, 'consider it seriously, examine the implications,'" Ovadya told BuzzFeed News. "The thing I say is, 'trust that this isn't not going to happen.'"