CHILLING REPORT: “Drudge, Facebook, NYT readers could face libel suits for sharing ‘fake news’”

In a recently published Washington Examiner piece, Paul Bedard reports that the former Democratic chair of the FEC is pushing for political content on the internet to face “substantial federal regulation to eliminate undefined ‘disinformation.’” The regulations would reach the entire spectrum of online political news, from the Drudge Report to the New York Times.

In a shallow attempt to pass sweeping federal regulation under the guise of countering “foreign interference,” ex-FEC chair Ann Ravel believes there “is support for expanded regulation in the wake of reports foreign governments spent $100,000 on 2016 political ads on Facebook.”

More via Washington Examiner:

She would also use regulation to “improve voter competence,” according to the new proposal titled Fool Me Once: The Case for Government Regulation of ‘Fake News.’ Ravel, who now lectures at Berkeley Law, still has allies on the FEC who support internet regulation. The paper was co-written by Abby K. Wood, an associate professor at the University of Southern California, and Irina Dykhne, a student at USC Gould School of Law.

The proposal immediately came under fire from the Republican FEC commissioner who for years has been warning of the left’s effort to regulate political talk they don’t like, especially on conservative newsfeeds like Drudge.

Lee Goodman told Secrets, “Ann’s proposal is full blown regulation of all political content, even discussion of issues, posted at any time, for free or for a fee, on any online platform, from Facebook to the NewYorkTimes.com.”

He was especially critical of the undefined nature of “disinformation” to be regulated and the first-ever call for libel suits to snuff out talk Ravel doesn’t like.

In their proposal, the trio wrote, “after a social media user clicks ‘share’ on a disputed item (if the platforms do not remove them and only label them as disputed), government can require that the user be reminded of the definition of libel against a public figure. Libel of public figures requires ‘actual malice,’ defined as knowledge of falsity or reckless disregard for the truth. Sharing an item that has been flagged as untrue might trigger liability under libel laws.”

Goodman said, “A fatal flaw of Ann’s proposal is that it cannot define what is, or is not, ‘disinformation’ in a political message. Nevertheless, it proposes to tag threats of libel lawsuits and liability to thousands of American citizens who might want to retweet or forward a message that somebody else subjectively considers to be ‘disinformational.’ I call that the big chill.”

And Andrew Woodson, an elections lawyer and partner at Wiley Rein LLP, added, “Any proposal built on intimidating Americans from sharing news stories on social media is headed in the wrong direction.”

They also want to build a national database of all regulated political ads and discussions, a potential invasion of privacy especially for bloggers or people who comment on news and Facebook posts, Goodman warned.

“Americans should not be required to sign a national registry every time they post a political video on YouTube,” he said.

Ravel is clearly concerned about how the internet is used to influence voters, and worried that voters aren’t educated enough to recognize when they are being fed bad information. She is also concerned about financial disclosure, which is sometimes not required when no money changes hands.

“The money involved in online political advertising is more diffuse than ad buys on traditional media. Like traditional ads, some ads produced for the Internet have high production costs. Others, like memes, are free to create. Unlike television and radio ads, some online ads are placed for free. Posting an ad to one’s Facebook Page, or tweeting it into a politically active social network in hopes it goes viral, costs nothing. Advertisers might pay a platform to promote the ad and place it in certain users’ newsfeeds. They might also buy ‘likes,’ ‘shares,’ and ‘retweets’ outside of the platforms, from ‘troll farms’ and ‘sock puppets,’ which are humans who create false profiles and boost content, or from ‘bot armies,’ which are machines mimicking human behavior to boost content,” the trio wrote.
