A Boston University professor has an idea for combating fake news: Taxation.
According to an article on the Boston University (BU) website, BU’s Marshall Van Alstyne thinks that false news can have negative real-world ramifications. For instance, he sees a connection between a measles outbreak and anti-vaccination content online.
The BU professor has suggested levying a tax for the damage caused by fake news content. He argues this would tax the negative consequences of fake news rather than speech itself: “What you’re doing is you’re taxing the damage. You’re not taxing the speech,” he said.
Van Alstyne engaged with reader comments on the BU website and noted that some readers had reacted to the article by saying his tax idea “sounds totalitarian.”
“A few readers have reached out to object that the tax proposal sounds totalitarian!” he said. “I’m delighted they’re willing to engage and they deserve a thoughtful response.”
Van Alstyne compared the dissemination of fake news to the contamination of the physical environment:
The first point is to recognize disinformation as a form of pollution in your news feed just like carbon monoxide in your air supply or dioxin in your water stream. And, because fake news generates engagement, social platforms aren’t sufficiently motivated to clean the contaminants. Polluters need incentives to stop passing their poison.
In another comment beneath the article, Van Alstyne explained that the tax would apply to platforms, not to individuals:
Some critiques, however, are based on a misconception regarding how such a pollution tax would be applied. It was *never* intended to apply to individual people or to individual messages. No, instead, it was intended to apply to platforms that pick up and amplify messages and it would apply to these platforms only as a means of curbing distortions caused by their business models.
He said, “What platforms *choose* to amplify is what the pollution tax targets.”
Extending the comparison between fake news and environmental pollution, he said that sampling a portion of Facebook content could identify the fake news contamination on the platform: “Facebook generates 4 petabytes of data each day. Like testing for pollutants in air or water, you don’t need to fact check everything. Just take a statistical sample to check the levels of contaminants.”
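The sampling approach Van Alstyne describes is ordinary statistical estimation: rather than fact-checking every post, check a random sample and infer the platform-wide contamination rate. A minimal illustrative sketch, with hypothetical names and a stand-in labeling function (the article does not specify any implementation), might look like this:

```python
import math
import random

def estimate_contamination(posts, sample_size, is_fake, z=1.96):
    """Estimate the share of 'contaminated' (fake) posts from a random sample.

    Only the sample is ever fact-checked; `is_fake` stands in for a human
    or third-party fact-checking step. Returns the point estimate and a
    normal-approximation margin of error (z=1.96 gives ~95% confidence).
    """
    sample = random.sample(posts, sample_size)
    flagged = sum(1 for post in sample if is_fake(post))
    p_hat = flagged / sample_size
    # Margin of error for a sample proportion: z * sqrt(p(1-p)/n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

# Toy demonstration: a pool in which 5% of posts are fake.
random.seed(0)
pool = ["fake"] * 50 + ["real"] * 950
rate, moe = estimate_contamination(pool, 200, lambda p: p == "fake")
print(f"estimated contamination: {rate:.1%} ± {moe:.1%}")
```

The key point the sketch makes concrete is that the sample size needed for a tight estimate depends on the desired precision, not on the size of the platform, which is why checking a few hundred items can characterize petabytes of daily content.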
He suggested that independent entities would have a role:
One of the best ways to reduce bias is to separate the rules that define fake news, from the adjudication of fake news, and from enforcing penalties for fake news the same way we separate legislative, judicial, and executive branches of government. Critically, government should *not* be the certification authority but neither should Facebook. Both have too much potential for self-interest. We might need new organizations, more like FactCheck.org and Snopes, that are as independent as possible for certification.
Van Alstyne noted in his comments that actions could be limited to foreign actors:
A good solution adapts easily to tailored goals. Suppose you want to police (some) disinformation but not encumber free speech at all. There’s a solution for that too. In that case, apply the penalty narrowly to disinformation spread by foreign governments. They don’t have a citizen’s right to speak and shouldn’t be meddling in our elections anyway. Such a narrow intervention would reduce disinformation from foreign adversaries but have no effect on citizens themselves.
For reference, the tax on foreign fake news pollution is only one of four separate ideas for fighting fake news. It happens to be the most controversial and so the most fake newsworthy :-). A full paper, “The Problem of Fake News,” will be available in July 2019.