10 May 2019
Country: Singapore, Global
by: Grant Williams
Fake news is a “serious problem”—or so says Singapore Prime Minister Lee Hsien Loong.
It is so serious that his government has just passed a law giving ministers the power to order major social media platforms to flag posts they deem “fake” or misleading. In other words, the government, not the social media companies or independent auditors, will determine what is true and what is not.
What could possibly go wrong?
Singapore is not the only country that has proposed making the government the arbiter of social media content. Last year, Malaysia passed a similar law, under which the first offender was jailed within a month of its passage. Egyptian President Abdel Fattah el-Sisi gave “fake news” a whole new meaning by equating it with treason, subsequently arresting journalists and social media users for articles and posts that criticized the Egyptian state—or, in Sisi’s terminology, spread fake news.
Many are afraid that Singapore’s new law could take a similarly authoritarian turn.
“It is not up to the government to arbitrarily determine what is and is not true,” Daniel Bastard, head of the Asia-Pacific desk at Reporters Without Borders, said in a statement.
“We condemn this bill in the strongest possible terms because, in both form and substance, it poses unacceptable obstacles to the free flow of journalistically verified information.”
In its simplest form, fake news is false information disguised as legitimate news. While US President Donald Trump may have popularized the term, the phenomenon goes back millennia, from Octavian running a misinformation campaign against Mark Antony, to the pamphlet wars that followed Johannes Gutenberg’s invention of the printing press, to the trial of Galileo in 1633. It has taken off again in our time, as the Internet, smartphones and social media broadcast it like never before.
“We live in an era where the flow of information and misinformation has become almost overwhelming,” said First Vice President of the European Commission Frans Timmermans, commenting on the new challenges posed by social media platforms.
“That is why we need to give our citizens the tools to identify fake news, improve trust online, and manage the information they receive.”
Such tools are clearly and desperately needed. In Singapore, 80 percent of participants in a recent IPSOS survey believed that they could correctly identify fake news; 90 percent of them were proven wrong. According to a UK Parliamentary report from last year, only 2 percent of children in the UK have the critical literacy skills needed to identify a fake news story, and one third of teachers think that the literacy skills their pupils do learn are not applicable to the real world.
However, much like Singapore, the UK is proposing to solve this problem by increasing government oversight of the Internet, specifically naming a regulator who would have the power to block certain websites and make individual executives legally liable for harmful content spread on their platforms. In a country where only 43 percent of the population has trust in government institutions, is giving the government the ultimate say in what is seen and what is deleted from the Internet really the best idea?
No matter the country or context, increasing government control of online spaces has repeatedly opened the floodgates to bias and to serious infringements on freedom of the press. Partly for this reason, many of the big technology companies are proposing artificial intelligence (AI) tools as an alternative way to remove false information from their platforms.
“We don’t write the news that people read on the platform,” said Facebook CEO Mark Zuckerberg in a statement. “But at the same time we also know that what we do is a lot more than just distributing the news, and we are an important part of the public discourse.”
To this end, Facebook has put in place AI tools, programmed by humans, to identify, flag and remove false information from its platform. Many of these tools take their cue from content that human fact-checkers have previously flagged as inauthentic, and they are designed to work alongside those fact-checkers until the technology matures.
Unsurprisingly, academics and experts are not convinced.
“We are not aware of any AI system or prototype that can sort among various facts involved, let alone discern implicit attitudes,” said Dr. Gary Marcus, a professor of psychology and neural science at New York University. This lack of sophistication is not an isolated phenomenon; AI tools have also been criticized for missing coded violent messages and for replicating human biases.
Passing legislation that simply outlaws an issue is counterproductive; it fails to address the root cause. If people struggle to distinguish real news from fake news, or credible sources from inaccurate ones, then the antidote should be education. Further restricting access to a variety of information only perpetuates ignorance and prevents people from learning how to think critically and question what they read.
Perhaps it is time to stop expecting powerful governments, and even more powerful companies, to provide accurate information, especially when they have a vested interest in the outcome.
Just a thought.
To read more about Media Diversity Institute’s projects related to media and information literacy education, check out our projects page here.