This article first appeared in Digital Edge, The Edge Malaysia Weekly on June 7, 2021 - June 13, 2021
With the rise of social media and the emergence of new sites that give people a space to share their thoughts and opinions, the conversation around digital governance is becoming more complicated. While these platforms are supposed to act as digital public spaces for such discussions, their misuse raises the question of moderation.
Some have argued that the government should step in and regulate platforms while others believe this may hinder freedom of speech and prefer self-moderation. The question is, what should companies or agencies consider when governing a digital public space?
Self-regulation is one method worth considering, says Justin Ong, Deloitte Malaysia innovation and regulatory leader. Self-regulation is defined as any attempt by an industry to moderate its conduct with the intent of improving marketplace behaviour for the ultimate benefit of consumers.
Ong says that to ensure its success, greater transparency and disclosure of information are crucial to allow independent and timely scrutiny by stakeholders.
“Well-informed stakeholders could influence the behaviour of big techs, discouraging them from engaging in activities which may result in exposure to undue risk that undermine their interests,” he says.
“We believe innovation is best preserved when regulators focus on outcomes. In setting a new regulation or when increasing the height of scrutiny, policymakers are urged to focus on the ends (for example, has this innovative solution resulted in any harm?) instead of the means (for example, does this follow the prescribed steps and regulations?).”
Policymakers today face the difficult task of balancing the need to evolve at the pace of technological progress, with their statutory objectives centred on preserving the integrity and stability of the financial system and protecting end consumers.
Digital changes come faster than expected. Thus, it is important to understand that effective regulation depends on the regulators’ understanding of the solutions offered by businesses, their efficacy and their possible unintended consequences, says Ong.
“Big tech companies are increasingly attracting the attention of policymakers. We observe that there is a growing proliferation of financial services and products provided by big techs including payment, consumer and SME financing, crowdfunding, investment and insurance. These financial services businesses are expanding quickly, thanks in part to asymmetries in the current regulations between tech firms and traditional financial institutions,” he says.
“Policymakers are rethinking their oversight of new market players, shifting greater regulatory focus towards these tech titans and exploring the notion of ‘same activity, same risk, same supervision and regulation’. An alternative of supervisory oversight will be dependent on entity-based and activity-based approaches.”
It is very much a certainty that social media is here to stay, says Alvin Gan, head of technology consulting at KPMG Malaysia. But does it need to be regulated? Some would argue for regulation in the interest of keeping things civil. Others are diametrically opposed to the idea, in the interest of free speech.
Both sides have their merits, and governments struggle to find a balance in managing social media platforms. But a tight grip may strangle creativity and adversely impact small and medium businesses, since many use these platforms to promote their products and services.
But a completely laissez-faire approach may not be advisable either, as it can allow unfiltered fake information to seep into the system and have a sizeable impact on major issues in the country, such as national elections.
Gan suggests that governments find a middle ground and consider working with these tech giants that manage social media platforms.
“Another key consideration is the people factor, which at times, goes beyond what policies and regulations can curb. We can have the strictest of controls, but what end users post online is down to their maturity, culture, beliefs and emotions,” he says.
“While governments want everyone to colour within the box, it is imperative that they also allow for ways to make the box bigger. After all, having it any other way will only deepen the divide and worsen the already unregulated platforms.”
We are facing a big challenge of managing digital public spaces for which there is no quick fix, says Dr Rachel Gong, senior research associate at Khazanah Research Institute. There is a need to protect freedom of speech while guarding against harm.
Takedown notices and content moderation will not be able to keep up with the amount of content being produced, she says. Heavy-handed censorship and blanket website blocks cannot stop the flow of information or misinformation.
Hence, algorithm regulation could be an option, beginning with more transparency around platform algorithms to allow independent review and scrutiny of algorithm outcomes. Gong says regulatory oversight over big tech companies may need to be a global effort that includes stakeholders from different sectors and regions, rather than a self-funded initiative.
“An all-of-society solution will also need commitment to long-term goals of improving civic responsibility by improving education, especially in terms of ethics. The solution to a socio-technological problem lies not in focusing on technology, but on society,” she says.
A valid challenge to the freedom of speech is the example of someone shouting “fire” in a crowded theatre, says Gong. Whether there is actually a fire or not, the expected outcome would be a general panic.
If the intention of the speaker is to warn people of danger, the speaker needs to figure out if there is a better way to do so without causing a panic, thus engaging in some self-policing. However, Gong points out, if it is indeed the speaker’s intention to cause panic rather than alert people to danger, the right to do so should not be protected under the guise of freedom of speech.
“In the context of a digital public space, there would need to be rules, whether unspoken but commonly understood norms or systematically codified regulations, governing what is allowable as freedom of speech and what would engender, not just panic but possibly harm, such as hatred and violence,” she says.
“This goes hand in hand with how much influence and power the speaker has and how much amplification her content receives from recommendation algorithms.”
Early internet users had idealistic notions that cyberspace would be a democratic and fair place where everyone would be able to speak their mind free from government rule, says Gong. But what this utopian manifesto failed to recognise were structural inequalities that find their way into every aspect of society, from education to income to technology.
These idealists supposed that the internet would allow everyone an equal voice because “on the internet, nobody knows you’re a dog”, says Gong. The supposed anonymity online would reduce racial, gender and class discrimination that typically privilege the speech of rich and powerful men of a certain race.
“Minorities and disenfranchised groups would be able to voice their opinions and concerns and wield as much influence as majorities and dominant groups. Unsurprisingly, this has not come to pass,” she says.
“People with power and resources continue to hold disproportionate influence and control of the digital public space, not just because of existing inequalities of power and resources that benefit them, but also because of algorithms that perpetuate and exacerbate those inequalities.”
Simultaneously, these algorithms have elevated and amplified the voices of the kind of radical individuals likely to yell “fire” in a crowded theatre, says Gong. This is because the chaotic activity that ensues when people try to escape a burning theatre is analogous to the likes, reposts and online “engagement” that social media algorithms are programmed to maximise.
“The goal of social media algorithms is to promote digital content to users that they would enjoy and be more likely to engage with. The algorithms have no malicious intent in and of themselves,” she says.
“They are programmed by people who, one hopes, also have no malicious intent in and of themselves. But as the algorithms analyse more data (for example, user preferences and the history of user behaviour), they begin to behave in unexpected ways.”
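Gong's point about engagement maximisation can be illustrated with a toy sketch. The posts and scores below are entirely made up, and real platforms use far richer signals, but the core idea is the same: a feed ranker that sorts purely by predicted engagement will surface the most provocative content first, with no notion of accuracy or harm.

```python
# Toy illustration of an engagement-maximising feed ranker.
# All posts and scores here are hypothetical; this is not any
# platform's actual algorithm.

posts = [
    {"text": "Local council publishes budget report", "predicted_engagement": 0.02},
    {"text": "You won't BELIEVE what happened next!", "predicted_engagement": 0.31},
    {"text": "FIRE! Everyone panic!", "predicted_engagement": 0.55},
]

def rank_feed(posts):
    # Sort solely by predicted engagement (a stand-in for likes,
    # reposts and comments). Nothing in this objective rewards
    # truthfulness or penalises harm.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(f'{post["predicted_engagement"]:.2f}  {post["text"]}')
```

Run as written, the "fire in a crowded theatre" post comes out on top and the mundane civic report last, which is the amplification dynamic Gong describes.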