Nigeria’s struggles with internet content regulation remain unresolved despite prolonged debate. The government’s more recent attempts at social media regulation signal a shift from internet content regulation through speech control laws to platform regulation that bypasses the legislature. A properly thought-out platform responsibility regime would go a long way towards resolving Nigeria’s internet content regulation problems and may end the continuing friction between the government and internet stakeholders.
With the growing popularity of social media platforms (such as Twitter, which has become the de facto town square for public discourse) and their rising real-world implications, governments worldwide are increasingly seeking to curb harmful or illegal activities by citizens on social media.
Some of these harmful behaviours, such as hate speech, disinformation, misinformation, cybercrime, intellectual property infringement, child pornography and other harmful content disseminated, or conduct carried out, through social media, may, if unchecked, undermine social justice, public morality, public safety, democratic institutions and national security. The need to address these concerns has spawned global policy debates on possible frameworks for regulating internet content.
These debates fall under the broader conversation on internet regulation and have recently centred on whether social media companies should be made responsible for activities and content on their platforms. The debates are taking root in Africa, and in Nigeria specifically, amid ongoing friction between social media platforms and governments. Although Nigeria’s struggles with this issue mirror broader global challenges that remain unresolved, recent events suggest that Nigeria urgently needs a sustainable solution within its own context.
The Twitter Ban as a Problem of Internet Content Regulation
On June 4, 2021, the Nigerian government banned Twitter. In a decision from the Presidency relayed by the Ministry of Information, telecommunications companies were directed to block users in Nigeria from accessing Twitter. The ban came after Twitter removed a tweet in which President Muhammadu Buhari threatened violence against specific groups within the country in response to secessionist agitations. Twitter removed the tweet because it violated the microblogging platform’s user rules, which prohibit content that threatens or incites violence.
The government subsequently insisted that the ban was not a response to the deletion of the President’s tweet. Instead, it was said to be informed by the “litany of problems with the social media platform in Nigeria,” such as its use for spreading “misinformation and fake news which have had real-world violent consequences.” It would appear these issues finally came to a head following the deletion of the President’s tweet.
These concerns, to the extent that they are true, are instructive. Still, the impulse to ban Twitter, for what became seven months, reflects Nigeria’s heavy-handed approach to technology regulation and is condemnable for its human rights implications, authoritarian overtones and economic impact.
But make no mistake: Nigeria has a genuine need to limit the excesses of harmful online content and behaviour. This is especially true given the country’s polarising heterogeneity, which amplifies harmful content, particularly hate speech and disinformation, with potentially serious real-world effects. The Nigerian Twitter community, for example, is awash with tribalism, bigotry and hate speech, as individuals holding these views can easily find niches that reinforce them. Left unchecked, conflicting groups may well goad each other to violence.
Nigeria’s Earlier Attempts at Speech Control
Before the Twitter Ban, the Nigerian government had courted ideas of creating legislation to control or criminalise certain kinds of content on social media. Bills like the Frivolous Petitions and Other Matters Connected Therewith Bill (nicknamed the “Anti-Social Media Bill”), the Protection from Internet Falsehood and Manipulation and Other Related Matters Bill, 2019 (referred to as the “Social Media Bill”) and the National Commission for the Prohibition of Hate Speeches Bill (often called the “Hate Speech Bill”) were touted at various points over the last few years to address some of these concerns.
These bills failed to become law primarily because of well-founded human rights objections. The Anti-Social Media Bill, for example, sought to criminalise publications that discredit government institutions, amongst other problematic provisions. The Social Media Bill proposed draconian rules, criminalised a broad range of online interactions and was largely a thinly veiled attempt by the government to create a means of prosecuting online criticism of the government.
The Hate Speech Bill claimed to prohibit ethnic discrimination, hate speech, harassment on the basis of ethnicity, ethnic or racial contempt and discrimination by way of victimisation by individuals or corporate bodies. In reality, it was composed of deliberately ambiguous provisions that criminalised a wide range of otherwise permissible speech, proposed capital punishment for contravention, risked heightening ethnic intolerance and showed a similarly draconian verve.
Beyond their human rights failures, the bills were demonstrably insufficient attempts at solving the problem of harmful content on the internet. For one thing, speech control by criminalisation is not only blatantly anachronistic but also inherently problematic, especially when proposed with ulterior political motives.
For another, content-based laws or regulations, which discriminate against speech based on the substance of what it communicates, are tricky to navigate. They require a careful balancing act to distinguish speech that cannot rightly be outlawed even where it is politically uncomfortable (given the universally guaranteed right to freedom of expression) from speech that is sufficiently harmful to justify derogation. This nuance was sorely lacking in the proposed regulations.
Additionally, the mechanisms for enforcing what was effectively speech control in the proposed regulations were either not thought out, barely fleshed out, or categorically deficient. For bills that proposed to regulate speech, the glaring lack of transparent enforcement and contestation mechanisms and of independent administrative and redress measures meant that enforcement would be left entirely to the discretion of the government. This is simply unsustainable.
A Partial Platform Responsibility Regime for Nigeria
A platform responsibility approach to internet content regulation undertakes to moderate harmful activities on the internet by placing expectations on platform service providers as to what speech should be allowed on their platform, within the jurisdiction of the country establishing the regulation.
There are several reasons why platform responsibility is considered a good approach to internet content regulation. First, social media platforms, even where they operate within a country’s territorial jurisdiction and are subject to its laws, run their communities as private, independent zones.
Essentially, their policies and user rules dictate permissible behaviour on their platforms. They have the prerogative to do so because an online platform does not fall within the territorial jurisdiction of any country, even where the corporate entity running it does. This is not to say that governments cannot punish activities on these platforms in the real world; it simply means they cannot create rules of behaviour directly enforceable on those platforms.
Second, the infrastructure underlying the communities created by social media platforms is controlled exclusively by the social media companies. Governments can therefore hardly enforce any control over behaviour on the internet, at least not directly. Platform responsibility has emerged as a response to these challenges, as it allows governments to exercise needed control through the social media companies themselves.
Nigeria could use a platform responsibility regime. Not only would such a regime resolve some of the inherent limitations of previous attempts at speech control, it could also, where properly implemented, avoid the human rights problems inherent in a speech control approach.
It bears saying that if Nigeria is to establish a platform responsibility regime, it must do so with proper regard for human rights, with careful, well-thought-out mechanisms that respect international best practices, and through a multistakeholder process which ensures that the perspectives of a variety of actors within the internet ecosystem, including social media companies themselves, are synthesised.
To create a platform responsibility regime, Nigeria could enact a law establishing human-rights-protective rules around unlawful online content and activity. The law should carefully define hate speech, for instance, ensuring that only speech which legally warrants derogation from the right to freedom of expression is designated as harmful, and drawing a clear line between truly harmful speech and merely uncomfortable speech; the latter should not, under any circumstances, be considered unlawful.
The law would also impose responsibilities on social media platforms to support the implementation of these rules, leveraging existing content moderation infrastructure, especially around unique pain points such as bigotry and disinformation. These obligations should promote platform responsibility while allowing platforms adequate room to operate independently. Platforms should also receive protection under a “Good Samaritan” provision to encourage discretionary moderation.
The proposed law should incorporate comprehensive public and private “reporting procedures” to support the combating of unlawful activities. The public “reporting” process should be operated by a designated authority with demonstrable independence. The designated authority would receive complaints from public bodies about unlawful content on platforms, assess them against the yardstick of unlawfulness, and, if convinced that the content is unlawful, make a take-down request or other requests concerning other kinds of unlawful behaviour.
Platforms should be given sufficient time to consider take-down requests, with a window of at least 48 hours even for expedited requests. The authority may also conduct investigations and make requests on its own initiative. The private “reporting” process would allow individuals to make take-down requests based only on the law. As for individuals who post unlawful content, liability should not extend beyond takedown of the content, restriction of accounts or a ban from the platform.
Civil liability beyond the preceding may only be exercised where online content or activity is followed by or directly results in real-world harm. The law should generally avoid establishing criminal liability for content, but criminal liability, with limited sanctions, can be implemented for other online activities which would qualify as crimes under territorial law.
Platforms should be allowed sufficient discretion to decide whether a request is justified under the regulation and may refuse requests they believe are not. Further to this, there should be provision for quasi-judicial contestation of takedowns. To achieve this, an oversight authority can be created to interface with relevant platforms, with powers to consider contested takedowns, address complaints and conduct reviews as a first-level recourse.
Most importantly, however, the law should impose only limited liability on platforms for the content they distribute. This liability should be civil in scope and should be restricted to (a) content that the platform refuses to take down following a directive from the oversight authority, (b) content that the platform directly controls, and (c) content that the platform promotes under conditions incompatible with standards of net neutrality.
Liability should be graduated, with different penalties applying depending on severity, none of which should amount to veiled censorship. Importantly, non-platform infrastructural internet intermediaries such as Internet Service Providers (ISPs) and Content Delivery Networks (CDNs) should be excluded from liability.
Nigeria’s “Attempts” at Platform Responsibility
Following the Twitter Ban, the Nigerian Broadcasting Commission (“NBC”) was tasked with creating a regulatory regime for over-the-top (“OTT”) service providers, an attempt at internet content regulation through OTT services regulation. Through its “Framework for Online Media Regulation” and “Regulations of Over-The-Top Services and Video-On-Demand Services in Nigeria,” the NBC attempted to do just that by categorising social media platforms as providers of broadcasting services and requiring them to register as OTTs. This attempt, which eventually failed, was criticised for its back-door approach to platform regulation and for erroneously categorising social media platforms as broadcasting entities.
More recently, the National Information Technology Development Agency (“NITDA”) proposed a Draft Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries, which remains the most coherent attempt at internet content regulation through platform responsibility. The draft code, which was opened for stakeholder comments, was nonetheless criticised for its problematic provisions.
In addition to being ambiguous, the Code requires platforms to treat requests and take down prohibited content within 24 hours, creates criminal liability for users in Nigeria for a broad range of content labelled “prohibited,” creates liability by proxy for users in Nigeria who share content originating outside Nigeria, compels platforms to police users and generally imposes suspect requirements on platforms to share trend data. Although the Code has not been passed by NITDA, the predominant sentiment is that it should not be.
Nigeria’s struggles with internet content regulation continue. The government’s more recent attempts at social media regulation through the NBC and NITDA signal a shift from internet content regulation through speech control laws, which have so far been rebuffed by the legislature, to the use of existing regulatory bodies to establish subsidiary legislation that bypasses the legislature. This underscores the government’s determination to regulate social media, for better or worse.
If the Twitter Ban is any evidence, gung-ho rules driven by political impulse cause more problems than they solve. Nigeria is estimated to have lost no less than half a trillion naira to the seven-month ban.
A properly thought-out platform liability regime would go a long way towards resolving the issues of internet content regulation in Nigeria and ending the continuing friction between the government and other internet stakeholders. If nothing else, it would limit, if not completely prevent, perversions of regulation such as the Twitter Ban. It could also provide important direction for other African countries with similar political contexts experiencing similar challenges.
Vincent Okonkwo | Lead Research Analyst, Tech and Innovation Policy | firstname.lastname@example.org
The opinions expressed are the sole responsibility of the authors and do not necessarily represent the official position of borg.
The ideas expressed qualify for copyright and are protected under the Berne Convention.
Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the publisher is notified.
©2022 borg. Legal & Policy Research