
AWARE’s Response to the Online Safety (Miscellaneous Amendments) Bill

October 13th, 2022 | Children and Young People, Gender-based Violence, News, TFSV

AWARE welcomes the Online Safety (Miscellaneous Amendments) Bill that was tabled in Parliament last Monday. The Bill is timely: the recent Sunlight AfA poll on online harms found that 31% of respondents had experienced and/or witnessed gender-based online harms in Singapore.

We are glad to see the various provisions put forth to close the digital safety gap, i.e. the gap between advances in technology and existing capabilities to ensure user safety. For one, action can be taken against online communication service providers for non-compliance with the relevant Code of Practice, regardless of where they are located. In addition, online communication service providers and internet access providers may face heavy penalties (of up to $1 million and $500,000 respectively) for failing to stop Singapore-based users’ access to egregious content.

Even so, we have identified areas that this bill does not address.

1. Children’s online safety

The vulnerability of children to online harms, and the need to ensure their safety, are among the main concerns driving the introduction of this Bill.

Age verification technology has been identified by parent groups and digital rights activists as an effective strategy to reduce children’s risk of online harms. However, whether the Singapore government will mandate the use of such technology remains to be seen.

In response to MP Melvin Yong’s parliamentary question, the Ministry of Communications and Information (MCI) rightly pointed out that age verification can be technically challenging and carries data protection risks. But this difficulty should not deter us from implementing the best measures we can to protect children from premature exposure to online harms via social media services or pornographic websites.

We also reiterate our recommendation to introduce a mandatory onboarding programme and to require social media services to set up a resource centre for young users. This would equip young users with the knowledge they need to stay safe online, including how to report harmful content and change their privacy settings.

Further, the Government’s consultation paper released in July stated that social media services would be required to proactively detect and remove content containing child exploitation and abuse under the Code of Practice (in line with the UK’s Online Safety Bill and Australia’s Online Safety Act 2021). We look forward to more details in the draft Code on the mechanisms that providers will be required to implement.

2. Definition of “egregious content”

In terms of online sexual harms, the Bill includes, under the definition of “egregious content”, “content that advocates or instructs on sexual violence or [sexual] coercion” and materials depicting child sexual abuse or exploitation.

Yet it is unclear what it means to “advocate” or “instruct on” sexual violence and/or coercion. By extension, it’s also unclear whether “egregious content” will encompass all forms of technology-facilitated sexual violence (e.g. non-consensually obtained and/or distributed sexual images).

We seek greater clarity on the kinds of content that this category covers, and hope that illustrative examples of each type of “egregious content” will be provided.

3. Modes of communication covered

As it stands, the Bill regulates online communication but explicitly excludes SMS and MMS services, as well as direct messages (DMs), from its scope. This means that members of private chat groups in which sexual images and videos are shared non-consensually, such as the SG Nasi Lemak Telegram group, will not be held accountable under this Bill.

Recent reports on online harms have highlighted the pervasiveness of abuse perpetrated through DMs. A 2021 UNESCO study on online violence against women journalists found that almost half of the online threats respondents experienced came in the form of harassing DMs. In another study, by the Center for Countering Digital Hate, researchers observed that 1 in 15 DMs sent by strangers to high-profile women violated Instagram’s Community Standards, and that around 1 in 4 of the abusive images and videos sent constituted image-based sexual abuse.

We strongly encourage expanding the Bill’s scope to cover SMS services, MMS services and DMs, so as to strengthen protection from online harms in all user-to-user interactions.

4. Duties of providers 

Although providers face penalties for failing to stop access to egregious content, the Bill does not specify what the take-down process will look like, e.g. how quickly users can expect reported content to be taken down. In cases of non-consensual distribution of intimate images or videos, rapid take-down is crucial to containing their spread and minimising further harm to victim-survivors.

In Australia, providers must remove non-consensually distributed content within 24 hours of the eSafety Commissioner issuing a removal notice. Singapore should similarly stipulate a timeframe within which providers must comply with a take-down notice.

Additionally, we hope that the Government will clarify how providers will be assessed to have taken “all reasonably practicable steps” to comply with directions issued by the Info-communications Media Development Authority (IMDA).

5. Trauma-informed and gender-responsive approach

Trauma-informed and gender-responsive training should be provided to any IMDA staff handling complaints and take-down requests from end-users. This will enable the staff to provide non-judgmental support to victim-survivors and reduce their risk of re-traumatisation during investigations.

We look forward to these clarifications being made in the second reading of the Bill and in the draft Code of Practice.

Refer to our submission to the public consultation on Enhancing Online Safety For Users in Singapore, made to MCI in August 2022.