Minimum age requirements, user verification, and mechanisms to remove harmful content at short notice are among the regulatory requirements businesses across APAC now face.

The Asia-Pacific (APAC) region has experienced a significant surge in internet users in recent years. However, increased connectivity has also enabled the rapid spread of harmful online content. A survey conducted by Singapore’s Ministry of Digital Development and Information in July 2024 found that 74% of respondents had encountered harmful online content, a nine-percentage-point increase from 65% in 2023.
In response to concerns over online safety, regulators across APAC are enacting overarching regulations to enhance online protection. Most recently, Malaysia’s Parliament passed the Online Safety Bill 2024, Singapore published a new online safety code of practice for app distribution services (ADSs), and Vietnam’s stringent rules for social network services came into force. While the new rules focus on broader online safety, some jurisdictions have taken further measures aimed at protecting children from harm, such as Singapore’s new age assurance requirements for ADSs and Australia’s minimum age ban for social media platforms.
As APAC regulators continue to prioritise online safety, businesses should stay informed and adapt their practices to meet emerging compliance obligations.
Lexology PRO explores the region’s approaches towards online safety and key tips for businesses.
How are lawmakers and regulators in APAC approaching online safety?
Governments across APAC have taken varied approaches to addressing online safety concerns. Some have implemented comprehensive regulatory frameworks, such as Singapore’s Online Safety Act (OSA) 2022 and Australia’s OSA 2021, which require online platforms to remove harmful content at short notice. Other countries, like Japan and South Korea, have relied on existing privacy and data protection laws to regulate online safety.
More recently, some APAC jurisdictions have extended protections to minors in the digital space. Australia passed a law banning social media use for children under 16, which comes into effect in December 2025. Similarly, Singapore has issued new codes of practice with age assurance measures for app distribution services.
Jurisdictions with established regulatory frameworks for online safety
Australia’s OSA 2021 regulates eight categories of online service providers, including social media services, app distribution services, and search engines. The law requires these providers to undertake risk assessments, determine their services’ risk profiles, maintain records, and implement safety features to detect and remove harmful content. Furthermore, amendments to the OSA passed on 29 November 2024 require age-restricted social media platforms to take “reasonable steps” to prevent underage users from holding accounts. The eSafety Commissioner will publish guidelines on these requirements before the new rules take effect.
Similarly, Singapore’s OSA regulates “egregious content”, including suicide-, violence-, and terrorism-related content, among others. The law, which took effect on 1 February 2023, imposes obligations on online communication services, including foreign businesses providing services to users in Singapore. More recently, Singapore’s regulator issued a new online safety code of practice on 15 January 2025, requiring app distribution services to implement age assurance measures by 31 March 2025.
Jurisdictions with proposed laws
The upper house of Malaysia’s Parliament passed the Online Safety Bill 2024 on 16 December 2024, regulating harmful content relating to indecency, terrorism, and drugs. The legislation is awaiting Royal Assent. Under the law, service providers must implement measures to mitigate users’ exposure to harmful content, issue user guidelines, establish a reporting mechanism for harmful content, take down harmful content, and develop an online safety plan.
Indonesia is also taking steps to legislate online safety protections for children. On 2 February 2025, the Ministry of Communications and Information Technology established a special team to strengthen children’s digital safety (Indonesian language only), with draft legislation targeted for completion by April 2025. The regulator is considering introducing age restrictions on children’s use of social media to reduce their exposure to harmful content.
Jurisdictions with additional rules
Meanwhile, China has issued regulations on the protection of minors in cyberspace (simplified Chinese language only), targeting content harmful to minors, such as obscenity, cults, and extremism. The regulations took effect on 1 January 2024 and require online service providers to block information harmful to minors, restrict information potentially harmful to minors, and avoid automated decision-making in commercial marketing targeting minors.
Vietnam’s Decree 147 on managing, provisioning, and using internet services (Vietnamese language only) took effect on 25 December 2024. The decree introduces stricter requirements for onshore and offshore providers, such as local data storage, user authentication through Vietnamese credentials, and active content monitoring. Furthermore, it requires platform operators to block or remove infringing content within 24 hours of a request from the authorities. The decree defines illegal content as content that threatens national security or social order and safety, or that violates ethics, customs, and laws.
What are some key tips for businesses?
Review services against regulatory categories
Businesses should conduct a thorough review of their services or platforms to determine the risk of users being exposed to harmful content under online safety laws. As a starting point, companies could assess whether their services are accessible to children. For example, the UK’s OSA requires platforms to prevent children from accessing harmful and age-inappropriate content. While not mandatory in APAC, benchmarking against the UK requirements could help businesses stay ahead of the risk of exposing users to inappropriate content.
Companies should then update their privacy settings, policies, and terms of use accordingly. For instance, Vietnam’s Decree 147 requires social network service providers to ensure users agree to their terms of service. Such agreements must include mechanisms for handling user complaints and descriptions of the measures taken to protect users’ rights over their data.
Implement risk mitigation measures
Companies should establish robust measures to reduce the risk of users encountering harmful content, such as user verification, dedicated rules for new users, and proactive threat detection. Some platforms, such as Google, also use artificial intelligence and machine learning for content moderation.
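To make the idea of proactive screening concrete, the sketch below shows one possible moderation gate. It is purely illustrative: score_toxicity() is a hypothetical stand-in for an in-house or third-party classifier, and the thresholds are assumptions, not values drawn from any platform or regulation.

```python
# Minimal sketch of a proactive content-screening gate.
# Assumptions (not from the article): score_toxicity() stands in for an
# ML classifier; the thresholds are illustrative only.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.9   # auto-remove content scoring above this
REVIEW_THRESHOLD = 0.4  # route borderline content to human moderators

@dataclass
class Decision:
    action: str   # "allow", "review", or "block"
    score: float

def score_toxicity(text: str) -> float:
    """Placeholder classifier; a real system would call an ML model."""
    flagged_terms = {"terrorism", "self-harm"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def moderate(text: str) -> Decision:
    score = score_toxicity(text)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)   # remove and log for audit
    if score >= REVIEW_THRESHOLD:
        return Decision("review", score)  # queue for human review
    return Decision("allow", score)

print(moderate("holiday photos").action)  # allow
```

The two-threshold design reflects a common compromise: fully automated removal only for high-confidence cases, with borderline content escalated to human moderators to limit over-blocking.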
Implement age verification measures
Platforms offering age-restricted content or services should adopt robust age assurance systems to comply with emerging regulations. Some effective methods include the following (a brief sketch of an account-based check appears after this list):
- account-based assurance;
- vouching for another person’s age;
- biometrics and capacity testing to estimate age based on characteristics; and
- artificial intelligence profiling or inference models.
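As a rough illustration of the first method, account-based assurance, the sketch below checks a verified date of birth against a jurisdiction-specific minimum age. The data model, the verification flag, and the minimum-age table are assumptions for illustration only; the 16-year entry mirrors Australia's ban described above.

```python
# Sketch of an account-based age-assurance check.
# Assumptions: accounts hold a verified date of birth; the minimum-age
# table is illustrative (16 mirrors Australia's social media ban).
from datetime import date

MIN_AGE = {"AU": 16}  # illustrative per-jurisdiction minimums
DEFAULT_MIN_AGE = 18  # conservative fallback for unlisted jurisdictions

def age_on(dob: date, today: date) -> int:
    """Whole years elapsed between dob and today."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def may_hold_account(dob: date, dob_verified: bool, jurisdiction: str) -> bool:
    """Allow an account only if a *verified* age meets the local minimum."""
    if not dob_verified:
        return False  # self-declared, unverified ages fail assurance
    return age_on(dob, date.today()) >= MIN_AGE.get(jurisdiction, DEFAULT_MIN_AGE)

# False while the account holder is under the AU minimum of 16
print(may_hold_account(date(2011, 6, 1), True, "AU"))
```

In practice, the verification flag would be backed by one of the assurance methods listed above, such as ID documents, carrier records, or biometric estimation.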
For example, China’s age verification regulations require all internet users to register with their real names and official identification. Other jurisdictions, such as Japan and South Korea, rely on mobile carrier data, verifying users’ ages through their mobile service providers.
Establish a quick content removal mechanism
Businesses should implement efficient procedures to comply with authorities’ requests to remove harmful content. For instance, India’s Information Technology rules require online platforms to remove content upon court orders or government notifications, and Japan’s amended Provider Liability Limitation Act 2022 requires large platform operators to accelerate their responses to deletion requests and make their operations more transparent.
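One way to operationalise such deadlines is to track each request against its removal window. The sketch below is a minimal illustration assuming a 24-hour window of the kind Vietnam's Decree 147 imposes; the TakedownRequest model and field names are hypothetical, not a prescribed implementation.

```python
# Sketch of a takedown-request tracker with a regulatory deadline.
# Assumption: a 24-hour removal window, as under Vietnam's Decree 147;
# the TakedownRequest model itself is illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=24)

@dataclass
class TakedownRequest:
    content_id: str
    received_at: datetime
    removed_at: Optional[datetime] = None

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        """True if the content is still up past the regulatory deadline."""
        return self.removed_at is None and now > self.deadline

req = TakedownRequest("post-123", received_at=datetime.now(timezone.utc))
print(req.is_overdue(datetime.now(timezone.utc)))  # False: window still open
```

A tracker like this also produces the audit trail (receipt and removal timestamps) that regulators typically expect when verifying compliance.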
Ensure compliance with privacy laws
Companies should align their processes and protocols with existing privacy and data protection laws. Notably, Australia’s amended OSA sets out additional privacy obligations regarding personal information collected for age assurance purposes. India’s Digital Personal Data Protection Act 2023 requires parental consent for processing the personal data of children under 18; draft rules implementing the Act were under consultation until 18 February 2025.