Southeast Asia is tightening its online safety laws, from content takedowns to child protection. Businesses must act by assessing risks, strengthening content governance, and enforcing continuous monitoring.
Key takeaways:
- Online safety laws across Southeast Asia are shifting from voluntary guidelines to enforceable obligations.
- Businesses must proactively assess risks, especially for child users, and implement strong privacy and content governance measures.
- Continuous monitoring and regular policy updates are essential to stay compliant.

Southeast Asia’s digital economy is accelerating rapidly and is expected to be worth US$1 trillion by 2030, according to 2023 research. However, widespread digital adoption also heightens exposure to online threats, ranging from fraud to risks to child safety.
Regulators across the region are responding with stricter regulations, shifting from user-driven complaint mechanisms to platform accountability. Expected to take effect by year end, Malaysia’s Online Safety Act 2024 requires service providers to enhance online safety and mitigate risks arising from harmful content. This legislation mirrors a broader global regulatory trend holding tech giants and platforms accountable for online harm.
Many of these frameworks apply extraterritorially, capturing both domestic and foreign providers offering online services in the region. For multinational businesses, this patchwork of regimes creates a complex compliance landscape that demands close monitoring and strategic adaptation.
How are Southeast Asian regulators approaching online safety?
Singapore introduced the Online Safety (Relief and Accountability) Bill 2025 in October 2025, establishing statutory torts and a new Online Safety Commission. The bill imposes obligations on platforms and online space administrators. The proposed law builds on the Code of Practice for Online Safety published in March 2025, which required designated app distribution services to implement safeguards against harmful content. The bill is scheduled for its second reading on 4 November 2025.
Thailand tightened its content moderation regime with a 24-hour takedown rule (Thai language only), effective 4 July 2025. Under the regulation, platforms must remove material deemed false or misleading within one day of receiving official notice. Non-compliant businesses risk losing their safe-harbour protections, exposing themselves to liability for tech-related offences, such as fraud.
Similarly, Indonesia enhanced its content moderation oversight through SAMAN, a compliance system which became fully operational (Indonesian language only) in October 2025. Platforms must promptly remove harmful content, such as pornography, gambling, and terrorism, or face administrative fines based on the severity and type of violation.
Indonesia was also among the first countries in the region to regulate online harm against children (Indonesian language only). The rule adopts a risk-based approach, requiring electronic system operators (ESOs) to assess child safety risks, enforce age restrictions, and obtain parental consent. ESOs have until 27 March 2027 to comply with child protection requirements, with no administrative penalties during this period, though civil lawsuits and criminal liability under other laws still apply.
Malaysia has signalled a similar approach to child protection. The government reportedly plans to enforce electronic identity verification to prevent children under 13 from creating social media accounts. More broadly, the Online Safety Act 2024 is expected to take effect by the end of 2025, following the finalisation of regulatory guidelines. Key obligations for platforms include ensuring platform safety, protecting children under 13, and limiting access to harmful material.
Meanwhile, the Vietnamese government regulates online safety through Decree No. 147/2024, which imposes strict requirements on content control, user authentication, data storage, and service licensing. Under the rule, online platforms must remove flagged content within 24 hours of a government request and within 48 hours of a user report. They must also classify and display warnings for content unsuitable for children.
What does this mean in practice for businesses?
Conduct a risk assessment
As online safety rules across Southeast Asia progress from broad guidelines to enforceable laws, online service providers should assess their exposure to online harms, especially those affecting vulnerable groups like children.
An assessment should include mapping gaps in privacy protections and ensuring strong default privacy settings. Companies should strictly avoid profiling children unless they can demonstrate that such practices are in the children’s best interest.
Establish robust content governance frameworks
As regulations increasingly hold platforms accountable for online safety, businesses should implement clear and accessible content policies to demonstrate compliance. These frameworks should outline content guidelines, moderation procedures, and enforcement actions for violations. Empowering users through transparent policies, user-friendly reporting tools, and, where necessary, age verification measures could boost platform safety and trust.
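Where age verification is required, one building block is a simple age-gate check. The sketch below is purely illustrative: the function name, the 13-year threshold (based on Malaysia's reported plan to bar under-13s from social media accounts), and the date-of-birth input are assumptions for this example, not requirements drawn from any statute's implementing rules, and real deployments would pair this with identity verification rather than self-declared birthdates.

```python
from datetime import date

# Assumed threshold for this sketch: Malaysia reportedly plans to bar
# children under 13 from creating social media accounts.
MIN_AGE = 13

def meets_minimum_age(date_of_birth: date, today: date, min_age: int = MIN_AGE) -> bool:
    """Return True if the user has reached min_age as of `today`."""
    # Subtract one if this year's birthday has not yet occurred.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= min_age
```

For example, a user born on 1 June 2012 would fail the check on 31 May 2025 but pass it the next day, once the thirteenth birthday has passed.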
Implement continuous monitoring
To keep pace with tightening content moderation rules, businesses should deploy systems that continuously monitor for and swiftly remove harmful content. When complete removal isn’t feasible, platforms could consider alternatives such as geo-blocking, content demotion, or restricted access based on user location or age. Regularly reviewing and updating moderation protocols is essential to staying compliant with evolving regulatory expectations.
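Because takedown windows differ by jurisdiction and notice type, moderation tooling often needs to compute a response deadline per notice. The sketch below is a minimal illustration using the windows described above (Thailand's 24-hour rule for official notices; Vietnam's 24 hours for government requests and 48 hours for user reports under Decree 147/2024); the dictionary keys and function name are assumptions for this example, not official terminology.

```python
from datetime import datetime, timedelta

# Takedown windows as described in the regulations above.
# (jurisdiction, notice_type) keys are illustrative labels, not legal terms.
TAKEDOWN_WINDOWS = {
    ("TH", "official_notice"): timedelta(hours=24),      # Thailand 24-hour rule
    ("VN", "government_request"): timedelta(hours=24),   # Vietnam, Decree 147/2024
    ("VN", "user_report"): timedelta(hours=48),          # Vietnam, Decree 147/2024
}

def takedown_deadline(jurisdiction: str, notice_type: str,
                      received_at: datetime) -> datetime:
    """Return the latest time by which flagged content must be removed."""
    window = TAKEDOWN_WINDOWS[(jurisdiction, notice_type)]
    return received_at + window
```

A compliance queue could sort open notices by this deadline so the shortest statutory windows are actioned first.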