The statement that “Thailand has no AI law” is only partially accurate. As of March 2026, no comprehensive AI statute has been enacted. However, the “Draft Principles of the AI Law” were finalized in June 2025, and ETDA is now drafting the actual statutory text. The risk-based structure closely mirrors the EU AI Act, with a clear distinction between provider and deployer obligations. This article maps the Draft AI Law’s framework at the level of statutory structure.
Legislative History — From Two Bills to One Framework
2023–2025: Two Parallel Draft Laws
Thailand’s AI regulation took an unusual path: two government agencies developed competing approaches simultaneously.
ONDE (Office of the National Digital Economy and Society Commission) Draft: A regulatory royal decree focused on transparency, safety, and fairness in commercial AI use, with mandatory pre-approval for high-risk AI systems.
ETDA (Electronic Transactions Development Agency) Draft: An innovation-focused support law emphasizing AI sandboxes, data sharing, and a flexible risk-based classification system modeled on the EU AI Act.
June 2025: Consolidated “Draft Principles of the AI Law”
Following public hearings, the two drafts were merged in June 2025 into the “Draft Principles of the AI Law.” ETDA continues to lead the drafting of actual statutory language, with no confirmed timeline for enactment. However, AI-dependent businesses are entering a phase where “wait and see” is no longer adequate preparation.
Risk Classification Structure — Comparison with EU AI Act
The Draft AI Law adopts a risk-based approach. The table below compares it with the EU AI Act:
| Risk Level | Thailand Draft AI Law | EU AI Act |
|---|---|---|
| Prohibited | Social scoring, manipulation, indiscriminate biometric collection | Article 5 (prohibited practices) |
| High-Risk | Employment, credit, medical diagnosis, judicial support, etc. | Annex III (high-risk AI systems) |
| Limited-Risk | Chatbots, deepfakes (transparency obligations only) | Limited-risk AI |
| Minimal-Risk | Spam filters, AI games (no regulation) | Minimal-risk AI |
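The four-tier scheme above can be read as a simple lookup from use case to obligation level. The following Python sketch is purely illustrative: the tier names and example use-case assignments come from the table, but the actual prohibited and high-risk lists will only be fixed post-enactment by regulator notifications, so any real classification logic would have to track those notifications rather than a hard-coded table.

```python
# Toy mapping of example AI use cases to the four risk tiers described
# in Thailand's Draft Principles of the AI Law. Illustrative only: the
# definitive lists will be issued by supervisory authorities after
# enactment, not hard-coded as below.
RISK_TIERS = {
    "prohibited": {"social scoring", "subliminal manipulation",
                   "indiscriminate biometric collection"},
    "high": {"hiring decisions", "credit scoring", "medical diagnosis",
             "sentencing support"},
    "limited": {"customer chatbot", "deepfake generation"},
    "minimal": {"spam filter", "ai game"},
}

def classify(use_case: str) -> str:
    """Return the draft-law risk tier for a known example use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    # Anything not yet listed is unclassified until the regulator
    # notifications define the scope of each tier.
    return "unclassified"

print(classify("Credit scoring"))   # high
print(classify("Spam filter"))      # minimal
print(classify("Weather forecast")) # unclassified
```

The "unclassified" fallback mirrors the uncertainty discussed below: because Thailand delegates the risk lists to post-enactment notifications, the boundary of each tier is not knowable from the statute alone.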
Key Thai Feature: Delegated Rulemaking for Risk Lists
Unlike the EU AI Act, which specifies the high-risk AI list in Annex III of the statute itself, Thailand’s Draft AI Law delegates the specific lists of prohibited and high-risk AI to be determined post-enactment by supervisory authorities (ETDA and sector-specific regulators) via official notifications. This approach offers flexibility but may create uncertainty about scope at the time of initial enactment.
Prohibited AI — What Is Banned?
The prohibited AI category covers practices analogous to those in Article 5 of the EU AI Act:
① Social scoring systems: AI that comprehensively evaluates individuals’ social behavior to restrict their access to rights, opportunities, or services
② Subliminal or deceptive manipulation: AI using techniques that impair individuals’ autonomous decision-making (e.g., exploiting unconscious emotional triggers)
③ Indiscriminate biometric collection: Mass biometric surveillance systems such as indiscriminate facial recognition in public spaces
④ Exploitation of vulnerable populations: AI that exploits the vulnerabilities of children, the elderly, or other at-risk groups
This is the most heavily regulated category: these practices are banned regardless of purpose or claimed benefit.
High-Risk AI — Registration, Conformity Assessment, and Monitoring
Sectors likely to be designated as high-risk (based on current Draft Principles):
- Employment-related decisions (hiring, promotion, dismissal)
- Credit scoring, insurance underwriting, lending decisions
- Medical diagnosis and treatment planning support
- Judicial proceedings and sentencing support
- Critical infrastructure management systems
- Educational assessment and credential evaluation
Key Obligations for High-Risk AI
| Obligation | Content |
|---|---|
| Registration | Pre-registration with the AI Governance Center (AIGC) |
| Conformity assessment | Risk evaluation and (where required) third-party audit before deployment |
| Risk management | Ongoing risk monitoring and corrective action |
| Technical documentation | Records of training data, algorithms, and accuracy evaluations |
| Human oversight | Procedures for human monitoring and intervention in automated decisions |
| Incident reporting | Mandatory reporting of significant incidents to authorities |
Provider vs. Deployer Obligations — The Key Distinction
A central concept of the Draft AI Law is the distinction between “Providers” and “Deployers.”
Providers (AI Developers / Suppliers)
- Entities that develop or supply AI systems for use in the Thai market
- Key obligations: Conformity assessment, technical documentation, AIGC registration, incident reporting
Deployers (AI Users)
- Entities that deploy AI systems obtained from providers in their own business operations or services
- Key obligations: Compliant use of high-risk AI (adhering to provider’s usage conditions), AI literacy training for staff, monitoring framework, incident response for specific use cases
Most Japanese Companies Are Deployers
Japanese companies using commercially available AI tools (ChatGPT API, Gemini, Copilot, etc.) in their operations are typically classified as deployers. Deployer obligations are lighter than provider obligations, but deployers using high-risk AI must pay attention to:
- AI in recruitment, performance evaluation, or credit decisions → compliance with high-risk AI usage conditions
- Automated decision-making → concurrent application of PDPA Sections 39–40 (notification and explanation duties)
- Outsourced AI processing → obligation to verify the AI risk management practices of processors
AI Governance Center (AIGC)
The Draft AI Law plans to establish an AI Governance Center (AIGC) under ETDA.
Key AIGC roles:
- Receiving and reviewing high-risk AI registrations
- Managing and approving AI sandbox applications
- Developing interpretive guidelines for the AI Law
- Coordinating authority with sector-specific regulators (BOT, SEC, FDA, etc.)
Delegated Sector Regulation: Financial AI (BOT), medical AI (FDA), and securities AI (SEC) will be regulated by sector-specific authorities. Compliance requires monitoring multiple regulators depending on the industry.
AI Sandbox and Individual Rights
AI Sandbox
The Draft AI Law provides for a sandbox — a regulatory relief environment for testing innovative AI systems. Companies approved by ETDA can operate within the sandbox for a defined period, exempt from standard requirements. This could benefit startups and companies developing new AI services.
Individual Rights
The Draft AI Law plans to grant individuals the following rights:
- Right to be informed: Right to notification when an AI system makes decisions affecting the individual
- Right to explanation: Right to request information about the AI’s purpose and logic
- Right to object: Right to contest automated decisions that adversely affect the individual (linked to PDPA Section 39)
- Right to human review: Right to request human reassessment of significant automated decisions
Extraterritorial Application and Local Representative Requirement
The Draft AI Law is expected to include extraterritorial reach.
Scope: Foreign AI providers that offer AI services to users in Thailand, or monitor the behavior of persons in Thailand, will fall within scope.
Local Representative: Analogous to the EU AI Act’s requirement for an EU representative, the Draft AI Law is expected to require foreign providers to designate a local representative in Thailand.
Penalty Structure
The Draft AI Law’s penalty framework (as anticipated based on the Draft Principles):
- Administrative penalties: Graduated fines based on severity, intent, and scale of harm
- Criminal penalties: Imprisonment for intentional development or deployment of prohibited AI
- Corporate liability: Corporate fines plus personal liability for directors and executives
Comparison: Thailand, EU AI Act, and Japan’s AI Guidelines
| Element | Thailand Draft AI Law | EU AI Act | Japan AI Business Guidelines |
|---|---|---|---|
| Legal character | Binding statute (when enacted) | Binding regulation | Non-binding guidelines |
| Risk classification | Prohibited / High / Limited / Minimal | Prohibited / High / Limited / Minimal | None (principles-based) |
| High-risk AI registration | Mandatory (planned) | Mandatory | None |
| Extraterritorial application | Yes (planned) | Yes | None |
| Penalties | Administrative + criminal | Administrative (up to €35 million or 7% of global annual turnover) | None |
Related Articles
- Thai AI Regulation 2026: What Companies Need to Know
- ← Vol. 2: PDPA Enforcement and AI Nexus
- Vol. 4: Thailand’s E-Commerce Regulations →
Next in the Series
Volume 4 (March 25, 2026): Thailand’s e-commerce platform regulations — the legal basis in the Trade Competition Act, the nature and scope of the TCCT Guidelines, six regulated conduct categories, and the structure of the Draft Platform Economy Act (PEA), compared with the EU DMA/DSA and Japan’s Transparency Act.
This article is for general informational purposes about Thailand’s legal system and does not constitute legal advice under Thai law. For specific matters, please consult a Thai-qualified legal professional. Our firm works in collaboration with JTJB International Lawyers’ Thai-qualified attorneys.