The Ultimate Guide to PDPA Compliance for AI Usage in Thailand
Introduction: The AI Revolution Meets Thai Law
Artificial intelligence is transforming how businesses in Thailand operate. From law firms using ChatGPT for legal research to banks using AI for customer service, the productivity gains are undeniable. But with this new power comes new legal risks.
In October 2025, Thailand’s Supreme Court and the National Cyber Security Agency (NCSA) released mandatory guidelines for AI usage. These guidelines, an extension of the Personal Data Protection Act (PDPA), create a new set of rules for any organization using AI tools.
The core requirement is simple but strict: Organizations must prevent personal and confidential data from being sent to AI models.
This means if your team is using ChatGPT, Claude, Gemini, or any other AI tool without a dedicated compliance layer, you are likely in violation of the PDPA. The penalties are severe: up to 20 million baht in fines and potential criminal liability for executives.
This guide will walk you through everything you need to know about PDPA compliance for AI usage in Thailand. We’ll cover:
- What the PDPA says about AI
- The key requirements of the October 2025 guidelines
- The risks of non-compliance
- Practical steps to make your AI usage compliant
- How Cazimir can help you automate the entire process
By the end of this guide, you’ll have a clear understanding of your legal obligations and a practical framework for using AI safely, responsibly, and compliantly.
Understanding the PDPA: A Quick Refresher
Before we dive into the AI-specific guidelines, let’s quickly recap the core principles of Thailand’s Personal Data Protection Act (PDPA). Enacted in 2019, the PDPA is Thailand’s equivalent of the EU’s GDPR. It governs how organizations collect, use, and disclose personal data.
The PDPA is built on seven key principles:
- Lawfulness, Fairness, and Transparency: You must have a legal basis for collecting and processing personal data, and you must be transparent about how you use it.
- Purpose Limitation: You can only use personal data for the specific purpose for which it was collected.
- Data Minimization: You must not collect or process more personal data than is necessary for your stated purpose.
- Accuracy: You must ensure that the personal data you hold is accurate and up-to-date.
- Storage Limitation: You must not store personal data for longer than is necessary.
- Integrity and Confidentiality: You must take appropriate security measures to protect personal data from unauthorized access, use, or disclosure.
- Accountability: You are responsible for demonstrating compliance with the PDPA.
These principles apply to all organizations that collect, use, or disclose personal data in Thailand, regardless of where the organization is located.
The October 2025 AI Guidelines: What’s New?
The October 2025 guidelines issued by Thailand’s Supreme Court and the National Cyber Security Agency (NCSA) don’t create a new law. Instead, they clarify how the existing PDPA applies to the use of generative AI tools like ChatGPT, Claude, and Google’s Gemini.
The guidelines are a direct response to the rapid adoption of AI in the workplace and the unique risks it presents. When an employee pastes a document into an AI chat window, that data is sent to a third-party server, often outside of Thailand. This creates a significant risk of data leakage and PDPA violations.
Here are the four key requirements of the new guidelines:
1. You Must Prevent Personal Data from Being Sent to AI Models
This is the most important requirement. Organizations are now legally obligated to implement technical measures to prevent personal and confidential data from being included in AI prompts. This includes:
- Thai National ID numbers
- Names, addresses, and phone numbers
- Financial information (bank account numbers, credit card numbers)
- Health information
- Client data and confidential business information
Simply telling your employees not to paste sensitive data into AI tools is not enough. The guidelines require organizations to have an active, technical solution in place.
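As an illustration of what such a technical measure involves, here is a minimal redaction sketch (not Cazimir’s actual implementation) that masks Thai National ID numbers and phone numbers before a prompt leaves your systems. The 13-digit check is the published Thai National ID check-digit formula; the phone pattern is deliberately simplified, and a production system would need far more patterns.

```python
import re

def is_valid_thai_id(digits: str) -> bool:
    """Validate a 13-digit Thai National ID using its mod-11 check digit."""
    if len(digits) != 13 or not digits.isdigit():
        return False
    # Weighted sum of the first 12 digits; weights run from 13 down to 2.
    total = sum(int(d) * (13 - i) for i, d in enumerate(digits[:12]))
    return (11 - total % 11) % 10 == int(digits[12])

# Deliberately simplified patterns; a real system needs many more.
PATTERNS = [
    # 13 digits, optionally grouped as 1-1017-00230-70-8
    ("THAI_ID", re.compile(r"\b\d(?:[- ]?\d){12}\b")),
    # Thai-style phone number, e.g. 081-234-5678
    ("PHONE", re.compile(r"\b0\d{1,2}[- ]?\d{3}[- ]?\d{4}\b")),
]

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with placeholders; return the hit log."""
    hits: list[str] = []

    def replace(label: str, match: re.Match) -> str:
        value = match.group(0)
        digits = re.sub(r"[- ]", "", value)
        if label == "THAI_ID" and not is_valid_thai_id(digits):
            return value  # 13 digits but bad checksum: likely not a real ID
        hits.append(label)
        return f"[{label}]"

    for label, pattern in PATTERNS:
        prompt = pattern.sub(lambda m, l=label: replace(l, m), prompt)
    return prompt, hits
```

Validating the checksum before masking is a design choice worth noting: it sharply reduces false positives on arbitrary 13-digit numbers such as invoice or tracking IDs.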
2. You Are Responsible for the AI’s Output
Under the new guidelines, your organization is legally responsible for the output of the AI. This means if an AI tool generates a response that includes inaccurate information, defamatory statements, or infringes on copyright, your organization can be held liable.
This is particularly relevant for law firms, where AI-generated “hallucinations” (fake case law, incorrect legal citations) can lead to malpractice claims.
3. You Must Maintain an Audit Trail of AI Usage
The guidelines require organizations to maintain a detailed log of all AI interactions. This audit trail must include:
- Who used the AI tool
- When they used it
- What information was sent to the AI
- What redactions were performed
- Any potential risks that were detected
This audit trail is essential for demonstrating compliance during a PDPA audit.
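As a sketch of what one such log entry could contain (the JSON Lines format and field names here are assumptions for illustration, not a schema prescribed by the guidelines), an append-only record per interaction covers the five points above:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit.jsonl")  # hypothetical log location

def log_interaction(user: str, tool: str, prompt_chars: int,
                    redactions: list[str], risks: list[str]) -> dict:
    """Append one audit record per AI interaction.

    Only the prompt's length is stored, not the prompt itself, so the
    audit log does not become a second copy of the personal data it is
    meant to protect.
    """
    record = {
        "user": user,                  # who used the AI tool
        "timestamp": time.time(),      # when they used it
        "tool": tool,                  # which AI service received the prompt
        "prompt_chars": prompt_chars,  # what was sent (size only)
        "redactions": redactions,      # what redactions were performed
        "risks": risks,                # any potential risks detected
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```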
4. You Must Have a Clear AI Usage Policy
Finally, organizations must have a clear, written policy that governs the use of AI tools in the workplace. This policy should outline:
- What AI tools are approved for use
- What types of data are prohibited from being sent to AI tools
- The potential consequences of violating the policy
- The organization’s commitment to using AI responsibly and ethically
These guidelines represent a major shift in how organizations in Thailand must approach AI. The era of unrestricted AI usage is over. The era of compliant AI has begun.
The Risks of Non-Compliance: More Than Just Fines
Ignoring the new AI guidelines is not an option. The risks of non-compliance are significant and can have a lasting impact on your organization.
Financial Penalties
The most obvious risk is the financial penalty for violating the PDPA. Fines can be as high as 20 million baht per violation. For a large organization with hundreds of employees using AI, the potential fines could be catastrophic.
Criminal Liability for Executives
In addition to financial penalties, the PDPA also includes provisions for criminal liability. Executives and board members can be held personally responsible for their organization’s non-compliance, with potential prison sentences of up to one year.
Reputational Damage
A data breach or PDPA violation can cause significant reputational damage. Clients, especially in industries like law and finance, trust you with their most sensitive information. A public data breach can erode that trust and lead to a loss of business.
Loss of Competitive Advantage
As clients become more aware of the risks of AI, they will start to ask questions about how their data is being handled. Organizations that can demonstrate a commitment to AI compliance will have a significant competitive advantage over those that cannot.
Legal and Remediation Costs
In the event of a data breach, the costs of legal fees, forensic investigations, and remediation can be substantial. These costs often far exceed the initial fine.
The bottom line: the risks of non-compliance are too great to ignore. The question is not if you should comply, but how.
How to Achieve Compliance: A 5-Step Framework
Achieving PDPA compliance for AI usage doesn’t have to be complicated. Here is a simple, 5-step framework that you can implement today:
Step 1: Conduct an AI Usage Audit
The first step is to understand how AI is being used in your organization. Conduct a survey of your employees to find out:
- What AI tools they are using (ChatGPT, Claude, etc.)
- What types of tasks they are using AI for
- What types of data they are sending to AI tools
This will give you a baseline understanding of your organization’s risk profile.
Step 2: Develop an AI Usage Policy
Based on the results of your audit, develop a clear, written AI usage policy. This policy should be easy to understand and should be communicated to all employees. You can find templates for AI usage policies online, but be sure to customize them for your organization’s specific needs.
Step 3: Implement a Technical Solution
As we’ve discussed, simply having a policy is not enough. You must have a technical solution in place to prevent personal data from being sent to AI models. This is where Cazimir comes in.
Cazimir is a browser extension and API that sits between your team and their AI tools. It automatically detects and redacts sensitive data before it reaches the AI model. This ensures that your organization is compliant with the PDPA without disrupting your team’s workflow.
Step 4: Train Your Employees
Once you have a policy and a technical solution in place, you need to train your employees on how to use AI responsibly. This training should cover:
- The key requirements of the PDPA and the new AI guidelines
- Your organization’s AI usage policy
- How to use AI tools in a way that protects personal and confidential data
Step 5: Monitor and Audit AI Usage
Finally, you need to continuously monitor and audit AI usage in your organization. This includes:
- Reviewing the audit trails generated by your compliance solution
- Conducting regular risk assessments
- Updating your policies and procedures as needed
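To illustrate the review step, assuming audit records are stored one JSON object per line with `user`, `redactions`, and `risks` fields (an assumed format, not a mandated one), a periodic summary might look like:

```python
import json
from collections import Counter

def summarize_audit(path: str) -> dict:
    """Aggregate an audit log: interactions per user, redaction counts,
    and detected risk events, so reviewers can spot unusual volumes of
    sensitive-data exposure."""
    per_user = Counter()
    redaction_types = Counter()
    risk_events = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            per_user[record["user"]] += 1
            redaction_types.update(record.get("redactions", []))
            risk_events += len(record.get("risks", []))
    return {
        "per_user": dict(per_user),
        "redactions": dict(redaction_types),
        "risk_events": risk_events,
    }
```

Running a summary like this weekly turns the audit trail from a passive record into an early-warning signal for your regular risk assessments.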
By following this 5-step framework, you can ensure that your organization is using AI in a way that is both productive and compliant.
How Cazimir Automates PDPA Compliance for AI
Cazimir is the only AI compliance platform built specifically for Thailand’s PDPA. We help organizations use AI safely and compliantly by automating the entire compliance process.
Here’s how it works:
- Automatic Redaction: Cazimir automatically detects and redacts over 100 types of sensitive data, including Thai National ID numbers, phone numbers, and financial information.
- AI Hallucination Detection: Our platform flags fake citations, incorrect legal references, and other AI-generated errors before they can cause harm.
- Full Audit Trail: Cazimir generates a complete audit trail of all AI interactions, ready for PDPA audits and regulatory reviews.
- Works with Any AI Tool: Our browser extension is compatible with ChatGPT, Claude, Gemini, and all other major AI platforms.
- Easy Setup: Cazimir can be set up in minutes, with no complex integrations or IT support required.
With Cazimir, you can unlock the power of AI without putting your organization at risk.
Conclusion: The Future of AI in Thailand is Compliant AI
The new AI guidelines represent a critical turning point for businesses in Thailand. The era of unrestricted AI experimentation is over. The future of AI is compliant AI.
Organizations that embrace this new reality will not only avoid costly fines and reputational damage but also build a foundation of trust with their clients and gain a significant competitive advantage.
By understanding your legal obligations, implementing a clear policy, and leveraging the right technology, you can unlock the full potential of AI while ensuring that your organization remains secure, compliant, and ahead of the curve.
Ready to make your AI usage compliant?
Start your free 14-day trial of Cazimir today and see how easy it is to automate PDPA compliance for AI.
7 Common PDPA Mistakes When Using AI (And How to Avoid Them)
Even with the best intentions, it’s easy to make mistakes when using AI in the workplace. Here are seven of the most common PDPA violations we see, and how to avoid them:
1. Assuming Your Employees Know What Not to Share
The Mistake: Believing that a simple memo or training session is enough to prevent employees from pasting sensitive data into AI tools.
The Reality: In the flow of work, employees will inevitably copy and paste client emails, contracts, and other documents containing personal data. It’s not malicious—it’s just human nature.
The Fix: Implement a technical solution like Cazimir that automatically redacts sensitive data. Don’t rely on manual compliance.
2. Thinking Your AI Provider is Responsible for Compliance
The Mistake: Assuming that because you’re using a major AI platform like ChatGPT or Claude, they are responsible for PDPA compliance.
The Reality: The terms of service for most AI providers make it clear that you are responsible for the data you input. You are the “Data Controller” under the PDPA, and you are liable for any violations.
The Fix: Treat your AI provider as a third-party data processor and ensure you have the proper safeguards in place before sending them any data.
3. Using AI for a Different Purpose Than Data Was Collected For
The Mistake: Taking a customer list that was collected for marketing purposes and using it to train a custom AI model for a different purpose.
The Reality: The PDPA’s “purpose limitation” principle means you can only use data for the specific purpose it was collected for. Using it for a new purpose requires new consent.
The Fix: Always get explicit consent before using personal data for any new purpose, including AI model training.
4. Not Having a Clear Data Retention Policy for AI Prompts
The Mistake: Allowing AI prompts and conversations to be stored indefinitely.
The Reality: The PDPA’s “storage limitation” principle requires you to delete personal data when it’s no longer needed. This includes data in AI prompts.
The Fix: Use a compliance tool that allows you to set data retention policies for AI interactions and automatically delete old data.
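One way to sketch such a retention rule, assuming interactions are logged one JSON record per line with a `timestamp` field, is a scheduled job that rewrites the log and keeps only recent entries. The 90-day window below is an illustrative default, not a figure taken from the PDPA.

```python
import json
import time
from pathlib import Path

def apply_retention(log_path: Path, max_age_days: int = 90) -> int:
    """Drop audit records older than the retention window; return the
    number of records deleted."""
    cutoff = time.time() - max_age_days * 86400
    lines = log_path.read_text(encoding="utf-8").splitlines()
    kept = [l for l in lines if json.loads(l)["timestamp"] >= cutoff]
    log_path.write_text(
        "\n".join(kept) + ("\n" if kept else ""), encoding="utf-8"
    )
    return len(lines) - len(kept)
```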
5. Failing to Vet the Security of Your AI Tools
The Mistake: Allowing employees to use any AI tool they want without a proper security review.
The Reality: Many new AI tools have weak security practices and may be vulnerable to data breaches. You are responsible for ensuring the security of any tool you use.
The Fix: Create an approved list of AI tools that have been vetted by your IT and security teams. Block access to unapproved tools.
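A minimal allowlist check along these lines might look as follows; the domains listed are examples only, and a real allowlist should come from your own IT and security review:

```python
from urllib.parse import urlparse

# Example allowlist; populate from your own vetted-tools review.
APPROVED_AI_TOOLS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def is_approved(url: str) -> bool:
    """Return True only if the URL's hostname is an approved AI tool
    (or a subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_AI_TOOLS or any(
        host.endswith("." + domain) for domain in APPROVED_AI_TOOLS
    )
```

A check like this can run in a browser extension or proxy so that requests to unapproved tools are blocked rather than merely discouraged.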
6. Ignoring AI Hallucinations
The Mistake: Trusting the output of an AI tool without verifying its accuracy.
The Reality: AI tools are prone to “hallucinations”—making up facts, citations, and even entire legal cases. Using this inaccurate information can lead to malpractice and reputational damage.
The Fix: Implement a solution that flags potential AI hallucinations and requires human verification of all AI-generated content before it’s used.
7. Not Having an Audit Trail
The Mistake: Having no record of how AI is being used in your organization.
The Reality: If you are ever audited by the PDPA commission, you will need to provide a detailed audit trail of your AI usage. Without it, you cannot demonstrate compliance.
The Fix: Use a tool like Cazimir that automatically generates a complete audit trail of all AI interactions, including what data was redacted and what risks were detected.
