Organisations operating across the EU, India and Singapore face a trifecta of AI and data regulation: the EU AI Act (binding from August 2026), India's Digital Personal Data Protection Act (DPDP, enacted August 2023 with rules pending), and Singapore's Personal Data Protection Act (PDPA, with AI-specific guidance through the Model AI Governance Framework). These frameworks share common objectives but differ materially in scope, enforcement, and the specific obligations they create. Organisations attempting to build a single compliance programme that satisfies all three need to understand precisely where the frameworks align and where they diverge.
Why Multinational AI and Data Compliance Is Not Three Separate Projects
The instinctive response to multiple regulatory frameworks is to manage them as separate compliance workstreams — an EU AI Act project, a DPDP Act project, a PDPA project. This approach is expensive, creates inconsistency across the organisation, and misses the significant overlap between the frameworks that would allow efficient integration.
But integration has limits. The frameworks have materially different legal structures, different enforcement mechanisms, different organisational obligations, and different concepts of what constitutes regulated AI or personal data. An integrated programme that treats all three frameworks as equivalent will fail to meet the specific requirements of each. The art is integration where frameworks align and separation where they diverge — and understanding precisely which is which for your specific operations and AI systems.
Where the Three Frameworks Diverge Most Significantly
Scope of regulated activity
The EU AI Act regulates AI systems based on what they do and the risk they create — not simply because they process personal data. A high-risk AI system that makes no decisions about individuals (a manufacturing quality inspection AI, for example) is still subject to EU AI Act high-risk obligations. Conversely, an AI system that processes personal data but makes only low-risk recommendations may have minimal EU AI Act obligations.
The DPDP Act regulates the processing of digital personal data of individuals in India — regardless of whether that processing involves AI. An organisation that processes the personal data of Indian customers without using any AI is subject to DPDP Act obligations. An organisation that uses AI to process non-personal data is not. The DPDP Act's scope is determined by the nature of the data, not the technology used.
Singapore's PDPA similarly regulates personal data, while the Model AI Governance Framework provides voluntary guidance on responsible AI use that goes beyond the PDPA's legal requirements but lacks the binding force of the EU AI Act.
Enforcement mechanisms and consequence severity
EU AI Act enforcement runs through national market surveillance authorities, with maximum fines of €35 million or 7% of global annual turnover, whichever is higher. Enforcement is mandatory: member states are required to designate national AI authorities and enforce the Act.
The DPDP Act creates the Data Protection Board of India as the enforcement authority, with penalties up to ₹250 crore (approximately £23 million) for significant violations. The enforcement framework is still being established — the DPDP Rules, which will define detailed compliance obligations and enforcement procedures, had not been finalised as of early 2026.
Singapore's PDPA is enforced by the Personal Data Protection Commission (PDPC), which can impose financial penalties of up to S$1 million (approximately £600,000), significantly lower than the EU or Indian maximums. The PDPC has historically taken a relatively facilitative enforcement posture, preferring corrective action undertakings to financial penalties.
Obligations for AI-specific governance
The EU AI Act creates specific, legally binding obligations for AI system governance: risk management, data governance, technical documentation, human oversight, accuracy and robustness. These are prescriptive legal requirements, not guidance.
India's DPDP Act does not create AI-specific governance obligations. It creates data protection obligations that apply to personal data processing — including personal data processed by AI systems. The AI governance dimension in India is addressed through voluntary frameworks (the MeitY Responsible AI framework) and evolving regulatory guidance, not through the DPDP Act itself.
Singapore's approach is more nuanced: the PDPA creates binding data protection obligations, and the Model AI Governance Framework (MAIGF) provides detailed, voluntary AI governance guidance that PDPC strongly encourages but does not legally mandate. AI governance in Singapore is currently compliance-encouraged rather than compliance-mandated.
| Dimension | EU AI Act | India DPDP Act | Singapore PDPA + MAIGF |
|---|---|---|---|
| Legal status | Binding EU law | Binding Indian law (rules pending) | PDPA binding; MAIGF voluntary |
| Scope trigger | AI system risk level | Personal data processing | Personal data processing + voluntary AI governance |
| AI-specific obligations | Yes — prescriptive for high-risk AI | No — data protection obligations only | No — MAIGF is voluntary guidance |
| Max penalty | €35M / 7% turnover | ₹250 crore (~£23M) | S$1M (~£600K) |
| Enforcement body | National market surveillance authorities | Data Protection Board of India | Personal Data Protection Commission |
| Extraterritorial effect | Yes — EU market or EU persons affected | Yes — processing abroad linked to goods or services offered to individuals in India | Yes — personal data of Singapore residents |
| Data residency requirements | No specific AI data residency requirement | Rules may impose data localisation requirements | No data residency requirement |
| Consent requirements | Not applicable (AI regulation focus) | Consent as primary lawful basis with limited exceptions | Consent-based with limited exceptions |
Building an Integrated Compliance Programme
- Map your operations to each framework separately and specifically. For the EU AI Act: what AI systems do you develop or deploy that affect EU persons, and how are they classified under Annex III? For the DPDP Act: do you process digital personal data of individuals in India, in what capacity (data fiduciary, data processor), and for what purposes? For the PDPA: do you collect, use or disclose personal data of Singapore residents, and do you have a PDPA-compliant data protection programme in place?
- Identify shared foundation requirements. Several governance capabilities are required by all three frameworks — with different specifics. Data governance and quality controls are required by all three. Consent and transparency obligations appear in all three. Incident notification requirements exist in all three. Individual rights (access, correction, deletion) appear in all three — with different scope and timelines. Building these capabilities once, to the most demanding specification across all three frameworks, eliminates duplicate effort.
- Implement EU AI Act-specific requirements as an additional layer for relevant AI systems. The EU AI Act's prescriptive requirements for high-risk AI systems have no equivalent in the DPDP Act or PDPA — they must be addressed specifically. For AI systems affecting EU persons that are classified as high-risk: risk management system (Article 9), data governance (Article 10), technical documentation (Annex IV), human oversight (Article 14), and conformity assessment. These requirements should be implemented within an ISO 42001 management system framework where possible.
- Monitor DPDP Rules finalisation and build adaptive compliance architecture. The DPDP Act framework will be significantly clarified when the Rules are finalised. Organisations with material India operations should build a compliance architecture that can accommodate the Rules when they are published — rather than waiting for finalisation before beginning compliance work. The foundational data governance, consent management, and individual rights capabilities required by the DPDP Act are identifiable from the Act itself.
- Leverage Singapore MAIGF for voluntary governance that future-proofs against mandatory requirements. The MAIGF represents what Singapore regulators consider responsible AI governance. While voluntary today, implementing MAIGF guidance provides a governance foundation that is likely to be consistent with future mandatory requirements if Singapore moves toward binding AI regulation. It also provides demonstrable AI governance maturity for enterprise clients in the Singapore market.
- Centralise privacy and AI governance documentation with jurisdiction-specific overlays. Maintain a central data protection and AI governance programme with jurisdiction-specific documentation overlays — a PDPA-compliant privacy policy, a DPDP-compliant data processing framework, and EU AI Act-compliant technical documentation — rather than building three entirely separate programmes.
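The "shared foundation plus jurisdiction-specific overlay" architecture described above can be sketched as a base programme that overlays extend or override. Everything here is illustrative: the control names and overlay values are invented placeholders, not actual statutory requirements.

```python
from copy import deepcopy

# Hypothetical central programme: shared capabilities built once, to the most
# demanding specification across all three frameworks.
BASE_PROGRAMME = {
    "data_governance": "central data inventory and quality controls",
    "consent_management": "single consent platform",
    "incident_response": "central incident notification workflow",
    "individual_rights": "shared access/correction/deletion tooling",
}

# Jurisdiction overlays add or override only what genuinely differs
# (illustrative placeholder values, not legal requirements).
OVERLAYS = {
    "EU": {
        "ai_act_high_risk": "risk management, Annex IV documentation, human oversight",
    },
    "India": {
        "individual_rights": "DPDP data principal rights procedures",
        "rules_watch": "track DPDP Rules finalisation",
    },
    "Singapore": {
        "ai_governance": "MAIGF voluntary implementation",
    },
}

def programme_for(jurisdiction: str) -> dict:
    """Return the central programme with the jurisdiction overlay applied."""
    merged = deepcopy(BASE_PROGRAMME)
    merged.update(OVERLAYS.get(jurisdiction, {}))
    return merged

print(sorted(programme_for("India")))
```

The design choice the sketch encodes is the article's core recommendation: shared controls live in one place, and each jurisdiction contributes only its genuine deltas — the EU overlay adds AI-specific obligations that the other two frameworks simply do not have.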
How AjaCertX Helps
AjaCertX delivers multi-jurisdiction AI governance and data protection compliance programmes for technology organisations, financial services firms, and multinational enterprises operating across EU, India, Singapore, GCC and UK markets.
- EU AI Act scope assessment and high-risk AI system compliance programme
- India DPDP Act gap assessment and compliance programme design — including DPDP Rules readiness architecture
- Singapore PDPA compliance assessment and MAIGF implementation programme
- Integrated multi-jurisdiction data protection and AI governance framework design
- ISO 42001 AI management system implementation
- Cross-border data transfer mechanism assessment and implementation (SCCs, adequacy, binding corporate rules)
- Regional compliance programmes for GCC, UK and Asia-Pacific markets
Regional compliance specialists. Integrated programme proposal within 48 hours.
Conclusion
The EU AI Act, India DPDP Act and Singapore PDPA represent three distinct regulatory frameworks with different legal bases, different scope triggers, different obligations, and different enforcement consequences. An integrated compliance programme is both possible and efficient — but only where frameworks genuinely align. Where they diverge — and the EU AI Act's AI-specific prescriptive requirements are the clearest point of divergence — separate, framework-specific compliance work is required.
The organisations that manage this most effectively build shared foundations where frameworks align, layer framework-specific requirements where they do not, and maintain a programme architecture that can accommodate the evolution of all three frameworks — because all three will evolve over the next three to five years.