
Why ChatGPT Is Not a Compliance Tool: The Deloitte $440k Case and EU AI Act Implications

Deloitte had to partially reimburse the AUD$440k fee paid by the Australian government after delivering a report with AI-generated errors.


Last updated: February 7, 2026

What Happened with Deloitte in Australia?

In October 2025, Deloitte Australia was forced to partially reimburse the AUD$440k (approximately USD$290k) the Australian government had paid for a 237-page report filled with errors apparently generated by AI (Fast Company, The Washington Post).

The document contained:

  • Fabricated citations attributed to a federal judge
  • References to non-existent academic research
  • Studies from professors who never existed (Fast Company, 1News)

Why This Case Matters for Your Organization

The case is not just a Deloitte problem. It exposes a critical risk affecting DPOs, compliance managers, IT teams, and consultancies: the use of generative AI (like ChatGPT, Azure OpenAI, and similar tools) in tasks requiring precision, traceability, and consistency can create serious regulatory and reputational liabilities.

The central question is not whether we should use AI — but how to use AI responsibly, with governance, methodology, and quality control.

Why Tools Like ChatGPT Don't Work for Compliance Tasks

Generative AI is non-deterministic. This means the same prompt can generate different responses with each execution. For creative and exploratory activities, this can be an advantage. For compliance, privacy, and regulatory assessment, it's an unacceptable risk.
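To see this in practice, the minimal sketch below sends an identical prompt several times through the OpenAI Python SDK. The model name and the compliance question are purely illustrative assumptions, not part of the Deloitte case; with default sampling settings, the answers, and sometimes the conclusions, can differ between runs.

```python
# Minimal sketch, assuming the official `openai` Python SDK is installed and an
# API key is set in the environment. The model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Does this privacy policy comply with GDPR Article 13? Answer yes or no and justify."

def ask(prompt: str) -> str:
    """Send the same prompt and return the model's free-text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Run the identical prompt several times and count how many distinct answers come back.
answers = {ask(PROMPT) for _ in range(5)}
print(f"{len(answers)} distinct answers out of 5 identical requests")
```

Even with sampling temperature pinned to zero, providers do not generally guarantee bit-identical outputs, and determinism alone does nothing about hallucinated content.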

What Are AI "Hallucinations" and Why Are They Dangerous?

Hallucinations are the tendency of generative AI systems to fabricate information when they lack reliable data to draw on (Fast Company, 1News).

In Deloitte's case, this included:

  • False citations of judicial decisions
  • References to non-existent academic articles
  • Studies attributed to universities that never published them

Compliance Risks of AI Hallucinations

In compliance environments, where every statement needs to be auditable and verifiable, hallucinations represent:

  • Regulatory risk: decisions based on false information can breach the EU AI Act and GDPR and run counter to NIST AI RMF and ISO standards
  • Reputational risk: loss of credibility with clients, regulators, and the market
  • Operational risk: rework, corrective audits, and remediation costs

Practical Risks of Using Generative AI Without Governance

For privacy, security, and corporate procurement teams, risks include:

  • Inconsistency between assessments: two professionals using ChatGPT to evaluate the same policy may get contradictory results
  • Lack of traceability: impossible to audit how a conclusion was reached
  • Absence of reproducible methodology: each analysis may follow different implicit criteria
  • False sense of compliance: apparently robust reports, but based on fabricated information

How to Use AI Responsibly in Privacy and Compliance Tasks

The problem is not using AI. It's using AI without governance, without quality controls, and without deterministic methodology.

The Four Pillars of Responsible AI Use in Compliance

#### 1. Transparent and Auditable Methodology

  • Explicit evaluation criteria based on recognized standards (EU AI Act, ISO/IEC 42001, NIST AI RMF, GDPR), as sketched below
  • Reproducible processes that generate consistent results
  • Complete documentation of how each conclusion was reached
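One way to make those criteria explicit and reproducible is to encode them as a fixed, versioned checklist rather than re-deriving them in free-form prompts each time. The sketch below is illustrative only; the criterion IDs and framework references are example mappings, not an official catalogue.

```python
# Illustrative sketch: a fixed, versioned set of evaluation criteria, so every
# assessment applies the same checks and can be audited later. The IDs and
# article references below are examples, not an official mapping.
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    criterion_id: str  # stable identifier cited in the final report
    framework: str     # e.g. "EU AI Act", "GDPR", "ISO/IEC 42001"
    reference: str     # article / clause the check is grounded in
    question: str      # the exact question every assessment must answer

CRITERIA_V1 = (
    Criterion("TRANS-01", "EU AI Act", "Art. 13", "Is the system's use of AI disclosed to affected users?"),
    Criterion("OVST-01",  "EU AI Act", "Art. 14", "Is human oversight defined for high-risk decisions?"),
    Criterion("DATA-01",  "GDPR",      "Art. 22", "Can automated decisions be reviewed by a human on request?"),
)

def assessment_template() -> dict:
    """Every assessment starts from the same criteria set, documented by version."""
    return {"criteria_version": "v1", "answers": {c.criterion_id: None for c in CRITERIA_V1}}
```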

#### 2. Cross-Validation and Quality Control

  • Multiple verification layers to reduce hallucinations (see the sketch after this list)
  • Specialized human review before finalization
  • Alert systems for possible inconsistencies
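A hedged sketch of one possible quality-control layer: run the same evaluation several times (or across independent reviewers or models) and escalate anything short of full agreement to human review. `evaluate_policy` is a hypothetical callable standing in for whatever analysis engine is actually used.

```python
# Minimal disagreement check: repeat the same evaluation and flag divergence for
# specialized human review before anything is finalized.
from collections import Counter
from typing import Callable

def cross_validate(evaluate_policy: Callable[[str], str], policy_text: str, runs: int = 3) -> dict:
    verdicts = [evaluate_policy(policy_text) for _ in range(runs)]
    counts = Counter(verdicts)
    top_verdict, top_count = counts.most_common(1)[0]
    return {
        "verdict": top_verdict,
        "agreement": top_count / runs,
        # Anything below full agreement goes to human review before finalization.
        "needs_human_review": top_count < runs,
        "all_verdicts": verdicts,
    }
```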

#### 3. Traceability and Public Evidence

  • Based on verifiable and publicly available information
  • Citation of original sources (privacy policies, official documents)
  • Ability to audit each point of the analysis (a minimal evidence-record sketch follows below)
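In practice, traceability can be as simple as attaching an evidence record to every claim, pointing at the public source and the exact passage relied upon. The structure below is a sketch with illustrative field names and a placeholder URL, not a prescribed schema.

```python
# Illustrative evidence record: every statement in the analysis carries the public
# source it came from, so a third party can re-verify it. Field names are examples.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class Evidence:
    claim: str          # the statement made in the report
    source_url: str     # publicly available document backing the claim
    excerpt: str        # the exact passage relied upon
    retrieved_on: str   # when the source was captured

record = Evidence(
    claim="The vendor discloses use of a third-party LLM in its privacy policy.",
    source_url="https://example.com/privacy",  # placeholder URL
    excerpt="We use large language models provided by ...",
    retrieved_on=date.today().isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # append to the audit trail
```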

#### 4. Transparency About AI Use

  • Clear disclosure when AI was used in the analysis (illustrated after this list)
  • Explanation of how AI was applied and what controls exist
  • Human accountability for final decisions
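A disclosure of this kind can also travel with the deliverable in machine-readable form. The snippet below is only an example of what such a record might contain; none of the field names come from a standard.

```python
# Illustrative AI-use disclosure attached to a deliverable. The field names and
# the deliverable title are placeholders, not a standard schema.
import json

ai_use_disclosure = {
    "deliverable": "Vendor privacy assessment - ACME Corp",  # placeholder title
    "ai_assisted": True,
    "ai_components": ["drafting of summary sections", "initial policy screening"],
    "controls_applied": ["cross-validation of findings", "specialized human review"],
    "human_accountable": "Lead assessor (name and role recorded internally)",
    "disclosed_to_client": True,
}

print(json.dumps(ai_use_disclosure, indent=2))
```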

AI Governance Platform vs. Generic Chatbot

| ChatGPT / Generic Generative AI | Platform with AI Governance |
| ----- | ----- |
| Non-deterministic results | Reproducible methodology based on fixed criteria |
| Frequent hallucinations | Cross-validation and quality control |
| No traceability | Auditable public evidence |
| No formal compliance criteria | Alignment with EU AI Act, ISO, NIST, GDPR |
| Use at your own risk | Governance, supervision, and accountability |

Regulatory Frameworks Requiring AI Governance

The global trend is clear: legislators and regulatory bodies are demanding transparency, explainability, and governance in AI use — especially in decisions affecting people.

EU AI Act (European Union)

  • First comprehensive AI legislation in the world
  • Classification of systems by risk level
  • Mandatory transparency and human oversight requirements for high-risk systems

ISO/IEC 42001 (AI Systems Management)

  • International framework for AI governance
  • Requirements for traceability, transparency, and risk control
  • Basis for auditing and certification of AI systems

NIST AI Risk Management Framework

  • AI risk management guidelines
  • Emphasis on reliability, security, and explainability
  • Adopted by government organizations and global corporations

GDPR (General Data Protection Regulation)

  • Article 22: right not to be subject to solely automated decisions and to obtain human review
  • Transparency requirements about algorithm use
  • Need to explain criteria and logic of decisions

How the Deloitte Case Should Change Your Vendor Assessment

After discovering the errors, Deloitte was forced to revise the report, and the corrected version included a disclosure that Azure OpenAI had been used in drafting the document (Region Canberra). This transparency should have existed from the beginning, but the problem goes beyond disclosure.

What Your Company Should Require from AI-Using Vendors

#### For DPOs and Privacy Professionals:

  • Require due diligence reports to inform if and how AI was used
  • Request evidence of human validation and quality control
  • Ask for transparent and auditable assessment methodology

#### For CISOs and IT Managers:

  • Evaluate whether analysis tools use deterministic or generative AI
  • Prioritize solutions that document AI governance
  • Implement cross-validation processes for automated analyses

#### For Procurement and Purchasing Teams:

  • Include contractual clauses about AI use and deliverable quality
  • Establish objective criteria for accuracy and verifiability
  • Require reimbursement or correction when AI hallucinations cause damage

#### For EU AI Act Consultancies:

  • Use screening tools with transparent methodology
  • Position your human expertise as a differentiator
  • Offer continuous monitoring services based on solid governance

The Responsible Alternative for Software Privacy Assessment

The solution is not to abandon AI — it's to adopt AI with governance. Platforms specialized in corporate privacy should combine:

  • Methodology based on recognized frameworks (EU AI Act, ISO, NIST, GDPR)
  • Specialized AI with cross-validation to reduce hallucinations
  • Exclusive analysis of auditable public evidence (policies, terms of use, official documentation)
  • Transparency about AI use and quality processes
  • Deterministic and reproducible results to ensure consistency

What You Should Do Now to Protect Your Company

The Deloitte case is not an exception. It's a warning. As generative AI adoption grows, so does the risk of critical decisions being based on fabricated information.

Immediate Actions for Your Organization

#### Process Audit:

  • Identify where generative AI is being used without governance
  • Map risks in reports, due diligence, and vendor assessments
  • Establish responsible AI use policies

#### Tool Selection:

  • Prioritize platforms that disclose methodology and AI governance
  • Require traceability and verifiable evidence
  • Test consistency: the same analysis, repeated, should generate identical results (see the sketch below)
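That consistency check can be automated with a few lines of code: repeat the same analysis and compare the outputs. `run_assessment` below is a hypothetical wrapper around whatever tool is being evaluated, not a real API.

```python
# Minimal reproducibility test, assuming a hypothetical `run_assessment(document)`
# that wraps the vendor's analysis pipeline. A governed, deterministic pipeline
# should return the same result every time for the same input.
from typing import Callable

def test_assessment_is_reproducible(run_assessment: Callable[[str], object],
                                    document: str, repetitions: int = 3) -> bool:
    results = [run_assessment(document) for _ in range(repetitions)]
    identical = all(r == results[0] for r in results)
    if not identical:
        print("Inconsistent outputs detected: escalate to human review or reject the tool.")
    return identical
```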

#### Team Training:

  • Train teams to identify hallucinations and validate AI outputs
  • Establish human review checkpoints
  • Create accountability culture for AI-based decisions

The Bottom Line

Deloitte's error cost the firm part of an AUD$440k fee and significant reputational damage. How much would it cost your company?

The good news is that you can use AI intelligently, responsibly, and compliantly — as long as you do it with governance, transparency, and solid methodology.

#eu-ai-act #generative-ai #compliance #deloitte #ai-governance
