
Anthropic Claude
Based exclusively on public evidence • 20 criteria (Privacy + AI)
Last review: 10 Feb 2026

AI Trust Summary
- In AI: it does not detail the specific algorithmic logic behind complex decisions, which can create uncertainty for customers.
- In Basic Privacy: it does not provide a Data Processing Agreement (DPA) for customers, which can compromise the security of processed data.
Compliances in AI (3)
AI criteria the company meets.
- Anthropic Claude documents the use of Inputs and Outputs to train models, ensuring clarity on data processing.
- It provides an opt-out mechanism for AI training, allowing greater user control.
- Together, these practices strengthen due diligence regarding privacy and AI use.
AI data retention policy clearly documented
The policy states that users can delete individual conversations, which are immediately removed from the history and deleted from the back-end within 30 days.
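The stated retention window can be illustrated with a small sketch. This is not an Anthropic API; it simply models the policy's worst-case deadline: a deleted conversation disappears from history immediately and should be purged from the back-end within 30 days of deletion.

```python
from datetime import datetime, timedelta, timezone

# Per the documented policy, back-end deletion completes within 30 days.
RETENTION_WINDOW = timedelta(days=30)

def backend_purge_deadline(deleted_at: datetime) -> datetime:
    """Latest time by which a deleted conversation should be gone server-side."""
    return deleted_at + RETENTION_WINDOW

# Example: a conversation deleted on 10 Feb 2026 should be purged by 12 Mar 2026.
deleted = datetime(2026, 2, 10, tzinfo=timezone.utc)
print(backend_purge_deadline(deleted).date())  # 2026-03-12
```

A deadline like this is useful when reconciling a vendor's stated retention policy against your own records-retention obligations.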
AI training opt-out control available
Offers a specific opt-out mechanism for model training through account settings, allowing greater user control.
Policy on data use for AI training clearly stated
Explicitly states the use of Inputs and Outputs to train models and improve services, ensuring clarity on data processing.
Source: vendor public documents
Highlights in Privacy (3)
Most relevant criteria for this category.
Data Processing Agreement (DPA) not available for customers
The policy mentions that Anthropic acts as a data processor, but does not provide a direct link or explicit details of a DPA.
Data controller and processor roles clearly defined
Clearly distinguishes the roles of Controller and Processor, defining the scope of policy application.
Data controller identity and contact clearly disclosed
Provides full legal identity and physical contact addresses, promoting transparency.
Critical Alerts
- Automated AI decisions explained in an understandable way: important for transparency in automated decision-making.
- AI features clearly identified with their purposes: helps users understand how interactions with the system are processed and used.
Conformance analysis (20)
Deletion period for AI prompts and responses clearly defined
Reference: ISO/IEC 42001 (8.2) + ISO/IEC 27701 (7.4.6)
Opt-out control for AI training available
Reference: ISO/IEC 42001 (8.3) + ISO/IEC 29100 + EU AI Act
Policy on the use of AI prompts and responses for AI training declared
Reference: ISO/IEC 42001 (8.2) + ISO/IEC 23894 + EU AI Act
Why trust the AITS Index: Open Community Audit
Public transparency, peer review and open evidence trails — all verifiable by the community
Trust guarantees
Peer review
Users, professionals and experts confirm or contest items online.
Public history
Vendor and index changes are versioned and accessible.
Participate
Evidence, confirmations and contestations
Participate in the collaborative validation of AITS criteria.
Understanding Privacy and AI Governance with Anthropic Claude
Privacy Strength: Transparent AI Training Practices
Anthropic Claude excels in its transparency regarding the use of prompts and responses for AI training. This is crucial for users who are concerned about how their data is utilized. With an OPTI Base Privacy Score of 92%, users can feel reassured that their interactions are not being used in a hidden manner. The clear definition of the retention period for prompts and responses also enhances user trust, as it allows individuals to know how long their data will be stored. In practice, this means you can engage with the software without worrying about your data being indefinitely retained or misused.
Privacy Strength: User Control Over AI Training
Another significant strength of Anthropic Claude is the availability of an opt-out feature for AI training. This allows users to have greater control over their data, which is especially important in light of regulations like the GDPR and LGPD that emphasize user consent. By enabling this feature, users can ensure that their data is not used for training purposes unless they explicitly allow it. This proactive approach to user consent is a vital aspect of privacy management, giving users peace of mind while using the platform.
Privacy Weakness: Lack of Data Processing Agreement (DPA)
Despite its strengths, Anthropic Claude has notable weaknesses, particularly the absence of a Data Processing Agreement (DPA) for clients. This is a significant concern as a DPA outlines how data is processed, stored, and protected. Without this agreement, users may face uncertainties regarding their data security, especially when handling sensitive information. This lack of formal documentation can be a red flag for organizations that must comply with stringent data protection regulations such as GDPR and ISO 27701. Users should be cautious and consider seeking additional assurances regarding data handling practices before fully committing to the software.
Privacy Weakness: Lack of Clarity in Automated Decision-Making
Another area of concern is the insufficient explanation of automated decisions made by the AI. Users may find it challenging to understand how certain outcomes are derived, which can lead to mistrust in the system. The absence of clear communication about the AI's decision-making processes can hinder users' ability to comply with their own regulatory obligations. To mitigate this risk, users should actively seek to understand the AI's functionalities and request more information from the provider regarding how decisions are made. This proactive approach can help users navigate the complexities of AI governance more effectively.
Practical Guidance: Enabling Opt-Out Features
To maximize your privacy while using Anthropic Claude, it is essential to enable the opt-out feature for AI training. This can typically be found in the settings menu under privacy controls. By opting out, you can ensure that your data is not used for training purposes unless you give explicit consent. Additionally, regularly reviewing your privacy settings and understanding the implications of your choices can help you maintain control over your data.
Practical Guidance: Seeking Clarifications and Alternatives
Given the weaknesses identified, particularly the lack of a DPA and of clarity in automated decision-making, users should consider reaching out to Anthropic for further clarification on these issues. It may also be beneficial to explore alternative platforms that take a more comprehensive approach to data governance and privacy compliance. Look for software that offers a DPA and transparent explanations of AI functionalities to ensure that your organization meets its legal obligations while using AI technologies.
Related articles about Anthropic Claude

What the Slack case reveals about risks in corporate privacy policies under the EU AI Act?
Discover how subtle changes in privacy policies, like those from Slack and Claude Anthropic, can create invisible corporate risks under EU AI Act compliance.

Do you still think monitoring privacy policies is a waste of time? Remember the Claude Anthropic case
Claude Anthropic changed its policy in 2025 to allow training with user data by default. Discover why monitoring policies is essential governance under the EU AI Act.
Analyzed Sources
Public documents used in the audit of Anthropic Claude:
Scope & Limitations
TrustThis/AITS assessments are based exclusively on publicly available information, duly cited with date and URL, following the AITS methodology (privacy & AI transparency).
The content is indicative in nature, intended for screening and comparison, not replacing internal audits.
TrustThis/AITS does not perform invasive tests, does not access vendor technology environments and does not process customer personal data. Conclusions reflect only the vendor's public communication at the date of collection.






