
Microsoft Copilot
Based exclusively on public evidence • 20 criteria (Privacy + AI)
Last review: 10 Feb 2026

AI Trust Summary
Safer Alternatives
Higher-rated software in the same category
Attention Points in AI (1)
AI criteria that require attention.
AI decision contestation mechanism not available
Source: vendor public documents
Compliances in AI (3)
AI criteria the company meets.
AI data retention policy clearly documented
Policy on data use for AI training clearly stated
AI training opt-out control available
Source: vendor public documents
Highlights in Privacy (3)
Most relevant criteria for this category.
Data controller and processor roles clearly defined
Data controller identity and contact clearly disclosed
Privacy contact channel available
Source: vendor public documents
Conformance analysis (20)
AI data retention policy clearly documented
Reference: ISO/IEC 42001 (8.2) + ISO/IEC 27701 (7.4.6)
Policy on data use for AI training clearly stated
Reference: ISO/IEC 42001 (8.2) + ISO/IEC 23894 + EU AI Act
AI training opt-out control available
Reference: ISO/IEC 42001 (8.3) + ISO/IEC 29100 + EU AI Act
Source: vendor public documents
Why trust the AITS Index: Open Community Audit
Public transparency, peer review and open evidence trails — all verifiable by the community
Trust guarantees
Peer review
Users, professionals and experts confirm or contest items online.
Public history
Vendor and index changes are versioned and accessible.
Participate
Evidence, confirmations and contestations
Participate in the collaborative validation of AITS criteria.
Microsoft Copilot: A Comprehensive Guide to Privacy and AI Governance
Transparency in AI Training and User Control
Microsoft Copilot performs well on transparency around its AI training processes. With an AITS Privacy Score of 94%, users have a clear view of how their data is used. The software allows users to opt out of AI training, meaning you can choose not to have your prompts and responses used to improve the system. This control is crucial for those who prioritize privacy and want to retain control over their personal data. In addition, the purposes for data processing are explicitly categorized, so users know exactly how their information will be used. This clarity not only fosters trust but also aligns with the GDPR and LGPD, which grant users rights over their data.
User Management of Prompt History
Another strength of Microsoft Copilot is its management feature for prompt history. Users can review and delete their historical prompts, which gives them a layer of control over personal data. This capability matters for users who want to remove interactions for privacy reasons, and it aligns with ISO/IEC 27701, which emphasizes data subject rights. By regularly reviewing your prompt history, you can ensure that no sensitive information is inadvertently retained or misused.
Lack of Mechanism for Contesting AI Decisions
Despite its strengths, Microsoft Copilot has a notable weakness: it does not provide a mechanism for contesting automated decisions made by its AI. This absence could be concerning for users who rely on the software for critical business decisions, as it may undermine trust in the AI's outputs. Without the ability to challenge these decisions, users may feel vulnerable, particularly in high-stakes scenarios. To mitigate this risk, users should maintain a critical approach to the AI's recommendations and consider implementing additional checks or balances in their decision-making processes.
Data Retention Policies
Another area where Microsoft Copilot falls short is in its data retention policies. While the software clearly informs users about the retention of prompts and responses, the lack of a robust mechanism to delete this data upon request can be a significant drawback. Users should be proactive in managing their data by regularly reviewing the retention settings and understanding how long their information is stored. To enhance privacy, consider limiting the amount of sensitive information shared with the AI, and utilize the opt-out feature for training whenever possible.
Understanding AI Usage in Copilot
Microsoft Copilot clearly states its use of artificial intelligence, which is a positive aspect of its transparency. However, this clarity does not extend to providing users with the ability to contest AI decisions or manage the implications of AI usage effectively. Users should familiarize themselves with how AI is integrated into their workflows and the potential risks involved. For those concerned about AI's influence on their operations, it may be beneficial to explore alternative software options that offer more robust governance features.
Practical Steps for Enhanced Privacy
To enhance your privacy while using Microsoft Copilot, consider the following practical steps: First, regularly check your settings to ensure that the opt-out feature for AI training is enabled. Second, take advantage of the prompt history management feature to delete any sensitive interactions. Third, stay informed about your rights under GDPR and LGPD, and make sure to exercise those rights if necessary. Lastly, consider using additional privacy tools or software that complement Microsoft Copilot, especially if your organization handles sensitive data frequently. By taking these precautions, you can better safeguard your personal information while benefiting from the capabilities of Microsoft Copilot.
Analyzed Sources
Public documents used in the audit of Microsoft Copilot:
Scope & Limitations
TrustThis/AITS assessments are based exclusively on publicly available information, duly cited with date and URL, following the AITS methodology (privacy & AI transparency).
The content is indicative in nature, intended for screening and comparison; it does not replace internal audits.
TrustThis/AITS does not perform invasive tests, does not access vendor technology environments and does not process customer personal data. Conclusions reflect only the vendor's public communication at the date of collection.
