Discover how discreet changes in privacy policies, like those from Slack and Anthropic's Claude, can create invisible corporate risks under EU AI Act compliance.
Trust This Team

Why do companies only discover critical policy changes when it's already too late?
In 2024, a global technology company made a seemingly minor, technical-sounding change to its privacy policy. Corporate users carried on as usual, unaware that their data could now be used in new ways. A few months later, in 2025, another tech company did the same.
What connects these events is not just the timing, but the invisible nature of changes that can completely redefine corporate risk.
We're talking about Slack in 2024 and Claude (Anthropic) in 2025. Both cases reveal a concerning pattern: discreet changes to public policies that go unnoticed until they generate real consequences. For privacy, compliance, IT, and procurement teams, this represents a dangerous blind spot.
Slack updated its privacy policy to include terms that allowed user data to be used for training machine learning models. The change was communicated in technical language, without fanfare, buried in lengthy legal documents.
Companies using the platform only noticed the change when discussions began circulating in specialized forums and social media.
The problem wasn't just what was being changed, but the lack of active transparency about the real impact of the change. Organizations with strict policies on data use in AI needed to detect the change on their own and take explicit action before their data could be used in ways those policies prohibit.
Less than a year later, Anthropic updated aspects of Claude's policy related to data processing and retention. Again, a discreet change that could affect how companies manage sensitive data processed by generative AI.
Companies using Claude for document analysis, automated customer service, or content generation need to be aware of any changes regarding data retention periods, the use of inputs for model training, and sharing with third parties.
Without an active monitoring system, these changes simply don't appear on the radar of those who need to act.
The answer is simple: scale and speed. A typical DPO needs to monitor dozens or hundreds of software vendors. Each of them can update privacy policies at any time, without obligation to directly notify corporate clients.
Manually checking every vendor's policies is slow, error-prone, and impossible to sustain at scale.
Moreover, when a change is finally detected, response time is critical. The faster your company identifies a risky change, the more options it will have: renegotiate contracts, implement additional controls, or migrate to alternatives before the problem worsens.
The consequences of missing a critical privacy policy change go far beyond abstract non-compliance:
Violations of the EU AI Act, GDPR, or other regulations can generate significant fines. If your company processes personal data from clients using software that silently changed its retention policy, you may be in non-compliance without knowing it.
Many corporate contracts include specific clauses about data processing. A unilateral change in a vendor's policy can put your company in breach of contract with your own clients.
Discovering too late that sensitive data was being handled improperly can trigger trust crises with clients, partners, and regulators.
Having to hastily replace critical software because an incompatible policy change was discovered late generates costs, delays, and organizational friction.
Trust This was created precisely to close this blind spot. Our platform continuously monitors the privacy policies of the software your company uses, alerting you in real time to any risky changes.
Versioned history: We maintain a complete record of all previous versions of each software's policies, allowing detailed comparisons between what changed and when.
Automated alerts: When a policy is updated, you receive instant notifications with analysis of the potential impact of the change, classified by risk level.
Comparative analysis: We identify not just that something changed, but what specifically changed — data retention, third-party sharing, AI use, international transfers — and how this affects your compliance criteria.
Auditable evidence: All alerts include links to original policy versions, change dates, and specific affected criteria, generating documentation ready for audits.
Policy monitoring is not the exclusive responsibility of one area. It's a cross-cutting issue that affects multiple roles:
DPOs and privacy teams: Need to ensure continuous compliance with EU AI Act/GDPR and issue updated opinions on vendors when policies change.
CISOs and security teams: Need to assess whether policy changes introduce new data risk vectors, especially related to third parties and sub-processors.
Procurement and purchasing: Should be aware of implicit contractual changes and prepared to renegotiate terms when necessary.
Legal and compliance: Need to assess contractual and regulatory impacts of unilateral changes in vendor policies.
AI governance teams: With the growing use of generative AI, any change related to model training, prompt retention, or data use needs to be evaluated immediately.
If you don't yet have a structured process to monitor privacy policy changes, start now. Don't wait for the next "Slack case" to happen with one of your critical vendors.
Map your critical vendors: List the software that processes sensitive data or is essential for your operations. These are the first that need to enter the radar.
Establish a baseline: Document the current policies of these vendors and the compliance criteria they meet today. This will be your reference to identify future changes.
Implement automated monitoring: Configure alerts to be automatically notified about changes. Trust This offers this capability in an integrated way, allowing you to focus on what really matters: acting when necessary.
The Slack and Anthropic cases are not exceptions. They are symptoms of a new cycle of corporate risk in which discreet changes to public policies can completely redefine your compliance profile.
The difference between prepared companies and vulnerable companies lies in the ability to detect these changes before they become real problems.
Monitoring privacy policies is no longer optional. It's a matter of corporate survival.