AITS transforms public evidence into a comparable index of AI and privacy transparency. Discover how it standardizes supplier mapping and comparison under the EU AI Act.
Trust This Team

Companies that procure enterprise software face a critical challenge: how can they evaluate the AI compliance of dozens of suppliers quickly, in a standardized and auditable way? Manual analysis of AI governance policies is time-consuming, inconsistent across evaluators, and rarely allows fair comparisons.
The AITS (AI Trust Score) methodology was developed by Trust This to solve exactly this problem. It is an index that measures how transparently software and digital services publicly communicate their AI governance and privacy practices, with special emphasis on systems that use artificial intelligence.
Unlike certifications or technical audits that require internal access to companies, AITS works exclusively with public evidence: AI governance policies, terms of use, official documentation, and statements about AI use. Each evaluation is based on specific URLs, versions, and dates, creating a complete and reproducible audit trail.
The AITS methodology evaluates software through 20 criteria inspired by the EU AI Act and international AI standards (ISO/IEC 42001, ISO/IEC 23894, and ISO/IEC 42005).
The structure is divided into two layers:
When software declares AI use (criterion 71), 16 additional criteria specific to algorithmic governance are applied:
The second layer of 12 criteria covers fundamental aspects of privacy transparency:
Each criterion is scored on a three-point scale: 3 (clear, available information), 1 (generic information without tangible detail), or 0 (information absent). This simplicity enables objective comparisons between suppliers.
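To make the scale concrete, here is a minimal sketch of how per-criterion scores could be validated and aggregated into an index. The criterion names and the simple percentage aggregation are illustrative assumptions, not the official AITS weighting:

```python
from typing import Dict

# Hypothetical scale from the methodology: 3 = clear and available,
# 1 = generic without tangible detail, 0 = absent.
VALID_SCORES = {0, 1, 3}

def aits_index(scores: Dict[str, int]) -> float:
    """Aggregate per-criterion scores into a 0-100 index.
    Illustrative formula only, not the official AITS weighting."""
    if not scores:
        raise ValueError("no criteria scored")
    for criterion, score in scores.items():
        if score not in VALID_SCORES:
            raise ValueError(f"invalid score {score} for {criterion}")
    max_total = 3 * len(scores)  # every criterion can earn at most 3 points
    return round(100 * sum(scores.values()) / max_total, 1)

# Example: one clear, one generic, one absent criterion (names invented).
example = {"data_retention": 3, "ai_disclosure": 1, "model_provenance": 0}
print(aits_index(example))  # → 44.4
```

Restricting scores to {0, 1, 3} keeps evaluations coarse on purpose: the gap between "generic" and "clear" is wider than between "absent" and "generic", which discourages vague boilerplate.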
The choice to work exclusively with public information is strategic and brings unique benefits:
Scalability: it's possible to analyze hundreds of suppliers without depending on questionnaires, on-site audits, or access to internal systems. This drastically reduces evaluation time.
Auditability: each score comes with specific URLs, document versions, and access dates. Anyone can validate the evidence supporting the analysis.
Comparability: by using the same criteria and public sources for all suppliers, AITS allows fair and objective comparisons, reducing individual evaluator bias.
Reproducibility: the methodology can be applied consistently over time, allowing continuous monitoring and detection of changes in supplier transparency practices.
Democratization: companies of any size can access the same information and perform comparable evaluations, without needing resources for deep technical audits.
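The auditability, reproducibility, and comparability described above depend on every score carrying its evidence. A minimal sketch of what such an evidence record might look like (the field names and schema are assumptions for illustration, not Trust This's actual data model):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Evidence:
    """One piece of public evidence backing a criterion score.

    Hypothetical schema: each score is tied to a specific URL,
    document version, and access date, forming the audit trail.
    """
    criterion: str
    url: str
    document_version: str
    accessed_on: date
    score: int  # 3, 1, or 0

    def citation(self) -> str:
        """Render the record as an auditable citation string."""
        return (f"{self.criterion}: {self.url} "
                f"(v{self.document_version}, accessed {self.accessed_on.isoformat()})")

# Illustrative record; the URL and criterion name are invented.
ev = Evidence(
    criterion="data_retention",
    url="https://example.com/privacy-policy",
    document_version="2.3",
    accessed_on=date(2025, 1, 15),
    score=3,
)
print(ev.citation())
```

Making the record immutable (`frozen=True`) mirrors the audit requirement: once an evaluation is published, its evidence should not be silently altered.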
Transparency about limitations is a fundamental part of the AITS methodology. The index does NOT evaluate:
AITS is a screening tool based on public communication, not a certification or substitute for in-depth technical audits. It answers the question: "How does this supplier communicate its AI governance practices?" and not "Is this supplier 100% compliant with the EU AI Act?"
The AITS methodology was designed to support different moments of supplier governance:
Use AITS (AI Trust Score) to quickly filter dozens of candidates, identifying those with stronger public transparency. This lets you focus in-depth due diligence efforts on the most promising suppliers.
Compare scores and specific criteria between finalists. For example, if two suppliers have similar functionalities but very different AITS scores, this indicates that one communicates its practices better — an important sign of governance maturity.
Identify specific gaps (criteria marked as "NO") and use this information to require compensatory contractual clauses. If a supplier doesn't document data retention, include explicit obligations about this point in the contract.
Track changes in scores and public policies of already contracted suppliers. Significant changes may indicate the need for contract review or risk reassessment.
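The monitoring step above amounts to diffing dated score snapshots. A hypothetical helper for flagging criteria whose score changed between two evaluations (criterion names invented for illustration):

```python
from typing import Dict, List, Tuple

def score_changes(previous: Dict[str, int],
                  current: Dict[str, int]) -> List[Tuple[str, int, int]]:
    """Return (criterion, old_score, new_score) for every criterion
    that changed between two snapshots. A criterion missing from a
    snapshot is treated as score 0 (absent information)."""
    changed = []
    for criterion in sorted(set(previous) | set(current)):
        old, new = previous.get(criterion, 0), current.get(criterion, 0)
        if old != new:
            changed.append((criterion, old, new))
    return changed

# Illustrative quarterly snapshots of the same supplier.
q1 = {"data_retention": 3, "ai_disclosure": 1}
q2 = {"data_retention": 1, "ai_disclosure": 1, "model_provenance": 3}
print(score_changes(q1, q2))
```

A drop such as `data_retention` going from 3 to 1 is exactly the kind of signal that could trigger a contract review or risk reassessment.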
Trust This uses an advanced AI pipeline to operationalize the AITS methodology at scale:
Automated collection: web scraping identifies and captures AI governance policies, terms of use, FAQs, technical documentation, and AI statements for each piece of software evaluated.
AI analysis: specialized models (Gemini 2.5 Pro, Claude 3.5 Sonnet, DeepSeek V3.1) process documents, identify evidence related to the 86 criteria, and generate analyses in executive language.
Cross-validation: multiple models verify consistency in analyses, reducing chances of false positives or incorrect interpretations.
Versioning: all analyses are dated and referenced to specific versions of public documents, allowing historical tracking and change detection.
Insight generation: beyond the numerical score, the system identifies patterns, market best practices, and transparency trends by software category.
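The cross-validation step can be pictured as a consensus rule across model outputs. The majority-vote logic below is an illustrative assumption, not Trust This's actual consensus mechanism; disagreements are flagged for human review rather than guessed:

```python
from collections import Counter
from typing import Dict, List, Optional

def cross_validate(model_scores: List[Dict[str, int]]) -> Dict[str, Optional[int]]:
    """Keep a criterion score only when a strict majority of models agree.
    Criteria without majority agreement map to None (escalate to a human).
    Illustrative consensus rule, not the production pipeline."""
    result: Dict[str, Optional[int]] = {}
    for criterion in set().union(*model_scores):
        votes = Counter(m[criterion] for m in model_scores if criterion in m)
        score, count = votes.most_common(1)[0]
        result[criterion] = score if count > len(model_scores) // 2 else None
    return result

# Hypothetical per-model outputs for two criteria.
gemini = {"data_retention": 3, "ai_disclosure": 1}
claude = {"data_retention": 3, "ai_disclosure": 0}
deepseek = {"data_retention": 3, "ai_disclosure": 3}
print(cross_validate([gemini, claude, deepseek]))
```

Requiring a strict majority is a conservative choice: it trades coverage for precision, which matches the article's goal of reducing false positives.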
The AITS methodology was developed to serve different profiles within organizations:
DPOs and privacy professionals use AITS for quick supplier screening, replacing manual analysis that would take days with a standardized evaluation in minutes. The auditable score makes it easier to justify recommendations in formal opinions.
CISOs and IT managers identify privacy risks in tools adopted via shadow IT and prioritize which suppliers deserve in-depth technical security analysis, optimizing limited resources.
Purchasing and Procurement teams find in AITS an objective criterion for tiebreaking between similar suppliers and defensible documentation for internal audit processes and approval committees.
Legal and Compliance teams accelerate contract analysis, using gaps identified by AITS to require specific clauses and reduce rework and contractual exceptions.
EU AI Act consultants offer quick diagnostics to clients, justify the need for in-depth analyses, and create recurring supplier monitoring services.
The AITS methodology doesn't seek to fully automate AI governance, but rather to enhance human decisions:
Professionals maintain final responsibility for evaluations and contracting decisions. AITS provides standardized and comparable diagnosis that serves as a basis for contextual and strategic analysis.
The index identifies where to dig deeper. Critical gaps signaled by AITS indicate points that warrant direct questions to the supplier, detailed legal analysis, or specific contractual requirements.
The evidence trail with URLs and versions allows teams to validate findings, verify original context, and make informed decisions — not just based on a number, but on documented and auditable facts.
By making AI transparency measurable and comparable, the AITS methodology creates market incentives:
Suppliers are motivated to improve public communication about AI governance and privacy, as low scores become visible competitive disadvantages in purchasing processes.
Buyers gain negotiating power based on objective data, accelerating screening and reducing regulatory risks through more informed decisions.
The market evolves to higher transparency standards, better preparing for emerging AI regulations and creating competitive differentiation through best practices.
The AITS methodology doesn't replace audits or formal certifications, but fills a critical gap: the need for quick, standardized, and auditable supplier screening based on public evidence. In a world where companies manage hundreds of software suppliers and AI becomes ubiquitous, having a clear methodology to map and compare AI transparency is essential for effective governance.
Want to know the AITS scores of your current suppliers? Explore our platform and see how the methodology can support your purchasing and monitoring decisions.