ExplAIn: Anove’s accessible guide to Trustworthy AI Tools
By Cintia Nunes, Anove Operations Manager
AI is everywhere… but can you trust it?
From note-taking apps to chatbots, AI tools are reshaping how we work and live. But how much do you really know about the tools you use?
Who built them?
Are they compliant with regulations like the GDPR or the EU AI Act?
Could your data be at risk?
At Anove, we believe transparency is the foundation of trust. That’s why we built ExplAIn: a free, curated database of AI tools and models, evaluated for transparency, compliance, and ethical use. Whether you’re a professional, a business, or simply a curious user, ExplAIn helps you make informed choices so you can use AI with confidence.
What is ExplAIn?
ExplAIn is Anove’s AI catalog, designed to help you compare AI tools based on transparency scores. It is available at https://www.explain-ai.com/.
Anove combines expertise in law, technology, and ethics to build tools that support responsible AI adoption. With ExplAIn, we evaluate AI tools using five key criteria:
- Supply Chain Transparency (20%): Who’s behind the tool? Where is it hosted?
- Compliance Transparency (20%): Does it follow GDPR, the EU AI Act, and other regulations?
- Policy Transparency (25%): Are terms, privacy policies, and data practices clear?
- Technical Transparency (25%): How was the model trained? Are there known biases?
- Ethical & Operational Transparency (10%): Does the provider commit to responsible AI?
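The weighting above amounts to a weighted average across the five criteria. As a minimal sketch, assuming each criterion is rated on a 0–100 scale, the overall score could be computed like this; the dictionary keys and the example ratings are illustrative assumptions, not Anove’s actual methodology or data:

```python
# Weights from the five ExplAIn criteria (they sum to 1.0).
WEIGHTS = {
    "supply_chain": 0.20,
    "compliance": 0.20,
    "policy": 0.25,
    "technical": 0.25,
    "ethical_operational": 0.10,
}

def transparency_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-criterion ratings (each 0-100)."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Hypothetical tool, rated on each criterion:
example = {
    "supply_chain": 80,
    "compliance": 70,
    "policy": 90,
    "technical": 60,
    "ethical_operational": 50,
}
print(transparency_score(example))  # → 72.5
```

Because Policy and Technical Transparency carry the largest weights (25% each), a tool that is vague about its data practices or training process loses the most points under a scheme like this.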
In general, a higher score means greater accountability: the provider is open about how the tool works and who has access to your data. Tools with higher scores are also more likely to comply with regulations and protect your privacy.
Did you find that an AI tool you use has a low transparency score? Then proceed with caution: the tool may lack clear policies, rely on opaque data practices, or pose higher risks.
Our goal with ExplAIn is to empower users with the knowledge to choose and use AI tools safely. We want to contribute to AI literacy by helping users understand AI risks such as data privacy and bias, and by pointing them toward alternatives to low-trust tools.