The 12 key principles of the CLeAR AI transparency framework

As artificial intelligence (AI) becomes more embedded in everyday life, it’s poised to impact everything from hiring to loan approvals to medical diagnoses—and demand for AI transparency has surged.
However, many AI models remain opaque, often functioning as "black boxes" whose reasoning and decisions are difficult to understand, even for their developers.
For the 75% of business leaders who now use AI, that opacity creates real risks: biased outcomes, regulatory non-compliance, and lost trust among users, stakeholders, and regulators.
Understanding how AI systems operate is a key step in taking responsibility for them. This involves providing visibility into how AI systems learn, use data, reason, and make decisions so they're interpretable, explainable, and accountable.
To assist businesses in ethical and responsible AI use, researchers at the Harvard Kennedy School’s Shorenstein Center, in collaboration with Microsoft Research, introduced the CLeAR Documentation Framework. It’s a set of guiding principles for building transparency into AI systems from the ground up—making AI Comparable, Legible, Actionable, and Robust by design.
This article breaks down these 12 principles, starting with what AI transparency is and why it matters, so you can quickly understand how to build AI systems that are trustworthy and transparent.

What is AI transparency?
AI transparency refers to a set of processes that create visibility and openness around how artificial intelligence systems are designed, operate, and make decisions.
It aims to foster trust and accountability in AI systems by making their inner workings understandable to users, stakeholders, and regulators. This includes providing information about the data, algorithms, and decision-making processes involved in AI systems.
Why is AI transparency important?
Transparency provides a clear explanation for how and why things happen with AI systems. By understanding the inner workings of AI, as well as its ethical, legal, and societal implications, organizations can take steps to ensure AI's outputs are safe, fair, and reliable. Transparent AI is trustworthy, safe, and compliant, and it promotes innovation.
Learn more: AI transparency: A complete guide and overview
Understanding the transparency challenge in AI
Unlike traditional software, AI models operate based on patterns learned from their training data. This opaque “black box” decision-making process introduces a set of challenges for those seeking to ensure responsible, ethical AI use:
Unreliable outputs: Model outputs can be hard to interpret, and issues like bias and inaccurate outputs (aka hallucinations) make it harder for users and stakeholders to trust and accept AI systems.
Regulatory gaps: Governments are racing to catch up with the pace of AI deployment, leaving organizations to navigate a patchwork of federal, state, and international laws and guidelines.
Lack of standards: As companies focus on innovating new AI solutions, transparency documentation may become outdated and inconsistent, go unenforced, or be absent entirely.
Public distrust: Users are more likely to reject AI systems they don’t understand, which can result in loss of user trust and churn.
Levels of disclosure: Organizations must provide visibility into an AI system's data, algorithms, interactions, and social impacts. This disclosure must satisfy technical audiences such as regulators and stakeholders, as well as everyday consumers.

12 key principles of the CLeAR AI Transparency Documentation Framework
The CLeAR Documentation Framework aims to make AI systems comprehensible, trustworthy, and compliant by embedding transparency into every stage of the AI lifecycle. It is built around 12 principles:
C – Comparable
AI systems are difficult to compare and evaluate side by side without standardized documentation. Standardization helps to promote stakeholder understanding around performance and safety, leading to effective innovation, collaboration, and communication with users and regulators.
1. Standardize transparency documentation across models
Using uniform formats to document key details about AI model inputs, outputs, and training processes allows regulators, users, and researchers to evaluate different models consistently. Common formats include model cards and datasheets for datasets.
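To make this concrete, here is a minimal sketch of a standardized model card captured as structured data so every model ships with the same fields. The field names and example values are illustrative assumptions, not a prescribed CLeAR schema.

```python
# A minimal sketch of a standardized model card as structured data.
# Field names and values are illustrative, not a required CLeAR schema.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="support-intent-classifier",
    version="1.2.0",
    intended_use="Routing customer support tickets; not for automated refunds.",
    training_data="Anonymized support tickets, Jan-Dec 2024.",
    evaluation_data="Held-out tickets from Q1 2025.",
    metrics={"accuracy": 0.91, "macro_f1": 0.88},
    known_limitations=["Lower accuracy on non-English tickets."],
)

# Persist alongside the model artifact so every release ships with its card.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```

Because every model uses the same fields, reviewers can compare intended use, data sources, and metrics side by side instead of hunting through ad hoc documents.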
2. Use benchmarking to contextualize performance
Claims about model performance are meaningless without context and details, such as model training inputs and conditions. Using established benchmarks to report AI performance provides consistency across model groups, tasks, and environments. This baseline makes performance claims testable, meaningful, and transparent to everyone concerned.
3. Include model evaluation metadata in reports
Due to the increasing sophistication of machine learning processes, model evaluations are often opaque or incomplete, which makes it hard to trust or reproduce evaluation results. Disclosing details like dataset composition, preprocessing steps, evaluation timeframes, and relevant data exclusions can help to replicate findings, detect bias, and ensure transparency.
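As an illustration, evaluation metadata can be recorded alongside reported benchmark scores in a simple, machine-readable structure. The fields and values below are hypothetical examples rather than CLeAR requirements.

```python
# A minimal sketch of evaluation metadata that could accompany reported
# benchmark scores so results can be reproduced and audited.
# All field names and values are hypothetical examples.
evaluation_metadata = {
    "benchmark": "internal-support-intents-v3",
    "dataset_composition": {"en": 0.72, "es": 0.18, "de": 0.10},  # language mix
    "preprocessing": ["lowercasing", "PII redaction", "deduplication"],
    "exclusions": "Tickets shorter than 5 tokens were removed.",
    "evaluation_window": "2025-01-01 to 2025-03-31",
    "metric_definitions": {"macro_f1": "unweighted mean of per-class F1"},
    "random_seed": 42,
}
```

Publishing this record next to the scores lets others rerun the evaluation under the same conditions and spot gaps, such as underrepresented languages, that a single headline metric would hide.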

L – Legible
Transparency hinges on disclosing how and why AI models make decisions at the deepest level to promote understanding among users, stakeholders, and regulators. However, if AI documentation is filled with jargon, it can be inaccessible to users and non-technical decision-makers. Ultimately, the format of information should suit the audience and the use cases, as comprehension and accessibility are paramount.
4. Prioritize plain language in public reporting
Everyday users must be able to understand reporting documents, such as the AI policy pages on a company’s website, or a data privacy consent form they must complete before they engage with an AI system (per laws like GDPR or the EU AI Act). Writing for the public, not just developers, can also involve using visual aids to aid in comprehension.
5. Make model behavior intelligible for stakeholders
Technical documentation can obscure key information or overcomplicate the way AI works. Explain the decision logic of models using explainability tools like SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and counterfactual examples. This helps stakeholders to understand both what the AI’s output was, and why it happened.
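For example, a sketch of generating explanations with the open-source SHAP library might look like the following; the toy dataset and model here are stand-ins for whatever system you need to explain, not part of the CLeAR framework.

```python
# A minimal sketch of per-prediction explanations with SHAP, using a toy
# scikit-learn model as a stand-in for a production system.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)    # selects a model-appropriate explainer
shap_values = explainer(X.iloc[:200])   # explain a sample of predictions

shap.plots.beeswarm(shap_values)        # which features drive outputs overall
shap.plots.waterfall(shap_values[0])    # why one individual prediction was made
```

Plots like these can be embedded in stakeholder-facing reports so readers see not just the score a model produced but the features that pushed it there.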
6. Avoid jargon and enable clear interface design
Build dashboards for AI systems and reports that surface the most relevant information clearly and concisely. Simplicity promotes engagement and enables responsible oversight.
A – Actionable
Even when transparency in AI exists, stakeholders don’t always know how to act on it.
7. Support stakeholder interpretation and use
People affected by AI often have no clear way to appeal bad outcomes, which are increasingly consequential in high-stakes industries like finance and healthcare. Provide guidance that helps users understand how to apply, question, or reject model outputs, and explain the boundaries of the model's competence and its appropriate use cases.
8. Enable redress and appeals for impacted users
Transparency often stops at the model level, ignoring the broader system it operates within. Include clear documentation on how individuals can contest decisions made by AI systems. Establish human-in-the-loop review policies where necessary.
9. Provide system-level impact documentation
Designate and disclose who is responsible for deployment, how updates are handled, and how model changes impact users. Describe governance processes, stakeholders, and downstream risks that are being accounted for.
R – Robust
AI systems evolve over time, but if transparency documentation doesn't keep up, it can lead to mismatched expectations and outdated assumptions.
10. Ensure documentation evolves with the model
Use version control for documentation just like for code, helping to ensure transparency efforts are logged for full visibility and review if needed. Maintain a changelog for transparency updates and clearly indicate when data, architecture, or use cases change.
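One lightweight way to do this, sketched below, is to append a dated entry to a transparency changelog with each release. The file name and fields are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of versioning transparency documentation alongside the
# model: each release appends an entry to a transparency changelog.
# File name and fields are illustrative assumptions.
from datetime import date

entry = {
    "model_version": "1.3.0",
    "date": date.today().isoformat(),
    "changes": [
        "Training data refreshed with Q2 2025 tickets.",
        "Added Spanish-language evaluation results to the model card.",
    ],
}

with open("TRANSPARENCY_CHANGELOG.md", "a") as f:
    f.write(f"\n## {entry['model_version']} ({entry['date']})\n")
    for change in entry["changes"]:
        f.write(f"- {change}\n")
```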
11. Audit for completeness and gaps in transparency
Periodically assess whether documentation still reflects how the system works and how it's used. Transparency efforts become meaningless over time if documentation isn't accurate; third-party audits can help validate claims and surface blind spots.
12. Build feedback loops to strengthen trustworthiness
Invite feedback from end-users, impacted groups, and oversight bodies. Allow others to flag errors, ask for clarification, and recommend improvements. Transparency is an iterative, ongoing process.

Best practices for using the CLeAR AI Transparency Framework
Transparency often exists in theory but not in practice. It's common for companies to prioritize innovating new AI solutions over designing safe, accountable, and compliant ones.
Here’s a set of best practices to follow while creating, deploying, and maintaining transparent AI systems.
Incorporate transparency into machine learning pipelines
Make transparency part of the model design and development process, not an afterthought. Automate the generation of model cards, track data lineage, and document model updates as part of your regular CI/CD workflow.
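As a sketch of what that automation could look like, the check below fails a CI pipeline when transparency documentation is missing or older than the model artifact. The file paths are assumptions about a repository layout, not a standard convention.

```python
# A minimal sketch of a CI check that fails the pipeline when transparency
# documentation is missing or stale. Paths are illustrative assumptions.
import os
import sys

MODEL_PATH = "artifacts/model.pkl"
CARD_PATH = "artifacts/model_card.json"

def check_model_card() -> int:
    if not os.path.exists(CARD_PATH):
        print("FAIL: model_card.json is missing.")
        return 1
    if os.path.exists(MODEL_PATH) and os.path.getmtime(CARD_PATH) < os.path.getmtime(MODEL_PATH):
        print("FAIL: model card is older than the model artifact; regenerate it.")
        return 1
    print("OK: transparency documentation is up to date.")
    return 0

if __name__ == "__main__":
    sys.exit(check_model_card())
```

Wiring a check like this into the same workflow that builds and tests the model keeps documentation from silently drifting behind the system it describes.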
Align with existing frameworks
Developers may lack usable tools or templates for good documentation. Map CLeAR principles to the NIST AI RMF, OECD AI Policy Observatory guidance, and the EU AI Act’s documentation requirements. This reduces redundancy and boosts legal readiness.
Use established tools
Model cards, datasheets for datasets, and transparency reports are all proven, open-source tools for transparency. They can be customized to your domain while keeping core sections consistent and auditable.
AI transparency is a pillar of AI operations
As AI becomes more central to business operations, transparency will become more than a best practice: it will be a key to effective innovation and competitive operations.
Transparency is an ongoing commitment, a foundational pillar to be considered at every stage of AI development and deployment. AI models constantly evolve and adapt, so maintaining transparency demands ongoing monitoring and regular check-ins. Only by staying transparent can AI be kept reliable, trustworthy, and compliant.
AI transparency with Sendbird
Sendbird makes it easy to achieve transparency with our AI agents for customer service. Try the powerful AI agent platform equipped with transparency-specific features like agent scorecards, activity trails, and end-to-end testing—all of which support the explainability, interpretability, and accountability of AI agents.
To see how Sendbird helps you build, optimize, and scale AI support agents with full transparency, contact our sales team for a demo.
To learn more about AI at Sendbird, you might enjoy these related resources:
AI agent scorecards: Bring oversight and accountability to AI
Activity trails: Full visibility into how your AI agent acts