
Data teams have long lived in a tradeoff: power versus trust.
SQL is powerful, expressive, and flexible, but struggles with consistency and reusability. Semantic layers are trustworthy, governed, and explainable, but rigid and slow to evolve.
The data world has been waiting three years for LLMs to close this gap, yet it still lags behind software developers in LLM adoption. So how can data teams achieve the same combination of power and trust and bring "data-driven" to the masses?
Our mission at Zenlytic is to bridge that divide. We’re thrilled to introduce the Clarity Engine, a new foundation for intelligent analytics that combines the flexibility and depth of SQL with the explainability and governance of a semantic model.
Clarity Engine: Just Ask, and It Answers
This is the most significant upgrade we've made since the launch of our AI analyst Zoë in 2023.
Zenlytic has been a pioneer in using semantic layers with LLMs (and earlier, using Small Language Models before LLMs existed). However, the reality is that semantic layers, while accurate, are time-consuming to set up and, even worse, require the data team to anticipate every question a business person might ask. On the other hand, Text-to-SQL is essentially a black box, with limited context on the data or the business. When it’s wrong, business users can’t tell why and can’t trust the results.
The Clarity Engine retains everything that makes traditional semantic layers valuable: consistent metric definitions, governed measures and dimensions, and explainable logic. But with the Clarity Engine, the semantic layer is an optimization rather than a requirement. It generates SQL directly at its core for deep, flexible analytics, then applies the semantic layer to structure and govern the query.
“We already had a dozen tools that could tell us our sales last week. But only Zenlytic can answer the questions dashboards can’t. Zoë handles those high-impact questions that would be impossible to ask in traditional data platforms.” - Matt Griffiths, CTO of Stanley Black & Decker

Features:
- Powerful SQL - Ask a question and Zoë composes an answer using powerful, flexible SQL. Then, our SQL Bridge maps the query into the structured data definitions in your semantic layer.
- Dynamic Flexibility - If the semantic layer isn't complete, dynamic measures and dimensions are created intelligently to answer the user’s question.
- Explainable AI - Every dynamic answer is explained in-GUI so business users can immediately understand and trust the results.
- AI Governance - Data teams can review all queries, promote helpful dynamic fields to the semantic layer, and maintain control over analytics logic.
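To make the promotion flow concrete, here is a rough sketch of what a promoted dynamic field could look like in a semantic-layer view. The syntax, view name, and field names are illustrative assumptions, not Zenlytic's actual schema:

```yaml
# Hypothetical semantic-layer view (illustrative syntax only)
type: view
name: orders
sql_table_name: analytics.orders

fields:
  # A governed measure defined up front by the data team
  - name: order_total
    field_type: measure
    type: sum
    sql: ${TABLE}.total_amount

  # A dynamic measure generated at question-time,
  # then promoted here for governed, reusable access
  - name: repeat_order_rate
    field_type: measure
    type: number
    sql: COUNT(CASE WHEN ${TABLE}.order_number > 1 THEN 1 END) * 1.0 / COUNT(*)
```

Once promoted, the field behaves like any other governed definition: every future question that touches it resolves to the same logic.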
As Amanda Yan, Head of Data at J.Crew and Madewell, put it:
“We’ve tried every AI-powered platform out there. But our self-serve users still asked us to verify everything. Zenlytic solves this. Once our end users understand the results, they trust the results.”

The Context Layer
When using LLMs, context is everything. That's why we’ve extended the traditional semantic layer into what we call the Context Layer. The Context Layer contains traditional structured view definitions, but also captures unstructured feedback into “Memories” that don't fit into a YAML format.
Context is more than just mapping columns to dimensions; it’s about understanding why a user is asking a question, what tribal knowledge is needed, and how to generate answers that are not only correct but meaningful.
“LLMs haven’t destroyed the semantic layer; they’ve changed our relationship with it. The Clarity Engine shows what happens when you let semantics and context work together.” - Ryan Janssen, CEO & Co-Founder, Zenlytic
The Context Layer stores information in three ways to ensure consistent and accurate answers:
- Natural language memories: capturing the tribal knowledge that usually lives undocumented within teams.
- Semantic logic: remembering prior logic, user preferences, and evolving business concepts.
- Dynamic data modeling: generating new metric logic on the fly, with the ability to promote it for future reuse.
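As a sketch of the first of these, a natural-language memory might be stored alongside the structured view definitions. The format, keys, and example text below are hypothetical, intended only to show the kind of tribal knowledge a memory can capture:

```yaml
# Hypothetical "memory" entry (illustrative format, not Zenlytic's actual schema)
memory:
  source: user_feedback
  applies_to: [customers, orders]
  text: >
    "Active customer" means a customer with at least one order in the
    trailing 90 days; the EMEA team additionally excludes wholesale
    accounts when reporting this number.
```

Because the memory is unstructured text rather than a formal field definition, it can hold context, like regional reporting conventions, that would never fit cleanly into a measure or dimension.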
This makes the Clarity Engine not just reactive, but proactive, learning from interactions, improving over time, and helping data teams scale insight without giving up control.
Governed Flexibility
One of the most powerful aspects of the Clarity Engine is how it aligns self-serve analytics with governance:
- Data teams can see questions asked, with feedback from end users.
- New fields and measures can be created by the LLM at question-time, then promoted into the semantic layer with a click.
- Governance remains intact: definitions, logic, and trust are preserved even as usage scales.
This is a future where analytics is no longer a bottleneck or a black box: it's transparent, flexible, and collaborative.
The World's First LLM-Native Data Architecture
The Clarity Engine represents more than a product release; it’s a rethinking of how data stacks can work with AI. As we enter an era where LLMs are coworkers for business questions, the need for both contextual flexibility and semantic trust becomes essential for the agents to reach their full potential.
With the Clarity Engine, you no longer have to choose between flexibility and governance. Between depth and trust. You get both — and a whole lot more.
Intelligent Analytics is here.