
LLMs & the Semantic Layer: Self-Serve Has Entered the Chat

Paul, the CTO and co-founder of Zenlytic, discusses how LLMs and semantic layers enable self-serve analytics. He explains that self-serve is a spectrum whose capabilities increase continuously with the power of the underlying technology. While large language models (LLMs) are powerful tools for understanding intent and distilling it into useful information, they require business context to make correct decisions. This is where semantic layers come in: they encode important information like metric definitions, dimensions, and joins, ensuring correctness every time something is calculated. Companies without proper semantic layers often struggle with ad hoc SQL queries or outdated dashboards, which can lead to errors in reporting. Warby Parker is a good example of a company that spends most of its data team's time refining its semantic layer to ensure consistency in metrics across stakeholders.

Product
January 31, 2024

This keynote discusses the challenges of self-serve analytics and how the combination of LLMs (large language models) and semantic layers can improve the user experience. The semantic layer ensures correctness while LLMs provide comprehension, resulting in faster response times. The demo shows how users can ask iterative questions without needing prior knowledge of where the data lives. The use of LLMs and semantic layers may lead to data scientists spending more time on complex statistical work rather than ad hoc SQL queries. This combination creates a better experience for end users and a more valuable outcome for all parties involved.

Full Transcript and Timestamps below:

00:00
What we're going to be talking about is LLMs, which you've all heard of, and the semantic layer, and how together they enable a form of self-serve analytics and self-serve data that wasn't possible before. Let me jump in and I'll introduce myself. So who am I? I'm Paul, I'm the CTO and co-founder of Zenlytic, a BI tool. We compete with Tableau, Looker, Power BI, all of these BI tools, to make data more accessible to end users. I've got a master's in data science from Harvard. I live in Denver, Colorado; I moved there from New York. I've worked in data for a little over seven years, mostly setting up a lot of these stacks. I've set up and configured basically every data warehouse you can think of, and pretty much every BI tool you can think of. I've got a lot of firsthand experience in how these things go wrong, because the default stance of doing anything in data is that it's probably going to go wrong.


00:57

I'm also very into rock climbing, which is what spurred the move from New York to Denver. So let's dive in. If we're going to talk about self-serve, we've got to ask: what is a data scientist's job? Because when you think about what self-serve is for a non-technical person, it's the inverse; it's what isn't a data scientist's job. That's what we're going to be looking at for self-serve. Self-serve is the things that end users, the end consumers of the data, are able to get for themselves without the help of a technical person. The thing is, that's not necessarily a single thing, a single ability that never changes. It changes based on the underlying technology, which enables different forms and different modalities of self-serve, different abilities that weren't possible before. If we think about what you can consider self-serve, maybe back in the day looking at static dashboards was self-serve in a sense: if you were looking at revenue over time in some big OLAP cube back in 2005, that's self-serve in the sense that you're not messaging someone to get that data.


02:11

It's maybe not self-serve in the way we think about it now, but self-serve itself is this spectrum that increases capabilities continuously based on the power of the underlying technology. After these view-only dashboards, you have something like Tableau, where instead of just being able to view this cube of a dashboard, you can actually filter it, you can edit it, maybe you can change a few things, group by something, actually have some interactivity, and that enables another layer of self-serve. If you are able to go in and add a filter that you need, or change something from monthly to weekly, that's fewer and fewer emails that end up going to the data team. Technology changes and advances, and you get tools like Snowflake, which enable products like Looker, where you're no longer limited to, okay, we've got some filters on this dashboard, we've got maybe some widgets we can change on the dashboard.


03:10

You're able to explore from here on a dashboard. You're able to actually take some content on that dashboard and iterate on it, maybe one or two steps, if you're not a technical user. For technical people like us, these interfaces might be straightforward, but for people who are not as technical, they're still pretty hard. The question is, well, what's the next step, and what's the next major change in technology that's going to allow self-serve to move forward? That major change in technology allows you to explore anywhere, where you no longer have to know which Explore something's in, which table something's in, which folder something's in. You no longer have to figure out exactly what the terminology is for the thing you're talking about. You can ask a question and get a competent answer that actually answers it. That is something you need the intelligence of a large language model to be able to do.


04:11

We've all heard of large language models, so I won't spend a lot of time talking about what they can do. They're pretty cool. With the release of GPT-4, the amount of hype around these things is at an absolute fever pitch. The Twitterverse basically hangs on every OpenAI release. They release ChatGPT, and it's compared to the iPhone moment; they release plugins, and it's compared to the App Store. It's absolutely groundbreaking technology, and that's apparent to everyone. It's absolutely incredible what they can do and what those products are capable of, but it's actually not good enough. Okay, so why isn't it good enough? It's a pretty impressive product; we've all seen it, it's amazing. Because when you get inside a business, you have problems like these: Do you use processed_at or created_at for recognizing revenue? Is net revenue net of refunds, or just of discounts?


05:12

Why does the customer table fan out when I join on customer ID? It seems obvious that should be the primary key. Are active users based on that user status field, or on some level of interactions, logins, things like that? The kicker is that these are real-world examples of real-world problems that I've had as a human doing data, setting people up and seeing exactly this. I've seen problems reconciling data because people are looking at the wrong dates for revenue, and they're unable to match numbers that they need to match. I've seen basically as many different definitions of net revenue as there are companies, because companies have different needs; they have different definitions that serve them in different ways. These definitions are not the same across companies, but it's really important that they stay consistent inside a company. Otherwise you have a lot of chaos.
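To make that fan-out concrete, here's a minimal sketch using an in-memory SQLite database. The tables and columns are hypothetical, not from the talk, but they reproduce the failure described: rows added to the customer table break the assumed primary key, and the join silently doubles revenue.

```python
# A minimal sketch of the fan-out problem, with hypothetical tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, revenue REAL);
    INSERT INTO orders VALUES (1, 100, 50.0), (2, 100, 30.0);

    -- customer_id looks like a primary key, but email leads were added
    -- without one, and customer 100 now appears twice.
    CREATE TABLE customers (customer_id INTEGER, email TEXT);
    INSERT INTO customers VALUES (100, 'a@example.com'),
                                 (100, 'a+dupe@example.com'),
                                 (NULL, 'lead@example.com');
""")

# The join fans out: each order for customer 100 matches two customer rows.
total = conn.execute("""
    SELECT SUM(o.revenue)
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
""").fetchone()[0]

print(total)  # 160.0 -- double the true revenue of 80.0
```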


06:04

On this customer table example: I've seen customer tables where people decide to add in email leads that have no customer ID, and that fans out all these joins downstream. This stuff happens, and it's just what data looks like in a lot of organizations. What counts as an active user isn't just a field or a level of logins; that definition can actually change. Business needs change, and what it means to be active can actually change. And this stuff isn't generalizable. You can't just take what it generally means to be active and apply that to any specific business. Businesses have real needs that are unique and specific to them. The core point here is that even a human with experience in this area can't do this without context on the business, without going in and doing the work and figuring out how this stuff is defined: the way these definitions work, the way the tables work, the nuances, and the tribal knowledge that lives inside these companies.


07:11

Even though LLMs are really powerful and really impressive technology, if they don't have that business context, they're just not going to be able to make the right decisions and give people reliable metrics that they can actually trust. At the end of the day, you can't trust them to pull data, basically because data is hard. They make generalizations: they take what "revenue" or "net revenue" or "active users" means across their whole training data, and that might not be what it means for your business. Mistakes here aren't just, oh, it said something silly in that poem, or it didn't understand what that idiom is. This is board reporting. This is a big deal. People can get sued over wrong numbers here. It's really critical that this is data you can trust. Part of what that means is that an end user isn't going to use a tool, no matter how easy it is to use,


08:10

if there's some chance that they will just get a wrong number back, because that defeats the whole purpose of having the tool in the first place. What that means, at its core, is that text-to-SQL won't cut it for analytics. Because no matter how good you are at generating that SQL, unless you have the context on the business and exactly how everything is defined for that business, you're going to occasionally do the wrong thing. You might use a perfectly fine definition, but not the one the company uses. And that's a big deal; the errors there are a very big problem. So how do you fix that problem? How do you give an LLM the context to be able to make the right decisions? You have to marry it to the business's context. You have to marry it to the semantic layer, so that you actually have the context.
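As an illustration of why plain text-to-SQL falls short, compare two queries for the same question. Both are syntactically fine, and a generic model could plausibly produce either; the table, columns, and the particular definition of net revenue here are hypothetical, chosen to mirror the processed_at/created_at and refunds/discounts questions from earlier.

```python
# A sketch of the definition problem: both queries run without errors,
# but only one matches this (hypothetical) company's definition.

# What a generic text-to-SQL model might plausibly generate:
generic_sql = """
    SELECT SUM(amount) AS net_revenue
    FROM orders
    WHERE created_at >= '2024-01-01'
"""

# What the business actually means: revenue recognized on processed_at,
# net of both refunds and discounts.
company_sql = """
    SELECT SUM(amount - refund_amount - discount_amount) AS net_revenue
    FROM orders
    WHERE processed_at >= '2024-01-01'
"""

# Nothing in the SQL itself tells you which one is right;
# only business context does.
```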


09:03

You can take advantage of what LLMs are good at, because what LLMs are good at is comprehension. They can understand a person's intent and distill it into a set of actions and a response that is useful and informative for that person. Their core skill, their core value prop, is comprehension. Then we get to the context part: the semantic layer. I'll give a quick little overview of what a semantic layer is. Think of the semantic layer as definitions. That can be the definition of a metric, like how you define net revenue, or how you define a discount, gross revenue, churn, active users, things like that. There can be dimensions that you use to slice by, and that includes acronyms, things that are internal to your company that maybe don't make a lot of sense anymore, but they've always been done like that.


10:00

You need to continue to have that naming; that stuff's important. How joins work, how to properly join tables together, even if you have weird nulls in certain cases: that's just how data looks in most companies, and you have to be able to handle it. The semantic layer gives you the ability to encode that information in something that is governed in git, and that lets you provide that information to any downstream service that's going to generate SQL. The core value prop of a semantic layer, then, is correctness. Whereas with LLMs we had comprehension, with semantic layers we have correctness. The semantic layer can guarantee that every time you calculate active users, every time you calculate churn, no matter what you joined in to do that, or what you sliced by, or what you filtered by, you're going to use the same exact definition every time, and you can trust that the number is going to be calculated correctly.
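Here's a minimal sketch of the idea. Real semantic layers typically live in YAML files governed in git; the metric, dimension, and join definitions below are hypothetical, written as Python for brevity. The shape is the point: definitions are written once, and every query is compiled from them.

```python
# A minimal, hypothetical sketch of what a semantic layer encodes.

SEMANTIC_LAYER = {
    "metrics": {
        "net_revenue": {
            "table": "orders",
            # The governed definition: net of refunds AND discounts,
            # recognized on processed_at, not created_at.
            "sql": "SUM(amount - refund_amount - discount_amount)",
        },
    },
    "dimensions": {
        "order_month": {
            "table": "orders",
            "sql": "DATE_TRUNC('month', processed_at)",
        },
    },
    "joins": {
        # Encode the tribal knowledge too, e.g. how to join safely to a
        # customer table that contains email leads.
        ("orders", "customers"): "orders.customer_id = customers.customer_id",
    },
}

def compile_query(metric: str, dimension: str) -> str:
    """Compile SQL from the governed definitions, so the metric is
    calculated the same way no matter who asks or what they slice by."""
    m = SEMANTIC_LAYER["metrics"][metric]
    d = SEMANTIC_LAYER["dimensions"][dimension]
    return (
        f"SELECT {d['sql']} AS {dimension},\n"
        f"       {m['sql']} AS {metric}\n"
        f"FROM {m['table']}\n"
        f"GROUP BY 1"
    )

print(compile_query("net_revenue", "order_month"))
```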


10:56

At their core, what semantic layers give you is correctness, and semantic layers really are necessary for effective self-serve, even without talking about LLMs. I'm just going to talk through a few examples, again ones I've seen out in the wild, of what this progression looks like without a semantic layer. There are real companies, and a lot of people who use data at these companies, who have some BI service that they go and download a CSV from, and then they go to Shopify or Recharge or something and download a CSV from there, and do all these VLOOKUPs in Excel, and it takes them 20 hours a week to get data that, a lot of the time, has some small error in it and ends up being kind of wrong. And that's just how it works in a lot of organizations.


11:46

Or you have this dashboard sprawl, where you've got nine different, partially outdated definitions of churn or active users or something like that. Of those definitions, some might be right, some might be wrong. It's really unclear to any of the people actually using the BI tool which one's right and which one's wrong, so it still ends up being an email to somebody, because you can't risk picking the wrong one. Or even if you have a technical team, even if you have people who are able to write SQL, they end up writing ad hoc SQL to answer basically every question, because there's no framework in place that lets them say, oh hey, I've got this answer over here based on these two tables, and someone just asked me to join in a fourth table over there, a few tables away. There's no way to do that easily, so you just have to write the SQL yourself.


12:37

That process, not only is it excruciatingly difficult, it just takes a really long time and it's error-prone. You can forget that one little rule you needed on that join, and all of a sudden things are wrong. It's just really easy to make mistakes here. It's not like any of the teams doing this are slacking off; they're working really hard. You have to work really hard to be able to fill all of these requests. It's just tough. The question is, what does good look like? Not to pick on, but to promote, a specific example: Warby Parker is great at data. They make great glasses and they're very good at data. What makes them really good, though, is that they spend pretty much all of their data team's time building and refining their semantic layer. Instead of putting a lot of energy into answering ad hoc questions, they go in and spend a lot of energy defining joins properly, defining metrics properly, and working with stakeholders to make sure they have consistent definitions for these metrics.


13:45

And that's a different kind of work. It still takes work, but their data team overall spends a lot less time answering ad hoc questions than at a lot of other companies. But that's not the whole story. You can't fully get to self-serve even with a best-in-class semantic layer and a top-tier data team working really hard to make that semantic layer polished and usable. Because at the end of the day, you need to be able to merge these results together. End users don't know where things are. If they're asking a question about marketing spend or something like that, and they have a question about acquisition, those might not actually be near each other in terms of the same Explore, the same table, and then they can't figure out how to get to them. It's just very hard for end users to navigate these interfaces.


14:36

You see that in the adoption rates of pretty much all BI tools. It's just really hard to get end users to actually use these tools instead of emailing someone, because that's the default, and they can trust that if they email someone who has context on the business, that person is going to give them the right answer. Even with a good semantic layer, it's still really hard to make self-serve work. That's where you get the meme that self-serve is a myth, that it's just not going to work, that until the end of time we will be emailing data teams asking them about revenue or churn numbers for the last week. We even see here, from our favorite bird's CSV, that about two thirds think we'll have self-driving cars before we have self-serve analytics.


15:26

So I think we might be wrong about that. We'll see. Then, going back to where we are now: we've got these two tools. We've got comprehension from LLMs, we've got correctness from the semantic layer, and when we marry them, we have context. That context is really where the magic happens. You're able to take the ability that the LLMs have and combine it with the ability that the semantic layers have. When you combine those, you can actually take the business's context, your acronyms, your idiosyncrasies, the long, weird descriptions you need to describe what's going on, and make all of that available as context. You're effectively able to make the LLM a high-context analyst that works on behalf of your end users. Instead of responding in days due to a very long queue, it's responding in seconds. And that's pretty game-changing.
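A rough sketch of that division of labor, reusing the SEMANTIC_LAYER and compile_query() definitions from the sketch above. The llm_choose() stub stands in for a real model call; all of it is illustrative, not Zenlytic's actual architecture.

```python
# Comprehension (LLM) picks the definitions; correctness (semantic layer)
# writes the SQL. llm_choose() is a stand-in for a real model call.

def llm_choose(question: str, layer: dict) -> tuple[str, str]:
    """Map a free-form question onto governed metric and dimension names,
    stubbed here with a trivial keyword match."""
    q = question.lower()
    metric = next(m for m in layer["metrics"] if m.replace("_", " ") in q)
    dimension = next(iter(layer["dimensions"]))  # a real LLM would pick this
    return metric, dimension

def answer(question: str) -> str:
    # 1. Comprehension: the LLM interprets intent and picks definitions.
    metric, dimension = llm_choose(question, SEMANTIC_LAYER)
    # 2. Correctness: the semantic layer, not the LLM, generates the SQL,
    #    so the same governed definition is used every single time.
    return compile_query(metric, dimension)

print(answer("What was net revenue by month?"))
```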


16:29

We're going to look at how these two complement each other. The semantic layer fixes the LLM's hallucination problem. We've all seen tons of examples, including the silly little one I have up here: 413 divided by seven is actually 59 with no remainder. That's a silly example; it's funny that it got it wrong. But if the LLMs are coming up with SQL and they get something wrong like that, it might not even be obvious to begin with, and the results can be catastrophic. It can calculate a funnel wrong, it can calculate revenue wrong. If it does these things and gives a wrong answer to someone like a CEO, you have board reporting at stake. It's a really big deal to get these things wrong. So hallucination is not a joke. It's not something that a human is going to then correct.


17:21

If you're generating raw SQL and running it on the database, it's really dangerous, even with a powerful LLM, unless you join it with the correctness of the semantic layer. Conversely, the LLM interface, which literally everyone knows how to use, is what can fix the semantic layer's complexity. Semantic layers are really powerful. You have these definitions of joins that cover potentially hundreds of tables and hundreds of thousands of metrics, and they can get really complicated. It's hard in any traditional UI to surf that huge quantity of content. With an LLM, you effectively have an analyst who has context on all of those and can just answer the questions. The combination of those is that self-serve becomes like talking to a data scientist. You can actually just ask someone, in the same way that you've all probably gotten emails asking about various data questions. Instead of those emails going to you,


18:23

that same email goes to an LLM that actually has all that huge amount of context on the business. And at larger-scale businesses, like a Warby or something, the quantity of that context just becomes so great that even if you've worked at the company for years, it's hard for you to keep all of it in your head at the same time; you're going to make a mistake. It's a lot easier for an LLM to effectively keep thousands of metrics in its head at the same time than it is even for you as a human. So it's a really natural interface. It's an interface where you don't have to know exactly what you're looking for when you get started. You can ask some probing questions. You don't have to know where things are, and you don't have to care that, to get what you want, you've got to run two queries and then merge them.


19:16

That just kind of happens. It takes a whole new level of self-serve and makes that level available where it wasn't before. Now we're going to look at a little demo that we have of Zenlytic, and this is very much the early innings of what's possible with these LLMs. So, see what we've got here. Like I was saying, you don't have to have perfect context on what you can ask or where things are. You can just ask things and be guided in the right direction. And importantly, you can ask these questions iteratively. We've all seen SQL generators that are basically like, "show me revenue where the state is California." That's effectively SQL that just isn't exactly SQL. But here you can actually talk to it and just ask it iteratively, maybe refine what you're looking for, change the way things are ranked, the same way you would in email, where you ask for something and then you're like, oh wait, can we filter that?


20:44

Can we do something else with that? It makes that whole experience very iterative. Importantly, you also aren't stuck in one spot, in the sense that if you need to change the topic, you can ask about something else. Maybe you're asking about products here, and then that sparks something about marketing channels. Okay, let's change the topic: I need to know about acquisition and different marketing channels. You can actually just do that, because the LLM doesn't care that your previous question was in the order lines table and your new question is in the marketing spend table. That's irrelevant to it. It has that semantic layer knowledge, so it's able to actually go and figure out where that new query, where that new context, needs to come from. So you're able to change the topic as well, again, similar to an email thread, which all these end users are more than familiar with. So, what does this mean for data scientists?


22:05

What does this mean for what our jobs as data scientists will look like? I think to answer that question, we've got to go back to our big view of self-serve. If what I'm talking about is the progression, then you've got these view-only OLAP dashboards, you've got the interactivity of those dashboards, you've got the ability to maybe go one step beyond the dashboard. With LLMs, you've got the ability to basically just ask a question. If it's in the semantic layer, if your business has context on it, you're able to answer it. So then what does that mean? Where does that put data scientists? And I think it's good. The end result, I think, of how this affects data scientists' work is that teams will spend more time building these semantic layers, because the value of having that LLM interface is so great that you will spend more


23:01

time building these semantic layers, doing these definitions, making sure that when the model goes to execute a join, it's executing it correctly and calculating metrics correctly. You'll spend more time building these semantic layers, but you'll also spend a lot more time on the complicated things that you actually need your education for. A lot of data science work now, and I see this in friends who ask me about getting into the data science field, is that you're going to go and learn scikit-learn, and then you're going to get a job and just write SQL. That's what a lot of getting into data science actually looks like for people who are trying to make a career switch. I think that with the advent of LLMs and their combination with the semantic layer, making a lot of that SQL monkey work basically go away is going to open up the work for you to be


23:58

able to do complex statistical things, build custom models that are internal to your company, and actually use the education and the skills that you have as a data scientist. There's also going to be less time spent answering ad hoc questions where people are just asking you to effectively write SQL for them. Those questions will still exist to some extent, but for the most part they will go away, and that work will take place where it does now: most of it in code, most of it in Jupyter or Hex or something like that. I think there is a sense in which a lot of this work doesn't ever really get automated away. A lot of what you do as a data scientist is very specific to a company and has a ton of nuance in it, and you really do need your education to do those kinds of things, to do that level of work.


25:01

That's the work that will become more valuable, and thankfully the work that you'll be spending more time doing, because it's also more fun than just answering ad hoc SQL questions. I'll just close with this: it's a really exciting future where data scientists get to work on the stuff that their education makes them uniquely qualified for, and they get to add more value to the business as a result, because they're working on things that they're better at and that are more valuable. This combination of LLMs and the semantic layer makes a lot of what's grindy and tough about the data science job just evaporate, and end users get a better experience: they get answers faster, they don't all feel like they're at the bottom of your ever-lengthening queue, and it's really a better outcome for everybody. It's a future I'm really excited about.

Want to see how Zenlytic can make sense of all of your data?

Sign up below for a demo.

Get a demo