00:00:06 Juan Sequeda
Hi everybody. It's Juan Sequeda from Catalog & Cocktails. I'm on vacation this week, but I just wanted to share a quick rant. My vacation place here is Oaxaca, Mexico. I am in [inaudible]. Look at this fantastic view I have right now. But anyways, I wanted to rant about one thing, the big thing on everybody's mind: generative AI, the large language models. In particular, what I'm seeing a lot of is integrating large language models over structured data, data that's in SQL databases, to be able to do what we're calling chat with the data, all these chatbots. One of the things we're starting to see is integrating your internal data with large language models. But everything we're seeing is about unstructured data: vector databases, embeddings, chatting with all your internal data, and it's all unstructured. When it comes to structured data, what we're starting to see is, oh, we can just translate the natural language question into SQL queries. We're seeing some tools and people talking about it, and I think everybody's acknowledging that, yeah, it's kind of cool, but this only works, or all the demos are, for the easy cases, right? Small data, meaning one or two tables, and easy questions, which involve joins over one or two tables. But what else? So I'm really happy to see a lot of people calling BS and being very cautious about it. But heck, that doesn't mean it's not possible. So I think the question is to really understand to what extent these large language models can effectively understand natural language questions and translate them into SQL queries over your structured data.
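To make the pattern concrete, here is a minimal sketch of what these text-to-SQL demos typically do: paste the schema and the question into a prompt and ask a model for SQL. The function name, schema, and question are all illustrative, not from any specific tool.

```python
# Minimal sketch of the usual text-to-SQL prompting pattern: hand the
# model the schema plus the question and ask for a single SQL query.
# All names here (tables, columns, the question) are illustrative.

def build_text_to_sql_prompt(schema_ddl: str, question: str) -> str:
    """Assemble a prompt asking an LLM to translate a question into SQL."""
    return (
        "You are given the following SQL schema:\n"
        f"{schema_ddl}\n\n"
        f"Write a single SQL query that answers: {question}\n"
        "Return only the SQL."
    )

schema = """CREATE TABLE policy (policy_id INT, holder_name TEXT, premium REAL);
CREATE TABLE claim (claim_id INT, policy_id INT, amount REAL);"""

prompt = build_text_to_sql_prompt(
    schema, "What is the total claim amount per policy holder?"
)
print(prompt)
```

The catch, as the rant points out, is that this works when the whole schema fits in a prompt and the join path is obvious; a real enterprise schema with hundreds of cryptically named tables is a very different problem.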
So this is something that's really on my mind, and one of the things I've been working on from a research perspective is figuring out how to find that extent, that tipping point. So that's number one: what is the tipping point, the extent to which large language models actually have the capability of translating natural language questions into SQL queries that return accurate results? The other thing is that I've been having a lot of conversations with folks about how, within this whole chatting with structured data, knowledge graphs and semantics come into play. Everybody I've been talking to acknowledges that semantics and knowledge graphs are key to understanding the internals of your data and providing that extra context, but it's not clear how it's being done, and it's not clear how much the accuracy increases. You all know me, I'm all about knowledge graphs and semantics, so I'm all for this. But the scientist and the skeptic in me says we need to understand to what extent the knowledge graph and the semantics actually increase accuracy. That's the part I'm not seeing. I'm seeing a lot of blog posts where people talk about why we need knowledge graphs, and I completely agree, but we don't know how much they actually improve things, because there's an investment that needs to be made in knowledge graphs. So my hypothesis is that if you invest in knowledge graphs, you're going to get higher accuracy when it comes to this chatting with the data, translating questions over data that's structured in SQL sources, compared to systems that don't use knowledge graphs. To that end, something I've been working on is a benchmark. I'm calling it just a chat-with-the-data benchmark, and we're including real enterprise schemas.
The other thing is that the existing text-to-SQL benchmarks, and the scientific community has been working on text-to-SQL translation for decades and has its own benchmarks, are academic and really disconnected from the enterprise world. The schemas are just one or two tables. The questions are pretty straightforward. So this benchmark we're working on has an enterprise schema. We're actually using an open domain model from the insurance domain from the OMG, which is a standards organization: they have a property and casualty insurance model. So that's the schema. Then we'll have data, and variations of the data. And when it comes to the questions, we really want a set of questions spanning different complexities, from very low to high, basically meaning easy questions to harder questions. Easy questions are just reporting: hey, what is this about, give me a list of these things. All the way up to more complex questions about metrics and KPIs. And when you look at the technical characteristics of these questions, they can be over just one or two tables with a couple of joins, all the way to needing a lot of tables joined together, plus aggregations, and math, and functions, and so forth. So you have this entire spectrum of questions you want to consider. And then the fourth part is the context layer, which has the ontology, the semantics, and the mappings. That way systems can be tested either with or without the context layer: the semantics, the ontology, and the mappings from the source to the target. And then we want to have a scoring and figure out which questions were answered accurately.
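The benchmark itself hasn't been released, but the pieces described — graded questions, gold answers, and an optional context layer of ontology plus mappings — suggest a shape something like the following. Every field name here is my own illustration, not the benchmark's actual format.

```python
# One plausible shape for an entry in the kind of benchmark described:
# a question graded by complexity, its gold answer, and a separate
# "context layer" (ontology + source-to-target mappings) that systems
# can be run with or without. Purely illustrative field names.
from dataclasses import dataclass

@dataclass
class BenchmarkQuestion:
    question: str          # natural language question
    complexity: str        # "low" (reporting/lists) up to "high" (metrics/KPIs)
    tables_involved: int   # rough technical difficulty: how many joins
    gold_answer: list      # expected rows, independent of how they are produced

@dataclass
class ContextLayer:
    ontology: dict   # classes and properties of the domain model
    mappings: dict   # source columns -> ontology terms

q = BenchmarkQuestion(
    question="What is the total claim amount per policy holder?",
    complexity="medium",
    tables_involved=2,
    gold_answer=[("Alice", 1200.0), ("Bob", 300.0)],
)
ctx = ContextLayer(
    ontology={"Claim": ["amount"], "PolicyHolder": ["name"]},
    mappings={"claim.amount": "Claim.amount",
              "policy.holder_name": "PolicyHolder.name"},
)
```

The point of keeping the context layer separate is that the same question set can be run twice, with and without it, which is exactly what's needed to measure how much the knowledge graph actually improves accuracy.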
Now, when it comes to the questions, we also want to have the answers. Given a question, what is the accurate answer associated with it? We're not necessarily looking to have the SQL query, because heck, these systems may generate one SQL query, multiple SQL queries, or, I don't know, something else underneath. They may use embeddings, vectors. At the end of the day, you really don't care about the system per se, you just care about the accuracy. Is this system giving me accurate data, accurate answers? So that's the stuff I'm actually working on. I'm reaching out to folks: if you're working in this space, please reach out to me, I would really like to talk to more people. In a couple of weeks I want to release this benchmark and get more folks in the community thinking about this. This is the honest, no-BS moment. We really need to call out the honest no BS on all this chatting with data over structured data in SQL databases, because if we don't, we're just going to continue to live in this hype, and it's going to explode in our faces, and we can't go and do this again. This is the moment we need to understand our history. We've seen this type of stuff happen before, and we need to really call out the BS around it. And I think this benchmark I'm working on, which I'd like to push out and work on with the community, is there to help call out the BS and let us be very honest about it. Anyways, that's my rant for this week while I'm on vacation. And I know Tim also has a rant when it comes to AI hype. Bye everybody.
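The scoring idea described — judge the system by the rows it returns, not by the SQL it generates — can be sketched in a few lines. The order-insensitive row comparison below is one simple design choice, not the benchmark's defined metric.

```python
# Scoring by answers rather than by generated SQL, as described: the
# system is a black box (one query, many queries, embeddings underneath,
# whatever); we only compare the rows it returns against the gold rows.

def answer_accuracy(results: list, gold_answers: list) -> float:
    """Fraction of questions where returned rows match the gold rows."""
    correct = 0
    for returned, gold in zip(results, gold_answers):
        # Compare as sorted lists of rows: row order should not matter.
        if sorted(returned) == sorted(gold):
            correct += 1
    return correct / len(gold_answers)

system_output = [[("Alice", 1200.0), ("Bob", 300.0)], [("Bob",)]]
gold = [[("Bob", 300.0), ("Alice", 1200.0)], [("Alice",)]]
print(answer_accuracy(system_output, gold))  # first matches, second doesn't -> 0.5
```

Real scoring would also have to handle floating-point tolerance, duplicates, and column ordering, but the principle is the same: accuracy is defined on answers, so any architecture can compete.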
00:07:10 Tim Gasper
Hey everyone, this is Tim Gasper, Catalog & Cocktails, presented by data.world. Thank you, Juan, for all your awesome thoughts. Now I'm going to jump in on this AI rant also. I want to say that there's so much hype going on around AI. You've got to focus on value and customer use cases. What are the use cases that matter for you? Don't just do AI for AI's sake, don't let the hype overtake you. AI is cool, it's fun, but that is not all that it is. And the companies that focus not just on the cool factor but on actual value are going to make sure they spend their time, their money, and their goodwill wisely, right? You don't want to waste those things. So think about: what is the strategy for your business? What is your strategy around data? AI is going to be an amplifier and an accelerator for that. Don't let it be an end in and of itself, okay? And I think there are four areas you can really focus on. Productivity, first of all. Let's say you've got a sales team, right? If your sales team is leveraging generative AI, can they actually send 30% more messages? That would be a productivity improvement. Creativity. You're working with clients and helping them with different client projects. Could you leverage generative AI to come up with better ideas, more diverse ideas, more unique ideas? That could be a way to create value, to become more creative. Scalability. Maybe you're trying to classify all your data to see what's sensitive, and doing it with AI is going to let you do it at massive scale across all your data, whereas before you were only focusing on your most critical data elements. Right, scalability. And then finally, innovation. Could you actually incorporate it into your products and services?
Could you actually provide a customer service experience where generative AI gives your customers better initial touch points and self-service knowledge, where it actually increases your customer satisfaction, your CSAT? That's innovation. So to cover those again: productivity, creativity, scalability, and innovation. Those are the four vectors I think you can leverage. Map them against your strategy for your business and your data to figure out how to really focus on value around data, not just hype. Our customers are really focusing on these four areas to get value, and it's a really good approach. Secondly, really encourage your company to experiment, right? Learn, measure, iterate, experiment. Try this out. What that means is, similar to data governance, it's not just about being the police, it's about being enablers. It's about paving the roads, making the rules of the road, putting the brakes in the car, putting the seatbelt in the car so that you can drive fast safely. So really focus on not just being a blocker for the organization, but actually encouraging experimentation and paving the way to create value in those four areas I mentioned. If your organization is just figuring out its policy around generative AI, or hasn't figured it out yet, it's time to really get moving here, all right? Get legal involved, get InfoSec involved. There's only a short period of time here before the companies that really get in front of the ball on this reach value in those four areas faster. So really get in front of that. There are probably people in your company already playing around with ChatGPT, or GitHub Copilot, or the many other generative AI tools out there.
It's time to give them some guidance on what's okay, while letting them experiment and start to infuse that AI DNA into your company, because the companies that have that AI DNA are going to achieve that productivity, creativity, scalability, and innovation before you do. It's important to get in front of that. There are a lot of things you can do to be safe around the use of generative AI that aren't that hard to do. For example, you can opt out of your inputs, your prompts, being used for OpenAI model training. That's a simple thing you can do, right? If you have a relationship with Microsoft, maybe you want to leverage Azure OpenAI, and that's an easy thing to do. Maybe you want your employees to register with some sort of central authority within the enterprise when they're using LLMs. Great, you can do that, right? Maybe you want them to use their business email address so that personal use doesn't mix with business use. Great, that's a thing you can do. Those are all good things. And one more thing I'll mention: there are services out there, like Private AI, that let you filter out sensitive or personal information before you leverage generative AI. That's another great example of something simple and safe you can do. So figure out what your lowest-common-denominator safety approach is, so that you can really encourage people to move fast, be innovative, and experiment. Otherwise, you're going to get left behind.

Now, last thing: some companies are going to build their own LLMs, but you're probably not going to do that. A lot of companies aren't. What probably makes more sense is taking your own data and your own knowledge and using some service or product that's going to help you, right? Zoom just recently announced that they're incorporating AI into their product. data.world has AI in our product to help with documentation, code explanation, ideation, and also natural language search. So you're going to leverage different services, and that's probably going to be the most valuable path for you. Whether you're leveraging services or products or using your own LLMs, it's really important to think about which models and which services are going to provide you value in those four areas: productivity, creativity, scalability, and innovation. If you're a legal firm, then maybe you care about a model that's really good at taking the bar exam, right? But if you're a design firm, then you're going to care about a model that helps more with creating good images, assets, videos, and things like that. So find what makes sense for you, what's going to drive value for you. Don't just get caught up in hype around performance or benchmarks. Focus on value, value, value. And that's my addition to the rant here. Thanks everyone. Hope you have a great week, and don't forget to have a good cocktail or mocktail while you think about data and work with data, have some honest no-BS conversations, and focus on value from AI, not the hype. Cheers, everyone.