url | transcription | title | duration | uploader | upload_date | description | datetime |
---|---|---|---|---|---|---|---|
https://www.youtube.com/watch?v=15DqlUTgYYM | Our speaker today is a Silicon Valley veteran who has forecasted tech trends, launched companies, and leads a community of AI founders and investors. Currently, he's leading the AI fund at Blitzscaling Ventures, investing in early-stage AI startups. The firm is based on the famous book Blitzscaling by Reid Hoffman and Chris Yeh. Please welcome Jeremiah Owyang. Thank you. What an honor to be here in the center of Silicon Valley. This is where innovation is born and where we think about the future. The future we're going to talk about today is the future of AI agents, agents, and agents. I'm Jeremiah, general partner at Blitzscaling Ventures. I started tech research firms here in Silicon Valley. I was a Forrester industry analyst forecasting the future, and I've worked at tech companies all over the valley. What I love to do is think about how this next generation of technology impacts the next wave, and what it means to business, society, and us. I lead a community called Llama Lounge. Hundreds of AI founders assemble from around the globe, and I get to meet the top founders to understand what they are working on, which gives me the ability to see what's around the corner. And what we know is that the biggest industry right now in AI is the AI agent space. It's projected to grow tenfold every five years. Right now it's estimated at 5 billion, which will go to over 65 billion, and then an estimated 500 billion, just within the next several years. This is the hot market growing within the AI space. But what the heck is an AI agent? Well, let's start with the basics. You probably use an AI copilot today. It's inside of your apps, integrated and working with you in real time. Next, you've probably noticed that some tools like Microsoft have AI assistants that run through different apps, synchronously or asynchronously, working together within a suite of applications. The next phase is AI agents. Now, they're different. They can work while you're sleeping. They can work across multiple apps, and they have a few characteristics. First of all, they can self-task. They don't always need a prompt, they have memory, they can recruit other agents, and they learn and improve over time. You've got to think of them as living creatures, young children, junior employees, assistants. Pretty soon you might have 10, 20, 100 different AI agents working for each of you, like you're a billionaire with all these assistants. AI agents are the future, and they lead us to the next phase, which would be artificial general intelligence, equal to human capability and thinking, and then eventually to something like superintelligence, which Nick Bostrom's book talks about, at over a thousand times human intelligence. So it's on the evolutionary path of these AI entities. Now I'm here to share with you our thesis. Thesis number one: right now, people use the internet. You physically go out and find different websites to get information. You complete tasks: you order flights, you order e-commerce, you get things done. Even inside of your enterprise, you have to fill out expense reports and time cards. All of these things are wasteful and not great experiences. In the future, your AI agents are going to go out and do those tasks for you. The dominant entity on the internet and the intranet will be AI agents, not humans.
This is a big change, because it means they will go out there, complete those tasks, and bring the information back to you. We as humans no longer need to go to the internet; with AI agents, they will bring it back to us. Thesis number two: the information will be reassembled in the way that you want, when you want, in a multimodal way, whether it's text, video, AR, VR, at whatever time of day you want, and in the amount that you want. It's no longer up to the web designer. It's no longer up to the website or app designer. It's up to you and your 100 agents. So those are the theses. But let me give you some examples of what's happening today. You've probably heard about Agentforce, which was launched last month in Silicon Valley by Salesforce. They have three different AI agents for enterprises to interact with end consumers: a marketing agent, a customer care agent, and a sales agent. And they're all working together in orchestration with a tool they call Atlas. So really, Salesforce brought this to life for every enterprise to see how they can use their own agents on the business side. Of course, HubSpot as well. They have agent.ai, a marketplace of different agent tasks that's being integrated into their CRM. And so we can see more things will emerge in this space as well. Also, we'll see new forms of payment happening. This is Skyfire, one of our portfolio companies, where agents can hire other agents to get tasks done. Imagine that: a new economy that is born with a new type of identity. You've heard of the term "know your customer," but in the near future we'll have a new identity for agents called "know your agent," and it'll be certified to you. And it can conduct its own transactions on your behalf. Another example is the leader in the enterprise space, CrewAI, an open-source platform that enables developers to build AI agents that connect and can do numerous tasks that solve business problems. On the consumer side, let's take a look at this real-world example from MultiOn, a Palo Alto-based consumer AI agent company. Let's kick off that video. So in this example, this browser-based extension is doing all the tasks for the human. The human does enter the actual prompt, which could be text or voice: go to Amazon and order these two books for me. And what happens is amazing. Now the human is hands off. The human can drink tea, do some yoga, sit on their hands. All of these actions that you see right in front of you, finding that book, are being done by the AI agent serving the human. The human doesn't have to go to amazon.com. The AI agent does it for them. So it's grabbing those books, you can monitor the transactions, and it's putting them into the shopping cart right there. So that's an example of it conducting that commerce on your behalf. So you've got to think about that: if you run marketing or e-commerce, your future customer is not a human, it's going to be an AI agent. And what type of agent is it? Is it representing a human, or is it an identity that says, I'm part of a human, and it's an abstracted identity? Or is it a completely anonymous agent? We'll see all these different levels of agents. So you can see it's completing that transaction. Of course, we need to have review systems to ensure that it doesn't buy a thousand copies of one book and then your front porch is filled with books. So here's another example: go to Google Calendar and invite Eric Schmidt to an AI brunch.
All right, so go do that, MultiOn. And it's doing that. So you could do a text-based prompt or a voice-based prompt. Then it's going to Google Calendar, web-based. It's setting it up. It's filling it in using LLM technology with natural language, and it's finding the email address, inserting it, and then it's sending that invite. Boom! Maybe Eric will show up. So that's the second use case. Now let's find another use case. Hey, MultiOn, check my Google Calendar, find out what my next appointment is, and book me an Uber. All right, so here it goes. So the human is hands-free yet again. These are tasks that humans just don't need to do; let's get our agents to do these tasks. So it finds out that the next event is V's birthday in Palo Alto, and it identifies that information. Then it opens up a new browser tab, goes to uber.com, fills in the information, boom, orders that car into the physical world. So those are some examples, but what does this actually mean? Well, Gartner forecasts that search engine traffic is going to drop 25 percent because humans will turn to chatbots and agents to get things done, and that's going to happen by 2026. That's devastating news for search engines. Also, Bill Gates. This quote is a fantastic one. He says that AI could really change the way that the internet works. Whoever wins the personal agent, that's the big thing, because you will never go to a search site again, bang, Google. You'll never go to a productivity site, bang, IBM. You'll never go to amazon.com. I think Bill's probably been waiting to say that for like 25 years. But here we go. So to bring things to a close, here's how I see this world changing. This is my forecast. This right here is a decision funnel. Every day you make decisions for personal use. Every day you make decisions at work. Every day we are collectively making decisions, and the way that we do it today is broken. We have to be exposed to ads. We use Google search, we use other searches, and they know the answers. They don't tell us. They show us 10 blue links, and we're exposed to ads. Then we go to these websites. The information that you want is at the bottom of the website. Now you're exposed to more ads. You leave that website. Your information has been tracked. Now you've got retargeting ads following you around. It's not really serving you. And then finally you collect that information, you compare the product, and then you make a purchase, which might be on that site or another website. It's a mess. The internet is actually not a great experience. And this is going to change. In the near term, we're going to collaborate with AI agents; we'll ask them to find this information and bring it to us. You can already do this in Perplexity and ChatGPT, and soon those creatures will become unbounded from those apps and become living creatures like AI agents, and you'll ask them to compare these things. And then finally, what we're going to see is that AI agents become proactive. They're going to connect to your SMS, to your email. You might enable them to listen to your conversations, with permission, to understand what you need and proactively seek out the things that you do. Proactively fill out your expense report. Proactively book your Uber ride to your next meeting.
It's like everybody will have their own personal assistant and executive assistant, every human on the planet. And it means we're going to shift the way we make decisions, from a manual way, to a hybrid way where we're collaborating with AI agents, to a future way where it's actually starting to become autonomous. Wow, that was a lot of information. Let me summarize it for you in five final points. AI agents will bring the info to humans. You no longer need to traverse the internet; they'll bring it back to you and assemble it in the way that you want. Have it your way. Oh boy, it means that media models, revenue models, and e-commerce will be embedded right into agents and into LLMs. We're going to see a complete restructuring of the internet as we know it. These billion-dollar companies here in Silicon Valley, some of them will topple. The AI agents will completely change the media, marketing, and advertising space. Now, big companies aren't going to sit by the wayside while this disruption happens. They're going to launch their own AI agents, like the examples I showed you from Salesforce and HubSpot and CrewAI. And they're also going to launch their own APIs that communicate directly with your consumer-side agents for the fastest transactions possible. We'll also see something happen inside of companies that we really haven't been able to think about. AI agents will help us with our productivity. And then soon, just watch, they will start to become your colleagues, and eventually, potentially your manager, and even your customer. And someday the agents could become your competitor, and that breeds this final example where the AI agents become like their own living entities, where they can autonomously trade amongst each other, buying and selling using Skyfire and other technologies, and then they learn and govern and reproduce like their own species without human intervention. So I came to you today to talk about the future of AI agents. What does it actually mean? Agents, agents, agents will be throughout all of our lives. Thank you so much. I'm Jeremiah. | AI Agents: The Next Digital Workforce | Jeremiah Owyang | 808 | Speak About AI | 20241115 | AI agents are revolutionizing the digital workplace. Silicon Valley veteran Jeremiah Owyang reveals what's really happening with autonomous AI agents and why they matter for your business.
From his unique position running Llama Lounge, Silicon Valley's premier AI startup community, Jeremiah shares insider insights on:
- Why AI agents are different from chatbots
- Which industries AI agents are disrupting first
- How companies are already using AI agents
- What's coming next in autonomous AI
- Real examples of AI agents at work
As a General Partner at Blitzscaling Ventures and advisor to Fortune 500 companies, Jeremiah offers a rare glimpse into the future of AI agents - straight from Silicon Valley's frontlines.
Perfect for: Business leaders, entrepreneurs, and anyone interested in staying ahead of the AI revolution.
🔔 Subscribe for insights on AI's future
Book Jeremiah for your next event.
#AIAgents #FutureOfWork #ArtificialIntelligence #DigitalTransformation | 2024-12-11T14:47:25.706917 |
https://www.youtube.com/watch?v=uAwjR4scCLY | What happened when I was at Yahoo many years ago, in the mobile transition, is that they recognized that this is not just a revamping of the existing technology, which is what they did to us at Yahoo at first: oh, just use the web APIs. No, it's going to be a transition. It's going to be a shift. That's why you have Material Design now. That's why you have these teams at Apple that want to make sure the experience is meeting expectations, so that developers with designers can design applications that are going to work really, super well. But we don't have that defined yet. In the future, yes, we are going to be able to have something using AI in that development process, so that all of my software is going to match well with whatever type of design language the designers come up with. And that's not just the UI, it's the UX, so that I can define an experience from the product management perspective and have a way of implementing it quickly and deploying it. I hope so; that's sort of my dream. But in the meantime, this paper is something I've been waiting for. This is a great paper to start with. Well, I'm going to get off my soapbox. No, I really enjoyed that, Mike. And you said a couple of really key things that I've noticed as well. The first thing is that the chatbot era will be looked back on like the DOS era, you know, before we had GUIs. That's the first thing. But there was a phrase you used over and over again in that discourse, and that was "with me." I want this done with me. And that really gets to the crux of that paper in many respects, because it was that bi-directional communication of: what should this agent achieve, how should it achieve it, what tools should it be calling? And more than that is the education part as well. What can this agent actually do? What is it currently doing? Is it doing it with me? And ultimately, did it achieve its goal at the end, and am I happy with that? And that all takes some kind of UX design. And as you rightly said, this is so nascent. The paper is extremely readable, and I'm very glad that they've made this call for design patterns and principles, because nobody is an expert in this yet. So someone's got their hand up, it looks like. Simon. Yeah, just a few thoughts, maybe to get people thinking. So, on the first thing: from my perspective, I'm extremely skeptical about this whole thing. I think it's just a hype cycle. We've all been here before. But the integration piece, I think, is a lot harder than we make it out to be. And also, there's this fundamental problem with interacting with an agent that can kind of do whatever, which is intent. Intent resolvers are extremely difficult. And if you get it only kind of right, it's a terrible UX. It needs to be 100% accurate, and there is no way to know if you've gotten it right or wrong until you've made that guess. And so when we think about UX, getting this right is a lot harder than we think, even in a very narrow-scope context. And I think it's a human thing: unless you are a human talking to a human, where you understand the context and you understand the people, places, products, and processes, which is human, you are unable to understand the intent.
The same way that everybody on this call would describe the same task differently, and I probably wouldn't be able to guess what you're talking about until you started talking about it. So I think you can't just say we'll figure it out eventually. It's a lot harder than we think it is, I think. No, I think you hit on some really important points there. One is that the user experience for me is going to be much different from yours. And I'm expecting that these systems are going to be able to learn how I interact. That, you know, at 8:25 in the morning, this is the way that you can expect me to be communicating, and what I would be expecting, like the rush to get the kids to school; the way I interact right now is going to be very different from, let's say, five o'clock tonight. And the agents should be able to customize my interaction and work dynamically based on: yes, by the way, you have a flight to catch. Knowing, like you said, what the intent is, and having some way of collecting the feedback from that. I mean, in mobile, we did that for many years. It was analytics. Every single phone, every single app is just spewing tons of analytic information that can be used to provide the feedback to the agency. Well, Mike, I'm going to push back on analytics for this agent stuff, exactly because it's non-deterministic. And maybe I'm in the wrong roundtable, and I'll stop talking after this, but fundamentally, you do not know what the person is going to say. You do not know how your user is going to interact with it. You also don't know how the processing is happening, because it's kind of a black box, among other things. And so you effectively do not know what your product is outputting. And from a product management perspective, that puts you in a spot where you don't know what the user experience is, which is just such a wacky premise to come up with. And then also, if you want to try to analyze that, you're unable to, just because of the variability and the non-deterministic nature of these interactions. You can't have a really nice flow, because maybe this person is using the same five words, but in a different way than the same person who used the same five words. So you need the intent. It's just so complicated. And these aren't sci-fi problems. I am having this problem right now at work, and it's not with an agent, it's just with an intent-based chatbot. And so it scales in this crazy way, and I think we're underestimating how hard it is. Absolutely complex. Yeah, I must admit, having given this some thought as I have been, I'm not underestimating it, that's for sure. It's pretty scary stuff, to be honest. And I agree with you, because I'm dealing with large language models and I'm trying to analyze intent, and the only way I can do it is if people start repeating the same question; then I know I haven't nailed it. I'm moving from the deterministic, put in X, get out Y every time. Now it's like, I'm not sure. So much of it is an education piece as well. But the whole world's kind of trying to engage with this.
You know, hey, you're dealing with probability now, so yeah, it may or may not work. But if we're going to get there... I mean, as I say, Stripe released an SDK so that large language models can do financial transactions on your behalf. It's kind of like that. Actually, sorry to toot my own horn, but this is how I concluded the post on the Microsoft thing: building powerful AI agents is one thing; ensuring they're transparent, trustworthy, and aligned with human needs and expectations is another. And I think everybody chanting AI agents, AI agents is not considering this aspect. Probably most have gone off to do a technical talk; they'd probably prefer to be there than consider, hold on a second, the thing that might really hold us back is the human aspect. So I'm there with you, is what I'm trying to say, Simon. I'm dealing with the same stuff, and it's tough. It's hard. Yeah. Okay, anybody else? Anybody got any comments, anything they want to add? Hi, everyone. Hi. I come from a very different background. I just started diving into AI a couple of years ago. I come from an operations background: finance, trading, and all that. So I built my first AI agent, which was very static, not built on any framework, not AutoGen, not any other framework. That was some time ago. And then I was wondering, when I listen to what everyone was saying right now: what we expect from an AI agent differs for each of us. Everyone has different expectations. What's an AI agent? What does it do? Because it also depends on what features we are giving this AI agent. And once we give an AI agent some features and program it to do something, our expectations of AI agents will differ. Let's say today I gave it some features; okay, I see some potential. I expect something different from, let's say, someone who was using the same AI agent yesterday, and they were using the older version of the same AI agent, or compared to a different AI agent developed by someone else with different capabilities. So we don't have unified expectations. So for our experience, there are no KPIs or anything to measure the experience, or what the standard or benchmark should be for experiences we get from AI agents, because each AI agent is built differently and for a different purpose as well. So there are so many variables going on and there are no unified frameworks. Every now and then there's a new framework coming up, and some frameworks are getting updated, and all of that, you know? And also, AI agents are being implemented for different reasons. For example, some AI agents I've been working on: there's one project where I was trying to prove that a solopreneur can work, so one person, a one-man show, surrounded by AI agents, multi-agents, even using multiple models. So my expectations when I'm using those AI agents, which I'm using for internal purposes to handle work and do things autonomously, would be different from someone, let's say, using a custom GPT with some features, or an AI agent developed as a front-office functionality to handle customer service, you know?
So how exactly do we define a good experience versus a bad experience related to AI agents? I think it's very difficult to narrow it down or set a benchmark or something like that. This is my input on this, thank you. Oh yeah, I totally agree with that. But ultimately, in the end, you're either going to have a good experience, a mediocre experience, or a bad experience, whether it's a program that is a mobile app, or it's on your computer, or it's an agent with an interface that you're working with. And whatever your feedback is... I mean, a one-star review when I had mobile apps, oh my goodness, the VP would be at my desk. You are going to have, and that was my point back in terms of analytics, you are still going to have a way of providing feedback to whoever the provider of the agency is, to say: this is not working. I didn't get my vacation trip to LA set up properly with this agency. I don't want to use this agency anymore. Perhaps I should use another travel agency. So yes, you're right, there is no clear definition, no frameworks, KPIs, or ways of managing this. And yes, because behind a lot of this is a non-deterministic LLM that is making decisions, it's not something where somebody can sit down and create a test suite with 1,500 different interactions, knowing exactly what words would be said every single time. We're not doing that. And again, the expectation also is: I want my experience to be different from Stuart's. And we have different ways we'd like to see working with a travel agent. And that's good, because now we can do that, right? If anyone's ever tried to create a customer support chatbot before, from scratch or from any of the frameworks where they give you this tree structure that you have to put all the questions into and try to help your users, it's horrible. But if you have a way of leveraging LLMs in an agentic mechanism, where you're using the reasoning and getting those types of responses, where you can build in guardrails as well as ways of modifying the inference and the response that you're giving to the person, then you can have that experience. And in the end, I'm either going to give the guy a five-star review or a one-star review, and that's going to determine whether or not that agency is working. Thanks, Mike. Gil, did you want to say something? Yeah, thank you. I just want to add to what has been said so far. I think we do have KPIs and metrics to measure the user experience. As Mike said, it's irrelevant for the end user how we create this magic behind the curtains, right? Maybe not the output, maybe that is the tricky part, but the outcome we do have metrics for. We have task success rate, time on task, error rate. Those are outcomes that we can measure to see if this system or this tool that we are creating is providing value to an end user, right? So I just wanted to point that out. And then, and this is more of a question: sometimes I get confused when we talk about AI agents versus an AI assistant, right? In my understanding, AI agents are part of an ecosystem around the model that enables the model to do more things, right? It's sort of like plugins, right? It's not something that the end user needs to be aware of or that brings value directly to the end user. There is a middle tier in there.
In my opinion, that's what I would call an assistant. And the assistant is the one that is going to interface with the user. It could be voice, it could be an actual chatbot UI, or it could be just automating tactical tasks that are repetitive and boring so that the end user doesn't have to take care of them anymore; it doesn't have a UI, right? And the AI agents are helping with that, right? So the assistant point of view, that's the tricky part, in my opinion. That's where we need to create these experiences that connect to the end user, that relate to the end user. But again, I don't feel that that is something related to AI agents. AI agents are a technique, an architecture, something else that we use when we develop the solution. But that front end, that presentation layer, that interaction layer, is what becomes more relevant: taking all that work that has been done by the model and the agents and presenting it in a way that solves somebody's problem, right? And one last thought, and this is really a personal stance: the UX industry, and especially its process, its methodology, is broken, in my opinion. My background is in architecture, and when you are learning how to become an architect, there is a lot of technical learning that comes with that. You need to understand construction techniques, you need to understand structural analysis, right? It takes years to become an architect, and the same goes for industrial designers, right? But when it comes to UX, people can do a boot camp in three months, right? And they are able to interview users, right? And create documentation based on that, and maybe some models, right? But most of them don't understand the technical side of things. They don't know how to manipulate this medium, software, for which they are designing. And I think that's another issue. There was always a gap between design and development. And now with these new technologies, if designers are not more aware of how these things work, for instance agents, and if they don't understand that interaction design is the UX skill that overlaps a hundred percent with AI agent workflow design, and that things like finite state machine theory are relevant because they are the glue between interaction design and the tech that supports it, then we have a problem, right? Then this gap is just going to become bigger. So we keep talking about the user-centric side of things, and in my opinion, most of the time it is virtue signaling. Why? Because if I spend hours interviewing people but I don't understand how to take advantage of this technology, and the technical constraints too, in order to provide a solution that nails their problem, then I am just making drawings up in the air and having a lot of back and forth with developers. So I wish we could go back to what we had before. You know, back in the 2000s, designers knew how to code. They would create amazing animations in Flash and things like that.
They were aware of HTML and CSS when they were designing. And we came from something called human-computer interaction, right? And I feel that that's the part that is missing. So designers should get a bit more involved in understanding computer science, understanding how a model works, why guardrails are important, because that's part of the user experience, how we can get that done, how agents work, and how they can manipulate these different workflows. You don't have to design just user flows; you can design agent flows, right? And that's a responsibility for the designers. So anyway, I just wanted to share those thoughts and see what you guys think about that. But I think that's what is making it hard to solve. As we've only got 15 seconds left, two things that I really do agree on. The first is that where there's no definition of AI agents, it's difficult. And the second one is that users won't care how we architect it; just like you said, they just want a result. So. Right. Thanks, everybody. Thank you. | Agent UX // Roundtable // Agent Hour | 1,309 | MLOps.community | 20241209 | A bi-weekly "Agent Hour" event to continue the conversation about AI agents.
Sponsored by Arcade Ai (https://www.arcade-ai.com/)
This Roundtable is facilitated by Stuart Winter-Tear, Head of Product - Genaios | 2024-12-11T14:48:55.459481 |
https://www.youtube.com/live/6Ivmza-3mVM | Either works for me, but I'm not a stickler about either. Look at the background and the music, man. Hold on, are we live? We are live. All right, cool. So it's on, and it is public, and things are live. Nice, I like it. So today, what are we doing? We're going to be talking about SQL Mesh, and we're using DuckDB. And just before I hit go live, Ben was talking about how hard it is to use DuckDB in production. So can you tell us exactly why that is? Uh-oh. I don't hear you, Ben. I don't hear you anymore. Oh, shoot. Sorry. Oh, there it is. I thought it was the music. It's not a classic live stream. It's not necessarily that it's hard to use in production. It's hard to use as your production database, right? Because DuckDB makes it obviously incredibly easy to query anything from anywhere under any circumstances, and we're going to do a lot of that today. But fundamentally, based on at least the open-source DuckDB, there's no really great way to be writing to locations that aren't your local file system, unless you're just writing arbitrary Parquet files. For some teams it might be sufficient to just write Parquet files as your database, but probably not. You probably want either a database, like a duckdb.db file, or an Iceberg lakehouse or a Delta lakehouse, something like that. And DuckDB can read from all those places remotely. It can read from a remote DuckDB file, but it can't write to it. And so when you want to start writing, you sort of move to MotherDuck, which I don't intrinsically have a problem with. I think that's awesome. I think that's a great business model, and I'm happy to pay them if they've made my life so easy to get started and to get moving. So I think that's a cool model of how open source can work. And you tried to hack around it, right? I tried to hack around it. I tried to mount S3 as a local file system, and it worked. It created the lock and it worked fine. It was just obviously incredibly slow. I also mounted DuckDB: I mounted a volume to Modal and then made DuckDB an endpoint where I could send POST and GET requests for writes and reads. And that also worked, but then you can't hook it up to anything like SQL Mesh or Prefect or SQLAlchemy, because it's just an arbitrary REST endpoint that you created, not an official thing. So that was a little hacky too. So you're saying DuckDB can't write to a remote DuckDB file, and it can't write to a remote Iceberg or Delta Lake file, and that's a problem when you want to collaborate with people? Is that when that becomes a problem? The second two of us want to share the same DuckDB, we can't, because we can't both make writes to it. Exactly, exactly. But a model that I've been thinking about a lot, which I think is really, really cool, is: maybe you're using MotherDuck, or maybe you're using Snowflake with a Delta backend or an Iceberg backend. With all of those tools, you're paying for compute seconds. So when you write the query with Snowflake, you're paying for the compute for Snowflake to do it, which is fine on the writes. But on the reads, you can also just connect a local DuckDB instance to your Iceberg lakehouse and, as a data scientist, just query that dev environment using DuckDB. And now all of your analytics are free, as long as your MacBook can handle it, which all of them can at this point.
So that's a really cool model too. Wait, I think I've heard about this, and when MotherDuck came out, they were talking about how that was something people loved. So basically you're saying... say that again, because I'm not sure I fully understood it, but I like that it's free. Yeah, like if you have your prod and your dev Iceberg lakehouses, or maybe you have a virtual one using SQL Mesh, which is maybe for a different conversation, or later in this conversation. Is that what we're doing today, by the way? I mean, hopefully. That would be a good start if we got there. If you are using, let's just say, Snowflake for conversation's sake, and Snowflake is not using internal tables but external lakehouse tables, and you're able to, from an authentication perspective, connect to your dev lakehouse environment, you can query that environment using DuckDB, because DuckDB has native Iceberg and native Delta extensions. The Delta one is built in; you don't even have to install it. You can just select from a Delta scan and pass in your Delta lakehouse S3 URL. And so now as a data scientist, if I'm doing analytics, or if I'm the business analytics team, or the data analytics team, or any other read-only team building dashboards with Superset, for example, in theory they could all go through DuckDB, and you're not paying for that even for a moment. And just to go crazy, and I don't know if this is legitimate. It's not, because obviously you're going to run out of storage and memory really quickly. But just to be crazy, you could use PyCafe, which runs in the user's web browser, and install DuckDB with Wasm or Pyodide, if that works, because DuckDB has an example of this. And now it's free. Now you have a Solara app running on PyCafe, querying with DuckDB against a lakehouse that is essentially free, because it's just S3 bucket storage. It crashes your browser. It crashes your browser for sure. But it's just a cool model. It's cool to think about. So like you said, if you have some Iceberg files or some Delta Lake files sitting in S3, you can point DuckDB to a remote URL. What is that remote URL? Is there a single file that is the catalog, and that's what you're pointing to? Or do you have to point it to some set of all the tables in the bucket? Yeah, I've only done it so far against a table, which would be the path in S3, like the bucket slash the next prefix, which would be the table name, and then it has all the part files. But now you're just selecting from tables, and you treat it like any other set of tables, right? You're joining them. So select star from s3://my-bucket/my-table, join another table B on A.id equals B.id, something like that. That's insane. Just treating it like a table. So I guess, in the simple case, or in the naive setup, you'd have to know the location in S3 of the tables, but I don't think that's unreasonable. You could hard-code that: I have these 10 tables that my data engineering team made that are gold, and really I just have the names of those folders. And then maybe you have an S3 bucket, and each of the tables is in the root. Yeah, I don't even know if that's any different from any other database. I mean, you have to know your tables at some point. Like, I don't know.
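For readers who want to see the read-only pattern described above in code, here is a minimal sketch of querying Delta tables in S3 from a local DuckDB session. The bucket and table paths are hypothetical, and it assumes the DuckDB httpfs, aws, and delta extensions are available and that AWS credentials can be resolved from your environment:

```python
import duckdb

con = duckdb.connect()  # plain in-memory DuckDB; we only read, never write remotely

# Extensions for S3 access and Delta reads (delta may autoload on recent
# DuckDB releases, but loading explicitly is harmless).
con.sql("INSTALL httpfs; LOAD httpfs;")
con.sql("INSTALL aws; LOAD aws;")
con.sql("INSTALL delta; LOAD delta;")
# Resolve AWS credentials from the usual environment/config chain.
con.sql("CREATE SECRET (TYPE S3, PROVIDER CREDENTIAL_CHAIN);")

# Hypothetical "gold" tables maintained by the data engineering team.
items = "s3://my-company-lakehouse/gold/items"
prices = "s3://my-company-lakehouse/gold/item_prices"

result = con.sql(f"""
    SELECT a.id, a.name, b.avg_high_price
    FROM delta_scan('{items}') AS a
    JOIN delta_scan('{prices}') AS b
      ON a.id = b.item_id
""").pl()  # straight into a Polars DataFrame for local analysis
print(result.head())
```

The point of the sketch is only the shape of the pattern: the compute runs on the laptop, and the lakehouse is touched read-only.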
Yeah, I'm not sufficiently a Delta expert, or maybe a DuckDB expert, to know: can you register a path as a table name? Based on how much DuckDB does, I kind of think, I don't know, maybe? I have no idea. But it's the same kind of thing, even if you're putting in paths. With Snowflake and whatever, you can introspect, you can type show tables, and it'll know how to list them. I'm kind of doubting that would exist the way we're talking about it. Maybe it does. You can get the schema of a Parquet file. But if we're talking about 10 tables as 10 sets of Parquet files, you know... We should bring on someone like Toby or Simba, because they've become such experts in lakehouses. I would like to bring someone in who really understands catalogs super well and can tell us the benefits. I know the Iceberg catalog has benefits, where it has locking and schema evolution, things like that, and Delta doesn't have that external catalog, and they could tell us why you might want to choose one or the other. And I would imagine, though, with those abstractions of the lakehouse over Parquet files, you're getting information you can introspect. I guess the hard thing here is there's no collaboration. Let's say you have these 10 tables, and a data scientist can use DuckDB to read them and write whatever SQL queries they want. They can't give anyone the results of those queries. So it's like... Screenshots. That's right. Well, I think that's where you create dashboards, right? That's where you talk about what data scientist outputs should be. Some are apps, PyCafe apps or private PyCafe apps, or just Weights & Biases reports. I think Weights & Biases reports are super underused. They're so cool. I mean, I guess Hyperquery got bought by Deepnote, and I don't know if Deepnote does this, but building those kinds of reports and showing the queries that you ran live in the report to get there, that's super valuable, even if you're not creating a view on top of it. Just that analytics is useful. Or you train a model, and part of the artifact of that model is the queries that you ran to get there. But where does the data live for those reports? Is it on someone's laptop, or is it...? Well, the resulting data is in the report, but the resulting data could be tiny, because it's showing the results of the query and not necessarily the data behind the query. Like a bunch of aggregations. So it's like, I'm going to show you prices. For an item on the RuneScape market, which we're going to look at today, I'm going to show you its highest price every week, which is like eight data points if we look at eight weeks in the past. Okay, that makes sense. What are Weights & Biases reports, Ben? Weights & Biases reports are pretty cool. They're pretty old at this point. If you train a model or you run a sweep, you have in Weights & Biases all of your pretty graphs and things.
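As a small illustration of the kind of tiny aggregate a report would carry (the weekly high price, roughly eight data points for eight weeks), here is a sketch using recent Polars with fabricated 5-minute rows; the column names are assumptions rather than the stream's actual schema:

```python
import polars as pl
from datetime import datetime, timedelta

# Fabricated 5-minute aggregates for one hypothetical item; in practice these
# rows would come from the prices API or a bronze table.
start = datetime(2024, 10, 1)
prices = pl.DataFrame({
    "ts": [start + timedelta(minutes=5 * i) for i in range(16_000)],  # roughly 8 weeks
    "avg_high_price": [100 + (i % 50) for i in range(16_000)],
})

weekly_high = (
    prices.sort("ts")
    .group_by_dynamic("ts", every="1w")                         # weekly buckets
    .agg(pl.col("avg_high_price").max().alias("weekly_high"))   # highest price per week
    .tail(8)                                                    # the last eight weeks
)
print(weekly_high)
```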
And those are really great for very technical people who know what they're looking at, but they can be incredibly overwhelming for people who don't. And so you can click a button in Weights & Biases and it'll create a report, which is kind of like a live PDF with the tables and the graphs that you want, and you can fill that in with Markdown. It almost looks like a Notion page, but with all the charts live, interactable, and embedded in the way you want, to showcase the model that you've developed. It's like moving from PowerPoint slides and screenshots to, say, going to that report, where you have the graph, and you can click on that graph and get back to the run. Or at the bottom, maybe you can even do sample predictions, and you can click on the model and see the model and the run. It's all live and connected in a way that's really intuitive. That's cool. What are we doing today? Sorry, go ahead. What are we doing today? Okay, so Ben knows more about SQL Mesh than I do. So I think Ben is going to kind of teach me SQL Mesh. This is the second time; he already tried to teach it to me once. And we were looking for some fun live data that we could pull in with SQL Mesh and do stuff with. It's already fairly clean, which is a little sad, because if the data is clean, what transforms can we show? But maybe we'll just make up some dummy transforms to show you how they'd even be done. So the data we have today is from RuneScape, which is a game that a lot of people have played. It's kind of like World of Warcraft, and in the game there's a marketplace. So in the game you're leveling up: you can level up your mining by going and mining a bunch of bronze, and you can level up your smithing by smelting the bronze in a furnace. And as you go and acquire these items, there's a place in the game where you can sell them, just like Facebook Marketplace. You can sell them to other players. So if I have a bunch of bronze items, I can stick them on this marketplace called the Grand Exchange and list what price I'm selling them for. And then other people can come and either buy or not buy my bronze at my price. And so this is hilarious, because it's a real marketplace. I mean, it's basically like Facebook Marketplace. And for all these transactions happening from all these players playing RuneScape, RuneScape exposes an API where you can get aggregated values of the high and the low price for every item in the last five minutes. And so the same kind of system you could build from this, you could use on the actual stock market in the real world. If this thing can accurately forecast the prices of items on the RuneScape market, it's not unreasonable that you could take something very similar and predict the price of stocks on the stock market. So it's like... Not financial advice. Exactly.
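The exact endpoints used on stream are not shown in the transcript, so the sketch below assumes the OSRS wiki's public real-time prices API and its 5-minute aggregate endpoint; the URL, the User-Agent string, and the field names should all be treated as assumptions to verify:

```python
import requests
import polars as pl

# Assumed endpoint for 5-minute high/low aggregates per item; the wiki asks
# callers to identify themselves with a descriptive User-Agent, so set one.
URL = "https://prices.runescape.wiki/api/v1/osrs/5m"

resp = requests.get(URL, headers={"User-Agent": "sqlmesh-runescape-demo"}, timeout=30)
resp.raise_for_status()
# Assumed shape: {"data": {item_id: {"avgHighPrice": ..., "avgLowPrice": ..., ...}}}
data = resp.json()["data"]

rows = [{"item_id": int(item_id), **fields} for item_id, fields in data.items()]
df = pl.DataFrame(rows)
print(df.head())
```

Something like this raw pull would be the starting point that later gets landed and modeled with SQL Mesh.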
This is... Do not use any of this on the stock market and then blame us if you lose money. Is there a RuneScape analog of that disclaimer? Like, don't blame losing all your gold on us. Yeah. So how about we... So Ben and I set up a Live Share, and Ben's mostly going to drive. We set up a uv environment here, so we can start by just showing off the two endpoints we'll hit from RuneScape, and then maybe start pulling in some of that JSON and getting it into SQL Mesh. Cool, share your screen, Ben. Yeah, and since we were talking about it a second ago, in case there's actually anybody listening to this and they know a lot about DuckDB and they're yelling: you can create a table through a Delta scan in DuckDB, so you can just create tables as queries and then start querying your tables, which is very cool. Okay, let's maybe even start with nothing. One thing also to call out is that hopefully this makes sense, what we're doing, and it can be extended, and we can find a potentially more interesting dataset that has everything: GitHub CI/CD, model training, retraining, monitoring, orchestration, all of that stuff, over the course of however many streams we do and however long that takes. We're starting with this dataset because this is an intro to a bunch of things. We're going to intro creating an MLOps-focused repo, Python package management, CI/CD, SQL Mesh, maybe Prefect. In a future one, we'll introduce Featureform and ZenML. There's a bunch of tools. And so while the data is important to make it interesting, it's not everything. We might change the data over time, but the tooling, the point is that it should be easy to plug any data into this and model what you're looking for. I want to add, too, first off, reiterating what Ben just said: we're going to build an entire ML platform end to end, multiple times. Like, if the ML platform has an orchestrator, we'll swap it out with a few different orchestrators. I love that. Prefect, and then we'll do ZenML, and then we'll do Metaflow. We'll do all of them, or a bunch of them. But also, I feel like a lot of MLOps tutorials and stuff that you find will skip straight to training and serving, maybe with a Kaggle dataset or something, but they completely ignore the data engineering side. And I think that's actually led to a lot of MLOps engineers being completely weak at data management and data modeling, which is not good, because feature creation is probably where the biggest value add in ML projects is. So true.
And not just feature creation, but you don't necessarily need to be the one doing it. Like, neither Eric nor I are great data scientists; neither of us is the one coming up with phenomenal features. But if you're going to be an MLOps engineer, it's really important to know that process and to understand how to support it in an easy way: column-level lineage when things are breaking, how to run backfills, how to scale those backfills. That kind of stuff sits in the middle of those two spaces, and it's a pretty important thing to be good at, even if you're not the one creating the features. And in every job I've had, that's been part of my responsibility, even if I'm not the one training the models. Last comment, too: I think data engineering in general is a black box to data scientists and MLOps people, and that has the problems I was describing. And that's where SQL Mesh comes in right now, today, for this project. SQL Mesh is a tool that at least analytics engineers would use, and probably data engineers too: you'll bring in raw data and use a tool like SQL Mesh or dbt to take that data from raw to bronze, silver to gold. And now you have this cleanly modeled data, and that's kind of the handoff point to the ML folks: these nicely made tables. So anyway, we're starting today with the data engineering side first. So yeah, I'll explain SQL Mesh maybe a tiny bit more. Like I said, we're going to look at other tools as well. Another one I'm stoked about is Featureform. They're very different in so many ways, but if you boil them down, they're both helping you build a unified data platform kind of thing. SQL Mesh is focusing on making sure you don't run queries multiple times and on how you share that environment across multiple people. They talk about virtual environments, and we'll look at that; it's really cool the way they handle it. Featureform is more focused on: all right, in reality, things need to scale pretty aggressively, and we're just going to bake that in from the beginning. Right now, I'm going to start with a local DuckDB instance, and the idea is that you can switch to another backend easily. But nonetheless, I am on a DuckDB instance, so it's local, so it's not necessarily scalable. Okay, so we have a bunch of things here. I feel like a newer engineer might look at this repo and say, I don't even know how you got to this step; this in and of itself feels pretty overwhelming. It has gotten so easy to create well-formed Python projects. So if we look at the base repo, this was just an empty GitHub repo Eric made an hour ago. There was nothing in it; it was a readme with no text. All you really have to do to get started with a repo is run uv init, with --lib if it's going to be a library. It creates your pyproject, it creates your lock file, it creates your Python version file, and it creates your src directory, then the name of your project, and then an __init__ file and a py.typed for type enforcement. Why don't we just start from scratch? Because I think we just ran two commands to generate all this boilerplate. Sure. I still don't want to lose it. Nice. Cool.
Oh, I don't have push access to the repo you created. Oh, I'll fix that. Here we go. I was clearly trying to make it so you couldn't save your boilerplate code. And then, yeah, I'll start over. You just commit and stash it locally, and then, yeah. All right, fine, fine. We have an empty repo; I'm even going to remove the virtual environment. Okay. So if you don't have uv, go to the website and figure out how to install it. You want to install it globally, with brew or pipx or whatever; they even have a curl command, I think, to install it. You just want to install it globally. uv init --lib creates a repo, creates an src and an __init__ inside my project, a pyproject, and the readme was already there. The Python version is going to be 3.10. They default to 3.12, but Eric's afraid of Python 3.12, so we're using Python 3.10. And it's pretty empty, but you don't have to think about anything; it filled everything out for you. So when you're using uv, a really easy way to do it is just prefix everything with uv, and then it'll find the right environment; you don't have to think about anything. So we want a couple of libraries in here. We want to add SQL Mesh with their web extension, we want to add DuckDB, we want to add Polars. Right now that's probably enough. SQL Mesh will update your pyproject, create the lock file, install your things. You said SQL Mesh will update it? uv will update. uv will update everything here. And then we want to add dev dependencies. I like having IPython in here; you can have mypy, you can have pytest, et cetera, and it'll add them in a group dependency. So when you install from your lock file to build the Docker image that you push to prod, these won't get installed, but when you're local you can install them. And then if you ever want to sync, you just run uv sync, and there's a bunch of extra CLI commands. But now we have an environment, and we have a bunch of code here; nothing is here yet. So we'll move into our project, into src/sqlmesh_runescape, and there's nothing here. There's a py.typed, which, if you're distributing this project and you type-enforce all your code, means those types will propagate and other people will get the type safety you worked on. And then the __init__. So now we're in here, and we want to create a SQL Mesh project. That's quite simple as well. SQL Mesh is a CLI, and we're going to use the SQL Mesh from our uv virtual environment to make sure we're running the right one. We're going to init, and the only parameter we're going to provide is duckdb. That could have been postgres, that could have been BigQuery, that could have been Spark; they have a bunch of different backends, and the idea is that it's very straightforward to switch between them. So we'll run that init inside of our project, so it lives in there, and it created a bunch of things: audits, macros, models, seeds, tests. We'll go through all of them in a moment, but maybe not yet. We have a config.yaml, which tells us the type of connection, and we can have multiple connections. We have our model defaults, which is our DuckDB dialect. And in here you can add other things, like global variables, secrets, other configurations. We'll get to that later in this session or the next one. SQL Mesh has the coolest GitHub Actions CI/CD flow I've ever seen.
And it's quite simple to add. In another project I have it, and we can bring it in if we get to that point. But it's super cool. Okay. Any questions, Eric or Demetrios, maybe before we try to model some data? I think this is a good time to step back and introduce SQL Mesh and what it'll be doing. dbt came out onto the data engineering scene a couple of years ago, and basically what it lets you do is render Jinja-templated strings into SQL queries. And it got really popular. And I think the reason it became popular is because it took people who used to write a ton of SQL queries and it got them into Git. So now all these analytics teams who were writing SQL queries all day weren't just losing those queries to the ether because they were typing them into some UI and clicking a save button, and then if the UI goes down, you lose it all. Suddenly, all their SQL queries are version controlled, which is great. And then also, now there's a standard way to collaborate across teams and even across companies. If you move from company to company to company, and all those companies are using dbt, you find yourself knowing to ask: where's our Git repo with our SQL queries, and where's our dbt command to render these things? So basically it standardized the process across a lot of analytics teams for writing, storing, and collaborating on SQL. And then, I think what the founders of SQL Mesh say... I mean, every time anyone mentions dbt on social media, you can expect that you've basically just summoned them to your thread. So I think rendering SQL with a templated string is cool, but it's still pretty limited and has a bunch of problems. And I think one of the reasons it took off is that I don't think the folks who originally were the target audience of dbt were software literate enough to realize that what dbt was giving them actually wasn't that amazing. Using Jinja to render SQL is something any of us could do. Any of us could have done it at any time. So it's just kind of funny. I'm not the biggest dbt fan. It does offer other things, right? It allows you to reference models from other models, and then it builds out the entire DAG for you. It also is a DAG orchestrator of SQL queries, which SQL Mesh also does, but it is worth calling out that dbt certainly paved the way. Yes. In the same way that Prefect added so much on top of Airflow, and I am a huge fan of Prefect and we'll use Prefect here, I think SQL Mesh is adding a whole bunch of stuff on top of dbt, loads of things. My favorite of which is definitely virtual environments and the way they let you handle backfilling and forward-filling data at different increments. Yeah, like once there was a standard way to render SQL queries from Jinja, it was like, okay, then dbt started building things on top. They made a cool little UI. You could visualize the flow of queries running into each other. So they were able to add some cool features on top of that. But I think there were still some, and this is what Ben just brought up, fundamental problems with dbt that dbt wasn't solving.
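To ground the claim that templating SQL with Jinja is something anyone could do, here is a toy sketch using plain Jinja2 (not dbt itself) with made-up table and column names:

```python
from jinja2 import Template

# A dbt-style model is, at its core, a SQL string with Jinja holes in it.
template = Template(
    """
    SELECT {{ columns | join(', ') }}
    FROM {{ source_table }}
    WHERE event_date >= '{{ start_date }}'
    """
)

query = template.render(
    columns=["id", "item_id", "event_date"],  # hypothetical columns
    source_table="raw.events",                # hypothetical table
    start_date="2024-01-01",
)
print(query)  # the rendered SQL string you would hand to a warehouse
```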
So one of those problems was that people were kind of upset with their queries being filled with Jinja syntax — that can lead to some messiness and painting yourself into corners. And also, Airflow is a tool a lot of people don't love, but one thing Airflow does really well is manage state. So I could have an Airflow DAG that runs every day — like, we're going to look at this RuneScape data, and there are transactions happening on the RuneScape marketplace every day — so it's reasonable that I'd want to pull in a fresh batch of data from the RuneScape marketplace every day. And once I pull in that fresh batch of data, I might want to run some standard transforms on it to clean it up. So if I brought in some data on Monday and cleaned it up, then on Tuesday it'd be nice if I didn't have to reprocess Monday's data — it would be nice if I could just run my transformations on the data from that day. Because if I'm processing all my data, all my history, at once, then every day my queries to clean all this data get more and more expensive. Airflow is really good at this. Airflow can remember that I ran a transform DAG on data from a specific date range, and that's great, because in the future when I want to run Airflow on all my data, it'll remember that this date range has already been done and it won't reprocess it. So that's idempotency — Airflow was great for that. dbt doesn't have any inherent concept of state, so dbt doesn't help you avoid reprocessing yesterday's data or the data from the day before. SQL Mesh does. So there are two things, minimum, that I can say SQL Mesh is already doing better than dbt. One: it got rid of Jinja and added some plugins that are really nice, so software engineers tend to like it. Two: it manages state — it remembers what data has already been processed. That alone can save you a bunch of money. But then there are some other cool things too. So that is where SQL Mesh lives. We're going to be using SQL Mesh to do what I just said: bring in some data and run some transforms. And then I think we'll show you today how you can avoid reprocessing old data. Yeah, I think that's a good start. So if I run the SQL Mesh UI, I want to see if there's anything in here in the default project, because I think they give you three models, and I think they tie into each other — so as we change them, we can see how they're changing. But I actually haven't run anything yet; I don't have a database, so we'll see if there's anything here. So what you can see is, this is just our repo. Very nice. You have an environment, you can plan. You have — wait a second, did I not run uv? This is our other SQL Mesh project, and I do not know — oh, these are just — I actually don't know how that's here. Did your browser cache this somehow? Is it using your local cache? Maybe. I haven't opened this in forever; I don't know what these tabs are. Maybe these tabs are something special — I have literally no idea. I don't have an answer to that. But these three models, which are actually the models in this project — you can see the code, which maps directly to the code in these model files. 
And you can see, so for example, let's look at one of them. So the seed model, that's supposed to like, you don't need to have a seed model, but what it will do is like, it'll essentially point to where data is, it won't actually run a query. And then if you look at the incremental model, which inherits from the seed model, and you even get column level lineage like that, which is pretty awesome. And you can see where the event date column came from. You can see that all we're doing is we're just kind of in this way like dbt, we're just referencing not a table, we're referencing a SQL mesh model that we're pulling from and you can reference six or seven or a million and it'll build that out. And then again with the full model, you can see the full model is pulling from the incremental model um and again you get the column level lineage uh if there is any i guess in this case yeah there's item id and so that builds that that here. So the item ID maps to the table above it. So that's what you start with a project with SQL mesh. You kind of get stuff out of the box. We're going to replace a bunch of these. Now, the seed model comes from some CSVC data. That's fine. My complaint with that would be like, how real the situation is that that you're gonna have some csv data that really is gonna seed your entire project inside of your github repo it feels a little bit unrealistic so i'm actually just gonna delete this model entirely um and i'm gonna remove remove the CSV data entirely. What I will do is we'll quickly maybe talk about full models versus incremental models. This goes back to what Eric was just talking about with how SQL Mesh maintains state. By default, it will store the state in the same database as your actual data. So right now all of our data is in DuckDB, but also our state is in DuckDB. The state is super, super small. Any production like Postgres environment will be able to support it. The SQL mesh people will definitely suggest that you take that data out of your analytics database and put it like out of DuckDB and put it into a Postgres table like a transactional uh system um it's pretty easy to configure that we're not going to do that here for simplicity but that's what they recommend for production um in that state database uh we will have all the things about all of our models so and and when they ran so sqlmash has the concept of a kind and we're going to look at two or three today we're going to look at full and incremental full is like as you'd expect you run this query and it runs this query in full at the schedule in which you run it. An incremental model will not run it full. It will, when you define an incremental by time range, and there's incremental by others, but I think time is the simplest to think about because you might be getting data every day. You define your time column, and then you can optionally define when to start. You might have data back to 1980. You might not want it from 1980. You don't have to do anything special in your code, right? Like the query stays so simple. And then what happens is SQL Mesh gives you the start date and end date globals. And those globals will update over time based on SQL Mesh's state and this just ensures that your time column, it ensures that your query is only incrementally pulling data from every time stamp that it, from every cron that it's scheduled for. 
So one thing that's really cool about this is like let's say you're like, if you, if you use the enterprise SQL mesh, you can have that schedule all of your models and have them run. But if you aren't, and you're using the open source and maybe you're scheduling them with Prefect or running them on your own, um, you have a cron daily, but that doesn't necessarily, like this isn't, this doesn't live anywhere off my laptop. you don't run it it won't run so what does that mean what it means is like if i don't run this for three days and now i want to do a sql mesh run to catch up all my data three days later it knows that i want this to happen in a daily window as opposed to for example an hourly window or a weekly window and so it will fill every full, it'll run this model once for every full cron window of time. And so like, if, yeah, yeah, it's, it's super cool. Right. So like if today is, is December 13th, let's say the last time I ran, this was December 10th and I run SQL mesh run today. It will run for all of September, December 11th and all of December 12th, but it won't run for December 13th because our cron is a daily, and December 13th has not completed yet, so it won't run for December 13th. If this was hourly, it would run to December 13th up to 10 a.m. Eastern Standard Time, but not 10 to 11 because it hasn't finished yet. At 1101 or at 11 o'clock, if I run it again, it will know and it will fill in the gap. Does that make sense? Yeah. 100%. What's up? So I was thinking, now I've started talking, so I'll just say it. I wanted to give the Eric restate. What was cool to me about this is when I saw this cron thing, I was like, oh, so where's the place in the cloud that this is running? Because clearly that cron statement is me telling this DAG that I want to process a group of data every day. And yeah, I think if you were using their cloud or if you were using something like Prefect, you could schedule a process that would run this SQL query that we see here every day. But you also don't actually have to schedule this thing in the cloud to run every day. You could run this on your laptop once a month, and then your laptop would break that last month into one-day ranges and then just go do 31 versions of this query all at once. Or maybe it wouldn't actually do 31. Maybe it would actually just do the last month's query intelligently. But I think it could break it up into the day ranges for you. That's cool because you don't have to use a cloud scheduler, at least in the early phases of bringing this onto a team, which is kind of fun. It's very cool and you can even um take it a step further which we're not going to get to today but we'll get to in another time you can imagine let's say you are doing this with duck db or let's say you really actually have to do some some serious stuff and and you can't do it with duck db and you have to do it with Polars or even Pandas, but let's, let's talk about Polars. Um, you might be worried because that brings all the data onto disk. I mean, Polars has a bunch of scanning capabilities and lazy frames and that's awesome, but nonetheless, like you might have to do something and that thing, uh, might have to run and it might pull in just way too much data. And let's say your cron is daily. You're like, I cannot fit a day of data on my machine. 
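Before moving on to batching, here is a rough sketch of the cron-window behavior just described (the December example). This is not SQL Mesh's implementation — just the idea that only fully completed windows get processed.

from datetime import datetime, timedelta

def completed_daily_windows(last_processed_day: datetime, now: datetime):
    """Yield (start, end) pairs for every *finished* daily window after last_processed_day.

    A partially elapsed window (today) is skipped until it has completed.
    """
    day = last_processed_day.date() + timedelta(days=1)
    while datetime.combine(day, datetime.min.time()) + timedelta(days=1) <= now:
        start = datetime.combine(day, datetime.min.time())
        yield start, start + timedelta(days=1)
        day += timedelta(days=1)

# e.g. December 10 already processed, running mid-day December 13
# -> windows for December 11 and December 12 only
for start, end in completed_daily_windows(datetime(2024, 12, 10), datetime(2024, 12, 13, 15, 0)):
    print(start.date(), "->", end.date())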
There are even more parameters when you're doing incremental by time range specifically — which is definitely my favorite model kind to use — where not only do you run it on the cron level of a day, you can tell it to break that day up into n hours, minutes, or seconds and how many to run in parallel. So let's say it's incremental by time and the query can be parallelized: I can tell it to do the daily cron but break it up into one-hour windows and run 12 at a time, and now it will do 12 one-hour windows at a time. That's so cool. And it will infill the data correctly back into my database, which is super cool. Oh yeah, I forgot to say — in the Airflow world, when people talk about backfilling, this is what they're talking about. If the last time I ran this query was in January and now it's March and I haven't run this DAG since, backfilling is the process of taking January to March, splitting it into smaller time ranges, and then processing all of those in their own little batches as though we had run one batch at each step. That's very imprecise language — no, it's not wrong. I mean, when I used to use Airflow I always found backfills unbelievably confusing; I could not figure out how to do them in a reasonable way. Honestly, even with Prefect — I haven't tried version three, but on two — I struggle to think about how to handle backfills. But SQL Mesh sort of just does it. It does it for me, and it breaks each model down into such a simple unit that it makes the backfill easy. And that's something I definitely appreciate. And once you do it in dev, you don't have to redo it in prod — we'll talk about that. So let's pull in some data. Let's start in an IPython shell. So here's why we're starting with DuckDB: we're looking at some RuneScape data, and for right now we'll see if we can pull in the mappings, the five-minute, and the one-hour endpoints, create three different models for those, and then create a Python model that aggregates them together in some useful way and spits out a table that we will call train data. Nice. Do you want to show people what the mapping is? Yeah, exactly. So we're starting with this data because, look, it's about as easy as it can be to get something out of it. It's JSON data — it looks like JSON lists of dictionaries — which is a bit awkward to work with and think about on its own. Certainly we could pull this in with Python requests, parse the JSON, and put it into a pandas DataFrame, but because DuckDB is just awesome, we can do a duckdb.query. I grabbed the column names, but originally I just did select star — that's how I got the column names. It's the same thing whether I run star or not: you select from read_json_auto and you literally pass in the GET request, it runs, and you very quickly get the table — the DataFrame — you're looking for. So that's about as easy as it can get. You just ran a SQL query on an HTTP request. That was awesome. Yeah, it's ridiculous. It's ridiculous. And someone might very reasonably say, this is so silly. Why would you do this? There's no reason to do this. 
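A minimal sketch of the query being run on stream. The endpoint URL and field names are not shown on screen, so this assumes the public OSRS real-time prices API; adjust to whatever mapping endpoint you're actually hitting.

import duckdb

# DuckDB's read_json_auto can read straight from an HTTPS URL
# (older DuckDB versions may need: INSTALL httpfs; LOAD httpfs;)
url = "https://prices.runescape.wiki/api/v1/osrs/mapping"  # assumption: public OSRS mapping endpoint
df = duckdb.query(
    f"SELECT id, name, members, lowalch, highalch, examine FROM read_json_auto('{url}')"
).df()
print(df.head())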
And I will actually push back and say that there's totally a valid reason to do this. This is literally the valid reason to do this. This is a mapping table. This table does not change. Unless they add a new item to the game. Sorry. Yes, unless they add a new item to the game. What is this data? So the RuneScape marketplace, there's like 3,700 items on the marketplace that you can buy and sell. And so this endpoint we just hit gets us the list of all the items that exist at Runescape today, which sometimes the game adds new items. They come out with a new kind of armor and they'll add an item to this list, this mapping. But basically this is a table that has the names and the IDs together. And then another endpoint we're going to hit later is for a given item ID, what is the price in a certain time range? Right, exactly. And so we're going to set it up like this. We're going to create our model and we're going to call it our runescape mapping. This is a full model because if we change the query, we want to rerun this in full, right? This is a full model because if we change the query, we want to rerun this in full, right? This is like our base mappings table. And it's really easy to pull it when we do it like this. And so we'll grab these columns just like this. We'll grab it from the rejson auto, and we won't even group anything. We will just select these columns from here. And we do sql mesh ui or plan or apply like it'll know now that this model maps to these columns in the metadata table and it won't rerun this query even every day because it knows that it hasn't changed if i change it and i remove a column it will know that it's a breaking change and it will tell me i have to re-run it and it will tell me which tables are going to break as a result because it will know which tables query the members column of this model we'll get to that in a bit let's run this i mean i mean let's get to running this as soon as we can whatever yeah so this is a full model running it daily the grain which is the primary key effectively is the id and we have an audit which is search positive ids what is an audit you get one by default you can write as many as you want an audit is just a sql query and it will fail your job will fail if this fails so we're asserting that all of our ids are uh greater than zero if this returns anything that means an id is less than zero our model will fail great let's build our incremental model incremental makes sense so we're going to do we're going to start with the hourly data because where is it one hour uh we're going to do that because I know that this is hourly and makes it really easy for me. We're going to do the start. We're going to set the start to 2024, December 13, today. I know what you're building to, and this is going to be unreal when you see this. It's going to be so cool. I do not know what you're building i also don't know what i'm building expectations are so high maybe yeah if you don't end up building do what i thought then i'll just tell you and it'll be awesome okay uh hold on let's see what this table looks like because i've never actually seen it unfortunate it does not map it out as we want and i like as an array so i was hoping it would map it out as an array there's got to be a way like oh it's it's is it json b it's just a bunch of json's one after another well i wonder can i do like oh data is that like a thing because no i don't know the syntax for getting this to the next level. You said you didn't need chat GPT. I'll search for it. 
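Conceptually, the positive-ids audit described above boils down to a query that must return zero rows. A rough Python equivalent, assuming the DuckDB file name created by the default init config:

import duckdb

con = duckdb.connect("db.db")  # assumption: local DuckDB file from `sqlmesh init duckdb`
violations = con.sql(
    "SELECT id FROM sqlmesh_example.runescape_mapping WHERE id <= 0"
).df()
# SQL Mesh fails the model if the audit query returns any rows
assert violations.empty, f"audit failed: {len(violations)} non-positive ids"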
Search for it. Let's start with this. Oh, what if we... Wait. Oh no, because it's the one timestamp for all this data. What happens if I do select data? Is it gonna be smarter now? Okay. All right, we will do in our incremental model, we're gonna select ID. Maybe this is actually fine because it'll give us a reason to do, it'll give us a reason to do the Python model. We'll do this from here. A, no, A, join SQL Mesh example dot, what do I call this? It matches the name, makes it easier. Runescape mapping. SQL mesh example.runescape. So I get that autocomplete, which is nice. B on a.id, a.Data, B. What do we have from our Runescape data that we want? Let's say low out B.I.L. PyL B.examine. Let's start with that and do that. Okay, so now we have our SQL mesh and this is not gonna be called incremental model. This is gonna be called like hourly scrape, for example. And we'll rename this to hourly scrape. Okay, now we have our two models. If we run uvrunsqlmeshui, go to our UI, and we can see in our models, hourly scrape. Event date are missing in the model. Oh, I must have. Oh, it's not ID event date. It was timestamp. This is the hourly scrape. OK, cool. Yeah, I just didn't update the name from the example. Still have an error. Partition by key event date. Oh, thank you. Time stamp. Does it do a reload? I wonder if you need to put those quotes on line 8 too in the grain. Maybe so. You hate it when column names map to... Keywords. Keywords. Anything? Where's event date? Oh, on line 4 4 time column event date what is that oh let me see the stream gets so grainy sometimes it's hard to see yeah oh is it dang i was gonna say maybe you can there we go make it bigger too yeah i'm on my macbook 13 inch so it's I don't have that much space. But how's this? Can you see this? That's great. It's got to be gigantic on your side. Yeah, that was good. So here we go. So check this out. We have the full thing. So we have our reJSON auto is giving us our high alt column which is then mapping to this high alt column here um that's so cool so cool and you can also see it does not know the data types of a lot of these columns uh which makes sense because it's not coming from a table it's coming from a json uh and And so SQL Mesh has no way to figure that out. We could tell it, and we're not gonna do that right now because we have no time, but we could. So just a note. And then once you tell it, if it changes data types, it'll know things. So we could do a plan in the UI, but I think it's better to do it in the CLI. And so we'll run UVMesh SQL Mesh plan, dash dash, plan dev. Sole model, sole model, where do I still name that? Okay, so we have tests. We're gonna comment out the test because the tests are based on the old data that we didn't, that we updated. But what's cool is that you can write tests and you can write input and output data and you can assert what the output looks like and you'll get an error oh come on okay great so uv run sqMesh plan dev, it figures out everything that's going on. And what you'll notice is that it creates some schemas of our schema name that we created SQLMesh example. And that comes from just the name of the model, underscore, underscore dev, because we just did a dev plan. You could have named this anything, but it reasonable to name it dev. 
And then our model names it model it knows what models need backfills because it stores that in the metadata and it know and since we set the start time to december 13 um this is what i was talking about before um the runescape goes from December 12 to December 12 because it is a daily cron and December 13th hasn't finished so it won't go to December 13th this one is an hourly cron the hourly scrape it's hourly and so it goes up to the most recently completed hour 14 whatever UTC and it won't do the next hour because our hour hasn't finished. So we're going to do a full backfill. Enter the backfill start date. Could be a year or blank to backfill from the beginning of history. We'll backfill from the beginning of history. And we'll backfill up until right now just by sitting done. And we'll apply backfill to all of our tables. And it failed because we're doing things live. TableAuth does not have a column ID. Let's fix it. Red.jsonAuto does not have an ID. What did it have? Do you remember? Oh, right. It didn't have an ID. Oh, we're not going to be able to combine them. Okay. We're not going to be able to join them in thing. We're going to have to parse that JSON and combine them in Python. Okay. No problem. We will grab this. We'll just grab these three columns. Select data timestamp. Let's see if we have time to finish all this. I put a query in Slack. You could try. It tries to like, chatPT gave it to me. The response of the API is a single JSON object. And this query that ChadGPT gave me, I think, can reach into the JSON object and grab each key as a row. Okay. We can try that in a second. But I want to get through building the dev environment just to showcase one really quick thing. So what we just did right now to keep it simple is we just pulled data from the hourly scraper through the timestamp windows that we that we wanted. And so we have two tables that are if we went back to the UI, they're no longer connected. And we can have a Python model that merges them. And so I reran SQLMesh plan dev from history to now. It created the tables and then it evaluates the models. It creates them and then it virtually updates dev. So what does that mean? It means we can do a UV run SQLMesh fetch df. Fetch df. run sqlmesh fetchdf and then sqlmesh example underscore dev dot what do we call this rune runescape mapping and what it will do is it will reach into the DuckDB table, and it will grab the table for us, and now this exists. Obviously, we haven't done any joins yet, so it's a little unfortunate, but if I now run uvrun SQL Mesh plan, and I don't run it, and I'm going to plan, so that means for production, if I don't give it an environment, what it's going to do is it knows what prod is missing, which is our two tables. And you can see it says apply a virtual update. So all it's doing, it did not run a single query. We didn't pay a single dollar here because it knew that it found that an environment in our SQL mesh world, one of our virtual environments was already up to date with the code. It just did a pointer swap. It grabbed those schemas, the underscore, underscore dev schemas, and renamed them to the non-underscore dev schema. And so now prod exists and it's up to date with dev. And so what that means in practice is you're building tables. Let's say we build a new table now, a new model, and it joins them together, creates a new feature, and that feature is used for modeling. 
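If you want to see the dev schemas and the virtual (prod) layer for yourself outside of `sqlmesh fetchdf`, a quick peek from Python looks roughly like this. The database file name is an assumption from the default init config.

import duckdb

con = duckdb.connect("db.db")  # assumption: DuckDB file from the default init config
# After `sqlmesh plan dev` you should see sqlmesh_example__dev objects; after the
# prod plan's virtual update, a sqlmesh_example schema appears pointing at the same data.
print(con.sql(
    "SELECT table_schema, table_name, table_type "
    "FROM information_schema.tables ORDER BY table_schema, table_name"
).df())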
We can run all of that locally, connected to our virtual, wherever our server database is, with SQL Mesh, do the plan, make sure the query runs, open up a PR, and set up your CICD such that when that PR is merged, SQL Mesh does a plan on prod. Since it was already planned on dev and it was up to date with the database in one of your virtual environments, it just does a pointer swap. And now prod is immediately up to date with the things you were doing locally after the PR passes and the tests all pass. And so you don't have to repay for any of those queries because they've already been done and validated and you trained your model which i think is like one of the coolest components of sql mesh there's one question coming through um which is how comparable are the tests to something like great expectations i think they are a lot like great expectations i think that so there's two things there's audits and tests um while we're doing this um grabbing any model that i've already built from python because the syntax you know just to keep the syntax easy audits will let you like audit um very simple things like a column is positive, something like that. Tests literally let you define full input output tests. So given this is my input, SQL Mesh knows how to query from that and run the incremental model. These are literally the outputs I expect. So it's like, I would say even, I would say an audit is like grid expectations. And that you can define a thing that you expect to happen. And then tests, I would say are closer to like pie tests where given this input and you run this model or this function, this is the output you expect. I would say that that's a good comparison there. Let's see if we can do this really quickly get rid of this is the batch size and batch concurrency I talked about but we'll do that another time make this hourly the columns are I'll do this quickly enough in my head um these are those speed coding oh i, I have them in here. Yeah. Let's grab low-alk and high-alk. So what we'll do in our Python model, get rid of this, get rid of this. We'll return a pandas data frame. Mapping. This is hourly. It's great. And now what we're going to do is we have our SQL mesh in Python. You can create these Python models as well. And they're very similar. You name them the same. You define the columns that will be returned and then the same kind of model kind and cron and then you just define python you could do whatever you want um so the mapping table was sql mesh example dot runescape mapping the hourly scrape was sql mesh dot hourly scrape the where clause with python you do actually have to insert yourself which is a little unfortunate but it is what it is um you connect to uh you connect to uh your database um like your sql mesh actually i don't even think you have to do that and you can do like a context query um have to do that and you can do like a context dot query yeah context fetchdf great um it's simpler so mapping df this will give you a Python data frame. And you'll do like this, select, what do we want to grab? Low alc and high alc. Low alc, high alc id from mapping, where clause. And then the hourly scrape will be a fetchGF from timestamp data. This is timestamp. This one doesn't have that. This has the timestamp. Now we have these two data frames, and we can merge them and maybe do other things with them. I don't know. I'm going to be able to do this quickly enough because I don't know the data well enough. But should we stop here or should I keep going? 
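For reference, a rough sketch of the Python model being written above. Model and column names (lowalch/highalch, the hourly scrape's data/timestamp) are taken from the stream; the JSON unpacking and join are left as a stub, and the exact @model signature should be checked against the SQL Mesh Python-models docs.

import typing as t
from datetime import datetime

import pandas as pd
from sqlmesh import ExecutionContext, model

@model(
    "sqlmesh_example.train_data",
    cron="@hourly",
    columns={
        "id": "int",
        "lowalch": "int",
        "highalch": "int",
        "ts": "timestamp",
    },
)
def execute(
    context: ExecutionContext,
    start: datetime,
    end: datetime,
    execution_time: datetime,
    **kwargs: t.Any,
) -> pd.DataFrame:
    # Pull the two upstream SQL Mesh models as DataFrames
    mapping = context.fetchdf(
        "SELECT id, lowalch, highalch FROM sqlmesh_example.runescape_mapping"
    )
    hourly = context.fetchdf(
        'SELECT data, "timestamp" FROM sqlmesh_example.hourly_scrape'
    )
    # TODO: unpack the per-item JSON in `data`, then join it to `mapping` on item id.
    # Placeholder output so the returned frame matches the declared columns:
    mapping["ts"] = execution_time
    return mapping[["id", "lowalch", "highalch", "ts"]]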
Eric, what do you think? I feel like I'm moving too quickly. I feel like it's fine to keep going a little bit um okay if you want if you want to jump that's fine because to be honest this whole stream was kind of our first test run anyway so that's true um one thing i want to try is that that endpoint where you can fetch the hourly prices. It gives you the most recent hour whenever you hit that endpoint. But it can accept a timestamp as a query parameter. It'd be kind of awesome to have SQL Mesh create timestamps for each range to pass in and then call that endpoint a bunch of times. Yeah, I agree. OK, we have our df1. And then df2 is equal to db.query. Select the timestamp data from SQL Mesh example.hourly scrape. Oh no. Is it because... 12? oh oh that's kind of confusing it's because this p is this this this route already only gives me the last hour so this isn't really an incremental model this is actually just a full model which is what i was saying so what you can do is like here there's a parameter i think it's just time stamp equals a thing and then like this could be the start date like let's let's do that like okay um yeah how do we like do we need like wrap this in double quotes or something like How do we do a... No, no, no. I think it's fine. Wait. Oh, wait. Oh, it's not a format. This isn't Python. This isn't Python. So this is... We don't even need these curly braces. But this has to be UTC times. Is there a way we can cast... I don't... I think it already is done in UTC. Oh, is it? Is there a way we can cast? I think it already is done in UTC. Oh, is it? Okay. But I don't think this is going to work because it's in a string. I see what you're saying. Do we need a – oh, okay. I do that. I have no idea. Cool. Let's just try running. Let's just try it. There's a trailing a all right do we need this where clause um i don't think it does anything well that yeah that was my point about it not being a full model, it's an incremental model. Yeah. This is the equivalent. Running this HTTP request is the equivalent of adding this where condition to selecting the data. I think this is an incremental model. Yeah, no. This is an incremental model. That's fair. No, but this would actually be, I think, end date. I think let me read what the API says says because i think i think it's the start of the time range is what it is um yeah time stamp if provided will display the time stamp field represents the beginning of the one hour period being averaged so i think so this time stamp parameter gets us the beginning of a time range. I think this does need to be start date. So it's the beginning of the range. Well, it seems like that might work. All right, let's try it. If this works, this is so cool. That would be very funny. I found processing. Oh, okay. I failed processing. Oh, okay. Because how does that hourly actually work? Is it supposed to be like a number? Yeah, that it's like a UTC thing. There's a, I can give you a sample yeah if you go to that runescape docs page um they have a sample one you can just click on yeah see there yep um okay no what didn't time okay Can we cast that as UTC or something? Cast. I just don't think that's how you concatenate strings. It's very Pythonic and I don't think that's legitimate. It doesn't have a cluster operator. Yeah, and I don't know how to. I'll run this through chat GPT because I bet it's a plus operator. Yeah, and I don't know how to. I'll run this through chat GPT because I bet this is a quick one. Yeah, I bet it's quite easy. 
Because if we get this, I feel like this is a good stopping place. Like, check it out: we have an API, and we can use SQL Mesh to break up the date ranges for us. Super cool. Yeah, I agree. But actually, right now you can just do this to get the latest hour, just to see it complete, and then we can always edit it back. Sweet. Oh, it's pipe pipe. Let's see. Yeah — to concatenate strings, you use double pipes. Yeah, drop that in there. Sorry, one sec. It's given it to me as integers, too. Live coding. What is happening? It doesn't look like — snapshot? Where did I write snapshot? Nowhere. I need to yield — no, I need to return mapping. Yes. I think I just added it. Do you want to try running it? Something about casting the start date as Unix. Yeah, so this is a good example: it knows what we changed, right? You can see it sees that we changed this, and it sees — now when we do this — this is such a breaking change. I mean, actually, it's not a breaking change, which is kind of funny. And it knows that we need to update the hourly scrape, and it knows that we need to update dev, because dev is dependent on the hourly scrape. Nice. Cool. Backfill? Yes. No — what happened? I don't — what is this? Using the DuckDB SQL dialect? I think so. That's another thing to call out: you can write these models in any SQL dialect and they'll work, because they have SQLGlot under the hood. Yeah, the SQL Mesh folks are like the ultimate SQL transpiler people — they can translate any SQL dialect into any other SQL dialect. What if we just do this? Oh, here we go. Oh, did you get it? Oh, was it already Unix time? I just want to cast a date, please. I know, it's so silly. Okay, that worked. No way — wait, I just lost the — no, it worked. That worked. So now we just have to figure out what's wrong with the Python model. Nice. It says execute got an unexpected keyword argument: snapshot. Where is snapshot coming from? I don't know where that's coming from. SRC slash — these are logs. For people watching, I'm sorry, I can't see the YouTube comments, so you might be asking things and we're just ignoring you. But next time — I think Demetrios is watching it, right? He left the call. Oh, no. Okay. Let's stop here and come back, and then we'll figure out — I don't know what it's actually talking about with respect to snapshot. Because — yeah. Oh, wait a second. Is that like — oh, do you have to define it? Is it built in? Yeah, maybe. Do you want to use snapshot: any = None or something? Yeah, let's see. Oh, maybe you need to provide keyword arguments. Oh, okay. That's unfortunate. Okay, let's try it. What a lovely SQL Mesh plan we have there. So, timestamp is bad again — why? I think we need to — that's why. Yeah, what a pain. Yeah, I know, terrible column names. All right. What SQL language is being used in this fetchdf? Is it DuckDB? Yes, because our engine is DuckDB — I think it knows to use DuckDB. Oh, context.fetchdf is a SQL Mesh API. Yeah, it comes with your Python model. That's super convenient, actually. From mapping? Oh. Oh my gosh. Timestamp referenced in the select call — timestamp from hourly scrape — isn't that timestamp? It says it can't be referenced. Did hourly scrape execute successfully? Yeah. Okay. 
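For reference, the shape of the call being fumbled on stream looks roughly like this in an IPython session. The endpoint and the epoch value are assumptions (the public OSRS real-time prices 1h endpoint, where timestamp marks the start of the hour being averaged); inside a SQL Mesh model you'd build the same URL in SQL with the || string-concatenation operator and the @start_ts macro instead.

import duckdb

epoch = 1734102000  # example value only: Unix timestamp for the start of an hour window
url = f"https://prices.runescape.wiki/api/v1/osrs/1h?timestamp={epoch}"
df = duckdb.query(f'SELECT data, "timestamp" FROM read_json_auto(\'{url}\')').df()
print(df)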
Okay. It's cool that even though we're getting an error in the Python model, it doesn't seem to be rerunning the upstream model. So even as we're developing and running into errors, it's like saving us from spending tons and tons of money on our lake house. Yeah, that's the whole thing. And also just time wasted. Yeah. Because this is like a pretty real set of development. I would totally need to go into the same debug cycle if I were working right now. Let's see what happens if I do a select star. Oh, what's interesting is that this returned nothing. It seems like that returned nothing. Oh, yeah. If you look at the data frame. Let's comment it out and just do this for a moment and we'll come back to it. Okay. Oh, you know, I think this hourly fetch, that HTTP endpoint is getting us, it's returning us one single JSON object at every... Yeah, but that shouldn't matter. It does return two tables. Like if I do... Oh, it does work. Okay, cool. Yeah. But if I do uvrun ipython and import DuckDB, DuckDB.qu dot query select data timestamp you use the ipython terminal a lot oh i love it yes yeah so that so that worked fine. But it only returns us a single row, which is like this giant object containing the entire table. Well, that's why I wanted to do... Oh, dude. So, oh, is it? Oh, my God. It's because it's double quotes for... Oh my god, I hate myself. I think it's this. I love that. You don't just click on the token and do a quote. You always manually put the quote on both the start and the end. Yeah. I'm not an expert. Everyone's got their own little weird guesses. Not a VS Code expert. Let's see. Oh, yeah. Okay. You could do... Wait, what failed? Hold on. I think it's in your... Oh, it's in the model. I think you're single quoting timestamp in the model config in hourly. Yeah, go up. Timestamp is in single quotes there. Is that the whole thing again? On line 8 as well, there in single quotes there is that the whole thing again on line eight as well there's single quotes yeah i would think that this one shouldn't maybe it does matter let's say i i don't know it is a bit of a dsl yeah a little bit column timestamp referenced that exists but i cannot be that supposed to find what column timestamp referenced that exists. But I cannot be that as far as it's fine. What? Seeking SQLMesh of Sample Train. Well, maybe we're not qualified to be doing this stream. Maybe we should have had a SQLMesh. Oh, we certainly should have had a SQLMesh. There's no question about that this feels a lot like the other one i built i don't know what the difference is well this is where if someone's in the youtube chat and we can't see the comments they're like you idiots like they they probably have the answer like let's find out i don't know how to see the comments though i feel like we're way over we should uh just kill it yeah kill it i mean come back and it's just like if people want to drop they can so i feel like it's fine if we just stay on the stream and um and i mean we can go as long as we want. Anyone else can drop it. In the future, we won't do them this long, but this is kind of like a practice round. So if you want to see. I was in a comment. It's very funny. OK. Yeah, I think we should call it, I feel like we're, we can go back and do a little bit of prep and make sure and figure out what we were doing wrong here and then explain it on a fresh call. Sick, sick. Well, see you guys. Whoever joined, thanks for being on our first one. So, see you later. 
| Exploring SQLmesh | 4,738 | MLOps.community | 20241214 | SQLmesh is an up-and-coming tool that addresses some of the shortcomings of dbt. Ben has been playing with it to see how it might fit into an ML Platform.
Join us live to ask questions and hear Ben's perspective after doing a short PoC with prefect and SQLmesh. | 2024-12-17T01:38:50.210832 |
https://www.youtube.com/watch?v=hOFCP4xG1I8 | On-prem agents, Wiz. Let's go. What a great way to round out 2024, huh? Yeah. Who would have thought this is where we'd end, you know? That's right. That's right. You know, it's really cool. We saw some evolution happening, even within the last week or so from LangGraph and from the folks over at LangChain. And we've actually got some new things that we're going to cover today, including LangGraph platform that actually has LangGraph server implemented, which is a little bit different than LangServe, right? Yes, we have entered the platformification era. And so the names be changing, but ultimately, it's the same technology we don't love, just bundled in an easier package, a prettier present for the holidays. That's right. That's right. That's right. So it's still serving the Langs we love. It's just a little bit more, let's say mature heading into the next year, something like that, right? A little bit more production ready. Yeah. A little bit more enterprise grade. Excellent. Excellent. But there is a free version and it is open source and that's the one we're using today. Isn't it so? Yes. And it does the thing and we're going to see it do the thing. And that's basically all you want. All right. And we're also going to wrap up the year with talking about agents and agentic rag and rag and how you might think about these differently in an on-prem context. I'm excited for today. How about you, Wiz? Last one of the year. Let's go, baby. Can't wait. I can't wait. Let's go. All right. We'll see you back in a bit as we kick off discussion. Let's go ahead and rock and roll with on-prem agents today, guys. If you have questions along the way, if you have comments, throw them in the chat. We'd love to hear from you guys, especially during the discussion components. All questions will be answered at the end if you put them in the Slido link. So please do use the Slido to put in questions and upvote your favorites. We'll cover the latest and greatest from LangChain and LangGraph today. We'll talk about agents and how to build them on-prem, what kind of requirements you need. And we'll also discuss some of the differences between using an approach that we get from Langchain versus using an approach that we get from like a Llama Index. So this is going to be a great way to wrap up the year. Let's get ready to rock and roll with on-prem agents. All right, so as we kick off the old sesh, let's make sure we understand exactly what it is that we're aiming at. We want to understand on-prem and we want to understand agents. We've covered these before, but we're going to give a quick review and primer here. We also want to understand the LandGraph platform because this is sort of a new structure. It's an evolution that's taken place over the last year, and it's important to stay up to date on what's going on right out at the LLM edge, especially as these tools and technologies like LangGraph and Langchain mature. We're going to discuss, because it's quite interesting actually, agents versus agentic rag versus rag in an on-prem context, because the compute and memory requirements you need are actually different depending on what exactly you're looking to build and do. Then we're finally going to build an agent research assistant. Shout out to Langchain folks for putting one together right in time. So we're going to go ahead and stand on the shoulders of giants a little bit with one of their recent repos they put out this week and give our own flavor to it. 
And then we're going to discuss some of the latest releases they've had this week to sort of close out the sesh, including Langraf Command and Langraf Inter interrupt and give you our two cents so with that let's go ahead and get into on-prem agents we will again talk about the platform we'll also mention the model hosting this is just sort of a for those of you that may be slightly more beginner the inference server versus the API server. These are different things. And we really want to zoom in on that. And then we'll build. So on-prem, when we talk about this, what we're talking about is we're talking about it's right there with you. What's it? Well, the stuff you need to run your models, to run your apps. It could be at your house, like it is at Wiz's house today. It could be at your business. It doesn't really matter. Now, that's what on-prem is. I just want to be super clear like what on-prem isn't. You don't need on-prem to do production grade applications. You can do them on-prem or you can do them in the cloud. Production simply means running and not going down. You might also consider like, do I need specific types of hardware on-prem? Like, do I need GPUs with video RAM on-prem? And the answer to that is like, yes, okay, you do. Because LLMs run on GPU. That means both chat and embedding models. And depending on what you're building, you might need one or both. These things run on GPU, and they care about how much memory we have. This is a really, really important, probably top level important thing if you want to do stuff on-prem. And of course, GPUs are quite expensive. So are the models that we typically are trying to run on them. So what we'll use today is we'll actually use a version of LLAMA 3.2 that you may not know about this, and I didn't really know about this, that they have text-only versions that are great for on-device. It's not just a multimodal LLM. So we're going to go ahead and pick up the 3 billion parameter model because it's more lightweight and still very, very performant. So if we're looking at on-prem, we might also consider looking at things like on-device models, right? Makes a lot of intuitive sense there. Our device is, in fact, on-prem. Now, the next question we need to ask is what about CPUs? What about RAM? What about hard drives? Do these things need to be on-prem? Yeah, of course they do. And, you know, as we think about the types of requirements we need when we go on-prem, it's going to depend on what we're building. We're talking about on-prem agents today. And when we use agents, we may or may not use some of these other pieces, like embedding models, vector DBs, and creating robust data pipelines. Today we're talking about agents though, so let's be real specific and define what we mean by agent. Our definition of an agent at AM Makerspace is a system that can leverage or emulate reasoning or equivalent to make dynamic decisions in an application flow. And in order to create an agent, an agent, we need to really define what that means to us because as we all know, requirements really do matter. What we're building and why we're building it really does matter. And we hear a lot of variations today about on-prem, in the cloud, RAG, agents. We might really take a hard look at this heading into 2025 if we want to spend something up on our premises for either ourselves as individuals or for our teams to leverage because the compute requirements for things we need to build are actually different. 
If we want to build a RAG system, for instance, we're going to need an embedding model for the query. We're going to need it for our data. We're going to need a vector DB corresponding to the embeddings we create with our data. We're also probably going to need a pretty robust data pipeline if the data is changing from time to time and something we need to update. Whereas if we just want to build something like we'll build today, where we're leveraging sort of a simple agentic flow that hits the internet, comes back, talks to the LLM again. Well, in that case, we don't really require an embedding model, a vector DB, some sort of robust data pipeline. We can actually get away with much less. How much less? Well, these are important questions. And the answer, of course, as all good questions are answered is it depends. And of course, we can have a gentic rag where we kind of have all of the above, we have our sort of proprietary database, and we can pull from it, maybe we're leveraging some sort of internal API to hit it if we're doing this on-prem. And we're also giving our agents access to things in the way we would with simple tool calling, simple function calling agentic flows. So when we go to estimate the requirements, you know, the first thing that we need to do is we need to say, well, how much data? What are the models? Right. And like. The most classic thing ever in machine learning is just exactly that. So none of this is new. The new part is really when we talk about agents. Now, we've talked about agents many times on this channel, and we'll talk about agents many more times in 2025. We've typically given a few different agent definitions. One of the ways you can talk about agents is sort of in an agentic RAG capacity where you're augmenting and enhancing search and retrieval to do even better RAG. Like say agents are a fancy RAG. You can also use agents in a simpler way where you avoid RAG altogether and you're simply giving them access to tools or APIs. The simplest one and one of the ones we'll leverage today is a simple web search. How can we give the agent access to the internet to do search? And this sort of meta pattern that's kind of always at play is this react or reasoning action pattern. So we're going to focus in today specifically on the function calling, the simple agent, not the react or not the fancy rag agentic rag approach. The react pattern and the simple function calling agent are sort of one in the same thing. In fact, we can take a look at what we're building today by looking at a simple reasoning action loop. We're going to have some sort of query, some sort of question that we're trying to answer. We're going to try to do some research today. And our agent is going to take some action. That action is going to have some observation, some sort of result that we're pumping back into the agent to consider and reason through and think through again. Now that might be enough just to hit a quick Google search or something like that to get our final answer. In this case, we're doing summaries of research that we found. So sort of a final summary to answer a detailed question that we asked. Now, what kind of actions are we going to be taking today? This is where it starts to get a little bit more interesting. Well, we're going to leverage the web research. This is from the Rabbit Research Repository or the Research Rabbit Repository on LandGraph. And we're also going to be able to summarize the research. And there's one additional step. 
So when we go into our toolbox, when we go and we use function calling, we're typically going to go to the web search, and then we're going to sequentially move to a summary step. Now, you can leverage kind of different LLMs for different parts of this, including the third part, which is to reflect. And this is where the reflection agent comes into play here. But we're going to stick with having just a sort of LLAMA 3.2 setup where we use one LLM for each of these steps. Remember, we define agent or agentic system as a system that can leverage reasoning to make dynamic decisions. Basically, it can reason and it can decide, it can act within our application flow. When we think about reflection, this is actually an idea from the paper on reflection, language agents with verbal reinforcement learning. This is kind of a tough definition here directly from the paper verbally, a reflection agent is one that verbally reflects on task feedback signals, then maintains their own reflective text in an episodic memory buffer to induce better decision making in subsequent trials. That's a lot of words that are probably a little bit over complicated so let's take a look at this idea of the reflection agent and let's consider that each time we're going back to our agent we're sort of managing this conversation thread and this conversation thread and everything that we hold in it from our web research to our summaries. This is what we are reflecting on, okay? Simply the conversation that we're having. That's the memory buffer, that's the sort of episodic different loops, and that's the sort of crux of what it is that we're really doing here. We're going to build again, an on-prem, a local researcher or research assistant. This is another way that we can sort of visualize what's happening. We give it a topic. We're gonna go ahead and go to the LLM. It's going to do some research. It's gonna create or update our summary. And then we've got this sort of other LLM pictured here that's sort of doing the reflection. Again, you can picture this a number of different ways. This is directly from the Langchain repo. Now, this was obviously built with a very common piece of visualization, block diagram software that you've probably all seen many places before. You can also visualize it directly through the LangGraph Studio IDE. And the reflection piece, as we go to sort of from the query into research, doing the summary, reflecting on the summary, the reflection piece is actually going back to the research based on that thinking through and that reflection. Now, this is a quite simple setup, actually, this is really kind of as simple as some of the useful agents you might build out there. Get, we don't necessarily need the summary piece. We don't necessarily need the reflection piece. Those are sort of a little bit extra additions, but really this is quite simple. It's much more simple if you joined us a few weeks ago than when we did the TPS report generation using Lama index, and we sort of tried to assess dopeness of particular amounts of data. This was an agentic rag setup, and it's one that we actually had to put quite a bit more work into. We needed an embedding model. We needed a vector database. We needed to make sure that we built out our data pipeline. Whereas when we talk about what we're doing today, when we talk about this research rabbit that we picked up directly from the Langchain repo, they put this out this week on LinkedIn, we saw it, thought that was perfect. This is not leveraging all of that. 
We don't really need to do all of that. And so one of the interesting questions when we talk about on-prem setups heading into 2025 — I want to bring Wiz back up for this — is: if I have a simple setup like this, Wiz, where I don't need all this extra stuff, at least in theory, not yet, how can I think about my requirements, both from an LLM perspective and from an agent perspective? Like, how much GPU, VRAM, CPU, and RAM do I really need? Yeah, I mean, this is not really a question that we have the answer to out of the box. This is very much dependent on what you're doing, even for these simple things, right? If you use the 1B model, well, that needs fewer resources than a 3B model, and performance will scale accordingly. So I think it comes down to, at the end of the day, what kind of performance you're looking for, and that performance is largely going to dictate the actual resources you need — just to give a non-answer. That's right, that's it. So by performance, you mean something like how long it will take someone to get a final answer from our research report after giving it a topic? Like, what's the latency for a final answer, kind of thing? Sure, what's the latency — that's a great way to think about performance. Another way is accuracy: how good is the report? What's the quality of the report? So on all facets of performance, our resources are going to be taxed as we want things to perform better. If we want a better report at the end of it, a better model is going to lead to a better report, which is going to require more resources. And if we have the same pool of resources, well, maybe now the model is running slow but the report is generating accurately — and then we need even more resources for it to generate fast and accurately. You can imagine how we get into a situation where we're talking about nodes and clusters very quickly when we're thinking about on-prem. That's right, that's it. So, okay, I'm imagining I'm a developer, Christmas is coming up, I either have some Christmas money coming in or maybe I'm going to give myself a gift here, and I want to be able to build dope on-prem agentic apps. Okay, that's my goal for 2025. I know we've talked about this before with the GPU, but are there CPU requirements you'd recommend for people today? Can I get the MacBook that's basically standard and get very far with it, or do I need to spend four grand on a MacBook right now to be able to build agents — or does that just make people feel cool when they really don't need it? Yeah, I mean, if you want to build on-prem, it's going to cost you money no matter what. We've talked about this before — I'd always recommend getting a 4090 or a 3090, pick your poison, if you want to just get started. But that's not going to serve a lot of customers, right? So you're immediately going to have to purchase more hardware. 
If you just want to tinker and build and have a great time, the MacBook — whatever M-series chip, right — works well with Ollama, which we're going to be talking about today, and that's really all you need to tinker and play. Prototype. Prototype, right. Once you get into trying to build a performant system, you're going to need to care about things like: are your compute resources high-uptime? How do you handle it when they go down or something breaks? You immediately get into a much more complex, much less straightforward engineering problem once you move past that prototype phase on-prem. Okay, okay. We got one quick question in the chat here: can we set up on-prem systems with Red Hat 8.x to execute GenAI stuff? Sure. I think Red Hat can run Kubernetes, so that's all you need, right? You just need a path — as long as there exists a path to Kubernetes, you can use whatever operating system you want. I think even Windows can do Kubernetes stuff through WSL. So your OS shouldn't be a limiting factor. Okay, okay. All right. So basically, to recap: for agents, we're not talking about needing some massive CPU or some huge amount of RAM, especially for the simple, straightforward function-calling agents. This is something you're going to get out of the box with any kind of standard, recent laptop you get off the shelf. Yeah. I mean, you can run this on a MacBook and it will work. You just need, you know, 24 gigabytes of unified memory or whatever — we don't worry about the CPU versus GPU on a Mac, it's all the same box. It'll be good enough if you have the M series. Good enough, good enough. All right. Well, thanks, Wiz. We appreciate those insights, and we're going to go ahead and introduce the LangGraph platform next, everybody. So we'll see you back in a bit for the build. Okay. So with that in mind — on-prem agents — let's take a look at the latest and greatest tooling from LangChain. The LangGraph Platform is really a rebrand of LangGraph Cloud. It came out around Halloween this year, so not that long ago. And we're going to use the self-hosted Lite version, which gives us access for free for up to one million nodes executed — that's the language they're using. It's a limited version of the LangGraph Platform, but we can run it locally, we can self-host, we can do stuff on-prem. There are also other enterprise options; if you're interested in those, definitely check them out. Now, how did this come about? This is the more interesting thing for us, right? How did this really come about? Well, the evolution was kind of like this: we did an event on the agent IDE, the LangGraph Studio idea. It turned out a lot of people really liked it — we thought it was cool too. We actually thought it was a no-code builder when it came out, but it wasn't; it still really is a useful tool to debug, visualize, and accelerate application development for these types of applications. So this came out, and then LangChain realized, oh, people love this. They love it so much that they actually want to be able to use it on their own hardware — they don't just want to use it in LangGraph Cloud. They released a desktop version of it. So that's kind of where this began, at least in the story they tell. Now, additionally, LangGraph Server was something that people were finding to be quite valuable. 
This idea of serving applications built with LangGraph is not new for LangChain, but this is kind of where the agents met the serving a little bit: LangGraph for those agentic flows, and then serving, kind of came together here. And what LangChain heard is that using this LangGraph Server was a lot easier. It was a lot easier than, for instance, a previous solution that we actually thought we were going to cover today. Interestingly enough, last December 14th, almost exactly a year ago, we covered LangServe for the first time. It's the same idea: you can deploy a LangChain application directly to an API that people can hit. It never really caught fire in the industry, as we saw throughout the year, and even as of this week you can see on the docs that they're recommending you use LangGraph Platform instead of LangServe. So that's exactly what we're doing today. And then finally, the nail in the coffin for going with the platform: on-prem, baby. You know, we've got a lot of people out there saying, hey, data privacy is way too important, I'm not going to use LangGraph Cloud. Or: we've got to be able to hit internal APIs, we can't use LangGraph Cloud. And so they listened to their customers, they listened to many developers out there like you guys, and they realized, well, actually, let's do something that allows for either cloud or on-prem, for people with strict data privacy requirements or for people that need to connect to internal APIs. LangGraph Cloud, which was previously obviously a cloud solution, was now replaced with LangGraph Platform, which can do both. Look at that. Maturity, evolution, 2025 is indeed on the horizon. So LangGraph Platform has three primary components. It's got LangGraph Server, very similar to the LangServe idea, where we can serve up the application. It's got LangGraph Studio, where we can visualize, debug, and accelerate application development. And it's got the command line interface and the software development kit, classic, which help us deploy and scale. All right. Okay. So we're going to use this today. And importantly, none of this hosts LLMs for us, so we have to do that separately, no matter whether we're building just an agent or a RAG system or an agentic RAG system. And as Wiz mentioned, we're going to use Ollama. Ollama, as we've talked about before, is really quick and easy, very useful, allows you to deploy stuff locally, and you can really take it anywhere. As we discussed in a recent event on vLLM, it's not necessarily going to be the hyperscale, super fast solution for everything in production as you grow things out. But you can take it anywhere, like wherever your premises are. Again, we're going to leverage some efficient models: the 3 billion parameter Llama 3.2, off the shelf. This is text only; this is not a multimodal LLM. And to be super clear about the servers that we're going to use here: Ollama is going to be the inference server, so that's the one that hosts the large language model. The LangGraph server is going to be the API server, the application server. This is the one that we're hitting so that we can basically say, here's my topic, give me back the report, do it all. And I'm personally a little bit curious to see whether we can get some more visualizations from LangChain about what's going on inside LangGraph server. If not, that's something that I'm quite curious to build out here for you guys that follow us at AI Makerspace.
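Before the walkthrough, here is a hedged sketch of what that separation looks like from the application side: the app talks to the local Ollama inference server through a chat-model client, and the graph and serving pieces live in LangGraph. The model tag and base URL are assumptions for illustration.

```python
# pip install langchain-ollama
from langchain_ollama import ChatOllama

# Ollama is the inference server; this client just points the application at it.
llm = ChatOllama(model="llama3.2:3b", base_url="http://localhost:11434", temperature=0)
llm_json = ChatOllama(model="llama3.2:3b", format="json")  # JSON mode for structured steps

print(llm.invoke("In one sentence, what does an inference server do?").content)
```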
So let's go ahead and get into the on-prem research assistant today. It's called Research Rabbit in LangChain's repository, and Wiz has whipped up a special AI Makerspace edition so we can get agentic on-prem today with you guys. Wiz, over to you. Oh, yeah. Okay. So we're going to do something terrible, which is share the whole screen for just a few moments. This is what we're working with today. The idea is that we have a number of things that are rolling. What we want to be able to do is use this Research Rabbit, use it in our LangGraph Studio, and have it hosted through the LangGraph Platform. So we have a series of steps that allow us to do this. We have two different terminals here. One is going to be running a local Studio version, which we can use to debug, play around, and test. And then the other is going to be running our actual deployment through the LangGraph Platform. So this is running on my local machine, and this is running on my living room server. What I would say is important to keep in mind here is that there does exist this separation between the debug, prototyping mode and the mode where we've pushed this thing to quote-unquote production. Also, you'll notice that we have our Ollama set up, so we can do cool things like talk to this Llama 3.2 model: like, how are you today? Right. You'll notice that the inference, once the model is on the GPU, is very zippy. This is because it's a relatively small model, right? So this is all of what we're working with, just to give you an overview, and now we'll dive into each of these components. First things first, we've got to talk about what the actual application is doing. It's going to be this Research Rabbit from LangChain. You'll notice that we have our LLM; it's just going to be our ChatOllama model. We have the LLM in JSON mode as well, so we can leverage JSON mode, since the Llama 3.x series of models supports it. Then we have some nodes. Generate query, unsurprisingly, generates a query. We have web research, which is going to be a tool-use node. We have summarize sources, which is going to be another kind of tool-use situation; we're going to think of it as such. We have reflect on summary, which is a node that allows us to reflect on the summary we got from that other node. Then we have our finalize summary, which is kind of like locking it in. And then we have route research, which is going to tell us where we need to go. You'll see here that we're just going to loop a set number of times in this simple agent, and then we're going to escape to our finalize step. So all we want to do here is make sure that we have the actual nodes set up, which is pretty classic; hopefully you're used to nodes at this point. And then what we're going to do is build our graph. Once we've built our graph, we're going to go ahead and run our graph. Now you'll notice as well that we have this interesting file structure where we have some prompts. These prompts are pretty straightforward; all we're going to do is think about them as instructions. This is normal agent stuff, right? So nothing too weird or crazy at this point. And then, of course, we have our state. Our state is pretty simple given the actual application, right?
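For readers following along, here is a minimal, paraphrased sketch of a graph with nodes like the ones just described. It is not the actual Research Rabbit code; the node bodies, state fields, and loop limit are illustrative stand-ins.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict, total=False):
    research_topic: str
    search_query: str
    running_summary: str
    research_loop_count: int

def generate_query(state):     return {"search_query": f"overview of {state['research_topic']}"}
def web_research(state):       return {"research_loop_count": state.get("research_loop_count", 0) + 1}
def summarize_sources(state):  return {"running_summary": "...sources summarized..."}
def reflect_on_summary(state): return {}
def finalize_summary(state):   return {"running_summary": state.get("running_summary", "") + " (final)"}

def route_research(state) -> str:
    # Loop a set number of times, then escape to the finalize step.
    return "web_research" if state.get("research_loop_count", 0) < 3 else "finalize_summary"

builder = StateGraph(ResearchState)
builder.add_node("generate_query", generate_query)
builder.add_node("web_research", web_research)
builder.add_node("summarize_sources", summarize_sources)
builder.add_node("reflect_on_summary", reflect_on_summary)
builder.add_node("finalize_summary", finalize_summary)

builder.add_edge(START, "generate_query")
builder.add_edge("generate_query", "web_research")
builder.add_edge("web_research", "summarize_sources")
builder.add_edge("summarize_sources", "reflect_on_summary")
builder.add_conditional_edges("reflect_on_summary", route_research)
builder.add_edge("finalize_summary", END)

graph = builder.compile()
print(graph.invoke({"research_topic": "local LLM agents"}))
```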
So what we have is the research topic, what our query might be, what our results have been, what sources we have so far (you'll see why this is important when we look at the application), then our research loop count, which is how many times we've gone in a circle, and then our running summary, which is our summary so far. And that's it. The deduplicating-and-formatting-sources piece is just going to build a list that's not filled with duplicate links, in case we get them. Okay, so that's the application. That's dope. Now we're going to go ahead and see how it works, right? So we've got this cool application, we're very excited about it, and we're going to look at how it works. So, in LangGraph Studio, which you'll notice, or you might not have noticed, but I'm on a Windows machine. And I'm looking at LangGraph Studio, right? Now, LangGraph Studio is running in the Docker container; we're accessing it through smith.langchain.com. And we're able to see LangGraph Studio even on our Windows machine, though we can't launch it as a desktop app, which is okay. You'll see here that I've already done a research query, which is: how is Llama 3.2 8B Instruct trained by Meta? And you can see, as we go through the steps, first we generated a query, then we did some research, right? After the research, we summarized, then we reflected on the summary, and then we did some more research, and on and on. You kind of get the idea, right? So we can add a new thread here and do new research, like: what is an LLM agent anyway? Then we can submit this and get the beautiful visualization as our system works through the steps in order to solve this problem. That's just working through those nodes we created in that Python file earlier. Okay. So that's that. I'll just let this thing run; that's fine. What we want to do as well, now that it works, is go ahead and get it running on our remote machine, in this case a different server. So what I've done is I've already set this up so that we can access it. You can see here that we can access our docs, which is going to be how we use this service. It's got all of the instructions we could ever think about or want, thanks to the documentation that has been created for this service. Because it's using the OpenAPI structure, the docs look very good and are very robust, and that's all been delivered through the LangGraph Platform. So in the first instance, we just kind of have this LangGraph server here running by itself, and then we have our LangGraph Platform running the separate instance. This is extremely important because it lets us wrap this whole thing in that platform, which is more than just the LangGraph graph. So if I once again stop my screen, and then we share my terminal window, which will be lots of fun, you'll see that we have this terminal window. I'll go ahead and zoom in so it looks less tiny. And the idea is pretty straightforward. We're going to go ahead and run things down, and we're going to look at our Docker Compose. We've talked about Docker before, right? But Docker Compose, what it does is it has a bunch of different services.
We have our LangGraph Redis service, our Postgres service, and then we have our actual LangGraph service. This is the reason why we say this is more production-ready: it's not just this kind of naked API chilling out there in the wild. We have our Postgres service; this is a database, and the database is going to handle things like threads and assistant interactions and all kinds of other fun things. Redis is going to act as a pub/sub for how we're actually interacting with the services, because many people might be using the service at once, there might be many background threads running, and we need to be able to route those to the correct users. There is one limitation with this build as built out specifically, which is that Ollama doesn't support concurrent generation. So this is not going to be something that you plug and play into production. You're going to want to, or need to, look for something like vLLM in order to bridge that final production gap. But very few code changes are required to do that, and it works well for a single user with Ollama. And that's it. To say "that's it" is a lot, right? We have this idea of Docker Compose, we have all these other things that we're using, and then we can finally just docker compose up, which is going to spin up all of these services: Postgres, Redis, and then, of course, our favorite, the LangGraph API. And this is all gotten, quote-unquote, for free. All we had to do is build that Docker Compose file; they already have an example in the documentation for LangGraph Platform. Then we spin it up, and that's it: we're off to the races and we can start using this service with some reliability. That's the whole thing, that's the whole shebang. So with that, I'm going to pass you guys back to Greg. But before I do, I've got to ask you, for the last time this year in fact: don't forget to subscribe and ring the bell icon on YouTube. We're live every Wednesday doing events like this. Well, most Wednesdays; we won't be live for the next two due to them falling directly on holidays. But there you go. Back to Greg. All right. Good stuff, Wiz. I want to remind everybody: go ahead and throw any questions you have in the chat; we'll have some time. But I wanted to also introduce some of the latest drops that we saw this week from LangChain slash LangGraph. The first of these is called LangGraph Command, and it was dropped on December 10th. You know, with this whole wave of crazy drops we've been seeing from the industry, from OpenAI, from Google, it seems like LangChain and LangGraph got in on the action here. So I want to have a little discussion about this. The couple of things they pointed out with each of these are really important, and I think it maybe sums up to, again: maturity, 2025, what's coming next. With LangGraph Command, they basically said that requiring edges to connect nodes sometimes made it harder or less intuitive to express very dynamic logic. So LangGraph Command is this idea of edgeless graphs, and edgeless graphs allow us to more easily build multi-agent architectures. This idea of no edges is quite interesting. So I want to actually pause here before we go to the next one and bring Wiz back up.
You know, why would we care so much about edgeless? This feels like some of the discussions we've been having around event-driven architectures versus graph-based architectures. Is this how you interpret it as well? Or is this something specific that you would expect them to roll out so that they could be more production-ready? Yeah. I mean, it's just all about how you're thinking about having multiple agents, right? I think the idea of an edgeless graph lets us kind of emulate the pattern that LlamaIndex has, not directly, but it does give us an easier time building out these more complex graphs without requiring 40 lines of boilerplate with conditional edges. Remember, the idea of conditional edges is that they're kind of maps that tell you where you can go next, right? Command helps us do that in a less verbose, less manual way, I would say. Yeah, so I guess the idea of having conditionals for everything does sound like it doesn't scale particularly well, so from that perspective there's some intuition there that makes a lot of sense. So they released this Command piece. And then four days later, on December 14th, they released LangGraph interrupt, which allows you to interrupt LangGraph. And we've talked about this before, where state machines are a little bit more flexible than fully autonomous agents, and the reason is that you can basically track state, and you don't necessarily want things being totally autonomous. So as we look at this idea of putting humans in the loop more easily, they give a couple of examples: should a human approve or reject the next action? Should a human approve or reject the next state? The next tool call? The next agentic reasoning flow? So, what do you think, Wiz, about these new releases? Are these things that are going to move the needle for people in 2025? We're trending more towards this maturity and enterprise-grade thing. They seem to be listening to their customers quite well. The naming's not so bad; I don't hate the naming. These don't necessarily deserve dedicated events from us, but what are your high-level thoughts on some of these new releases? Yeah, I mean, we didn't previously live in a world where we needed to build these specific kinds of multi-agent experiences, so we didn't have the constructs to create a well-operating agentic system. So things like Command, which are more about helping developers, and things like interrupt, which make sure we can gracefully and appropriately pause and resume graphs, let people interact with them in a way that feels natural, in a way that is desirable, but we didn't yet have the construct for it. Again, we're all in the exploration phase right now. Multi-agent systems exist technically in production, technically, right? But it's not solved, and so the tools that we need aren't solved. I'm very happy to see LangChain continuing to push forward and help create some of these tools so that people can build better applications in 2025 than they did in 2024. Yeah, yeah, definitely, definitely.
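For reference, here is a sketch of what those two features look like at the node level, based on the LangGraph docs from around this release; treat the import paths and signatures as assumptions rather than a definitive implementation, and note that interrupt requires the graph to be compiled with a checkpointer.

```python
from typing import TypedDict
from langgraph.graph import END
from langgraph.types import Command, interrupt

class State(TypedDict, total=False):
    draft: str
    approved: bool

def worker(state: State) -> Command:
    # "Edgeless" routing: the node itself declares where control goes next,
    # instead of a separate conditional-edge map.
    return Command(goto="reviewer", update={"draft": "proposed next action"})

def reviewer(state: State) -> Command:
    # Human in the loop: pause the run here and resume with the human's decision.
    decision = interrupt({"question": "Approve the next action?", "draft": state["draft"]})
    return Command(goto=END if decision else "worker", update={"approved": bool(decision)})
```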
And I think it seems like the shipping velocity has also increased as everybody has dialed in around agents. We're seeing a lot of people really move towards this and move their products towards this as well. You know, we're doing our best to define agents here as we head into the new year. When people hear agents, when you hear agents, and you're hearing about the new agentic releases, how would you encourage people to think about what matters, what doesn't matter, how to pick and choose, how to separate signal from noise with some of this new tooling we're seeing? It's kind of coming from everywhere, whether it's CrewAI or Swarm or LlamaIndex or LangChain; everybody's getting in the game now. What are the most important things from your perspective to pay attention to when it comes to agents in 2025? Yeah, I mean, they're not solved. This is still a big play space, right? I think: keep trying to innovate, keep trying things. We're still in the phase of, basically, the prompt engineering equivalent of agents, right? So just get out there trying things, use different frameworks. We're definitely going to recommend LangGraph and LangChain, but try out different things, poach patterns from other frameworks and research that you see. In terms of separating signal from noise, it's very difficult right now because we just don't know where something like a multi-agent system really prospers. We don't have super clear, consistent case studies, and so it's kind of all noise right now. But pay attention to Agentforce, pay attention to folks like AG2, who are going to hopefully help steer the ship a little bit. Other than that, it's ChatGPT all over again, but this time with agents, and 2025 is going to be fun. Hopefully we're going to lock in a few use cases that make sense and leverage the technology in a way that adds value to businesses. Yeah, yeah. And you know, my one last comment on that: everybody's releasing these studies of who's using agents, and there's a lot of feedback going on on LinkedIn and elsewhere where people are saying, well, that percentage seems high. I mean, the people that are answering the surveys about agents are the people that fundamentally know about agents already, right? They know about LangChain, they know about LangGraph, they know about CrewAI. So there's this whole world of people that are not out here yet, and I think it's going to be really, really interesting to see how they decide to use these tools in the year to come. So let's go ahead and open it up for Q&A after we draw some conclusions from today for everybody to keep in mind. When we build on-prem agents, we want to make sure we have the agent hardware support and the LLM hardware support. RAG is going to add other stuff, so if we're doing agentic RAG, we're going to have to keep that in mind. And when we just want to tinker, we can go far with something simple; but if we want to scale out our app usage, we're going to have to scale out our hardware. So if you do have questions, please let us know. We wanted to add a little bit of extra time today just because it is the end of the year and it is our last event of the year.
We really appreciate all you guys joining us throughout the year. We have one question; I'm starting to see them coming in the chat. "I do not worry about my data leaving my prem. Can I use a Groq LLM in that research application?" I think this is talking about Groq hardware, because it's Groq with a Q, not Grok. Yeah, if you don't care about data leaving your prem, you can use whatever API you've got access to. Yeah, easy. Will my data leave the network if I am using LangGraph Platform on-prem? Like, do I need to worry about it going to a LangGraph server hosted somewhere and somebody seeing it? I think that's the indication here. I believe the answer to this question technically depends on what you consider your data. The short answer is no. The long answer is that some data does escape through traces, things like this, depending on the integration that you have with LangSmith and what you're tracing. But this seems to be configurable, which means that you don't have to share any data outside of your premises. And certainly with the enterprise solutions, or even the cloud solutions that they offer, so everything past the free tier, you have full ownership; the data never leaves your AWS world, it never escapes. Yeah. Next question. How are you guys thinking about sizing slash complexity for agents versus assistants? That's an interesting "versus" there. For example, in setting up a large orchestration agent with many sub-agents as nodes, how are you thinking about sizing? Yeah, at the end of the day, I think it would be very difficult to want to reach for very complex agents. The great part of LangGraph, right, is that it's kind of like these little atomic graph units that we care about. And so I'm content to have two or three subgraphs per kind of meta level, but getting past that, I think you just wind up in this hyper-complex space where you have no idea what's happening. Even with great tools like Studio and LangSmith, it's very hard to trace through and understand and debug what's happening in that application. So when creating little subgraphs, or sub-agents as nodes, we still want them all to have very limited scope, especially because we just don't know yet what the best patterns are. They haven't been figured out; they haven't been trialed and tested. So I would say keeping complexity as low as humanly possible is the goal. And that's how I think about sizing: what's the absolute minimum that I can get away with so that the thing still functions? Okay, okay. So, two last questions to end with. The picture behind me, guys, this one, is The Attributes of the Sciences by Jean-Baptiste-Siméon Chardin. It is something that I'm waiting to update until we get "The Attributes of the AI Sciences" in an image. So if you guys generate one, maybe I'll print it off and put it behind me. Last one. I love this question because it's very similar to a question that we got yesterday as we're beginning to close out our final course of the year, Wiz. And that question yesterday was: is there anything you'd put in the course that you couldn't put in the course? And we said kind of no, because that's the way we design our curriculum. This question is: Dr.
Greg, if you started a second PhD today, what would your topic be? Or, I would say: Dr. Wiz, if you wanted to get a PhD today, what would your topic be? And I think this is such an interesting question because you and I were talking about this last night, Wiz. The stuff I studied during my PhD in optimization and computational design, and Wiz, your background in general physics, these backgrounds really did prepare us quite well for navigating the LLM edge today. Like, I spent a lot of time in gradient descent; I spent a lot of time learning the old-school perspectives on optimization algorithms. You can look at things like PPO and DPO today when you talk about alignment. And I guess for me today, what I would do knowing what I know now is, number one, I wouldn't necessarily decide on a topic. I would decide on a research group and a professor or an advisor that I really, really thought was the best, and I would drive towards working with the absolute best. So here's a little insight into advice I've given previously: reach out to the research advisors you dream of working with, set up a trip to literally go there and ask, can you give me a walkthrough of your lab? Do you have any open projects? The most successful professors always have open projects, and they're always hiring new PhD students. Show the initiative, show that you've studied the background on these LLMs, show that you're really into staying up on what's happening at the edge, show you watched the Ilya talk from last week, show that you're there and you're willing to go wherever it is that you need to go. And that'll be the best way to stay right there at the edge and continue to be useful. Wiz, what's your two cents on this? If you had to pick a thing to study today, what would it be? That's an impossible question to answer. It would probably just be more LLMs, man. You know, I think, unfortunately, you already gave the best answer, so I would just copy that, right? Find the people working on the problem that you think is important, and then do anything in your power to work with those people on those problems. You don't have to be doing a PhD to do that, to be clear. Don Branson from our community just submitted a PR to AG2 because he likes it, right? He's contributing. It's never been easier to be part of the research community right now, because the research community is exploring, right? They're trying to find what's next.
Ilya just gave his big talk where he says data is the fossil fuel of AI, and all this other wild stuff that's got people in a fervor, and we're trying to figure out what the renewable energy of AI is right now, just to extend what he was going for, however you may think about it. And it's never been easier to contribute, never been easier to help, as an academic or as a non-academic. This is the time in quote-unquote AI history that we need those ideas. Foundation models are plateauing; all we have is using them, and using them the best. No one knows the answer to that question yet. That's right. And you were at NeurIPS this past week or so, and you mentioned that research today is not just in academia; it's very much also in industry. This has been true at OpenAI, but it's also true elsewhere today. It's true at places like Google, it's true at places like Amazon, it's true at the large tech companies as much as it's true in academia now. And I think this is an important insight that wasn't the case when I was studying optimization, you know, 2010 to 2015. So don't sleep on industry. Is that a fair insight from your time? Don't sleep on industry. All right. Thanks, Wiz, for your insights. Thanks, Wiz, for the demo. And it's time to wrap it up, guys. Thanks for joining us for our very last YouTube Live session of the year. We're going to be back for 50 more next year, after we take a little time off between Christmas and New Year's 2025. So look out in the newsletter for what's coming next. And please don't forget to keep building, shipping, and sharing over the break. Now, if you're looking to accelerate everything that you're doing and really try to get out to that edge as quickly as possible, you might consider joining us for the AI Engineering Bootcamp cohort number five. That's going to kick off January 14th. It's going to be a banger. We've got some amazing peer supporters ready to help out, we've got really the most agentic and state-of-the-art curriculum we've ever had, and I'm really, really excited for everything that comes next. So hit me up if you have any questions about that. And again, have a great holiday season, everybody. Thank you for supporting us this year. You guys are our early adopters, and we really do appreciate your support. We look forward to following your journeys in 2025 and beyond, whether they lead to PhDs, industry, or otherwise. And although we'll take a little time off, you best believe I'm going to be cooking something up, and I'll be building and shipping. And if you jump into the Discord, you might just see what we're ready to share over the holiday break. Until next time, we hope you guys keep building, shipping, and sharing as well. Merry Christmas and Happy New Year's, everybody. See you in the new year. | On-Prem #agents with LangGraph Platform | 3,824 | AI Makerspace | 20241219 | Learn how to build, ship, and scale secure agent applications on-prem in this live event! We'll dive into prototyping with LangChain, deploying with LangServe, and hosting models locally with ollama—all on your own hardware. Perfect for AI engineering leaders in highly-regulated industries, this session covers choosing the right on-prem setup, scaling challenges, and live Q&A to get your questions answered. Don’t miss this deep dive into enterprise-ready AI solutions!
Event page: https://bit.ly/onpremlangraph?utm_source=youtube
Have a question for a speaker? Drop them here:
https://app.sli.do/event/d76N61c8Dbqdxf4RUMtWaZ
Speakers:
Dr. Greg, Co-Founder & CEO AI Makerspace
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO AI Makerspace
https://www.linkedin.com/in/csalexiuk/
Apply for The AI Engineering Bootcamp on Maven today!
https://bit.ly/AIEbootcamp
LLM Engineering - Foundations to SLMs
https://bit.ly/aimllme
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/EJH8kjSoM7w9QKuU9
#onpremise #langchain #api #server | 2024-12-23T09:25:38.599442 |
https://www.youtube.com/watch?v=YRYxsb_VLhI | So I'm excited. This should be fun. I thought I was going to be able to play a little music and set the mood right, but I couldn't figure that out in the allotted time, and I just want to get rocking now that you're here. I think it would be awesome to just kick it off and get us going hard. And for those that are wondering, we have two talks today. We did this Agent Hours two weeks ago and we had the breakout room sessions, which were a lot of fun. So we wanted to do it again, but we wanted to concentrate the breakout rooms into one place. So this time around, after the talks, we're going to have just one breakout room. It's all going to be here with us, so you can just hang out. We're going to have the talks; Samuel, you're up first. And then after that, we will hopefully have some cool discussions. So if there are questions, I'm sure I'll have plenty of questions to ask. The fun part about this is that I think there are some people looking for how to get into this room right now. So if you are watching us on the live, hit the agenda on your left and that should ping you right in here, and once you're in, it's easy. But yeah, Samuel, if you want to share your screen or anything... Yeah, I will do that. I will try and put you on this screen and put what I have in the way of slides on this screen. There we go, and I can get going. I presume you can see that? Yeah, it's nice and big too. Thank you, that's good. I hope that's a good size. If anyone is struggling to read anything or struggling to understand, I can see the messages, so send a message and I'll try and fix it. I mean, I've got frustrated with every possible deck format, so I'm using this plain-text format; I hope it's not too annoying. Remind me how long I have to talk, just so I've got a feeling. Fifteen minutes. Okay. I gave a version of this talk before and it took a bit longer, so I'm going to whiz through and assume most of you have some idea of who I am and what's going on. I started Pydantic ages ago, back in 2017. It became a company in 2023. We built Logfire, which is our observability platform, which you should go and try if you're building in Python. The first thing we did was release Pydantic v2; that was last year, the rewrite in Rust. Pydantic today is downloaded 300 million times a month. So it's important to remember: obviously, we're talking about Gen AI today, but part of the power of Pydantic is that it is not just for Gen AI. It is widely used in general development, used by everyone who's writing Python, which is basically everyone. But obviously it's got a kind of new lease of life from Gen AI, where it's used in general API stuff, but in particular in validating structured responses. So Pydantic is, yeah, I kind of said ubiquitous, like boring and generally liked by developers. The "liked" is perhaps a bit strong, but I'll try it. And again, going back to my point before that Pydantic is general purpose, I think it's worth just repeating: this is what Pydantic was built to do. Pydantic long predates LLMs.
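As a minimal illustration of that "what Pydantic was built to do" point, here is the classic pattern: type hints enforced at runtime, with coercion, and a JSON Schema generated from the very same definition (Pydantic v2 syntax).

```python
from datetime import date
from pydantic import BaseModel

class User(BaseModel):
    """Details of a user."""
    id: int
    name: str = "John Doe"
    signup_date: date

# Strings are coerced to the annotated types (int, ISO 8601 date):
user = User(id="123", signup_date="2024-12-01")
print(user.id, user.signup_date)   # 123 2024-12-01

# The same definition doubles as a JSON Schema, later reused for tool calls:
print(User.model_json_schema())
```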
And the basic idea is you give it, you define a model like this. We use type hints in Python to define your types, but unlike normal type hints, which, as per the name, are just hints and don't do anything at runtime, we basically use those type hints to enforce that type. And in particular, and this is obviously super relevant to Gen AI, we're kind of lax in the sense that we try to coerce types. So you'll see here that PyCharm is complaining that id is supposed to be an integer, but I've passed it a string. Pydantic by default will try to coerce values. Similarly, this is an ISO 8601 or RFC 3339 format date, but it's a string. Pydantic will take care of coercing values, in this case from a string into a date. And we'll do the same in JSON, so we'll take a JSON input. And again, the coercion thing is super valuable because there's no date type in JSON. If we were super strict, we'd be snookered; we literally cannot define a Python date in JSON directly, so we have to do the coercion thing. And then the last thing we built into Pydantic: actually, Sebastián Ramírez, who built FastAPI, originally contributed JSON Schema. At the time, it was all about APIs. We didn't even know JSON Schema was going to get used by LLMs to define tools, but we built that long ago, and it turned out to be super valuable. So this is the old-school use of Pydantic for general-purpose programming. What then happened, and what everyone went and realized, was: oh, when we're making calls to OpenAI, and in particular doing tool calls, Pydantic becomes super valuable. So we have our same model defined here, given a docstring. And with a little bit of work and a bit of ugly use of dunder methods, we can define tool calls entirely using our definition from our Pydantic model. The parameters come from the JSON schema, we take the name from the name of the base model, we take the docstring to become the description, and hey presto, we get tool calls. And then, of course, the great thing is you can then use that same model to validate the JSON you get back from, in this case, OpenAI, and give you a User, or generate errors as to why that data was invalid. And this is basically the thing everyone uses Pydantic for. You can now do this built into the OpenAI SDK. It's also used in all of the agent frameworks: LangChain, CrewAI, Phidata, Instructor, etc., etc. I can't even remember the names of all of them, but the trick that they're all really doing, and the thing that Pydantic is most useful for, is basically this thing. And so the kind of reason I'm talking today is that we decided that, to put it politely, we weren't that happy with any of the agent frameworks. We didn't particularly want to go and build with any of them, but we wanted to build with Gen AI, and so we released Pydantic AI, what, like two weeks ago now. It's been quite a manic couple of weeks, but I think two weeks ago on Monday is roughly when we released it, and it's had an amazing reception. Basically, it's a number of different things, but fundamentally it's kind of a wrapper for this thing. So again, we have our Pydantic-based model defined, but we're importing Agent from Pydantic AI, and we're setting the result type to be User, which is going to do that same setting up of the tool call. But this comes with a bunch of other nice stuff. So in particular, we get reflection.
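Here is a hedged sketch of that first Pydantic AI example, using the API names from the initial release (result_type and result.data; later versions may rename these). It assumes an OpenAI API key is configured in the environment.

```python
from datetime import date
from pydantic import BaseModel
from pydantic_ai import Agent

class User(BaseModel):
    """Details of a user."""
    id: int
    name: str
    signup_date: date

agent = Agent(
    "openai:gpt-4o",
    result_type=User,  # drives the tool-call schema and output validation
    system_prompt="Extract the user from the text.",
)

result = agent.run_sync("John Doe, id 123, signed up on 2024-12-01.")
print(result.data)  # a validated User instance; validation errors trigger a retry
```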
So if validation fails, Pydantic AI will take care of taking the validation errors that Pydantic gave us, sending them back to the model and saying, please try again. But there's more it can do. Dependency injection is something that, again... Pydantic AI is not about producing the best possible demo or giving you the easiest experience in the first 10 minutes. It's about building production applications. And so dependency injection and type safety are really critical. Type safety doesn't particularly matter when you're giving a presentation, but when you're trying to build a real application, it's super valuable. And so in this case, we define our dependencies here: we have an HTTP connection, we have some API keys, and we define the deps type when we define the agent. And then we go and register some tool calls. So in the first example, we were using just the result type, which internally is a tool call, to give you structured data out of the agent. But here we're defining tools that are effectively discretionary; the model can choose to call them or not. And in this case, we define a get-lat-long tool call, which, given a location, returns its coordinates, and we define a weather tool call, which, given a lat and long, will return the weather. And then we go and run our model: we define our deps here and we pass them in as a keyword argument when calling the agent, and we get our result. But the powerful thing here is the type safety. Using some clever tricks in Python's typing that you don't need to worry about, we can basically, with static analysis, guarantee that you're using the right deps. So if you run mypy or Pyright over this and your deps type here is incorrect, you'll get an error. And obviously, once you define your deps type here, when you come to access your deps, so here we have ctx.deps.client, which is accessing this attribute, if you accessed it wrongly or if you called it wrongly, you would get a type-checking error. And that's super powerful. So, the overall workflow here is that the model will be clever enough to say: okay, I'm going to take the input, which is "get the weather in London and Wiltshire", extract the locations from that, then go and call the get-lat-long function to turn those locations into latitude and longitude, and then take those values and use them to call the weather function to get the weather, and then return it. And so, at the risk of "never run a live demo in a presentation", especially a 15-minute one, I will try and run that as an example. So this is that same code. The only difference is there's a bit more code to get the weather to look pretty. And if we go and, oh, nice error from GitHub Copilot, if we go and run that... oh, it's not going to run like that. But if I exit this mode and come here and run that example, what we should see is it making the relevant calls and then coming back with a response and giving us the weather. So OpenAI in this case, and I haven't talked about model agnosticism, but we have good support for other models, will take care of effectively calling the right tools. You can see here that it's calling them together: it's calling get-lat-long twice and get-weather twice in parallel. But if we come back to the slides and go on: this is all very well, but the problem is, what's that model doing internally?
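Before turning to observability, here is a condensed sketch of the dependency-injection and tools pattern just described, modeled loosely on the Pydantic AI weather example; the coordinates and weather string are faked so it stays self-contained, and the function names are illustrative rather than the exact ones on the slide.

```python
import asyncio
from dataclasses import dataclass

import httpx
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    client: httpx.AsyncClient
    weather_api_key: str | None

weather_agent = Agent("openai:gpt-4o", deps_type=Deps, system_prompt="Be concise.")

@weather_agent.tool
async def get_lat_lng(ctx: RunContext[Deps], location: str) -> dict[str, float]:
    """Return the latitude and longitude of a location."""
    # Real code would call a geocoding API via ctx.deps.client here.
    return {"lat": 51.5, "lng": -0.1}

@weather_agent.tool
async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> str:
    """Return a short weather description for the given coordinates."""
    return "21C and sunny"

async def main() -> None:
    async with httpx.AsyncClient() as client:
        deps = Deps(client=client, weather_api_key=None)
        result = await weather_agent.run("What is the weather in London and Wiltshire?", deps=deps)
        print(result.data)

asyncio.run(main())
```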
And you started to see the beginnings of it just now, but like, obviously we think observability is really important. That's why we've built Logfire and why we've built, um, an optional integration into Pydansk AI so that you can use Logfire to understand what your agent is doing. So what tools it's calling and how long they're taking. Um, uh, so I have, I'm not going to open up, uh, Logfire on this occasion, but well, the first thing is you saw immediately here this is logfire giving you an output of what happened uh that's nested in the terminal but if you open the logfire dashboard you get effectively the same view but with much more information so not only do you see what took how long in not only the llm calls but also the http calls to the different apis but you can also see on every given line exactly what's happened the cost in terms of tokens and what's taken how long um so yeah we think observability is super valuable and we've seen that immediately in in logfire loads of people coming to use logfire off the back of finding it through Pydantic AI. But to go on to a few other things, and now I'll talk for a little bit about a couple of things that are coming up soon in Pydantic AI but aren't quite there yet. So agent handoff is a big subject. It's how Swarm got a lot of its attention you can already do this with pydantic ai by effectively registering uh other agents into dependencies like this and then calling those other um agents from within a tool call but we're just about to add support uh for for basically syntax for adding another agent directly to uh an agent so here we, instead of using the decorator to add tools, we add tools using the keyword argument tools. And then we have this agent tool, which you will see PyCharm is complaining doesn't exist because I'm literally working on the PR right now. But the idea is that you can register agents directly with an agent. Again, we use, we can do some clever stuff to make sort of types are correct. So all of these agents need to have the same depths type. So you can then pass depths between agents as a call, as a run is ongoing and have confidence in the typing. We obviously have an input type now, which is then used to validate the arguments passed to the tool. And I think this will be, this is one of the two forms of model comp or agent composition that's super exciting. So this is the like agent handoff. And then we have the kind of calling multiple different agents in sequence, which is the other, the other big thing for us to go and add. I don't quite know what the API is going to be for that yet, but that's another thing we're going to talk about in the future. One of the nice things about doing this in a structured way like this, rather than declaratively, imperatively, excuse me, by inside a tool, is we can go and build a state machine for what this looks like statically and tell you the different agents that are being called under what conditions and what the different flows are of a like multi-agent model. I mean, one of the problems we have honestly with Pylandic AI is the agent isn't quite the right word because our agents are quite small and self-defined. They're almost like agentlets um and so yeah we you end up composing them together to form actual uh as components rather than necessarily trying to bundle all of your logic into a single agent um i just i see some questions but i'm just going to go through a couple more things and then i can can try and answer. 
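Here is a sketch of the hand-off pattern that is already possible today, as described above: a specialist agent is carried in the parent agent's dependencies and invoked from inside a tool call. The agent names and prompts are made up for illustration.

```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

flight_agent = Agent("openai:gpt-4o", system_prompt="You search for flights.")

@dataclass
class Deps:
    flight_agent: Agent

travel_agent = Agent("openai:gpt-4o", deps_type=Deps,
                     system_prompt="Plan trips; delegate flight questions to your tool.")

@travel_agent.tool
async def search_flights(ctx: RunContext[Deps], query: str) -> str:
    """Delegate a flight-search question to the specialist agent."""
    result = await ctx.deps.flight_agent.run(query)
    return result.data

result = travel_agent.run_sync("Find me a flight from SFO to BOS next Friday.",
                               deps=Deps(flight_agent=flight_agent))
print(result.data)
```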
Another thing that's coming up, many of you will be aware of Model Context Protocol, which Anthropic announced probably, I said last week, but probably two weeks ago now. And again, we want to add this concept of tool sets to Pydantic AI, where instead of registering a single tool, you can register tool sets. And these can either use model context protocol or you can define your own ones, or we will have tool sets for things like, I meant open AI, sorry, I do mean open API there. So using open API and JSON schema, you could add register a tool set using purely an API's open API endpoint, or using model context protocol or defining your own ones. But the idea here is that if you're building an application that's using Gen AI, enormous amounts of your work is basically doing the boilerplate to integrate with, let's say, Slack's API. This should allow us to have a like rich library of different existing tool sets that you can basically register if and when when you want them um so yeah thank you very much i think now i don't know how much of the 15 minutes have used up i tried to go really quickly i may have gone too quickly but i can answer some some questions now or later on whatever you whatever you think's best yeah we've got uh a few minutes and anybody that is on the call if you want to just jump on and ask questions live uh in your voice go for it i threw one in the chat um i'll just answer a couple of things i've seen here i'm saying hello from cohere i think it's worth saying the examples I talked about here were mostly using OpenAI, but we already have support for Anthropic, Gemini, Grok, Olama, and there's a PR for Co here, and we're happy to add that. We do actually, one of the things that's been amazing in the time since we released this is the number of people who've come and contributed models to the point where we started to have to say no to some lesser known ones because we don't want to be just maintaining loads of different model integrations, but Cohort is definitely one we will accept. What were the frameworks lacking? I don't want to be too rude about existing people who built open source. None of them did what I wanted. In particular, they were not particularly type safe. If you go and invent some sexy, esoteric syntax to define chains of models, you end up losing all the type safety that is available in modern Python. And that's very very frustrating and so it was the production uh readiness in particular that was that we had trouble with um yeah i think achilles you is that how i pronounce it somebody's got their hands up yeah yeah Can you hear me? Yeah. Yeah. We hear you. Yeah. Great presentation. Awesome release, guys. I have a few questions. The first is right now the agent, you define some tools, and the agent decides to call some tools until it finishes the task. Can I override this logic? I mean, for example, an agent might want to use a tool, but depending on the arguments and the tool that it wants to use, I want to follow a different flow, to do something differently than the standard flow, or the native would do internal. Is this easy to customize? I mean, the whole principle of tools as they are defined by the underlying model providers is they are these discretionary things that a model can decide to go and call. If you wanted to have a tool that behave differently depending on your own context, you can have that logic inside the tool. So you take the different arguments you want and you decide what to do inside your tool call. 
There's also, the other thing is, if you want to get the result out rather than have it be discretionary, you can use this syntax of structured results, which effectively requires the agent to call this tool to end a particular run. And we have a PR open to basically add a way to exit a run early in the case that you call a function; so within one of these function tools, you can basically end the particular run. Again, we don't want to add too much esoteric custom Pydantic AI syntax. We want the library to be relatively thin and allow you to fall back to writing Python code for everything that doesn't need to be within the library. I think that is a good principle of building any software library, any Python library: don't go and do things that don't need to be in your library; be relatively cautious about adding functionality. And I've learned that from having maintained a library that has 300 million downloads and has been around for five years: if you accidentally go and add something because it seemed cool once, now you're stuck with maintaining it. So I think it's worth being cautious about those things. I mean, I got excited one day and defined a color type in Pydantic because I wanted to do something with colors, and now we still have a color type that hangs around and parses hex colors, which is not something that anyone would expect to be in Pydantic itself. So, yeah, I think we have the right primitives here, but if there are particular things people want, we can definitely think about adding them. Excellent. Thank you very much. Is the code in some repo somewhere, aka the best slides ever? I think they are on my profile on my GitHub. They're just a markdown file. Confusingly, they're called "boston" because I talked at a conference in Boston and used them there, but I'll tweet the link after this as well, so if anyone is looking for them, they can find them. Sweet. And then James was asking, is this a replacement for LlamaIndex or LangChain? That's your choice. If you're happy with LlamaIndex or LangChain, then obviously you're totally free to carry on using them. I'm personally going to use it instead of them. In the particular case of LlamaIndex and RAG, we don't yet have a model-agnostic interface for generating embeddings, but I think we'll have it. But again, coming back to my point about production and not trying to do too much: in the end, vector search is database querying, and we don't want to go and build an ORM, especially an ORM inside Pydantic AI. So in the end, we're going to give you an interface for generating embeddings, but probably not an interface for querying, because to do that in an actually production-ready way, you need a full ORM or to write SQL, which would be my personal preference, or whatever fits the database you use. Awesome. | Why Pydantic AI is the Future of AI Agents | 1,368 | MLOps.community | 20241226 | Join the MLOps community mlops.community/join. Thanks to arcade-ai.com for the support
Samuel Colvin the creator of Pydantic goes nto more detail on why they built Pydantic AI and what problem they're aiming to solve. He'll also cover some of the future enhancements they plan for PydanticAI.
Link to presentation: https://github.com/samuelcolvin/boston-ae/blob/main/slides.md
//Bio
Python and Rust engineer. Creator of Pydantic and Pydantic Logfire. Professional pedant.
This is a bi-weekly "Agent Hour" event to continue the conversation about AI agents. Join the next live event at home.mlops.community | 2024-12-29T13:02:09.315841 |
https://www.youtube.com/watch?v=vRTcE19M-KE | . It's wonderful to be here with you all. Large language models get the hype. This is certainly true. I do believe firmly that the future is in compound systems. I want to emphasize, though, that I also feel like the present is compound systems. I think we are all kind of intuitively aware if we're working in the field that we only ever interact with systems, but models get all the headlines, and so that kind of core insight can get clouded out. And part of what I want to do today is just raise your consciousness on the fact that we are all the time dealing with systems. But if that is getting lost in the shuffle, it is understandable because we are awash in headlines that are centered on large language models and other foundation models. This is a trend that really kicked off with the GPT-3 paper in 2020, of course. And that makes perfect sense because the headline result from that paper is that they did a thing in the world. They trained a model at 175 billion parameters, which is more than an order of magnitude larger than any model that had come before it. And of course, the central insight is that simple act of scaling led them to be able to design systems of a kind that we had never seen before. And so that really kicked off the trend of announcing lots of exciting new models. Here's another example of this. When Google announced its Palm model, of course, a headline there was that it was 540 billion parameters, so much larger than we had seen before. I think Palm is a system in the sense that I mean for the talk today, but they made the announcement in terms of a model. And they even did that for their Gemini system. I think, again, Gemini is a paradigm case of innovation at the level of model artifacts, but also at the systems that we build around them so that they can do exciting things. Gemini is a system, but of course, they described it as a model in keeping with this trend. And even OpenAI, which I think of as an outfit that really has reoriented around full software systems, where the paradigm example of that would be ChatGPT, which is a kind of experience that is powered by lots of interesting models. Even they, though, tend to announce things in terms of models. So GPT-4-0, the new flagship model. I think these are really systems, but they centered it on the model. And of course, that's just the big tech players. If you're like me, it feels like every time you open up Twitter, your feed is just full of breathless announcements about some exciting new model series that is supposed to change everything. So if you're living in this kind of world with me, it makes sense that you would end up thinking in terms of models. But this is the central lesson here. We only ever deal with systems. And let me try to make this vivid for you. Here I've got on the slide a large language model. You can imagine that you downloaded it from somewhere as a pre-trained artifact, or maybe you spent millions of your own money to train this thing yourself. In some sense, you feel like this is going to be a really exciting and highly performant artifact, but currently it is just sitting on disk, completely inert. All it can do is occupy disk space at this point. To get this model to do anything at all, you have to do at least two things. You have to prompt it to put it in some kind of internal state. And then if you want it to be a generative system, you have to define some kind of sampling method. 
You have to get this model to talk, because intrinsically, all it really does is represent things in an abstract space. Having made a choice about a prompt and a sampling method, you now have a system. And part of the main thesis for today is that both of these things are non-trivial choices that are going to define the kind of system that you end up with. The prompt, the model, and the sampling method form a kind of minimal system, but already an important one. And then, of course, in the more modern mode, we are taking minimal systems like that and giving them access to calculators and programming environments and databases and maybe even web APIs and the web. I think at that point, we can all see that this is a software system. The language model might have a privileged place here as a kind of hub for all the things that are going to happen, but it is just one component. And the capabilities of this system are going to be defined by how all of these things work together in concert, and not solely by the language model. And that is kind of the essence of my thesis for today. In my group and in related groups at Berkeley and various other places, we have been thinking in these terms for quite a while now. And at the start of this year, we did a blog post called The Shift from Models to Compound AI Systems. And I'm kind of today just giving you an updated perspective on the thesis from that blog post. I want to emphasize that that blog post is from February 18, 2024. And the reason I want to emphasize that is that in AI terms that was a long time ago, and I'm hearing more and more of this talk. In particular, Sam Altman, at a very recent OpenAI developer event, was musing aloud about the future of AI, and he said he expects to see a shift from talking about models to talking about systems, which to my ear directly echoes the title of our blog post. I don't know whether he was influenced by us or whether this is a case of convergent thinking or what, but we did say this first. I guess that's the main point I want to make, is, like, sort of: goodbye, Sam, we've been thinking in these terms for a very long time. Why is this important? I'm going to try to support these main claims here, but let me just give them to you at a high level. The first is that this is important just from the perspective of investing in all of the right things as you're developing an AI solution. Here's an analogy. When you design a system in this mode, you are doing something analogous to designing a Formula One race car. And I think we all can recognize that an F1 race car is much more than just its engine. It's also aerodynamics and friction and the driver and the controls for the driver and everything else. All of those things have to come together to get a really good race car. And too often in AI, we are behaving like F1 race car designers who focus obsessively on the engine. I don't know much about F1 racing, but I'm confident that if all you ever did was think about the engine for this car, you would not end up with a good race car. And just by the way here, you can probably tell that I generated these images using ChatGPT. And it was sort of funny: I had trouble prompting it to just show me a picture of an F1 engine. It kept putting wheels on the engine, so it would produce pictures like this. And I decided that this is almost a kind of comical embodiment of the thing that I'm worried about today, which is that we're trying to design F1 race cars and we end up doing that by putting wheels on the engine.
Do not do this. It's not going to lead to a good overall system. You've got to think in these terms. Here's another important point: building the best system for your goals and constraints is almost certainly going to emphasize system components. I maintain, for example, that a small model embedded in a smart system is always going to be better than a big model embedded in a simplistic system. I think this is true even if you're just shooting for raw accuracy or performance in some sense, but it is absolutely true if you also care about things like cost and latency and safety and privacy. Once those considerations come in, you're going to be thinking about a system that can offer you all of those guarantees, and in that context a small model might be the only choice you could make, especially if cost is a consideration. And then finally, we could expand the purview a little bit and think about safety and regulation. I'm going to talk about this at the end of the talk today. A lot of discussion in these areas has been focused on model artifacts, and I think that is simply a mistake. I think we need to think about regulating entire systems, not just models. If we focus on the models, we're inevitably going to let some really dangerous stuff through and also end up over-regulating things that could actually be very productive. But if we think in terms of systems, I think we have a better chance of getting these things right. So let's dive in a bit now to the system components. I'm going to stick with the simplest ones for today, because I actually think those are the most consequential, especially in this era of in-context learning. Let's start with the one that seems on the surface to be the simplest of all: the method that you're going to use for sampling when you want your model to generate. Here is that minimal system again. I imagine that a prompt has come in, and now we're going to think about getting this model to talk. And of course, there are lots of methods. We could do greedy decoding, which is just where we decide to generate the most probable next token conditional on what has come in so far. We think of that as a kind of default for generation, but there is no sense in which it is the privileged method here; it is just one choice among many. We could also think about top-p sampling, where we sample from the most probable tokens according to the model's distribution. We could do something like beam search, which would be a more exploratory method. We could think about insisting on token diversity for these generations; that's an even higher-level ideal we could impose. And we could even go so far as insisting that the things we generate conform to being valid JSON, a very high-level consideration to bring into play here. And that is just the tip of the iceberg. In fact, if you start to look around in this literature, you find it's a space full of innovative ideas. I won't have time to go through these in detail, but a lot of these methods are doing things like making use of the gradients to get much more information about the forward and backward flow in the model, and a lot of them are focused on the question of how we can ensure that the generated output conforms to a high-level grammar, like the grammar of a logic or a computer programming environment.
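As a toy illustration of that last idea, constrained generation, here is a sketch of a logits processor that restricts a model to emitting digits, again using the Hugging Face transformers API. Real grammar- or JSON-constrained decoders are far more sophisticated; the model name and the digits-only constraint are just illustrative assumptions of mine.

```python
# Constrained decoding sketch: mask out every token that is not a digit, so the
# system can only ever generate digit strings. Grammar- or JSON-constrained
# decoding generalizes this idea with a real parser in place of the digit check.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class DigitsOnlyProcessor(LogitsProcessor):
    def __init__(self, tokenizer):
        vocab = tokenizer.get_vocab()
        # Allow only tokens whose decoded text is made of digits, plus EOS.
        self.allowed = torch.tensor(
            [tid for tok, tid in vocab.items()
             if tokenizer.convert_tokens_to_string([tok]).strip().isdigit()]
            + [tokenizer.eos_token_id]
        )

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = 0.0
        return scores + mask  # everything outside the allowed set gets -inf

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The answer to 2 + 2 is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=3,
                     logits_processor=LogitsProcessorList(
                         [DigitsOnlyProcessor(tokenizer)]))
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point is not this particular constraint but that the decoding rule is a real system component: the same frozen model behaves very differently under different processors.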
Here's a really nice paper that tries to do that efficiently and also offers an overview of lots of prior methods that have tried to ensure model generations conform to the specification of computer code and things like that. And here's a really recent paper that actually adds parameters to the model to try to adaptively find a good temperature, so that the model is creative or constrained depending on the task you want it to solve at any given moment, a really innovative way to think about sampling. And if you'll permit me one step further, we could expand what we mean by sampling for generation and really get into things that look like creative exploration. I've put this under the heading of majority completion strategies. This is a simple instance of a pattern that we're seeing a lot now in technology and in research. Imagine you've got a prompt that contains some really hard reasoning task. You could ask the model to simply generate the answer in one step, but that might be too difficult. What you might want to do instead is have it generate some reasoning path and then produce an answer. But you could do that by sampling multiple reasoning paths, and different reasoning paths might lead to different answers. So if I do this a bunch of times, I get a distribution over the answers, and I might then say that the actual generated output in this context is going to be the answer that was the most common outcome given all the diverse reasoning paths. That's a way of letting the model explore and do some reasoning in generation before you insist on it producing a final answer. Now, this isn't strictly speaking just a sampling strategy, but we might trick the user a little bit and hide from them all of those intermediate steps, so that it looks like we have simply sampled an answer for their input. And that's just one simple instance of the many things you could do, as you let reasoning paths lead to other answers, which lead to other reasoning paths, and so forth, before finally producing the answer at the end. So that's still only the tip of the iceberg, even though it's a pretty rich array of ideas. This is the essence of it, though: there is no one true sampling method for your model. This is a highly consequential step. In some sense, you are making the model speak, which it does not really do intrinsically, and the choice you make here is going to be highly consequential for the overall system you design. I'm emphasizing this because we too often don't even consider the sampling method, even though you can see that it really matters for the behavior of the overall system, and it's going to interact in complex ways with the language model you've chosen as your basis. So that's sampling. Let's think now about prompting. Prompting is really the heart of modern AI system development. Probably a lot of you out there have done a lot of prompt engineering. You've worked very hard on prompts and seen that you can get complex and interesting behaviors from the resulting system. So that is certainly the heart of all of this. And if you've done it long enough, you might have discovered that it can also be quite heartbreaking; I want to discuss that with you as well. Let's reflect for a minute, though, on the origins of all this. We've started to take it for granted, but it's a new and very unusual idea. I think the origins really trace to the GPT-2 paper from 2019; this is Radford et al.
There are some precedents before this, but this is the paper where the idea really crystallized. I believe in that paper they say, "We demonstrate language models can perform downstream tasks in a zero-shot setting, without any parameter or architecture modification." They go on: to induce summarization behavior, "we add the text TL;DR: after the article and generate 100 tokens," in-context learning. I want to confide in you all that when I first read this, way back in 2019, I did not properly understand what it was saying. I was so cognitively biased against this kind of thing working that I thought surely, somewhere hidden in this paper, was a description of some fine-tuning method they used to get that token TL;DR to produce summarizations. But now I see, of course, that they really meant what they wrote, which was perfectly clear; I was just not primed to understand it: simply by relying on this language model to do in-context learning, they could get it to summarize. Really amazing. In that paper they also tried translation, question answering, text completion, and reading comprehension. Performance is variable across them, but mostly they can get signal, and it really is a striking exploration of this idea. We now take it for granted, but at the time, at least for me, it was very surprising. Then fast forward just one year and we get GPT-3. We've gone from about 1.5 billion parameters for GPT-2 up to 175 billion, and the GPT-3 paper is an incredible exploration of what that scaling leads to. It's full of cases where they do successful, complex in-context learning. Here's an example. For question answering, we just prompt the model with a context passage, and then we give it a few demonstrations, which are also just part of the prompt we're creating. That prompt is meant to show it that we want the behavior to be that the answer is a substring of the context passage. We maybe provide a few of those, and then we give our target question. The major discovery is that, in context, the model can learn to imitate that behavior and answer questions as substrings of the passage. Really striking behavior. And the paper already identifies a kind of general pattern for this, where we have some context or instructions, then a list of demonstrations that further exemplify the behavior we want to see, and then finally a target. They apply this template to lots of different tasks, from QA to reading comprehension to machine translation. Here's a quick example: as my instruction or context, I could say "please unscramble the letters into a word, and write that word," give a few demonstrations, and then, at least at the time of GPT-3, maybe I would get something that was at least close to the behavior I wanted to see, and so forth. So, already a template for building systems in this modern mode. It's very exciting, but as I said, if you have done some work with prompting models, you have already seen that there can be a real dark side to all of this. This is a paper that kind of crystallizes it for me. It's from a group at Berkeley and UW: "Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design, or: How I Learned to Start Worrying about Prompt Formatting." It documents a really dramatic case of the model's sensitivity to the prompt choices you make. This is a sample table from the paper; they're looking at Llama-2-7B, and the paper explores a bunch of other models.
But just to take one example, task 280 is an instruction-following task. They have two prompt formats for it, and those formats differ only in whether or not they have colons after the words "passage" and "answer". That minor change leads to an 80-point difference in the performance of one and the same language model, and that pattern is repeated across lots of tasks in the paper. This kind of shows you that, with the wrong prompting strategy, you could make any model look arbitrarily bad. But the essence of this, for today, is that it just doesn't make sense to ask what it means to evaluate "this model" in the context of these tasks. The only way to make sense of the question is to ask how the model paired with a prompting strategy is going to do, and to think in those terms, and that is already systems thinking. So get rid of that question and think right away in terms of systems. You should be asking yourself: what is the optimal prompt-model combination, given whatever goals you have? Let me tell you a few more stories about this, going up to a higher and higher level, starting with chain of thought. There's a lovely paper called EchoPrompt. It's an empirical exploration of different strategies for doing that classic "let's think step by step" chain-of-thought reasoning, and here's a sample table from the paper. You can see they've got different variants of that phrasing: "let's repeat the question", "let's reiterate the question", and so forth. And this is a glimpse of the results, which show wide variation in the performance of the model based on how we frame the chain-of-thought reasoning step. This is for code-davinci-002, which is an older model, but I think this reproduces for newer models. It shows that it just doesn't make sense to ask what it means to evaluate this as a model; even just for chain of thought, you already find that you have to be thinking in terms of model-prompt combinations, or in this case model-chain-of-thought combinations. Again, that is systems-level thinking, not model-level thinking, and I think it will bring a lot of clarity to your own development cycles. So get rid of that question. And let's go up one step further. This is a nice tweet that I saw. The person observes: "Fun fact, I realized I was still using old GPT-4 for tool calling as part of an agent, updated to 4o and immediately broke stuff." I think if you've been in this business long enough, you have seen this happen yourselves. And this nice response tweet says they had the same exact experience, where going from 3.5 to 4o mini triggered a fresh prompt engineering exercise, as if it were a new project. I think this is showing that the model and the prompt are inextricably linked. We write these prompts in English, but in fact they are more like an effort to communicate with this sort of alien creature, the language model, and we can deceive ourselves by thinking that our understanding is going to translate directly into the performance of the system. We would like to get away from this, but I think the first step is thinking about these two things as inextricably linked.
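To make the point about evaluating prompt-model pairs rather than models concrete, here is a sketch of a tiny harness that assembles a GPT-3-style few-shot prompt (instructions, demonstrations, target) under two surface formats and scores each (format, model) pair. The `call_model` function is a stand-in for whatever LLM API you use, and the formats and data are invented for illustration.

```python
# Sketch: the unit of evaluation is a (prompt format, model) pair, not a model.
# Swap in a real LLM call for `call_model`; everything else is plain Python.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your API or local model."""
    return "Oslo"  # dummy output so the sketch runs end to end

DEMOS = [("The capital of France is Paris. What is the capital of France?", "Paris")]
EVAL_SET = [("Oslo is the capital of Norway. What is the capital of Norway?", "Oslo")]

# Two formats that differ only in punctuation, mirroring the colon example above.
FORMATS = {
    "with_colons":    "Passage: {passage}\nAnswer: {answer}",
    "without_colons": "Passage {passage}\nAnswer {answer}",
}

def build_prompt(template: str, target_passage: str) -> str:
    instructions = "Answer each question with a substring of the passage.\n\n"
    demos = "\n\n".join(template.format(passage=p, answer=a) for p, a in DEMOS)
    target = "\n\n" + template.format(passage=target_passage, answer="").rstrip()
    return instructions + demos + target

def accuracy(template: str) -> float:
    correct = 0
    for passage, gold in EVAL_SET:
        prediction = call_model(build_prompt(template, passage)).strip()
        correct += int(prediction == gold)
    return correct / len(EVAL_SET)

if __name__ == "__main__":
    for name, template in FORMATS.items():
        print(name, accuracy(template))  # each score describes a system, not a model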
And by the way, a fun fact here, which Petra pointed out to me; I didn't know this at the time of quoting the tweet, but this tweeter is an alum of my Natural Language Understanding course, so maybe it's not surprising that they are also, as you can see here, a fan of DSPy, the programming library. One more final story about this. You might have seen that people found the Apple Intelligence prompts. These are in system files that ship with the OS, and the prompts are fascinating to read, because you can glimpse the development cycles they had to go through to get the Apple Intelligence models to behave in the intended way. They're full of things that read like special pleading with the model to do what they want, and so forth. You can tell that these prompts are very tightly knit to the models that they shipped. And again, that makes me want to take the perspective that, even though they're written in English, they are much more like compiled binaries that are meant to be paired with a particular model. That is, again, systems thinking, and I think it shows that these things interact in tightly knit ways to deliver the performance that we want; we can't really separate the prompt as a system component from the model as a system component. This is a funny thing to reflect on, and it does lead us to the DSPy library that we've been trying to promote and get lots of people to work on. It's a funny moment, because some of the core lessons of artificial intelligence seem to have been forgotten. So let's step back and reflect on this. Throughout AI, we've been successful in part because we have adopted these lessons: modular system design, data-driven optimization, and generic architectures. These are the essence of what let us move so fast, especially in the deep learning era leading up to all of these exciting large language models. These time-tested lessons got nicely embodied in libraries like Torch and Theano and Chainer, and especially PyTorch; those libraries embody these concepts and helped us move more quickly as a result. So I wish these lessons would carry forward, but the current moment is actually kind of funny. In the current moment, we do a lot of prompt templates and manual adjustments to prompts, and we get complete model dependence, as you just saw in those preceding stories, where the prompting strategy, iterated on over time by hand, ends up very tightly knit to whatever model we happen to be working with. And this is kind of tragic for me, because these time-tested lessons have taken us so far, and it's surprising that we seem to have forgotten them as we entered this new mode of AI system development. And that brings us to DSPy. DSPy is a programming library for moving away from prompt engineering and toward language model programming. It's a way of really honoring the insight that when you have a prompt, a language model, and a sampling strategy, you have designed a software system. We would like, in essence, to bring the core insights of artificial intelligence, and also of software engineering, to this new mode of AI system development. If you haven't seen DSPy before, let me give you a quick sense of how it works. At the top here, I've got some code that sets up the high-level tools that are going to define the system. For this example, I've got Turbo as my model and an index of Wikipedia.
And I could have other tools at my disposal there if I wanted; those get set up at a high level. Then this here on the left, dspy.Predict("question -> answer"), is a minimal system in DSPy for doing basic question answering. That's all it takes: one line. What happens under the hood when I actually use that system is that it gets compiled down into an actual prompt to a language model, of course, because that's how we communicate with these language models. But you can see that there's a real difference between the code that I wrote and the thing it compiles down to. The thing I actually prompt the language model with is very tightly knit to the particulars of that language model, and it could look different depending on the tools I had set up at the top. So we've abstracted away, we've removed, some of that model dependence I was worried about before. That's a very simple program; I could also write a much more complex one. This is a program for doing multi-hop question answering, where I might want to gather evidence from a bunch of different passages in order to answer a question. As you can see, it is mostly Python code, it adheres very closely to the design principles of PyTorch, and it's meant to let you freely express in code the kind of system you want to develop. And then the really nice aspect of all of this, in addition to those design principles, is that as a final step I can optimize this program. In doing that, I try to find a prompting strategy that is really successful given the labeled examples that I have, and largely independent of whatever tools I had chosen up top. What I'm showing at the bottom is an optimizer that allows you to simultaneously optimize the instructions as well as the few-shot demonstrations that you're using; those are both crucial aspects of successful prompting. This moves all of the burden of finding good ways of doing that onto the optimizer, a return to that time-tested lesson of data-driven optimization. And to show you how much this can matter, and how important it is to think about expressing systems in these terms, I thought I would highlight a few results from the original DSPy paper that emphasize different aspects of why systems thinking matters. As a kind of framework for evaluation, I'll have the program I'm going to write and the optimizer, and those get paired with the language model to specify a complete system. For the two models I'll use Turbo and Llama-2-13B, to show very different sizes and types of language model. At the top here, I have a real baseline system. It's the one I showed you before, where I just go from questions to answers, and for my optimizer I simply select a few few-shot demonstrations at random to help the model understand what kind of behavior I want to see. For this baseline, we get about 34 for Turbo and 27.5 for Llama 2, where our metric is exact match on the answer we want to see. So that's a baseline system. If I go up and just do retrieval-augmented generation, a very simple DSPy program, I already get a boost, just from gathering relevant information.
And if I do bootstrap few-shot optimization, where I use the DSPy program itself to generate full examples, which can include retrieved passages, and include those in the prompt as examples of the behavior I want to see, I get really large gains over my baseline, all the way up to 42 and 38 here. We could also think about using ReAct agents. This is a very interesting proposal for having the model do some reflection, think about how to solve the task, and break it down into pieces. It's less successful for this problem, but it still shows the power of systems thinking, because I have combined my program with different optimization strategies and, again, seen real gains over the baseline. There is the nice twist that the human reasoning prompts, the ones that were carefully written by hand, actually underperform the prompts we get from simple bootstrapping, both up here and in this ReAct context, which again shows the power of data-driven optimization over trying to think up your own prompting strategy very intelligently yourself. Finally, at the bottom, if we move to a program designed to do multi-hop reasoning, gathering evidence from multiple passages and using that to provide answers, we get really good systems. We have gone all the way from 34 as our baseline up to almost 55 for Turbo, and seen even larger gains, from 27.5 all the way up to 50, for the smaller model. That really shows the power not only of intelligent system design, where we think about the prompting strategy and the optimizer, but also the power we can get out of small models. We have almost closed the gap, and we saw the larger gains for the smaller of the two models. That's a bit of a digression, but I think it is so important in the current moment that we think about designing systems that can get the most juice possible out of small models. There's a really insightful post from an analyst at Theory Ventures titled Small But Mighty AI, and it includes the observation that 77% of enterprise usage of models is at the 13-billion-parameter size or smaller. So not the largest models that get all the headlines, but actually much smaller ones. For a glimpse as to why that is happening, you can look at the latency numbers included in that blog post. If you've worked at all on industrial systems, you'll know that it's very nice to be in the space of around 18 milliseconds of latency. But as we move up to things that are more like 50 milliseconds, and certainly all the way up to 750 milliseconds, we have real headaches, and at some level those headaches translate into the system being very expensive. And of course, that might just be prohibitive: you could have a wonderful solution, but if it costs too much for people to use relative to what you're gaining in the rest of the organization, it's a non-starter. Again, that is pressure to pick the smallest models, but to get any juice out of those small models, we really need to think about the systems that we are designing around them. So your prompt will be a deciding factor in your system's performance; that's the thing I want to emphasize the most here.
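To make the DSPy-style workflow described above concrete, here is a rough sketch of a question-answering program and a bootstrap few-shot optimization pass. Take it as illustrative only: the exact configuration calls (how the language model and retriever are registered, the optimizer names) have changed across DSPy releases, and the model name, metric, and training example are placeholders of mine rather than anything shown in the talk.

```python
# Sketch of the DSPy pattern: declare what the system should do (signatures and
# modules), then let an optimizer search for good instructions/demonstrations.
import dspy

# 1. Register a language model (API string is a placeholder; consult the DSPy
#    docs for the configuration call in your installed version).
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# 2. A minimal system: one line, compiled into an actual prompt under the hood.
basic_qa = dspy.Predict("question -> answer")

# 3. A slightly richer program in the PyTorch-like style: retrieve, then answer.
class RAGQA(dspy.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)  # assumes a retriever is configured
        self.answer = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        context = self.retrieve(question).passages
        return self.answer(context=context, question=question)

# 4. Data-driven optimization: bootstrap few-shot demonstrations against a metric.
trainset = [dspy.Example(question="Who wrote Dracula?",
                         answer="Bram Stoker").with_inputs("question")]

def exact_match(example, prediction, trace=None):
    return example.answer.lower() in prediction.answer.lower()

optimizer = dspy.BootstrapFewShot(metric=exact_match)
compiled_qa = optimizer.compile(RAGQA(), trainset=trainset)
```

The shape of the code is the point: the program, the optimizer, and the language model are separate, swappable system components, which is exactly the systems framing the talk is arguing for.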
Finally, I want to talk a little bit about tool access, because that, as I said before, is where we really transparently end up thinking in terms of entire systems, not just models. This is the step where we actually bring in calculators and programming environments and databases and the web and web APIs and so forth. I think it's really clear at this point that these are systems. So instead of diving into the technical details, I thought it would be nice to think about the overall consequences of designing systems in these terms. As some food for thought, let me pose a few questions. I'm going to offer you choices between two types of systems, and you can reflect for yourselves on which you would prefer for whatever you're trying to do out in the world. First question: which is more reliable, a giant large language model that embeds a snapshot of the entire web as of today and is then frozen, or a tiny language model working with an up-to-date web search engine? Second question: which would you prefer in some general sense, a giant large language model doing contextless autocomplete on your phone via a centralized service, sending those completion messages back and forth, or a small model doing the same autocomplete task locally on your phone, using your own chat history? How about this one: which is more dangerous, GPT-4 with no access to databases or the web, or a 10-billion-parameter language model that has been instruct-tuned to log into websites and has tool usage that gives it access to the web? And finally, what do you expect to see in 2026: massive foundation models that do math, retrieval, reasoning, and so forth entirely in terms of their parameters and their standard computations, or systems consisting of multiple models and tools working together in a coordinated fashion to do things like math and retrieval and reasoning? Maybe in the Q&A we can talk about these questions together. Let me start to wrap up by offering a few thoughts on the high-level consequences of all of this for technology and society. A nice place to start is the recent legislation SB 1047, which was vetoed by California's Governor Gavin Newsom. SB 1047 sought to do many things, but one of the main ones was to offer new regulation based on the size of models. In particular, SB 1047 was going to specially regulate models that cost more than $100 million to train and had 10^26 FLOPs performed during training, truly a massive scale, whereas models of smaller sizes were by and large not going to be regulated. Newsom vetoed this, and the rationale he offered is really interesting. He notes that smaller, specialized models may emerge as equally or even more dangerous than the models targeted by the legislation. I'm not sure what prompted him to say this, but I think it is really wise. It is, in essence, the observation that we could take small models and embed them in complex systems that would, as systems, do things that were really surprising. They could be productive, but they could also be quite dangerous.
But the constellation of those things working together in a system could be more dangerous than a very expensive model that is just sitting there on disk, unable to access the web and do other things that could really get us into trouble. So ultimately I think this was a wise decision, and it points toward the idea that future legislation should be oriented around systems, not around models. We could also think about the consequences for research, and in particular for the kinds of comparative evaluations that we conduct. There are lots of leaderboards out there meant to rank different language models. There's one from Hugging Face, there's Chatbot Arena, and we have HELM from Stanford. All of the entrants in these things are nominally listed as individual models. But of course, you can't actually evaluate a model; you can only evaluate a system. Under the hood, we have to have at least a prompting strategy, a model, and some procedure for generation: the minimal system. And I can't help but feel that this is not quite the thing we want to be evaluating. On my F1 race car analogy, this is as though we were running races that really were just engines with wheels. If you want to run a race of engines with wheels, that might be a sensible thing for you to do, but we should be clear-sighted about the fact that it is probably not the thing we think we are doing. What we really want is to think about these language models as components in larger systems, and so I would exhort the community to reorient all of these leaderboard evaluations around entire systems. We could give a privileged place to the language models as important components there if we want to, but we should really think about all the pieces working together in constellation, because that's the relevant thing to be asking about when we think about these technologies being deployed out in the world. Final slide here, and this is a kind of prediction about the future, so it's also something you might think about and ask questions about. I'm reflecting on the last five or so years and what has happened, and what I think we already see is a few different notions of scaling in play as driving forces behind all of the progress that has happened so fast. Starting in 2020 with the GPT-3 paper, we began the era of scaling unsupervised training: just taking a really big language model and having it learn to imitate all the data on the web, indeed all the data you can find. We continue to live in an era in which scaling up these processes is showing some gains, but I believe that on its own this is not what is producing the kind of gains we're seeing overall in AI. Those gains are now being driven, at least in part, by scaling up in a different sense: scaling up instruct fine-tuning. Starting in about 2022, with especially ChatGPT, we saw the power of having large teams of very smart humans create good input-output pairs that we can use to update models so that they learn particular things and acquire particular skills. Again, we continue to live in an era in which scaling these things up is leading to gains, but I think we also see that it's not a silver bullet. And that has led very recently to a rise of the first theme that I mentioned.
If you think about scaling sampling for generation, we're now seeing very sophisticated forms of sampling for generation that you can think of as a kind of scaling up of inference-time processing: search that you might do on your way to producing an answer to a user query. I think that's going to continue from 2024 onward. But here's the final prediction. As we really think about the future, we are going to see scaling up of systems. Transformative things are going to happen in virtue of the fact that we take perfectly good language models, maybe even small ones if we're thinking about high-volume services, and give them access in productive ways to lots of different tools and other things that make them really capable as systems. That's my prediction about the future. I think you should join me in moving from LLM thinking to full-on systems thinking, because I think it will make you more productive in your work and lead to bigger gains, just as I'm predicting for 2025 and beyond. So I'll stop there. Thank you very much. I'm eager to hear your questions and comments. Thank you so much, Chris. Amazing, so wonderful to hear from you again; I always learn so much. We got a bunch of questions, and hopefully more will keep coming in as we go through this Q&A part of the session. Maybe the first one is about the understanding of compound systems and their relationship to generative agents. This question came up a few times: how do generative agents play into this whole thing? Agents, yeah. So that could be a kind of technique that you use to take a language model and have it not only do things that it couldn't do in simple generation on its own, but also be the key thing that bridges you into having that language model make use of tools, and also make use of tool output. I think that really is systems thinking. And I would encourage people not necessarily to be purists. If you think about this as a software system, you could design an agent that depends entirely on the model doing complex things in generation and really nailing whatever problem you've set up for it. But you could also write some code that helps bridge the gaps between the language model's capabilities and the thing that you want to see. If you're designing a system, that's a very natural thing to play with, and I think we can test this by the results that we achieve. Thank you so much. Near the beginning of the talk, you mentioned that modern AI systems are already producing multiple reasoning paths in the background. How does the system produce reasoning paths? And how can we give users more access to the inner workings of these systems to help them evaluate the results? Yeah, interesting. There are a few layers to this. The thing I was referring to with majority completion really kicked off with the advent of chain-of-thought methods, like "let's think step by step". That was just the simple observation that if you prompted a model with "let's think step by step" and let it generate tokens in response, the process of producing all those tokens, in ways that we don't fully understand, often led it to more reliable answers. And we see now, in retrospect, that that was just the tip of the iceberg in terms of what's possible, because I could have it do chain of thought to generate an intermediate answer, which could itself generate chain-of-thought reasoning.
And I could have lots and lots of inference paths leading to lots of different outcomes, and then think about doing statistical analysis of all those outputs to decide on a final generation. That's a real playground of ideas, different techniques that you could play with. And it really is systems thinking, because you're now thinking about the prompt that came in, the structure of the overall system, which might even have multiple language models working together, and also about how you're actually going to do these generations, what kind of sampling method you're going to use. And it adds the twist of how you are going to decide on the final answer. In my illustration, it was the most commonly occurring answer across all the reasoning paths, but you could think about different variants of that idea. I think we've seen a kind of culmination of this in OpenAI's announcements of its o1 models, which are clearly doing lots of inference-time work before they produce an answer for you. Part of the question was how we could expose more of that. I think OpenAI is not going to expose more of this; I think they regard these as trade secrets. But you could explore with smaller models to see what kinds of behaviors you can coax out of them, and I think a lot of this is going to start to be explored in the research literature going forward. Thank you so much. The next question is about the challenges to solve, or to overcome, in order to reach this system-level scaling. What do you see as still missing, or most challenging, to get there? What would you point out? So that's kind of asking what is going to happen in 2025 and beyond in terms of scaling systems. They will just get ever more complex. A paradigm case of this would be Google search; we have some precedent there, where at some level it began as a very simple search technology that just indexed pages on the web, and by now, in 2024, it is probably such a complicated software system that no one individual could even begin to understand it. But it still functions, as a result of teams of people and lots of dynamical behavior. We're just at the starting point for these Gen AI systems; they look like Google did in probably the year 2001. So over the next decades, I think we're going to see just incredible systems built up. And I think the thing to watch is when people actually do provide lots of tool access, so that these language models essentially become like you or me as we cruise around on the web, with the capability to try different pages, log into different systems, and communicate with people on social networks. When that really starts to happen in a very free-form way, one that maybe even the system designers don't understand, we are sure to see consequential things happen in the world. You can probably hear in my tone of voice that I think some of those will be productive and some of them will be quite problematic. But you can predict that it's going to be consequential. There is also a follow-up question on that: what are the recommended guardrails? How should we be thinking not just about the positive consequences but also about the possible negative ones, and how do we establish guardrails? What might we need to think about at this point? Yeah, that is a wonderful question. I'm not a regulator or a lawyer, but if I were, this is where I would focus all of my attention.
As I said, I think focusing on the language models is going to miss the mark. But if we think about regulating the systems, we're more likely to get it right, for a few reasons. First, we already have legislation that governs how these systems can behave, indeed how any software system can behave, and I frankly think a lot more of that is going to carry over into the Gen AI realm than we currently give it credit for. We could also think about the human aspects of this, like guarantees that these systems need to identify themselves as non-human as they interact with us, because I think that helps us as people figure out how to calibrate to them as agents, and it could also help us control the situations we allow them to enter into. But there might also just need to be some fundamental restrictions on, for example, whether we allow these models to log into certain kinds of websites or interact freely on some kinds of social networks. I'm not sure. I'm taking a kind of wait-and-see attitude, and I guess I'm hoping that the initial disasters are not cataclysmic, so that we can learn from them and figure out how to respond as a society. Thank you so much. There are a few questions about consequences for individuals. We talked briefly about how this can influence technology and society; how can people think about it for themselves? There was a question phrased roughly as: how might this influence a normal person like me over the next five years? What would you say to that? Oh, I think it's going to impact all of our lives. Even if we weren't thinking about AI at all, it would still impact our lives, because we're going to see more and more systems that can help us in our daily lives. And again, a language model on its own is not going to help you very much at all, because it is kind of inert. But a language model embedded in a system that has prompting strategies and access to tools could be something that helps you with low-level tasks in your life. It could also help you with interactional things: companionship, discovery of new ideas, creative expression. I think we're going to find systems that can help us with all of those things, and that's the bright side. And also things like education: I'm a big booster of the idea that there are about to be breakthroughs in the educational experiences we can provide, in a customized way and at low cost, because of Gen AI. But there are also going to be bad actors that try to do things like socially engineer their way into getting our usernames and passwords. That is the dark side of giving these systems the capability to log into websites and something like a goal of doing so; if we leave them unfettered, they might do really surprising things in pursuit of that goal. So we're also going to have to, as individuals and as a society, be on the lookout for AI systems run amok.
There are a couple of directions people are asking about for how they can learn more about this. One group is very technical; they're trying to learn more about DSPy and what you're recommending here. The other group is leaders and business leaders, and they're not sure how to grasp what this could mean for their businesses and for leadership thinking. Do you have recommendations for both? Oh, fascinating. In terms of general education for DSPy, we have a Discord and we have lots of tutorials, and I think one wonderful way to get involved in an open-source project like that is to file issues, or even make PRs that might turn into contributions, because that's a way to introduce yourself to the community, begin to have a positive impact, and learn about the kinds of things that are on people's minds. Maybe the first stop is the Discord; it's thriving and will give you a sense of the kinds of things people are working on. Some of them are working in teams, and you could think about joining forces with them. So that's wide open, and the research community is also wide open. I do think there are wonderful things on YouTube that can help you think about different prompting strategies, agents, and tool usage; a lot of the themes that I touched on today could frankly be unpacked into entire courses, but YouTube is pretty good for that. In terms of understanding what's happening in industry, unfortunately, it seems like things are getting increasingly closed, and we're losing a lot of insight into exactly the decisions companies are making and why they're making them, even at the level of research innovation. And then, if you're yourself a leader in an organization thinking about how you would define a generative AI strategy, that's probably worth its own separate lecture, but there are some things I could offer as advice. Maybe the main thing would be to think early on about what success is going to look like and what kind of testing you're going to do, so that it's a guided process with some clear goals in mind. Then you can think about designing a system that balances the thing you're trying to achieve against the known risks of deploying a system in the current era. Great, thank you so much. People are also asking: there's a lot of information, as you mentioned when you showed that slide with Twitter, or X, scrolling by. How do you keep up to date with what's going on in the field? What would you recommend to somebody who has maybe 10 minutes a day to spend on it? And what do you yourself follow and find important to learn from? Yes, I feel some sense of loss about this, because four or five years ago Twitter was my go-to resource for this. My timeline was full of people announcing papers and discussing papers, and it was a great way to filter, to get a sense of what was important and what kinds of innovative things were happening. Because of changes at Twitter, now X, it feels less vibrant in that regard, and it seems like the communities have spread out into Bluesky and Threads and Mastodon. So that's less reliable.
One nice thing about this, and this is kind of meta, is that with the rise of generative AI, and in particular systems that can do retrieval-augmented generation over research papers, you can often begin to get a sense of an area simply by typing a common-sense question, like "what is deep learning" or "what is chain of thought", into a search engine, or into ChatGPT, which does search now. I saw that there's a new tool from Semantic Scholar at the Allen Institute that does this, and I think that could be very productive for getting a sense of what's happening in the literature and where to begin in terms of papers to check out. NLP is kind of easy in this regard, because it's a very organized community in terms of its literature: if you go to the ACL Anthology, that pretty much includes every NLP paper, and you can use those search functions together with citation information to get a sense, in a given area, of what's most influential and what the latest things are. And that's a nice chance to push the course that Petra and I do, Natural Language Understanding, which includes a project development phase: building a good literature review, forming an experimental protocol, and then writing a paper, maybe with some associated code. That is a kind of guided way to do a focused research project and get a sense of the rhythms of research in the domain. Great, thank you. Maybe we can take one or two more questions, shorter ones. One of them is: is DSPy getting traction in the business world? "My engineering team is very focused on LangChain, but I'm trying to open their eyes to what DSPy offers." We've actually gotten this question a few times, about LangChain and where to look and what to do. Do you have a recommendation and your input on this question? Yeah, I would check out dspy.ai, the website. It's got documentation, and it also lists use cases. One nice thing about those is that many of them are blog posts from various organizations, from JetBlue down to very new and small startups, that have been using DSPy in various ways. That can give you a sense of the coding patterns and the kinds of problems people have tackled, and also plenty of starter code. And then, at a high level, whether you use DSPy or not (it could be LangChain), I think the thing to do if you're just starting out is to make some principled choices. It's very tempting at this point to begin with some prompt templates, and that can be very productive in terms of teaching you things. But the problem is that you might look back in six months and find that you have designed a system entirely around those prompt templates, and now any change you want to make is almost impossible and has unintended consequences throughout the system. And then you have that dreaded moment, which I alluded to, where somebody says you can't use Claude anymore, you have to use these OpenAI models or these Llama models, and what you discover is that now everything is broken, and that really does mean going back to step zero and rewriting all these prompt templates. You have to avoid that failure mode. If you're already entrenched in prompt templates, you might just have to live with that and try to get out of it somehow.
But if you're just starting out, do something that involves expressing these things as proper software systems, and I do think DSPy is a great choice for that. It's especially tailored to people who are really experienced in machine learning; it has many of those PyTorch principles that I alluded to before, so there can be a bit of a learning curve at the start, but I think the investment will pay off, in that you ultimately end up with a system that is very flexible and adaptable and can respond to new requirements and changes in the underlying environment you're working in. Thank you. And a last, summary question: if people should take one thing from this talk, one fact or piece of information that you find the most important, even if they forget everything else, what would it be? What would you recommend? I think it really is to avoid the trap of thinking entirely in terms of models. We're all doing it, and I feel like it's a trick we're pulling on ourselves, because we always talk about the latest model releases, and we even talk about things that are clearly software systems, like ChatGPT, as though they were models, when in fact they are not. If you just embrace the fact that it's a system, that will mean you concentrate your energy not just on the model choice or its properties, but also on the other things that are so consequential. In that way you'll be like that F1 race car design team, which of course is focusing on much more than just the engine, because it is an effort to get all of those complicated pieces to work in concert to do something really difficult. So I think that's just a better perspective to have. And then, if you think about that note about what's happening in industry, where most of the energy is focused on small models, this is especially important, because with a small model you can't rely on simplistic system design, a simple prompting strategy. You have to do everything you can to get that relatively small thing to do a big thing in the world, and that really does place more pressure on system design. But as you can tell, I think that pressure is actually a huge opportunity. Thank you so much, Chris. Thank you so much for making the time, and thank you for all the wonderful questions. We couldn't get to all of them, but we tried to cover most of them at a high level. I appreciate everybody joining. We will post this on YouTube, and we will also share the recording of this session. So thank you again, and have a wonderful day. Thank you, Petra, and thanks to everyone for all those questions. This was really a wonderful discussion. | Stanford Webinar - Large Language Models Get the Hype, but Compound Systems Are the Future of AI | 3,485 | Stanford Online | 20241203 | In recent years AI has taken center stage with the rise of Large Language Models (LLMs) that can be used to perform a wide range of tasks, from question answering to coding. There is now a strong focus on large pretrained foundation models as the core of AI application development. But on their own, these models don't do much besides taking up significant disk space—it's only when they're embedded within larger systems that they start to deliver state-of-the-art results.
In this webinar, Professor Christopher Potts will discuss how AI systems built with multiple interacting components can achieve superior results compared to standalone models. He will also examine how this systems approach impacts AI research, product development, safety, and regulation.
Learn more about the AI Professional Program: https://online.stanford.edu/programs/artificial-intelligence-professional-program | 2024-12-29T19:40:50.946619 |
https://www.youtube.com/watch?v=B4LWmFFHioY | Hello everyone. We work at Neo4j on the GenAI team, and today we will be talking about measuring the performance of vector RAG versus graph RAG. I assume that people in this session are already familiar with the concept of RAG and the different RAG strategies, but in case you're not, don't worry: I'll briefly explain those concepts in this session. I highly recommend you join the dedicated sessions as well; there are plenty of them diving deeply into these concepts. In this presentation, I'm going to talk about the motivation, so why we need an evaluation for the RAG strategies and the main objectives behind this work. I'm also going to talk about the different RAG strategies that can be used with the Neo4j database, about the framework we used for evaluation, which is called RAGAS, and about the experiments we conducted, sharing some insights and, of course, the future work. So, what is RAG, why do we need an evaluation for RAG pipelines, and what are the challenges? RAG stands for Retrieval Augmented Generation. It's basically when you don't only have a basic interaction with an LLM, where you provide an input query and a prompt and just receive an answer generated by the LLM, but you also provide it with a grounding context: additional information that you acquire from external sources such as documents, APIs, and, most interestingly, knowledge graphs. This has many advantages, but mainly it helps the LLM overcome its knowledge cutoff. For instance, there's plenty of knowledge that wasn't present in the training data, and there is also domain-specific knowledge and confidential or enterprise-related data. So when you provide the grounding context, you provide this additional information, you help the LLM answer the questions better, and you reduce hallucination. It's also considered a cost-effective implementation compared with other approaches such as fine-tuning LLMs. In general, there are two phases in the RAG pipeline. The first phase is the retrieval phase, where you acquire this additional information from external sources; here, we'll be focusing on the Neo4j database. There are different strategies you can use to get this additional information, and you have different parameters, or hyperparameters, to configure. For instance, if you're relying on something called vector search, you have to configure K, for the top-K nodes most similar to the query that can be retrieved from the graph, and you can also configure the similarity threshold; we'll be talking about that in the next slides. In the generation phase, the LLM uses this additional context, this additional information, to answer the question. Here, too, we have different configurations and different parameters to choose: for instance, which LLM to rely on to generate the answer, the maximum answer size, and so on. In this work we added a third phase, called evaluation, in order to prove which strategy performs best, because we have different configurations for retrieval and different configurations for generation. We want to evaluate each strategy in order to identify weak points, to be able to improve them in the future, and of course to monitor the pipeline in production. There are many challenges in doing this evaluation.
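Before turning to those challenges, here is a minimal sketch of the three-phase pipeline just described, with the retrieval knobs (top-k, similarity threshold) and the generation knobs (which LLM, answer length) exposed as configuration. The embedding, search, and LLM calls are stubs standing in for whatever provider you use, and all names are placeholders of mine rather than anything from the talk.

```python
# Sketch of a RAG pipeline whose retrieval and generation knobs are explicit
# configuration, so that an evaluation phase can compare different settings.
from dataclasses import dataclass

@dataclass
class RagConfig:
    top_k: int = 5                       # retrieval: how many nodes/chunks to fetch
    similarity_threshold: float = 0.75   # retrieval: drop weak matches
    llm_name: str = "some-llm"           # generation: which model answers
    max_answer_tokens: int = 256         # generation: answer length budget

def embed(text: str) -> list[float]:
    raise NotImplementedError("replace with your embedding model")

def vector_search(query_vec, k, threshold) -> list[str]:
    raise NotImplementedError("replace with a call to your vector index")

def call_llm(prompt: str, model: str, max_tokens: int) -> str:
    raise NotImplementedError("replace with your LLM provider")

def answer(question: str, cfg: RagConfig) -> dict:
    # Phase 1: retrieval
    context = vector_search(embed(question), cfg.top_k, cfg.similarity_threshold)
    # Phase 2: generation, grounded on the retrieved context
    prompt = ("Answer using only this context:\n" + "\n".join(context)
              + "\n\nQuestion: " + question)
    reply = call_llm(prompt, cfg.llm_name, cfg.max_answer_tokens)
    # Returning question, contexts, and answer is exactly what the evaluation
    # phase (e.g. RAGAS) needs later on.
    return {"question": question, "contexts": context, "answer": reply}
```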
First, there's currently no standard process for evaluation, and if the evaluation is not automated, it's really very hard to generate ground-truth answers; it's time- and effort-consuming. We also need a framework that can give us an overall score for the whole pipeline, but that can also give scores for each component of the pipeline, for instance for the retrieval part and for the generation part. Now I'm going to talk about the strategies we used with the Neo4j database. Again, there are plenty of other sessions on vector RAG and graph RAG; I also recommend watching the recording of the workshop on the GraphRAG Python package developed by Neo4j, which dives deeply into the concepts of vector RAG and graph RAG. But briefly, what is vector RAG and what is graph RAG? Vector RAG is when you rely on vector similarity to acquire the additional information we call the grounding context. This is usually done with a similarity function between an embedding vector that represents the query and embedding vectors that represent some textual properties on nodes in the graph. In graph RAG, you rely on the power of the graph and the relationships between nodes to acquire information not only from text nodes but also from the neighborhood, so from related nodes, thanks to the power of graph traversal. In this evaluation work we used two different strategies for graph RAG. The first one is augmented vector search: you rely on vector search to find the top-K nodes most similar to the query, you use those nodes as starting nodes, and then you traverse the graph using a Cypher query and acquire additional information, which is concatenated into the context and provided to the LLM. The other strategy, also considered a graph RAG strategy, is Text2Cypher. Here you rely on Text2Cypher to automatically translate a natural-language query into a Cypher query; the Cypher query is executed on your graph, you get a subgraph, and you can concatenate information from that subgraph and provide it to the LLM to get better answers. So let's say you have a Neo4j graph that represents movies and the actors who acted in those movies, and the embeddings in your graph are generated only on the movie nodes. If the query is about movies, vector search will do fine, because it will retrieve a relevant context and provide it to the LLM. But if your query is about actors who played specific roles in those movies, then you get a limited context with vector search, whereas with graph RAG you can start from the movie nodes and then, using a Cypher query, traverse the graph and acquire additional information; in this case, information about actors will be concatenated into the context and provided to the LLM. But this requires a retrieval query, a Cypher query, to be configured, whereas with Text2Cypher you don't need to configure a Cypher query, because Text2Cypher will automatically translate the natural-language query into a Cypher query, the query will be executed, and then information related to actors will be concatenated into the context. Now, we used the RAGAS framework for the evaluation; I'll talk about RAGAS and explain some metrics we considered relevant.
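Before getting to RAGAS, here is a rough sketch of what the two retrieval styles can look like against a Neo4j 5.x database using the official Python driver: a plain vector-index lookup versus the same lookup extended with a graph-traversal retrieval query. The index name, node labels, relationship types, connection details, and the `embed` helper are placeholders of mine, not the setup used in the talk; Neo4j also ships a neo4j-graphrag Python package that wraps these patterns.

```python
# Sketch: vector-only retrieval vs. vector search augmented with graph traversal.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def embed(text: str) -> list[float]:
    raise NotImplementedError("replace with your embedding model")

VECTOR_ONLY = """
CALL db.index.vector.queryNodes('moviePlots', $k, $embedding)
YIELD node, score
RETURN node.title AS title, node.plot AS text, score
"""

# Same entry point, but traverse from the matched movies to their actors.
VECTOR_PLUS_TRAVERSAL = """
CALL db.index.vector.queryNodes('moviePlots', $k, $embedding)
YIELD node AS movie, score
MATCH (actor:Actor)-[r:ACTED_IN]->(movie)
RETURN movie.title AS title, movie.plot AS text,
       collect(actor.name + ' as ' + r.role) AS cast, score
"""

def retrieve(question: str, query: str, k: int = 5) -> list[dict]:
    with driver.session() as session:
        result = session.run(query, k=k, embedding=embed(question))
        return [record.data() for record in result]

# Text2Cypher is the third option: instead of a hand-written retrieval query,
# an LLM is prompted with the graph schema and asked to emit the Cypher itself,
# which is then executed in the same way as the queries above.
```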
Now, we used the RAGAS framework for the evaluation, so I'll talk about RAGAS and explain some of the metrics we considered relevant. We looked at different tools and frameworks that are based on automatic evaluation of the RAG pipeline. I'm sure this is not an exhaustive list — there are plenty of tools and frameworks that could be used — but we relied on RAGAS specifically, not only because it is RAG-focused, involves well-known core metrics, and is open source, but most importantly because it requires no, or very minimal, human intervention. And if you are working with other well-known frameworks such as LangChain and LlamaIndex, it's very easy to use RAGAS for the evaluation part, because it offers good integration with those frameworks. RAGAS stands for automated assessment of RAG systems; as I said, it's an open-source project, and this is the link to the related GitHub project. It relies on an LLM to produce the evaluation metrics for each component of the RAG pipeline: there are tailored prompts given to the LLM, and the LLM generates the scores for each component. The nice thing about RAGAS is that you get an overall score evaluating the whole RAG pipeline, but also scores for retrieval and for generation separately. This is an example of the RAGAS metrics we relied on in this work to compare vector RAG and graph RAG — here, for instance, is the list of RAGAS metrics relevant for retrieval. Some of these metrics require ground-truth answers generated by humans or by LLMs, but others, like context relevancy, do not. For context relevancy, the LLM is asked to count the number of statements in the context that were relevant to answering the question. By statements I mean, for example, that the sentence "Nathalie is a software engineer and works at Neo4j" contains two statements: the first is that Nathalie works at Neo4j, and the second is that Nathalie is a software engineer. The other two, context precision and context recall, do require ground-truth answers; there the LLM is asked to find ground-truth elements either in individual sentences of the context or in the whole context. There are other RAGAS metrics for the generation part. For answer relevancy, you ask the LLM to infer questions from answers and then compare the inferred questions with the original one in order to produce a score. For faithfulness, you ask the LLM to count the number of statements in the answer that are supported by, or can be inferred from, the given context.
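Here is a minimal sketch of what scoring one strategy with these RAGAS metrics can look like in Python — the sample row is invented for illustration, column names vary slightly across ragas versions, and an LLM API key is assumed to be configured in the environment for the judge model.

```python
# Sketch of scoring one RAG strategy with RAGAS. Ground truth is only needed
# for context_precision / context_recall; faithfulness and answer_relevancy
# are judged from the question, answer, and retrieved contexts alone.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

samples = Dataset.from_dict({
    "question":     ["Which genres does The Matrix belong to?"],
    "answer":       ["The Matrix is an action and sci-fi movie."],
    "contexts":     [["The Matrix ... genres: Action, Sci-Fi"]],   # retrieved context
    "ground_truth": ["Action, Sci-Fi"],                            # human/LLM reference
})

scores = evaluate(
    samples,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(scores)  # per-metric averages for this strategy
```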
And now I'll hand it over to Makbule, who will talk about the experiments. Thank you, Nathalie. In the rest of the presentation, I will talk about the experimental setup and the results: how we prepared the dataset and which databases we used. We used two databases from two domains — one is movies and the other is products. These are the publicly shared Neo4j Demo Labs databases, so you can access them, and we expanded them by preparing embeddings for them. In these databases we have nodes — for movies we have users, movies, actors or directors, and genres — and each of these nodes has properties. For example, for movies we know the title or the release year, and we have relationships between these nodes — for example, a user has rated a movie — and we also know the property of that relationship, the rating value. Similarly for products, we have product, article, department, and customer nodes, their relationships, and their properties. The next step is dataset preparation. For question answering and retrieval evaluation we need questions and ground-truth answers. These questions are what users would ask, in natural language — for example, a user could ask about the top five movies by IMDB rating, and the ground truth would contain the list of those movies. However, preparing this kind of dataset is not easy. What we did was start with a core set created manually: initially we prepared 15 to 20 questions for each domain, with different complexity levels. We identified four levels. The first level is basic, very simple questions asking about the properties of nodes — say, the release year of a movie. The last level is the hardest questions, which contain many constraints, aggregations, and so on. Having this core set, we then tried to expand it using LLMs. In the first approach, we took the core set and asked the LLM to rewrite the questions: rewrite them in a more formal way, rewrite them as a teenager would ask, rewrite them using synonyms, and so on. The second version of the expansion uses the LLM directly: since we already have the schema of the database — we know the relationships, nodes, and properties — we provided this information to the LLM and asked it to generate questions for us. However, in this step not all generated questions are meaningful, and not all of them are relevant to the data we have. As a result, we needed a human-validation step. For this purpose we used Text2Cypher to generate Cypher queries, collected the output, and compared the results manually using human annotators. Since this step requires human annotators, we were able to collect around 100 question-and-answer pairs for movies and around 250 for products. When we checked the complexity levels, we saw that the movies dataset is a bit more complex — it contains more questions involving aggregations — whereas the products dataset's average complexity was around 1.6.
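A minimal sketch of the two expansion steps just described — paraphrasing a hand-written core question and generating candidate questions from the graph schema. The model name and prompts are illustrative assumptions, and generated questions would still go through the human-validation step before entering the benchmark.

```python
# Sketch of question-set expansion with an LLM (model and prompts are illustrative).
from openai import OpenAI

client = OpenAI()

def paraphrase(question: str) -> str:
    """Rewrite one hand-written core question in a different style."""
    prompt = f"Rewrite this question informally, as a teenager might ask it: {question}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def questions_from_schema(schema: str, n: int = 10) -> str:
    """Generate candidate questions directly from the graph schema description."""
    prompt = (
        f"Given this Neo4j schema:\n{schema}\n"
        f"Write {n} natural-language questions a user could answer from this graph."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```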
Now for the results of the experiments. As you know, we used the RAGAS framework, and there are two kinds of metrics, as Nathalie explained: retrieval-relevant and generation-relevant. Looking at the retrieval-relevant metrics, we see that the graph RAG strategies, augmented vector search and Text2Cypher, perform better than vector search, and the best results are usually obtained by Text2Cypher. The only exception is context precision, where vector search performs better — but that is expected, because precision reflects how many items are returned and how many of them are right, and since vector search returns only a few pieces of information, achieving high precision is easier for that kind of approach. For the generation-relevant metrics we see a similar pattern: the graph RAG strategies perform better, and Text2Cypher has slightly better results than augmented vector search. However, there is an exception: the faithfulness value for augmented vector search is very high compared to the others. We were wondering why Text2Cypher couldn't achieve that result, and we think it is related to how we provided the context to the generator — in the Text2Cypher case we provided the Cypher queries together with the collected context, and we believe that creates noise for the LLM, which could explain the different faithfulness result. When we check the products question-answer set, we have similar observations: the graph RAG strategies outperform vector search, except that for generation, answer similarity and faithfulness are very high for vector search. We believe this is related to the complexity of the dataset. As I mentioned, vector search is better at answering questions about the properties of nodes, and for simpler, less complex question sets it performs better — and the products dataset was relatively simpler than the movies dataset. We believe this result comes from those complexity levels, but of course it requires more exploration. We also looked at the questions and their answers themselves. For example, we ask for the genres of a certain movie. In our database, genres are separate nodes, which means we need a one-hop neighbor exploration. Looking at the results, vector search was not able to answer the question, augmented vector search wasn't able to either, but Text2Cypher found the right answers. Next is a more complex question, this time using aggregation: we ask for the top three genres — the genres that have the most movies associated with them — and the right answers, shown on the right side of the screen as the expected output, are drama, comedy, and thriller. Text2Cypher was good at it, and vector search was not able to answer it at all. One surprising result comes from augmented vector search: it returns two of the right answers, comedy and drama, but it also returns additional information — claiming that this or that movie is relevant — with numbers that are irrelevant, because it cannot access the whole database; it hallucinates the numbers itself based on the knowledge it has.
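For reference, this is roughly the kind of Cypher a Text2Cypher step might produce for the two example questions above — the labels, relationship types, and property names are assumptions, not the exact queries from the talk.

```python
# Sketch: one-hop genre lookup and a top-3 aggregation (schema names assumed).
one_hop_genres = """
MATCH (m:Movie {title: $title})-[:IN_GENRE]->(g:Genre)
RETURN g.name AS genre
"""

top_three_genres = """
MATCH (:Movie)-[:IN_GENRE]->(g:Genre)
RETURN g.name AS genre, count(*) AS movies
ORDER BY movies DESC
LIMIT 3
"""
```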
So, in short, what we have seen is that the graph RAG strategies outperform vector search. Augmented vector search performance depends on the Cypher retrieval queries provided as input — if you have expert knowledge and can prepare the right Cypher queries, it can perform really well. Text2Cypher, compared to the others, is a more consistent retriever. And vector search can answer questions that relate directly to the properties of nodes, but it struggles with the rest. So we'll conclude our presentation now. In this presentation we talked about the evaluation and comparison of different RAG strategies: we explored vector search and two graph RAG strategies, augmented vector search and Text2Cypher; we used two domains, movies and products, and the related databases; and we described how we prepared the questions and answers. As a result, what we have seen is that, in general, the graph RAG strategies outperform the vector search strategy. In the future, of course, we want to use larger datasets that are well known in the domain instead of preparing them ourselves, we want to explore the shortcomings of these strategies and address them as much as we can, and we also want to use datasets from different domains to reveal their shortcomings again. That's all from us — if you have any questions, we can do a live Q&A. Thank you. So my name is Clive Ntuli and I'm the lead data scientist at Gallo. I have over 10 years of experience developing AI solutions for manufacturing execution systems, logistics, and supply chain domains. That's my LinkedIn QR code right there — feel free to connect. The outline of the presentation: first we're going to talk about the use case, and our use case is track and trace in manufacturing. I think track and trace is pretty much self-explanatory — we're going to be tracking and tracing where our product is. Next I'll talk a little about generative AI and prompting methods, then I'll jump into knowledge graphs and describe what they are, then graph retrieval augmented generation — GraphRAG — then customizing it with tuning, and lastly the results and findings. In terms of the use case, track and trace in manufacturing, here's the background: manufacturing demands near-perfect execution, and over time process management methods like Six Sigma have evolved that call for a target of 3.4 defects per million opportunities. You can see that this is the gold standard, and we need to achieve it. But regardless, defects do happen, and they tend to be very costly and time-consuming when they do — we also lose production. And if you think about product recalls, they are a major pain point for most manufacturing companies. Therefore, the ability to track and trace with agility can vastly improve problem identification in our supply chains and value chains, and it can also improve the time to resolution. We think that generative AI and knowledge graphs offer exciting capabilities to transform track and trace. If you're familiar with the manufacturing value chain, it starts with suppliers: raw materials are transported and processed, there is further processing, QA and packaging, then transportation to warehouses and distribution centers, and ultimately to the customer. The biggest challenge within manufacturing is that most of the systems are siloed and the data is siloed. When we're doing track and trace and we're faced with questions like — what products depend on this supplier? where did this material come from? what equipment was used? where is the product going? where did this product come from, from a customer perspective? — it becomes very challenging. So the ability to integrate the data across all these siloed systems is a game changer in how we do track and trace, and we will see how we achieve that. But in the meantime, I want to introduce generative AI. So what is generative AI?
Generative AI is all about generating content, and here I've shown a few prompting techniques for generating content: zero-shot prompting, few-shot prompting, and chain-of-thought prompting. There are other prompting techniques, like meta prompting and so forth, but I wanted to keep it simple. For example, with zero-shot prompting, the question I ask my model is: how would you describe the taste of a smoothie with too many oranges? And it gave me the answer: a smoothie made with too many oranges can be overwhelmingly citrusy and tangy, and all that. When I looked at the answer I liked it, and I generated more for different fruits like watermelons, strawberries, and so forth. Now, with few-shot prompting, I give my model an example — one example or many — so that it can learn what kind of content I'm expecting. In this case I'm prompting my model and saying: you have knowledge about how machinery fails; tell me more about a breakdown where the motor fails; limit the output to 50 words. Then I give it an example, and when I later prompt it about bearings wearing out, it gives me an answer consistent with the example I showed it. With chain-of-thought prompting, we break the problem down into sub-problems so that we can increase the accuracy of how the model generates content. For example, here I give it all kinds of events that are happening and I want it to tell me the total duration of the breakdown. In the example I give it, I ask it to first identify the start time, which is 11 a.m., then identify the end time, which is 4:20 p.m., and then calculate the difference. The chain of thought is the sequence in which the model solves the problem, and once it knows how to do that, I can prompt it and it will give me an answer. So this is generative AI in a nutshell.
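Here is a minimal sketch of the few-shot and chain-of-thought patterns just described — the wording, timings, and 50-word limit are illustrative, not the exact prompts from the slides.

```python
# Sketch of a few-shot prompt: one worked example teaches the model the
# expected style before the real request.
few_shot_prompt = """You have knowledge about how machinery fails. Limit output to 50 words.

Example
Failure: motor fails
Description: The drive motor overheated and stalled, halting the filler line until it was replaced.

Failure: bearings wearing out
Description:"""

# Sketch of a chain-of-thought prompt: spell out the sub-steps so the model
# computes the duration step by step instead of guessing.
cot_prompt = """Events: breakdown started at 11:00 AM, maintenance arrived at 1:15 PM, line restarted at 4:20 PM.
First identify the start time, then identify the end time, then calculate the difference.
What was the total duration of the breakdown?"""
```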
But here we're not using any external data sources, and we would want to, so that we can give more context and make our responses more meaningful based on the data within our enterprises. That is achieved by retrieval augmented generation — retrieval as in retrieving from external data sources. Now, in terms of knowledge graphs, there are roughly two approaches to building them. One approach uses unstructured data: you have many documents — Wikipedia is a good example — and you can use AI, large language models, to create a knowledge graph. Within the manufacturing industry I would say proceed with caution with this method. Remember, manufacturing demands near-perfect execution, so you don't want to take chances with how the chunking is done, whether it's hierarchical chunking or whatever — you want to be careful about how your data is organized in the knowledge graph. That's why the best option is approach number two, which uses semi-structured data. Your relational databases will have all kinds of tables, and some of those tables can have columns containing free text. Using all these tables, you can apply ontology mapping — ontology being, essentially, the study of things and how they are connected — and with this ontology mapping you can create a knowledge graph, which becomes the knowledge base for how you implement your graph RAG. So we went ahead and created our knowledge graph using an ontology for our value chain, and here's what it looks like. The good thing about knowledge graphs is that they are intuitive, so it's very easy to read them or query them. For example, here we see a supplier, shown by the orange bubble, which supplied materials; the materials were processed into a product — it might be refined or not — and then it was shipped to a customer. That's how you can read a knowledge graph; this is a trivial example I'm showing here. And when you look at how you query it with Cypher, it's also very straightforward and intuitive. That's the good thing about knowledge graphs: their intuitiveness makes them work very well with generative AI solutions. On the nodes — for example the supplier, material, product, and customer nodes — we can also have properties: the customer address, a product description, a material description, and so forth. Same with the relationships; we can also have data there. For example, on the "processed to" relationship we can record what equipment was used for processing and, if we encountered any downtime, the reason codes and the downtime descriptions. So that's the nice thing about graphs — they're so intuitive. Now, once we have our graph, we're going to overlay it on our value chain and supply chain, and as you can see, it starts to connect information and data sources that are siloed. This is the most powerful thing about these graphs: they help us connect all these siloed data systems. And once we connect the data, it's much easier to answer all those questions we were concerned about earlier — for example, what products depend on this supplier, or where did this material come from. It's going to be much easier to answer those questions with a graph.
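As an illustration, here is roughly what a track-and-trace traversal over such an ontology could look like in Cypher — the labels, relationship types, and properties are assumptions for the sketch, not the exact schema from the talk.

```python
# Sketch: trace a product back to its suppliers and forward to its customers,
# picking up processing details stored on the relationship along the way.
upstream_and_downstream_trace = """
MATCH (s:Supplier)-[:SUPPLIED]->(m:Material)-[p:PROCESSED_TO]->(prod:Product)-[:SHIPPED_TO]->(c:Customer)
WHERE prod.id = $product_id
RETURN s.name AS supplier, m.description AS material,
       p.equipment AS equipment, p.downtime_reason AS downtime,
       c.name AS customer
"""
```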
The other interesting thing we can do with a graph is add free-form text to our nodes or relationships as properties. For example, when we were processing our materials into a product, we had an issue with a piece of equipment that broke down, so we can add that free-form text — increased noise, vibration, and so forth — as a property on the "processed to" relationship, and we can also include the reason codes and the equipment that was used. Same with the product: we can add the product description — in this case, the text we generated earlier about how a smoothie made with too many oranges would taste — to our product node as a free-form text property. The reason we add this text as a property is that we can create embeddings out of the free-form text, and once we create those embeddings we can perform semantic search on our graph. Most vector RAGs just store these chunks of text as embeddings, like a plain vector store. But with a graph we get the vector store and we also capture the connections. So a graph RAG is pretty much a superset of a vector RAG: it can do everything a vector RAG does, and it also provides those connections that we really care about. So what do the embeddings of the free-form text look like? Something like this. If you remember vectors from your linear algebra class, a vector might have three dimensions if it's x, y, z; but when we're dealing with these models, the vectors for text have a length of something like 1,024 or 1,536 — that's how big the dimensions are. As you can see here, the embedding is just a long list of numbers for the text we show. These embeddings are what we store in our property graph, and when we do a semantic search we can use cosine similarity to retrieve what best matches our prompt. Once we find what best matches our prompt, we can also traverse the graph to the neighboring nodes, retrieve more information, and bring all of that to the large language model so that it can enrich and personalize the response. That's the good thing about graph RAG.
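Here is a minimal sketch combining the two retrieval modes just described — cosine similarity over the free-text embeddings to find the best-matching product, followed by a traversal to its suppliers. The index, label, and property names are assumptions.

```python
# Sketch: semantic search over product descriptions, then graph traversal
# to count the suppliers behind the best-matching products.
GRAPH_RAG_QUERY = """
CALL db.index.vector.queryNodes('productDescriptions', $k, $embedding)
YIELD node AS product, score
MATCH (supplier:Supplier)-[:SUPPLIED]->(:Material)-[:PROCESSED_TO]->(product)
RETURN product.name AS product,
       product.description AS description,
       count(DISTINCT supplier) AS supplier_count,
       score
ORDER BY score DESC
"""

def graph_rag_context(session, question_embedding, k=3):
    rows = session.run(GRAPH_RAG_QUERY, k=k, embedding=question_embedding)
    # Build the grounding context handed to the LLM for answer generation.
    return "\n".join(
        f"{r['product']}: {r['description']} ({r['supplier_count']} suppliers)" for r in rows
    )
```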
Now, to achieve this with graph RAG, you want to customize it with tuning, and the tuning we see here is prompt tuning — some call it P-tuning. Remember, we talked earlier about few-shot prompting, where we give the model some examples so that it can learn; with prompt tuning we give it many examples so that it can refine and improve the accuracy of its responses. For example, here we see our samples: the human prompt might be "which material has the most suppliers?", and then we have the Cypher query that helps us retrieve that information. We're going to have lots of examples of these Cypher queries — you can use GenAI to generate some of them and then modify them to make sure they're accurate — and we feed them in as samples to improve how the model responds when you prompt it. So we've talked about how we use semantic search, and how we use Cypher to search for information in our graph; once we can do those two things, it's going to enrich our responses. And in our results and findings, we discovered that graph RAG has enormous potential to transform track and trace in manufacturing. An example is on the right, where we prompt our graph RAG: I'm feeling very thirsty and I'm on the beach — which smoothie should I have, and how many suppliers are used to make this product? Based on the semantic search, the model recommends the watermelon smoothie, because its description says it's watery and good for hydration on a hot day. It also tells us that this product has four suppliers, and that's achieved with the Cypher queries we saw earlier. To achieve something like this, there is of course a lot of experimentation required, particularly with the prompting and with the Cypher. The mapping of the knowledge graph also requires deep domain expertise, because we want to make sure our nodes and relationships capture the domain rather than just connecting things randomly. Another thing we found is that the capabilities of AI are evolving very fast — the technology is evolving very fast, and what you think is not possible today may change tomorrow. That presents a challenge in terms of how you keep up: IT infrastructure, and learning the new technology as it evolves. It's quite challenging to keep pace with how fast AI is moving. And the last finding: it's very challenging to quantify the return on investment for these tools. We all know we want to improve productivity, but quantifying it is always difficult. Now, despite all these good things, there are a few primary ethical considerations, and here I have just four. The first is that these models tend to lack explainability and interpretability. That's why I'm a big fan of traditional AI/ML — there are explainability and interpretability tools there, like LIME and SHAP — but with large language models it's a bit challenging; it's evolving, some solutions are beginning to come up, but they're not that good yet. The next challenge, or ethical consideration, is data provenance.
So once you expose your data so that it can be used in the RAG architecture, you want to make sure the data has integrity and that there are no problems with missing values and so forth, because your model is not going to know that. The next ethical consideration is sensitive information disclosure: you want to make sure that the data you let your large language models access is isolated and has been curated, if I may say, because if you just expose the entire databases, the model may pick up personally identifiable information or other sensitive information, and you don't want that. And the last ethical consideration is that the future of work is changing. There have been concerns around worker displacement, and that's going to be an ongoing ethical consideration given how these AI models are evolving. So that's pretty much it. If you want a hands-on exercise on what I've just gone through, I have a resource for you here: you can scan this QR code, play with the code, and experiment with the example at that link. And lastly, thank you all for joining me on this journey through the world of AI today. We have explored not only the powerful potential of AI, but also the responsibilities that come with it. As we look to the future, AI is not just a tool, it's a transformative force that can redefine industries, enhance human capabilities, and address some of the world's most pressing challenges. Thank you. With that, I can take any questions. | NODES 2024 Best Of: GraphRAG | 2,776 | Neo4j | 20241227 | Re-Run of two great sessions from NODES 2024 in November!
Watch all NODES recordings: https://neo4j.com/video/nodes-2024/
Session1: Measuring the Performance of VectorRAG vs GraphRAG
Join this session for an in-depth exploration of evaluating Retrieval Augmented Generation (RAG) pipelines, which are crucial for enhancing the performance of large language models (LLMs). As RAG becomes increasingly prevalent, understanding its evaluation is vital to identifying and improving weak points. This session delves into the benefits of integrating knowledge graphs for better grounding and explainability, contrasting it with traditional vector-based retrievers. We will discuss the latest LLM-based tools for automated RAG evaluation, and showcase applications using Neo4j-backed RAG pipelines and the RAG Automated Assessment (RAGAS) framework. You will gain valuable insights into advanced RAG evaluation techniques, helping you optimise your own RAG implementations.
Nathalie Charbel, Makbule Gulcin Ozsoy, Estelle Scifo
Session2: GraphRAG for Increased Trust and Confidence in GenAI for Manufacturing Processes
Disparate data systems often obstruct a comprehensive, end-to-end view of processes, hindering digital transformation efforts. In this session, Clive will explore how to overcome these challenges by modeling knowledge graphs for manufacturing processes. You will learn how knowledge graphs combined with generative AI can integrate and enrich data across various functions, providing deeper insights and an enhanced prompting experience.
Clive Ntuli
Get certified with GraphAcademy: https://dev.neo4j.com/learngraph
Neo4j AuraDB https://dev.neo4j.com/auradb
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
Neo4j GenAI https://dev.neo4j.com/graphrag | 2025-01-01T23:45:36.212722 |
https://www.youtube.com/watch?v=qFSNBUss3QI | Welcome to Graph Power Hour. Hi, my name is Paco Nathan, and I'm very glad to be the host here for Graph Power Hour from Sensing. Today for our second episode, we're thrilled to bring you Claire Sullivan. Dr. Sullivan is one of my favorite instructors and writers and engineers in the whole space of data science and AI. I love her tutorials and she has a lot of fantastic material here to share today. Just as some background, Claire and I have been working together. I've been learning from Claire about some of the intricacies of Graphrag, particularly in the context of using more structured kinds of input. It's working with entity resolution, but really looking in depth, like where are some of the gotchas? So I'm super thrilled. Let me click here. The title is Graphrag, The Good, The Bad, and The Ugly. And if you're just tuning in uh we have just launched graph power recently this is episode number two and we definitely want to encourage everyone to stay to the end because we have a special surprise um we're going to do a little bit of a contest, not a contest, it's a random drawing, I should say. Let me use the right verbiage here, but stay to the end and a lucky winner will be getting, well, we have some board games. We'll announce more about that later on. And so without further ado, let me introduce here.ire take it away all right thank you paco just wanted to thanks for the invite to uh come and talk today also thanks to senzing for having me as well um so today's subject graph rag the good the bad and the ugly and we'll see some good and we'll see some bad and we'll see some real ugly i'll tell you you. But before we get started, I always like to know who it is that's in the room and whatnot. So I can kind of talk about the interesting stuff to whomever's here. So why don't we kick off with the first poll here. So this is the audience participation portion. Let's see. So do we, there we go. Okay. So what is your current role? Got some answers coming in. It's looking like we've got a good diversity of people. I love that. We've got data scientists and ML or AI engineers, data engineers, product managers, business leaders, other business owners. This is great. Yeah. So give this another couple seconds here. Yeah, that's interesting. It is really, really spread across different roles here. And a lot of software engineers. That's good to see. Yeah. All right. So let's get, let's stop that poll. Let's go to the next poll now. And the next poll is what is your level of experience with large and GraphFrag. Oh, interesting. Okay. We've got a lot of diversity in this one, too. So, you know, hopefully there's going to be something for just about everybody in this talk i'm glad to see this um a lot of people in their learning journey with it that's wonderful love it um give this one another couple seconds we'll go to the next one there's four of these so we'll go to number three here now okay and then question number three how familiar are you with entity resolution and sensing? Some good diversity here too. Love it. All right, maybe another couple seconds on this one and we'll jump down to the final poll. The piece de resistance. 
Do you use entity resolution in conjunction with your knowledge graph practices is that something you're doing today this is the one i've been waiting to hear from i'll be perfectly honest you you know despite what you're hearing today sometimes i do sometimes i don't and you know i every time i don, I'm feeling pretty shameful to be perfectly honest, because I think what we're going to see today is there's really good reason to be doing it. So, all right, we can give that one a stop now. So a lot of people are in the evaluating and production and cases, but a bunch of people are not the evaluating in production and cases but a bunch of people are not using it yet so hopefully today um when we look at the good the bad and the ugly we will get to see when you might want to consider doing it now i will tell you that um i'm going to be looking a lot at my slides but i'm going to also be having um the&A window open, and I'm going to try to be good and remember every time I get to like a transition slide that has a black background to it, I'm going to try to consciously look over at the Q&A window, and then we'll have time for questions and answers at the very end as well. So let's dive in. All right, a little bit of how we're going to structure the talk today. Start a little bit about my graph journey, which I think is perhaps a little different than most people who come to graphs through proper means. I had a very unusual introduction to graphs. Then talk a little bit about why knowledge graphs are hard. Okay. And then, you know, when you see something is hard, you say, you know what, and I mean, this is true in all of data science is we try to, you know, we have fancy models, but a lot of times it's just the simple stuff that works best. And so if knowledge graphs are this hard, why should we bother if we just have generative AI and text? And then I'm going to show you some, this is the real ugly part, the creating knowledge graphs in the before times, show you some of my sad tales of woe when it comes to creating knowledge graphs. And then what they look like today and why that's such an improvement. And then ask a question that I know I get asked a lot, I'm sure Paco gets asked a lot, if you already have your data in a graph, why do you need entity resolution? So we're going to get into that a little bit too. But you're all here because graphs are having a moment. And let's be real, they're having a moment because of large language models, LLMs and generative AI. I just was kind of going through my stash of articles and whatnot and came up with what are some of the three like what are the three that really kind of shaped my view of graphs this year on or well I should say the past 12 months when I say year um the this and these two that are on the left here we'll we'll actually be coming back to later in the talk the retrieval augmented generation or RAG for AI generated content. This is a great survey article. Anything you could possibly want to know about the science behind RAG is in that article. The next article, the one below it here, this benchmarking article by Juan Cicada and the folks at Data World. I saw the results of this presented at a conference, Data Day Texas, in case that sounded too much like one word. Data Day Texas in Austin, Texas in January. Great conference. And Juan presented some results that just really motivated why you want to consider using graphs for your RAG. You don't have to. You can use a vector database such as Pinecone. 
You can have Postgres act as a vector database, but talk a bit about why graphs are great at this. Then of course Microsoft put out this really great research blog post and new tool that they have their own GraphRag thing going on. Yes, graphs are truly having a moment and I'm really excited to be part of that domain right now. But I didn't start as a graph person. I started just as, believe it or not, a nuclear engineer who became a machine learning engineer at GitHub. And so while I was at GitHub, it was a picture of my computer in a hotel room. And I just, you know, I had a bunch of stickers on it like everyone does. And I'll be honest, I created this sticker that you see on the right here. I will tell you, I got a little nervous when I was putting this up. I was like, oh, if I take a photo of that with my phone, is it going to come up like some QR code to some, you know, inappropriate website or something like that? It won't. It's not a QR code. But does anybody know what that is? If you do, drop it into a public chat because it's a pretty unusual thing, I think. It's really fun. You obviously know it has something to do with graphs. So I'll give a couple seconds here. Okay. So it turns out software and GitHub being the largest repository of software is a graph. And lots of things are graphs. But when you talk about software compilers, software compilers actually work on this thing called an abstract syntax tree. And this is a very small one that I've shown here, an abstract syntax tree. And it basically says, you know, this variable is connected to this function and goes to this other thing. And there's these if statements and all of that. But the bottom line is that you can represent software as a graph. Now this was circa 2018 when I was working on this. And the idea here was we had access to all code across all of GitHub. And could we look at the abstract syntax trees of all code and detect duplicates of code? So, and not just duplicates by saying, you know, I'm writing a function to calculate a cosine in Python. Paco's writing the same thing in Python. Did we have exactly the same code? We're actually looking at here whether the code is written in a completely different language. It's looking at the entire code base. It's looking at individual functions within the code. And can you use some sort of approach to detect this function written in JavaScript is the same as this other function over in somebody else's repo that was written in, I don't know, Rust, for example. And so looking at these abstract syntax trees, I was just I was looking at these one day and I was like, you know what? This is a graph. Let's look at this as an adjacency matrix. The adjacency matrix is the mathematical definition of the graph. What you're seeing in this not-quite-QR code thing is the adjacency matrix of a piece of software written in JavaScript. I thought that was really cool. I had to turn it into a sticker and give it to all of my friends. But then I said, you know, my background actually, even before nuclear engineering was in astrophysics, and I was like, that looks like stars. Wait a second. If that looks like stars, I can apply image analysis techniques to it. 
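As a concrete illustration of that idea, here is a minimal sketch that parses a snippet of Python into an abstract syntax tree, treats it as a graph, and produces the adjacency matrix that can then be analyzed like an image — illustrative only, not the code from the GitHub project.

```python
# Sketch: source code -> abstract syntax tree -> graph -> adjacency matrix.
import ast
import networkx as nx

def ast_adjacency_matrix(source: str):
    tree = ast.parse(source)
    g = nx.DiGraph()
    # Every parent->child edge in the AST becomes an edge in the graph.
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            g.add_edge(id(parent), id(child))
    return nx.to_numpy_array(g)   # 0/1 matrix, ready for image-style analysis

matrix = ast_adjacency_matrix("def f(x):\n    return x * x\n")
print(matrix.shape)
```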
So I started doing convolutional neural nets to detect whether a piece of code is identical to anything else within a database, and it turns out that using that approach we got something like 97% accuracy at detecting duplicate code written in different languages. That was a really exciting project — I loved working on it. And if you think about it, it's almost the mathematical basis of what became things like GitHub Copilot today. I did not work on Copilot — I don't want to make it sound like I'm taking credit for that — but the point is that there's a lot that can be done when you look at your data as a graph, and this was just a really unusual, fun form of data to look at. Okay, so now I'm getting to my first black-screen slide. I'm checking the Q&A window — I don't see any questions come in yet. So graphs are obviously having a moment because of generative AI. That being said, do we really need graphs if we're just analyzing text? I mean, I can throw my data into ChatGPT and it can come back with answers, and it's not a graph, it's just words, right? Well, here's another project I worked on, a little demonstration. This is a particularly popular dataset for graph analytics called the Cora dataset: about 2,700 scientific publications in data science, broken into seven classes — and I'll tell you those classes are very imbalanced, so not a fun problem — plus the citation relationships from one paper to another. This isn't quite a proper graph schema here; if it were, I would have a Paper node with a relationship looping back to Paper for the citation. What I'm trying to convey is that one paper cites another. Now, I don't know what these papers are: they're given an integer ID, they're given a subject, which is one of the seven classes, and then they're given features. The features are an encoding of the vocabulary of the abstracts, so what we have are word vectors. So this is a dataset that is both a graph and has word vectors. And the question I wanted to ask is: how well do we do at classifying based on word vectors versus graph vectors? For the graph embeddings, you can set up the graph and create embeddings — I did that using an algorithm called FastRP, for anybody who cares. But the idea is that all I changed was which embeddings we're using. I'm not changing the model — I want to be clear about that. I used the same support vector classifier, I did not change any of the hyperparameters, I kept those all the same. All we did was take the same machine learning model and give it word vectors or graph vectors. One caveat: there is no access to how the word vectors were created, so I don't know the vocabularies or anything about that; it's possible they could be tuned to do better, but I'm just going with what's in the base Cora dataset. So again, both approaches were given the same task: multi-class classification with this massively imbalanced dataset. When I did that using the word vectors, here's what I got. This is our confusion matrix. For those who aren't familiar with it, the x-axis, the predicted label, is what the model said it was, versus the true label on the y-axis, what it actually was.
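Before reading off the numbers, here is a minimal sketch of how that comparison can be run — the same support vector classifier, once on the word vectors and once on graph embeddings such as FastRP vectors exported from the graph, so only the features change (the variable names are placeholders).

```python
# Sketch: identical SVC, two feature sets -- word vectors vs. graph embeddings.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

def evaluate_features(X, y):
    """Train/test split, fit an SVC with default hyperparameters, score it."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
    model = SVC()                      # identical hyperparameters for both runs
    model.fit(X_tr, y_tr)
    preds = model.predict(X_te)
    return accuracy_score(y_te, preds), confusion_matrix(y_te, preds, normalize="true")

# acc_words, cm_words = evaluate_features(word_vectors, labels)       # placeholder arrays
# acc_graph, cm_graph = evaluate_features(fastrp_embeddings, labels)  # placeholder arrays
```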
So we care about the numbers on the diagonal and the closer they are to one, the better. And we average those numbers up and we get the mean accuracy of the entire algorithm of about 73%. Okay, so that's not bad. Now, I will tell you that the class four here, this true label of four, that's the one that had the fewest samples in it. And by fewest, I mean, like, you know, it was in the tens of samples, and everything else was in several hundred samples. So, you know, you can see that it got 50, it got class four correct 54% of the time. And that's just barely better than flipping a coin. So not great. Now, this was just using the word vectors. Now we're going to do the same model and we're going to use the graph vectors. So when we do that, what we see is that our mean accuracy went up by 13%. So 85%. And that was without any hyperparameter tuning, I hyperparameter tuned, and I got like a little better 87%. And you can see, look at this class four here, where it went up from 54% to 76%. So that should be a really encouraging thing that, you know, when you actually look at how things are related to other things, you can do a lot better, right? Whether it's just this really basic machine learning problem of classification, or when we start getting into some harder problems. So great, why don't we all do that? All right, well, let's talk a little bit about knowledge graphs. You know, what I was showing you there before, that's just kind of like a bread and butter graph. Knowledge graphs though, these are the things where we start looking at how information is related to other information. And it's what gives us, you know, potentially some really useful information for getting our LLMs to not hallucinate so much. So that retrieval augmented generation or RAG is really key to making sure the LLMs don't go make up information. But that means we have to convert our information into a graph. And why is that hard? It is hard. The goal here is that we're going to start with stuff. That's the technical term stuff. And we're going to go into a structured thing, another technical term. That stuff could be structured or unstructured. So for example, you could have a stack of PDF documents. That's your stuff. Text, really not structured. Really hard, messy to deal with. And that's probably the most interesting stuff. Like if we already had our stuff in a table, you know, that's very helpful. But like a lot of the interesting stuff that gen ai is going after is not you know in that happy space of being structured then there's this thing that we're trying to get it into which is a graph structure but you know when we talk about graphs usually like we have we talk about knowing what the schema of the graph is so you remember i had those green circles and a paper pointing to another paper. If I'm just giving you a whole bunch of PDFs, we don't know what that schema is. And that can make things very challenging. So then, I'm not the only person who uses this term, gigo, garbage in, garbage out. This is especially true for graphs. And especially true when you're trying to understand the relationships between entities within your text. Verbs, as you're gonna see here in a minute, verbs are those relationships. And verbs are very hard to properly get and attribute to nouns within text. So it's really easy to have very noisy graphs. Garbage in. Okay. If you're putting garbage into your rag, you're going to get garbage out. So this is another problem. Language is messy, like I said. 
Natural language processing approaches — NLP, which is kind of the basis of generative AI, and I'm going to show you, when I get to the ugly portion, what this looked like a few years ago — are really complicated, and they're especially complicated when you're dealing with technical language. And technical language is the place where RAG really shines, because ChatGPT and the like have been trained across the entire internet, and most of the internet is not written at the level of a PhD chemist — most of it is written at the level of a Kardashian. So we have to find a way to teach our language models that technical terminology, and RAG is the way to do that. But like I said, technical language is really, really difficult for it to get. And honestly, language is messy, but data is messy too — even nice, basic, happy data living in some SQL database is messy. I tell student data scientists: look, once you get a job, 80% of your life is going to be being a data janitor. So this GIGO problem really does not help. So why, if it's this hard, why should we bother? Well, let me get back to this paper that Juan Sequeda and crew put out. What Juan did was take a very systematic approach: he started with a set of SQL databases — I think it was based in the insurance industry; I really strongly recommend this article — and, using that SQL database, he had a very regimented approach to asking easy questions and hard questions of it. So you ask a question in natural language, it goes and queries the database, and then you assess whether the information coming back from that query was correct or not. And what Juan saw was a significant increase in the accuracy of the language model when it used a graph versus not. I've highlighted this section here: there were differing degrees of difficulty, which is why you see 66.9% to 71%, and 25% to 37%. Go read the paper — if you do not come away from it saying "I must use knowledge graphs for my RAG," I don't think I can sell it any harder. So now we're coming up to another black screen. I'm checking the Q&A section — I don't see any questions that have come in. Okay: graph structures can do better with text than text-based approaches, as we saw, for example, with that Cora dataset. However, they can be pretty difficult to create. Now I'm going to grab a quick drink here. So I want to talk about the ugly. And, you know, I'm not proud to admit it: I created this many moons ago. This is a very unpleasant look at natural language processing — this is what NLP was like circa three years ago. All I have here is the opening paragraphs of Barack Obama's Wikipedia page, and I'm using a standard natural language processing package called spaCy. All spaCy has done here is go through and highlight the entities in this particular text. The font might be a little small, but the purple entities correspond to people. So the very first one is Barack Hussein Obama II. You'll notice the second one of that same color is Obama, as is the third one, Obama — there are lots of places where it says Obama. And — spoiler alert for what's coming later in the talk — Barack Hussein Obama and Obama: are those the same person? Maybe they are, maybe they're not.
It could be referring to Michelle Obama. A little complicated — but that entity resolution, as you're going to see, is going to be pretty important once we get a little further along. We've got dates in mint green, and we've got geopolitical entities in this gold color. So let's get into the weeds a bit here. Language is a graph, just like the abstract syntax trees I showed with that GitHub example. Here I've taken the first sentence, and this is spaCy's attempt to attribute which words go with which other words. It's doing part-of-speech tagging, so I can see proper nouns, determiners, adjectives, things like this. This is complicated: trying to work out what the verb is and then what the subject is. The subject is Barack Hussein Obama II, but natural language processing does not necessarily get those as one thing — that's three and a half different words there — so somehow we have to lump them all together as one entity. Similarly, we've got "American" and "politician." What we want this graph to look like is: Obama IS politician, and maybe politician has the property of American. And then there's this word "and," which just throws everything off — I always hate it when "and" shows up in a sentence. The bottom line is that the process of doing NLP to create a graph is wrought with peril, and if you go look at any of my previous content on it, it's so sad — because honestly, I did this earlier today: I just gave ChatGPT this section and said, hey, create me the subject-verb-object triples here, and it was just so glorious. So that's why I call this the ugly: this is the before times, the before-generative-AI times. So, the steps of NLP to create a knowledge graph in those before times. The first thing we had to do is text cleaning, because if you're getting into text that is not nice, happy Wikipedia text, you're going to have emojis, all kinds of different character encodings, web links — it's awful, and you have to do a lot of cleaning before you can do anything. Then you have to tokenize, meaning you have to find all of the different words, and you remove the ones we don't really care about — words like "the" and "and." These are called stop words; they add nothing to the actual mathematical analysis of the statement. Then we do stemming and lemmatization. What that is: if you have a word like "was," "was" is the past tense of "be"; or take "swimming" — we take the "-ing" off, so the stem is "swim." We want to get the verbs, in particular, into their most basic form. Then we do named entity recognition, so that "Barack Hussein Obama II" is all one thing. And then from all of that we have to somehow extract the subject-verb-object triples: the subjects and objects are going to be nodes in the graph, and the verbs are going to be relationships. So that's the process I used here, and if you do this, this was the graph I created a million years ago from Barack Obama's Wikipedia page. We can zoom in on parts of it. One of the things I asked in the workshop I taught on this was: using the graph, find where Barack Obama was born.
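As a side note, here is a minimal sketch of that last step — pulling rough subject-verb-object triples out of spaCy's dependency parse. Real text needs far more handling (conjunctions, compound nouns, passives) than this, which is exactly the point being made.

```python
# Sketch of naive SVO-triple extraction (requires: python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def svo_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subs = [t for t in token.lefts if t.dep_ in ("nsubj", "nsubjpass")]
                objs = [t for t in token.rights if t.dep_ in ("dobj", "attr", "pobj")]
                for s in subs:
                    for o in objs:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

# Note how the compound names get clipped to single tokens -- the NER headache above.
print(svo_triples("Barack Obama defeated John McCain in the 2008 election."))
```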
Now, first off, you had to realize that, for whatever reason, the NLP process gave the phonetic spelling of Obama. Don't ask me why, just go with it. So if you look at the upper left, you can see Obama, and then you see "bear" United States, "bear" African American, "bear" Honolulu. It turns out that "bear" is the lemma of "born." You had to know that in order to ask the question, because again, this was before LLM days — you had to really understand how to use language to query this graph. It was highly unpleasant. It's going to be kind of hard to see, but if you look at the two on the right, there's an orange node that is Hillary Clinton, and the relationship between Obama and Hillary in this graph was the verb "elect." I believe, in the context of the Wikipedia article, that refers to him being elected over Hillary Clinton in the primary. And the one on the bottom is Osama bin Laden, and the relationship there is "order." Well, that's not going to make any sense — we don't have enough context there to understand that what is actually going on is that Obama ordered the assassination of bin Laden. So if you wanted to do some sort of question answering with this graph, it was incredibly difficult in the before times. It was made worse by lots of things. When you do natural language processing, there are just duplicates everywhere, and I'm printing out some of them: you can see there's a gazillion variations of Barack Hussein Obama II, "district officials," "officially state," we've got John Sidney McCain — there's just so much mess in this graph. De-duplicating was a never-ending process with this approach. Then there's the concept of named entity recognition, NER, and it's hard. I had nodes in this graph that were all Obama terms: Barack Obama, President Barack Obama, Barack Hussein Obama II. And if you just wanted to go with "Obama," you're going to come up with Michelle Obama in this graph as well. If you're giving this information to something to try and answer questions, you have to somehow figure out how to resolve all of these different entities. Very important. You could do something really silly and basic, like what's at the bottom here: calculating what's called the Levenshtein distance between the names. What that is, is how many characters off one string is from another. So the name Barack Obama compared to Barack Obama has a distance of zero; if I said President Barack Obama, you have to change 10 characters to get that to match up. It's a really crude way of doing it, and I do not recommend it, because it's just wrought with peril. But let's get out of this scary space — it's gotten better, I promise.
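Before leaving the scary space entirely, here is a minimal sketch of that Levenshtein comparison, just to show how crude it is.

```python
# Sketch of Levenshtein edit distance: how many single-character edits
# (insert / delete / substitute) turn one name into the other.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

print(levenshtein("Barack Obama", "President Barack Obama"))  # 10
print(levenshtein("Barack Obama", "Michelle Obama"))          # 8 -- "closer" than the true match: the trap
```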
So has the making of graph improved in the age of Lms oh my gosh yes it has just unequivocally um so i gave this same bit of text um neo4j graph database company they um have released this really fun tool which you can get at this web page here and basically you can just give it text you can give it a pdf whatever and it will create the graph for you which is super handy i wish that existed when i was trying to do this a few years ago okay and so what you have here this is the entire graph of the very same text and you'll notice it's much smaller like before you know this is what my graph looked like okay so lots and lots of nodes and relationships and a lot of them really bad. Now this is what the graph looks like. Not only that, but if you see these nodes that are kind of like a light pink and the text has an underscore, like you'll see Barack underscore Obama, that's chunks of text. That's not even actually. It's just Neo4j saying, aha, I got this bit of data in and it's a chunk and this is my chunk. You'll see though that there's Hillary Clinton and Osama bin Laden are both in this graph. So it actually went through, it did some sort of prioritization on what should be in there and came up with a much smaller graph. So let's see. They actually have a nice little question answering bot built in. And so I asked how was Barack Obama involved with Osama bin Laden and it gives a correct answer just based off of the graph. How was Barack Obama involved with Hillary Clinton? Correct answer just based on the graph. So the point here is that yes, this graph is smaller, but it is better because the data is higher quality. Okay, it's got relationships that are higher quality and text that's higher quality. So what goes into the graph matters. You get better answers. This is why you really need to think about entity resolution on a graph. But I'm getting ahead of myself. I decided to play with this a little bit more. And so going back to that survey article I was talking about, you remember this? I said this was the article to end all articles survey of all the mathematical approaches that could be used for RAG. Okay. So I took that entire article and I threw it into that knowledge graph creator that Neo4j put out. And this is what it made. This is a portion of the graph. This isn't the whole thing. But it was kind of fun. And so I wanted to ask some questions of that article because this article is like 30 some odd pages long. So let's get a summary of the article. And, you know, I said, what are the three most important takeaways from this paper? And it gave them to me and they were great. I loved it. Okay, so let's ask some more questions. What are the top five methods for RAG? Okay, because I really wanted to just cut to the chase. Tell me what should I use? Just give me the answer. And I'm sorry, but I don't have enough information on the top five methods for RAG. Okay. Not exactly what I was hoping for here, but, you know, it's kind of fun. Like I clicked that little details button at the bottom of that window and it told me, you know, where it was getting that information from and you could go see it in the graph. Really useful tool for experimentation. But let's get into this just a little bit more. Because unlike that Wikipedia page, this article is highly technical. And you remember I said technical terms were pretty problematic. Okay. So I'm just looking at some of the nodes that I created. And I'm like, hmm, okay. 
So in this top bit, there's a node called edit-sum, and then there's another node called edit-sum. Now, those are probably the same thing, but these are highly technical terms that the language model did not know what to do with. That would be a problem if I was trying to ask the language model a question about edit-sum and expecting to get back a good answer. Then I've got these retrievers: I've got dense retriever and sparse retriever, which are related but different, but then I have retriever and I have G-retriever. These are probably the same thing. We need to have better data in our graphs. Okay, so, better data in our graphs, looking at the QA bot. Somebody famously said recently: knowledge graphs and entity resolution, like peanut butter and chocolate. I love that quote. So we have to, have to, have to consider cleaning up our graphs for RAG. And a great way to do that, depending on your data, is with Senzing. I want to direct you to a couple of articles. The first article here, Paco wrote, about entity resolved knowledge graphs. And then I wrote a follow-up article about doing GraphRAG with knowledge graphs, entity resolved knowledge graphs in this case. So, looking a little bit at it: we used the same data set, three sources. SafeGraph, which is data about places and locations and businesses and things like this. The Department of Labor wage and hour compliance actions, which look at Department of Labor grievances or violations by businesses. And then the Small Business Administration, which was giving out these PPP loans to various businesses. So this is a fun set of data that you can try to link together. Paco linked it. Paco ran this data through Senzing, and his results are on the top. I'm going to talk about these schemas here in a minute, but I just wanted to show you what it looks like when you put this data into Senzing and the kind of information you get out. For instance, this is just G2 Explorer, which is one of the tools that comes with Senzing and lets you look at your data. So I just came up with this: show me this particular company called Desert Springs Landscaping. Now, this particular data set is scoped down to the Las Vegas area, just to keep it within the free tier of Senzing. And okay, great. So I've got this result that comes back, and you see it's got this entity ID. If you look to the right, it says six relationships were derived from this data. So let's get into that a little. Using that entity ID, I can then back out some more information about Desert Springs, and you can see that they have an entry within that PPP loan database as well as SafeGraph. Now, if you look at that, there are some variations there. For example, let's look at the business address. You can see that the zip codes are different, because one of them is using the full five numbers plus four numbers and the other one is just using the five. One of them has LLC at the end, the other one does not. You could probably get away with linking these things via text, but it's not going to make you real happy at scale. That's not going to be something that works for more complicated cases.
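To make the point about why name-and-address string matching alone gets unpleasant, here is a small illustrative comparison using only the Python standard library. The two records are made-up stand-ins for the kind of PPP and SafeGraph rows being described, not the actual data from the articles.

from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    # crude similarity ratio between two lightly normalized strings
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

rec_ppp = {"name": "DESERT SPRINGS LANDSCAPING LLC", "addr": "123 Example Rd", "zip": "89117-4501"}
rec_sg  = {"name": "Desert Springs Landscaping",     "addr": "123 Example Road", "zip": "89117"}

# Compare on several features, not just the name; one noisy field should not decide a match
scores = {
    "name": sim(rec_ppp["name"], rec_sg["name"]),
    "addr": sim(rec_ppp["addr"], rec_sg["addr"]),
    "zip":  sim(rec_ppp["zip"].split("-")[0], rec_sg["zip"].split("-")[0]),
}
print(scores)

Hand-rolled scoring like this falls apart quickly once names, addresses, and identifiers start disagreeing in realistic ways, which is the argument for purpose-built entity resolution that weighs many features together.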
At the bottom, you see some possible relationships. So, Desert Springs Pools and Spas, Inc., and you also see Desert Springs Renovations there. We can all go with the idea that these are probably related. We've got Desert Springs Group. It would make sense for all of those types of businesses to be related. Turns out, if you click into this and look at Desert Springs Pools and Spas, yeah, it's probably all related; you can see that the address is the same. But again, if you're doing this at scale with, say, business names that have absolutely nothing to do with each other, you're not necessarily going to get all of the correct information. I'm going to show you an example of that. But what I did was take that data. Now, usually when you do entity resolution, you have a whole bunch of entities that you collapse into a single resolved entity, which is what Paco did, and it's shown in this top graph here. So this is Mandalay Bay, and we've got all kinds of different entities that are related to Mandalay Bay because they're owned by Mandalay Bay. You can go through Paco's article; I strongly recommend you do that. I did it slightly differently, with this hub and spoke approach. Because I knew I was going to throw this into a large language model, I had this hypothesis that if I had more relationships within that graph, I was more likely to be able to ask questions and get more detailed answers out of it. The jury's still out on whether that hypothesis has borne out, but I'm going to show you some examples. So this is just a little bit of the graph as I assembled it in Neo4j. Let's do some queries now. I had the data both in SQL, so just tabular data, and I also had it in Neo4j, and I decided I was going to ask questions of that data. So here's a question, and I'm using Python and LangChain here just for interfacing with the data. I said: find all references to Union Cabs in all of the tables. And the output said that it found one reference. I had three tables, one table for each of the SafeGraph, Department of Labor, and PPP loan databases. So, three tables in my database, and it said that it could only find one instance of Union Cabs. Well, spoiler alert: that's not true. Next question: how many violations does Union Cabs have, meaning Department of Labor violations? It says it appears that Union Cabs does not have any violations recorded in the database. Wrong. Let's actually now look at the graph around Union Cabs, and what I see here is, oops, there are actually other entities that are not named Union Cabs that are part of this database. This is one of the great things about a tool like Senzing: it's going off of lots of other data points, other than just a name or an address, to try to say, are these the same thing? So we have LVCabs, we have NLVCabs, we have ABC Union Cab Company. They all happen to be cab companies. None of them are in existence anymore; they've all shut down. But I said, tell me about Union Cab Company, and you can see Union Cab Company, also known as ABC Union Cab Company, was located at such and such an address, operated in the taxi industry; I got some information. And you can see that it also has other related entities, such as Vegas Western Cab and ANLV Cab Company. So, automatically, you can see I just got a lot richer information by linking my entities here. Super cool. Okay.
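A hedged sketch of the kind of SQL question-answering setup being described, using LangChain to have the model write the query and then executing it against the three tables. The database path is a placeholder, and the exact chain helpers move around between LangChain versions, so treat the imports as assumptions to check.

from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

# hypothetical SQLite file holding the SafeGraph, DOL, and PPP tables
db = SQLDatabase.from_uri("sqlite:///vegas_businesses.db")
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# The chain writes a SQL query from the natural-language question...
chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "Find all references to Union Cabs in all of the tables"})
print(sql)

# ...and we run it ourselves so we can inspect what it actually searched for
print(db.run(sql))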
So then let's ask: how many violations have entities that Union Cab Company is related to been involved in? Aha. Now we have our answer, in blue at the bottom: entities related to Union Cab Company have been involved in a total of six violations. That is the correct answer. So, because we were able to take this tabular data, which appeared somewhat interlinked and somewhat not, and resolve the entities in it, now our LLM is getting correct answers, whereas it wasn't before, even though we were doing RAG; we were doing RAG with SQL before. We could have done RAG before and tried to link the entities manually, just based on name or address or something, but it's not a great approach, because as I showed you, those addresses can vary. The names can vary. And think about people: let's say you've got records in your database with Elizabeth Smith and Beth Smith, or, you know, sometimes Jack is the nickname for John. How are you going to get that? You can't have an endless dictionary of all the possible permutations of every single name. And then if you start talking about names in other languages, they can change based on somebody's age and where they're at in life, and this gets incredibly, incredibly complicated really fast. So if you are not doing something to somehow create that higher quality data in your graph, you are going to get the wrong answer in your RAG. So let's wrap that up really quickly. We should give serious consideration to how to achieve the best RAG performance. Now, what I showed you very early on, in that core data set, was using vectors, meaning word vectors. We have graphs. We should probably be looking at hybrid solutions using both graphs and vectors. And you can make vectors out of graphs too, so maybe we could bring that in here as well. So don't think of your RAG as an either-or thing; think of this as a both-and thing to get the right answer. Second, it doesn't matter the problem, graph or otherwise; hopefully at this point we're all on the same page here: if you have better data in, you get better answers out. And as I hope you've seen, entity resolution is key to fighting that GIGO problem within RAG. The last thing I want to say is just about the rapid pace of development in this area. There is code that I wrote for generative AI a month ago that doesn't work today, because the tools are changing that fast. This has a lot of implications for graphs and otherwise. But what I want to leave you with is that these tools are evolving all the time. They're getting better all the time. Stick with it, even when you do the same thing I do and pound your head against your keyboard: oh my gosh, they changed the package again, and now my dependencies are all broken. You know what? It's still giving you better answers. And so I just want to put a shout out to the people who are developing these tools out there, particularly in the open source community, and encourage people to get involved in the open source community and help make these tools better. But yeah, at this point I have come to the end of my talk, and it is question time. So Paco, why don't you come back in here? Let's do some questions. Wonderful, thank you very much, Claire. That was fantastic.
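For comparison, the graph side of the same experiment can be wired up with LangChain's Cypher question-answering chain over Neo4j. This is a sketch under assumptions: the credentials are placeholders, the opt-in flag and class location vary by release, and the resolved-entity relationships have to already be in the graph for the multi-hop question to come back correct.

from langchain_openai import ChatOpenAI
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
llm = ChatOpenAI(model="gpt-4o", temperature=0)

chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    verbose=True,                    # print the generated Cypher for inspection
    allow_dangerous_requests=True,   # newer releases ask for this explicit opt-in
)

print(chain.invoke({
    "query": "How many violations have entities related to Union Cab Company been involved in?"
}))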
And I have a quick question for you, just on that last point. Starting out, the tool from Neo4j that automagically constructs a graph, it's more of a low-code kind of thing, but then you showed going into using LangChain. Along with the idea that these libraries are evolving very rapidly, as you were saying, what's the level of difficulty of working with them? Are they really approachable right now? Are there any caveats you could give about that? Because I think that might help a lot of people trying to get into this. For sure. Yeah. So that Neo4j tool that I showed, that was put out by kind of the research arm. It's an experimental thing, very much in beta, but it's completely no-code. All you do is go to Neo4j and you can spin up a free Aura database, or AuraDB as they call it. And, is there any code demo? Yes. Okay, we're going to come back to that one. So go, yeah. I don't know how to bring that question back up onto the screen, so I'm going to need some help there. You got it. Okay, cool. So yeah, all you do is drag and drop your file in. It creates the graph, then there's a button you can click to visualize the graph, and then there's the question bot right there, so you don't have to program anything. It's super slick. What I showed for that thing with SQL and the graph, I created the code myself, and I think that's actually going to be a segue to that question, if you can put that question up on the screen. So yeah, we had a question from Greg. There we go. Yes. Okay. So Greg, great question. There are the two links, and I think the slides are going to be made available. I am going to quickly scroll back to that, though, just to get those links on the screen for people. Okay. So these two links: the top one there is Paco's article, the bottom one there is my article. Both have code in them. Mine, I can't remember, Paco, does yours have a GitHub repo? Mine has a GitHub repo with it, and you can run it top to bottom, starting with the raw data, running it through Senzing, and then there are Jupyter notebooks, and it goes all the way through the whole thing. So absolutely, code exists. Another question just came in; let's see, let me go back through. Christine had asked about support for RDF, since we're working in graphs. I'm going to speak from the Neo4j perspective, just because that's the one I'm most familiar with, but RDF can be pulled into any number of graph databases. Neo4j has this thing called neosemantics; it's a package that works with RDF. So yes, absolutely doable there. If you go to my GitHub page, and I'll tell you that it's just github.com slash CJ2001, there's a whole series there called Bite-Sized Neo4j for Data Science, and if you go in there, there are some videos and some content about how to create graphs from RDF. So that's another place you can go for that one. Fantastic. Let's see, Mitash had a question: is it preferable to have the knowledge base schema predefined, or use a tool like the Knowledge Graph Builder from Neo4j and just let it run wild? Okay, so if you're able to define your schema in advance, great, I love that. It's really hard, though, if you're talking about raw text. You could define a schema and you could say, I expect my verbs to be these things.
I expect my nouns, my subjects and objects, to be these things, and then throw away everything that's not those things. You could do that. You're missing data when you do that. The Knowledge Graph Builder is great at automatically detecting what that schema is and building it for you as you go. And the nice thing about graph databases is that you don't have to have a schema. It will create one for you, but it's not like SQL, where you say, I've got a table and it's got this schema to it. In graphs, it's schema-less, schema-free. So you can say, on the fly, okay, now I'm going to bring in a whole new thing that doesn't look like anything else in here. And that's fine. That's great. One question I have for you: probably a lot of people who are more familiar with working with, say, data warehouses and data lakes are used to having some notion of schema predefined. In your experience working on graph projects, how often are you handed a schema, ontology, or controlled vocabulary upfront? How often does somebody actually provide that for you when you're starting out? Rarely. Exactly. And I mean, it can happen. I've had people who've given me what they thought was a good schema, and a stack of PDFs, and then we realized that it's not a good schema. So I've had to go the other way: yeah, you said you wanted the graph to look like this, but you're actually missing the data that you're looking for by imposing that schema. So I am more in favor of bringing data in and finding reasons to filter it out than starting with a very prescribed schema. That's just me, though. So iterate, iterate as you start to find out. Oh, schema and ontology. Ooh, now we're gonna... Okay, so. This is a tricky question. This is a tricky question, I love it. Okay, so when we talk about schema, we're making the database people of the world, we're making their hearts go pitter-patter, right? Because we can understand that there's this concept of a table, it's got columns, and it's really easy for us to select this column where this thing from that table. That's what I think of when I think schema. Even when we're going into a NoSQL domain, we're still providing terms that we're searching over. When we're talking ontology, now we're getting messy. We're getting into language, and language, like I said, is messy. So understanding how things relate to other things in language is a little more complicated than that. But Paco, I really want to hear your thoughts on this one. Well, it gets very messy, because people put so much work into trying to resolve standards for ontology and RDF and OWL and all that, and the more you dig in, the more you find it doesn't really resolve unless you're in a tight domain. And then Google came along and said, hey, we have our version of RDF, we're going to call it schema.org. So now we're just going to confuse the whole thing. I think there's a lot of wisdom to what you're saying: get your data in there and see what's actually there before you can really start to build out a data model.
I mean, maybe if you're working in catching bad guys and you have, like Friedrich Lindenberg had, that model of event link... sorry, risk-link-event, you can have some kind of use-case-specific model, but otherwise, good luck. Because you've also got taxonomies, which are different from ontologies and probably woven through them, and you probably start there, and you've also got different kinds of controlled vocabularies that get layered in. So yeah, it's tough. Bringing that back to RDF: this Barack Obama graph I tried to do, I was also trying to create a graph once with RDF off of Wikidata, and I wanted to create this graph of brewpubs in England, I think, or no, Ireland. And I mean, why not? I like beer. And just sitting there and trying to understand everything I needed, what relationships: there are Q values and there are P values in Wikidata, one of them is the relationship and one of them is the node, and I always get it backwards, which one is which. But I had to know absolutely every single relationship, every single Q value that I needed to look for. I couldn't just flexibly look for it based off of language. And there were so many of them. With Barack Obama, I think I wound up coming up with a list of like 80 different relationships, and I still probably didn't hit all of them. It's just that when you're talking about a very prescribed schema, where you have these P and Q values, you're really locked in to what it is that you can type, versus when you're talking about language and I can just say, tell me everything I care about regarding pubs in Ireland, you're going to get a lot more information back and not be locked into a very specific type. Ooh, what do we have here? Oh yes, I love it. The joy of pubs. So yeah, it's a really great question, thank you for that one. But yet, I'm going back to that point about the importance of getting a data model, getting some kind of ontology. When you are using GraphRAG, having good definitions between the nodes, good descriptions, semantic descriptions on relations: what impact does that have if you have just sort of a haphazard ontology versus a good one? Yeah. And that's the hardest part, too, right? The relationships are the hardest part. Named entity recognition, that's a fairly mature field in NLP, but getting the relationships that link those entities together is really hard. There was that example where I was talking about using my ugly knowledge graph to figure out where Barack Obama was born, and I had to know inside and out what relationship type I was looking for in order to do that query properly in Cypher. Versus, if you have a language model, and I'll need to sit down and actually use that graph and throw it into the same approach I used in the blog post, having a language model that would understand: if I said, where's Barack Obama from, okay, well, what verbs, what relationships could apply here? So bear, the lemma of born, could apply; be could apply in certain cases, like if it's a place, but if it was Barack Obama be president, okay, that's not going to be it. So letting the language model understand and have a little bit of flexibility and play over that schema that's already in this ugly graph could potentially help.
There's a question Alan has, which is very closely related here. Can you generate a fuzzy ontology that includes nearest-neighbor terms? If so, would it be helpful as a springboard? Okay, yes, and I can add something, because we talked about this before. I think you just need to put that one out there; that was a great one. Well, with the contextual... turning the nearest neighbors into a summary, what was that called? There is an idea we're working with. A little bit of a preview for a couple of upcoming episodes, but one of them is the idea of how do you link structured and unstructured data. If you look at the graph, you look at a node and understand a textual description for it, like: here is Bob Smith; Bob Smith lives at 101 Main and is a member of XYZ organization. If you can start to give a contextual summary for a node somewhere in the graph, then you can use that as part of your embedding model for how to draw connections between what the prompt is, what the text chunks are, and what the elements in your graph are, but have them contextualized. So that if you're talking about NLP, do you mean natural language processing or do you mean neuro-linguistic programming? You can start to make those kinds of distinctions by having better context. The other thing, though, and a preview for maybe if we can get David Hughes and Amy Hodler up here, is the idea that if you take a lot of embeddings of your SVOs, the subject-verb-object triples that you generated as you were showing, and just put those somewhere kind of fuzzy and have embeddings of them, then it's a numbers game. The real relationships that tend to happen between certain types of subjects and objects will start to pop out. So we'll show that a little bit later on, but there's some interesting work in the space of relation extraction. And yeah, you're calling it fuzzy; you might want to call it associative, you might want to call it probabilistic, but it's all along there. I like probabilistic. Let's see here. Okay, great. Okay: you shared how the graph ML was able to get superior performance compared to word vectors. Can you elaborate on that? What features did you use? Did you use the graph embeddings as part of your feature set? Excellent question. Thank you, Matesh. I did not use anything other than this approach of FastRP. Basically, you can think of it as this Monte Carlo approach where you've got a node, you start at that node, and you randomly hop out to other nodes and see where you can get to. You do that a handful of times, see which nodes you hit, and mathematically use that to create the vector. There's nothing about any node properties that goes into that approach; it literally is just what's connected to what else. So that's what I used, FastRP, just because it's incredibly, incredibly simple, and I was just wanting to demonstrate the power of just looking at what's connected to what else. But there are more sophisticated embedding approaches for graphs where you do bring in properties, GraphSAGE being one of them; there's a handful out there based on neural networks. And so then you are embedding property information, but that's not what I used in that particular one. If you want to know more about that one, again, go to my GitHub; my GitHub alias is CJ2001.
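For reference, here is a sketch of running FastRP through the Neo4j Graph Data Science Python client. The projection name, node label, and relationship type are placeholders rather than the actual graph from the talk, and the client API may differ slightly across GDS versions.

from graphdatascience import GraphDataScience

gds = GraphDataScience("bolt://localhost:7687", auth=("neo4j", "password"))

# Project part of the stored graph into the in-memory GDS catalog
G, _ = gds.graph.project("businesses", ["Entity"], ["RELATED_TO"])

# FastRP only uses topology: repeated random projections over what connects
# to what else, with no node properties involved
embeddings = gds.fastRP.stream(G, embeddingDimension=128)
print(embeddings.head())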
I have a whole bunch of links to blog posts and stuff in there, and you'll actually find a big blog post on that particular approach, if you want some more information, along with the code. Fantastic. How precise do natural language queries have to be to query knowledge graphs effectively? How are vague queries handled? Oh, this is a great question. It's a can of worms, too. The answer is: it depends. It depends on a lot of things, for example, how good the prompt is. When you're typing something to many of these bots, they've been told: you're a helpful assistant, you have information on XYZ, things like this. And sometimes they include what we call few-shot learning. So, for example: if you're asked this question, here's a good answer. When you give the prompt a lot of information, you can get away with a lot more on your queries in terms of how vague they are. But if you have not told that prompt much, or maybe it's a less sophisticated model, you have to be much more specific. Now, if you read my blog post, you'll see that I was actually pretty specific in what I asked, because I did not spend any time doing anything with the prompt. The prompt was very basic; I think it was basically: you are a helpful assistant. Because I didn't want to worry about prompt engineering and prompt tuning and things like that. So you'll notice, in that Union Cabs example, I had to tell it Union Cabs and companies related to them in order for the bot to create the correct Cypher query. And I will tell you, that took me some time. I don't want to gloss over it; you might read the blog post and think, oh, she just thought to ask that question immediately. No, I actually spent a lot of time futzing around with the bot just to see what I needed to do for it to generate the right Cypher. That's probably a whole blog post in and of itself right there. But yeah, it does depend. In the case of a narrow and well-defined domain with a high requirement for data reliability, do you have any recommendations for tools and pipelines to map a schema-less knowledge graph to an ontology? And how about converting natural language questions into knowledge graph queries reliably? Okay, let me take the second half of this question first: converting natural language questions into knowledge graph queries reliably. There is a built-in function in LangChain, which is why I picked that one, that will do knowledge graph RAG, and that's in the blog post; if you go to the blog post, you'll see that code there. The first half of that question, I do not have a recommendation for tools on that one. Do you, Paco? Yeah, I'm going to put something into the comments right now, but there is one company I know of that's been doing a lot of work in that area called WhyHow, so whyhow.ai, and they have a tool, a knowledge graph builder; you can import your triples if you're doing construction outside, like we were just showing. But that's one of the areas that they're looking at: if you have a well-defined domain and you have some strong notions of ontology, how things should fit ideally, how can you try to align the data that you have? And yeah, there are some others, definitely. And also, this area of entity linking is something that people haven't been paying attention to for a while, if you look at the spaCy pipeline support so far, but there's some new work. So I think we'll see more coming out.
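Stepping back to the few-shot prompting idea mentioned a moment ago, here is a sketch of a Cypher-generation prompt that bakes in one worked example. The schema, labels, and example query are hypothetical; the point is only that showing the model a good question-and-answer pair makes vaguer user questions far more workable.

from langchain_core.prompts import PromptTemplate

CYPHER_PROMPT = PromptTemplate.from_template(
    """You are a helpful assistant that writes Cypher for a Neo4j graph.
Schema:
{schema}

Example question: How many violations does Union Cab Company have?
Example Cypher:
MATCH (c:Entity {{name: 'UNION CAB COMPANY'}})-[:HAS_VIOLATION]->(v:Violation)
RETURN count(v)

Question: {question}
Cypher:"""
)

print(CYPHER_PROMPT.format(
    schema="(:Entity)-[:RELATED_TO]->(:Entity), (:Entity)-[:HAS_VIOLATION]->(:Violation)",
    question="How many violations are tied to companies related to Union Cab?",
))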
Yeah, that was so hard, and that was why it was so ugly. And then I was crying tears when I saw how easy it was to do in ChatGPT this morning. The other thing that's been on the table for a long time, too, is coreference. It's a really important problem when you're doing prompts, and it's kind of been ignored in NLP for a while, except for maybe CoreNLP. Yeah, I think Tomaz Bratanic wrote a really nice article about coreference. If you go look him up on Medium, he had, this is a couple of years ago now, but it was a good article. Yeah, and there's another thing here, too, about AskNews. So yeah, a little shout out: AskNews is a project that comes out of emergentmethods.ai, and I'll just put a link in there for it. They're looking at fine-tuning models like GLiNER for NER, and also, what was it, Phi-3 Mini, for looking across the world at different news sources and being able to understand bias in news, but also being able to put together a nice precise report based off of, as Claire was showing here, going out, grabbing the news sources, and blending them together into a graph. Good, okay, yeah, Tom got it too. Okay, oh, Roberto. This sounds like a Roberto I know. Hi, Roberto. I'm a musician. One of the hardest things to do is to find people that might like my music. Beyond text analysis, are there tools to help with graph analysis of music? Interesting. Yeah, hey, Claire, I'll let you take it. There are tons of data sets out there. Have I seen them graphed? I don't know that I've seen that data turned into a graph, but it could be a lot of fun, especially as artists co-produce with other artists, or maybe with different production houses and whatnot. I think there could definitely be a lot of fun there. Tell you what, Roberto: challenge. Show it to us. Make it. Yeah. Okay, for whatever reason my search engine is not doing too well right now. There has been some interesting work published by Spotify on how they worked with graph neural networks and doing graph representations of preferences. The hardest cases they had were with audiobooks, but it also applies to what they were seeing in user preferences. Short answer is they spent a lot of money to be able to get just a little bit better than they'd been doing before. Absolutely, let's go. And I mean, there are other things you might look at, like what Pandora has done. And if you think about Spotify and their recommendation engine, they're really good at recommending music to you, so you've got to know that there's a graph in the background there somewhere, right? And thank you, Fernique had posted here: a friend of ours, Dennis, has a great project. I've been wanting to post this online. Oh, yes. It's about Afrobeat. So that's actually a really good example there. Very cool. And while we're at it, let me put in a link for GraphGeeks. So Dennis is right here. Yeah, there is a group called GraphGeeks.org, and definitely give it a shout out. It's an excellent community; if you have questions about getting involved in knowledge graph projects or learning about these kinds of technologies, it's a great place to go and ask questions, too. Yeah. And both Paco and I hang out there. They have a nice Discord server and just a lot of really super helpful people. Awesome. Alrighty.
Is it time? I think we should do a drawing. I think it's time to do a drawing. I can't wait. All right, here we go. Okay, big bucks and no whammies, spinning, random numbers are being generated left and right. Twelve! Our lucky winner is 12, which is a great number. I love 12. And what do we... oh, on the channel, let me get back to there. Okay, our lucky winner. Congratulations to Christine. Christine Kieran, who has just won. And drumroll here, I should announce, from Kaylee and Suzanne also: can we announce which game we're going with? Which game has Christine just won? Actually, no, wait, I'm going to take that back. I was instructed about this; I remember my instruction. We're going to have to figure it out; we'll be back with you. We have a set of games, but we also ran into some restrictions, like where we can get what shipped. So we have a set of board games that are kind of on a data science theme, not all. And we're going to have to talk with Christine and find out, geographically, where we can ship what. But again, congratulations to Christine, and thank you very much for participating in our draw. Also, thank you very much, everyone, for the wonderful questions here in the Q&A. Definitely, thank you again, Paco and Senzing, for hosting me. Wonderful talk, Claire, thank you so much. Thank you. If you would like to get in touch with us and learn more about entity resolution, we have a QR code there that has the URL, and also there's a way to sign up for our newsletter. We'll be having more of these, usually about every month, maybe even more often later on. But we'll be having our event newsletter. I know that, Claire, you've got some talks coming up, it sounds like, at Data Day Texas in January. I will be at Data Day Texas, yes. Great conference. Everybody should come. Huge graph focus. Me too. I'm really looking forward to that one; that's one of my favorite conferences. And I'll just add, too, if you want to get hold of me, we'll be doing more of these, so we'd love to see you here. Please get in touch. And with that, thank you, Claire. Thank you all. Thank you. We'll see you next month. | GraphPowerHour EP2 GraphRAG good bad ugly Paco Nathan Clair Sullivan | 4,395 | Senzing | 20241010 | Thanks to the GenAI revolution, more people than ever are turning to graphs to solve problems like retrieval-augmented generation ( GraphRAG ) and minimize hallucinations common in GenAI models. Graph structures excel at representing complex, interconnected data—critical for improving the contextual understanding that generative AI requires. By leveraging graphs, AI systems can better discover relationships, link entities, and enrich data, making them more effective at handling real-world complexity. As AI evolves, graphs are emerging as key enablers for more sophisticated, context-aware insights.
However, applying graphs to back GenAI applications is not without challenges. Extracting nodes and relationships from unstructured text via natural language processing (NLP) is difficult, as language is rich in context, idiomatic expressions, technical jargon, and implicit meanings that don’t neatly fit into graph structures. While large language models (LLMs) reflect general language usage, they often fail to capture subtle nuances and technical details, leading to oversimplified or inaccurate graph representations. This disconnect between the richness of language and the structured nature of graphs can hinder a system’s ability to generate significant insights.
In this talk, we will explore the use of graphs "from the trenches," examining real-world challenges encountered when applying them in complex systems. While graphs offer immense theoretical potential, translating that into practical applications often reveals unforeseen difficulties. From handling incomplete, ambiguous, or unresolved data to scaling large, interconnected datasets, the journey is rarely straightforward. We'll also discuss the challenges of integrating graphs with AI models, where the structured nature of graphs can clash with the messiness of real-world information. These practical insights will highlight the gap between the idealized use of graphs and the realities faced in the field. | 2025-01-05T09:32:44.133203 |
https://www.youtube.com/watch?v=8HxOWa7myYQ | Hi, everyone. Welcome. My name is Paco Nathan, and welcome to our first ever episode of Graph Power Hour. This is a new webinar series which we're launching here at Senzing, and we're really, really looking forward to talking with you all: presenting speakers, interviewing people, getting a lot of live Q&A, but really understanding what's going on in the knowledge graph community, which is adjacent to entity resolution; we call it entity-resolved knowledge graphs. So, again, my name is Paco Nathan. We'll be doing this series; I'll be your host. And probably about every four weeks or so, maybe we can go a little bit faster once we get going. We've got a new platform here that we're working with, and we're all just learning it, so please be advised, but I think it'll be kind of fun. Now, generally speaking, I will be interviewing other guests or having them make presentations, and then we'll have live Q&A along with that. For this first one, I wanted to give a talk, so I will be the presenter this time, but next time we'll have some other people. I also want to mention that, okay, we've got a little bit of a surprise. We weren't going to put this on social media anywhere, but please stay to the end. We're going to have a drawing, and so one lucky winner is going to get a prize. We'll spin the number wheel and show it, and we'll let you know through email afterwards, but we'll give details toward the end, basically. All right. This episode here, in our first ever Graph Power Hour, we wanted to provide a talk, which has turned out to be kind of a popular topic, about GraphRAG: understanding GraphRAG and how to enhance applications based on large language models by using knowledge graphs. RAG has become a really popular topic in terms of preventing hallucinations, providing updates and facts and context sensitivity for how you're working with LLMs. And then GraphRAG has come along a couple of years later to show how to use graphs to get even better facts injected, better recall, for instance, better lift on the downstream applications, and we'll cover that. I wanted to give this talk, though, because when you look at what the graph part of GraphRAG means, there are at least a half dozen different definitions, and some of them are complementary or conflicting. It really depends on who you talk with, or rather, who is talking. So this is a presentation about GraphRAG, trying to break it down, understand where the parts are, understand what graph actually means in all those different definitions, so we can compare and contrast and start to understand some design patterns for how this is built. The main takeaway, though, is you do need a graph. So how do you get that? And once you have the graph, what do you do with it? These are some considerations, and in particular, when we're looking at use cases for AI applications that might be in regulated environments, or someplace where you really have to be careful about audits and provenance and evidence and all the rest, we wanted to give some air time to approaches for those kinds of more mission-critical applications. And so that's a lot of the work that we do at Senzing. Really, about 60% of our end-use cases are in air-gapped environments, so highly regulated environments.
And so I'm particularly interested in how people work with knowledge graphs and with entity resolution for enhancing data quality, in particular for the graphs and for the downstream AI applications. All right, so we'll throw in a poll at this point, if I could ask for our first poll. Ah, great, the poll is active. So we'd love to know who you are; we want to try to understand our audience here. We've got just a few different polls, but this one is real quick, multiple choice, and we'll show some results later, just to understand who's in our audience and really what kinds of themes matter. And while you're doing that, I'll give a few blurbs here to introduce myself. For what it's worth, about 40 years ago, against all advice, I decided to study AI in graduate school, and then I went and did a deep dive, based on a friend's recommendation, into something everybody else was telling me would just never pan out, called neural networks. And so I got involved in doing neural network hardware accelerators in the late 80s. There was this thing called the AI winter; it's actually happened a couple of few times. I've done a lot of network engineering and other software engineering throughout the times when people didn't really want to talk about AI. And then, oddly enough, back in the mid-aughts, I got a strange phone call out of Seattle from a friend and ended up being a guinea pig, or rather they bounced some questions off me, and then I was later pointed toward a website to sign up. So I became one of the very early users of something called Amazon Web Services, and my teams were guinea pigs for evaluating new services coming out. We were a case study for what became Elastic MapReduce; we had run one of the largest Hadoop instances on EC2. So I really got to see cloud evolve from the ground up and be involved in those communities. A lot of early work with big data and leading data science teams, back before that was really even a term. Eventually I got into Databricks when we could all fit in one room, and was the director of community evangelism for Databricks back when Spark was going through hypergrowth in like 2014, 2015. From there, I went to O'Reilly Media and was director of something called the Learning Group. We had a team of people who were really good at working in conferences and teaching technology topics, and our team would go and coach other authors coming in, to help them improve tutorials or hopefully become a bestseller, that kind of thing. Basically building a lot of learning materials. These days I am working at Senzing. I'm in developer relations for the knowledge graph practice area, and so again, really focused on entity resolution, but also all the areas that are adjacent to entity resolution, like where is it getting used and where do these practices need to be coupled with other practices. Okay, let's see. I wonder if we're ready for the poll. Can we show? Overviews, good, awesome. Do we have the results ready for the first poll? Okay, awesome, thank you very much. Cool. Okay: 20% software engineer, 20% data science, excellent; 8% data engineer; ops, 6%; product manager, 6%. Okay, excellent. This gives us a really good idea of the composition, and it fits really well with the kind of topics we're going to go into. I mean, this presentation in particular is a lot of pointers out to primary sources, or other tutorials, or just whatever kinds of resources and communities.
If you want to dive into GraphRAG, hopefully you'll hear some really good starting points to jump off from. And for the slides that we're showing, we're going to follow up with slides that have full links that you can access as a web app and click right through. I often hear people say about this style of presentation that somebody will come back about two weeks later saying, hey, I just went through all the slides and all the links. So hopefully there's, if anything, too much information there. I want to start out with an overview of what retrieval augmented generation is, what RAG is, then what GraphRAG is, and how these parts all fit together. So let's see here. What's in a name? GraphRAG is something that's definitely been in the headlines a lot around AI. The notion is that you use a knowledge graph to enhance something called retrieval augmented generation, RAG, for grounding the results that come out of a large language model, an LLM. So basically, LLMs are, I like to think of them as, sort of syntax engines. They're really good at recognizing, having been trained on very large amounts of, effectively, syntax: sequence to sequence with attention heads. And they're also good at generating syntax, to that extent. They're not especially good at reasoning; there are other methods that are fantastic for reasoning that can be paired with them. But what we've seen is that large language models can be really fun for generating things that may or may not be entirely accurate. If you're doing in-painting with images, that's really fantastic; it looks great. If you're trying to give answers for somebody preparing a legal brief, that's probably not a good idea, as has been shown in court. So people want to use LLM applications, but they really want them to be grounded in a particular set of facts that might be specific to some use case or domain context. And so we'll talk about RAG, but GraphRAG is an enhancement over that, of really how do we get the facts in there. And it's been gaining a lot of momentum since really about a year ago. It was September 2023 when we started to see some of the early mentions of GraphRAG. There may have been others that I can't actually find in the literature; I'd love to know if there are earlier references to it. And definitely, by the way, I'm trying to provide primary sources. If you have better primary sources to recommend over what we're showing here, I'd love to hear that. One other thing: GraphRAG means a lot of different things. We'll cover that. It also has a lot of different terminology, so you'll probably see GraphRAG with or without a space, you'll see KG plus RAG, you'll see graph-enhanced RAG, you'll see GRAG; you'll see a lot of different names that are, roughly speaking, talking about a similar thing. And so, speaking of primary sources, when we look at RAG, and we'll dig into what RAG is or isn't, the dates on this go back about four years. Kelvin Guu at Google and Patrick Lewis at Facebook: these are two of the papers. If you look at the contemporary papers, these first two are the ones that are usually cited as some of the earliest primary sources, and the work out of Google and Facebook on those is really compelling; it's been picked up a whole lot. Early primary sources for GraphRAG, as I said, only really date back about a year. As far as I can find, the first mention that I've seen so far was from September 6th, from Vesoft, the folks who do NebulaGraph out of China.
And they had a demo that they launched in a blog post along with some open source code, and it was really super interesting. The demo wasn't working for a while; it may be up now, I'm not sure, but the blog post is linked here. And then other people picked up on it. I know that LlamaIndex and LangChain and others had instances of GraphRAG running in open source tutorials not too long after that. There's a paper more recently, out last month, by Boci Peng, which is, I feel, one of the better survey papers about what's going on with GraphRAG. So that's the other one that's linked here. And just as a kind of level set, this is a generalization, but I like to have some type of diagram, or textual diagram if you will, to work off of. I'm not saying that every application would be using everything on this diagram; it's more of an approximate generalization, but these are what you would expect to see in terms of the data flows and the different components that may be there. So when you're working with RAG, there are a couple of different categories of sources. Generally speaking, you'll see some structured data sources and some unstructured data sources. And that's a truism for working with knowledge graphs in general: you start out with, say, some schema, ontology, taxonomy, controlled vocabulary, whatever, some type of schema to try to organize. Then you bring in structured data sources, and then you bring in unstructured data sources. And there's kind of a Pareto ratio, an 80-20 rule: you'll probably see that something like 80% of your data is going to be unstructured, so that's actually a really big component. But it's important to start out from the structure and move outward. So again, at least understanding what your schema needs to be is really very important as a first step, and from there, bringing in structured data sources and then layering your unstructured data sources on top. An approach, not everybody does it, but an approach that we would recommend is: take your structured data sources, bring those data records into your graph, aligning them with your schema, your ontology. And then, we work in entity resolution, and so the idea is that if you have two or more structured data sources, and for each one of those you have two or more features of what would be PII, so private information, like names, addresses, telephone numbers, birthdays, employer, tax ID, whatever: you've got some identifiers that may or may not be unique, but we can kind of triangulate on them. So if you look across two or more data sets and you have these kinds of connected features appearing regularly in those data sets, then we use entity resolution to figure out what the consistent entities are that span across these different data sets. We like to think of that as a kind of semantic overlay. We're not changing the data records. Instead, we're sort of watching from the side and then overlaying some connective tissue on top, if you will. We'll say, okay, here are some entities that describe these five data records, and here are the relations between them that we think are appropriate, basically pointing back to evidence to support how we organize what's going on in the graph. So that's sort of the structure at the top, going from structured data sources into a graph. What you're going to see a lot in the GraphRAG literature really indexes heavily on the unstructured data sources.
So the notion is, in RAG, you will typically take some text and you'll chunk it, split it into, oh, I don't know, 512 characters or so many sentences or some sort of chunking factor. You'll take chunks of it, and then for each one of those, you'll run them through an embedding model, so that the text in that chunk is projected out into a very high-dimensional vector space. We can understand where all those different chunks of text fit in that vector space. That's what drops into, typically, a vector database. And so now we can come in with some query, some prompt, put it into that vector space as well, and figure out which chunks are the neighbors that we should supply to a large language model. That's the idea of RAG, but with GraphRAG, we're enhancing it with this graph on top. So one of the other things we can do, as you'll see in some of the examples that are run, is that as we're chunking that text, we can also run it through parsing: run NLP tools to parse that text into sentences and then tokens, and then apply labels, parts of speech, understand where the noun phrases are, the verbs connecting them, clauses, et cetera. From there, we can use techniques like named entity recognition and relation extraction to understand how to construct what is more and more being called a lexical graph. That is to say, we take the parse trees from parsing the text and stitch the common elements together, and you come up with a graph. And you can do some pretty interesting graph analytics on lexical graphs, to understand things like centrality: what's the most common phrase showing up across these documents? But it's a way of being able to stitch together from our structured data sources to our unstructured data sources. If you are using entity resolution, which I highly recommend, then one of the things you can do is derive what I would call a linguistic analytic asset. If you're familiar with WordNet, if you're doing NLP, that would be a good comparison. If you're not familiar with WordNet, I can just say it's kind of like a thesaurus: I can look up a word and find synonyms that are closely related to that word, and I can see that, okay, a cat is a kind of feline, and maybe a subclass of cat is kitten. So now I've got some structure, some hierarchy, and I can navigate that. Effectively, any thesaurus is a way of navigating a graph, or the semantics of a graph. If you have that, then you can perform something called entity linking. So you can take the graph that you've built out of schema and ontology and your structured data sources and entity resolution, and then take a context-sensitive entity linking approach to pull in what you've built in your lexical graph. And in this way, you can tie together a graph, effectively what's in your graph database, with what's in your vector database, have references from the chunks in the vector database in your graph, and use both of these together. That's kind of the gist; in general, if you blur your eyes enough, squint hard enough, that's what's going on in most of the GraphRAG approaches. And then downstream, once you've built out your graph and your vectors, when you get a prompt coming in at runtime, it can go a couple of directions.
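Before getting to the runtime side, here is a minimal sketch of the chunk-and-embed indexing step just described, using a small sentence-transformer model. The file name is a placeholder, and a NumPy array stands in for a real vector database.

import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 512) -> list[str]:
    # naive fixed-width chunking; production pipelines usually split on sentences or tokens
    return [text[i:i + size] for i in range(0, len(text), size)]

raw = open("handbook.txt").read()          # hypothetical unstructured source
chunks = chunk(raw)

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(chunks, normalize_embeddings=True)   # one row per chunk

np.save("chunk_vectors.npy", vectors)      # stand-in for loading into a vector database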
One direction is: you throw your prompt through your embedding model, you do some kind of vector similarity search to figure out which chunks are closest to the question being asked, you pull those out of the vector database, and you end up with a ranked index of, here are the chunks that I think, in priority order, should be fed to my LLM. You feed that to the LLM, it does its syntax engine thing and sequences together the different parts of the raw text into a response. Effectively, what you're doing is converting from a prompt in raw text and a whole bunch of text chunks in raw text into some kind of answer in text. That's the LLM trick. But we can make that better. One of the problems is right here: the vector similarity search has some downsides; there are definitely some trade-offs. So we can juice it up. We can make it better by taking the prompt and running it through our graph. We might generate a query, we might do other methods, but effectively we're going to come out with some elements of that graph that resonate, that are really close to what that prompt is talking about. And once we find those, we may be able to go in and use our thesaurus, our WordNet, et cetera, that we've constructed out of entity resolution. We may be able to do some fancy tricks on the graph side, such as a semantic random walk, which is to say: if I've got a node that says cat, I can walk out and find kitten and feline and lion and tiger and other things that are related to it but may not necessarily show up in the vector similarity search. So we can find other things a couple of hops out. We run some graph analytics. We build up this ranked index, because we have links between these nodes and the vector database. So we augment the ranked index with what we found out of the graph, feed that to the LLM, and hopefully get much better kinds of results. That's what has been shown; we'll give some papers about that. At the end of the day, a lot of what's going on with RAG is, roughly speaking, very familiar if you've done recommender systems work. In a lot of ways, one could overgeneralize and say that this is basically reapplying some of the techniques out of recommender systems to help improve AI applications. And frankly, look at what's going on here: you've got vector databases. Vector databases, of course, have been extremely useful for recommender systems, and for that matter, there's been a long history of using knowledge graphs to enhance recommender systems, search engines, and other kinds of discovery processes. Certainly, wherever you look at where RecSys is being used commercially at scale for very important applications, you're going to find knowledge graphs and very large practices. This is certainly the case at Amazon, Google, Microsoft, LinkedIn's economic graph, Meta's social graph, Pinterest, on and on and on. And I didn't even mention Twitter. I've known some of these teams, and they're doing fantastic work in knowledge graphs; we'll probably have some of these people on later in different episodes. But if you do want to dig into RecSys and what's going on there, here's a link to a white paper. I did some work with the recommender systems group at NVIDIA a few years ago; we did a white paper interviewing the top NVIDIA customers in recommender systems, talking about their views of where this is heading. And this was right about the time that RAG was starting to come out, and still a couple of years before GraphRAG.
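To make the retrieval step concrete, here are two hypothetical helper functions: a plain vector similarity lookup, and a graph expansion that walks a couple of hops out from the nodes linked to the top chunks and pulls in the chunks those neighbors reference. This is only a sketch of the hybrid ranking idea; it assumes normalized embeddings and a mapping from graph nodes to chunk ids, neither of which comes from the talk itself.

import numpy as np
import networkx as nx

def top_k_chunks(prompt_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 5) -> list[int]:
    # cosine similarity reduces to a dot product when vectors are normalized
    scores = chunk_vecs @ prompt_vec
    return [int(i) for i in np.argsort(-scores)[:k]]

def expand_with_graph(seed_chunks: list[int], kg: nx.Graph, chunk_of: dict, hops: int = 2) -> list[int]:
    # chunk_of maps a graph node to the chunk id it was extracted from
    extra = set()
    for node, linked_chunk in chunk_of.items():
        if linked_chunk in seed_chunks:
            for nbr in nx.single_source_shortest_path_length(kg, node, cutoff=hops):
                if nbr in chunk_of:
                    extra.add(chunk_of[nbr])
    # keep the vector-ranked chunks first, then append graph-derived ones
    return list(dict.fromkeys(seed_chunks + sorted(extra - set(seed_chunks))))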
So that gives you a little bit of background. Okay, I think it's good. Let's throw in another poll. Let's go to our next poll, and we'll do that while I dig in here on what graph actually means. Great. Okay. So GraphRAG has become quite a buzzword in industry, but the graph part of it actually means different things, and I wanted to break that down, because I think it's very important, especially as you're starting to build applications: some of these may fit with your use case, and you may need to mix and match. On the one hand, you can talk about graph being: okay, I've got a bunch of text chunks, I've got embeddings for them, I can look at the neighbors for each text chunk and see the distances. Here, let me actually use my hands a bit to try to describe this. If I have a bunch of different text chunks and they're in a vector space, I can look at their neighbors and find out how far away they are, and I can start to use those relations to build out a graph. This is something you can do with pretty much any embedding model; we'll show some examples of this later on. But the idea, particularly if you look at the Microsoft GraphRAG paper, that's what they're doing, or that's one of the instances of what they mean by graph: it's really building something out, actually in NetworkX, from the chunks of text. Another way, though, is to think: great, I've got a prompt; I'm going to convert it into graph queries, figure out which graph elements are closest to that prompt, then take some representation of those nodes and edges and properties and whatnot as strings and feed that into the LLM. And that works well, too. Definitely Ultima and others have been going down this kind of route. Another way is to say, okay, as I showed in the diagram, there's that whole workflow of taking the text chunks, constructing a lexical graph, and then understanding from the lexical graph what the best chunks of text are to feed into the LLM. Definitely, if you look at Neo4j, a lot of what they've done really exemplifies this, and we'll look at that in more detail. And another approach is to say, okay, I'm going to build a knowledge graph out of structured data, blend in some unstructured data, and then we will go through that resulting graph to feed whatever text we need into the LLM. If you look at, for instance, Esri with ArcGIS Knowledge, that's, I think, a good example of what they're doing there. And then we could say you could take any combination of the above, but use graph neural networks to expand out the nodes at any given point. So that's bringing in an entirely different technology, GNNs. They are expensive. They were all the rage from like 2019 to about 2022, until ChatGPT kind of wiped that out; I shouldn't say wiped out, but suddenly everybody changed their research descriptions. But GNNs were really popular. They didn't really show up in industry until a couple of few years later, but then in 2024, LinkedIn and Spotify both published papers about how they have revenue-bearing GNNs in production. So what I want to say is there are a lot of different meanings of the word graph, but you can mix and match them. We haven't seen enough of this built out yet, but there are some examples of being able to mix and match components.
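Here is a sketch of that first meaning of graph: connecting text chunks to their nearest neighbors in embedding space and materializing the result in NetworkX. The value of k and the similarity threshold are arbitrary illustrative choices, not anything taken from the Microsoft paper.

import numpy as np
import networkx as nx

def knn_graph(vectors: np.ndarray, k: int = 5, threshold: float = 0.6) -> nx.Graph:
    # rows are assumed to be L2-normalized, so the dot product is cosine similarity
    sims = vectors @ vectors.T
    g = nx.Graph()
    g.add_nodes_from(range(len(vectors)))
    for i, row in enumerate(sims):
        for j in np.argsort(-row)[1:k + 1]:   # position 0 is the chunk itself
            if row[j] >= threshold:
                g.add_edge(i, int(j), weight=float(row[j]))
    return g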
And just to give a little background on this idea of building a graph off of embeddings: if you want to play with some code, here's a notebook from the kglab project, an open source project about graphs we did about four years ago. If you want to see even a simple example, using recipes from food.com we were able to use a word embedding model and build out a graph from that. And if you want to look at how to do lexical graph construction, here's a GitHub repo — it's from another talk that's related to this. But there's more: graph has multiple meanings in this context. Another use of the term graph you'll see related to LLMs is, for instance, graph of thoughts. This is a little more on the other side — not what you're feeding into the LLM, but organizing what you're getting out of it. If you've heard of chain of thought, the idea is you take the prompt you're going to ask the LLM, decompose it into parts, run your language models to get answers for each of those parts, and then reassemble them. There's another thing called tree of thoughts, where you start to branch it out. And then there's graph of thoughts, where you can say: let me try several different methods, but also keep track of the cost, because some of this stuff is really expensive. By using a graph of thoughts, I can decompose my prompt, work on the different parts, and start to reassemble them; if I go down one branch and find out it's a dead end, I just backtrack into another. And if I'm in a branch where I'm kind of getting answers but it has already become really expensive, why not use some of the less expensive branches first? That's a technique out of operations research — if you're familiar with ORSA, that kind of approach gets used a lot. So: cost optimization for time, money, accuracy. There's some really great work here — Maciej Besta, out of ETH Zurich, is one of the leaders in this field — and definitely keep this in mind in terms of graphs with LLMs. But there are some lighter-weight versions of this too, which I think are really promising, and that's where you're doing more graph reasoning. LLMs are not the best at reasoning — we can show that; there have been some great studies recently — but there are other ways of doing reasoning. The idea is that there's a collection of research, dating back to 2019 with this paper here — I really love this one, called "Barack's Wife Hillary" — about how you can use graphs to try to avoid confusions that would otherwise come out of LLMs or other machine learning. The idea is to use the LLMs to create logical propositions, put them together in a graph, and then do reasoning inside that graph to feed back on the answers that would come out of the LLM. One of my favorites — I think this area has a long way to go. And to close up this part: a couple of friends, Ben Lorica and Prashanth Rao, took a stab at defining which design patterns are being used, categorizing all the different types of GraphRAG. What they came out with is: gosh, this is really varied.
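Before moving on, here is a very stripped-down sketch of that cost-aware, graph-shaped decomposition idea: sub-questions become nodes, each call has a cost, and the cheapest unexplored branch gets expanded first. The `ask_llm` function is a stand-in and the decomposition is hard-coded; it only shows the bookkeeping, not a faithful reimplementation of the Graph of Thoughts paper.

```python
import heapq
import networkx as nx

def ask_llm(question: str) -> tuple[str, float]:
    """Stand-in for a real LLM call: returns (answer, cost in dollars)."""
    return f"stub answer to: {question}", 0.01 * len(question)

# Decompose a prompt into sub-questions, organized as a graph rather than a single chain.
G = nx.DiGraph()
G.add_edge("root: compare two GraphRAG designs", "A: what does design A retrieve?")
G.add_edge("root: compare two GraphRAG designs", "B: what does design B retrieve?")
G.add_edge("A: what does design A retrieve?", "A2: estimate design A latency")
G.add_edge("B: what does design B retrieve?", "B2: estimate design B latency")

answers, spent = {}, 0.0
budget = 1.50
frontier = [(0.0, "root: compare two GraphRAG designs")]  # (estimated cost, node)

while frontier and spent < budget:
    est, node = heapq.heappop(frontier)          # cheapest-looking branch first
    answer, cost = ask_llm(node)
    answers[node], spent = answer, spent + cost
    for child in G.successors(node):             # expand children of the answered node
        heapq.heappush(frontier, (0.01 * len(child), child))

print(f"spent ${spent:.2f}")
print(answers)
```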
So we'll just have a sampler of the popular techniques here, but Ben and Prashanth really do give a lot of names to these different approaches. And there's a great interview subsequent to that article: Ben Lorica interviewed the CTO of Neo4j, a friend of ours, Philip Rathle, and he broke it down into a completely different taxonomy. So there's more and more discussion about what GraphRAG really means, and I wanted to point to these resources. All right, do we have the results from the poll? Let's see the most recent one. Ah, interesting: entity resolution in conjunction with knowledge graph practices. So it's roughly split into thirds — yes, in a production use case, 27%; evaluating, 38%. Okay, that's great, because this will be a lot of the theme for our webinar overall. Now, let me address a question that comes up a lot: if we've got RAG and it's working, why do we need the extra step of GraphRAG? Can we quantify whether it does anything better? First off, I want to point out three surveys. Chia Yang from WhyHow — my friend and colleague there; really, all of these folks are good friends. Louis and I are actually working on some projects right now — hopefully we'll have Louis on the podcast (webinar, I should say) before too long — but Louis had a comparison of different types of RAG and graphs and language models back in February, which was right about the time Microsoft was publishing about this. That looked at comparisons of, say, vector RAG versus GraphRAG versus hybrid kinds of RAG. Then Philip from Neo4j has the GraphRAG Manifesto, which really goes into detail, and Emil Eifrem, the CEO of Neo4j, has done some talks that are up on YouTube going into a lot of detail as well. I highly recommend those resources, but a good starting point is Philip's GraphRAG Manifesto article. I think these give a lot of characteristics for how we can start to quantify and measure. So, some of the top papers analyzing the lift of GraphRAG: there's one from LinkedIn showing more than a 20% improvement in median per-issue resolution time for customer support at LinkedIn — that's really substantial. There's another from Glasgow reporting roughly a 14% lift on Q&A datasets, and another from Emory comparing different benchmarks for multi-hop reasoning. So I think we're starting to see this quantified; there's definitely a lot of lift coming out of using GraphRAG. At the Knowledge Graph Conference (KGC), held in Manhattan in May, I got to host a track about understanding the lift of GraphRAG, and we had three talks. One was from John Stevens at AbbVie, the big pharma in Chicago. They showed their internal work using chatbots based on RAG, drawing on the internal knowledge their people need inside pharma, and compared the lift of RAG versus GraphRAG. Great talk — I think there should be a video of that out.
Then Atanas Kiryakov and Pia Popov from Ontotext showed what they're doing at Ontotext, one of the graph database vendors that jumped in early on GraphRAG, especially coming from RDF and OWL semantic web directions — also showing the lift for some of their large customers. Then, most importantly, Juan Sequeda, Dean Allemang, and others at data.world have had a couple of papers really quantifying this area of generating SQL queries: using RAG alone versus using a graph and graph queries first to understand how to structure the SQL queries. So there's a lot more coming out formally about that. The big-picture takeaway is that GraphRAG addresses a problem we would otherwise see in LLM-based systems, commonly reported as a recall problem. You might ask a question once and get a pretty good answer; you ask the same question a second time and get a so-so answer; a third time, you get something that's just bizarre. Even if you use RAG, you still might see issues with recall, because the similarity measures inherently used in vector databases are relatively shallow. A graph allows you to traverse and find the things that should be included but might be a couple of hops out — non-trivial or non-obvious, if you will. Semantic random walks are a great way to do that, along with graph ML, graph analytics, and so on. It's leveraging domain knowledge. And overall, the big picture is that we're taking machine learning models — more statistical approaches — and combining them with more symbolic AI, where knowledge graphs are, of course, the big area of symbolic AI right now. We can put these together into more hybrid AI, which is a much, much better approach. I've linked here a talk by Frank van Harmelen about symbolic and statistical hybrid methods; I think it's a really great perspective. Okay, let's try another survey — we'll go for the next one — and I'm going to scoot through some of this because I think we're making pretty good time. When you're using GraphRAG, obviously one of the things you need to have is a graph, and that begs the question: how do you get that graph? How you build a graph and how you use it are, of course, very consequential. In terms of the diagram we had before, you're basically at the build stage — you are here, looking at the build, not the use yet. There are some really great tutorials about this, but a lot of what you'll find in the literature, in open source, and in the tutorials focuses on using LLMs to build knowledge graphs from mostly unstructured data sources — building them automatically. There are really great demos here from Neo4j's LLM Graph Builder; I think they've subsequently moved it out of labs and it's actually part of the product now.
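As a generic illustration of that build step — not Neo4j's LLM Graph Builder specifically — here is a hedged sketch of asking an LLM to emit subject/relation/object triples as JSON and loading them into NetworkX. It assumes the OpenAI Python client and an illustrative model name; any chat-completion-style API could be substituted.

```python
import json
import networkx as nx
from openai import OpenAI  # assumed: pip install openai, with OPENAI_API_KEY set

client = OpenAI()

def extract_triples(text: str) -> list[dict]:
    prompt = (
        "Extract entity relationships from the text below. "
        'Respond with JSON only: {"triples": [{"subject": ..., "relation": ..., "object": ...}]}\n\n'
        + text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["triples"]

def add_to_graph(G: nx.MultiDiGraph, text: str) -> None:
    # Keep a pointer back to the source text for basic provenance.
    for t in extract_triples(text):
        G.add_edge(t["subject"], t["object"], relation=t["relation"], source=text[:80])

G = nx.MultiDiGraph()
add_to_graph(G, "Acme Corp acquired Widget Labs in 2023; Widget Labs is based in Austin.")
print(list(G.edges(data=True)))
```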
You point it at a set of wiki pages, it builds a graph about them, and then you start asking questions. I saw an early demo of this a few months back, and it's really fantastic — it's amazing what can be done. There is some cautionary advice, though. This paper is from a couple of months ago now, from July, from people at Bosch AI — shout out to some friends of ours — working with the University of Mannheim. They were able to show that you can go to a lot of effort to train LLMs and fine-tune them for a particular domain, and — I'd reference Waleed Kadous, chief scientist at Anyscale, who had a great talk last year at the AI Conference — basically, training an LLM is for form and structure, whereas fine-tuning or using RAG is more for facts. It's expensive to train LLMs, less expensive to fine-tune (particularly if you're using synthetic data projections), but it's even less expensive to use RAG to get new facts into your system. If you rely on the LLMs to do your reasoning and understand your domain, provably, we can show that doesn't work very well. This is one of the early papers I've seen really quantifying the cautionary advice about over-indexing too far on the LLM side: you do want something outside to ground it. Maybe scaling laws will prove this differently later, but all the trajectories seem to point this way. So there are better practices than relying exclusively on LLMs. We work in entity resolution at Senzing, and most of our customers are in regulated environments, as I mentioned. The notion of using an LLM to automagically generate your graph through some black-box approach doesn't go over well, especially if you're trying to catch bad guys and take them to court, maybe put them in prison. It turns out that judges and juries like to see evidence; they don't like a magical black box just coming up with answers. This also speaks to the idea of investigations. When you're doing any kind of investigation, it's typically very iterative: you make some progress, you go back and do some work, you bring in new data, you make more progress. Science in general is more about investigation than prediction — I think that's a problem Silicon Valley has had over the last several years; it really over-indexed on this notion of prediction, sort of a cult of prediction — but really it's about investigation, and that's certainly what we see with a lot of our use cases. So if you're an enterprise team and you've built a graph you've taken time to curate — you understand where all the different pieces come from — why not use that as a core asset? Why not use it when you're doing GraphRAG? In these really critical kinds of applications, accountability, evidence, provenance, being able to do audits, and the downstream consequences are crucial. So I want to explain, just briefly: if you are building a graph, let's break it down into pieces, as opposed to doing it as one magic LLM thing — pieces we can understand, audit, and give feedback on. And when you jump into this, you really need an explainer, because once you get into the natural language work, you've got terms that sound a lot alike but are actually very different.
So, named entity recognition: that's where you've got unstructured data, you parse it, you build a lexical graph. Now, if you've identified a noun phrase, named entity recognition is a way of giving a label to that noun phrase. I can say it's a person, place, or thing; I can say it's the name of a country or a unit of currency, et cetera. But it's a noun phrase, and it's come from unstructured data. Relation extraction (RE) is a way of saying: I've got a couple of entities that are going to go into a graph, and I want to put a label on the edge between them — give it some kind of semantic relation. That's still early days, but there are some really good things coming out of research for it. Entity resolution, where I work, is completely different from NER. Instead, you're starting with structured data, and you're trying to disambiguate consistent entities across two or more datasets based on two or more features that are connected. Entity linking is a way of blending together the structured side — structured datasets and entity resolution — with the unstructured side — named entity recognition — into a KG. You'll also hear zero-shot learning used a lot, and I want to qualify that: it's where I can take a deep learning model and give it a set of classes to recognize at runtime without having to retrain it, which is very powerful. So the entity resolution we're talking about is a way of generating building blocks for a knowledge graph. We can resolve entities across datasets and come out with an overlay that says: here are entities that connect different data records together, here are the relations we think should be between them, and we'll add some properties if you want to trace provenance, et cetera — a way of preserving evidence as a semantic overlay. And this talk, plus a few other tutorials, plus some others we've got coming up, all fit together; they segue. There's a talk about using entity-resolved knowledge graphs: I did a tutorial a few months back, and Clair Sullivan, my colleague, also did an excellent tutorial recently — it just came out — using that same use case and the same data, but working with GraphRAG based on LangChain to show what you can do with GraphRAG from structured data sources. Clair and I are working on some other tutorials coming up. There's also a sister talk that goes along with this one, about how to construct knowledge graphs from unstructured data and bring that into your structured data. That was hosted by GraphGeeks a few weeks ago, and I highly recommend it — especially the code; it might be very useful for you. We've got some case studies about people working with entity-resolved knowledge graphs, and I want to give a shout-out to those, because they really start to give a flavor of what these use cases look like and what the practices are. It's a nice counterbalance to what we see in industry, which I'll talk about next. Okay, let's see if we have a poll to run — we'll run the next one. I want to talk a little bit about GraphRAG in practice, and then we'll wrap up and go to questions. So now, looking at the architecture diagrams, we're at a different part: we're talking about usage of what we've built from graphs and vectors.
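Before moving on to the usage side, here's a small sketch to make the NER-versus-entity-resolution distinction above concrete: spaCy labels noun phrases in unstructured text, while a deliberately naive matcher resolves records across two structured datasets. The matching rule (normalized name plus one corroborating feature) is a toy assumption — production entity resolution, as discussed in this talk, is far more involved.

```python
import spacy

# --- NER: label noun phrases found in unstructured text ---
nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("Maria Silva wired 5,000 euros from Lisbon to a shell company in Panama.")
print([(ent.text, ent.label_) for ent in doc.ents])  # e.g. PERSON, MONEY, GPE labels

# --- Entity resolution: disambiguate records across two structured datasets ---
crm = [{"id": "crm-1", "name": "Maria Silva", "phone": "+351-555-0101", "city": "Lisbon"}]
sanctions = [{"id": "sx-9", "name": "SILVA, Maria", "phone": "+351-555-0101", "city": "Lisboa"}]

def normalize(name: str) -> str:
    parts = [p.strip().lower() for p in name.replace(",", " ").split()]
    return " ".join(sorted(parts))

def resolve(a: dict, b: dict) -> bool:
    # Toy rule: same normalized name plus at least one matching corroborating feature.
    return normalize(a["name"]) == normalize(b["name"]) and (
        a["phone"] == b["phone"] or a["city"].lower() == b["city"].lower()
    )

matches = [(a["id"], b["id"]) for a in crm for b in sanctions if resolve(a, b)]
print(matches)  # [('crm-1', 'sx-9')] -> becomes an overlay edge in the knowledge graph
```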
And in terms of GraphRAG usage, there's really a set of popular open source libraries emerging. Certainly LlamaIndex, which has excellent tutorials. Haystack, of course, has gained a lot of traction — you see it all over, and I really like what they're doing; from a code standpoint I tend to favor a lot of Haystack. LangChain is super popular, arguably one of the most popular of these. Although Microsoft, of course, has gained a lot of headlines — and there's open source for all of these. I'll be MC at the AI Conference coming up next week in San Francisco, and I'm sure we'll see a lot of the principals from these different teams presenting. I'm showing on the right-hand side here, with LlamaIndex, a class in Python called PropertyGraphIndex. It's a way of building up a knowledge graph based on a property graph that you can then plug into RAG — a quick way of building a GraphRAG solution. Haystack, LangChain, Microsoft: they all have ways to do this in a relatively short amount of code. Although I definitely have some criticism of these libraries as well. I think it's still early days; most of these libraries are really big and kind of include everything plus the kitchen sink. They tend to add new things all the time, and you get really huge installs. As far as APIs go, the design patterns are still being figured out, so I expect to see a lot of refactoring for all of these — really breaking them out into different components and then looking in detail at each of those components. So that's a caveat I'd give about working with them. And frankly, a lot of teams in production build their own. If you want to look at building your own kind of GraphRAG, there are components that are definitely recommended. On the graph side, NetworkX is kind of the workhorse of graph analytics — it's not a database, but it's very useful — and there's the GPU-accelerated version of it, cuGraph, which is so powerful at scale. Kuzu is another team I work with that does property graphs, and I've got some examples in the links here — code repos on GitHub showing use of KuzuDB, also very friendly to Python. And then LanceDB is, for example, a vector database I like to use. I think Kuzu and Lance are both sort of the DuckDBs of their respective worlds. Ollama is a really great way to orchestrate hosting your own LLMs, locally or elsewhere — basically model orchestration. So those are the parts you need if you're going to build, and I'm going to reference a book in just a minute that actually goes step by step through building your own GraphRAG. I want to shout out a few examples. I think Esri is doing really well with ArcGIS Knowledge, and what they're doing there exemplifies something: when you look at RAG or GraphRAG in general, arguably the heavy lifting is done by the embedding models, and when you look at multimodal embeddings, that's where a lot of the power is. ArcGIS Knowledge is one of the products that exemplifies this area: how can I bring in graph elements with geospatial elements, with some video streaming, with segmented imagery from satellites, and with other text that's come out of news reports, and pull it all together?
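Going back to the PropertyGraphIndex class mentioned a moment ago, here is roughly what that LlamaIndex path looks like. This is a hedged sketch: the import paths and defaults follow recent llama-index releases and may differ in your version, and it assumes an OpenAI key is configured for the default LLM and embedding model.

```python
# pip install llama-index   (exact package layout varies by release)
from llama_index.core import SimpleDirectoryReader, PropertyGraphIndex

# Load a folder of documents and let the library extract a property graph from them.
documents = SimpleDirectoryReader("./my_docs").load_data()
index = PropertyGraphIndex.from_documents(documents)

# Query: retrieval walks the extracted graph (plus vectors) before calling the LLM.
query_engine = index.as_query_engine()
print(query_engine.query("Which organizations are mentioned, and how are they related?"))
```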
And on the Esri side, here's a case study our team at Senzing did working with them, pulling those pieces together to go after illegal fishing fleets: finding a fleet operating off the coast of Peru, checking its beneficial ownership chain through the graph, and finding out it's actually being operated out of Beijing — and, by the way, if we look at the graph pattern and use some graph ML, we can find similar fleets operating elsewhere in the world because of the geospatial data. Highly recommend checking that out. Another one I think is really good to look at is WhyHow. Their Knowledge Graph Studio — a recent release — is basically RAG-native knowledge graphs. Very interesting: they've just opened up their SDK to import triples, so you can build your own graph and bring it in, and then build out RAG solutions, GraphRAG solutions, and also agents. Another one I find really interesting is AskNews, out of Emergent Methods. They've been customizing the Phi-3 LLM in really interesting ways, and they also did a fine-tuning of GLiNER — really interesting work to go after news reports and understand what's sensationalized versus not, where the bias is, and so on, but bringing it all into a graph. So it's effectively leveraging LLMs on unstructured data to build up graphs that then focus LLMs — a very nice feedback loop. All right, can we show some poll results? If we've got the latest poll, we can show that. Aha, cool: what experience do you have implementing GraphRAG in production? Okay, so a lot of people are kind of trying it, or just learning about it, which is fantastic, and I think that fits — it's still fairly early days, but there are more and more resources coming out. So to that point, let's wrap up and then go into some Q&A. There are some great community resources. I really love graphgeeks.org — we've been involved there; I'm actually involved with a study group understanding some of the data sources for ERKG, and we'll have more of that material on our webinar in later episodes. Definitely check out the GraphGeeks Discord; it's a great community, with a lot of people interested, coming from different directions — we can all learn together. We've got the ERKG (Entity Resolved Knowledge Graph) discussion group on LinkedIn; I'm one of the moderators, and we've had a lot of great material coming through there — I definitely welcome you to join and post. I'm curating a collection of papers — actually several collections — on Hugging Face, particularly about knowledge graph construction; some of the things I've mentioned, plus a lot more material, are there. There's a great Discord hosted by Neo4j, but it has LangChain and LlamaIndex and Haystack and all the others involved, so definitely check out that Discord about GraphRAG. We also cover a lot of this material on the Data Exchange podcast — I'll be doing an episode; actually, we're coming up on that. Two other communities I really like, the AI Alliance and the MLOps Community, both have a lot going on — including some conferences coming up very soon. And a bookshelf: I'll point out this first book here, Knowledge Graph-Enhanced RAG. It's in early release on Manning Publications now, written by Tomaz Bratanic and Oskar Hane, and it goes step by step, with code, through how to build your own GraphRAG.
I highly recommend getting the early release of that and checking it out — really great book. There are some others related here; definitely a shout-out to those. And here are conferences in this area that we'll be involved with: K1st World coming up in a few days, the AI Conference next week, and then NODES and MLOps World, et cetera — and definitely Connected Data London coming up later this year. If you want to get in touch with us, please do — here are some codes to reach out. We can give you more information about entity resolution and entity-resolved knowledge graphs and these practices, and our engineering team will work with you on them. Also, if you want to stay up to date on talks, tutorials, new editions of our webinar, et cetera, please sign up for our newsletter — here's how to get in touch. And if you want to follow me, please link up on LinkedIn; there's more information on Sessionize and my portfolio site too. From here, I'll mention that we will be doing this webinar, with guests and Q&A, about every three to four weeks, maybe a little more. We'll have another one coming up in early October, and we'll be announcing that very shortly. And now let's go to question-and-answer time — I'd love to hear what you've got to say. And if there are results from the polls — I don't know if we've covered all of them; do we need to report any results? If so, splash that up; if not, let me cut over and take a look at the Q&A. Oh, great. Okay, how familiar are you — oh, interesting. So 20% are using entity resolution from Senzing in production, 6% a different kind of ER, and a lot of people haven't really encountered the ER side of this before — but we'll be covering a lot more about entity resolution and how it fits in with knowledge graph practices. Okay, please post your questions to the Q&A, and I'm going to scroll up, take a look, and start answering some of them. First question — Michael asks: many thousands of companies worldwide have migrated massive corpora of their customer-facing content to structured, component, topic-based document architectures, which is the content often used for customer-facing chatbots. Hey, that's a great question. When you look at intranets, or corporate content in general, you find things like Confluence or SharePoint or many others, and there's a lot of content there which can then be put in front of customers. Effectively: how can that be mined to develop graphs and then used in GraphRAG? The short answer is that this is an excellent question, and I think it's still early days for a lot of the open source GraphRAG libraries. Certainly I haven't seen many of them going in the direction of "here's our SharePoint connector," because that would be a lot of work, and it's also more of an enterprise thing — the LLM folks aren't quite on the enterprise wavelength just yet, though we're trying to draw a through-line there. So this is a great question; I think we'll see more of it, and I'll actually research that and see if we can put somebody up here on the webinar to address it. Let's see — another question: the best description somebody recently coined for LLM-generated graphs is "speculated graphs," versus curated, validated, managed knowledge graphs. Yeah, you know, I hear that.
LLM-generated graphs — it's a little iffy. I mean, there are a lot of great ways to build graphs. I do think you need to have a kind of backbone, understand it, and really focus on the domain. The approach of "let's just build a ginormous graph that does everything" — I have yet to see that work in my years of working in knowledge graphs. So I think you need to be a little more focused. There are great reasons for the speculative graphs — certainly, I mentioned AskNews and Emergent Methods; I love what they're doing, which is more in this vein — but you do need to blend the two together for most use cases. Another question: can people who have graphs with LLM applications in production share their use case? Oh yeah, okay, great — that's kind of general, but put that in the chat and we'll try to summarize it. Great, Christoph put in a note here about graphs with LLM apps in production — thank you very much, Christoph. Excellent resources there from GraphAware, and we hope to have you on here soon. Let's see, another question: do you work with entity resolution that is structured ER and unstructured? Okay, great — so there's some dialogue going on; maybe I'm not looking in the right part, I'm still learning here. Ryan had a question — okay, I think I'm in the right place now: how do you encode time into a graph, and the fact that relationships can change over time? Time-based graphs are super popular, especially in enterprise — there's a lot of call for this. It's something you can kind of manage by having, say, information in properties, but realistically it's a structural graph that's changing over time. So there are some notions of streaming in graphs to represent this, and also some machine learning that takes the time-based changes into account. It's an active area of research — I know of a couple of these projects — and this sounds like a good topic to add; maybe we can bring somebody on for it. And then another question: can you talk about potential use cases, please — I know you alluded to detecting fraud and maritime; more examples like that would be helpful. Okay, great. Let me scroll back real quick. Yeah, if you want to see use cases, here are some different ones. They span fin-crime, maritime and illegal fishing, understanding financial SEC filings — a really great one there from Deanna and KineViz — beneficial ownership, AML, et cetera, with GraphAware, Linkurious, Enterprise Knowledge, and others. We're definitely working with different vendors who are expert in these kinds of use cases, whether you're talking about UBO, AML, KYC, PEP, ESG, on and on — all these three-letter acronyms — and we'll be bringing a lot of these people on for this webinar. I'll share this in the deck, obviously; there are some good ways to start looking at that. Also come check out the GraphGeeks site, because I think we've got a lot of discussions about that kind of thing there. I'll see if there's anything else off the public chat that I can mine. If not, shall we cut over to our drawing? Let me go into the chat here. Okay, let's do our lucky drawing, and let me cut over — oh, great, okay. Give me just a second; I'm going to download — where is this? Here we go.
Let me download something — right, right — and maybe we can show the results of some of those polls while I'm doing this. I'll take just a moment and grab — there we go, downloads. This is a graph. And then... okay. Okay, I'm all loaded up. All right, should we spin the wheel? I think we need to have some music for this, but we're going to show — there we go. Okay, we've got our wheel of numbers here, and I'm Vanna White. Ah, 40 — and the answer is 42. Okay, awesome. Well, especially for fans of certain science fiction, we already knew the answer was 42, right? All right, now let me go ahead and use my magical Python software here and run this. Boom, okay. We have a lucky winner, and I will announce it — but for the sake of privacy, I'm not going to put any email up there. Our lucky winner is Pi Buap Tong. So congratulations, Pi — we will get hold of you offline. This is just something we're not going to announce or put on social media; we're just showing it live here. We have a special prize: a board game for data science, and it's really fun, pretty hilarious. It's called Charty Party — a game of coming up with absurd explanations for charts, which is just like what you would do in data science. So congrats again. Thank you, Pi, and thank you, everyone — I appreciate the questions. I'm going to look one last time to see if there are any other questions I can go into. Great. With that said, we're right on time. Thank you very much for watching, and we look forward to hearing from you and seeing you next month. Thank you. | Paco Nathan's Graph Power Hour: Understanding Graph Rag | 3,594 | Senzing | 20240912 | Watch the first podcast of Paco Nathan's Graph Power Hour. This week's topic - Understanding Graph Rag: Enhancing LLM Applications Through Knowledge Graphs. | 2025-01-05T09:37:37.597073
https://www.youtube.com/watch?v=jQMq9FbkZAI | Happy New Year, everybody. Thank you for joining. This is a GraphGeeks explainer talk. My name is Amy Hodler, and today we're going to be talking about how you build knowledge graphs that can process millions of new entities and relationships every day. We're very fortunate to have Rob Caulk, founder of Emergent Methods, with us. You'll also be learning about how their team enabled what they're calling on-the-fly subgraphs, which is very exciting and helps enable more targeted, domain-specific exploration of news. I'd like to turn it over to Rob to take it away. And I think, Rob, you have a poll for me, don't you? Yeah, let's start the poll. Hey guys and gals, my name is Rob. I'm the CEO of Emergent Methods, like she said, and we have a mission to distribute real-time information using the latest tools, especially focused on applied research. Our background is in academia and research, so we bring our approach to hypothesis conception and testing, and to identifying really important conclusions, into all of our work, our software, and our products. So today we're going to chat about exploring the news at scale, which is kind of our claim to fame: the largest news knowledge graph on the planet. We haven't found anyone who has debunked that yet — I'm perfectly happy for someone to come challenge us. But it's true that we're upserting a million entities a day into this thing, and that requires some very creative structuring of the data, as you might imagine, and you'll find out shortly. But first, let me get a sense of how everybody describes themselves, so I can tailor the presentation, okay? Looks like we're majority developers, and zero citizen scientists, which I'm sad about, because I think everybody is a citizen scientist — so I'll just put everybody in the secondary category of "also citizen scientist." But that's great. So let's get started. First, I just want to say the work here is very much a team effort. There's Elin Törnquist, and Wagner Costa Santos out of Brazil — these are really smart people who work really hard in the trenches, and this would not be possible without them. So let's see if we can move forward here. AskNews — I just want to give you a brief overview of what AskNews is, so that you have context for why we're building this thing and how we have that kind of information at our fingertips to build the largest news knowledge graph. We're tracking thousands and thousands of news sources on the open web, across hundreds of countries and many languages, and we're applying all of the latest tech, injecting our own research, especially on entity recognition, and obviously the graph extraction, which we'll talk about soon. I'll say that a lot of what you're going to see here is actually not a graph database — instead, it's a vector database. There is some sprinkling of graph DBs around, and we'll talk a little bit about that if the questions head in that direction. But yeah, Qdrant is a big piece of this. So what's our general goal? We want to provide high-quality, real-time information to pretty much anyone: citizen scientists, analysts, and LLMs. How do we get that information, and then how do we communicate it? A lot of this is content enrichment.
I'm not going to dwell on this, because I want to move to the more interesting stuff, but generally speaking there's just a lot of enrichment: extracting entities, extracting sentiment, classification — a lot of the traditional stuff we used to do with older tools is now higher quality and much more customizable to domain-specific needs, like extracting statements and evidence and attribution. You couldn't really do this before you had a large language model that could read the text and say: here's the statement, where's the evidence, who is it being attributed to. That sort of thing is extraction and enrichment. You can head over to asknews.app and read our copy. We have an editor in-house, so this is human-in-the-loop work. We take it very seriously, and we believe we're the least biased source on the planet because it's an algorithmic enforcement of diversity, and we take pride in that. We love our sentiment analysis — we're doing all sorts of things: evaluating provocativity, extracting the entities, and so on. We have a claim to fame, which is GLiNER. We didn't build the architecture, but we engineered the dataset to fine-tune it, and that's really the start of building out a graph: how do you actually get the metadata to create a graph? You need to extract that data properly. Can you extract Eastern names and African names and South American names and Western names all at the same time? That's a big piece of reducing bias in your index. And by the way, if you have any questions, I'd love to clarify things and make sure everybody's on the same page. Okay, so let's move forward here: entity relationships — that's what we're here for. Every graph is different; you always have some different application. In our particular case, we're trying to explain relationships in text and communicate that to either an analyst or an LLM. That's a big piece of the puzzle we're working on, and it means the relationships are not necessarily predefined. That's where you enter an ontology-free approach to a graph: the relationships are not predefined; they are nearly unlimited. So what does that mean? Here's another example of extracting a graph. I think everybody here is at least comfortable with graphs, but just to make sure everyone's on the same page: really, we're just trying to find out how people are related to locations, locations to organizations, even things like an item — like a shake. Sally bought the shake and Roger bought the shake. When you extract this information, all of a sudden you've represented it in a very different way, and you can store it in what we're calling the largest news knowledge graph on the planet — but you can also communicate it to an LLM or an analyst, and it almost opens up other insights. So let's see if we can move forward. How did we start? That's one of the questions we're answering in this presentation. We're going to process, what, a million news article texts per day? That's a lot of text, and putting all of that through OpenAI GPT-4o would cost a lot of money. It might work pretty well, but we're a tiny startup; we're not able to devote those kinds of resources.
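As an aside on the GLiNER piece mentioned above, here is roughly what zero-shot entity extraction with the open source GLiNER package looks like. The checkpoint name and label set below are illustrative stand-ins, not necessarily the fine-tuned model the AskNews team describes.

```python
# pip install gliner
from gliner import GLiNER

# Illustrative checkpoint; Emergent Methods publish their own news-tuned variants.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

text = "Sally bought a shake at McDonald's in Nairobi while meeting Roger from Acme Corp."
labels = ["person", "organization", "location", "item"]  # labels chosen at runtime (zero-shot)

for ent in model.predict_entities(text, labels, threshold=0.4):
    print(ent["text"], "->", ent["label"])
```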
Putting a million articles a day through a frontier model wasn't an option, so instead we said: let's see if we can fine-tune something. Selecting a model: we chose Phi-3, a really powerful tiny base model. It fits on a consumer-grade GPU, which is really impressive, so it's lightweight, easy to fine-tune, has open weights, and has a pretty decent context window — 4k all the way up to 128k tokens. All that means is that you can throw a lot of text in and get a lot of reasoning out. The problem is this: Phi-3 has a really great base ability to reason — it can extract relationships; it can take a text and see that Roger and Sally were both at McDonald's together, and that they did this, and that McDonald's was open on Thanksgiving — but we want to tune it so that the output is structured and capable of expressing our vision of how this news text should be graphed. There are a lot of values we want to express there, not least journalistic integrity. We don't want to state the wrong relationship. We don't want to say that Sally killed Roger — that would probably be incorrect information, and we don't want it in the graph. Maybe that's an extreme example, but we do want to make sure we're fine-tuning the model for our values. So we need to engineer the dataset, and that comes down to engineering diversity, to be quite frank. We want to make sure it sees all sorts of examples of what we would want it to extract in the future. You can't just hope — you want to make sure it can extract a graph with names from South America; if it's never seen them, it's not going to do as well. But if you've already injected that, engineered that dataset, you've made sure it's at least aware these types of names exist. You're building out what's called a parameter space when you do this, and the goal of engineering your dataset is to cover that parameter space: cover the linguistic diversity you're going to encounter, cover the geographic and topic diversity. Do you want to recognize only sports, or sports, politics, and finance? That's a key piece of the puzzle. I don't want to dwell too much, but I want you to be aware of how the data was created and engineered. We have a very high quality synthetic dataset here, based on our stories, which are human-in-the-loop edited — our editor-in-chief in house has made sure these stories are written with our journalistic integrity. So we fine-tune on text that we find to have journalistic integrity, so that the model builds graphs that have standards. We extract — go ahead. Rob, there are a couple of questions that have come up. Oh no, this is fantastic. The first question is from Mitesh: I appreciate your time, but I would like to understand how you handle contradictory information in the news — or, if you're going to cover that later, we can talk about it then. So that's the first question. Yeah, that's a great question. It's a slightly different topic, but as a short response: we enforce diversity across languages and countries when we report on a large topic. So if it's Israel–Gaza, we're not going to take sources only from Israel.
We're going to say: okay, a source from Israel, a source from Gaza, a source from Germany, Sweden, America — and that allows us to identify the contradiction, identify the alignment, and report that. So we're not actually saying what is fact and what is fiction; we're saying this is what is contradicting, based on understanding how different languages and continents are reporting on it. Yeah, there's another question from Prashanth, which is maybe more related to what's on the screen right now: how are the verbs in the relationships decided? That goes back to a couple of points. This is the LLM's ability to detect what is going on. We're not predefining relationships — that's a key piece of this puzzle. We are extracting information in a graphical way based on what the LLM is capable of reasoning about, so the LLM is deciding these. Okay, what I was just getting to is building the synthetic data, which becomes your training set. We agreed that Phi-3 is kind of small, and maybe a little — or a lot — dumber than something like Claude 3.5 or GPT-4. Those are very smart, large models with a lot more intelligence behind them, so they're able to detect better relationships — you could say better verbs, in some sense, but generally speaking just better relationships extracted from the text. So we leverage that intelligence by using GPT-4o to build the synthetic dataset. Here, let me see — we have another question: how stable are the cross-comparisons of sources, and how do you decide what is passably coherent? I like these questions; I just think they're a bit off the topic of the graph. Should I tangent off — do we have time? I'm happy to answer it; I just feel it's a little different. Jason, can you maybe clarify why it's related to the graph construction? I would say, Rob, we can proceed on the graph side of things for a while, and then we can always circle back to how you're extracting sentiments and things like that, since a lot of these feel related to that. Yeah — we can come back to that one towards the end, or whatever section it fits into. It's a good question; it's just slightly tangential. Okay, so we extract that information using a smart LLM like GPT-4o. This builds our training dataset, our eval dataset, and our validation dataset, which is an important aspect of machine learning when you're trying to properly train one of these models. The beauty of this is that as long as you've established and painted your parameter space properly across language, topic, and country of origin, and you've done this with a very smart model to establish your training set, you can now train Phi-3 and it will emulate how GPT-4o was making its decisions when it extracted those relationships and entities. That's really powerful, because what you've done, for all intents and purposes, is a knowledge transfer from a very smart LLM to a very small LLM. For the actual fine-tuning itself, we take that dataset and use pretty easy tools — right now the abstraction level is extremely high. What's underneath these ten-ish lines of code is probably on the order of millions of lines of code, all being leveraged.
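Those ten-ish lines of fine-tuning code might look roughly like this sketch, using Hugging Face TRL with a LoRA adapter on the Phi-3 mini base model. Argument names shift between TRL releases and the data format is an assumption, so treat this as a shape, not a recipe.

```python
# pip install trl peft transformers datasets   (APIs vary somewhat by version)
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Assumes train.jsonl rows like {"text": "<article + target graph JSON>"}
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",          # the base model named in the talk
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(output_dir="phi3-graph-extractor",
                   max_seq_length=4096,
                   per_device_train_batch_size=2,
                   num_train_epochs=1),
)
trainer.train()
```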
So I really encourage people to explore — if you think you can't fine-tune, you probably can. It's easier than most of the analytics you've done, for sure; the tools are really easy to use. The actual fine-tuning is the easy part — engineering the dataset is the hard part. You can play with a bunch of these parameters — these are just parameters you can choose — and there's a bit of an art here, a lot of trial and error, and really a human eye looking at the output. That's what we're going to talk about now: it's one thing to have a metric that says, hey, this worked well, this was trained correctly, and it's another to just look at the output and ask, does this smell right? Does it smell the way we're expecting? Especially when you're working in an ambiguous world where, for a given text, you could probably create a near-infinite number of graphs. Yeah, we'll share the slides for sure, Max — thanks for asking. You could probably create a near-infinite number of correct graphs for one piece of text, especially if it's two or three paragraphs, because there are a lot of right ways to explain something. With that said, we can still identify metrics that help us say: yes, this is closer to right, and this is a lot wrong. This is essentially a loss function, which is what you use when you're tuning the weights of the model. You apply a small slice of your training dataset, run it through your model, and measure how wrong it was based on these metrics, which we created out of necessity. One is more or less a JSON similarity: how close is it to the correct JSON — are the fields and values close to the right answer? The other is consistency: are these edges even right? Does this edge make sense? Is it connecting actual nodes, or is it random? Because LLMs, as you well know, are pretty random sometimes. So this is us training it into place. You can go check out the model — it's open source. Or no, sorry, not Apache 2.0 for this one; GLiNER is the Apache 2.0 one. But it is open source, and you're welcome to use it for academic purposes; if you want to use it commercially, come talk to us and we'll try to help you out. Let me see if I can get to the actual comparisons. I was talking about how you could make an unlimited number of graphs per text — this is where we train it, then look at the output and compare it to the right answer. In this case, it's great — it looks really good, it's really close. There are some small differences: in this example, one output says Vasily Barakov fled from one place and the other names him slightly differently, and the crime location was in — I can't pronounce these names, I'm sorry, but they're both suburbs of Moscow. So you can see there are some differences, but generally speaking it's a good representation of the information. But what about when it's just ambiguous? The GPT-4o output here versus our Phi-3 mini instruct graph output — honestly, I prefer ours, because it's a bit more structured, higher fidelity. This one is kind of lazy, in my opinion.
This would be as if I asked an intern to make a graph and they said, yeah, everything's related to this one thing. That doesn't compel me; that doesn't make me feel like I'm building a great graph. And then, for example, here — this is a bad loss, because we had 57% JSON consistency with the output of Claude 3.5 on it, but we had 100% JSON consistency with ours, which was fine-tuned. That's an indication that this thing is creating JSON that's consumable — JSON that can be structured, stored, and used in the future. Yeah, I really like this question too. When you're engineering a dataset, with your train, eval, and validation splits, you want to avoid as much data leakage as possible. Sometimes that's really difficult. What leakage means is that your eval contains information from your training set. That can be as simple as: "Trump won the election" is in the training data, and then in your eval there's some other text about Trump winning the election. Your model has now been trained to know that Trump won the election, so it's a bit of a cheat. In order to chop that off, we used temporal awareness — which goes back to the question from Steve, which I appreciate, about tracking an event through time. We're able to cut it off and say: the training dataset only covers this date range, and the eval set covers this other date range. That segregation avoids leakage. So the temporal segregation is really important, and I think it's one reason we have good extrapolative abilities. It even goes beyond that: we also segregate whole topics. We'll say: this is a topic that was written about 200 times during the month of July; that goes in the training dataset, and then we don't touch that topic again — so maybe Ukraine–Russia is mostly only in training, and in eval we make sure the model can extrapolate to the future developments in Ukraine, not just what occurred during July. Okay, so these are some performance comparisons. We'll move through them, because the demo is the fun part, but generally speaking we do better than Claude 3.5 Sonnet and better than the base model — which makes sense; we literally trained the thing to be better than the base model. But to run Claude 3.5 for this, you're looking at somewhere on the order of $3,000 to $3,700 a day, whereas if you host our model, you'll pay less than $100 — a massive cost savings. You'll probably also have rate-limiting problems if you're using OpenAI or Claude 3.5 to run this. And, by the way, you're also sending all of your data to OpenAI — it's sitting on their servers. So there are a lot of benefits to fine-tuning here. Let's look at it in action. Here's an example of Israel–Gaza. This would be kind of the extraction — let me see if I can pull out the relationships. Yeah, here we go.
This is what it would look like, selecting only the Islamist fighters here, and then going through and saying: okay, these are the relationships we've extracted for this subpart of the overall subgraph. This is really good for communication to an analyst. When the analyst says, hey, I want to bring myself up to date on this topic, I can very rapidly click — okay, what's going on with the Islamist fighters — then rapidly consume this information and find out something else, like: who is Hassan Abdel Ghani? Apparently he leads the Islamist fighters, but I want to know more, so I can click. That's a rapid communication of information, a rapid exploration of a subgraph. So in some ways the design of this comes from how it's consumed. At the same time, this is a great way to pass context to an LLM. Let me see if we missed an important question here: as the graph grows this large, how are you preparing for future bottlenecks in query performance when retrieving a specific node or relationship? The underlying structuring of this information is one of the most important aspects. What you have is a lot of information that is not stored in a graph DB, because if you're constantly upserting millions and millions of nodes and entities into a graph DB, it's not going to work. Instead, we're indexing this information in a much more traditional way, to be honest — on S3, in some ways. We're indexing to S3, and we're using Qdrant very heavily for many aspects of this. There's a hierarchy, a structure, to how we pass and index information. Now, that doesn't mean we won't use a graph DB for a query, because luckily you can still build a quick graph — which is what you see here, a graph built on the fly. You can take that graph and put it into something like Memgraph to start doing more local, more traditional graph traversals. I talked about this with Demetrios on the MLOps podcast, so I'd refer you there for more detail, but generally speaking that's one of the approaches. At the end of the day there's just a ton of information, and you have to store it in some more creative way. The last thing I'll say before moving to the demo is about the variety of people using this and what they're using it for. One of our favorites is the University of Texas, because they're using it for detection and classification of misinformation, which we find really powerful — those researchers reported a 24% increase in accuracy over older methods of misinformation detection by integrating AskNews and this very high fidelity context. The other one I'll point out — sorry, we like to brag about it — is Metaculus: a lot of the bot builders, the winners, ended up using AskNews to provide context for more accurate forecasts, and they won money. So the picture I really want to paint, beyond on-the-fly graphs and this awesome large knowledge graph, is that it's about context engineering. This is context engineering at its core. If you take the right steps at every point and do really strong engineering of your context, you can leverage the intelligence of the LLM in a very powerful way — but you need to paint the context correctly to get accurate output.
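The "pass the subgraph to an LLM as context" step mentioned above can be as simple as flattening the edges into readable lines and prepending them to the question. Here is a minimal sketch with NetworkX; the prompt wording and the example edges are entirely made up.

```python
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("Hassan Abdel Ghani", "Islamist fighters", relation="leads")
G.add_edge("Islamist fighters", "Northern region", relation="operate in")

def subgraph_to_context(G: nx.MultiDiGraph) -> str:
    # Flatten each edge into a readable "source --[relation]--> target" line.
    lines = [
        f"- {u} --[{data.get('relation', 'related to')}]--> {v}"
        for u, v, data in G.edges(data=True)
    ]
    return "Known relationships extracted from recent news:\n" + "\n".join(lines)

question = "Who leads the Islamist fighters, and where do they operate?"
prompt = f"{subgraph_to_context(G)}\n\nAnswer using only the relationships above.\n{question}"
print(prompt)  # this string is what gets handed to whatever LLM you are using
```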
So a lot of what we're doing is getting you nine steps of the ten of the way there, so that you can take that high-fidelity context, combine it with some of your other pieces of information, and then produce a business insight or, you know, combat human trafficking. Here's the team, by the way, and these folks are really, really important. Let me jump over here. I made a little notebook, and I'm happy to share it with you; I just wanted to walk through how this sort of thing can be used. The query I want to investigate with you through this demo is understanding the negative media attention on a company or a person, figuring out why it's happening, and then leveraging the high intelligence of these really powerful LLMs to make recommendations to ameliorate it. That's the whole gambit: you've got your context for what's currently going on in the media. Is it negative? Great. Do your search, build that graph, filter on the negative nodes, filter on the technology-based nodes, then build out that high-fidelity context, hand it to an LLM, and get some actual insights out, because that's what people are after. People don't really care about the graph; they want information. That's what we're going to walk through here. Okay, dependencies: we're just installing AskNews, LangChain, and rapidjson. We have some helper functions you can check out; they just help color the graphs below and aren't too important. This would be our client for AskNews: you import it, add your secret, define your scope, and now you can get all the context, build graphs, do whatever you want. This is the call we used to build the graph. In this case we're filtering on only negative news in the category of technology, and we're looking back 10 days. But we could change this; we can slice and dice, I think, 20 different ways. I'm showing you three here, but there are a lot more. A lot of our clients do something like this, and there's also a lot of value in checking only stuff that's sensational, but we won't do that for this one. Okay, I pre-ran this so we didn't have to wait; it might take between 30 seconds and 5 minutes, depending on the number of entities and articles. As you can tell, a lot of this is disambiguation-related. You probably know Paco from Senzing; they're doing a lot of powerful disambiguation as well. We're focused on disambiguating news-related entities and understanding how to do that, which was a few months of research from the fall, where we really dove in and asked how to do this the right way. So this call does a lot for you: it's an on-the-fly construction, going to S3, finding the right stuff, pulling it out, putting it into a graph DB, doing some traversals, and then getting you the information.
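A rough sketch of that setup and graph call is below. The AskNews Python SDK is a real package, but the exact class, method, and parameter names shown here are assumptions based on what the talk describes (client credentials plus scopes, and a graph query filtered to negative technology news over a 10-day lookback), so treat them as illustrative rather than the SDK's literal API.

```python
# pip install asknews langchain rapidjson   (as installed in the notebook)
from datetime import datetime, timedelta

from asknews_sdk import AskNewsSDK  # assumed import path

client = AskNewsSDK(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    scopes=["news", "graph"],
)

# Assumed call: build an on-the-fly graph filtered to negative tech news,
# looking back 10 days. Method and parameter names are illustrative only.
graph = client.graph.build(
    sentiment="negative",
    categories=["Technology"],
    start=datetime.utcnow() - timedelta(days=10),
    end=datetime.utcnow(),
)
```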
In this case I want to visualize it with Cosmograph to show you what's there, so let me open that up. Cosmograph is really powerful, a really cool piece of software that lets you visualize on the fly and convey information. Just as a reminder, this is the top 500 negative technology news articles and their relations to one another from the last 10 days. You can see Apple here is apparently a larger node, and you can see how the chains of relationships move through. Cosmograph's really great. Let me move back here. So that's the data we just brought in. Now we want to find the most important node in the graph, and you can just go grab it. This is pretty traditional; we're doing it in Python here, but you could do it in Memgraph too: upload this to Memgraph and do node traversals that way. Since this is a notebook, I wanted to keep it light; running Memgraph needs a Docker container, but generally speaking it's more or less the same thing: count the relationships for all of the entities and grab the node with the most. It looks like we grabbed Apple. We could look through a couple more to see what else is out there besides Apple; there's Microsoft, so going down the list, Apple has the most and Microsoft has the second most, and we're only looking at tech companies here. We'll stick with Apple, but we could also look at people, or at whoever we want in this negative subgraph we've created. You could even go find your own company if you have a particular one in mind, but in this case we're just taking the most negative press from the last 10 days. Now we're going to do a little bit of context engineering together. The idea is to grab the relationships up to X hops away from the most important entity, because in order to understand why this negative news is occurring, I want all of the news that is within one hop, two hops, or three, whatever you want. Three is probably pretty far, but at least you're getting a big contextual picture; even information that's only tangentially related may still play an important role in understanding why everything is so negative around this company. Direct relationships are obvious, secondary relationships are less obvious, and tertiary ones are maybe useful, maybe completely useless. So we grabbed those relationships here; that's all we've done, just filtered them out. It looks like we got 102 entities. We visualize it with NetworkX, and it looks a little messy, but Apple is at the center of the thing. Cosmograph is a much better way to visualize it, but this is in your notebook: this would be your sub-subgraph just around Apple that we've extracted.
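A minimal NetworkX sketch of those two steps, picking the highest-degree entity and keeping only the relationships within a few hops of it, might look like this. The triple format is an assumption carried over from the earlier sketches; NetworkX's ego_graph handles the hop-limited extraction.

```python
import networkx as nx

def top_entity_subgraph(triples, hops=2):
    """triples: iterable of (subject, relation, object) tuples."""
    g = nx.Graph()
    for subj, rel, obj in triples:
        g.add_edge(subj, obj, relation=rel)

    # "Most important" here simply means the node with the most relationships.
    center, _ = max(g.degree(), key=lambda pair: pair[1])

    # Keep only nodes within `hops` of that entity.
    sub = nx.ego_graph(g, center, radius=hops)
    return center, sub

# Usage sketch:
# center, sub = top_entity_subgraph(triples, hops=2)
# print(center, sub.number_of_nodes(), "entities in the sub-subgraph")
```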
Here we're going to grab those articles and get some LLM input. When we're doing that context engineering, again, we want to convey as much information to the LLM as possible when we ask it to generate that business insight. We have the articles, we just iterate through them and create a string. It's literally a string that looks like this: you've got the summary, the published date, the title, the English title, and the original title. We could add more metadata here if we wanted, the entities, the sentiment; you can really go crazy with your context engineering. But generally speaking, this is the string we will pass to the LLM. We're not done yet, though. There's one more piece of context engineering we'd like to do, and that's to get the graph out. So we grab the graph; first this is for NetworkX, just to make the graph, but here, again, we're making a string representation of the graph for the LLM. It's going to look something like this, and as you can expect, it's a very primitive, simplistic representation of information. But it highlights the most important relationships in that subgraph around Apple, which lets the LLM continue solidifying its context around the negative media situation. So that's what this string looks like. Again, we plotted the NetworkX graph, and now we're going to actually pass everything to an LLM to get the answer: how do we ameliorate the situation for Apple? That's our goal. I chose to use LangChain because it's quite readable in this case, so here we're just pulling in LangChain. We could use Claude or we could use OpenAI; it's up to you, and it makes it easy to swap between the two. So we pull the ChatAnthropic client. Here's our prompt; this is more context engineering, a basic one that you're welcome to start from. It's pretty straightforward: we're saying, here are the steps we want you to follow based on the information we're going to give you. We're going to give you a set of news articles and a graph, and we want you to analyze the news, analyze the graph, and identify the key issues. This is chain-of-thought prompting, if you will, which improves the probability of the next produced token by forcing the LLM to talk through the things it needs to talk through in order to draw the conclusions that follow. That's a mouthful, but essentially you're saying: in order for you to actually give me recommendations, you probably first need to have talked about the key issues, and that makes the probability of your next token much higher when you start talking about recommendations, because you've pigeonholed the LLM, you've forced it to think and talk about the right things. Let me know if you have questions; I'm happy to chat about it. It's kind of an art, it's kind of fun, it's context engineering. Then we just say, write your report, and we give it a little outline of the report.
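Below is an illustrative reconstruction of those two context strings and the report-style system prompt. The article fields and the prompt wording are assumptions based on the talk; the point is simply that both the documents and the graph get serialized into plain text before being handed to the LLM.

```python
def articles_to_string(articles: list[dict]) -> str:
    """Serialize articles into a plain-text context block (field names assumed)."""
    chunks = []
    for a in articles:
        chunks.append(
            f"Title: {a['eng_title']} (original: {a['title']})\n"
            f"Published: {a['published']}\n"
            f"Summary: {a['summary']}\n"
        )
    return "\n---\n".join(chunks)

def graph_to_string(sub) -> str:
    """Serialize the NetworkX subgraph as simple 'A -[rel]-> B' lines."""
    lines = []
    for u, v, data in sub.edges(data=True):
        lines.append(f"{u} -[{data.get('relation', 'related_to')}]-> {v}")
    return "\n".join(lines)

SYSTEM_PROMPT = """You are a media-relations analyst.
Given a set of news articles and a relationship graph about {entity}:
1. Analyze the news articles.
2. Analyze the graph of relationships.
3. Identify the key issues driving negative coverage.
4. Describe possible outcomes.
5. Recommend concrete steps to improve future coverage.
Write your report with sections: Overview, Key Issues, Possible Outcomes, Recommendations."""
```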
You could change this and add whatever template you want; these things are really flexible. So that's the system prompt. Here's the human prompt we give it, and this is where we inject the article string and the graph string we created earlier. You can see where the entity goes, where the article string goes, and where the graph string goes. We've engineered the context, we've told it how we want it to think and how we want it to report, and we've said: this is the information you have at your disposal to actually generate that report. Then you call it. This is LangChain syntax: this is the report prompt we just created, and we chain it to the LLM, which in this case is ChatAnthropic, and then we invoke it, and that's where the entity gets put into the entity slot. It's really a glorified f-string of what's going on. So now we've engineered the context, and all we need to do is call the LLM to generate the output. Google Colab makes it a little hard to actually see this, so I'm going to switch to a Google Doc. Here we go. And here's the output: it has read all of that news, taken that graph, and given us exactly what we asked for. An overview of what's going on, then the issues: it looks like Apple is currently facing some privacy violations, there was secret logging by a calculator app without user consent, and Siri recording private conversations, which I read about a couple of days ago. So they definitely have negative coverage around them right now, plus compliance issues. Okay, this is great. Now, what are the possible outcomes? We asked it to... oops. So my iPhone just heard me talk about Siri eavesdropping and started talking, started to answer a question. So it's been eavesdropping on the entire presentation, just so we're all on the same page. That's the most ironic thing that's happened to me in a while. Okay, I won't say her name again, she who shall not be named. It talks about the possible outcomes and then makes some recommendations, and this is where we're trying to leverage the LLM's intelligence. I'm not a media expert, but I know the LLM has been trained on a lot of media textbooks and a lot of content about how media relations work, how you can avoid bad press and proactively improve your future press sentiment. So we give this to the CEO and we're on our way. That's a general approach to context engineering with the graph, with the AskNews SDK. Let me see if I can answer some questions now. There was one question earlier about whether end users can edit the local subgraph. Yes, you've got the data at your disposal, and you can do with it what you wish. You can absolutely edit it, and that can come in the form of using it with Memgraph, uploading it to Qdrant, or editing it with Python; you can do whatever you want with it. We stop right at the point of giving you a graph with a bunch of metadata in a structure we believe is useful, but we have clients who absolutely take that and run with it; they do the last mile, more or less.
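Putting the pieces from the last two steps together, the prompt-to-LLM call described above can be written with current LangChain APIs roughly as follows. The prompt text and variable names are assumptions, and the helper functions are the ones sketched earlier; the ChatPromptTemplate, ChatAnthropic, and the `prompt | llm` chaining-and-invoke pattern are standard LangChain pieces.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic

prompt = ChatPromptTemplate.from_messages([
    ("system", SYSTEM_PROMPT),  # the report outline shown earlier
    ("human",
     "Entity under analysis: {entity}\n\n"
     "News articles:\n{articles}\n\n"
     "Relationship graph:\n{graph}"),
])

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0)

chain = prompt | llm  # the "glorified f-string" plus a model call

report = chain.invoke({
    "entity": "Apple",
    "articles": articles_to_string(articles),  # helpers from the earlier sketch
    "graph": graph_to_string(sub),
})
print(report.content)
```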
Okay, I did have one other question, about how developers are using AskNews. It's pretty clear how a researcher would use it, but what are some of the ways developers building solutions are using it? A couple. At any point where your application requires some real-world context. For example, a finance app that has ticker data and a lot of its own data sources for helping users track portfolio analytics. Maybe they want their user to ask a question about their own portfolio. It's great to have ticker data, and it's great to have some of your own internal information, but what about the news surrounding that company that extends beyond finance? As we just saw, probably a lot of this news related to Apple is political, and understanding that can really make a difference in a portfolio-analytics discussion. So it's helping add that corner of context into their app. That's one example. Go ahead and ask your question, Steve, or if you're having trouble and for some reason the mic isn't open yet, drop it in the chat. Can you hear me now? Got it. Okay, cool. Thanks so much, really great session today, Rob. I did want to ask: has your team encountered challenges with trying to detect synthetically generated content on websites, versus distinguishing what feels more authentic? Because we've definitely seen that with the mis- and disinformation angles: lots of regurgitated, synthetically generated content. I don't know if that's something you've had to struggle with. Yeah, that's an awesome question, and there's a lot to unpack there. Our website is, in some sense, synthetically generated information, so it's really about the quality of that synthetically generated content. We have a quality-control pipeline that aims to assess the quality of what's coming in. It's not perfect, and it's not only for detecting synthetic content; actually, we don't even try to detect whether something is synthetic. A lot of what we do is, when we're deciding what to go out and look for, we look at the website and ask: is this a legitimate website producing open-web content that has some value? If not, we just don't go. There are plenty of times we'll spot them ourselves; we'll say, oh, that website leaked, it just regurgitated its prompt, and it was clear they didn't have control over whatever was going on over there. But there's plenty of low-quality human-made information as well. A big piece of this is diversifying the context and making sure you're not just pulling from one area. If you ask about Apple and its privacy violations, you're getting 30 points of view on that, not just one. I think that's a big piece of it, and it allows you to at least sequester the lower-quality information. Sounds great, thanks so much. Thank you, appreciate it. So, Rob, there might be an adjacent question here from Kevin, a curious attendee. Following the idea of garbage in, garbage out: if I understand you correctly, the graph entities and relationships would depend upon how reporters define them.
So with different outlets, countries, and cultures reporting on the same story, you're getting different results and possibly a different graph. So, depending on how you filter, I assume, Kevin, the question is that you might end up with different subgraphs as well, and how do you deal with that? It's a great question. You can purposefully bias your search: you could say, I only want sources that originate from South Africa, and if you only trust South African sources, then you get a graph built only from those. At the same time, you can enforce diversity. We have a setting, diversify sources, where we cast a net and try to guarantee that you have the distribution of countries and languages present in the general cluster you're looking at. But there's no way to guarantee that anything is right. If the question is how do you make sure the relationship is correct: you don't, you can't. It's about ensuring you have large enough coverage of the reporting that you're comfortable saying, generally speaking, the alignment is here and the contradiction is here. As long as you trust that democratic approach to the reporting, you have a better picture of the news. But you cannot eliminate errors, and you cannot eliminate bias. Even we as a company are injecting our own bias into how these relationships are formulated, indirectly, through how we engineered the dataset. If we look at the dataset engineering itself, here's an example of diversity, which certainly impacts how relationships are extracted: we don't have a great representation for Zambia here. We ensure there's some, but there's just not enough information. So this is our approach to balancing: we need a lot of training data, we don't want to overemphasize certain countries, and we want to guarantee some representation of other countries. That's another approach to consider. Thank you for addressing that. Related to engineering, we have two data-engineering questions. I'll actually take the second one first, because I think it'll be quicker: whether you're using GLiNER for the triple extraction. Sure, and I'll just wrap up the earlier point; I think that other question got edited to say the graph does not contain the truth. That's an important aspect of what's going on: the graph is just a relational picture of the articles you selected and the relationships present in them. It's not actually presenting any truth, fact, or fiction; it's just a graph. With the asknews.app front end we get closer to a truth. We still don't say anything is fact or fiction, but that's where the graph is a component; it's not the end-all of the context engineering. You still need to present the information itself, the actual text, in addition to the graph, to say: what's going on, what are the key alignments, and what's a contradiction? And just to make sure I finish Kevin's question: yes, we use GLiNER, but for entity extraction, not for triple extraction. The triple extraction is the Phi-3 fine-tune; that's what Phi-3 is doing, more or less, and this is essentially what it looks like when Phi-3 extracts information. Thank you.
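For readers who haven't used it, GLiNER-style entity extraction looks roughly like this. The checkpoint name and label set here are assumptions, and this is only the entity-extraction half; in the pipeline described above, the triples themselves come from the fine-tuned Phi-3 model, not from GLiNER.

```python
# pip install gliner
from gliner import GLiNER

# A commonly published checkpoint; swap in whichever GLiNER model you prefer.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

text = (
    "Hassan Abdel Ghani, who leads the Islamist fighters, "
    "spoke to reporters in Damascus on Tuesday."
)
labels = ["person", "organization", "location", "date"]

entities = model.predict_entities(text, labels, threshold=0.5)
for ent in entities:
    print(ent["text"], "->", ent["label"])
```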
And then probably another question, from Sri, on the engineering aspect: once you receive the data from a few different sources, can you walk through the pre-processing you do before eventually storing it in Qdrant and elsewhere, before your app takes over? There's a full pipeline, and I probably won't remember all of the steps, there are a lot of them, but I think this is a good view of it: extracting statements, evidence, and attribution, essentially building the synthetic representation of that article. So extracting the statements, a small summary, the entities, the graph, the sentiment, the reporting voice; all of that processing happens before we even start looking, before we upsert to Qdrant. All of this happens, and then we upsert to Qdrant. And actually, a lot of what's stored in Qdrant is just UUIDs, in a way, because Qdrant is quite scalable and very powerful, but you don't necessarily want all of your text inside Qdrant. It's sometimes better to just keep UUIDs and then reference those UUIDs to something like S3, because S3 is going to be cheaper and more accessible. At least, that's how we've approached a lot of the underlying data handling. Does that answer your pre-processing question? Yeah, thanks. When you mention using S3 as a sort of cheaper option and using UUIDs, can you please elaborate on that? Yeah. Essentially, a common technique in engineering, especially with databases, is to index UUIDs, where the UUID references some information somewhere else: that could be an image, that could be a video. I guarantee that when you're searching YouTube for videos, you're basically interacting with UUIDs, and when you actually want one, it streams from somewhere else. This lets you be a lot more efficient: you're compacting your metadata, attaching it to a UUID, and that UUID can be used to fetch a larger amount of data from somewhere else. That's more or less the approach, and it works very well. Although, to be honest, Qdrant can still store quite a lot of information, so it's a balance; it depends on your app and where you want to put the information. In some sense, storing your information somewhere else can be cheaper, but then you have another failure spot: if that connection fails, you have to make a secondary call, which adds latency. But in some ways it's also more resilient, because you could still access the data if Qdrant goes down. The engineering side is really fun, there are always decisions to be made, but generally Qdrant is very flexible and makes these decisions easy. I see, thank you. Sure, thanks for coming.
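A minimal sketch of that UUID-plus-payload pattern is below. The collection, bucket, and field names are assumptions; the idea is simply that the vector store holds an ID, an embedding, and light metadata, while the full article body lives in object storage keyed by the same UUID.

```python
import json
import uuid

import boto3
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

qdrant = QdrantClient(url="http://localhost:6333")
s3 = boto3.client("s3")

def store_article(article: dict, embedding: list[float]) -> str:
    doc_id = str(uuid.uuid4())
    # Full article body goes to object storage, keyed by the UUID.
    s3.put_object(
        Bucket="news-articles",            # assumed bucket name
        Key=f"articles/{doc_id}.json",
        Body=json.dumps(article),
    )
    # The vector store keeps only the embedding plus light metadata.
    qdrant.upsert(
        collection_name="articles",        # assumed collection name
        points=[PointStruct(
            id=doc_id,
            vector=embedding,
            payload={"title": article["title"], "published": article["published"]},
        )],
    )
    return doc_id

def fetch_article(doc_id: str) -> dict:
    obj = s3.get_object(Bucket="news-articles", Key=f"articles/{doc_id}.json")
    return json.loads(obj["Body"].read())
```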
So we have probably about a minute and a half left, and it looks like a final question that may have already been answered. Good question, Giuseppe: no, Phi-3 can extract the entities and relationships by itself, which is really cool. But this is a good point, and I'll try to wrap up on it, because this is where the ontology-free aspect is kind of controversial: useful for us, for our application, but maybe not so useful for others. In some cases, you actually want to limit the entities and say: I only want to extract relationships between people and organizations, nothing else. In that case, the Phi-3 fine-tune we made would not be very useful; it would actually be quite problematic for you, because you'd end up getting entities you don't want. There are other models that have been fine-tuned for that, I think it's called SciPhi or something, where you can actually say, I only want these entities or only these relationships. But no, GLiNER is more or less separated completely from the Phi-3 component. Thank you, Rob, and thanks for joining us. How do people get a hold of you, or where can they follow up if they want more information? Sure. Head over to asknews.app and use it as your global news source, please, if you'd like. We believe that algorithmic enforcement of diversity has a lot of power and is really useful when you're trying to properly understand what's going on in the world. So asknews.app is the best place to find us, and from there you can find all of our different products. We have an API, and we have News Plunker, which is an analyst tool that lets you really visually interact with this information. My name is Robert Caulk, C-A-U-L-K; you can find me on LinkedIn and send me a DM, and I'm always happy to geek out if there's ever anything. We have a Discord as well, which you'll find on asknews.app, and I think I'm in the GraphGeeks Discord too, so I'm happy to go chat over there as well. Thanks, I really appreciate everybody coming in and listening, and we'll see where this ecosystem evolves to; I think we're all sitting on the forefront wondering what's going to happen, and that's the funnest part of this. Thank you, Rob. Thank you, everyone, for joining. I'll see you all online and in Discord, and definitely follow up with questions if you have them. Thank you again, Rob, really appreciate it. | GraphGeeks Explainer S2 Ep1: Exploring News at Scale with On-the-Fly Graphs | 3,393 | GraphGeeks | 20250122 | How do you build a knowledge graph that processes millions of new entities and relationships every day—and make it easy to explore through natural conversation? In this talk, Rob Caulk, the founder of Emergent Methods and open source veteran with over 1000 academic citations, will share how their team fine-tuned Phi-3-mini-4k to outperform Claude Sonnet 3.5 in graph extraction for dynamic knowledge wrangling at scale.
Rob will take you behind the scenes, breaking down key innovations: creative dataset engineering, novel loss metrics, and subjective evaluation methods that significantly boost accuracy in the AskNews software.
You’ll also learn how the team enabled on-the-fly sub-graph creation for targeted, domain-specific exploration of news. We’ll wrap up with a live demo, showing how to build and interact with a custom finance sub-graph—giving you practical strategies to apply knowledge graph exploration for your own use cases.
RESOURCES:
https://asknews.app/ | 2025-02-03T09:38:05.583267 |
https://www.youtube.com/watch?v=jgBd8iHWYXY | What I did was model all of that data to try to predict revenue, and discover that to actually predict revenue we had to do a product pivot: a product that we eventually built in six months and that drove about $500 million in gross processing volume. Welcome to GraphGeeks In Discussion. I'm Amy Hodler. Today we're talking to Claudia Natasia, a product leader whose search for revenue growth led her to start a company that uses graph tech to uncover customer sentiment. Thanks, Amy, I'm really excited to be here. I actually started my career as a data scientist, and I worked at some wonderful companies trying to figure out how to get the company to its next state. That reminds me of an experience I had at a company that had reached a plateau: revenue wasn't growing, we couldn't figure out why, and we had to help the company get to its next state, which is either a fundraising state or some sort of exit. At that point I had already transitioned from my data scientist career to become a product leader, and I also managed the user research team. That gave me access to quantitative data, the types of data that data scientists work with, and qualitative data, the types of data that user researchers work with: customer interviews and so on. What I did was model all of that data to try to predict revenue, and discover that to actually predict revenue we had to do a product pivot, a product we eventually built in six months that drove about $500 million in gross processing volume. If you're in fintech, you'll know I'm talking about a payments product; that's the metric GPV. Because that was so successful, I ended up presenting my experience at different conferences, and multiple product leaders, research leaders, data scientists, and designers would reach out to me saying, Claudia, what tool did you use to bring this practice to your organization, and can you build it for us? With that question, the idea of Riley was born. I decided to build Riley to make data-driven decisions accessible to everyone, to be the simplest way that anyone, insights leaders especially, can combine disparate sources of data to find clarity in strategy. I've been a product manager and a product marketing manager, and it's always a struggle to get into the heads of your customers: what do they not just say they want, but what would they actually pay for, or stick around for? It sounds like this started out more as a process than a tool, or at least as an idea; is that correct, and could you explain a little bit about that? Yeah, it originally did start more as a model. I've been tinkering with this model since 2011, when I was a student at Berkeley. My mom's an entrepreneur, and she was trying to figure out what was wrong with her business, so on a flight back to Indonesia I modeled something for her and proved her wrong, whether she likes to admit it or not; she'd probably say I didn't prove her wrong. But I digress. From that experience I kept building these models, getting better and better over time, as we all do in our careers.
And eventually I got it to a state where it was good enough that, at the company I brought up in the example earlier, it made a remarkable impact on the company's trajectory. When I started presenting the idea at conferences, I took two or three consulting gigs afterward. I was still working full time, but a couple of different companies wanted me to help build it for them, so I built the model for them. Then they'd reach out saying, hey, we don't really have anyone to manage the model; it's awesome that you did it for us at first, but we don't have anyone technical enough to take it a step further. That's when I realized it can't just be a model, it can't just be a process; it has to be an actual platform, and I want to be the person building the platform and designing the future where this exists. For listeners who might not intuit what you mean by model here, and I realize this is more than a model, can you explain a little bit about the elements that might be in the model, and maybe an example of a surprise you uncovered using your models? Yeah. It's a combination of a few different things. The base of our predictive model, the most basic version from a few years ago, is a regression model that predicts revenue, and as with any regression model, we have different variables that we put in. An example of a variable I'd put in to predict revenue: let's say I'm analyzing a bunch of customer service calls, trying to distill how those calls went and the general themes within those calls. I do a combination of two things. One is thematic analysis using k-means clustering, and based on those clusters, I turn the clusters into variables that I then input into the regression model. Another level, which also answers the question about what surprising thing I learned, is sentiment analysis. Themes aren't the only thing that's important. From a call we might discover three top themes, and one of the themes might be, say, "I'd like you to improve the UI, because for some reason I can't find the button to update my profile." We might find general themes like that, but without knowing the sentiment behind those themes, we don't know whether people were actually upset or happy. So another variable we put into the model is sentiment. And with sentiment, I realized that existing models, whether off the shelf or ones we build ourselves, could never be quite good enough if we keep them general. Let's say someone says, "Oh, great. Yet another feature." Sentiment analysis models interpret that as positive sentiment, but we know it could also be sarcastic. So I think human input is still important to help discern whether that is in fact a positive or a negative sentiment, and to bring in the cultural context as well: I grew up in Indonesia, and a sentiment like that would actually be perceived as more positive in Indonesia as opposed to negative.
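As a purely illustrative reconstruction of the kind of model described here (not Riley's actual code), one could turn call themes into cluster features, attach a per-call sentiment score, and regress a revenue target on them with scikit-learn. The inputs and their alignment (one revenue value per call or account) are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

def fit_revenue_model(calls, sentiment, revenue, n_themes=3):
    """calls: list of call transcripts; sentiment: one score per call;
    revenue: one target value per call/account (all assumed inputs)."""
    # Thematic analysis: cluster calls into themes
    tfidf = TfidfVectorizer(stop_words="english")
    X_text = tfidf.fit_transform(calls)
    themes = KMeans(n_clusters=n_themes, random_state=0).fit_predict(X_text)

    # One-hot theme membership plus sentiment become regression features
    theme_features = np.eye(n_themes)[themes]
    X = np.column_stack([theme_features, np.asarray(sentiment)])

    model = LinearRegression().fit(X, revenue)
    return model, tfidf, themes
```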
So how do customers use Riley? We're lucky to have a number of customers even from our closed beta, and we're really excited, now that we've launched publicly, to open it up to more of you. How customers use Riley is that they can input any qualitative data, whether it's text exports from customer service calls or Zoom meetings you have with your customers; we analyze Zoom transcripts directly, and we have an integration with Zoom. You're also able to input any survey data, so quant data in the form of CSVs. Click a button on Riley and, within minutes, we generate insights for you that are prioritized by customer impact. That part's really important, because after spending my career in product, up until today, every time I talk to PMs and researchers, the one pain point no one has solved is how to prioritize insights. I'm sure, Amy, you've had this experience as a product marketing leader too: we end up with a large list of feature requests, we don't know how to prioritize it, and we end up in meetings debating which one is number one. Everyone has different opinions, everyone has different inputs. Riley combines all of your opinions and inputs. We allow comments and discussions on the platform, which all come back into the AI, inspired by that model I created years ago, to produce a prioritized list of recommended insights we can feel more confident about. We know that if we prioritize the number-one insight Riley recommends, that is what's going to have the largest customer impact. There's real value in having some quantitative tools to help figure out what our roadmap really looks like, whether it's revenue decisions or marketing decisions: what theme do we focus on for the next year or six months, and what should be moved to later? Because, as you said, everyone has their favorite feature, pet project, or theme. So, since this is GraphGeeks, can you tell us a little bit about the platform? As we've spoken about earlier, I know you've got a graph in the back end, so I'd like to understand why you chose a graph and how it's being used. Yeah. To keep it very general for now, and if anyone would like to know a little more, feel free to reach out to me: how we use the graph right now is that a lot of our recommendations are powered by very deep industry knowledge of what drives business goals. At the end of the day, if we're trying to build AI that prioritizes insights and helps you determine which insights are the most important and have the highest customer impact, we want to make sure our recommendations are based on deep knowledge. So we currently train our graph to reflect industry knowledge we've gathered from subject matter experts and from widely available literature on, say, what drives adoption and what drives retention. All of the nodes in our graph essentially represent the different pieces of knowledge we've gathered over time, and as more and more people use the platform, our AI also learns from user behavior. So back to the example I brought up earlier of Munzey: Amy and I are working on the same team, Amy is the product marketing leader, and I'm a user researcher. Let's say we're debating an insight, and I discovered through a recent user research study that we have to improve the filtering functionality of our e-commerce site to drive more checkouts.
Amy is debating with me and saying, Claudia, I recently went to a conference and discovered that it's not just the UI that needs work, it's also our pricing; our pricing is terrible, and no one finds value in the way we price. Usually conversations like that get lost. But if Amy writes it as a comment on Riley, or as a Slack comment with the Riley integration, we capture Amy's comment, bring it back into our graph, and present it as part of the actual recommendation. As more and more people bring up ideas similar to Amy's, and as we discover, say through product analytics, that the filtering functionality turns out not to be the problem, the AI gets better and recommends Amy's strategy, ranking it higher. So why couldn't you do that without a graph? What drove you to choose that kind of technology? Yeah, I find that graph technology is more flexible, and it can learn and grow with the different types of data we hope to ingest and the different transformations we hope to do with our data over time. The way I think about the technologies we use is that they should be aligned with the future of Riley. If you think about data over the past 10 years, we've seen it transform into so many different types and variables; we used to work with one-dimensional data back in 2011, and since then it has transformed into multi-level analysis. So I like to choose what we use depending on where the future of the company is heading. When you were working with the graph, and I don't know how much you can tell us about whether you're using a graph database or some other kind of store and then instantiating a graph, can you explain a little bit about that? Are there things you learned that worked well, or didn't work well, for your use case? Yeah. I would say that if anyone here is starting a project or a startup and you're starting with a small amount of data for the near term, I wouldn't necessarily start with a graph database. I find graph databases are more useful once you have a substantial amount of data. What we define as substantial is a little subjective, but at the end of the day, if you're experimenting with a project, still at the stage where you're just trying to figure out whether something will work, and you have initial data from only two or three customers or testers, it's hard to scale your graph in a way that makes sense. If you're trying to create multiple nodes that connect to other nodes to predict a particular outcome, that's really hard to do with limited data. So at that point I'd recommend using some other system, and then thinking about how you'd eventually incorporate a graph into your technology. This also brings up a wider philosophy that I often talk about with my co-founder Kevin: sometimes it's not about making the tech work for us, it's about only choosing the tech if it works for you. I personally love graph technology, but if it didn't work for Riley, it's not something we would have chosen.
We choose the tech that we feel is the most valuable for propelling the company forward, and that aligns with our company ethos and what we hope to build at Riley. I love the fact that you call graph tech cool, because most of the people listening here would agree. Any other recommendations or resources for a startup getting started in this area that wants to look at some of the technology you've talked about, even outside the graph space? Yeah. I personally use two different ways to learn more about graph technology. The first is reaching out to people who are early adopters of graph technology, or its early builders. The most fun story I've had is reaching out to some early employees of Neo4j and asking them how they built it, and in exchange for asking how they built it, I also gave them feedback from using the product. After talking to a lot of early builders, I know that all of us, myself included, can feel a little nervous reaching out to people to ask for advice, but the best gift you can give them to get their attention is product feedback. We're all using these types of technologies very frequently, so if you ever have feedback, don't just email it to the support line or keep it to yourself; find someone on LinkedIn to share that feedback with and have a wider conversation. The second way is reading papers that incorporate this technology and reaching out to the people who wrote them. I've met some of my friends, people whose opinions I respect and whom I talk to whenever I want to bounce early ideas around, through this whole process of reaching out to people after reading their papers. I love those two recommendations, because they're both about making human connections and reaching out to people who are passionate about what they've created, whether it's a service, a solution, or a product. And I'd agree, I find it surprisingly successful: people want to learn how others are using what they're working on, and they want that feedback, which you're helping with. But it's also just that human-to-human interaction, and other people who care enough to actually reach out. So I love those recommendations, because they both fit into making connections, which of course graph people should all be excited about. You mentioned the closed beta earlier, but now I think it's more widely available; can you tell us about that? Yeah, absolutely. My co-founder, Kevin Ma, and I started Riley about eight months ago, and since then we've been building in a closed beta. When we decided to start Riley, we had a waitlist of about 50 different companies signed up to use the product, mainly from the conferences I mentioned I spoke at, and as we built the product we started onboarding some of these users to really become design partners.
Because this is an AI that I feel very strongly should be built with the community, since every good AI should reflect the needs of the community, we decided to build very closely with our design partners. For the last eight months we slowly onboarded our waitlist customers, built alongside them, and launched all of our features. One cool thing we did: one of our design partners is from Indonesia, and both Kevin and I flew to Indonesia about two months ago to meet him and really have a conversation about what's next for Riley. It's really exciting to see people worldwide starting to incorporate Riley. And now we're finally ready to launch publicly, so we're really excited to make Riley available as of today, the 23rd. What's really exciting is that anyone can now sign up to use Riley simply by going to askriley.io and creating an account. We have a free tier for a few months: when you start using Riley, you can just run projects, start experimenting with it, and feel the magic of being able to prioritize insights within minutes, with the click of a button. I'm sure some people in this community would like to be founders one day as well; I'm going to share a little about my journey and Kevin's journey on the blog in the next few weeks. It sounds like you're very collaborative in the way you work. Are there going to be any open-source elements or developer tooling that a developer could hook into and integrate with? Yes, we will absolutely have all of that, just not right now; the plan is to eventually have open APIs that developers can integrate with in the next few months. Awesome. So I have one final question that I like to ask people. Thinking over 2025: is there something you would like to do if you knew it were impossible to fail? If there were something that could be nothing but successful, what might that be? I think there are a lot of industries that are extremely underserved and would benefit from having a tool like Riley. I don't know if I mentioned it earlier, but I'm originally from Indonesia, and every time I go back I see all of these different companies and businesses, maybe small businesses like restaurants, that have so much data and don't know what to do with it, don't know how to make sense of it. My dream is for every single organization that uses data to use Riley. I know that's a strong statement, and we're starting with tech right now, but my hope is to get us to a state where every single organization uses Riley, because I think what we're building is extremely important and can provide so much clarity and even the playing field, even for small businesses. Awesome, I love that. Many of us often think about the enterprise, but there's just so much data out there in the mid-market and the small market that isn't utilized, and the power of that is kind of amazing. Well, I enjoyed this thoroughly; it's a great way to kick off the year. Really appreciate your time, and we'd love to talk to you and your co-founder again at some point. So again, thank you very much, really appreciate it, and happy new year. Thank you, happy new year to everyone.
| GraphGeeks In Discussion S2 Ep1: Capturing Elusive Customer Knowledge | 1,326 | GraphGeeks | 20250128 | Today's conversation with Claudia Natasia, CEO of Riley, takes us into the fascinating intersection of graph technology and customer behavior. As a data scientist turned product leader, Claudia discovered that the key to unlocking revenue growth was hidden in the complex web of customer data. That insight led her to found a company that's revolutionizing how businesses understand their customers using the power of graph technology. Join us as we explore her journey from data-driven problem solver to innovative tech founder, and learn how companies are uncovering elusive customer insights.
More information on Riley can be found at https://www.askriley.io/
HOT NEWS on Venture Beat https://venturebeat.com/business/introducing-riley-ai-the-first-ai-powered-product-insights-assistant-to-supercharge-growth/ | 2025-02-03T09:40:43.219882 |
https://www.youtube.com/watch?v=bbFEYPx9Hpo | All right, Wiz, today we go deep on DeepSeek. First question: is it really legit, this DeepSeek R1 model? I mean, in all the senses of the word "legit" I can think of, yes. Yeah, it's sent me personally down a rabbit hole trying to figure out what's really going on, and it's something like: there's real RL happening here, instead of some of the stuff we're used to. Let's say RL instead of SFT; it's not just supervised fine-tuning masquerading as RL. It's legit RL that DeepSeek is using, is that right? That's so right, yes. It's really RL. It's not RL-asterisk, it's not RL being forced into the SFT domain; it's really doing the reinforcements and the learnings. Okay. And what we've seen so far in the field, though, we're sort of alluding to this, hasn't been truly legit reinforcement learning; it's been this RLHF thing, reinforcement learning from human feedback, and we've got to split this hair today, don't we? Yeah, it's really been fancy SFT. That doesn't mean it's not valuable, and it doesn't mean strategies like DPO aren't useful for exactly what they're said to be useful for, which is aligning the model to our preferences. But they're definitely still stapled to that SFT domain in a way that prevents us from getting what we're going to see as the "aha" or eureka moment during the training of these larger reinforcement learning approaches. Yeah, and when we talk about reasoning, we're talking about teaching the model, basically, to think. And there's this really interesting idea at the bottom here: when we're not stapling outputs to human feedback and demanding they align with it, the model can actually learn to, let's say, truly think in ways that are aligned with the universe, not just with what humans think. Something like that is where I get to; is that a fair way to think about this as we head into today? Yeah, absolutely. The idea is exactly as you described: we're telling the model where the start and finish lines are, but we're not telling it how to get there. We're telling it: figure out how to get there, take whatever path you need, figure out a process by which you can get from start to finish reliably. That's something we don't do when we use things like RLHF and DPO; there we instead say, hey, model, please do this, and stick as close to it as you can. That's kind of the magic of the approach that the folks at DeepSeek really popularized. And I guess it's not that surprising, then, that they're aiming for AGI. They're aiming for AGI, baby. Okay, Wiz, we're going to have some discussions throughout, but it's time to kick off; thanks for your insights. Welcome, everybody. We're going to talk DeepSeek R1 today, and we're going to try to go as deep as we can. There's so much richness and depth at the bottom of all these different optimization algorithms that we don't have time to do all of them justice, but we look forward to a rich discussion today and a bit of an interesting story along the way. If you want to jump in with your questions, please do: there's a link to the Slido, and we'll prioritize those questions at the end of today's event, but if we're having a discussion, feel free to jump into the live chat. All right, let's kick off our second event in our large reasoning models series, this one on DeepSeek R1. There's so much goodness to come in reasoning in general; we'll remind ourselves of what we know today as we align our aim for the session, and see if we can take it a step further. We really want to understand DeepSeek R1 today in a proper technical context, building on what we've already learned about reasoning models. We want to take a wide-angle view of this reinforcement learning approach called Group Relative Policy Optimization, or GRPO, which was pioneered by the DeepSeek team, and we want to get some intuition about how it actually differs from something like Proximal Policy Optimization, the optimization scheme typically used in RLHF implementations. We also want to do some building, shipping, and sharing, of course. Shout out to the team at Unsloth; those guys just continue to crush it, putting out amazing stuff. We're going to leverage their implementation to train our own off-the-shelf version of a Llama 3.1 model so it can do reasoning, and we can do this with a very, very small amount of GPU RAM today. It's amazing, the technical advances we're seeing in LRMs. So we'll talk about DeepSeek the company, then about DeepSeek R1 and some of the way we got to where we are today, before we do our own fine-tuning, and we'll talk a little about this idea of distillation and how it differs from exactly what we're going to do today. All right: DeepSeek. The first thing to know about DeepSeek is that it's a competitor to OpenAI. The top line on DeepSeek, which I don't hate, is "Unravel the mystery of AGI with curiosity. Answer the essential question with long-termism." That's a new one for me, but I actually kind of love it. Let's learn a little more about the company. Liang Wenfeng, a billionaire and the founder of what is now known as DeepSeek, founded a company in 2016 doing stock trading with deep learning on GPUs, and he was crushing it; he was ahead of his time. In 2019 he founded High-Flyer AI, a hedge fund really focused on AI trading algorithms, and that went well enough to make him a lot of money. In 2021 they took it to the next level with their Fire-Flyer 2 lab, which they were stocking with NVIDIA GPUs; they were trying to get something like 10,000 A100s. In that 2021 timeframe this guy was already very well known in China: he wrote the preface to one of the biggest quant books, The Man Who Solved the Market, the book about Jim Simons and the quant revolution. So he was very well known in the space, coming up in the right place at the right time. All of this led to a bigger initiative in 2023: okay, let's go not just high, not just flyer, we're going straight to AGI, which ultimately led to incorporating DeepSeek. Since then they've been shipping and sharing like a bunch of legends. We saw DeepSeek Coder, DeepSeek LLM, the DeepSeek Mixture-of-Experts model series, DeepSeek Math, V1 to V2, and then in late 2024 DeepSeek R1 Lite Preview, feeling quite familiar for those of us who have been following OpenAI, then DeepSeek V3 and ultimately DeepSeek R1. So this has been a long time coming, and it shouldn't really surprise folks in the industry that this is the kind of lab that came up with this. Putting all the headline stuff aside, that's, brass tacks, what was going on behind the scenes. This leads us into DeepSeek R1, and I really like this quote from Jay Alammar, who recently put out "The Illustrated DeepSeek-R1," very much worth your time to read; he's the same guy who put out "The Illustrated Transformer" so long ago. It's "the latest beat in the steady drum roll of AI progress." He goes on to say, and we're going to focus the session around this quote in some sense: like most existing LLMs, DeepSeek R1 generates one token at a time, except it excels at solving math and reasoning problems because it's able to spend more time processing, through this idea of generating thinking tokens, which is very cool and which Wiz is going to go into in some depth, and that helps it work with these chains of thought. There are a number of pieces to break down here. The first is one we've seen before: chain of thought, the idea of thinking through something step by step. The original paper from the Google Brain team in January 2022 introduced chain of thought as a series of intermediate reasoning steps, and chain-of-thought reasoning turns out to be very good for things like simple math problems, and also for coding problems. There are many examples of how this went: it started in 2022 with that grade-school math dataset. Can we do well on it? Yes, we can do really well on it, and we're starting to reach the limits of just crushing it. There's also another tangential, related idea that I think is important to keep in mind as we use these systems: we can think through something, and we can also refine our thinking about it. This idea of self-refinement came out in March 2023, in a paper where the LLM acts as a judge of its own output, quite close to LLM-as-a-judge. There are a lot of core ideas here, but you can imagine having the LLM think things through, having the LLM judge some of the outputs, and having the LLM do some self-refinement. All of these ideas are related to reasoning in some sense; they're related to these loops, or, if you're familiar with us talking about agents, the idea of looping through and thinking through things step by step. Those core foundational pieces have led us to today. We have to fast forward quite a bit to catch up, but the idea of being able to spend more time doing this is where we're focused as an industry today. In a paper from Google DeepMind in August of last year, we can scale what we now call test-time compute: if we think longer on difficult problems, whether we're humans or machines, we can probably do better. This is the same idea we saw the following month when OpenAI released o1 in September: o1 thinks before it answers, with these long internal chains of thought. The idea of test-time compute is nothing but time spent thinking. All of this is core foundational stuff, and when we look at anybody's explanation, we see them say things like "through reinforcement learning, the model learns to hone its chain of thought and refine the strategies it uses." In the spirit of honing chains of thought and refining strategies, we want to walk you through a set of demos. We'll start with a simple off-the-shelf model and see how it performs, give it some chain-of-thought prompts, and then see how it compares to something like DeepSeek R1 off the shelf. It's time for the LLM wizard in the house; Wiz, passing it over to you. Oh yeah. Okay, so this is the demo we're going to do. The big thing we want to focus on is the outputs we're seeing. When we talk about the ability to reason and everything like that, we kind of have to define what we actually mean by reasoning, and what we mean in this case is the ability to work through problems in a way that emulates, or imitates, the way that humans think. So we have this classic fun toy question: how many R's in the word strawberry? When we ask the model, it doesn't really think about it, it just says there are two: "there are two R's in the word strawberry, and also two other R's, making a total of two R's and two other R's for a total of two main R's." I don't even know what's happening there. This is obviously a great model, it's Llama 3.3 70B, but it's just absolutely missing the point here. So what we do, or have done, to induce this kind of reasoning is something called chain-of-thought prompting: "think through your response step by step." This is basically a crutch we've used; it's a way to get the model to slow down a second and think through it, not just output whatever the first thought that comes to mind is, just like you'd hope normal people would. We're trying to induce this quote-unquote reasoning, and we get a different output: to find the number of R's in the word strawberry, let's break the word down into individual letters, count the R's; the first R appears at position three, then position eight, then position nine; there are three R's in the word strawberry. Okay, so now it can do it; we just have to use this weird little prompting strategy. That's nice. What we want, though, is a model that doesn't need that strategy, or, more precisely, leverages that strategy to expanded effect, and that's where DeepSeek R1 enters the scene. Now you'll notice we don't have to provide this chain-of-thought-style prompt. We certainly can, and for math problems we're told we should, but the idea is that we just ask "how many R's in the word strawberry," and then we get "thinking for 42 seconds." Now, that's a lot of thinking for this question; make no mistake about it, that's, I would say, too much thinking for this particular question. But the idea is that we get this whole extremely interesting thought process, and some of it helps us understand why the model might make some of those off-the-cuff mistakes. "I need to figure out how many times the letter R appears in the word strawberry; let me start by..." and so on, we get so much stuff. And then it goes: "Wait, that would be three R's, but I'm a bit confused, because sometimes I might think of the spelling as strawberry, which has two R's at the end. Wait, no, let's check the correct spelling of strawberry." So we learn a little about the fact that it's just looking at "berry"; it's not yet looking at "straw" as part of "strawberry." And you see it do this weird doom-spirally loop where it goes "but maybe there are two R's," then discovers again that there are actually three, then maybe two again, and then finally we get back to: it's spelled like this, there are three R's, the answer is three. There are great comments about this in the chat right now: "LLMs don't really think," "chain of thought helps where we know the answer." That's largely true. The idea of chain of thought is essentially to fill up our context window with as much relevant context as we can, so that we're able to pick the correct answer out of the pile. It also helps us push a little deeper, a little further, and explore parts of a problem that weren't explored at first blush. So again, you can say all kinds of things: it's not really thinking, okay, fine; it's not really reasoning, and that's probably literally true in some senses, under certain definitions. But the idea is that we're basically blowing up our context window with things we imagine are going to help us get to, or at least uncover a path to, the correct answer. Now, this is the most important piece of this whole puzzle: we're exploring the possible solution space, and we're therefore increasing the likelihood that we stumble upon the correct answer. So whether or not we want to call that reasoning, whether or not we want to nail that down as the model thinking, is certainly up for debate. But I would say that this is, for the most part, how we're emulating that process. It is certainly how we're allowing the model to spend more compute at time of inference to increase the likelihood of getting the correct solution, which is why this pattern fits into something called test-time scaling, or inference-time scaling: it allows us to spend more resources outside of training and instead spend those resources at time of inference to stumble into the correct response.
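Since the session contrasts GRPO with PPO, here is a minimal numeric sketch of the group-relative advantage at the heart of GRPO as described in the DeepSeek papers: instead of learning a separate value model as a baseline (as PPO does), you sample a group of responses to the same prompt, score them with a reward function, and normalize each reward against the group's mean and standard deviation. The reward values below are a toy stand-in, not an actual training run.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: z-score each sampled response's reward
    against the group of responses drawn for the same prompt."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Toy example: 4 sampled answers to one math prompt, rewarded 1.0 when the
# final answer is correct and well formatted, 0.0 otherwise.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))
# Responses that beat the group average get a positive advantage and are
# reinforced; PPO would instead estimate this baseline with a learned critic.
```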
If you want to jump in with your questions or in chat, please do. There's a link to the Slido. We will prioritize answering those questions at the end of today's event. But if we're having a discussion, feel free to jump in the live chat. All right, let's go ahead and kick off our second event in our large reasoning models series. This is our event on DeepSeek R1. So much goodness to come in reasoning in general. We're going to try to remind ourselves what we know today as we align our aim to the session and see if we can take it a step further. We really want to understand DeepSeq R1 today in context, in a proper technical context, building on what we've already learned about reasoning models. We want to try to take a wide angle view today of this reinforcement learning approach called group. what we've already learned about reasoning models we want to try to take a wide angle view today of this reinforcement learning approach called group relative policy optimization or grpo that was pioneered by the deep seek team and we want to talk and try to get some intuition about how it actually differs from something like proximal policy optimization and the optimization scheme used typically in RLHF implementations. So we want to take all of this and we also want to do some building shipping and sharing, of course. So shout out to the team at Unsloth. Those guys just continue to crush it, putting out amazing stuff. We're going to leverage their implementation of how to actually train our own version off the shelf of a Lama 3.1 model. So it can do reasoning. We can do this with a very, very small amount of GPU RAM today. It's amazing the technical advances we're seeing in LRMs. So what we want to do is we want to talk about DeepSeek as the company. Then we want to talk about DeepSeek R1 and kind of some of the way we got to where we are today before we do our own fine tuning. And we'll talk a little bit about this idea of distillation and how it differs from exactly what we're going to do today. All right, DeepSeek. The first thing we want to know about DeepSeek is that it's a competitor to OpenAI. Top line on DeepSeek, which I don't hate, is unravel the mystery of AGI with curiosity. Answer the essential question with long-termism. That's a new one for me, but I actually kind of love it. Let's learn a little bit more about the company. So, Leon Wenfang, he actually is a billionaire, the founder of DeepSeek, what is now known as DeepSeek. He founded a company in 2016, and he was doing stock trading with deep learning, with GPUs, and he was crushing it. you know, he's kind of ahead of his time. 2019, he founds High Flyer AI, a hedge fund, a hedge fund, really focused on AI trading algorithms. And that went really well, enough to make him a bunch of money. And in 2021, they really started to take to the next level with their Fire Flyer 2 lab that they were going to to stock with NVIDIA GPUs. They were trying to get like 10,000 A100s. And, you know, in this 2021 timeframe, this guy's already very well known in China. So he wrote the preface to sort of one of the biggest quant books from Jim Simons, uh or the man who solved the market is the name of the book. It's about Jim Simons and the quant revolution and sort of talking about this idea of quant trading. So very, very well known in the space, coming up in the right space at the right time. 
And, you know, so all of this kind of leads to 2023, there's like this bigger initiative to say, okay, let's go not just high, not just fire, we're going straight AGI. And that ultimately led to kind of incorporating DeepSeq. And since then, they've been shipping and sharing like a bunch of legends. We saw DeepSeq Coder, DeepSeq LLM, DeepSeq Mixture of Experts Model Series, DeepSeq Math from V1 to V2. And then in late 2024, we see DeepSeq R1 Lite Preview. Feeling quite familiar for those of us that have been following OpenAI, DeepSeq V3 and DeepSeq ultimately R1. So this has been a long time coming and it shouldn't be really surprising to folks in the industry to see that this is the kind of lab that came up with this. So, you know, putting all the headline stuff aside, this is really brass tacks, what was going on behind the scenes. This leads us into DeepSeek R1. And DeepSeek R1, I really like this quote from Jay Alomar, who put out the illustrated DeepSeek R1 recently, very, very worth your time to give that one a read. The same guy who put out the illustrated Transformer so long ago. It's the latest beat in the steady drum roll of AI progress. He goes on to say that, and we're going to kind of focus the session around this quote in some sense today. Like most existing LLMs, DeepSeq R1 generates one token at a time, except it excels at solving math and reasoning problems because it's able to spend more time processing through this idea of generating thinking tokens, which is very cool and Wiz is going to go into quite some depth on. And that helps us to deal with these chains of thought. So there's a number of pieces that we need to break down here. The first one is one we've seen before, this idea of chain of thought. This is the idea of thinking through something step-by-step. The original paper from the Google Brain Team in January, 2022, introduced this idea of chain of thought, a series of intermediate reasoning steps. And so chain of thought reasoning, you can see it's very good for things like simple math problems, or also it's very good turns out for coding problems. And there are many examples of the way this kind of went. It started off in 2022, looking at this sort of grade school math, Thank you. And there are many examples of the way this kind of went. It started off in 2022 looking at this sort of grade school math data set. Can we do well on it? It's like, yeah, we can do really well on it. And we're starting to reach the limits of just crushing it. And there's also this other sort of tangential idea that's related that's, I think, important to keep in mind as we use these systems. You know, there's also this other sort of tangential idea that's related that's I think important to keep in mind as we use these systems. You know, there's also this idea that we can think through something, we can also sort of refine our thinking about something. This idea of self-refinement came out in March 2023 in this paper, and it's just sort of the LLM is able to sort of be a judge to its own thinking. And this is, you know, quite llm as a judge as well and there's a lot of sort of core ideas here but you know you could imagine that you just are kind of having the llm think through and you're kind of having the llm judge some of the outputs and you're kind of having the llm do some self-refinement. And all of these kind of ideas are related to reasoning in some sense. 
They're related to these loops, or, if you're familiar with us talking about agents quite a lot, this idea of looping through and thinking through things step by step. Those core foundational pieces have all led us to today. We have to fast-forward quite a bit to catch up, but this idea of being able to spend more time doing this is where we're focused as an industry today. In this paper from Google DeepMind from August of last year, we see that we can scale what we now call test-time compute: basically, if we think longer on difficult problems, whether we're humans or machines, we can probably do better. And this is the same idea we saw the following month from OpenAI when they released o1 in September: o1 thinks before it answers, with these long internal chains of thought. This idea of test-time compute is nothing but time spent thinking, right? So all of this is core foundational stuff, and what we see when we look at anybody's explanation is that they say things like: through reinforcement learning, the model learns to hone its chain of thought and refine the strategies it uses. In the spirit of honing chains of thought and refining strategies, we want to walk you through a set of demos. We're going to start with a simple off-the-shelf model, see how it performs, give it some chain-of-thought prompts, and then see how it compares to something like DeepSeek-R1 off the shelf. It's time for some LLM wizardry; Wiz, passing it over to you. Oh yeah, okay. So this is the demo we're going to do. The big thing we want to focus on is the outputs we're seeing. When we talk about the ability to reason and everything like this, we kind of have to define what we actually mean by reasoning. What we mean in this case is the ability to work through problems in a way that emulates, or imitates, the way that humans think. So the idea is that we have this classic fun toy question: how many R's in the word strawberry? When we ask the model, it doesn't really think about it; it just says there are two. "There are two R's in the word strawberry, and also two other R's, making a total of two R's and two other R's for a total of two main R's." I don't even know what's happening there. This is a great model, it's Llama 3.3 70B, but it's just absolutely missing the point here. So what we do, or have done, to induce this kind of reasoning is something called chain-of-thought prompting: this "think through your response step by step" idea. Now, this is basically a crutch we've used; it's a way to get the model to slow down a second and think through it, to not just output whatever first thought comes to mind, just like you would hope normal people do. We're trying to induce this quote-unquote reasoning. And we get a different output: to find the number of R's in the word strawberry, let's break the word down into individual letters; we get s-t-r-a-w-b-e-r-r-y; count the R's; the first R appears at position three, then position eight, then position nine; there are three R's in the word strawberry. Okay, so now it can do it; we have to use this weird little prompting strategy, but it can get there.
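To make the comparison concrete, here is a minimal sketch of the two prompting styles being demonstrated, using an OpenAI-compatible chat API; the client setup and the model name are placeholders rather than the exact setup used on stream.

```python
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint; placeholder setup
MODEL = "llama-3.3-70b-instruct"  # placeholder model name, not the exact deployment used live
question = "How many r's are in the word 'strawberry'?"

# Plain prompt: the model tends to answer off the cuff.
plain = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: the "crutch" that asks the model to slow down.
cot = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": question + "\n\nThink through your response step by step.",
    }],
)

print(plain.choices[0].message.content)
print(cot.choices[0].message.content)
```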
Okay, that's nice. What we want, though, is a model that doesn't need that strategy, or rather one that leverages that strategy to expanded effect. And that's where DeepSeek-R1 enters the scene. Now you'll notice we don't have to provide this chain-of-thought-style prompt. We certainly can, and for math problems we're told that we should, but the idea is that we just ask how many R's are in the word strawberry, and then we get "thinking for 42 seconds." Now, that's a lot of thinking for this question; make no mistake, I would say that's too much thinking for this particular question. But the idea is that we get this whole, extremely interesting thought process, and some of that thought process helps us understand why the model might be making some of those off-the-cuff mistakes. We have: "I need to figure out how many times the letter R appears in the word strawberry, let me start by..." blah blah blah, we get so much stuff. And then it goes: "wait, that would be three R's, but I'm a bit confused, because sometimes I might think of the spelling as strawbery, which has two R's at the end; wait, no, let's check the correct spelling of strawberry." So we can learn a little bit about the fact that it's just looking at "berry" and not yet at "straw" as part of "strawberry." And you see it kind of does this weird doom-spirally loop where it goes, "but maybe there's two R's," and then it discovers again that there are actually three R's, and then maybe two, and then finally we get back to: it's spelled like this, there are three R's, the answer is three. There are great comments about this in the chat right now: LLMs don't really think; chain of thought helps where we know the answer. This is largely true. The idea of chain of thought is essentially to fill up our context window with as much relevant context as we can so that we're able to pick out the correct answer from the pile. It also helps us push a little bit deeper, a little bit further, and explore parts of a problem that weren't explored at first blush. So again, we can say all kinds of things: it's not really thinking, okay, fine; it's not really reasoning, and that's probably literally true in some senses, under certain definitions. But the idea is that we're basically blowing up our context window with things we imagine are going to help us get to, or at least uncover a path to, the correct answer. This is the most important piece of the whole puzzle: we're exploring the possible solution space and therefore increasing the likelihood that we stumble upon the correct answer. Whether or not we want to call that reasoning, whether or not we want to nail that down as the model thinking, is certainly up for debate. But I would say that this is, for the most part, how we're emulating that process, and it is certainly how we're allowing the model to spend more compute at time of inference to increase the likelihood of getting the correct solution. Which is why this pattern fits into something called test-time scaling, or inference-time scaling: it's allowing us to spend more resources outside of training and instead spend that resource at time of inference to stumble into the correct response.
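Since R1-style models emit their deliberation between explicit thinking tags, a small helper like the following is a common way to separate the reasoning trace from the final answer; this is just a sketch, though the `<think>` tag matches the format DeepSeek-R1 outputs.

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Separate the <think>...</think> trace from the final answer text."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    thinking = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return thinking, answer

thinking, answer = split_thinking(
    "<think>Let me spell it out: s-t-r-a-w-b-e-r-r-y... three r's.</think>The answer is 3."
)
print(answer)  # -> The answer is 3.
```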
So with that out of the way, we'll pass back to Greg, who will explain a little more of what's happening during the training process that lets this very fun, very useful effect emerge. All right, thanks, Wiz. Okay, so we've got this idea of chain of thought now; let's go a little deeper into the history of DeepSeek-R1 and how it came to be. If we look back, the first primary thing worth paying attention to in the DeepSeek paper is this idea of GRPO, and we'll talk about that shortly; Wiz will talk about it in the notebook. That policy optimization algorithm came from the DeepSeekMath paper released in February of last year. Now, this is interesting, because DeepSeekMath, titled "Pushing the Limits of Mathematical Reasoning in Open Language Models," is quite reminiscent of the path that OpenAI took; everybody's competing for the same thing. OpenAI in October 2021 said, hey, we can solve math word problems, and it was GSM8K: got them. In February 2022 they said, we can even solve harder math problems, these more formal, high-school-level math problems, and they found that this method of expert iteration works pretty well for it; we talked a little bit about this during the last session. Importantly, in May 2023 they released a blog and paper titled "Improving Mathematical Reasoning with Process Supervision," and this gets to the point we opened the event with today. With this process supervision idea, we can look at each step of the reasoning and reward each step; alternatively, we could look at just the outcome and reward just the outcome. Reading from the OpenAI paper: in addition to boosting performance relative to outcome supervision, process supervision has an important alignment benefit; it directly trains the model to produce a chain of thought endorsed by humans. Okay, so that one's more HF-y there, more human feedback. We can train reward models to detect hallucinations using either outcome supervision or process supervision; either way, we're looking at those chains of thought. And they said, yo, we crushed this math dataset, and we're releasing this other data, enjoy. That was May 2023. DeepSeek comes along in February 2024; again, this is about open models, and they said, hey, our DeepSeekMath model is pretty small, but it's crushing it, and it's crushing exactly that same benchmark, the MATH benchmark. Let's take a look at what else we can learn from the paper as we dig in. This is the place where they introduced Group Relative Policy Optimization, a variant of PPO, one that enhances math reasoning while concurrently optimizing the memory usage of PPO. Now, this is really cool, and it's been speculated on this last point, at least by the guys on the All-In pod, that they were resource constrained here; fundamentally, they didn't have as much GPU compute to use as somebody like OpenAI. So it's a question of whether the creativity and innovation of GRPO came from constraint or was just a really great, cool breakthrough. It's not really clear one way or the other, but it does make a lot of intuitive sense to say: hey, we don't have enough memory to use, let's figure out something that doesn't require as much memory.
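As a purely illustrative toy sketch of that distinction (the `judge_step` verifier here is hypothetical, standing in for whatever process reward model or checker you have), outcome supervision scores only the final answer, while process supervision scores every intermediate step:

```python
# Toy sketch: two ways to reward the same worked solution.
steps = ["2x + 3 = 11", "2x = 8", "x = 4"]

def outcome_reward(steps: list[str], target: str = "x = 4") -> float:
    # Outcome supervision: one score for the whole trace, based only on the final answer.
    return 1.0 if steps[-1].strip() == target else 0.0

def process_rewards(steps: list[str], judge_step) -> list[float]:
    # Process supervision: a score for every intermediate step (judge_step is hypothetical).
    return [judge_step(step) for step in steps]

print(outcome_reward(steps))                             # 1.0
print(process_rewards(steps, judge_step=lambda s: 1.0))  # [1.0, 1.0, 1.0]
```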
And I want to walk through the contributions of this paper, because they're actually kind of staggering. The first one: GRPO, okay, dope. GRPO forgoes the critic model, instead estimating the baseline from group scores. There's a lot packed into that sentence, much more than we can get into properly today, but we're going to do our best to cover it at a high level. Again: significantly reducing training resources. The idea here is connected to a lot of different things, but essentially it's the same idea we talked about: we're not having the human feedback happening; instead, we're using a true RL approach where we can look at a number of more objective ways of analyzing how we think through problems like math problems and code problems. Next: we demonstrate that GRPO significantly enhances the performance of our instruction-tuned model. Furthermore, we observe enhancements in out-of-domain performance when we use this RL. Meaning: if we learn to think through stuff like math and coding, if we come up in the game as scientists, in other words, we can use that incredibly articulated thinking-through ability in all sorts of other domains. And of course, we as engineers know this already; it's cool to see that AI can also do this. They go on to say they provide a unified paradigm. This is totally beyond the scope of today, but it's really cool, and I'm quite excited to dig in further to understand the different methods, such as rejection sampling fine-tuning, direct preference optimization, proximal policy optimization, and group relative policy optimization. You don't see RLHF in there, which is quite interesting. And finally: based on our unified paradigm, we explore the reasons behind the effectiveness of RL. So I would say these DeepSeek guys are pretty legit; it certainly seems that way from reading some of their papers. And if we zoom in on these ideas of using RL, enhancements in out-of-domain performance during RL, a unified paradigm to understand the different methods that use RL, the reasons behind the effectiveness of RL, then we also have to deal with it when we read something like the OpenAI blog: through RL, o1 learns to hone its chain of thought. Wiz will talk a little more about this in his notebook, but this is the image we get from the DeepSeekMath paper, and there's a lot going on. The one thing I want to draw your attention to is that we get this sort of group of ways to reward our model in GRPO, and this is something we can choose and design ourselves, as we'll see when we do fine-tuning with Unsloth. And that's very cool. We still see KL divergence; we still see a reference model and a reward model; there are a lot of similarities here. And if you guys want to see a follow-up event that continues to go deeper into this, let us know. The DeepSeek-R1 paper that came out just this past month takes this and leverages it in a way that adds additional layers of complexity, which we're going to try to highlight, again as best we can, to go end to end today. "We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero was a model trained via large-scale RL without SFT as a preliminary step." And this is where we get into: how do you train models?
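For reference, the group-relative advantage at the heart of "estimating the baseline from group scores" can be sketched roughly as follows; this is paraphrased from the DeepSeekMath paper with simplified notation, so treat it as a sketch rather than the exact objective. Each of the G sampled outputs gets its reward standardized against its own group, and that standardized score plays the role the critic's value estimate would play in PPO, with a KL penalty to a reference policy keeping the update in check.

```latex
% Sketch: sample G outputs o_1..o_G for a prompt q, score them r_1..r_G, then
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)}
% plug \hat{A}_i into a PPO-style clipped surrogate, minus a KL term to the reference policy:
\mathcal{J}_{\mathrm{GRPO}}(\theta) \approx
  \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G}
    \min\!\bigl(\rho_i \hat{A}_i,\ \operatorname{clip}(\rho_i, 1-\varepsilon, 1+\varepsilon)\,\hat{A}_i\bigr)\right]
  - \beta\, D_{\mathrm{KL}}\!\left(\pi_\theta \,\Vert\, \pi_{\mathrm{ref}}\right)
% where \rho_i is the usual policy probability ratio against the old policy.
```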
And this is where there's just a lot that goes into models that get trained these days; whether you're talking about SLM-type models you can pull off the shelf or reasoning models, there are a lot of steps that go into it. The easiest way to think about it is: here's the training pipeline, and this is from The Illustrated DeepSeek-R1; I definitely encourage you to check it out. We basically take DeepSeek-V3, we do some supervised fine-tuning, and then we do some fine-tuning with RL. That's it, super easy to understand. When you start getting into the specific details is when it gets less easy to understand. For example, what are we using for supervised fine-tuning data, exactly? Well, you probably won't be surprised to see that we're using reasoning data: chain-of-thought-type examples like the ones Wiz showed earlier. And then we can split up these two training pieces into this idea of reasoning-oriented RL and general RL, and all of a sudden we're in a space where we have two different types of RL, and neither one of them seems to be RLHF proper. So it's probably time to remind ourselves a little of how a typical model that we might already be familiar with is trained today. This is also from Jay Alammar, in his book Hands-On Large Language Models, and I like it because, basically, you have to train the base model with some unsupervised pre-training, you have to do some instruction tuning to make it generally good at following instructions, and then you have to make it good at understanding how humans want it to follow instructions: be harmless, don't be toxic, et cetera. A way we typically think about this is by looking at our good friend the Shoggoth, where we train the big model, we then make it helpful, and we can then make it harmless, or dial in other aspects and critiques. Now I want to bring this back to what we were saying at the beginning. If we think about instruction tuning in a classic sense, this is fine-tuning; this is supervised fine-tuning with instructions. If we think about RLHF, it really is also fine-tuning; it's just a finer fine-tuning, really dialing this in. We've covered PPO and the proximal policy optimization scheme for RLHF a number of times: we put in prompts that are maybe going to be a little bit challenging, and we want to make sure we get a good response that's not going to be harmful or toxic, so we might try to bait the LLM with these prompts. And we want to make sure that the policy model we're training stays pretty close to the initial model through this check, this KL divergence check. The reward model is going to score the output; again, it scores the output, not each step of the chain-of-thought process, although we could do that as well. And then it decides how it wants to change the value, change the state, of how we're going to deal with a prompt like that in the future; we go update the weights accordingly, typically in a LoRA adapter, and then we go through a number of iterations and try to pump up those reward numbers. So we have RL, we have RLHF, we have rewarding the outcome, we have rewarding steps. Wiz, I want to bring you up to have a little discussion here.
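A minimal sketch of the reward shaping being described in that PPO/RLHF loop follows; this is illustrative Python, not any specific library's API. The reward model scores only the final output, and a per-token KL penalty against the frozen reference model is what keeps the policy from drifting too far.

```python
import torch

def shaped_token_rewards(
    rm_score: float,                # scalar score from the reward model for the whole response
    policy_logprobs: torch.Tensor,  # per-token log-probs under the model being trained
    ref_logprobs: torch.Tensor,     # per-token log-probs under the frozen reference model
    beta: float = 0.05,             # strength of the "stay close to the reference" penalty
) -> torch.Tensor:
    kl_estimate = policy_logprobs - ref_logprobs   # simple per-token KL estimate
    rewards = -beta * kl_estimate                  # penalize drifting from the reference model
    rewards[-1] = rewards[-1] + rm_score           # the outcome-level score lands on the last token
    return rewards
```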
I'm confusing myself again, right? It's like, are we using RL in only one of these? Why does this distinction really matter for us to understand right now? Yes, we are using RL in the case where we're talking about GRPO, the DeepSeek method. We are not really using RL for the RLHF pattern. That's largely because what we're really doing at the end of the day there is SFT with extra signal; there's no real exploration occurring. We're just saying, hey, when the model produces a response, we're going to add some additional datums to the final weight update process, to put it as simply as possible. The kind of magic of the GRPO approach is that we don't care at all what the model outputs are, as it relates to determining what the weight update should be. What we instead care about is this kind of group: how well did it score on these metrics? And we're going to update based on that. And those metrics are not necessarily directly related to the output itself. All that means is: we care that the response is correct, and we give score for that; we care that the response follows a specific format or prompt format; we're going to talk about that when we do the actual build. But the idea is that we have the ability to say, hey, what we actually want to do is explore this space unrestrictedly, and not just say, hey, I noticed your generated sentence didn't score well, do it again. It's more like saying: hey, you're scoring well on our metrics; I don't really care how you're getting there; you get there however you want. And why does it matter? Well, that exploration process is what allows us to have these high-impact moments in training that quote-unquote unlock abilities, to say the least. So that's kind of where we're at on why it's important. Okay, okay. So yeah, I like this idea that in a GRPO scheme we're sort of allowing our model to play, allowing it to explore the environment, if you will, almost the way you'd think about an RL system in the classic sense. And instead of imposing something where we say, this output is of the ultimate value, we're saying: hey, figure out the answer; however you do it is dope, just get me the answer, and if the answer is right, good job. I like thinking about how you used to be graded in school, and what is cheating and what is not cheating. It's like, okay, if I give you just the right answer, is that enough? Well, GRPO kind of says yes, even if you got there a number of different ways, something like that. Correct. Whereas if you do explicit process supervision, it's kind of like when the teacher graded each step of your process.
And even if the answer was right, you got the question wrong; something like that, right? That's exactly it. Yes. Which is the worst, right? It is. The worst, yeah. We're going to go into this with an assumption we hold to be true: if we get the right answer most of the time, let's say our process is working. If we're able to arrive at the correct response more often than not, or, in the case of math benchmarks, more often than most other models can, then something is inherently right with that process. We may not care to understand it, and the way it looks might not be appetizing, et cetera, but there's something about it that allows the model to get to the right answer, and that's the thing we care about. Okay. And so is this Shoggoth analogy, with the little smiley face on it, starting to break down for me a little bit here? Is it still valid for you to think about it this way? I think the answer is that the mask we see, that smiley face, is just a lot closer to the surface than I would have said pre this kind of RL boom; it takes so little effort to get to something nice. We just have to let the Shoggoth figure out how to interact. Yeah, we just throw it the food in little bits. That's right, and it just chews them up. And when it does that, the process of chewing those little bits up can actually be extrapolated to many other domains beyond math and code, which is cool. Right. Yes. Okay, excellent. Thank you, Wiz. And it's time to get into a little bit of fine-tuning, everybody; it's time to figure out how we're going to fine-tune our own reasoning model. We're going to use Unsloth today to fine-tune our own DeepSeek-R1-style model. Shout out to the guys at Unsloth again: they're all about easy fine-tuning and training LLMs faster; they're way down in the computation. We did an event covering more on Unsloth, and they joined us for it; check it out to learn more about what they're all about. Why Unsloth? Well, they're just really good at making stuff go brr with less. And we'll see that under the hood we still have LoRA and QLoRA; in fact, that was part of what they were able to do: take what other folks had done previously and make it work with QLoRA and LoRA. So we actually only need about seven gigs of VRAM on our GPU to train our own reasoning model, which is big hype. Now note, and I really like this note from the blog: this isn't fine-tuning DeepSeek's R1 distilled models, the bunch of distilled models they showed in the paper, and it's not using distilled data from R1 for tuning, although you can do all of that with Unsloth. This is literally taking any model off the shelf, like we're taking Llama off the shelf today, and turning it into a fully fledged reasoning model. So with that, I want to give Wiz a little bit more time here, so maybe we'll save the longer discussion on distillation. If you're confused about what distillation is and what it means, a lot of times the way we can easily think about it is: we generate a lot of synthetic data from a big system, a teacher system, and then we use that synthetic data to train a smaller student system. That's one way to think about distillation today, with this synthetic data generation fine-tuning.
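A rough sketch of that teacher-to-student flow is below; everything in it, the helper names, the model name, and the client, is hypothetical, and it's only meant to show the shape of synthetic-data fine-tuning, which is not what we do in the notebook.

```python
# Hypothetical sketch of reasoning distillation via synthetic data generation (SDG).
def generate_trace(teacher_client, question: str) -> str:
    # Ask a large "teacher" reasoning model for a full reasoning trace plus answer.
    response = teacher_client.chat.completions.create(
        model="large-reasoning-teacher",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def build_sft_dataset(teacher_client, questions: list[str]) -> list[dict]:
    # Pair each question with the teacher's trace; a smaller "student" model is
    # then fine-tuned on these pairs with plain supervised fine-tuning.
    return [
        {"prompt": q, "completion": generate_trace(teacher_client, q)}
        for q in questions
    ]
```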
I want to again note that this is a little bit different from what we're going to see here. We're going to take our standard model and then just do some straight-up fine-tuning with reasoning data, and that's the piece we're going to focus on, using GRPO. We'll cover more on distillation, and more on other ways to think about using and leveraging reasoning in our applications, in future LRM events. So as we head over to Wiz, I want to bring your attention back to one of the key things you'll need to know to use these models today, and that is the idea of the thinking token. This is a game changer and something very worth paying attention to. Wiz, walk us through what we need to know to do the fine-tuning that actually induces reasoning on an off-the-shelf model today with these sweet thinking tokens; over to you. Oh yeah, okay. So the notebook: this notebook is basically just extended from the Unsloth notebook that takes the same approach; we're going to check out some of the responses from the Unsloth notebook at the end. The idea is that, yes, we can do this in Colab on a T4 instance, as well as on an A100 instance, and we can change hyperparameters; basically this is all extraordinarily straightforward to do thanks to a number of innovations from people we love, like Unsloth and TRL. Okay, so first things first, we have Unsloth GRPO training. This is not exactly, exactly what happened in the paper, but it's very close. The actual training volume is much different, the architecture is technically different, et cetera, but this shows you how you can use this method to train even a Llama model, in this case 3.1 8B, and in fact how we can do it with very little GPU power. Okay, so what is the GRPO training process with RL? Basically, we have a few phases. Group sampling: our policy is going to generate a batch of outputs. This is a method we can think of as doing some exploration: instead of producing just one output, we're going to produce N outputs, and those N outputs are all going to be slightly different, and that slight difference is what allows us to explore the solution space as we move forward. Then we have, of course, our reward scoring, which is going to be based on some reward functions we have, and we'll talk about those once we get to them in the notebook. And then we have our group-based advantage. Group-based advantage is about thinking: how well does this group perform, and how well do the constituent members of the group perform relative to that? Since we're generating N examples, let's just go with three: we're going to have some average reward, and then some number of responses whose individual scores will be above or below (or perhaps precisely at, but more likely above or below) that average. The ones above that average are considered advantaged, and the ones below are considered disadvantaged. So the idea is we're going to say, hey, these are the ones that were better than average, and we're going to build up a group of responses that are better than average. And then we're going to do some policy updates.
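Here's a tiny numeric sketch of that group-based advantage step, with toy numbers in plain Python: sample a group of completions, score each one, and standardize against the group's own mean and spread; positive values are the "advantaged" completions the update moves toward.

```python
from statistics import mean, pstdev

def group_advantages(rewards: list[float]) -> list[float]:
    """Standardize each sampled completion's reward against its own group."""
    mu, sigma = mean(rewards), pstdev(rewards) or 1.0  # guard against a zero-spread group
    return [(r - mu) / sigma for r in rewards]

# Say the policy sampled 4 completions for one prompt and the reward functions scored them:
rewards = [2.0, 0.5, 3.0, 0.5]
print(group_advantages(rewards))  # positive => better than the group average, negative => worse
```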
"Policy" here is just an RL-specific term, but what we mean is the model, the base model, the thing we're training, the thing we're trying to get to go brr. So the policy update is basically going to say: hey, these responses that were better than average, let's be more like them, and let's be less like the responses that were worse than average. And then of course we do it again, and again, and again, many times in a row, until we get some score we're happy with, some performance we believe is desirable. All right, so we have this group-based approach; it avoids needing that value-head PPO situation. We're basically saying: here are examples we love, we should be more like those; here are examples we don't love, we should not be like those. Okay, there you go. So we have some imports; we have a specific version of the TRL nightly, which will be updated once they've locked in how they're doing their sweet, sweet GRPO. Unsloth has also provided a fast patch for the GRPO process. Once again, I spoke a little bit about it: we're generating a bunch of responses to the same prompt, and because of that, what we actually care about is making sure those batches of responses are generated quickly, and that's what this patch does. Then we can load our model with regular old LoRA. So again, we're doing this with LoRA, which is fun and exciting and interesting. The idea with LoRA is pretty straightforward: we have an adapter that we're going to train, and the adapter is what learns during this process. This is part of what's necessary to make it possible to shove into a tiny notebook, of course; this is how you can use the T4 if you'd like to. But the idea is the same as every other LoRA fine-tune. Let's go, you love it. Okay, we're then going to use Unsloth's FastLanguageModel, which is basically just LoRA but fast. Okay, good job. Then we have to prepare our data. Now, this is the part that's a little bit spooky, because we're still providing questions and answers; we still have an input and a target; that's what our dataset looks like. Well, how is this not SFT? We'll get to that. But the idea is that it's a very standard-format dataset; we don't have some magical dataset. Now, if we're talking about supervised fine-tuning distillation methods, using synthetic data generation to produce reasoning traces and then using those to train another model, sure, that's fine, but the dataset here is pretty standard, pretty stock. You'll also notice that we have some XML chain-of-thought formatting, as well as our system prompt, which is going to contain a quote-unquote "reasoning" format, and then we're going to have basically just an extraction process. Our dataset just looks like this: we have some question, and then we have some answer; we have some prompt that's been formatted, and then we have some content to format it with. That's basically it.
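Condensed from the Unsloth-style setup being walked through here, the model loading and prompt/format pieces look roughly like this; the hyperparameters, the exact model repo name, and the system prompt wording are illustrative, so check the current Unsloth notebook for the precise values.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # the off-the-shelf model we start from
    max_seq_length=1024,
    load_in_4bit=True,          # QLoRA-style 4-bit loading keeps VRAM small
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                       # LoRA rank; only the adapter learns during GRPO
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The quote-unquote "reasoning" format lives entirely in the system prompt plus XML tags:
SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""

def extract_answer(text: str) -> str:
    # Pull whatever sits between the <answer> tags so reward functions can check it.
    return text.split("<answer>")[-1].split("</answer>")[0].strip()
```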
The one thing I want to make sure stands out to people is that there's absolutely no indication that the model should reason here. The prompt isn't saying "reason about it," and the specific answer doesn't contain reasoning that we should emulate. Now, this is important to note, because we do have that process in the actual R1 model, so not R1-Zero: we actually do this cold start that Greg talked about, where we nudge the model a little bit at first to get it kind of a little bit good at this, and then we let it play around. In this case, we're just going straight to the playing around, which is fun. And then we have bunches of reward functions. So there's this box, where we're basically saying, hey, we need to figure out who's doing the best on what; well, this box is powered by these reward functions. These reward functions are basically just ways we can say whether our process was doing well or not doing well: if it was doing well, we're going to give it cookies; if it was not doing well, we're not going to give it cookies. Because we have a number of these reward functions, that's going to get us the average for each generated response, and then a group average as well. And that group average is what gets us the A, the advantage: it tells us whether a response was better or worse than the average response in the pile. We have a number of these reward functions. There's the correctness reward function, and this is from work done by the Twitter user willccbb; great stuff. But the idea is: we have correctness, was it correct? We have some amount of, was it an integer? Okay, that's good. Did the output fit the format we want exactly? And then did it fit it loosely? Do we have the correct number of XML tags? And then finally, this is just implementing the count-XML reward pattern. So the idea is that, in this case, we've got five different ways we can reward each generation based on what we're saying is loosely correct. Now that we have that, we're ready to start training. Our GRPO config looks normal; you'll notice there aren't a lot of specific hyperparameters we care about here; this is all very stock hyperparameter instantiation, you love it. There you go. If you want to see the pretty graph, you can change this to report to Weights & Biases, and that will produce the graph where the line goes up and to the right as you train; otherwise, we're not worried about it. And then we can train, and you'll notice in training that our reward (excuse the massive scrolling) starts off kind of whatever, and then over time we slowly start to see higher and higher reward. Now, you would typically want to train this for 200-plus iterations, which is going to take some number of hours in Colab, but the idea is that even with this limited subset of training, we can see that the reward slowly creeps up a little bit as we start to learn, which makes sense.
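The reward functions and trainer wiring look roughly like the sketch below, written in the spirit of the willccbb / Unsloth notebook; `extract_answer`, `model`, `tokenizer`, and `dataset` refer to the earlier setup sketch, and since TRL's GRPO API was still in flux at the time, argument names may differ in current releases.

```python
import re
from trl import GRPOConfig, GRPOTrainer

def correctness_reward(prompts, completions, answer, **kwargs) -> list[float]:
    # Big reward only when the extracted answer matches the ground truth from the dataset.
    extracted = [extract_answer(c[0]["content"]) for c in completions]
    return [2.0 if e == a else 0.0 for e, a in zip(extracted, answer)]

def format_reward(completions, **kwargs) -> list[float]:
    # Small reward for wrapping output in the <reasoning>/<answer> XML structure.
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return [0.5 if re.search(pattern, c[0]["content"], re.DOTALL) else 0.0
            for c in completions]

training_args = GRPOConfig(
    output_dir="outputs",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    num_generations=8,          # the group of G completions sampled per prompt
    max_prompt_length=256,
    max_completion_length=512,
    max_steps=250,              # rewards typically only start climbing after a while
    logging_steps=1,
    report_to="none",           # switch to "wandb" for the up-and-to-the-right chart
)

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[correctness_reward, format_reward],
    args=training_args,
    train_dataset=dataset,      # the question/answer dataset prepared above
)
trainer.train()
```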
Now, you'll also see that we have these sample answers, and these sample answers are very good because they help us understand how the learning is going. So we have things like "steps" in some of the responses; we have responses with our XML tags, okay, very cool; we have these attempts. And the idea is that as it trains, it's going to get better and better at doing the things we want, because it's going to get scored higher and higher, until eventually we get the outputs we desire, which are these kinds of long reasoning chains of answers. I'm going to show you the outputs from the actual Unsloth notebook, just because their training went longer, so it looks a little bit better (a quick generation sketch is included just below as well). We have the base model, which doesn't have a reasoning token, still very verbose, but no reasoning token; and then of course we have the fine-tuned model, which has the reasoning tokens. This is the thing the model is learning to exploit, learning to use. The idea is that we can just hammer it into the model through this training process. Again, it doesn't take as long as, you know, 20 days, but it still takes a little while. And the best part, the most fun part, is that it does nothing for quite a long time, and then suddenly it starts getting better. That is a special moment for anyone who likes training models. With that, though, I'll pass you guys back to Greg, who can take us to some questions and close us up. Before I do, I've got to remind you that we're here every Wednesday; please don't forget to hit the little bell icon and subscribe and whatnot; we'll be doing more of these every Wednesday, so catch us. Okay, there you go. All right, Wiz, yeah, love to see that aha moment in training. Okay, let's wrap up, everybody, and head to some Q&A. What did we learn today? Well, we learned that DeepSeek has been in the game for a long time; DeepSeek-R1 really came out of this tradition of high-quality models coming out of an AI lab, very similar to OpenAI. We see a similar trajectory of math and code and reasoning really coming to fruition here in this R1 model. It uses RL, not RLHF, and it uses less memory, perhaps because of the constraint of fewer GPUs. Unsloth, we saw, is a really great resource for us; shout out to those guys. And then some trailheads for you (and for me) to explore more: The Illustrated DeepSeek-R1, shout out to Jay Alammar, and the unified paradigm of all of these policy optimization methods, whether it's RL fine-tuning, or GRPO, group relative policy optimization, or classic PPO; there really is something at the core of this that the DeepSeek guys are onto that I want to understand better. Okay, so to wrap: the magic is that we already know chain of thought is good. We already know that when we give the model steps, we can get better answers when reasoning is required.
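As referenced a moment ago, generating with the fine-tuned adapter to surface those reasoning tags looks roughly like this; it is only a sketch, reusing the `model`, `tokenizer`, and `SYSTEM_PROMPT` names from the earlier setup sketch, with placeholder prompt and generation settings.

```python
FastLanguageModel.for_inference(model)   # Unsloth's fast inference mode for the trained adapter

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Which is larger, 9.11 or 9.9?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # look for <reasoning>/<answer> tags
```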
We know that DeepSeek-R1 uses, for real, not just SFT, or rather not just SFT masquerading as RL like in RLHF; it really do be RL in DeepSeek. They taught it to play games with math and code, and now it's pretty dope at lots of stuff. It wasn't learning preferences; it was learning something true about the universe, and it was learning specifically to do stuff in between the thinking tokens. Remember, this ability is very close to the surface of LLMs; it's pretty easy for us to bring out. Let's go ahead and wrap on that note and head to Q&A. We got some good ones, Wiz; we'll stay around for a couple of extra minutes today. Jan Bors asks: is it true that, because of the training process of DeepSeek through RL, no external data is needed, the model effectively trains by self-play, and biases are mitigated? Is it true that biases are mitigated by training through self-play? Probably not. But yes, the other stuff is mostly true. To be clear, you don't need preference-style external data, but you still need a target: you need a question and an answer, say, something whose correctness you can reward or determine. Things like code and math are very easy to build verifiers for. Obviously you can get a little more complicated and build more complicated reward functions, but you don't need external preference data or external chain-of-thought data, though the cold start implies that such data is still useful, still helpful. That's right. Okay, next question: what are the DeepSeek implications for startups and larger enterprises? I'm going to assume this means building, shipping, and sharing production LLM applications, not the implications for, let's say, large GPU compute enterprises from the news. Like, how should we think about using these reasoning models like DeepSeek in our apps? We saw strawberry, but where is it really useful? Yeah, I mean, basically, reasoning models are good at reasoning; they're not good at other stuff, though. So I would say right now they fit in places where you want that enhanced reasoning: for planning, task decomposition, stuff like this. But for the actual workaday function calling, function calling isn't even supported in the DeepSeek model yet; there are some frameworks that get around that, et cetera, et cetera. But yeah, it's good at reasoning. It's like a PhD, right? When do you hire a PhD? You don't hire them for everything; in fact, they're out of the running for many jobs because they're too educated. So, do you really need something that needs all this deep thought, or do you just need something that's going to do the job, boom? And I think we really have to ask this question more and more, which is super interesting, especially as we get into multi-agent systems, where we have the PhD on the team, and we have the lower-level individual contributor, and the manager, and we've got all these different roles, and maybe we choose different models for them; that's sort of my intuition here. So, okay. A B asks: what's the most effective way to detect and monitor LLM hallucinations in reasoning models? They do be hallucinating.
Basically, the same stuff we used before, just applied to the final result. Inside the reasoning chain, they hallucinate like mad the whole time; that's almost part of their charm, almost part of why they work. So detecting the hallucinations inside the reasoning chain is probably a fruitless endeavor, but for the final result, yes: whatever faithfulness-style metric you most prefer, we want to make sure that final result is solid. Yeah, it's like, do you want your team member to come up with crazy ideas and you throw most of them away? We're getting a lot of questions about hallucinations; I think this is going to be a very interesting thing for enterprises to deal with: when to use these and how to constrain them well. What are potential use cases for DeepSeek 1.5B? Okay, so this is the small distilled DeepSeek model. Do you have any opinion on the DeepSeek models in general, beyond the R1 model? The distilled ones are much worse at reasoning than the big boy; it's not even close. But they're still better than models that don't have that reasoning. I hate to be a broken record, but if you were using a 1.5B model to do some kind of reasoning workload, first of all, that's brave, all the power to you; but second of all, DeepSeek 1.5B will be better. However, I would still say the place the unlock exists is in those higher parameter count models; that's where the juice is worth the squeeze. I would be hesitant to recommend using DeepSeek 1.5B for heavy reasoning tasks. For specialized tasks, though: a by-now-famous example is the Countdown game, where you have five numbers and you have to build an equation that gets you the sixth, target number. This very specific task can be handled very well by a very small model. So if you have instances like that, very focused, hyper-specific reasoning tasks, I would say that's where we're looking to see things like DeepSeek 1.5B-style models really shine. Yeah, and just to clarify for the audience here, that is an R1 model; it's a distilled version of Qwen 1.5B that we're talking about with DeepSeek 1.5B. Okay, a couple of quick rapid-fire questions, and then we'll wrap here, Wiz. One: is it feasible to use these in RAG chains? Yes, but "why?" is sort of my intuition there. Yes, you can use it in RAG; I just wouldn't. It's not very good at following instructions. There are a number of workflows where it does make sense to use this: agentic workflows, all kinds of fun bits and bobs you can do and create that leverage that reasoning style. But for things that are very direct: RAG is a knowledge task, right? It's not really a reasoning task, really and truly. For most cases, RAG is just: get the right context in the context window, and the model's going to crush it, right?
If you have a RAG system that really relies on reasoning for whatever reason, then perhaps you'll see slight improvements, especially as we get better instruction-following variants coming down the pipeline. But for now, I would say, of course it's feasible. Okay, all right. Final technical question: how is prompting reasoning models different from prompting chat models? Yes, so this is great: no clue. We're still learning; this is in flux; we are not yet there. DeepSeek provides some guidance on how to prompt their model; other people are learning strategies for OpenAI's closed-source reasoning models. Again, we're kind of having to rethink how we approach prompting these models. We certainly don't always need "think step by step," though it's still useful, it turns out. We don't always need to be focused on the same prompt styles that have worked before; we've got to do some of our own exploration to figure that out. All right. Yeah, we are; we're all trying to learn, and I don't know the answer to that question either. Go play with it; play with it all the time. And this gets to the last question: is the excitement around DeepSeek warranted, and, as another aspect of this, do its advancements justify the massive sell-off in the market? I mean, I think yes, and we talked about this before: reasoning is kind of having its Alpaca moment here. We're seeing this distribution of reasoning capabilities throughout the entire community, now with Unsloth, and that's equally as exciting to me as the DeepSeek thing, although I don't have any skin in the game in terms of specific stocks. So is all of the excitement around DeepSeek warranted? Maybe not all of it, in my opinion, but much of it. It seems like maybe the technical community has aligned with the public a little more on this than I would have expected, but I think it's warranted for us as builders. What do you think? I have all kinds of thoughts about this, but I'm not a market expert, I'm not a financial expert; my job isn't markets and stuff. There are lots of great readings you can do on the impact that DeepSeek did or didn't have on the market, which you're welcome to read and synthesize. Reasoning is dope, though, and more is coming up soon, right? Okay. The model technically is something to be excited about, and the paradigms of making RL easier and more aligned with what we're trying to do with models are hype, no matter what happens with all the other stuff. Yeah, that's right, that's right. Awesome, Wiz. Well, thank you so much for your expertise today and for teaching us how to do that fine-tuning on reasoning. We'll see you next week for Coconut. Make sure you guys join us next week as we continue our reasoning model series; we're going to be talking about continuous chains of thought with Coconut. We're also going to be launching the next cohort of the AI Engineering Bootcamp within the next six weeks. It's a rapidly evolving space; even in our current cohort, we've had to add vision language models and some reasoning. So it's a very, very exciting time to get in on the game; there's never been a better time. If you're interested in accelerating your career, check it out.
Otherwise, stay tuned, because following next week we also have a very special event on Cursor that will be exactly what any of you who are just starting to get into the game need to know to get up and running, to build, ship, and share your very first AI engineering application. So that's going to be a real fun one. Join us for the next couple of weeks on YouTube Live, and join us in the Discord community to build, ship, and share with all of the legends there. We're meeting weekly on Monday mornings; you can meet directly with me and many others in the Discord to talk about what you'll be building, shipping, and sharing this week. Thanks to everybody who joined us on YouTube, and thanks to everybody who joined us on LinkedIn. Until next time, keep building, shipping, and sharing, everybody, and we'll certainly do the same. Have a great week, and we'll see you soon. Bye, guys. | Deepseek-R1 & Training Your Own Reasoning Model | 4,198 | AI Makerspace | 20250213 | DeepSeek is dominating global app stores, but what’s behind its latest breakthrough? Join us as we dive into DeepSeek-R1, the first Large Reasoning Model (LRM)—how it was trained, how it compares to OpenAI’s o1/o3 and Gemini Flash Thinking, and what it means for the future of AI reasoning.
We’ll break down the multi-stage RL training, distillation process, and key takeaways from the DeepSeek-R1 paper. Don’t miss this deep dive into the next wave of reasoning!
Join us every Wednesday at 1pm EST for our live events. SUBSCRIBE NOW to get notified!
Speakers:
Dr. Greg, Co-Founder & CEO AI Makerspace
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO AI Makerspace
https://www.linkedin.com/in/csalexiuk/
Apply for The AI Engineering Bootcamp on Maven today!
https://bit.ly/AIEbootcamp
LLM Foundations - Email-based course
https://aimakerspace.io/llm-foundations/
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/z96cKbg3epXXqwtG6
#deepseek
00:00:00 Exploring the Legitimacy of Deep Seek R1 Model
00:04:30 Understanding Deep Seek R1 in Technical Context
00:09:09 Innovations in AI: Understanding Deep Seek R1
00:13:39 Evaluating Off-the-Shelf Models with Chain of Thought Prompts
00:17:45 Exploring Chain of Thought in AI Models
00:22:10 Process Supervision vs. Outcome Supervision in AI Training
00:27:00 Understanding Reinforcement Learning Methods
00:31:22 Training AI to Follow Human Instructions
00:35:41 Unrestricted Model Exploration and High-Impact Training
00:40:18 Exploring DeepSeek R1 and UNS Sloth Innovations
00:44:59 Understanding GPU Power in RL Training Process
00:48:59 Preparing Data for Fast Language Models
00:53:03 Understanding Stock Hyperparameter Instantiation
00:57:18 Understanding Deep Seek's Role in Reasoning Models
01:01:20 Detecting and Monitoring Language Model Hallucinations in Reasoning Models
01:05:31 Differences Between Prompting Reasoning Models and Chat Models | 2025-02-15T16:59:34.002027 |