AI has been around for a while, simmering in the background on our devices and in wider society. But generative AI has become a hot topic of conversation following ChatGPT’s launch.
Naturally, you may be asking, how can I use generative AI in my business?
But a word of caution. We believe if you want to build something the right way, every decision and every tool you use needs to be carefully considered.
In our first Built Right live webinar, we welcomed Jason Schlachter, Founder of AI Empowerment Group and Host of the We Wonder podcast, to share his methods for identifying and assessing generative AI use cases.
Keep reading for some takeaway points from the episode or tune in below.
What is generative AI?
Generative AI is artificial intelligence that generates content such as documents, words, code, images, videos, and more. It’s the type of AI that everyone’s talking about right now.
On the surface, it’s incredible technology, but Jason is quick to point out that AI is a tool, not a solution in itself. Instead of trying to make AI fit your organization, you need to see if you can find genuine use cases for it.
Questions to ask yourself before using generative AI
Jason suggests asking yourself a couple of questions to help frame your perspective. One is, if you had an unlimited number of interns, how would you deploy them to maximize business value?
This will help you zero in on which areas of the business require the most help for low-skill tasks – which are prime candidates for automation.
Another question Jason suggests is asking what you would do in the same scenario with an unlimited number of staff or an unlimited number of experts. What would you have them do to help?
Jason says this last one takes things up a level because one of the things generative AI can do is empower people to do things that they’re not experts in. With this exercise, you can start to uncover which areas of the business need the most help and what type of help they need.
How to assess use cases
You may well come up with several different use cases, all of which could benefit from generative AI. The next step, in that case, is to figure out a way to prioritize and assess them.
Jason shared a real use case in our discussion about his upcoming trip to Japan. He’s visiting Japan with his family and wants to find activities that are off the beaten track. It’s a complicated vacation to plan when it consists of booking hotels, navigating public transport, buying tickets, working out travel times, and everything in between.
He could go with a travel agent but prefers to be in control of the planning. Expedia and TripAdvisor are great, but you still have to break down an itinerary and research everything yourself.
Instead, Jason could ask generative AI tools such as ChatGPT to build itineraries, plan trips, break down costs, and explain which options are best and why. It would be like having your “own executive team working on this.”
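As a sketch of how a request like that might be framed, here’s one way to assemble such a planning prompt programmatically before sending it to a chat model like ChatGPT. The helper name, destinations, and constraint values are all invented for illustration:

```python
# Illustrative sketch only: assembling a structured trip-planning prompt
# like the one described above. The city, constraints, and helper name
# are made up; the resulting string would be sent to a chat model.

def build_itinerary_prompt(city: str, days: int, constraints: list[str]) -> str:
    """Combine trip details and constraints into one planning prompt."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Plan a {days}-day family itinerary for {city}.\n"
        f"Constraints:\n{constraint_lines}\n"
        "For each day, list activities, travel times between them, "
        "estimated costs, and why each option was chosen."
    )

prompt = build_itinerary_prompt(
    "Tokyo",
    3,
    [
        "prefer off-the-beaten-track activities",
        "use public transport only",
        "keep daily cost under $300",
    ],
)
print(prompt)
```

The value here is less the code than the habit: spelling out constraints and asking the model to justify its choices makes the output easier to verify.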
The downside is that Jason would have to put a lot of trust in the AI that everything was 100% accurate. The last thing he wants is to be stranded with kids in the middle of Japan because ChatGPT got some travel times wrong.
However, if it worked, and you could query it and change things, it could potentially up-end the travel market. It’s something that Jason believes will be dominated by generative AI in the future.
So, once you build an idea of different use cases and prioritize which ones are most needed or important, you can move on to the next step – figuring out if they are viable.
How to determine viability
1. Assess business value
You need to be able to assess the business value of implementing generative AI. It may be that you want to rapidly prototype something or build a customer chatbot that not only shares technical information but can also adapt to questions from customers.
Assess how valuable the input from the AI will be – will it reduce costs or speed up processes? Will it improve and speed up customer service?
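One rough way to put numbers on that assessment, shown here purely as an illustration with made-up figures, is a back-of-the-envelope payback calculation:

```python
# Back-of-the-envelope value estimate for a candidate AI use case.
# Every figure below is an invented placeholder; plug in your own numbers.

def monthly_savings(tasks_per_month: int, minutes_saved_per_task: float,
                    hourly_cost: float) -> float:
    """Labor cost saved per month if AI shortens each task."""
    hours_saved = tasks_per_month * minutes_saved_per_task / 60
    return hours_saved * hourly_cost

def payback_months(implementation_cost: float, savings_per_month: float) -> float:
    """Months until the implementation pays for itself."""
    return implementation_cost / savings_per_month

# Example: 2,000 support tickets a month, 6 minutes saved each, $40/hour staff
savings = monthly_savings(2000, 6, 40.0)
print(f"Monthly savings: ${savings:,.0f}")                      # $8,000
print(f"Payback: {payback_months(60000, savings):.1f} months")  # 7.5 months
```

If the payback period is longer than your planning horizon, or the savings depend on accuracy the model can’t deliver, the use case probably isn’t viable yet.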
2. Fluency vs. accuracy
Another way of looking at viability is to determine whether fluency or accuracy matters more in your use case. Fluency is the ability to generate content that reads well; accuracy is about generating information that’s factually correct.
If you want AI to write a short story, it’ll probably turn out something that reads well and can help creators with structuring their content. However, if you’re looking for generative AI to contribute to a new chatbot that gives out medical advice, you need an AI model that prioritizes accuracy.
Getting AI that can produce accurate results every time is more difficult, but one way around it is to train models with your own data. That way, you can control everything the model learns and produces as an answer.
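As a rough sketch of where “training on your own data” starts, here’s how question-and-answer pairs might be packaged as JSONL training examples. The chat-style `messages` schema is an assumption modeled on common fine-tuning APIs; check your provider’s documentation for the exact format it requires:

```python
import json

# Hypothetical example: packaging your own Q&A data as JSONL training
# records. The "messages" schema mirrors common chat fine-tuning formats,
# but the exact required fields depend on your provider.

qa_pairs = [
    ("What is our refund window?", "Refunds are accepted within 30 days."),
    ("Do you ship internationally?", "Yes, we ship to over 40 countries."),
]

with open("train.jsonl", "w") as f:
    for question, answer in qa_pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        f.write(json.dumps(record) + "\n")
```

Curating pairs like these is where the control comes from: the model only sees answers you have vetted.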
3. Low vs. high-risk
An important thing to always consider when using AI is the risk potential. Some use cases may be fairly low risk, for example, AI helping you write a blog post. Others can be high-risk – such as using an AI travel plan that leaves you stranded in a foreign country.
There are ways to reduce risk, however. The example Jason uses is if T-Mobile used AI in a chatbot, you could reduce the risk of it giving a false answer by only training it to give answers it can back up with a document. This also means weaving your own data into the model and making it truly unique to your organization.
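A toy sketch of that grounding idea: the bot answers only when it can point to a supporting document, and declines otherwise. The documents and the keyword-overlap retrieval below are illustrative stand-ins; a production system would use embeddings and an actual language model:

```python
# Toy document-grounded answering: only respond when a supporting document
# matches the question well enough; otherwise decline. The documents and
# the keyword-overlap scoring are stand-ins for real retrieval.

DOCS = {
    "billing-faq": "You can change your plan at any time from account settings.",
    "coverage-map": "5G coverage is available in most metro areas.",
}

def grounded_answer(question: str, min_overlap: int = 2):
    """Return (doc_id, answer) backed by a document, or (None, refusal)."""
    q_words = set(question.lower().split())
    best_id, best_score = None, 0
    for doc_id, text in DOCS.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_id is None or best_score < min_overlap:
        return None, "Sorry, I can't answer that from our documentation."
    return best_id, DOCS[best_id]

doc_id, answer = grounded_answer("How do I change my plan from account settings?")
print(doc_id, "->", answer)
```

The refusal path is the important part: a grounded bot that says “I don’t know” is lower-risk than a fluent one that guesses.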
Tuning your own AI models can be difficult, but it can help to improve accuracy and performance. It also doesn’t need to be a huge model with billions of pieces of data. It could be so small that it can run locally on a small device.
4. Defensible vs. non-defensible
Jason says it’s important that the model you’re using and the use case you’re building are defensible from a business perspective.
So, you would need to take into account profit, turnover, the full cost of implementing AI and changing processes, training time, support, model maintenance, and so on. It may be that AI isn’t defensible for your particular use case now, but it might be in the future, say, if your major competitors go down that route and you’re forced to adapt. It’s worth revisiting later to see if things have changed.
After this big-picture view, you can decide whether it’s truly defensible from a business perspective – and ultimately worth it.
Deciding to implement generative AI in your business isn’t a decision you want to make lightly. There are so many cost and value factors, accuracy issues, risks, and then the impact on the business’s bottom line to think about.
For more insights into identifying viable use cases for generative AI, tune in to the full episode with Jason.
[00:03:02] Matt Paige: Jason, let’s kick it off. So welcome everyone to our first edition of Built Right Live. If you’re not familiar with the Built Right podcast, we focus on helping you build the right digital solution the right way. Check it out on all the major podcast platforms. We drop a new episode every other week. We got a good one to drop today too with Erica Chestnut, Head of Quality at Realtor.com.
[00:03:24] Matt Paige: So go check that one out. And like we said, please drop in comments as we go along. We’ll be checking the comments and that may tailor our conversation a bit as we go. But today we got a really good one for y’all. We got special guest, Jason Schlachter, founder of AI Empowerment Group and host of the We Wonder Podcast.
[00:03:44] Matt Paige: So he’s got some podcast chops as well. But Jason, give us an introduction so folks have a little context of your background, your history, and what AI Empowerment Group exists to do.
[00:03:57] Jason Schlachter: Awesome. Thank you, Matt, for the introduction, and I’m glad to be here. This is exciting. I see a lot of comments coming through chat, so it’s great to see the participation already.
[00:04:07] Jason Schlachter: Yeah, so my background is in AI primarily. I spent about the last 22, 23 years in the AI industry. I went to school for a master’s in AI in 2001, back when there were basically no jobs in AI. And that led me down a path where I started off as a researcher doing a lot of work for DARPA (the Defense Advanced Research Projects Agency), Army Research Lab, Naval Research Lab, NASA,
[00:04:34] Jason Schlachter: intelligence organizations, all kinds of stuff that you could imagine would use AI before mainstream businesses were going crazy for it. And at some point I left that world, moved into a strategy role, and led AI strategy at Stanley Black & Decker for their digital accelerator. And then from there went over to Elevance Health, which owns Anthem Blue Cross/Blue Shield.
[00:04:56] Jason Schlachter: And there I focused on leading the R&D portfolio and strategy, mostly around AI, and then served as a product lead for their clinical AI work. And so, since leaving Elevance, at AI Empowerment Group our focus is really on solving the people part of AI. That’s the way I like to sum it up, because what I’ve seen, and I think a lot of research supports this, is most efforts to deploy AI
[00:05:19] Jason Schlachter: do not return the business value that people expect. About 90% of AI initiatives fail to deliver on the business value that’s promised. I’ve seen many organizations where it’s a hundred percent. It’s almost never a technical reason. It’s almost always something at the organizational level.
[00:05:39] Jason Schlachter: So maybe there was a misunderstanding of what was expected for the project. There wasn’t a deep enough vetting of the use case. There were maybe misunderstandings by the sales and marketing team, so they weren’t able to sell it. The project was canceled at the last minute because of legal concerns, data concerns, contract concerns.
[00:06:00] Jason Schlachter: So AI Empowerment Group really addresses all those non-technical challenges by upskilling the workforce, getting them AI-ready so they can make the right decisions, by holding workshops to help figure out which use cases are worth pursuing, building out the strategies to support that, and much more.
[00:06:18] Jason Schlachter: But that’s, a highlight.
[00:06:19] Matt Paige: Nice. Yeah. Awesome, Jason. So everybody listening, I wasn’t lying when we said we had an AI expert. He’s been in this game for a while; the hype around generative AI is recent, but he’s been at it much longer than that. For those who don’t know HatchWorks, we’re your trusted digital acceleration partner, delivering unique solutions to achieve your desired outcomes faster, and really on a mission to leverage AI and automation
[00:06:45] Matt Paige: paired with the affordability and scale of nearshore to accelerate your outcomes. But Jason, I’m pumped about this conversation today. We’re giving people a sneak peek into our generative AI playbook, but hitting on one of the most foundational concepts, which is how you actually identify and then vet some of these use cases.
[00:07:05] Matt Paige: But let’s start at the foundation. In order to start defining use cases, let’s ground people in what generative AI is and what it isn’t to set the stage there.
[00:07:19] Jason Schlachter: Awesome. Thank you, Matt. Yeah, so let’s talk a little bit about generative AI. Generative AI is a subset of the field of AI.
[00:07:25] Jason Schlachter: And the field of AI has been around for a long time, like thousands of years. And I know this sounds crazy when I say it like that, but I’m gonna back it up for a minute. So even going back to the biblical texts of the Old Testament, there are parts that talk about AI. They talk about people creating autonomous machines and systems that can do tasks, that can operate autonomously to take away the menial work that people don’t want to do.
[00:07:53] Jason Schlachter: And they talk about these systems as created things that just don’t have souls, don’t have consciousness. And I think philosophically they were already addressing a lot of the use cases that we could even think about today. So thinking about the use cases for AI, for automation, for robotics, it’s been happening for thousands of years, which I felt was shocking when I figured that out.
[00:08:17] Jason Schlachter: And so moving forward to today: the modern field of AI emerged in the 1950s. In the sixties and seventies it was research; in the eighties and nineties it was commercialized. It was already a multi-billion dollar industry in the eighties and nineties. I think a lot of people don’t fully realize that.
[00:08:34] Jason Schlachter: And then of course, in the last 10 years or so, it’s really gone completely exponential. There’s been big data, deep learning, generative AI, adversarial networks. It’s just the full breadth of everything. And I think most recently, we like to see things through our human lens.
[00:08:50] Jason Schlachter: We anthropomorphize everything. So for the first time, like in a long time, it’s not some system in some enterprise that’s making some pricing decision. It’s this thing you can talk to and it talks back to you. And that’s scary and exciting and interesting. And I think that’s what’s driving a lot of the hype.
[00:09:08] Jason Schlachter: And it’s generating things. So for a long time, we’ve often said that creativity, and creativity is hard to define, but creating things, is the human quality that machines will never have. And now they’re doing it. And so there’s questions like, what is art? What does it mean to compose something?
[00:09:24] Jason Schlachter: Who can win an Emmy? Who can win a Grammy? And this is really what’s causing the hype. So, generative AI is artificial intelligence that generates content. And the kind of content it can generate in today’s world is text: documents, words, phrases, code, because code is text.
[00:09:46] Jason Schlachter: So it’s just a certain type of text. It can generate images, videos, 3D content, like for games. It can generate music. You guys might have seen there was a Drake song that came out that was supposedly pretty popular. Actually sounded good.
[00:10:02] Matt Paige: I’ve not seen that yet. Was it produced with generative AI?
[00:10:08] Jason Schlachter: Drake didn’t produce it. Somebody else produced it, but it was Drake singing it. Drake found out about it after it started becoming popular, and it was like his voice and his style to his music, and somebody just basically trained a model on his voice, his style, and dumped it out there.
[00:10:26] Jason Schlachter: And there’s just these questions of what does it all mean? It can generate speech and audio in that same use case. The other side of it is the hard sciences: generative AI can generate biochemical sequences, like protein molecules. So it’s very open in terms of what’s possible. It is probability based.
[00:10:48] Jason Schlachter: It is based on deep learning architectures, which means that it’s probabilistic. And I won’t go into the technical side, into exactly how it works, but it’s not thinking and reasoning in a symbolic, causal way. It doesn’t understand that if it rains today, the ground will be wet in a very expressive way,
[00:11:11] Jason Schlachter: the way we understand that. It just has some numerical representations that are able to connect those concepts together. And so it might respond intelligently, but it doesn’t actually think and understand in the way that we typically would expect. It also will reflect any kind of bias or flaws that are in the training data.
[00:11:29] Jason Schlachter: So say you had healthcare training data, and in that healthcare training data, certain members of the population are not getting the care they need for societal reasons, not clinical reasons. If you then trained an AI system to make decisions about what care they should get, and when they should get that care for the best outcome, that bias would pull forward into the model.
[00:11:52] Jason Schlachter: There are ways to mitigate the bias, but generally this is a challenge. If you have bias in the data, you have to account for it the best you can, and the bias will show up in the end. And so with generative models, it’s the same. If we write with prejudice or bias or hate speech, it shows up in the generative models as well.
[00:12:12] Jason Schlachter: It also pulls us into the post-content-scarcity world. Up until this moment, we basically lived in a world where there was a limited amount of content. At some point it was hundreds of books in the world, then millions of books in the world. Now there’s no fixed number of books in the world.
[00:12:30] Jason Schlachter: There’s an infinite number of books in the world that can be generated on demand. And so that really changes the whole world in which we operate.
[00:12:40] Matt Paige: Yeah. No, that, that’s awesome context setting there. But what was really cool was the, history dating back to biblical times. I was not aware of that.
[00:12:49] Matt Paige: That’s super interesting. But like the Drake example you mentioned, you can think of whole business models changing here. That’s a big piece of this. You also think of the accuracy of the data, and we’re gonna get into that in a minute when we talk about vetting the viability of these use cases.
[00:13:08] Matt Paige: But I think one big piece of it is, with a hype cycle, and you saw this in the dotcom boom, there’s a lot of people with a hammer in search of a nail, right? The hammer being generative AI. Let me go find a nail, let me go find something I can do with this. And back to basics, it’s important to flip that and focus on the outcomes and relative use cases first. But maybe take us through how to think through some of the higher-level business outcomes, to start to bucketize where you can focus some of these generative AI use cases.
[00:13:42] Jason Schlachter: Yeah, absolutely. And maybe I can start, Matt, with a bit of the why we’re going through this and what it means to find these use cases, and I’ll segue into some of those. Okay. In this talk, finding the use cases, validating the use cases, I wanna talk about a couple preamble-type things.
[00:14:04] Jason Schlachter: So first, if you’re out there with customers, if you’re out there trying to solve problems, trying to figure out how to make your product better, trying to reduce your claims processing costs, you are the expert and you are the person that knows the opportunities and the needs that you could address.
[00:14:23] Jason Schlachter: And so in that sense, you’re the perfect person to find the use cases for AI and generative AI, and it really is on your shoulders to elevate those opportunities and bring in the rest of the stakeholders. And so I think to do that, it’s really critical that you understand at a high level, at a non-technical level: what is AI, what can it do?
[00:14:44] Jason Schlachter: What’s hype, what’s not hype? What are the opportunities and risks in pursuing this approach? How would I frame out and scope and describe this use case in a way that I could bring in the other partners to be a part of it? And you have the ability to do that with a fairly basic understanding of how to think about these things.
[00:15:05] Jason Schlachter: And that’s our goal here today, to get that basic understanding. And then if you think about finding the use cases, making the plans: there’s a need to make a plan, there’s a need to find the use cases. We don’t plan to have a plan. We plan to get good at planning.
[00:15:22] Jason Schlachter: And the reason why is because your plan doesn’t survive first contact with the customer, or, because of where I spent most of my career, first contact with the enemy. I had to adapt as I shifted from the defense world to the consumer world. I had to change a lot of my phrases and sayings.
[00:15:42] Jason Schlachter: And this is one of them: first contact with the enemy became first contact with the customer market. And we live in this dynamic world. So in finding these use cases, it’s not that there’s gonna be the perfect use case. The goal here is to get good at finding use cases, to get fast at validating them,
[00:16:00] Jason Schlachter: and trying them and learning. Because the faster you can do that, the better you’ll keep up with this exponential curve that’s ahead of us. And then the last thing I wanna say is: we are here to talk about generative AI because it is exciting and there’s lots of things you can do, but for most businesses, most of the use cases for AI are not gonna be generative AI.
[00:16:22] Jason Schlachter: Most of the business value is gonna come from the stuff that is not taking up all the headlines right now in the media. It’s gonna be pricing your products dynamically or better. It’s gonna be automating some of your internal customer service or claims processing. It’s gonna be facial recognition on your product that makes something a little bit easier for your consumer, like logging in.
[00:16:48] Jason Schlachter: So even though we’re here talking about generative AI and it’s very exciting, I just wanna put that in perspective, because you don’t wanna be looking with this hammer for all the nails in your organization. This is just one tool, and it’s a very powerful tool, and that’s why we’re talking about it.
[00:17:04] Matt Paige: Yeah. And I like the way you framed it. It’s like building the muscle. That’s the essence: building the muscle of how you go through this process to get to the end outcome that you want to get to. So that’s a foundational piece of what we’re trying to do. Think of this as like a workout.
[00:17:20] Matt Paige: Y’all, this is the intro. We’re the trainer. This is the beginning of the workout. And I think there’s different areas you can find opportunity, right? There’s internal areas, there’s external areas. It can be revenue generating or cost saving, so there’s different focuses where you can start to think through where you wanna focus some of these efforts.
[00:17:40] Matt Paige: But any thoughts on that?
[00:17:42] Jason Schlachter: Yeah, exactly. There’s a great quote by Douglas Adams, which says, oh God, I’m forgetting the exact verbatim, that technology is a word for something that doesn’t work yet. And I think it’s a great phrase, because if we’re talking about AI, it means we’re not talking about a solution.
[00:18:05] Jason Schlachter: It’s a technology; it’s not a solution. And so we want to pivot to what solutions could be, right? So it could be optimizing your internal company operations, could be improving a product or service for a customer, could be optimizing your cyber defenses, could be improving your documentation.
[00:18:27] Jason Schlachter: So there’s all these different kind of use cases that are either optimizing your business or innovating your business, helping your customer in some specific way. And I think if you look at it at like the industry level we can dive deep into some more like industry level type stuff. There’s a lot of specific use cases at the industry level.
[00:18:45] Jason Schlachter: So on the financial side, these kinds of models can be used for customer segmentation; you could segment out customers by needs and interests. Targeted marketing campaigns. You can do risk assessment and fraud detection. In healthcare, you can do drug discovery, personalized medicine, medical imaging.
[00:19:05] Jason Schlachter: On the manufacturing side, there’s product design, there’s manufacturing planning and quality control. On the technology side, there’s more efficient coding, software development processes, cybersecurity, automating data science. I’m just running through these; you don’t need to remember all of them.
[00:19:22] Jason Schlachter: I’m just trying to give you like the shotgun view of oh my God, this is a lot because this is only a small bit of it.
[00:19:26] Matt Paige: There’s something you said just leading up to this that we chatted about: there’s this sense where people can stay at the surface level of what AI and generative AI can do, but where you get the gold is when you focus into a specific domain or discipline, where your area of expertise is.
[00:19:46] Matt Paige: That’s where you find something unique. So it is important to think about within your industry, within your business, within the problems that your customers have. That’s a key element to how you’re thinking about applying these things. And another thing, I heard someone talking the other day, when you’re thinking about what you wanna roll out in a use case and all that:
[00:20:07] Matt Paige: take the word AI out of it, and does it still have value? Does it pass that smell test? Like you referenced the Google and Apple events recently: Apple didn’t mention AI really at all, but it was foundationally in just about everything.
[00:20:22] Jason Schlachter: Yeah. That’s a really stark example of that.
[00:20:25] Jason Schlachter: Google talked about AI a lot. Apple didn’t talk about AI at all. And I think Google positions themselves to be a company that delivers AI as a tool, right? Like they’re selling AI as a solution. Apple doesn’t really try to sell you AI. Apple tries to sell you a good experience, a seamless experience.
[00:20:46] Jason Schlachter: So there’s not a strong need to talk about AI specifically. They might talk about like intelligent typing or smart notifications or something like that. And that makes a lot more sense. Matt, I think maybe if you want we could jump into some of these sort of questions that help. Yeah.
[00:21:03] Matt Paige: So, just to set this up: this is one of my favorite areas. So many folks, I think, get stuck early on thinking in an incremental nature versus a stepwise, transformational nature. So Jason, take us through these questions. It’s a great place to start if you’re talking with folks in your business, trying to facilitate an exercise around this. Take us through some of these questions and how to think through ’em.
[00:21:29] Jason Schlachter: Okay. Awesome. Yeah, so these questions are very simple. They don’t even say anything about AI specifically, but they’re gonna help you get to the core of the use cases where you could deploy generative AI. And in a bit, we’ll talk about how you validate and assess those opportunities.
[00:21:44] Jason Schlachter: All right, so this is a question that I heard from some buddies of mine at Prolego; it’s an AI consulting company. When we were talking about use cases: if I had an unlimited number of interns, how would I deploy them to maximize business value? So that’s a question to ask yourself.
[00:22:01] Jason Schlachter: You have an unlimited set of interns, you’re in charge of them all. Where do you put them? I think like some people might be like, I don’t really know. Other people might be like, oh my God, yes. Like they need to go do this one thing for me. Because that will save my life, right? They need to go and sit in our call center because that’s where our customers suffer the worst.
[00:22:23] Jason Schlachter: Or you need to go and review all these claims because we’re six months behind on processing claims. If you can do that, then you can find a friction point or an opportunity that would benefit your company or yourself. And there are some different variations of this question that I would ask too.
[00:22:44] Jason Schlachter: Matt and I were going over these earlier and just spinning up different versions that hit at different slices. So another one would be, in addition to unlimited interns, what if you had unlimited staff? You go from managing a team of however many you have now, 5, 10, 50 people, to unlimited people.
[00:23:04] Jason Schlachter: What would you have ’em do for you?
[00:23:07] Matt Paige: Yep. Another way you framed it too, I think you said, what if I had a small country working towards a problem I had, just to put it in context. But it starts to sound kind of ominous when you put it that way. There’s all kinds of dystopian stuff we could get into as well.
[00:23:25] Matt Paige: But it’s that reframing, though, because it’s not so much about the people element of the resources. And that’s the beauty of starting to trigger some of these questions: when you’re dealing with technology like AI, it takes some of those constraints out of the equation, or it flips the script a bit.
[00:23:44] Matt Paige: So that’s the idea behind some of these.
[00:23:46] Jason Schlachter: Yeah. And so I’m gonna continue this, Matt, with a few additional questions. We’ve gone from interns, so not super skilled, but maybe very eager and capable, to staff, who know what they’re doing. The next one I wanna ask is: what if you had unlimited experts? You could bring experts from all fields to your team to help you. What would you have them do?
[00:24:09] Jason Schlachter: That takes it up a level now, because one of the things that generative AI can do is empower people to do things that they’re not experts in, but that they can do with generative AI. So I’m not an expert painter, but I love art. I have a lot of ideas. I’ve seen art. If I can describe verbally my perfect vision for a painting, then I can use generative AI to create that painting.
[00:24:31] Jason Schlachter: And, it’s gonna look really good. It’s gonna look like a professional work of art if I do it right. So I’ve become like an expert in the sense that I’m now an artist. There’s probably a lot of philosophical arguments about did I create it really? And can I view myself as an artist?
[00:24:47] Jason Schlachter: But, practically speaking it will be difficult for people to differentiate between that AI created painting and someone creating a painting. So if you had experts, how would you use them? Okay, we’re gonna keep going. So
[00:25:02] Matt Paige: We got some good questions popping in the chat. Not that we have to hit ’em right now, but keep ’em coming, y’all.
[00:25:07] Matt Paige: We’ll try to weave some of these in in a minute. Yeah. Keep the questions coming.
[00:25:11] Jason Schlachter: Okay. Okay, here we go. All right, so this is one of my favorite ones. Up until now we’ve been focused a little bit on that internal optimization of my business, right? So how can you optimize your business internally?
[00:25:23] Jason Schlachter: Yeah, you could have used those interns to follow your customers around and give them an amazing experience, but it’s been a lot of internal focus. Now we’re gonna shift it to external. So if you could give every one of your customers a personalized team of as many people as needed, five people, 10 people, a hundred people, and their sole job is to give your customer an amazing experience, what would that team be doing for your customer?
[00:25:49] Jason Schlachter: I think that is one of the most like, Powerful things to think about.
[00:25:53] Matt Paige: That’s powerful. Yeah. And that is taking it from incremental to potentially business-model-changing, disruptive use cases. And that’s the idea of this exercise, right? It’s starting to get into more of that blue ocean, starting to just generate the ideas and get them out there with a reframing. And heck, throw some of them into ChatGPT, give it some context, and ask it; you can have it play a role in your facilitated exercise as well.
[00:26:23] Jason Schlachter: And so this is not to imply that generative AI currently can fill those needs. We're not meaning to imply that if you've created this imaginary team that you've given to your customer, and it's doing everything to make your customer have an amazing experience, that generative AI can meet those needs.
[00:26:43] Jason Schlachter: Most likely it can't. The point, though, is that you're starting to get to the core of thinking with a different framing: I could write unlimited articles on behalf of my customer. I could book everything they need for the entire week on their behalf. I could go clothes shopping for them.
[00:27:02] Jason Schlachter: There's a lot of things that you could do with an AI model that can generate things, and also summarize and explain things, and represent design and stuff like that. So that's the gist of this. And then there's one more question, the kind of fear-mongering question here.
[00:27:20] Jason Schlachter: And this is: if your competitors could do the same. If your competitors had unlimited staff, they probably would use that staff to make great customer experiences as well. But I'm gonna frame it in an adversarial way. If they use that staff to put you outta business, what are they gonna do?
[00:27:42] Jason Schlachter: So now you have to think about this, because most of your competitors won't do that, but the best of your competitors will be doing that. They'll be thinking through these potential use cases, and when the technology is ready, or when it makes sense from a value perspective to apply the technology in that way, they'll be ready and waiting to do that.
[00:28:02] Matt Paige: Yeah, and this is the one, when we were talking about these, it hits a different area of your brain when you frame it as, oh shit, the competitor's trying to put me out of business, what are they gonna do? It gets you to think about it from a different lens in a lot of ways.
[00:28:19] Matt Paige: So those are the framing questions, in essence. This is all about idea generation, reframing how you think about things. The last point you made was interesting too. Even if it's not perfect right now, you can still begin testing and playing around with this, because things are progressing at a rather alarming, crazy, whatever adjective you want to add, rate right now.
[00:28:42] Matt Paige: So what may not be possible today could be possible in three months, six months, a year, five years. So it's critical that you start thinking about this now and how it'll impact your business model, how you operate.
[00:28:57] Jason Schlachter: Right? Yeah. Because it's not the technology that fails to deliver value in almost all cases.
[00:29:01] Jason Schlachter: It's the system point of view; it's the organizational failure. Your organization and your team should be able to frame these opportunities in the right way and be data- and AI-driven in their thinking process so they can act fast.
[00:29:20] Jason Schlachter: Because when that new capability emerges, and we saw it when GPT-4 hit the market, there were some companies that, like, overnight had applications. Some of them were bogus and borderline fraud, but those have fallen away. And now we see Adobe deploying image-creation models inside of their platform so that you can completely generate a new background for your foreground.
[00:29:46] Jason Schlachter: Or you can erase an object and then ask it to generate a new object, and it will do that in the application. So those are starting to become more mainstream for sure.
[00:29:59] Matt Paige: Yeah. That's one we're playing around with at HatchWorks right now. I think Firefly is the name of the Adobe product, similar to a Midjourney or something like that, but it's within the Adobe ecosystem.
[00:30:10] Matt Paige: I think this is where we'll start to transition. Some of you have ideas, you have a list of ideas you've generated, but how do you begin to test and vet the viability of "we should do these over those"? That's one of the most important things: how do you start to prioritize some of these use cases? And there's a bit of a, call it a rubric, or
[00:30:35] Matt Paige: an analysis you take it through. So Jason, start to take us down this path of how you begin to weigh and prioritize some of these ideas.
[00:30:44] Jason Schlachter: Absolutely. And Matt, let's throw up our use case that we're gonna use to illustrate some of this. Yeah, let's do it. Go through it.
[00:30:52] Matt Paige: Okay. So yeah, you set it up, you've got the real story.
[00:30:56] Matt Paige: And I'd say too, we got a couple. There's one related to the stock market, there's one related to customer chat interactions. We may play around with a couple of those later, but yeah, let's hit the main one. Jason's taking a big trip in about a week or so. So Jason, set up the use case for us.
[00:31:14] Jason Schlachter: Okay. Awesome. Yeah, and Matt, let's make sure we get those questions in too. So the use case I'm most focused on right now is travel. I'm heading out to Japan in a bit with the family and trying to book our travels, and I want to be on the edge of the, like, touristy kind of stuff.
[00:31:34] Jason Schlachter: I don't want to be, like, deep in it, and so that means I'm looking for experiences that are just a little bit off the beaten track. And so: booking hotels, looking for national parks, trains, buses. It says that the hotel room sleeps four, but I only see two beds.
[00:31:53] Jason Schlachter: Are they charging us extra for kids? All this kind of stuff. And it's a huge amount of time to really dig into it if you wanna get it right. And I don't really want to hand it off to a travel agent, because I like the idea of being in the details. I like the idea of having the controls.
[00:32:08] Jason Schlachter: But with Expedia or Priceline or TripAdvisor, what I'm having to do is break down the larger itinerary in my own mind, research all these different places, most of which I'm not gonna go to and some of which I don't really understand, and then look for individual things.
[00:32:25] Jason Schlachter: Can I find a train from point A to point B, and what does that mean, and how much does it cost? Where do we put our luggage? Can I find a hotel in this city when I don't really know which district to stay in? All this kind of stuff. So if I had the ability to give myself a team of staff that were gonna work on my behalf, as a generative AI might, I would wanna say to the generative AI: I'd like to take a trip to Japan with the family.
[00:32:51] Jason Schlachter: We want to be outdoors hiking. We want to get our hands dirty doing archaeological digs. We want to take lots of photos. We wanna be at local cultural events. We want to be at the Gion Festival in Kyoto on these dates. And Super Mario World is super, super important to my kids and to me, so we wanna go to that as well.
[00:33:12] Jason Schlachter: Give me some itineraries, figure out all the connection points, show me cost structures, and explain to me which ones are better than the others and why. And from that, it's as if I had my own executive team working on this for me. Then I could look at it and say, this looks cool, but I don't want to go there.
[00:33:33] Jason Schlachter: Or I could even query it: hey, why are you putting me in this city? I didn't ask for that. And it could even respond with, we found that people like you who have gone to Japan and visited this city really enjoyed it for these reasons, and it fits comfortably with your schedule here and there.
[00:33:49] Jason Schlachter: It would just be a very easy conversation. And from, let's say, a TripAdvisor perspective, it's all AI-driven. There are no customer service agents, it scales, there's no wait. So that's, to me, a use case that very clearly is gonna become dominated by generative AI.
[00:34:09] Jason Schlachter: Yeah, there's one catch, and we'll get into this in a moment with the takeaways: it needs to be right. I do not wanna be stranded with two kids at a bus station, with a hotel that only sleeps two people even though it booked it as four. And that's where generative AI is not so great.
[00:34:29] Jason Schlachter: And so we'll talk about that.
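The hand-off Jason describes, stating your constraints once and letting the system do the legwork, could be sketched as a simple prompt builder. This is purely illustrative; the function name, fields, and wording are assumptions, not anything shown in the episode:

```python
# Hypothetical sketch: turning trip constraints into one structured request
# for a text-generation model. All names and field choices are illustrative.

def build_itinerary_prompt(destination, interests, fixed_events, party):
    """Compose a planning request a generative model could answer."""
    lines = [
        f"Plan a family trip to {destination} for {party}.",
        "We prefer experiences slightly off the beaten track.",
        "Interests: " + ", ".join(interests) + ".",
        "Fixed commitments (must be scheduled):",
    ]
    for event, date in fixed_events:
        lines.append(f"- {event} on {date}")
    lines += [
        "Propose 2-3 itineraries with transport connections and costs,",
        "and explain why each stop was chosen.",
    ]
    return "\n".join(lines)

prompt = build_itinerary_prompt(
    destination="Japan",
    interests=["hiking", "archaeological digs", "photography", "local festivals"],
    fixed_events=[("Gion Festival, Kyoto", "these dates"),
                  ("Super Mario World", "any open day")],
    party="two adults and two kids",
)
print(prompt)
```

The point is the reframing: the user states goals and hard constraints, and itinerary assembly, cost comparison, and explanation become the system's job.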
[00:34:30] Matt Paige: Yeah. What are the stakes? And first of all, I'm jealous of the trip. That's awesome you're getting to do this. But we can put ourselves in the seat of Expedia, or a company looking to disrupt Expedia. How should they be thinking about this? And frankly, yeah, Expedia should be very wary, because this is the type of emerging technology that literally could upend an entire business model.
[00:34:53] Matt Paige: And just as an aside, we got an episode coming out later with Andy Sylvestri, who leads up our design practice, on the potential for this shift from an imperative to a declarative approach: a point-and-click approach versus a declarative one where I'm talking and interacting
[00:35:12] Matt Paige: with the interface. So it changes how user interfaces are designed. Be on the lookout for that; it'll be coming out in a few weeks. But Jason, start to take us through. Yeah, you just set up the context. What are the different dimensions that you can start to weigh a use case against
[00:35:32] Matt Paige: To determine how viable it is.
[00:35:34] Jason Schlachter: That's right. Okay. So we'll start with business value, but we'll keep it really short, because business value is something that is well studied. You want to be able to assess the business value. To assess the business value with generative AI,
[00:35:49] Jason Schlachter: you may wanna rapidly prototype. You may wanna do Wizard of Oz kinds of things, where maybe you give a customer a chatbot and you label it, so that it's very ethical and transparent that they're talking to a generative AI bot. And it's very expressive.
[00:36:05] Jason Schlachter: It can look through all the documentation, all the manuals. It's not just dumping technical information on you; it can reformat it and answer your questions. But while you have this whole thing labeled as AI-bot-driven, what you could really be doing on the backend is having some of your expert customer service people quickly typing stuff out.
[00:36:26] Jason Schlachter: And so you haven't really implemented anything technologically, but you've started to assess the viability of a customer accepting that they're gonna engage with an AI, and understanding how they engage with an AI: whether they structure their queries differently, whether they scope their requests differently.
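The Wizard of Oz setup Jason describes could be sketched as a chat interface that discloses an "AI assistant" to the customer while quietly routing messages to a human operator. A minimal sketch; the class, method, and queue names are all assumptions for illustration:

```python
# Hypothetical "Wizard of Oz" sketch: the customer sees a chat UI labeled as
# an AI bot, but replies are actually drafted by a human behind the scenes.

import queue

class WizardOfOzBot:
    """Routes 'bot' conversations to a human operator for drafting."""

    DISCLOSURE = "You are chatting with an AI assistant (prototype)."

    def __init__(self):
        self.pending = queue.Queue()   # customer messages awaiting a reply
        self.transcript = [("system", self.DISCLOSURE)]

    def customer_says(self, text):
        self.transcript.append(("customer", text))
        self.pending.put(text)         # surfaced on the human operator's console

    def operator_replies(self, text):
        # Recorded as the "bot" answer the customer sees.
        self.transcript.append(("bot", text))
        return text

bot = WizardOfOzBot()
bot.customer_says("Does the X200 router support mesh mode?")
reply = bot.operator_replies("Yes - see the X200 manual, section 4.2.")
```

Nothing here is model technology; the value is the transcript itself, which tells you whether customers accept the AI framing and how they phrase requests before you invest in a real model.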
[00:36:44] Jason Schlachter: So that's an example of how you could start to get at the business value. Next, and this is a really big one, Andy. Or sorry, Matt. We do have Andy on the call; he's MCing it all in the background. Matt, a really big one is fluency versus accuracy. For these generative models, fluency means generating content, and that's what they do.
[00:37:05] Jason Schlachter: They generate content really well. Accuracy means that the information is factual. And so if you asked a text-based generative AI model, "help me write a short story about..." and explained what you wanted to write, it could dump a story to you. And it's probably gonna read really well.
[00:37:23] Jason Schlachter: It's probably gonna be great for creators that need help structuring their content or want to add some details to their content. It just really speeds up that kind of workflow. In that case, things like hallucinations, which is a term for when generative AI models say things that aren't true,
[00:37:43] Jason Schlachter: there's a lot of technical reasons why that happens, but they do that. In that case, it's okay, because fantasy, creativity, abstract thoughts, those are all interesting aspects of a short story. But if you have an agent that's meant to give you medical advice, and you're asking it, do I go to the hospital, what's going on with me, you really want it to be accurate, and it's not as important that it generates creative content.
[00:38:13] Matt Paige: And this is a new kind of dimension, I feel like, with AI and generative AI. The importance of this one moves very high up the list of considerations, where in the past it was a much more nascent concept.
[00:38:27] Matt Paige: You mentioned business value; that's still critical, always gonna be there. Yeah. This one's interesting because things can go rogue, it can hallucinate like you mentioned. And what is the risk or the outcome if something goes wrong? Yeah,
[00:38:45] Jason Schlachter: So you have to think about your use case.
[00:38:48] Jason Schlachter: Is it a use case that demands fluency? In which case it's something you can address more easily with the models. And if it demands accuracy, there are ways to mitigate this: you're able to train models on your own, and you're able to tune some of the existing models.
[00:39:09] Jason Schlachter: So there are, like, foundational models emerging for generative AI. These are, like, OpenAI's GPT-4, but also Google has Bard, Meta has, I think, LLaMA. So a lot of these companies are building their own models. These are foundational models.
[00:39:29] Jason Schlachter: They have very large representations of language and semantics, and then they layer on top of that the ability to be prompted and respond appropriately. So these are models that you could use off the shelf for some of your business use cases. And if fluency is your goal, those are probably great fits.
[00:39:48] Jason Schlachter: But if you have a need for accuracy, you may need to tune them on your own data. And so this is where you start to ask yourself: do I have enough data to do that? It wouldn't be impossible to generate a model that answers medical questions. It's a great use case for generative AI, but it needs to be highly accurate, and it's probably highly regulated.
[00:40:13] Jason Schlachter: Yeah. Maybe even reviewed by a clinician in certain, or many, use cases.
[00:40:19] Matt Paige: Or if it reaches a state of getting into unknown territory, can the model be geared, in a sense, to where it's not spitting out a random response, but is saying, "I don't know"? There's that element of it as well. And how you start to actually monitor that may be a bigger, totally different problem.
[00:40:44] Jason Schlachter: Yeah. Self-reflection is a challenge right now for these models. They know everything, even when they don't, because of what's in their mind. Yeah, exactly. They've been trained on a certain set of world data, and they have a partial understanding of that data.
[00:41:01] Jason Schlachter: Yeah. And they look pretty convincing when they talk about what they know. But when they're asked to talk about something they don't know, they don't necessarily say, I can't talk about that. They try to answer it in the context of what they do know, and because they have only partial understandings of what they do know,
[00:41:18] Jason Schlachter: there's not an explicit, expressive representation of these concepts with some kind of logical, causal reasoning. It's all very probabilistic. You get very weird emergent phenomena, because you can find weird edge-case paths through the probabilities of these models.
[00:41:35] Jason Schlachter: So fluency versus accuracy is a cornerstone of how you should think about your use cases. The other really big one is low risk versus high risk. We talked about this just a moment ago, but what's low and high risk? Expedia sending me to a foreign country with my family and telling me to go stand on a corner because there's gonna be a bus, and there isn't, is high risk, right?
[00:42:00] Jason Schlachter: But me jumping onto, like, T-Mobile's website and asking a question in natural language and getting back a personalized explanation? That's pretty low risk. Especially, and this is interesting, you can do retrieval-augmented generation with these models, where, in order to suppress errors and to build confidence for the user, you can force the model to only say things that it can back up with a document that's retrieved.
[00:42:28] Jason Schlachter: So it could pull up some kind of knowledge-base article that exists in T-Mobile's data set, and it could say: this is the thing I found, but I'm not gonna make you read it. Here are the two sentences that directly answer your question. But if you need to dig deeper, this is the document that I used to generate this answer.
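The grounding idea Jason describes could be sketched as: retrieve a document, answer only from it with a citation, and abstain when nothing relevant is found. This is a deliberately naive sketch (keyword overlap instead of embeddings, made-up documents); none of it represents any real T-Mobile system:

```python
# Minimal retrieval-grounded answering sketch. Retrieval is naive keyword
# overlap; a real system would use embeddings. All data here is invented.

KNOWLEDGE_BASE = {
    "kb-101": "To enable international roaming, open the account app, "
              "choose Plans, and turn on the roaming add-on.",
    "kb-202": "Unlimited plans include 50 GB of high-speed hotspot data "
              "per month; speeds may slow afterward.",
}

def retrieve(question, min_overlap=2):
    """Return (doc_id, text) for the best keyword match, or None."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for doc_id, text in KNOWLEDGE_BASE.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best, best_score = (doc_id, text), score
    return best if best_score >= min_overlap else None

def answer(question):
    hit = retrieve(question)
    if hit is None:
        # Abstain rather than hallucinate: no supporting document found.
        return "I don't know - I couldn't find a document to back that up."
    doc_id, text = hit
    # Ground the reply in the retrieved text and cite the source document.
    return f"{text} (source: {doc_id})"

print(answer("How do I enable international roaming on my plan?"))
print(answer("What color is the CEO's car?"))
```

The design choice worth noting is the fallback: the same retrieval step that supplies the citation also gives the system a principled way to say "I don't know," which addresses Matt's earlier question about keeping the model from spitting out a random response.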
[00:42:52] Matt Paige: And this is taking it a step further than "let's just get the OpenAI ChatGPT API and integrate it." Now you're starting to weave in some of your own company's data and information to enhance
[00:43:07] Matt Paige: the experience, the model, all of that. So that's upleveling it a bit versus just slapping AI on your product, service, or process.
[00:43:19] Jason Schlachter: Yeah, exactly. And that's a fundamental question too. There's a lot of use cases you can unlock with off-the-shelf stuff, but there's a lot you can do to tune these models.
[00:43:27] Jason Schlachter: So let's talk about tuning them. If you're tuning a model, why would you do it? You might do it because you need more accuracy, in the kinds of cases we explained. And in that case, you need to ask yourself: do I have the data to tune it?
[00:43:43] Jason Schlachter: And so what do you need to tune it? You need your own documents that represent the knowledge sets and the way of speaking about the things you care about. So in T-Mobile's case, it could be their knowledge bases, their technical documentation. You may also need prompts and answers.
[00:44:05] Jason Schlachter: So one of the ways these models get built is a very labor-intensive step where people literally write out a prompt and then write out an answer, and then they show the model both. They use those to train the model on what a good answer to that prompt should be. And some of these bigger companies, like Google and Microsoft, have thousands, if not tens of thousands, of people employed full-time writing prompts and answers.
[00:44:28] Jason Schlachter: It's a very labor-intensive part of the process. So that might be something you do to tune a model. The other reason you would tune a model, if not for accuracy, might be performance. Maybe you don't need a huge model. Maybe you can run with a really small model that takes less compute, that you can run locally on a device, or that just costs less.
[00:44:50] Jason Schlachter: But you need to tune it because you're building an auto-mechanic helper: a generative AI system that helps your auto mechanic so that, rather than reading car manuals for cars he hasn't worked on in a while, he just asks the question and gets the immediate answer, with a reference back to the manual pages or something like that.
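The prompt-and-answer pairs Jason describes are often stored as JSON Lines, one example per line. A sketch of that layout, using the auto-mechanic example; the field names follow a common convention but are assumptions, not any specific vendor's schema, and the "Felder 3000" is a made-up car model:

```python
# Hypothetical supervised fine-tuning data in the "prompt + answer" style
# Jason describes, serialized as JSON Lines (one JSON object per line).

import json

pairs = [
    {"prompt": "The check-engine light is on in a 2014 Felder 3000. "
               "What should I inspect first?",
     "answer": "Start with the OBD-II codes; see manual section 7.1 "
               "for the diagnostic connector location."},
    {"prompt": "Torque spec for the Felder 3000 front caliper bolts?",
     "answer": "32 Nm, per manual section 12.4."},
]

# Serialize to the one-object-per-line layout trainers commonly ingest.
jsonl = "\n".join(json.dumps(p) for p in pairs)

# A trainer would read the examples back one per line:
examples = [json.loads(line) for line in jsonl.splitlines()]
```

This is the artifact all that human labor produces: thousands of such pairs, each teaching the model what a good answer to a given prompt looks like in your domain's vocabulary.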
[00:45:08] Jason Schlachter: In those cases it could be small, it could run on-device. So those are some considerations there. And then the other piece here is what's defensible and non-defensible. Is it important to you that the model you're using and the use case you're building are defensible from a business perspective?
[00:45:30] Jason Schlachter: So let's get back to the travel example. Would it be defensible if TripAdvisor built that capability? I'm gonna pause and throw you the question.
[00:45:44] Matt Paige: Yeah. And folks in the audience too, if y'all want to answer. You know what's interesting? If it's simply something where you could do the same thing referencing ChatGPT or some large language model that's open to the public, I'd say no.
[00:46:00] Matt Paige: It changes the whole business model and defensibility of their business. Now, if it's leveraging, to your point, data that an Expedia or a TripAdvisor has, that they can supplement into the model, then I think it does begin to have an element of defensibility. But what's your take? I'm curious.
[00:46:22] Jason Schlachter: Yeah, it would have to leverage custom data from TripAdvisor. They're not gonna get anything capable of doing that kind of use case off the bat. They're gonna have to spend a lot of time and a lot of money leveraging their own data to tune those kinds of models. And even then, I think it's really gonna struggle with being accurate,
[00:46:43] Jason Schlachter: because there are so many connection points, right? Transportation hubs, hotels, sites. But if you think about what they have: they can trace member trajectories through cities and tourist areas and restaurants. So I do think there's a lot of it that they probably could do. I think it's partly defensible on the model basis.
[00:47:03] Jason Schlachter: Yeah, it's only partly defensible, because Expedia might be able to do the same. Priceline might be able to do the same. Booking.com might be able to do the same. I would argue that there are nuances that TripAdvisor captures, like extensive photos from users, and it's very multimodal: hotels, cars, hiking, restaurants.
[00:47:29] Jason Schlachter: Everything, so, across the board. But I think even if it's not fully defensible, they still need to do it to be competitive in their industry space. So it's somewhere between differentiated and highly defensible, given that competitors might be able to do the same, but maybe not quite in the same way.
[00:47:49] Jason Schlachter: But I think ultimately what's interesting is that non-defensible doesn't make it bad, either. Things can be very high value but non-defensible. So in this case for TripAdvisor, it might be that the model is non-defensible. It might be that they can build this model, but so can every other travel service.
[00:48:11] Jason Schlachter: So then there are other levels of defensibility, right? Use cases and business models were defensible before AI came along. So yeah, what other ways is it defensible? It could be that their brand alone is helping to make it defensible. I don't necessarily want a startup, an AI startup, even if they're well-funded, sending me and my family off to Japan for a while. I might not trust it.
[00:48:37] Jason Schlachter: I would much rather go with a TripAdvisor. It might be defensible in that they have partnerships and integrations in a way that this actually works, right? Because the rubber still has to meet the road if they're gonna book these itineraries. So there may be other ways to make it defensible that aren't the model.
[00:48:55] Jason Schlachter: So I think when you think about these use cases from a business perspective, a defensible model is great if you can do it. But you're not gonna get a defensible model without spending a lot of money and having a lot of data. Yeah. So it may not be critical.
[00:49:07] Matt Paige: I think it deals with whether it's connected to your inherent value prop, or the customer-facing side of the business model itself.
[00:49:18] Matt Paige: Then this defensibility question becomes really important. But you mentioned brand actually is a differentiating thing. Now, I'd say for most folks it's at the level of the Apples and the big ones where you see that brand defensibility truly shining through. But that's a critical piece of this.
[00:49:38] Matt Paige: I've seen there are websites that track how many AI startups are being created, like, every day. And there are some where they're literally just putting a skin on top of a foundational model, and there's no inherent defensibility to it. Somebody could have spun it up over the weekend. And it's like, how do you weave through that? In essence, is there substance behind it that makes you unique?
[00:50:09] Jason Schlachter: Yeah. That's an interesting example, because those companies were serving a market need in some ways. In the very early days, the average non-technical person probably didn't know what OpenAI was, didn't know they had a website, didn't know they could go to the website and subscribe to their model; they just saw it in the news. And then they get a friendly, cartoonish bot popping up in their iPhone ads, saying, here for access to the model. And that was a marketing niche that OpenAI was neglecting.
[00:50:40] Jason Schlachter: I think they're picking up on that now.
[00:50:42] Matt Paige: A great example is the ChatGPT app. They didn't have an app for a little while, and there were competitors that created an app just leveraging ChatGPT. Yeah. And they were able to get some amount of, probably actually crazy, scale. But then OpenAI came out with their ChatGPT app,
[00:50:59] Matt Paige: and that probably just completely killed their whole business model. So that's, like, the whole defensibility piece: how easily can a competitor come in and just take it over?
[00:51:08] Jason Schlachter: Exactly. Exactly. Okay, so there are two more things I wanna touch on here. Yeah. One of them is whether it's internal- or external-facing. This kind of relates to risk, but it's not directly related to risk. If you're using it to create, and this is really where these generative models have the most value: to create content where fluency is the highest need and risk is low.
[00:51:34] Jason Schlachter: So this internal-facing use case of "help me compose emails to my colleagues faster," or "help me create marketing content that I can post online faster," or "generate blog posts for me that I can just tweak and send out," or "summarize this document that I received from one of my partners," or "explain to me this chain of emails." Those kinds of things can really boost productivity.
[00:52:05] Jason Schlachter: They're fairly low risk. There's a human in the loop. "Human in the loop" is maybe the magic word here. If there's a human in the loop and it's just proposing information or helping to accelerate something, that's low risk. Those are often internal-facing. But when you're customer-facing, there are higher risks.
[00:52:21] Jason Schlachter: So that's another thing you wanna consider too. And then part of that is doing the AI ethics component. In all of this, there's a need to consider the ethical implications of using AI models, even in your own business, but especially if they're affecting customers.
[00:52:41] Jason Schlachter: At Elevance, we were building AI models for healthcare, and we were impacting people's ability to get care with those models. Yep. Our intent was to improve their health outcomes and to make things better. But things can go wrong, and even when they go right, there are always risks that you have to assess.
[00:52:59] Jason Schlachter: And so we would hold these ethics workshops, and the idea here is to dive deep into what it means to build this. So I'll just spend a moment on that, Matt. Yeah. I think this happens really early on. It's not something you do at the end of your use case pitch, when you've got your funding and you just need to move forward.
[00:53:24] Jason Schlachter: It's really early on, in the process of establishing the viability of the idea and the business value. And there are ethics workshops you can do, where you work with a team of stakeholders and you start off really small: low overhead, an hour or two, get the basics.
[00:53:42] Jason Schlachter: And as you grow your business case and your plans and your funding, that's when you start to layer on more and more of this. And this is actually something that we do for our customers. We help them work through these kinds of ethics workshops, because you want a third party that has experience running these, and understands how things go wrong, to run this internally.
[00:54:03] Jason Schlachter: And so you look at your users, you look at your stakeholders, you identify all the stakeholders, and you try to understand the values and the interests that the users and stakeholders will have. What kinds of tensions might arise? How are you gonna test your assumptions? You think about the impact you could have, the changes in behavior that might emerge.
[00:54:27] Jason Schlachter: A great example for me is cars. Atlanta, where we live, was built after the invention of the car, primarily because the original Atlanta city was burned down and they rebuilt it after cars came to be. And at the time, the mayor of Atlanta said, I dream of building a city that is a car-first city.
[00:54:49] Jason Schlachter: And that seems like anathema today for us, but that was the AI of the time. They wanted to build a car-first city the way we'd build an AI-first city, right? Yeah. And now Atlanta is really difficult to walk in, traffic is bad, congestion is bad, and we're slowly peeling back the layers of that a hundred years later.
[00:55:07] Jason Schlachter: So that's an example of changes in behavior. If there had been an ethical review committee for the car-first city, maybe some of those things would've come up. There are also things like the group interactions that emerge, so how it affects groups. There are questions around data and privacy, around explainability.
[00:55:29] Jason Schlachter: If a model is impacting your life, you should be able to understand why it's making those decisions. We don't want to take the distributed bias and distributed failures of our current business ventures and centralize them in a way that nobody can question and understand.
[00:55:47] Jason Schlachter: There are questions around: do you have a human in the loop? How do you monitor performance? How do you mitigate things? How do you get feedback? All these kinds of things are discussion points, like, what is fairness? What does it mean to be fair in this use case? This is part of the validation cycle, but you start light, an hour or two on the first pass, and by the time you're funding a big use case in a big program, it should be very rigorous. There should be processes in place, accountable stakeholders, and all that stuff.
[00:56:17] Matt Paige: No, that’s awesome.
[00:56:18] Matt Paige: And, great example of something that can be facilitated with AI empowerment group in, Hatworks there. So we got about five minutes. I’m wondering, Jason, we could jump into some of these questions and topics in the chat if you’re up for it, unless there’s something else you wanna cover. That’s it.
[00:56:35] Matt Paige: That's great. Yeah. Jacob had one: does anyone use AI for scheduling appointments? And I don't specifically know of a tool. I'm sure there are several folks trying to achieve this, but this is, like, a perfect example of a use case where you could disrupt a Calendly or products that exist out there.
[00:56:56] Matt Paige: How could that impact that workflow? I need to schedule appointments, plan out my day. I don't want to be the person having to reach out to somebody and say, hey, does this time work, does that time work? Jason, that was an interesting one. Any thoughts on that?
[00:57:11] Jason Schlachter: Yeah, I think there are tools like Calendly that do that today.
[00:57:16] Jason Schlachter: And I think there are other AI startups out there that do something similar. But I guess I would challenge the notion of what it is: what is the real task that you want done, or that I want done? It's not strictly that I wanna schedule the meeting with Matt, and so I want Calendly to go figure that out for me.
[00:57:38] Jason Schlachter: That’s still that that like process level where I have to get it done. I would love to just have a, more robust agent where, I said Hey, I want to talk to these 10 people this week. Go figure it out. And then Matt gets an email from Calendly saying, Hey Matt, Jason has identified you as somebody who’d like to speak with this week.
[00:57:56] Jason Schlachter: What is your availability?
[00:57:59] Matt Paige: And what you just did there is take the question from earlier: if I had a team or a staff that could go and do this, how would they solve the problem? Versus me having to be the main point of failure, the bottleneck in the process. That’s a great example of how to reframe how you think about a use case.
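To make the reframe concrete, here is a toy sketch of the "go figure it out" scheduling agent Jason describes. Everything in it, the function name, the slot representation, the greedy heuristic, is invented for illustration; a real agent would also handle email, time zones, and negotiation.

```python
from typing import Dict, List, Tuple

# A slot is (start_hour, end_hour) within one week, e.g. (9, 10) for 9-10am.
Slot = Tuple[int, int]

def schedule_meetings(availability: Dict[str, List[Slot]]) -> Dict[str, Slot]:
    """Greedily assign each person one slot without double-booking ourselves."""
    taken = set()   # slots we have already committed to
    plan = {}       # person -> chosen slot
    # Schedule the most constrained people (fewest open slots) first,
    # a classic greedy heuristic that avoids easy conflicts.
    for person in sorted(availability, key=lambda p: len(availability[p])):
        for slot in availability[person]:
            if slot not in taken:
                taken.add(slot)
                plan[person] = slot
                break
    return plan

if __name__ == "__main__":
    requests = {
        "Matt": [(9, 10), (10, 11)],
        "Dana": [(9, 10)],
        "Lee":  [(10, 11), (14, 15)],
    }
    print(schedule_meetings(requests))
```

The point is the interface, not the algorithm: you hand the agent a goal ("talk to these people this week") plus constraints, and it owns the back-and-forth that you would otherwise bottleneck.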
[00:58:19] Matt Paige: I like that Chris is creating movie scripts about Batman’s early days. It’s funny, but it does potentially change how that whole industry works, from a creator perspective and all of that.
[00:58:36] Jason Schlachter: So for people who are not deep into Stable Diffusion or DALL·E, there are generative AI models out there right now creating movies.
[00:58:47] Jason Schlachter: And writing the scripts for those movies. It’s emergent. I believe in the next year we’re gonna see TV shows where the script has been written and the actual animations have been completely created by the AI. They may not be successful, I don’t know, but it’s happening.
[00:59:06] Matt Paige: This is one of those big transformational, disruptive type of things. You think back in the day to music going digital. Same kind of thing. And there are gonna be the movie studios trying to fight this change of AI and generative AI playing a role.
[00:59:21] Matt Paige: But it has the feeling of something similar that happened not too far in the past.
[00:59:26] Jason Schlachter: Or what if it’s: make me a commercial that’s gonna cause people to hire AI Empowerment Group to help them with AI strategy. Create music for it, some kind of amazing techie, humanistic background, write the script, and then use my voice to create it.
[00:59:44] Jason Schlachter: Because it can speak like me, since it’s trained on my voice. It will just speak for me. Yeah, it’s possible.
[00:59:51] Matt Paige: Yeah. And Klaus brings up an interesting one: how can businesses leverage the potential of ChatGPT to enhance customer interactions and streamline various business processes while ensuring data privacy and compliance?
[01:00:05] Matt Paige: Particularly when it involves sending data via the API back to the OpenAI cloud. I think this is an inherent risk type of aspect.
[01:00:15] Jason Schlachter: This is a good one. Yeah. So Klaus, you mentioned you’re with a German company, and the EU is passing measures to require that any use of generative AI be approved by committee and be licensed, I believe.
[01:00:32] Jason Schlachter: And I think we’re gonna continue to see pushes for that. I don’t necessarily think that we should be regulating generative AI, or AI, at that level in the broad sense. I think there are specific use cases that should be regulated, just as we regulate food and drugs with the FDA.
[01:00:51] Jason Schlachter: Certainly in certain domains, where there’s a need for precision, it should be regulated. But I think a lot of these low-risk startups should be able to get out there and do it. In Europe, though, you’re probably gonna be faced with that challenge. One way to mitigate what you’re asking about is not to send the data to OpenAI.
[01:01:08] Jason Schlachter: Run your own models, run them in your own cloud, host them in your building, or push it to the end user and run it on their client machine. In doing so, you’re not sending their data to OpenAI. There are open-source models emerging in generative AI, and some of them are pretty mature.
[01:01:32] Jason Schlachter: Stable Diffusion is a great example. It’s a first-class generative AI model that’s open source. There are a lot of large language models and ChatGPT-type capabilities on the open-source side. I’m a firm believer that the open-source models will overtake the closed-source models given time.
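Alongside self-hosting, a cheap complementary safeguard, sketched here purely as an illustration rather than something prescribed in the episode, is scrubbing obvious personal identifiers before any text leaves your infrastructure. The pattern set below is deliberately minimal; real compliance work needs far more than two regexes.

```python
import re

# Placeholder patterns for two obvious identifier types (illustrative only).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    print(redact("mail jane.doe@example.com or call +49 30 1234567"))
```

Running the redaction on the client side, before a request ever reaches a third-party API, keeps the raw identifiers inside your own boundary regardless of which model vendor you choose.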
[01:01:52] Jason Schlachter: So yeah, you may not…
[01:01:56] Matt Paige: There’s even a leaked document, I think from Google, and I believe it was real, cautioning about this exact thing internally. And it’s funny, they call themselves OpenAI, but it’s not really open per se. But you look at Meta taking that strategy, and there are other foundational open-source models out there that have the potential to overtake things being developed internally.
[01:02:23] Matt Paige: Yeah,
[01:02:23] Jason Schlachter: Meta’s a great example. So open AI originally founded with Elon Musk and, others to, to open source these AI models so they wouldn’t be closed source then strong armed overtaken by Microsoft. Yeah. Now Microsoft owns it. They, make them closed sourced. Meta, has and, Zuckerberg has surprisingly shown up to be like the big open source creator of these models.
[01:02:48] Jason Schlachter: And I think from a business strategy standpoint it makes sense. Google’s playing to win, Microsoft’s playing to win. They want to be the winners in this generative AI race. I don’t think Meta wants to do that or necessarily needs to. They’re playing to not lose. If they raise the water level for everybody, then everybody is okay and nobody loses.
[01:03:11] Jason Schlachter: And I think that’s Meta’s play. And that’s a good strategy against these two giants that are dumping all their money into it.
[01:03:18] Matt Paige: So there’s the network effects element there too, right? If they’re at the foundation of it, it kind of raises their business.
[01:03:27] Jason Schlachter: It happened with Stability’s Stable Diffusion. There are thousands and thousands of versions of Stable Diffusion being spun up because it’s open source, and DALL·E has its own trajectory.
[01:03:37] Matt Paige: Yeah. We are at time, but we can go a little bit longer. Just to close it out though, I love that last comment there: these tools are an expansion of your imagination.
[01:03:53] Matt Paige: I totally agree. One of my favorite uses of ChatGPT is telling it to graphically describe any concept, a great foundation for any type of media creation. But it’s an interesting concept, like that co-pilot idea, and it’s a whole other topic. Yep. There’s one more in the chat, so yeah, we can keep going.
[01:04:11] Matt Paige: Let me just do the call-out and then we can stick on for another couple of minutes. Like we mentioned earlier, HatchWorks and AI Empowerment Group are partnering together, so these types of custom workshops are exactly the kind of thing we can take your organization through.
[01:04:28] Matt Paige: Jason, you mentioned the ethics-based workshop. This is the part where having an expert is critically important, so hit up Jason or me and we can help get that facilitated. But any other closing thoughts? And then maybe we can jump to a few other chat items.
[01:04:45] Jason Schlachter: Yeah, Matt, totally agree. I love the ideation process, the creative problem-solving piece, and I love hearing about the kinds of problems that are real and concrete. Those kinds of opportunities would be a lot of fun and productive for both of our organizations.
[01:05:00] Jason Schlachter: So hopefully we’ll hear from some of you. I would love to pick up this one question from Monica Lapera, which is: the biggest fear for some people is that AI can replace some jobs or even professionals. How do you balance the pros and cons that AI brings to the world? A great question.
[01:05:18] Jason Schlachter: We’re not gonna answer it in the last moment here, but I think it’s a great question just to surface, because there is immense responsibility. This is really the dawning of an age in which how we work, how we live, how wealth gets distributed, and who has what is gonna dramatically change.
[01:05:37] Jason Schlachter: And there’s a lot of hype out there. Generative AI is not everything it’s hyped up to be, and it’s gonna take a long time for a lot of these things to happen. But the reality is that we overpredict the short-term change and underpredict the long-term change.
[01:05:54] Jason Schlachter: So this is a great question to surface, and I think we just have to be really deliberate about the ethics of all this and try to build the world that we wanna make, not just the world that we can.
[01:06:06] Matt Paige: tools I’d say too. It’s do you have the opportunistic mindset or the negative or positive, I’m forgetting the, correct terminology here, but think about 20 years ago, majority, a large portion of jobs that exist today did not exist previously.
[01:06:26] Matt Paige: So a lot of times transformational, disruptive, things like this create new opportunities we don’t even know exist yet. So I think this is like one of those things that has the potential as well. Even though it may be replacing some jobs, I think it’s gonna create a whole host of new ones in the process.
[01:06:43] Jason Schlachter: Absolutely. And a lot of what it’s gonna do is not replace jobs but replace tasks. So if you’re, say, a medical claims reviewer, and I’m just taking a wild stab in the dark here, you might not love reviewing medical claims. It might pay well, and you have some training that makes you appropriate for it, or it’s easier than being out on the ER floor all night.
[01:07:06] Jason Schlachter: But you may not love all aspects of medical claims processing. And so this is where I think AI can remove some of the burdensome tasks that you don’t enjoy so that you can focus on the stuff you do enjoy. What if you could focus on the really interesting clinical challenges or the really puzzling situations, and not the mundane minutiae of comparing numbers, checking dates, or understanding timelines?
[01:07:32] Jason Schlachter: So I think for most people it’s gonna remove the mundane, more automatable tasks, but not their jobs. There certainly will be people whose jobs are lost, but like you said, it’s always changing.
[01:07:49] Matt Paige: Yeah. It’s like going back to jobs-to-be-done: communication is a job that’s existed for a very long time.
[01:07:54] Matt Paige: From talking, to physical mail, to email, to Slack, and so on. The job remained the same; it’s just how you did it that changed. And just the last one, because Chris is hitting on it: how’s it gonna impact the stock market? Is anything being done to regulate that? I’d say I don’t know.
[01:08:15] Matt Paige: I think there’s a lot of stuff already being done today leveraging AI in terms of stock trading; that’s already prevalent in a lot of ways. But I don’t know. Any thoughts there to wrap us up with the last Q&A question?
[01:08:30] Jason Schlachter: Yeah, I would imagine that most stock trading right now is already done by AIs.
[01:08:35] Jason Schlachter: So maybe the question is, if the AIs get better, what does it mean for us? I don’t know. The only sage advice I can give on that is: put your money into an index fund and forget about it. Anything else is a gamble, whether it’s AI-driven or not.
[01:08:54] Matt Paige: That’s right.
[01:08:54] Matt Paige: That’s right. Cool. That was really appreciate you being on Jason. Thank you everybody that came and participated. We will be putting this out there on the podcast and sending out the recording to everybody that joined. And we got a few I think good takeaways in terms of templates and things we can share from this talk as well.
[01:09:15] Matt Paige: But really appreciate the time, Jason and everybody. Have a good rest of your day. Thank you, Matt.
[01:09:21] Jason Schlachter: Thank you guys for the questions. It’s great to be here.
[01:09:24] Matt Paige: Thanks everybody. Bye.