Watch on YouTube

Full Transcript

Noah: All right. I'm here with Lauren. We met when I attended an AI event in Cleveland last Monday, and she gave an excellent presentation on all things AI. Lauren, do you want to go ahead and introduce yourself?

Lauren: Sure. Yeah. Thanks, Noah. Happy to be here. My name is Lauren Burke-McCarthy. I am an associate principal at Further in the realm of AI strategy, data science, and AI products. So, I have a background in data science—very technical—moving through the product space and now into the AI strategy space. A lot of my work is helping organizations understand what's going on in the world of AI and how to make it work for them. Do it responsibly, do it with value, and ultimately do it sustainably.

Noah: Yeah, that makes a lot of sense. AI has kind of taken the world by storm a little bit. There’s been a lot going on over the past few years. Can you give me just your general take on the AI space and what you've seen over the past couple of years?

Lauren: Yeah, absolutely. I think, obviously, the biggest turning point in the accessibility of AI over the past couple of years was when ChatGPT came out on the market; it really took things by storm. AI is, as I like to say, more accessible than ever. There are so many tools out there, so many options, different models, and with that comes a lot of different use cases. And so what I've seen a lot over the past year is some FOMO. There's a lot of interest in seeing what others are doing and how you can stay ahead of the curve. And I think now we're actually starting to think more inward about what's good for our organization, our team, and our use case. Alongside that, I think we're starting to lean into a more human-first approach to AI, because people use systems that are built for people, and that's what's going to make it sustainable long-term.

Noah: Yeah, that makes a lot of sense. And there's so much out there. I think sometimes, if you're not following it or working in it directly, it's hard to keep up because you can't really tell what's real and what's not. Sometimes people like to exaggerate things, but in other cases, things are understated; there's not enough talk about some of the cool tools that are out there. With that being said, what AI tools are you using professionally? Gemini, GPT, and then maybe more custom things?

Lauren: Yeah, and I love the point you made. There is just so much out there. We focus on a lot of the big names, but sometimes when you start digging in and seeing what's actually out there, there might be something a little more niche that fills a gap and is really impactful. With a lot of the small models you're seeing in specific industries, we're moving into that unique use case and support. But personally, I use some of the more common ones: ChatGPT, Gemini, and Claude. I love using them for getting a first draft or an outline going. That was something that plagued me so much in college and high school, getting a paper started and just getting my thoughts together, so that has been such an immense help for me. Alongside that, deep research tools are great. Set one up, make sure you're getting a deep-dive summary, and then come back later and check some of the resources that have come through. It's really a great way to keep a pulse on the market and the industry. I love NotebookLM for dealing with a ton of documentation and presentations, getting it all in one place and asking questions. I like making the podcasts out of that; it's super great, and I can listen to them in the car. I've played around with some code assistants, and then there's one I've used for years that's now even more AI-driven: Grammarly. Now they're Superhuman, but I love that for communication.

Noah: Interesting. I really like the podcast concept. That's cool, putting it into NotebookLM and being able to hear about that data and that information everywhere. I like deep research too, because I think it cuts down on hallucinations. If the LLM goes in-depth into a topic, and then you ask your questions about that, it's a lot less likely to give false information. What kind of challenges have you had with AI, or what have you seen other people have challenges with, and how have you gotten around those challenges?

Lauren: Yeah, and you just touched on one with deep research. Those tools generally do a pretty good job of sourcing the content that you're seeing, as well as any references, metrics, or data points, but it's always good to still check and validate that. Hallucinations happen when the model tries to fill in a gap in its knowledge because it still wants to be able to answer you; it wants to make you happy with the response. So hallucinations are one of the biggest challenges, and they can be mitigated with a simple gut check now and then to make sure you're still on track, especially before you cite something or send it off to anyone else. Another thing is just the maintenance required, especially if you're looking at a RAG-based system or a knowledge base; the data has to be kept up to good quality. And then, in general, risk management with AI: I think the sooner you start that, the more control you have over it. You don't have to stop later; you can just keep going one step at a time, making it better.

Noah: Yeah, that makes a ton of sense. So AI has gotten to the point where it's not necessarily fully replacing jobs, I would say, but it is replacing a lot of the tasks that people do. Say you're working on something or hiring out for something, and you have a one-year project ahead of you that you know is going to take about that long. Given a project of that length, would you rather have AI tools at your disposal, or a worker at your disposal, someone who might have just graduated college or somebody who's been in the field for a while?

Lauren: Yeah. And I think this is such an interesting question, and I'm probably going to answer it in a way you're not hoping I do, but I don't think it necessarily has to be an either-or question. I think we have figured out over time that there's a big category of things machines do well, and another that humans do well. In between, there's a middle ground where either could support the work. And if it's a one-year project where you have a process and you want to make that process better, and it's consistent, manual, and pretty rule-based, sure, see if you can take a year and automate or augment it with tools. If it's something you need human judgment on, something high-risk or consistently changing, I would probably rather have a person on that and see if you can build an augmented process around them. But yeah, I think it comes down to the risk, the accountability, and the judgment you need. And I don't think we have to write off a hybrid process and the balance of both.

Noah: Yeah, that's a good answer. I think that hybrid process is important: somebody who knows what they're doing with AI. And building off of that, for somebody who's maybe interested in learning about AI but doesn't really have the background to grasp it yet, or hasn't really dived in, what advice would you give them to bridge that gap between interest and action?

Lauren: Yeah, absolutely. I think you don't necessarily need a deep technical background, or even a technical background at all, to be AI savvy. We saw that with digital literacy and data literacy, and now we're moving into AI literacy. Some of the biggest things to understand are what kinds of problems AI can and should solve, and what you should not use it for. Alongside that, just know what risks are associated with different types of usage, then look at your day-to-day and ask, "Where can this work for me?" A lot of times it's easy to get overwhelmed just thinking about all of the different ways you could use AI. But once you start with a couple of tutorials that reflect things you commonly face in your day-to-day, something that ties more directly to your own experience or your own need, I think it's a little easier to take that first step and then continue learning from there.

Noah: That's a great answer. There's so much out there that it's hard to figure out, but if you take the simple path, looking at what you're doing each day and figuring out where AI fits into that, I think that's a much easier and clearer starting point. All right, last question. There's a lot I'm personally interested in, some biology stuff and robotics with AI, but there's so much we might see in the five-to-ten-year future. Is AGI coming or not, and what's going to happen? What interests you in that five-to-ten-year horizon, or what do you see coming down the line?

Lauren: Yeah, absolutely. A couple of the biggest things: again, I'll come back to the human-centric, human-first approach. I think there's an emphasis now on not just making more AI products and AI features, but figuring out what makes people actually use them. We've graduated into the "what makes it valuable" stage of generative AI. Alongside that, I think some of those features and products are going to be heavily video- or image-driven. That's the next step we're taking alongside voice technology. We're starting to see a lot more potential with voice technology; it makes experiences so much more accessible to folks as well, so I think that is a great move. And then we're starting to see more of a shift into previously very high-risk industries: the legal space, finance, and even critical infrastructure like travel. The ability to use AI to model scenarios and capture risk earlier is something we're going to see more and more use cases for, especially in high-risk contexts.

Noah: Yeah, a hundred percent. A lot of interesting use cases there. Absolutely. All right, I appreciate you coming on. Any plugs or anything you wanted to mention that we didn't get to or I didn't ask you?

Lauren: Yeah, I'm on LinkedIn. We have a podcast called Women in Analytics After Hours. I am involved in the Columbus and Cleveland general technical communities. So if you're interested in hearing about AI or if you want to talk responsible AI, you can always hit me up on LinkedIn or send me an email. I'm happy to chat. And thanks, Noah. It was great to chat with you.

Noah: Yeah, thank you for your time.
