July 9, 2025
32% of data is contaminated, yet AI adoption surges without strategy. Discover why EQ still matters and how to avoid the AI hype trap in this episode.
Check out the full episode below! Enjoy The Exchange? Don't forget to tune in live Friday at 12 pm EST on the Greenbook LinkedIn and YouTube channels!
A shocking reality: 32% of organizational data is contaminated, yet companies are rushing to build AI on this foundation. With only 22% having clear AI strategies, the industry faces a crisis of implementation without strategy.
This episode reveals why emotional intelligence still beats artificial intelligence, which tasks humans must never outsource to AI, and how to avoid the cognitive decline trap. Cut through the hype and discover what really works.
Many thanks to our producer, Karley Dartouzos.
Use code EXCHANGE30 to get a 30% discount on your general admission IIEX tickets!
Stay Ahead of the Curve! Subscribe to The Exchange Newsletter on LinkedIn Today!
Lenny Murphy: We're laughing because Karen has a pre-live dance. Welcome back, Karen, from IIEX Europe, and not seeming jet-lagged at all.
Karen Lynch: Thank you. Thank you. I literally just realized, as you say it, that I'm not jet-lagged at all, Lenny. But I also realized, oh my goodness, my computer's not plugged in and my battery is at 39%. So fun fact, if I disappear, it's because I don't even know where my power cable is.
Lenny Murphy: Well, we'll power through, no pun intended.
Karen Lynch: No, for real.
Karen Lynch: So I got home at around 7:30 p.m. Eastern last night. I did get a message about Dana's connection: he flew from Amsterdam into New York, heading back up to his home in Portland, Maine, and his connecting flight didn't leave until 2 a.m. So Dana probably wins for the most jet lag on the team right now, because that's a brutal travel day.
Lenny Murphy: But yeah, but yeah.
Karen Lynch: I mean, it's good to be home, but it was a really good week, Lenny. Like, just really good, really good.
Lenny Murphy: Well, I haven't heard that much yet. I was going to say, have you talked to the team?
Karen Lynch: So one of the things about this Europe event, which is different from North America, is that our team sort of scatters afterwards. Cara goes on vacation, Bridget's taking a couple of days off, Lukasz stays overseas in Europe. We're all kind of scattered, like somebody turned the lights on and we're mice that ran away. So it does feel like you're a little bit disconnected. But listen: not only did we have record registration, we had a 90% attendance rate. We couldn't believe that. On day two of this event, where we usually see attrition, we had nearly 800 butts in seats. So it was an extraordinarily successful event for us in terms of attendance. And as a team, we kept looking at each other and thinking, is there any real issue? Have any shoes dropped? I kept saying, I am waiting for a shoe to drop, and no shoes dropped. This was a great event, so we're excited. And the energy was amazing. The engagement, because there were so many kinds of, well, we say butts in seats, which sounds like a... It's a core metric, guys.
Lenny Murphy: When you're in the event business, as a core metric.
Karen Lynch: But what was also happening was that the engagement was incredible. You see it not only when you're on stage, and I share the plenary talks, but in the questions coming in. The questions are unlike North America; in Europe, they really are asking questions, more questions than we can answer in the time that we have. People are riveted by what they're seeing and hearing, and they engage very differently in that audience. Most of them come in anonymously, which is an option, you can ask your questions anonymously, but they're very freely asking questions of the speakers, which leads to the conversations we really need to be having. So it's really good stuff.
Lenny Murphy: I mean, I heard that some of the AI sessions were standing room only. The level of focus on the AI platforms, specifically.
Karen Lynch: So let me tell you about that, because there were two dynamics, and maybe what you're even talking about is the demos. Normally at our events, demos are like, yes, we'll go and check out some demos. But this year, attendance at the demo stage was practically busting through the door; our two demo tracks were among our most attended tracks. And that doesn't usually happen, especially with brands.
Lenny Murphy: Sorry, what I heard specifically was that the brands were there going, oh crap, we've got to figure this out, and they were really engaged.
Karen Lynch: Yeah. And here's what's happening: obviously there was a lot of AI on the agenda, to the point where, you know, I'm the one who tags all of our agendas to make sure people can find things, and I had to come up with a different tagging system this year. You can't just say AI anymore; that doesn't work. Now we have to get very specific about what kind of AI tag, so people can find relevant AI content. So there was a lot on the agenda, but also on the exhibit floor among the vendors, and in some of our sponsored spots. And the difference is, everybody has an AI feature now, but the people with smarter features or smarter AI tools are starting to stand out as differentiated from everyone who's like, yeah, we got a chatbot. That's not interesting, right? Somebody on LinkedIn called it vanilla AI versus full flavor. We've got synthetic personas, we've got fraud detection, we've got AI interviewers, some really cool, specific AI use cases. You can't just say we use AI to synthesize data, because everyone's like, yeah, so do we. Right, right. The car has a steering wheel.
Lenny Murphy: Yeah, exactly. The car has a steering wheel.
Karen Lynch: Great analogy. But it was things like, and I'll shout out a few sessions. The highest-attended session was a blue stage plenary, by design, because there are no breakout groups running against it. Our speaker from an athleisure brand called Sweaty Betty talked about tinkering with AI, and that was our highest attended, even among all the plenaries. What he was explaining was very much a kind of Greenbook philosophy of throwing spaghetti at the wall and seeing what sticks. He's developing internal use cases for insights; he doesn't have a large team. The talk was called Tinkering with AI, and it was really inspirational, because he was just saying, let's use AI for this, let's use AI for this, let's see how to use it for this. He was doing it in very innovative ways, and people kept saying, oh, that's an interesting use case, that's an interesting use case. People were excited by the experimentation. And then, in contrast, two others come to mind rising to the top, also plenary talks: Florian Bauer from Samsung and Tino Colani from the Lufthansa Innovation Hub. They're using AI at a higher level of use cases, right? And of course, I know you've talked to Michael Ramlet from Morning Consult about their AI and making their AI platform accessible. So there's big-data AI happening, and there's tinkering happening. And that really is the big takeaway about AI, or the two big takeaways. One, if you are a supplier or a partner that has AI features, you've got to rise above the vanilla, steering-wheel AI. You have to get to the big, cool stuff. You have to experiment, be bold, et cetera. So that's really the AI takeaway: just take those chances.
Another thing that came up that I do want to talk about, because I think it's a really important trend: a couple of different speakers talked about making sure, because we can get answers so much faster with AI, that you're asking the right questions, and that your research briefs get real critical thinking in advance. Are these the right questions we should be asking? Is this really the right brief? Taking the time saved on back-end analysis and making sure we're boldly asking. A speaker from Unilever, Nihon Sahan, said in her presentation: what if the real power of AI isn't in the answers it gives us, but in the questions it forces us to ask? Which was super compelling, and I think a big deal. And then the third big thing for me to share with you is the thing that lands: the human story. The combination of human insights, qualitative, ethnography, video, bringing human beings to life and then telling their stories, because AI can't do that as well as human researchers can. There was a lot of video storytelling going on about actual people, and great photographs. I'm not saying synthetic personas won't be able to replicate some of that at some point; they probably can. But right now, at this moment in time, human stories are greater than AI, right? We had a gentleman from Foot Locker, and I'm forgetting his title; his name is Cheer Kim, or Keo, I think, is his last name. He shared these video interviews with basketball culture sneakerheads, and they were talking about their affinity for the store and how they feel they're part of this subgroup, this culture of basketball sneakers.
And when they go into the stores, they feel seen, and it's building brand affinity; they buy their sneakers there because they walk in and they're the same as the staff. It was really heartwarming to talk about this subculture of sneaker buying across the Foot Locker brand, and it was moving because these great video interviews were shown on screen. Findings are interesting and might stimulate your brain to say, aha, but it's the video storytelling and the people that make you feel something. And that's where we are at this point in time. So leaning into the human element while we're leaning into the tech is critical.
Lenny Murphy: Yeah. I guess it's something in the zeitgeist; I'm recently convinced that we have some unified new sphere of consciousness, because these topics were coming up for me as well this week. My read right now is that it's kind of always been this way, right? If it's just tactical, the who, what, when, where, and how, that stuff is increasingly going to be driven by either methodologically pure AI or even synthetic sample, because it's a mile wide and an inch deep; you don't need five inches, you don't need the why, and that's okay, that's acceptable. And I believe that's being mirrored in the macro things we're seeing around ad spend within the advertisers, because I think ad testing is a great example of that. So that's going in that direction; it's just the next evolution of automation. To your point about where the interesting innovation is happening: there's certainly the data synthesis piece of things, but the real shift is the macro qual idea of, wait, let's get to the why and explore more deeply. And increasingly, folks are recognizing the value is in first-party, individualized data driving that, not just pulling from everything that's online.
Karen Lynch: Yeah, and case in point to all of this. As we go through the stories, which we probably have to do because we're chewing up our time, I knew that would happen. I was traveling, so Lenny was curating and watching the news. And as I'm looking at the things that came across Lenny's desk this week, I'm like, oh yes, and this, and this. I'm putting together all these pieces that are all part of this zeitgeist, right? While I was getting this intel in one area, you were pulling it up in another. So there are a few things I made a note on here where I'm like, you know what, yes, that really tracks with what we heard. So let's get into it.
Lenny Murphy: Yeah, let's get into it.
Karen Lynch: Yeah, let's talk about the M&A news because you know, we might as well be leaning into where the money's going, right?
Lenny Murphy: Right. I thought this first one was really interesting: an AI platform called Tastewise, $50 million in Series A funding. When I first saw this, you know, we've joked, like, God, AI can replace sensory research, and I thought, oh crap, are they? No, they're not. But they're driving innovation by analyzing recipes and what people are posting online about food, et cetera, and how brands are utilizing that as the front end of an innovation feed, to look at trends in food, ingredients, et cetera, and repurposing them. So hats off; I thought that was really cool. Yeah, it is really cool.
Karen Lynch: Especially because, and I mention my daughter a lot because she epitomizes Gen Z right now, she's going to ChatGPT to help with recipes all the time: I have these ingredients, tell me what to do with them. And she's having a conversation with her recipe partner. She used to get her recipe ideas from TikTok and Instagram Reels, and now she's like, I think I can do this better with my LLM assistant. So I just think, yes, there's mining what's out there, but then also, how will these LLMs be driving some of that innovation?
Lenny Murphy: Cool stuff. Yeah, interesting. Our friends at Aques continue, now with their acquisition of Safio, I think that's how you pronounce it, which gives them a foray into Africa. Hugh and Keith, we see what you're thinking.
Karen Lynch: When I clicked on this, I was trying to work out why they're doing it. In my head I'm thinking, all right, what's your play here? And it's not just to have global access; it's local insights feeding global expansion. That's who they're targeting: people who are looking to expand. And now they'll be able to provide local insights for companies that are looking to expand.
Lenny Murphy: And I'm like, yes. Look, the African continent is underserved in many ways, but by God, from a population standpoint, for emerging markets, I mean, Kenya. Here in the US we kind of only get the spammy stuff, right? But that is a rapidly emerging, technologically advanced market with a massive population. So yeah, good stuff.
Karen Lynch: On to product launches; there are a bunch.
Lenny Murphy: Yep. You want to take the Logit Group one?
Karen Lynch: Yeah. They are one of our big sponsors; they weren't necessarily in Europe this time, but they are rolling out a data quality and fraud detection system. What I thought was interesting about this, again, going to much of the conversation that was happening, not necessarily on stage but around the exhibit hall and client to client, is: okay, we are all very aware of the data quality problem we're facing, because it was newsworthy just a few weeks ago, and now we need the solutions to rise to the top. So I love this one, because it's focused on research integrity. And they came out with a stat in the press release they shared: when they conducted an internal evaluation, 32% of the completes were fraudulent or poor quality. They were very transparent about it: here's what we found, so we're now building something to fix it. Everybody has to be doing that, end of story. Because the brands are skeptical; they're not sure they trust anything right now. If you want to win back their trust, you 100% have to be doing this kind of work. So hats off to the Logit Group. Absolutely.
Lenny Murphy: And we've been talking about this for years now, and here we are: garbage in, garbage out, the contamination risk for AI. If our future is driving AI training, which I believe it is, we can't get by with the crap that we've had.
Karen Lynch: Well, as I scroll through to look at your list, isn't that specifically talked about in one of your recommended readings? There was something that is absolutely about that: you have to be training the platforms with good quality data. If you're trying to build your own proprietary LLMs so that you can access them for future insights, and you have fed them garbage, you have already set yourself up for a problem. Do not put data that is unchecked for quality into your system to build a proprietary GPT. Nope, you're starting off with a shit system.
Lenny Murphy: Yep. All right. The office sports, you may not know this, we use these off sports, their grip partner, they are the competition longtime friends. And in this, we've talked about this example, right? It's like, Oh, what they turn Excel and PowerPoint into an analytical package. And what? It's kind of boring and cool. No, no. I've always been a huge fan of them and other things like this, because that is still where the bulk of our work is done from a deliverable standpoint. So cool to see them back in the news. They had drag and drop to Excel reporting for drag and drop cross tabs, which, you know, that's still, that, that is, that's something that we, we do the most people use and it makes it So hats off to Torben and Fred for continuing to champion the bulk of researchers who still use a Microsoft Office. Pretty cool. Pretty cool.
Karen Lynch: So cool. You take this next one too, because, you know, full disclosure about L&E, right? Go ahead and share what L&E has done.
Lenny Murphy: They continued. L&E's Conducts platform, which they launched earlier this year, keeps expanding its capabilities, and they're incorporating AI. The cool thing about this, and people might not know it: James Hubbard, who is the face of this, was an Apple engineer. He was brought over, and he has that Apple mentality and is building the platform with that Apple view of things, which I just think is really interesting. The focus is on making it easier to recruit; they've added behavioral profiling, yada, yada, bringing qualitative recruiting at scale into the AI age. So good for them. The Qualtrics piece was interesting too: their AI copilot.
Karen Lynch: What I like about this is that this Qualtrics assist for CX tools is about helping organizations act on customer feedback quickly. When you click on this one, it says there's no point in asking for feedback if you're not going to do anything about it. And I smile, because at Greenbook we do ask for feedback after every event, right? And we then go through painstaking measures to try to address the negative feedback. We use it: oh, what can we do about this? How do we solve this? It's a slower process, because it's usually feedback around our events, but if we ask for feedback, we absolutely have to use it. So that really resonated with me. Anyway, hats off to them for continuing to push innovation in their ecosystem.
Lenny Murphy: It's good. Yeah. I mean, they're integrating CX with CRM, which is what it always should have been, and they continue to do that. Speaking of AI, our friends at Quilt launched a full ad testing and creative collaboration platform. Quilt is just an interesting company. If you haven't listened to my interview with the CEO, you really should. Pay attention to Quilt.
Karen Lynch: They're doing, yes, ad testing, et cetera, highly accurate according to their stats, but what I read in there was about cultural reception. They've done something, which I haven't dug into because I was just reading about it here, to show how they're continually training it on cultural data. They're making sure it's culturally relevant to today, whatever is going on today in whatever region. So I'm like, that's interesting: it's predictive of real-world cultural reaction. It seems pretty cool.
Lenny Murphy: Their approach was to take anthropology and make it scalable with kind of a digital ethnography model, and it evolved from there. That was what they were trying to do. Let's look at these tech developments, because I want to make sure we get to the recommended reading; some of these deserve it.
Karen Lynch: And some of them are related. Go ahead and start with the ICO one and talk about that, because I didn't really dig into it too much.
Lenny Murphy: Yeah, the UK's ICO, their data protection regulator, put out guidelines on smart tech data collection and smartphones. So it's a data privacy and usage perspective from the UK. Increasingly, data synthesis and behavioral data are feeds that AI does particularly well with, so it was important to put that data privacy framework together around what utilizing them looks like. If you're doing work in the UK and want to leverage anything besides attitudinal data, you should probably pay attention to this. Yeah.
Karen Lynch: Well, it was interesting, because as a wearable wearer, I do sometimes think, do I really want to tag this? It asks you to tag things as they come up: there's a reason why you're sleep-deprived, you were just on a flight. Do I really want to tell it? You have this weird interaction with it where you're like, I don't know that I want to give it more data than it's already getting. So the consumer is thinking about these things, even if they don't really know how to reconcile them. They know that giving it more data means it will be more useful, and yet something gives them pause. So I like that we're doing some work in that area.
Lenny Murphy: Yep, absolutely. This study on the future of work with AI agents. From here on, we're going to cover two extremes, guys: whoa, look at all this awesome stuff AI can do, and then a bunch of stuff saying, whoa, wait a minute.
Karen Lynch: So this is a Stanford study that made the rounds this week; many people who track these sorts of things probably found it. Stanford's new SALT Lab audited how AI agents were automating or augmenting tasks, and it started to explain what some of these use cases are like and what worker preferences are.
Lenny Murphy: And so it's the first of a few interesting studies that hit this week. I'd say the takeaway from it was that the stuff we think AI should be doing probably isn't right. So it is definitely worth taking a look at. It goes back to our original conversation about the human elements; we're starting to get a handle on that. Yes, that and the next one. Well, let's stay with this one first, because I want to get to that next one.
Karen Lynch: But one of the things in this study was AI taking over tasks, and it does mention that that frees our brains up for storytelling, collaboration, and decision-making support. It's all the things we've been saying: using AI mindfully frees us up for more critical thinking, which points to not using AI for critical thinking. So now, talk about it.
Lenny Murphy: Because it makes us dumb. And now that's been validated. So another study made the rounds this week; I forget which university it came out of, but the...
Karen Lynch: I'm pretty sure it was also Stanford, but I might have that wrong. You go, talk about it.
Lenny Murphy: It leads to cognitive decline. They hooked folks up to fMRI and EEG and, blah, blah, blah: reduced brain connectivity, reduced writing ability, and also memory. MIT. It was MIT, sorry.
Karen Lynch: MIT. It was MIT. Silly me. People from universities would be like, Karen, don't do that shit. OK. Sorry.
Lenny Murphy: Don't do that again. Yeah. MIT. Very interesting, and I will be the first to admit I was really beginning to rely on AI a little too much from a productivity standpoint, particularly writing. I was starting to realize I probably shouldn't do that, and reading this, I'm like, no, I really shouldn't do that. The editing, that's one thing, but putting the pieces together, the creative process, et cetera, no. Because according to this study, it literally makes us dumber. Like, literally. So there's a good post about this.
Karen Lynch: If any of you follow Ethan Malik, if you know who he is, you can find him on LinkedIn. He put, um, he put a post up today. He's a Wharton school professor and he put a post up today that didn't make it to this cause I just, uh, checked LinkedIn this morning about this, and he's saying that it's being misinterpreted as AI hurts your brain. And basically what he's kind of talking about is that the two things are working in tandem, is that you don't want your students to rely on it for one use case, but you want your students to rely on it for another. So it shouldn't do their work for them, but it should be a study assist.
Lenny Murphy: So outsourcing a piece is fine; I think the problem is the part where you're just having it do it all.
Karen Lynch: Doing all your thinking for you. Make sure you're continuing to say: it is here to support you, not to do your thinking. So when you're looking at findings, maybe come up with your own hypotheses first and do the thinking first, so that you stay sharp. Maybe outline: this is what I want to write about, now you can give me some draft text. Do the heavy thinking first; then it can support you, and you will be smarter as a result. But don't use it blindly to do the work for you. We have to be really deliberate about how we use this technology.
Lenny Murphy: Absolutely, 100% agreed. I find myself now in a far more iterative and kind of collaborative process, I guess. But here's the interesting question, which we keep getting to, that we have to get our hands around: what is the low-value work that we could outsource? I'll give you an example. This week, I needed to analyze some stuff. I thought through how I was going to do it, then utilized AI; I would have gotten to the same result hours and hours later doing it manually. There wasn't much value in the process; the output was valuable, the process was not. So making the decision to utilize AI for that felt good. I had to direct it: to your point, here's the outcome I want, here's how I need you to go about it, and go through that. And it wasn't cheating, so to speak. But that's where we are, I think: figuring out where the value is from a cognitive standpoint.
Karen Lynch: And I think when we talk about things like upskilling, it's not just saying, hey, we have this tool available for you to use across our systems at the enterprise level. It's teaching things like this: okay, how can you best use it? Really helping your staff understand the best use cases instead of just saying, here, have at it. And being really mindful that we're helping future generations entering the workforce know what the expectations are and what appropriate use cases are, and really guiding them in an organization so that their brains aren't mush. Yeah, absolutely.
Lenny Murphy: So let's get a couple. I think we can get through them pretty quickly. Yeah, probably, probably.
Karen Lynch: Yeah, let's go fast because this article was really interesting, this article, the one I'm pointing to in our brief next about how, you know, the New York Times on AI might take your job. Here are 20, what, 22 new ones it could give you. I really wanted to read this article thoroughly. So point in case I went to AI with it and I said, summarize this for me before I go in. And, and, you know, what, what is useful information to you? For example, for an insights professional. And one of the things it's like, trust jobs are popping up. So you read all this, right? So companies need to double check AI's work, explain it to others, and take responsibility for what's going wrong.
Lenny Murphy: I'm like, yes, this article was written based on everything else we've been talking about. Yes, yes. And it really is interesting, the idea of trust, wisdom, human cognitive capability, right? These oversight roles are more than a QA check; that's a piece of it, but it's really a managerial role to a great extent, across a variety of factors. It is interesting. We touched on this next one: enterprise AI is coming, but has it learned all the wrong lessons, from our friend Rick Bruner, about marketing effectiveness. It's about bad data in, bad outcomes out. For those who don't know, Rick Bruner leads a group called Research Wonks. Anyway, a big brain in that world from a data standpoint.
Karen Lynch: And there was a concept in here that's basically: you're scaling up insights, but what if you're scaling up bad insights? Now you've got bad insights at scale. Ridiculous, right?
Lenny Murphy: And one piece he kind of hints at here too: okay, yeah, we can test new creative like that now. Sure, we can test 1,000 ads in a few minutes because we're creating them. But if the traffic underneath is bad, we can't trust it. It's give and take. This next piece you found was similar, right?
Karen Lynch: Yes. Are the AI models cannibalizing their own creations? Which I don't, again, I didn't have time to get into this one. Same thing.
Lenny Murphy: Recursive models, right? This has been out there for a while. You can't train a model off of AI-generated data; it truly degenerates. I mean, hallucinations, craziness. So if we have all this AI-generated bot traffic online, the models that are just sucking in everything online are now ingesting inaccurate data that the models themselves can't correct for. One of the best examples I saw was doing this recursively with images: with each iteration of refeeding, the image got more and more distorted, until you couldn't even recognize it was supposed to be a human. So first-party, human-centric, high-quality data is necessary for these things to work correctly. Buyer beware.
Karen Lynch: So this next one that you have here, this is where I'm like, there were some connections to what was happening in Europe, and I don't remember which presentation it was, although I can almost picture the slides. So I apologize in advance. It might've been Tino at the Lufthansa Innovation Hub, but don't hold me to that. And sorry, Tino, if I'm misattributing it to you; whoever the speaker was, if you type in the comments when you hear this episode, let us know it was you. But the idea is that we really do need governance over our AI; we need AI strategy in our workplaces. So this one: AI use surges at work, yet worries over AI's impact grow. You should trust your team, even though, remember when Lenny and I were talking a year ago, you said people were slower to adopt? Now everyone's adopted. We get it. It's happening, especially in our ecosystem. But despite rapid adoption, this study says only 22% of organizations have a clear AI strategy, and just 30% have formal policies. So it's inconsistent. It's causing stress, anxiety, and uncertainty. You need to make sure your team has strong, clear AI goals. So have an AI strategy in place for your organization. I'm sure AI can help you develop an AI strategy if you're carefully trying to do that work, but maybe outline your thoughts first and then ask AI to help you out. I just thought that was a really interesting connection to the talk we saw on stage.
Lenny Murphy: Yeah. So you guys see the dots here, right? We said there's something in the zeitgeist. Yes, the AI train left the station long ago and is hurtling along, but now we're realizing: oh, wait, how do we steer it? What does this button do? What's the right way? So it's interesting as we grapple with this. This last one I thought was really interesting. It's tech guys, developers, et cetera. They created a synthetic sample for their specific use case without even realizing that's what they were doing, per se, and backed into the same conclusions as market research: it's about 60%. So it was good enough for some things, but not good enough for others. The headline here is, I owned 2,000 Hacker News users to predict viral posts.
Karen Lynch: If that's not a clickbait title, I don't really know what is.
Lenny Murphy: But yeah, 60% accuracy in predicting which headlines would go viral.
Karen Lynch: So that's interesting.
Lenny Murphy: Right. So very interesting from a tech perspective, backing into the same thing that we do and recognizing, well, 60% is good for some stuff. It's not going to be good for others.
Karen Lynch: Yeah.
Lenny Murphy: Interesting times. Um, I guess that's it. Okay.
Karen Lynch: That's it.
Lenny Murphy: We're done.
Karen Lynch: No, it's all been tied together with a bow. Right. So yes, interesting times, and really great, smart thinking happening across the industry about all of this, which is the heartening thing for me. Also, we're at a point where the things we're reading are different from just new features, new features, new features. I think it took a long time for people to catch up, and now people have caught up. So as long as we stay in this space for a while, our thinking is evolving at the same pace, at least for a little bit before our next takeoff. I think we're at a really interesting time, because I do believe the industry's brains have caught up with where we are. Now we need the tools to actually meet us, because we're thinking, we want more, we want more. That would be my takeaway on all of this.
Lenny Murphy: I think that's a wonderful summary, right? We were all a little taken by surprise the last few years by the pace of change, and now, I think, very thoughtful brakes are being applied. Not that we're disengaging from the train, we can't do that, but: hey, hold on, let's make sure we don't wind up like the humans in WALL-E. That's probably not the outcome that we want. So let's think through this a little bit. Although a floating chair is still pretty cool. I wouldn't object to that.
Karen Lynch: All right, I won't argue with a floating chair. As long as it can be floating over water with a cocktail nearby, that's my wish. Yeah.
Lenny Murphy: Yes. Unless it can exist.
Karen Lynch: It's going to serve me well, let's be clear.
Lenny Murphy: Happy Solstice. Enjoy the longest day of the year.
Karen Lynch: Right?
Lenny Murphy: Yeah. It's a big birthday weekend in this family.
Karen Lynch: We have Tim Lynch's birthday, and on Monday we have my daughter's birthday. So this is what I typically refer to as birthday weekend. And it might not rain in New England tomorrow; I barely want to say that out loud. But this might be the first sunny Saturday in three weeks, I mean three months.
Lenny Murphy: Us too, hence why I'll be doing yard work all weekend. Well, happy birthday to Tim and to your daughter.
Karen Lynch: Yes, thank you. Yeah.
Lenny Murphy: All right. Everybody have a wonderful, wonderful weekend. We'll talk to you next week.
Karen Lynch: We'll talk to y'all next week.
Lenny Murphy: Bye-bye everyone.
Lenny Murphy: Bye.
Israeli Food AI Platform Tastewise Nets $50M Series
Ackwest Expands into Africa with Acquisition of Safiyo
Logit Group Launches Anti-Fraud Tool Calibr8
OfficeReports Adds Drag & Drop to Excel Reporting
CondUX | Enable agile & reliable insights
Qualtrics targets ‘action gap’ with AI copilot
CEO Series with Quilt CEO Anurag Banerjee
ICO Issues Guidelines on Smart Tech Data Collection
83.3% of ChatGPT users couldn’t quote from essays they wrote just minutes earlier
A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.
Enterprise AI is coming, and it's about to learn all the wrong lessons about marketing effectiveness
Are AI models cannibalising their own creations?