The Current and Future Role of AI in UX Research with Cheryl Platz

Aurelius Podcast – Episode 60 highlights with Cheryl Platz:

  • What AI and machine learning are, and how they will impact UX Research
  • What AI is currently good at and how it really works
  • Surprising ways you’re probably already using AI for UX Research today
  • Ethical considerations in the application of AI for UX Research
  • The impact of AI on the current UX job market
  • The future of UX Research and AI

Hey there! Our guest this time is Cheryl Platz, a person of many talents: UX researcher, voice UI designer, author, teacher, and actor too!

We discuss the intersection of artificial intelligence and UX research. As I’m sure you’ve heard by now, AI is a thing. There’s been a lot of conversation in our community about its potential impact, practical applications and more.

Our chat covers the whole spectrum of topics, from demystifying what AI actually is, how it works and where it can accelerate what UX researchers do today. Of course we had to ask Cheryl about the ethical implications of using AI tools and whether or not AI will replace the critical role of a human in UX research, as well as what role humans play in the future of insights in general. 

Things are moving so fast with the development of AI tools that by the time this episode launches there may even be more to cover! That said, you’re gonna take away some nuggets from this conversation to consider as the landscape of UX Research changes.

Links from this episode:

This podcast is brought to you by Aurelius, the powerful research and insights tool. Collect, analyze, search and share all your research in one place.

Cheryl Platz – The Current and Future Role of AI in UX Research

Episode Transcript

(this transcript was automatically created using our very own transcription feature in Aurelius and has been minimally edited, please excuse any typos or weirdness 😀 )

Zack 

All right, I am here with Cheryl Platz. How’s it going?

Cheryl Platz

It’s going great. I’m so excited to talk to you today.

Zack 

I am very happy to have you on. I was excited that you were able to join us. You got a lot going on. But some of the stuff you were talking about recently I thought would be really great discussion topics as an episode of our podcast. And you and I have chatted in the past, and I’ve always enjoyed it, so I just thought everything about it would work.

Cheryl Platz

Yes, and I’m glad we’re finally getting a chance to come together. It has been very spicy out there in the UX world, and it got me out of my writing slumber. My medium blog is Spiced back up. So, yeah, it’s a good time to talk.

Zack 

Awesome. So before we actually jump into any really meaty topics, I usually like to ask for you to talk a little bit about your background. Introduce yourself for folks who maybe haven’t heard of you or heard of your work yet.

Cheryl Platz

Absolutely. So, to those I haven’t met: hi, I am Cheryl Platz. I am a many-sided woman. I’m a user experience designer. I am a teacher, author, actress, gamer, game designer, Pokémon trainer; the list goes on. I’m the author of Design Beyond Devices: Creating Multimodal, Cross-Device Experiences, which was published by Rosenfeld Media. And the way I describe it to folks who’ve never encountered that book before is that it’s the manual for folks who aspire to design the bridge of the starship Enterprise. So when you think about people trying to coordinate experiences across different modalities, you’re talking to the computer, you’re projecting experiences onto multiple screens, you’re using physical controls. It all looks very easy on TV. But from experience, having worked at the companies I’ve worked at, that’s very hard under the hood. My previous employers have included places like Microsoft, where I worked on everything from server technology to Cortana. I worked on Alexa in the early days. I designed the Alexa notification system. I also worked on the Echo Look, the short-lived Clueless-style closet device that had Alexa enabled. I’ve worked on a number of video games, and I’m currently director of user experience for a large video game company, although today I speak on my own. And I am also owner of Ideaplatz, a design education company through which I’ve traveled the world, sharing my skills as a voice user interface designer and new product designer, and giving talks about design, ethics, all kinds of things. I’m really passionate about engaging with the community and helping folks wrestle with the future of design, because I remember what it was like being locked in a literal closet at Amazon working on a new product, when there were only four of us who knew what the project was. And I kind of resolved then: I wanted to help other designers figure out how to work on new things like AI and hardware and all of those sorts of experiences, because it was a lot to deal with.

Zack 

That is definitely interesting, and maybe something to dive into another time: why you were in a literal closet. But in your illustrious introduction, you mentioned something that I think would be really useful for diving into this topic of AI, which is really what I was hoping to discuss with you. AI is very hot right now, so to speak. It’s being applied across many different industries and platforms. Now, as UX researchers, people in UX, the topic naturally starts to slide into: what does that mean for UX research? What does it mean for UX? And I think a really good place to start, honestly, is to just ask: when we talk about AI, what are we actually talking about?

Cheryl Platz

It’s a great question, and I can see the unease and the fear sometimes. It’s been a rough couple of months. I just want to acknowledge that ever since November 2022, some of us have been working with the concept of AI for a long time. But all of us have had a wild ride since last year. The speed at which this technology has accelerated has caught everybody a little off guard. But to the core question, what is AI? What is artificial intelligence? The public perceives it as a monolith. Artificial intelligence is an all knowing thing that can make decisions and is smart and discerning in reality, what we have today. Artificial intelligence is a blanket term that typically ends up referring to systems that are a series of services held together sometimes by duct tape, that are powered by what we call machine learning. In most cases, most cases not always. And machine learning is a phenomenon where we take typically large amounts of data and feed it to essentially let’s just use I’m probably going to use a lot of analogies and metaphors and such today, but a baby algorithm? So to speak. We feed a lot of data to a baby algorithm, and the baby algorithm looks at this large amount of data and starts to learn patterns. And we also teach that baby algorithm what to do about those patterns. On the TV show Silicon Valley, there was a hot dog, or not, I think app, or if I’m remembering correctly.

Zack 

“Is it a hot dog?” I think is what it was.

Cheryl Platz

And so you take the baby algorithm, you show it a lot of photos, and you tell it: this is a hot dog, this is not a hot dog, this is a hot dog, this is not a hot dog. And over time, it learns by watching you, and it’s able to take the next photo and say, that is a hot dog. Pretty sure. And at its core, that’s machine learning. And what’s interesting about it is, you show this algorithm, this model, we call it the baby algorithm, all that training data so that it can handle the next piece of data, which we do not know. It’s to give it the ability to handle an unexpected piece of stimuli. It’s just like raising a kid to go off into the real world. We don’t know what’s going to happen when they go out on their first date or they go off to their first job. But you teach them. You put them in school, you teach them stuff, and you hope it’s going to go well. But you don’t know, so you do your best. And the algorithm is usually guessing, because if it were an easy problem, if it were black or white, if it were binary, we would just write a regular algorithm to determine the answer. Usually when we use machine learning, it’s because the problem is squishy. And so the machine learning algorithm is guessing, and it has some sort of confidence that it was correct. But these models can get things wrong. And the likelihood that they get things wrong is controlled a lot by the quality of the data you showed them. When you’re raising a kid, things like what school you went to and what textbooks you had can influence what facts you know and how good you are at a particular subject. And the same thing is true for these artificial intelligence machine learning models. And that’s where we get into a lot of the controversy: where did the data come from?
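To make that concrete, here is a minimal sketch of the train-then-guess loop in the hot dog example, assuming scikit-learn and stand-in feature vectors in place of real photos; a real image classifier would be a neural network over pixels, but the shape of the loop is the same.

```python
# A minimal sketch of "hot dog or not": train a baby algorithm on
# labeled examples, then ask it to guess about a photo it has never
# seen, with a confidence score attached.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each photo has already been reduced to a small feature vector.
hot_dogs = rng.normal(loc=1.0, size=(100, 8))       # labeled "hot dog"
not_hot_dogs = rng.normal(loc=-1.0, size=(100, 8))  # labeled "not hot dog"

X = np.vstack([hot_dogs, not_hot_dogs])
y = np.array([1] * 100 + [0] * 100)  # 1 = hot dog, 0 = not

model = LogisticRegression().fit(X, y)  # "show it a lot of photos"

# An unexpected photo arrives. The model guesses, with a confidence.
new_photo = rng.normal(loc=0.8, size=(1, 8))
confidence = model.predict_proba(new_photo)[0, 1]
print(f"Hot dog? {'yes' if confidence > 0.5 else 'no'} "
      f"(confidence: {confidence:.0%})")
# Note what's missing: it never writes down *why*. It's just statistics
# over the training data, which is why data quality matters so much.
```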

Zack 

Yeah. I really love the analogy of a child learning and growing, because I think it is very similar, based on the understanding I have. In full disclosure, I am not an expert in this, for sure. But that being said, to kind of summarize a little bit of your answer to the question: it’s not Jarvis in the Iron Man suit, right? Like people imagine it might be. It is learning. And arguably, despite how impressive it is, it is still in its infancy at this point.

Cheryl Platz

Yes, and there’s different types of models, too. So, Jarvis, to your point, if we were to make something that seems like Jarvis, because we have Alexa and people be, wait, wait, hey, you worked on Alexa. Don’t tell me that this doesn’t fair. Fair. But having worked on Alexa, back to the duct tape analogy, it is a bucket of individual systems. There is a text to speech service that renders her voice. There is a service that handles music requests. There is a service that handles weather requests. There is a service that handles timers and reminders and things. And then there’s a layer of what we call natural language understanding. There’s two layers of language processing. One which just tries to turn your voice into our first guess at words, and then a second layer which is like, what did they mean by the words? So by the time your stuff is processed and there’s some protections to make sure all of your speech doesn’t go to the cloud, because it’s very expensive to send things to the cloud. But the wakeword, there’s local technology to keep your text from going to the cloud. If it hears the wake, they there’s a service that determines what words do we think they said? Another service that says, what did she mean when she said play Pirates of the Caribbean? Was it the music or was it the movie? And then another service that handles the like, she meant play the movie, so go do that. And that one that says, what did she mean? That one is like the secret sauce where there was a ton of training watching a bunch of people, like, giving it a bunch of examples. And that’s the one where we have the biggest example of that confidence, where it’s like, I’m 80% sure she wants the movie, 20% sure she wants the soundtrack, but 80% is pretty good, so I’m going to give her the movie. And in that example, we could give that algorithm, that AI piece context, to help make that decision. Like, if I’m on a Fire TV, way less likely I’m going to ask for the music. So that could influence why it’s making that decision. Yes. AI as a concept, as a monolith in its infancy, individual models, not in infancy, some of them are actually pretty sophisticated and powerful. The concept of one big monolith, just knowing everything is not true. And I’m surprised even inside companies, sometimes people sort of buy their own hype and they’re like and it’ll just know the thing I’m like, exactly. Who’s building the service that teaches it the weather? We have to give it the data somehow. So there’s that distinction. But we do have this incredible capability, now accelerated since November 2022, to give a baby algorithm a bunch of data and have it start spitting out art, which is hugely advanced from where we were like a year or two ago.

Zack 

Yeah, for sure. And I really appreciate the fact that you broke it down to say there’s more than one flavor of AI. AI is not just this one singular technology that you plug into whatever, and it takes over and does all these things. There are applications of this, just like there are different applications you maybe have in your job or in your life. Like you said, one that plays music, one that plays movies, one that you can write with, things like that. So there are all these different applications. One of the things that you said early on in this conversation was that it’s typically used for stuff that’s squishy, because otherwise you’d just write a program for that, and those are all over the place; we use them every single day. So really, what’s the distinction there? What do you mean when you say it’s used for stuff that’s squishy?

Cheryl Platz

Yes, a technical term; I’m sure all of my AI specialist friends are like, we hate that. When I was writing the AI chapter in my book, Design Beyond Devices, that was the chapter I spent the most time on, actually, because it was so important that I got what I said right, and I worked with friends to make sure I got it right. Very detailed, because it’s so easy to get some of these pieces wrong. And there is a lot of sensitivity in the community, too, about metaphor, because I think the Skynet thing has caused a lot of folks to sort of bounce off the concept of artificial intelligence without learning the details, which is harmful, I think. Harmful both for folks who want people to understand their work, and harmful for the public, where we really need them right now to start understanding a little bit more about the stuff under the hood so they can question companies. But what is squishy? Squishy is stuff we can’t predict. Squishy is stuff that is contextual, that needs to take into consideration a lot of factors. A typical example is predicting what your next destination is based on a lot of factors in your general behavior; maybe that’s something we could train a model to do. A lot of situations in healthcare are squishy. Does this X-ray contain cancer? Very squishy. There are a lot of judgment calls in there, and if it were easy, it’s a problem we would have already solved. But there’s this hypothesis that, hey, maybe there are factors we’re not perceiving. So what if we just took the baby algorithm and said, here are all the X-rays where we found cancer, and here are all the X-rays where we didn’t find this type of cancer? Not all cancers; you have to focus on a really specific one, lung cancer or something, or melanoma. Do you see anything? Then for the next picture: does it contain lung cancer? And see what it comes up with. And there have been experiments where folks have done this. Now, the challenge is, in these cases, and off the top of my head I don’t have the study in front of me, but there have been cases where they do this and they get unexpected consequences, where the model is making inferences based on conditions around the set of X-rays. It’s getting false positives because of other things that were happening in the X-rays. The results weren’t as accurate as they wanted them to be because of other factors, and that was hard for them to pull out. And this is one of the biggest problems with this squishiness: the way the models work right now, when you teach the baby model and you show it all the things, it doesn’t write down why it thinks something is true. If I show it the pictures of the hot dogs and not hot dogs, and then I show it a picture and ask, is this a hot dog? It says yes or no. Say it says no, and I think it looks like a hot dog. I ask, why don’t you think this looks like a hot dog? It’s got nothing. That’s it. That’s all. It doesn’t have a set of rules it wrote down. It’s just statistics. It’s probability; depending on the type of model it is, it’s comparing pixels or pixel relationships or something. There are researchers who are trying to get more sophisticated and get AI models to write down the rules that they’re coming up with. But that’s really complicated. Now we’re trying to get into something that resembles human reasoning: write down the reasons you think this is true. That’s really hard.
These squishy problems involve pulling in context. A lot of times AI is used to take a big set of data and find correlations when we don’t necessarily have causation. Like, you usually go to this place at 3:00 p.m. on Tuesdays. We don’t know why, but I’m going to give you the estimated time to your destination.

Zack 

Right, that happens to me all the time. I get in my car at a certain time, it recognizes a pattern: it’s going to take you this long to get to this destination, because I think that’s probably where you’re going.
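A toy version of that guess might look like the sketch below: count past trips, pick the most frequent destination for this day and hour, and report a confidence. The trip log is invented, and there is no “why” anywhere in it, which is exactly the point.

```python
# Correlation without causation: guess the likely destination for a
# given day and hour purely from counts over a log of past trips.
from collections import Counter

# Hypothetical trip log: (weekday, hour, destination)
trips = [
    ("Tue", 15, "gym"), ("Tue", 15, "gym"), ("Tue", 15, "dentist"),
    ("Wed", 9, "office"), ("Wed", 9, "office"),
]

def guess_destination(weekday: str, hour: int):
    matches = Counter(d for w, h, d in trips if (w, h) == (weekday, hour))
    if not matches:
        return None, 0.0
    dest, count = matches.most_common(1)[0]
    return dest, count / sum(matches.values())  # confidence = frequency

dest, conf = guess_destination("Tue", 15)
print(f"{conf:.0%} sure you're headed to the {dest}")  # "67% sure ... gym"
```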

Cheryl Platz

Yeah. And that’s why when you talk to some AI folks who are like, we’re so far off because the system hasn’t connected that to intent or reason. But that doesn’t mean the system can’t cause harm just because it doesn’t know why you’re not doing that. The way it interprets that cluster of data can still cause harm if they interpret it incorrectly. And it’s a cancer algorithm can still cause harm if it’s interpreting a cluster of data and it’s interpreting it based on to figure out whether or not you’re likely to commit a crime that can really cause harm.

Zack 

About that’s some pretty interesting scenes, right? Yeah, totally. Totally. Okay, so awesome background to just what it is that we’re even talking about this hand wavy thing of AI. Well, we work in UX. We work in UX research and there’s massive debate and I mean, like across the entire spectrum of it’s going to be our savior to it will never replace anything that we do and everything in between. I’m sure you’ve seen a lot of this, and I’m sure that’s what probably inspired you to talk and write a lot more about this recently. Where to begin, what really is. I guess maybe the right question is what impact have you seen AI having on UX research today? And then let’s talk about where we go from there.

Cheryl Platz

I mean, the biggest stuff I’ve seen has been that direct application of specific types of models for the raw processing of data. Back when I was at the Gates Foundation and doing large amounts of video-based research, I ended up going into tools that were able to take my video and auto-process all the text, because before that I was hand transcribing, because I’m a big one for full context and detail. And people were like, you’re literally hand transcribing? Wow. I know, but I get so much from it. But finding a tool that could support full natural language transcription, and not only that, but tie each word to a point in the timeline of the video so that I could scrub and clip, and then connect those words so that you could do searches on tag clouds and things like that, and run cluster analysis on the frequency of appearance of words in a study. Those things are maybe not the big AI you’re picturing when you look at what’s in the market today, but those are applications of machine learning in many cases. Now, what I did not see at that time was the recommendations kind of engine, like, hey, you need this tag. That stuff was still left up to me. And because I’ve been in a leadership role the last year or two, it’s possible that stuff’s out there and I’m not playing with it. That’s where I think the next evolution might be: like, hey, based on the way other people have tagged data like this, we think you probably have a tag around productivity and a tag around collaboration and a tag around crashes or something. But as a researcher, you can already probably feel the unease with that. You’re like, wow, how do I trust it? Is it right? Is it real? And that gets down to one of the biggest philosophical challenges around how we should use AI when it comes to aiding humans in doing their work. Microsoft’s AI principles were very much that you should leave agency in the hands of the human. In their perspective, and it may have changed, but while I was there anyway, you wouldn’t just tag the thing and say the tagging is done. You would say, these are our suggested tags, maybe you should review them, and make it easy to make changes. Because you’re acknowledging that the human is always going to be at the center of the work. And I haven’t had a chance to use Copilot yet, but I’m hoping that whatever they do with Copilot, that’s going to be their approach. And you saw that kind of thing with their smart advisor in PowerPoint. There was a little bit of that going on there, where it wasn’t like, I’m just going to redo your slides; it’s like, these slides seem like they might be helpful, I don’t know. So, and I’ve certainly encoded this in my responses already, I think tagging is a place where we spend a lot of time: analyzing clips, looking for the places of most interest. And so things like sentiment analysis, tagging, finding places of most interest, clipping, auto-clipping, and doing excerpt reels, those are places where potentially you could, over time, train one of these, and I’m going to keep using the analogy, baby models to get good at identifying where people’s emotions are heightened and then say, this seems like an interesting point in the video, so I’m going to make it easier to get to this point. Or understand that these clusters of words mean this tag, so I’m going to tag this segment so it’s easier for my researcher to go and find this later. Or, if I’ve clipped some stuff, get good at creating excerpt reels based on clips and things to help me with that.
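The suggested-tags pattern, with agency left in the researcher’s hands, might look something like this sketch. The tag taxonomy and keyword lists are hypothetical; the key design choice is that the tool only proposes, and the researcher decides.

```python
# Suggest tags for a research excerpt by keyword overlap. A real tool
# would likely use a trained model, but the human-in-the-loop shape is
# the same: propose, never auto-commit.
import re

SUGGESTED_TAG_KEYWORDS = {
    "productivity": {"workflow", "faster", "efficient", "time"},
    "collaboration": {"team", "share", "together", "handoff"},
    "crashes": {"crash", "froze", "error", "lost"},
}

def suggest_tags(excerpt: str) -> list:
    """Propose tags by keyword overlap. Suggestions only, never final."""
    words = set(re.findall(r"[a-z']+", excerpt.lower()))
    return [tag for tag, keywords in SUGGESTED_TAG_KEYWORDS.items()
            if words & keywords]

excerpt = "The app froze and I lost my work, which killed my workflow."
proposed = suggest_tags(excerpt)
print(f"Suggested tags (for researcher review): {proposed}")
# Output: ['productivity', 'crashes']; the researcher accepts, edits,
# or rejects each one. The tool never marks anything as final.
```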
Now the question there is: how do we train the baby model ethically? Because all of us have NDAs and participant agreements, and this model has to learn somehow. And I don’t believe those NDAs are like the Zoom terms thing that just went out, where it’s like, we have an irrevocable license to use your likeness in perpetuity to do whatever we want, including train our AI. We probably had pretty well-scoped NDAs: we’re going to use your research content to do the research we’re telling you about, and then we’re going to delete it. So in those cases, that data shouldn’t be used for training the models. And so to train these baby UX research models, it starts out like, oh yeah, it’s going to be super easy, barely an inconvenience. Great. But in reality, it’s an ethical problem, because whatever companies are building these models, they’re either going to have to generate a bunch of fake data, or they’re going to have to get their customers to agree to have their data scraped. Well, scraping is the wrong term, because it’s consensual, but it’s the term that’s coming up in the news. Their data used and analyzed, shared with this model and processed. And that’s really tough. So that’s the tricky part there, I think, for UX research when it comes to tools and things like that. The other thing I’ve seen: I know there was a big burst of energy around one or two tools that popped up where synthetic customers were generated. Yeah, I don’t know how much time I need to spend on that, because I don’t know how many folks were thinking of that as a viable output. And it cuts back to the whole personas-versus-user-profiles debate. I feel like at every company I’ve gone to, there’s been a religious debate about personas. I don’t know if you’ve encountered this.

Zack 

I absolutely have.

Cheryl Platz

Yeah, okay. At its core, personas themselves are not the problem. It’s the over-fictionalization of personas, right? Like, if a persona is based on real data and it’s conveying real data with a mental model that is easier for our partners to understand, great. But what usually happens is people don’t want to do the hard part, which is going and talking to people and correlating data, so they just create Anna, who is a soccer mom who wants to drive and loves Taylor Swift.

Zack 

Why did I know you were going to say Taylor Swift? For some reason, I just knew it was happening.

Cheryl Platz

But, I mean, it’s like, yeah, they pick things because they think it’s connected to public events or something they’ve heard. They’re like, that’s user research. I’m really connected to people, but that’s not actually what we do. Right? And so I would absolutely think that there’s probably a raft of personas right now that are absolutely swifties. But is it research? Are their customers swifties or are they just pulling from the ether? So these tools, what if you’re like, yeah, we’ll generate customers for you. It’s not a lot different than just listening to the news and picking some random stuff and plopping it down on a persona. And so, sure, if you literally have no one you can talk to that you can confirm is actually in your pool, I can’t tell you that that’s not better than zero, but there’s nothing to guarantee that that fictional story is actually aligned with the customers you’re going to use. And if you buy the Hype and you tell yourself those are your customers, you can cause actual harm if you don’t think about who you’re excluding or the things it’s not seeing. And that’s my concern with those sorts of tools. I talk a lot in my book about optopessimism. What’s the worst consequence of success and how do you deal with it, who do you exclude? I talked about the Pix framework for evaluating the impact of your work, and so it’s the type of mindset that gets us to like, well, I’m not going to talk to actual people. I’m going to look at these fictional frameworks of people. There’s so much out there where you could actually talk to people that there’s so many people in the world. There’s billions of them. It feels weird to generate fake ones.

Zack 

Yeah. Just because I can doesn’t mean I should.

Cheryl Platz

Exactly. Thank you, thank you, the Jurassic Park quote. I use that all the time. Ian Malcolm, it’s right there.

Zack 

Yeah. I love it. So the thing is, what I hear you saying is, it sounds like there’s good news and there’s bad news for UX researchers in the advent of AI. On one end of this, there is absolutely an ethical challenge that we need to address. One way or another, it’s going to get addressed, hopefully on the right side of things. But then on the other end of it, we’ve got some good news, which suggests there’s a lot of work that we do today that this can just help us do faster. One of the things that I like, and by the way very strongly agree with, and having not worked directly in AI like you did in past roles, I still have always shared this sentiment: this artificial intelligence, this natural language processing, this machine learning, this stuff really should take us to the edge of decision. You called it agency, or maybe Microsoft’s philosophy on this was that the agency should stay with the person. And I really agree with that. I don’t think that anybody should go into this expecting that AI is going to take their job. I’ve heard a lot of that, and it seems kind of crazy to me. But it’s worth talking about, because people believe this. I don’t think AI is taking anybody’s job, not if you’re doing your job well. So to your point, if you’re making up personas, for example, well, yeah, AI can do that. It can make up personas, right? But what it can’t do, and you talked a lot about this earlier, is bring the context that you have at your company, about your products and the people building them, and combine that with your research data. You do that. So really, what I see AI being able to do is help you get to those points faster, where you can then work with those people, helping them understand the crunching of that data in that context, because that’s where your real value is.

Cheryl Platz

Yeah. I love the phrase “edge of decision.” I’m going to borrow that; it’s really great. Free of charge. Thank you. The “are the bots coming for the jobs?” question is tough, because in many industries they are, and we have to face that head on. And this one is interesting, because can AI do exactly what UX researchers do? No. Does that mean that executives aren’t going to try to tell themselves that it can? Also no. So I think there is this challenge: how might we take advantage of the tools that are available to amplify our impact, to help us weather the next round of layoffs? Because I know UXR got hit harder than in past layoff rounds, possibly even proportionally harder than UX design got hit. In past layoff rounds, I used to look and think, UX research is usually pretty safe, because we always need to know about our customers. But this round of layoffs, that did not seem to be the case. And so it is a question of: our ratios are off again, we were making progress, and now we’re going to have to support larger orgs as researchers. So how do we use these tools to help us do that as humanely as possible? If that’s using tools that help us transcribe and tag more efficiently, if that’s using better data analysis tools or projection tools or things like that, I think that’s important. The more we can show this kind of value, the more resilient our UXR effort as a whole becomes, and I think that’s a lot of it. I think it’s absolutely possible that in cost-cutting environments, people are tempted, especially if you’re working on a live service. It’s one thing if you’re working on a new product; it’s hard for an executive to be like, an AI could tell me what I need to know. Like, you haven’t launched yet.

Zack 

Right?

Cheryl Platz

Once you have live data, it is absolutely possible for an executive to be like, I can find an AI which will run analysis of my live data and we will figure it out; I don’t need you to tell me about my customers. It is possible. And so the more we can get creative about using these tools to amplify our impact, either amplify the types of impact we have or amplify how we communicate our impact, the better. That’s what I liked about some of the tools I used in the past: they were trying to start changing the way we communicate user research and build on it. How do we codify insights and then snowball them so that they don’t just get discarded? I saw so many efforts to build research repositories at Microsoft come and go, and come and go, because it’s a really hard problem. It’s not even necessarily an AI problem. But the more you have insights at scale, the more a system could say: it looks like this work you’re doing relates to this work someone else did.

Zack 

Yeah, the whole thing. I’m reminded of a conversation I had recently where people were talking about the role of UX research, or really any role, if you think about how a team in a company grows. So Microsoft is huge; it’s very long past startup or a growth phase, right? But the fact still remains that when you’re hiring a role at any company, someone is outsourcing a part of their job. And that’s what’s happening with UX research. Everybody gets the value of understanding customers. So even if they’re outsourcing that part of the job to AI, there’s still somebody that has to do something with that work. And there is a role for UX researchers, as actual human beings, to play in that. And I would argue that’s still very necessary. So even if you do have an executive that says an AI can do that, it’s like, yes, but do you have time to work with the AI? The answer is probably no, because you’re outsourcing that part of your job, right? So what I’m suggesting is, I think for the future of UX researchers specifically, it’s really important to focus on that: how do we use this to augment what we do rather than resisting it? It is going to become part of our job whether we want it to or not. And I think it’s really important to take a look at how we use it to, in your words, amplify what we do.

Cheryl Platz

Yes, and leaning on the spirit of Sam Ladner’s mixed methods and things like that. You could have a data scientist who’s doing only that, but we could be a bit more holistic: take our look at the data, combine it with other things, and bring a much more business-connected and human-connected set of insights. So I think those things are and can be true. I think another interesting possibility for the industry, and I don’t know how this manifests, is that there’s so much potential insight about how humans work out there in the academic world and in the publicly published world, like Nielsen Norman and stuff. Think about if we go back to the tagging case: if we did have a system where my videos are processed and it’s tagging things, that system could also help bring other relevant research to me. Because that’s one thing I have observed in corporate environments: rarely do corporate folks have the time to go pull external knowledge to them. So everybody just ends up reinventing the wheel all the time. They’re like, whatever our customers tell us is real. That much is true, but am I going to go do a big paper search? Because I know when we worked on Windows Automotive, we published white papers about the best minimum sizes of text and touch targets that we validated in car studies and things. But did anybody else ever see those things? I don’t know, because there’s no good method for it, and usually people are stressed and don’t have time. But I could see a system like that where it’s not so much that the AI is telling us how to do the work, but the AI might be able to pull in extra context, a lot like PowerPoint giving us slide suggestions. And that really could amplify our work. It could shortcut us; it could get us to better recommendations. It could take what we think is a small signal from a study and turn it into a really big signal. Like, oh, this seems interesting, and it says: actually, hundreds of thousands of people have had this problem. This is just the first time you’ve seen it.
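That “pull relevant prior research to me” idea could be sketched as a plain text-similarity match against an insights repository, as below. The repository entries are invented, and a real tool would likely use semantic embeddings, but TF-IDF shows the shape of it.

```python
# Match a fresh observation against a repository of prior insights by
# text similarity, so the researcher sees possibly related findings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

repository = [
    "Touch targets under 9mm caused frequent mis-taps in car studies",
    "Users abandon setup flows longer than five screens",
    "Drivers prefer voice confirmation for destination changes",
]
observation = "Participants kept mis-tapping small buttons in the car"

vectors = TfidfVectorizer().fit_transform(repository + [observation])
scores = cosine_similarity(vectors[-1], vectors[:-1])[0]

best = scores.argmax()
print(f"Possibly related insight (score {scores[best]:.2f}): {repository[best]}")
# The tool surfaces the candidate; the researcher judges whether the
# connection is real before the small signal becomes a big one.
```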

Zack 

Yeah, I love that. Just to continue to add to that, I completely agree: when I say the edge of decision, a decision still has to be made. And I don’t believe, certainly not now and maybe not even in the near future, that these AI-type tools will make a decision for you. And it certainly isn’t one that you’re going to want to put a whole lot of confidence in. That’s the role you still play as a UX researcher, in my opinion. So yes, use these to help you get to the point where you can help somebody interpret it: okay, this happened 100,000 times. Is that good? Is it bad? What do we do about it? It’s the whole thing where there’s still a lot of conversation around whether quant or qual is better, and the answer is yes. Quant is awesome, and again, to use your language, it’s a signal; then you use qual to paint the broader picture and say, well, here’s why we’re seeing that, and then you talk about what you do about it. I think these tools can bring us to those conversations much faster. And that’s not a bad thing.

Cheryl Platz

Faster can be good. If it’s thoughtful faster, that’s great, because we all know how difficult it can be to operate in the environments we’re in, especially in the context of downsizing and locked budgets. And I know how hard it is; our research team does so much with so little. And it’s not like we don’t make the money, but we’re trying to stay responsible in case the markets do remain unstable, right? Being fast and smart is a great aspiration if we can figure out a way to use these technologies ethically and intelligently.

Zack 

Yeah. And in addition to that, “fast and smart” is, I think, simply but well said with regard to this, because maybe there are a lot of companies that can’t be fast and smart because of resources now, but this actually levels that playing field. So if you take that one step further, maybe everybody can start to be faster and smarter. Well, that means everything kind of gets a little bit better. And by the way, that means everybody will kind of need at least one UX researcher. There are a lot of places that have none today, right? And so even one person in that role, helping them be faster and smarter, because that person can be amplified to do what may have taken a team of five in the past; I actually see those as good things. It broadens the field, and it can apply to a lot more places that UX research doesn’t even touch today. So, I love this conversation, obviously, but we’ve got to be respectful of your time, and I see we’re kind of running out of it. One of the things that I like to do when we wrap up each episode is ask the person I’m talking with: if I were to develop temporary amnesia and forgot everything we talked about, and somebody came to you and said, Cheryl, what was that episode all about? How would you summarize it for folks listening?

Cheryl Platz

Today, we were talking about the power of artificial intelligence and its impact on the user research community. But we started at the very basics: what is artificial intelligence? What are some mental models you can use to understand the difference between the big, scary artificial intelligence and what’s actually going on under the hood? And once you have that understanding, we took that all the way to what the future might hold and how you might use these technologies to amplify the work of user research moving forward.

Zack 

Love it, love it. Before we jump off, I can imagine there are folks that are probably going to listen to this and want to continue the conversation, find out more about your work, and things like that. How can folks find you, reach out to you, and learn more about your books?

Cheryl Platz

Thank you for asking. I am on the scattered socials; it’s a difficult time out there. You can always find me at cherylplatz.com, and all of my stuff is collated there. My book, Design Beyond Devices: Creating Multimodal, Cross-Device Experiences, covers a lot of the topics we talked about today, from seeking additional context about your customers in a world where you’re doing AI work or cross-device work, to frameworks like optopessimism and Pix, and the chapter about AI and its core concepts. It’s available at all online booksellers and also at rosenfeldmedia.com, where you can get the digital copy for free if you order the print copy. And all those links are on my website, cherylplatz.com. Of course, you can also go to Ideaplatz.com, which is my design education company; there you can see a selection of the workshops and talks I offer. If you’d like me to speak at your conference or teach at your conference, I’m always happy to talk. Obviously talking is a fun thing for me, and I loved this conversation. As far as socials are concerned, I’m either Funny Godmother or The Cheryl Platz, depending on the platform. I picked Funny Godmother, and it turns out some platforms see “God” and just don’t allow usernames with that. And it was already a rebrand for me, so I didn’t want to rebrand again. So I’m like, fine, fine. On Instagram and Twitter and Bluesky and Mastodon and Twitch and YouTube, it’s Funny Godmother. On TikTok and a couple of others, it’s The Cheryl Platz. But yeah, you can probably find me on the platform you’re on.

Zack 

And we’re going to have a lot of those links in the show notes for anybody listening, go ahead and jump over to our blog where it’s going to be posted. We’ll have all the links for that stuff too. Cheryl, just have to say thank you again for taking the time, really enjoyed the conversation and I’m sure everybody else.

Cheryl Platz

Thank you so much, and I’m wishing all the best out there to everybody. I hope that your research contains lots of fun and really exciting surprises for you in the next few months.

Zack 

Awesome. All right everybody, we’ll see you next time. Bye.