Contextual Surveys, AI and the Human Touch in UX Research with Ryan Glasgow

Aurelius Podcast – Episode 64 highlights with Ryan Glasgow:

  • The story of Sprig and the gap Ryan identified in user research and market research tools
  • Problems and challenges with typical surveys for product research
  • Effectiveness and application of in-context survey research
  • AI in UX research and the true value a human brings as a researcher
  • Concerns and tips for using AI in customer research analysis
  • A founder's perspective on customer research and the power of in-context research

In this episode, Ryan Glasgow, founder of Sprig, joined me to discuss a range of topics. Ryan has been designing and building products for a long time. He’s held roles in design and product management, where much of his success has been driven by smart, insightful customer research. When he eventually started Sprig, it was born out of a need for better in-context research; he found that other tools weren’t quite cutting it for him. Now, Sprig is helping customers do exactly that.

Ryan and I talked about his background and the story of starting Sprig. We also spent a lot of time discussing the role of AI in UX research, including some tools available today (through both Sprig and Aurelius) as well as what we think the future holds. Naturally, we talked about how the role of researchers may evolve given the rise of AI and we touched on concerns and considerations for the application of AI in customer research.

Links from this episode:

This podcast is brought to you by Aurelius, the powerful research and insights tool. Collect, analyze, search and share all your research in one place.

Ryan Glasgow podcast on Contextual Surveys, AI and the Human Touch in UX Research

Episode Transcript

(this transcript was automatically created using our very own transcription feature in Aurelius and has been minimally edited, please excuse any typos or weirdness 😀 )

Hey Ryan, how’s it going? Hey. Excited to be on the show. Yeah, yeah. I appreciate you taking the time to jump on.

You’re a busy guy, Sprig’s a busy place, and you were nice enough to take time out of your day and come on and chat with me on our podcast. So I appreciate that. Yeah, excited to be here and talk research. Nice. Well, we are going to do what we typically do on the show.

Before we jump into any meaty topics, I like to have people introduce themselves, maybe talk a little bit about your background and kind of like your perspective in case folks haven’t heard of you or followed any of your work up to this point. Yeah, happy to. My background has always been in product management. I joined four different companies pre-product-market fit, helped them find product-market fit and go on to be successfully acquired. And then I joined Weebly post-product-market fit and really helped them scale to around 400 people, through the acquisition by Square. And I found a common theme around these product teams: researchers, product managers, designers all having all these questions for their customers, those why questions that emerge from looking at our analytics data or revenue data and seeing drop-offs and trends and patterns and behaviors that we just really want to dig into and ask why about.

And I saw that it was just incredibly broken using these long-winded email surveys that were so disconnected and detached from the product experience. How helpful is it to send out a 50-question email survey months after someone used a product or tried a new feature? And so I really saw the power of in-product surveys. When I was at Weebly, we built some homegrown versions internally, and it just had a huge impact on our direction, our decision making, our understanding of the customer. And when I left Weebly, I took some time off and wanted to really solve that problem for all the other high-growth and at-scale tech companies out there.

And so I was very intentional about working with companies that were post-product-market fit, really on their way to doing great things in the world. Some of those early companies include Coinbase and Square and Robinhood and Dropbox, who all worked with us before we publicly announced. And since then we’ve been off to the races, bringing on so many of the best-in-class research teams from companies like Notion, Cash App, Figma, PayPal, just to name a few, and really helping them scale their understanding of their user experience through in-product surveys. And earlier this year we launched session replay as well. So that’s two different ways to collect data at scale.

We have AI that then analyzes all the data and helps these teams make sense of what’s really happening. Yeah, nice. There’s a couple of things that you mentioned in there that really strike me. I appreciate your, and Sprig’s, sort of intense focus.

You don’t just say, we’re a customer research tool and we’re doing all these things to help you capture feedback or in-product analytics and surveys and all that stuff; you’re super specific in the focus of, we want to do this for these people really well. And the other thing that really struck me in your introduction: my background is in UX research and product strategy stuff, and becoming a founder was different. You start learning a lot, and you realize some of the most successful founders are the ones who did what you just described, which is to say, this is something I have a problem with, so I want to chase that down and fix it. And that’s exactly what it sounds like happened with the inception of Sprig. Yes, you’re absolutely right.

Just hyper-focused on working with these companies that are quickly growing, and specifically around giving them that contextual feedback about very specific parts of their product experience. It’s not NPS. We offer NPS, but it’s not really what people use us for. They like to really get in there and ask after a trade or a deposit or an onboarding experience, really get in the moment to deeply understand what’s working and not working about those specific flows for the user.

I mean, the whole mention of NPS is interesting, especially since most of the people listening to this are UX researchers. There are very strong opinions on it. Some people are like, oh yeah, we do NPS. For some people it’s like garlic to a vampire. But I’m not here to open that debate necessarily.

But it is interesting to hear you say that, because even there, it’s like, look, we offer it. It is a tool. You can use it if you choose to or not. But I really appreciate the laser focus that you have on this. To me, those are two characteristics of successful products and successful companies.

So that’s really kind of cool to hear. I guess one of the things that I’m curious about, as you were talking through that, is if you could share more detail about sort of the old way, right, where there were just surveys getting sent out, as opposed to, as you were describing it, in-product contextual surveys. I mean, for folks that are UX researchers, they might understand that, but we have lots of people who listen to this: product managers, designers. What’s the difference between those two?

And maybe why one or the other. Yeah. If you think about going back to how surveys were always collected and conducted with people, whether it’s consumers, whether it’s businesses, it was typically a list of questions, ten to 20 questions. And that’s really where companies like Qualtrics and SurveyMonkey and Typeform and Medallia got started: these longer, long-form surveys that were sent to us, maybe in the mail, maybe as an email survey, a link to click and take ten minutes of your day to fill out.

And when I was at Weebly, we hired this phenomenal research team and they brought in Qualtrics, which is the best-in-class long-form survey tool out there. And on the product team, we were looking to deeply understand some of the new features that we had launched in the past quarter. We started to kind of go around: I was adding questions about my features, designers were asking about their features, and other product managers were adding questions about their features. And it got to be 100 questions in the survey. We ended up having to offer a $150 gift card for people to get through this really long survey asking about all the different moments that people had had in the past one to five months with our product.

And I just saw how disconnected it was. You really want to ask for the feedback in the moment. It’s almost like when you’re watching someone live and you want to tap them on the shoulder and be like, hey, it looks like you got stuck there, help me understand what happened. And so I thought about first principles and what we could do to rethink how survey data is collected in a very contextual, targeted way. Really the first year, year and a half of the company was building an event-based architecture that served as the underlying architecture for the in-product survey platform.

And that had never been done before: to be able to take all the same techniques of analytics and say, let’s go specifically to this group of people who had just used this product, or used this product five times, and really ask them in that moment what’s working or not working about that feature, that flow, that journey as they’re going through it. And so you can think about, as you’re going through a product, you’re booking something, you’re completing a journey, you’re adding a product to your store. That’s really when you have the emotion; that’s when you have the question, the concern, the feeling, and we want to really meet users where they’re at as that sentiment is occurring. And so Sprig is just really about being hyper-targeted and really digging into understanding what users think in that moment, with in-product surveys that are all based on users’ actions or inactions.
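The event-based targeting Ryan describes can be pictured as a small rules engine that counts product events per user and fires a survey the moment a rule’s threshold is met. This is a hypothetical sketch under invented names (`SurveyRule`, `EventTracker`), not Sprig’s actual architecture:

```python
from dataclasses import dataclass, field

@dataclass
class SurveyRule:
    """Show a survey when a user has fired `event` at least `min_count` times."""
    survey_id: str
    event: str
    min_count: int = 1

@dataclass
class EventTracker:
    rules: list
    counts: dict = field(default_factory=dict)  # (user_id, event) -> count
    shown: set = field(default_factory=set)     # (user_id, survey_id) already asked

    def track(self, user_id: str, event: str) -> list:
        """Record an event and return survey ids to show this user right now."""
        key = (user_id, event)
        self.counts[key] = self.counts.get(key, 0) + 1
        triggered = []
        for rule in self.rules:
            if (rule.event == event
                    and self.counts[key] >= rule.min_count
                    and (user_id, rule.survey_id) not in self.shown):
                self.shown.add((user_id, rule.survey_id))  # ask each user at most once
                triggered.append(rule.survey_id)
        return triggered

# Ask about the trading flow only after a user's fifth completed trade.
tracker = EventTracker(rules=[SurveyRule("post_trade_csat", "trade_completed", min_count=5)])
for _ in range(4):
    assert tracker.track("u1", "trade_completed") == []
print(tracker.track("u1", "trade_completed"))  # the fifth trade triggers the survey
```

The `shown` set encodes the usual courtesy rule of asking a given user a given survey at most once, so the prompt lands exactly at the moment of the behavior and never again.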

Yeah, that’s really interesting to me, because what it makes me think about from a pure UX researcher standpoint is: if you’re going to do interviews, or any kind of feedback session or usability testing, you’re going to have to screen and recruit people. If you’re doing a good survey, even a long-form survey, typically you would have to do that too. But it sounds like in this case, it actually removes a lot of the need for that, because this is actual behavior. You’re able to say, this happened, so let’s trigger this question or set of questions to really learn about that more specifically, right at the point in time it’s happening.

Exactly. So we know with 100% confidence the person just went through that flow. And to your point, we can ask specifically about what they just did in the product and get that feedback in the moment. Yeah, yeah, that’s cool. I mean, I can see the benefit of that for sure.

This is a totally left-field question, but I get asked it a lot as a founder, so I just kind of want to ask you: why the name Sprig? Where did that come from? Great question. We started out at userleap.com, so some folks might know us as UserLeap, and we were about two years in.

We had raised around $60 million at the time, and I thought our $12 domain was a great start. But there are other companies that start with "user" out there, and we really wanted to build an iconic brand, to really own a word so that when people heard that word, they thought of us. I was so inspired by companies like Apple and Slack and Figma and Plaid. And I think Aurelius is a great example, even.

When people think of Aurelius, they think of your company. And that’s really what I wanted with Sprig. With UserLeap, there were some other companies that also had "user" in the name, and people didn’t always think of the brand that we were really looking to build and invest in. And so we hired a naming agency, Lexicon. They had come up with names like Febreze and Sonos and PowerBook, and we worked with them on a list of names. We wanted the .com as well, and had to be able to trademark it in the US, Europe, and Asia.

So it was definitely a whole ordeal to go through that, get the domain, and make sure we could get something we could trademark. And Sprig was the one that really checked all those boxes for us, so we’re really excited about that name, something we can now really invest in and build a brand around. Yeah, I mean, it’s an interesting progression of a company, too, to say, this is kind of where we started, and that was fine, and then we want to be more intentional about it. That’s why I ask: those are always sort of interesting journeys, how that happened. Yes.

And we absolutely had the same thing, where when we first started, people were like, Aurelius, that might be kind of hard to spell, I don’t know if that’s a good idea. And I always felt strongly the opposite. It’s obviously named after the Stoic philosopher Marcus Aurelius, but it’s a completely different thing. So once people, to your point, get that brand, it sticks.

Aurelius is never going to get out of their head, because it’s not user-something, it’s not research-this. Yeah. Our story of how that happened was actually a little more accidental, but that’s a story for another time; anybody who wants to hear it is welcome to reach out, and I’m happy to share it. But getting back to some things that you were talking about with this contextual in-product feedback and these surveys: you gather all of that, and you’ve got to figure out what you learned, because it’s one thing to be able to ask at this point in time.

It’s one thing to be very targeted, but ideally you’re getting responses from a number of people in order to then bring it all back together and say, this is what we learned. Right? You mentioned that you have AI in Sprig to do that. And this is a topic that’s really top of mind for a lot of people, for a lot of reasons.

Right? So AI in UX research is trickling in everywhere. Aurelius has it, Sprig has it, tons of other companies have it. Tell me a little bit about how Sprig decided to apply AI to help make sense of this. You know, when I was at Weebly, we were collecting all this really great data with our in-product survey, a homegrown solution that was hacked together.

We also had some really great survey data from our email surveys as well. And I saw our research team spending sometimes up to a week sifting through all the open-text data, all the survey data. Many of them are PhD-level researchers, really great at what they do, but they always told me they wanted to focus on the more strategic projects for the company: understanding new markets and company directions, helping inform the leadership team on customer challenges, really bigger topics where they felt they could have more of an impact. And they’d often say, hey, this is so critical, though, for us to look through all these open-ended responses, the thousands that we have in a spreadsheet, to really understand our users. That’s really where the magic happens with research, these open-ended questions.

And so it is extremely critical to analyze this data, but I often heard from them that it’s something they wished they could have offloaded to someone else. So when I started Sprig, the first person to join me at the company was our head of AI, and we actually started building out the text analysis at the same time as writing the first line of code for the product. So we’ve been in the field of AI since 2019, the year the company was founded. And I remember at the time, the other players in the space were doing word clouds for their text analysis. If I say price and you say price, it shows up as "price," a single word.

But if you say cost and I say price, those are separate, right? It’s not able to piece those two together. And I saw just how critical it was to be able to look at these responses even if users don’t have any overlapping words or phrases.

Can we group those into the same theme? And so we built all of our own tooling. We had a human-in-the-loop process of researchers training our models, again back in 2019, using Google’s open-source models. It wasn’t called AI at the time; it was called machine learning. Exactly.

Which technically it still is, right? But yeah, that’s a whole separate topic. Sorry, I didn’t mean to. And then earlier this year, we switched to the GPT-4 models for our text analysis.

And with the new advancements in AI, we’ve significantly accelerated our AI roadmap. So we started the text analysis in 2019, and we’ve significantly up-leveled the accuracy, the speed, the sophistication of the open-text analysis; we’re now considered the leader in the broader research category around open-text analysis. And then earlier this year, we launched ingesting the entire survey into AI: being able to get a summary of the data, to ask questions about the data. Correlations, strengths, opportunities are all being surfaced in real time as the data is being collected. A lot of this goes back to the work the researchers described, saying, hey, this is really critical, but I’d rather focus on something else, and seeing if we could really use AI to better empower those researchers to have that impact.
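The theme-grouping problem Ryan mentions, putting "price" and "cost" responses in one bucket despite zero word overlap, is typically solved by representing responses as vectors and clustering by similarity. Here is a toy, self-contained sketch: the hand-made vectors stand in for a real language-model embedding, and the greedy clustering is purely illustrative, not Sprig’s method:

```python
import math

# Toy "embeddings": in practice these come from a trained language model;
# they are hand-made here so the example runs with no dependencies.
EMBED = {
    "the price is too high":    [0.9, 0.1, 0.0],
    "it costs too much":        [0.8, 0.2, 0.1],  # no shared words with "price", same theme
    "love the onboarding flow": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def group_into_themes(responses, threshold=0.85):
    """Greedy clustering: a response joins the first theme it is similar enough to."""
    themes = []  # each theme is a list of responses
    for text in responses:
        for theme in themes:
            if cosine(EMBED[text], EMBED[theme[0]]) >= threshold:
                theme.append(text)
                break
        else:
            themes.append([text])
    return themes

print(group_into_themes(list(EMBED)))
```

The point of the example is only the contrast with word clouds: similarity is computed in vector space, so "price" and "cost" land in the same theme even with no overlapping words.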

And researchers often tell us they measure their work by the impact they can have on the business. And so our goal is: how can we drive more impact with the work that they’re doing by better supporting them with AI? Yeah, interesting. That was actually something I was going to ask you about, because again, you say AI and UX research, and there are strong opinions on both sides of that. I think there’s a lot of very well-founded, healthy skepticism about the application of AI in UX research in a couple of areas.

Right: accuracy gaps, compliance, privacy. I think all those things are super relevant, healthy skepticism to have. And then on the other end of it, I think there’s a lot of receptiveness to it. That’s what I was curious to ask you: as you rolled this stuff out, you’re serving a broad range of folks with a tool like Sprig, but especially UX researchers.

What’s been the reception to that? Because I think, like I said, it can kind of be hit or miss, depending on who it is and where they’re coming from. It really is. I think going back to the fundamentals of research, it’s about having empathy for the people using our products. And I think where AI could take a wrong direction is not really focusing our research on the people using our products.

There are some companies in the space right now doing synthetic users, users that don’t actually exist, asking models questions about a product or a screen or a workflow. But research is digging into the people. It’s not about understanding a model that’s been trained on the Internet; it’s about understanding a person and their emotions and feelings and thoughts in the moment of using a particular product or flow. And so where we’re really focused on applying AI is helping these researchers have more impact.

But we’re sticking to the fundamentals of research, of that empathy for the people using our products. Ultimately, our goal is helping companies better understand what users think of their products. That’s really our singular focus. And we see AI really helping these researchers get there so much faster, but also have a greater impact by taking on some of the lower-level work, like the text analysis, where it might get them 95% of the way there. They can then take it that last mile, to 100%, really make sure it’s perfect, but get that whole week back and think about those other higher-level strategic projects.

Yes. So I think that this is really key. I’m glad I asked you that question, and I almost feel like I was leading you, but I didn’t. You ended up in the place that I was hoping you would. And that’s very much our philosophy at Aurelius on AI for UX research.

So something that we launched called AI Assist helps you do just that. Now, obviously, in Aurelius it’s a lot more about longer-form qualitative stuff, like interviews and usability tests, but that is exactly our intention with it. I think for people out there who have this apprehension, again justifiably so, that AI is going to take my job, or that people are going to try to use it to replace UX research: I have strongly argued that that’s not the case, because the value of researchers goes so far beyond those things. I share the philosophy and sentiment that AI can help us get from point A to point B faster, where you finish that last 5%, as you put it, because that’s where the most value of your work is applied, for the reasons of impact you mentioned earlier.

Absolutely. And I think where we see AI broadly going is that, as humans, we move to more of an editor role instead of a role of creating or doing the core work. We’re really pulling together the final pieces; we’re curating these suggestions from AI.

We’re doing a final pass on the work that the AI has done for us, and that’s what enables us to have, and really increase, our impact. So many research teams we work with say, I’m only able to deliver on 20% of the team’s questions, or 50%, or 10%. But with AI, can we get a little bit further, to 70% or 80% or 100%, by being able to cover so much more ground with less time and fewer resources, and have that impact across the organization? And so until we’re at 100% coverage and every question and every product is perfectly researched, which I think is a very aspirational goal, we have such a big gap to close, and AI can really help us close it and scale the impact of research across these companies.

Yeah, and it’s really funny, because that was literally the next question I was going to ask you, and you kind of already started on it: broadly speaking, where does this go? But I will ask it pointedly: how is AI going to continue to impact the work we do as UX researchers and people trying to understand customer needs? Yeah, I think it just goes back to that leverage. Every researcher can provide that much more coverage, that much more impact, answer that many more questions, validate or invalidate all the ideas, test new designs, 100% of them, before they move on to engineering.

With every new release, can we deeply understand how that change is impacting the user experience, and measure the incremental impact on that user experience? And that’s something where we’re maybe less in the field of AI, but we’re seeing a really interesting emerging use case of Sprig: integrating Sprig session replay and in-product surveys into feature flagging and A/B testing. A lot of companies are saying, hey, step one, I want to understand what users think about my product, and I’ll ask very targeted surveys that are getting a 30% response rate, compared to an email survey that’s getting maybe a 2% response rate.

But I want to take it a step further: every time the team is rolling out an A/B test, a feature flag, a new change, a new code change deployed to our customers, let’s measure not only the business impact with the analytics, but also the user impact with session replay and in-product surveys. And so we’re seeing a lot of companies think about how user research can not only tackle these bigger strategic questions, but also measure the impact on users in a higher-fidelity way. Say you rolled out a redesign of a screen in a product.

Can you actually measure a usability score between the old and the new version, and quantify the usability with users as they’re actually using the new design, against users who are still on the old design? So that’s less in the field of AI, but I think we’re seeing the field of quantitative research in particular become so much more embedded and contextual. Bringing it back to AI, I think it could be really powerful to have AI help us understand where to look across all the changes that are happening, measure them with quantitative research, surface which changes we should be looking into, and let research teams spend our time on the hotspots most relevant to our work. Yeah, this ability to centralize focus I think is really useful to talk about, because I get asked this a lot as well. And one of the things that I’ve been telling our customers and other UX researchers is that AI is not going to help anybody ask the right questions. AI is not going to know.

At least, none of this is possible yet; I’ll caveat it with that. If somebody listens to this five years in the future, I have no idea where things will have gone. But as it stands now, AI is not going to have the context of your organization to know what questions to ask and why you’re asking them. It’s not necessarily going to know how to get you that last 5%, as you referred to it, to say, okay, we got from point A to point B faster, but what do we do about that?
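Ryan’s earlier point about quantifying usability between an old and a new design can be made concrete with a simple two-sample comparison of survey ratings. A minimal sketch, assuming 1-to-7 ease-of-use ratings and a normal approximation for the interval (reasonable at the sample sizes in-product surveys collect; this is not a description of Sprig’s analytics):

```python
import math
import statistics

def compare_usability(old_scores, new_scores):
    """Compare mean usability ratings (e.g. 1-7 ease-of-use) between two variants.

    Returns the mean difference (new minus old) and an approximate 95%
    confidence interval using a normal approximation.
    """
    diff = statistics.mean(new_scores) - statistics.mean(old_scores)
    se = math.sqrt(
        statistics.variance(old_scores) / len(old_scores)
        + statistics.variance(new_scores) / len(new_scores)
    )
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

old = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]  # ratings from users on the old design
new = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6]  # ratings from users on the redesign
diff, (lo, hi) = compare_usability(old, new)
print(f"redesign changes mean rating by {diff:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

If the interval excludes zero, the redesign moved the usability score; with the small-sample data above you would want a t-interval instead, but at A/B-test scale the normal approximation is the common shortcut.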

And I think the best researchers, the most valuable people doing this work, are the ones who really understand that. I would almost encourage folks to think about it as having the powers of scale of human manpower, without actually hiring it. Right? All the things you just described sounded awesome; if you’re a big organization with a ton of money and you can hire all these people to be looking at that stuff, that’s definitely what you should be doing.

But not everybody is. And so the technology that we sort of hand-wavily call AI will actually allow certain tools to help you do that kind of stuff at a scale that would otherwise have required human eyeballs looking at screens, and would still have taken longer. You know what I mean? Absolutely. I think it goes back to the impact: right now we’re all being asked to do more with less.

And so how do you do more with less? You have to leverage the available tools that are out there. You have to look at what the cutting edge technologies are. And I think that’s where AI could help us get there and really show the impact of the work that we’re able to do. And again, maybe each researcher can cover two squads instead of one squad and really continue to show that business impact by having more research coverage.

Yeah, research coverage is an interesting way to say that, because, again, you would have to hire somebody to have more coverage, traditionally speaking. But this may be a way that allows us, I think, either now or in the very near future to do that pretty confidently without having to just open up headcount, necessarily. And I don’t know if that’s a good or a bad thing, but if this work is getting done and we’re making better decisions, I mean, generally speaking, that sounds like something everybody wants to do. Absolutely. Yeah.

Cool. Something else that people often talk about, with the whole "is AI coming for my job" concern, is this idea you mentioned of strategic insights, of spending more of your time having a different kind of impact at that level. Could you talk a little bit more about what you mean when you refer to that? With the research teams we work with, there are the more day-to-day decisions that need to be validated or invalidated.

It could be a quick usability test on a mockup; it could be making sure your feature is meeting customer expectations. And I would say the more day-to-day research is what we’re focused on at Sprig. It’s usually a little higher frequency: code being shipped on a daily or weekly basis.

New designs are being created on a daily or weekly basis, and we’re really setting up researchers, product managers, designers to get quick feedback on the changes they’re making and the designs they’re working on. The area that we’re less focused on, and we talked about that focus at the beginning of the call, and this is probably more where you guys come in, is more around those larger strategic projects. Perhaps it’s launching an entirely new product line. A lot of companies right now are focused on tool consolidation: how can we be many products under one brand?

We’re seeing that, for example, in the sales-tech space right now, a lot of consolidation of different tools. And that’s a huge investment, potentially millions, tens of millions of dollars in R&D and go-to-market and marketing and branding. Or maybe it’s expanding to a new market; maybe a company wants to expand into Europe and open up an office there. We think about these larger, bigger business questions, maybe iterating on or updating an ICP.

When I was at Weebly, we were focused originally on small business owners, and we started to really focus on e-commerce small business owners and narrow our focus there. Research drove a lot of that strategic shift: where we should focus the business and what the solution is to really meet that ideal customer profile’s needs. So when we think about these larger questions a business might be facing, that’s where we see researchers really wanting to step in. It’s more exciting, larger, more complex questions, but also where they can have more impact on the business.

And that’s where other tools like Sprig can help them really hand off, or help democratize. I know that’s a bit of a loaded word; just like mentioning NPS, that’s another one with very strong reactions one way or the other, very polarizing. But for those that are interested in getting other people involved, or somewhat involved to an extent, there are tools like Sprig that can help them do that.

Yeah, so I’m super biased, but I love where you went with this, because really, my interpretation of what you’re saying is that user research, UX research, can be used as a risk mitigation technique for these businesses. And the thing is, that’s actually true today; AI will only allow us to accelerate that as a risk mitigation factor, because of what you just described. Research can uncover the fact that maybe we shouldn’t expand to Europe at all. Not just the ways in which to do it, but the bigger strategic question of maybe we shouldn’t do that at all, you know.

What you just described to me very much sounds like managing risk for a business, and UX research, the people who do this work today, can and will be able to do it at scale. Now, why does that matter? Tying this all back to impact: most of the people I know, and I say this as someone who worked in the field for a long time as a UX researcher and strategy-type person, always wanted to work on that stuff. I wasn’t as interested in the day-to-day.

Is the prototype right? Do we run a usability test for just this update to a feature, and stuff like that? I kind of wanted to work further upstream on these bigger strategic things, and this kind of technology enables us all to do more of that, which I have to believe is appealing to everybody. Curious about your thoughts on that? Yeah, I would definitely second the point you’re making.

I don’t know if I have a lot to add, because I think it was definitely spot on. Nice. Well, good. We could just wrap it up. I guess I nailed it.

No, but the reason I say that is to distill what you said, so I’m certainly not taking credit for it; that’s kind of my interpretation of it. And again, that goes back to what I think, and certainly know from my experience in the field and from the folks we talk to all the time, about the impact that they want to have. I don’t even know that I would call it democratization at that point. I would say it’s being able to do the other parts of the research, the parts that get in the way of you being strategic, more efficiently. It is risk mitigation: putting some of that on autopilot as a means for you to spend your time and energy on what humans are really good at.

Which is the more strategic work. Absolutely. And a lot of times when we think about products and services and different jobs to be done, we often look at what’s in it for the individual, what’s the emotion, what’s the feeling, what’s the outcome for them? And one thing that we’ve noticed with researchers bringing Sprig into their organizations, and this is what I saw at Weebly, is that research was often a folder of decks, or maybe it’s a repository in Aurelius, maybe it’s a hub of insights. What we like to focus on is helping elevate research, so that instead of Google Slides decks or PowerPoint presentations that maybe someone’s presented and that then get lost, I like to think that both Aurelius and Sprig are helping research teams have a seat at the table. And a lot of the researchers will come to us and say, hey, I want to have more visibility on the work that research is doing.

We have all this visibility into other data types. Maybe it’s behavioral data or revenue data, or a data science team is looking at behavioral metrics and causal insights. And where I think we all want to go in the research field is that we’re looking at behavioral data, we’re looking at revenue data, but we’re equally looking at sentiment research data of what the end user thinks. That’s something that we’ve been really focused on: how we can elevate research data alongside all the other data types like behavioral and revenue data, and how we can get the research data in front of leadership teams, executives, CEOs, in the format that they’re accustomed to.

And so that might be a Looker dashboard or a Tableau dashboard, and really help them understand that they can systematically understand the behavioral data, what the user is doing, the revenue data, but alongside it also look at the sentiment data, and together really get that complete picture. And by having that research data alongside the behavioral and revenue data, the leadership team, the executive team, the CEO can start to really see the impact and power of quantitative research in a more analytical way that can help them understand exactly what, broadly, their users might think in the moment. And so a lot of our customers have wired up their critical user journeys of onboarding and making trades and depositing, taking these longitudinal and product surveys, these scores, these themes from the AI, and then connecting it with their data warehouses, with their business intelligence dashboards, and building out these executive dashboards for leadership teams, who really understand that this is something they can consume just like all the other data sets that they have. And imagine what the value really is, too: you’re able to centralize it and make it more accessible, more visible, so more people can consume and really understand it. And it’s not a deck that gets lost, because I think that’s what we want to avoid.

And I think that’s where we don’t really get the impact of the work that research teams are doing. So it’s really funny that you bring up this focus on, like, a deck or a report, because there’s an exact quote from an Aurelius customer once who said, and hopefully I get this right, that what Aurelius does for them is it releases insights from the prison of a slide deck. And that was so profound to hear, because our focus has always been, and it sounds very similar to what you’re trying to do with Sprig, we want to help researchers do the jobs that they do today faster and easier, so that they have more time, focus, and energy to have that impact upstream and spend the time helping executives and major business leaders and decision makers use that to influence their decisions, because those people at that level want to do that. Researchers want to have that impact.

It’s a win for everybody. And again, I think bringing it back to this topic of AI and UX research, I think that we ought to embrace that as the catalyst for this, rather than thinking of it as taking something away from our job, but rather allowing us to focus on the things that have the greatest impact and that are the biggest win for everybody involved. Absolutely. And again, going back to impact, I think that’s ultimately how we want all of our roles, whether it’s research design, product management, engineering, what’s the impact we can have on the business, on the product, on our customers, on the world. And so that’s how we measure success.

How we see a lot of the best in class research teams measure success is: what is our impact on the product experience, and can we inform decisions that are being made? And maybe not democratizing research, but democratizing the data, so that anyone can consume it and have it really accessible and available, is something that I think shouldn’t be polarizing, because we do want everybody across the organization to be able to view and consume and really deeply understand the data. Whether you want everyone running research, maybe we’ll save that for another day. But I think getting that data in front of as many people as possible is really about that accessibility and impact of the data.

Right. I share that sentiment. I think that’s something everybody can probably agree on, that we want everyone to benefit from the value of research. I love it. On that note, I’m realizing we’re coming up to the end of our time together, and I could ask you dozens more questions on this topic of AI in UX research.

But I’ve got to be respectful of your time. So how we typically wrap things up on the show is: let’s say I got hit on the head, temporary amnesia, and somebody came to you and said, Ryan, I heard you did a podcast. What was that all about? How would you answer that? How would you summarize what we talked about?

I think where any role is going, but research included, is doing more with less, thinking about our roles evolving, where we’re shifting from a creator to an editor and really embracing that editorial role that AI can allow us to step into. And I think being okay with that, being excited about that, embracing that and seeing it as an opportunity to have the biggest possible impact on the customers that we serve and have the deepest understanding and empathy for those customers. Got it. There’s your summary. Put it on the postcard.

So this is awesome. I really appreciate you taking the time to chat with me about this. It’s a topic that I am quite sure the conversation will continue on. Is there anything else you want to share with folks that we didn’t get a chance to talk about today?

We’re always looking for teams to work with, and so we work with many of the fastest growing teams and companies, research teams like Notion, Square, PayPal, Figma, and would love to get to know more research teams out there. And so we have a special offer for Aurelius listeners at sprig.com/aurelius, so sign up. We’d love to share more about how other research teams are measuring their user experience with longitudinal metrics, product surveys, and session replay to deeply understand exactly how their users feel in those moments, have that deep empathy for specific user moments and journeys, and look forward to meeting you there. Yeah, right on.

And for folks listening to this, obviously you can go right to that URL. We’re going to have links to that in the show notes where we post this on our blog as well. For folks who want to reach out to you, ask you questions, continue the conversation, how might they get in touch with you to do that? On Twitter, it’s Glasgow, just like the city, and both my Twitter and LinkedIn DMs are open.

We’d love to hear from you. Right on. We’re going to have links to that too, where you can find Ryan, ask him questions, fight with him about democratization if you want to. No, I’m not inviting that. But yeah, we’re going to have links to all that stuff.

Ryan, really appreciate you taking the time to chat with us, and we look forward to continuing this conversation elsewhere as well. Zach, this has been great. Thanks for having me. Awesome. All right, everybody, we will see you next time.