Episode 57 highlights – Phil Hesketh podcast about UX Research & Consent:
- UX Research consent
- GDPR and UX Research
- How to gain proper consent from your user research participants
- What are the rights of potential UX research participants
- The ethics of informed consent
Links from this episode:
This podcast is brought to you by Aurelius, the powerful research and insights tool. Collect, analyze, search and share all your research in one place.
(this transcript was automatically created using our very own transcription feature in Aurelius and has been minimally edited, please excuse any typos or weirdness 😀 )
Zack Naylor: This is the Aurelius Podcast, episode 57, with Phil Hesketh. I’m Zack Naylor, Cofounder at Aurelius and your host for the podcast where we discuss all things UX research and product. In this episode, we have Phil Hesketh. Phil is the founder of Consent Kit, which is a UX research tool that helps teams gather and manage consent from their participants and do that at scale. Phil joined us to chat in great depth about gathering and managing consent from research participants, and doing that at scale. He talked about their platform, but also went into a ton of detail about gathering consent. We talked about the difference between consent and informed consent, why it matters, and how it can impact your research in a lot of ways. We also talked about staying legally compliant in doing UX research work, the implications of things like GDPR, and how to stay on the up and up to protect ourselves and our participants. This podcast is brought to you by Aurelius, the powerful research repository and insights platform. Aurelius is an all-in-one space for researchers to organize and analyze data, capture insights, and share outcomes with your team. Transcribe audio and video, visualize themes, capture findings, and have a report created for you automatically, which you can share with anyone in moments. Check us out at aureliuslab.com. That’s A-U-R-E-L-I-U-S-L-A-B dot com. Okay, let’s get to it. Hey, how’s it going?
Phil Hesketh: Good. Thanks. How you doing?
Zack Naylor: Awesome. I appreciate you jumping on and joining us for the show today would love for you to maybe introduce yourself, talk a little bit about what you do in your background. So for folks listening, if they don’t already know who you are, give them a sense of who we’re chatting with today.
Phil Hesketh: Yeah, sure. So my name’s Phil Hesketh. I’m the founder of Consent Kit; we are a platform to help research and design teams obtain and manage informed consent, and do that operationally at scale. My background predominantly is in design. I’ve been a user researcher and done more research in the last four or five years or so. Before that, for about ten years, I was in UX design, and I’m based here in sunny Manchester in the UK.
Zack Naylor: Sunny Manchester. I don’t know if I’ve ever heard many places in the UK described as sunny. Are you being cheeky, or is it honestly a little bit sunny there?
Phil Hesketh: It is actually sunny at the moment, which is unusual. So I’m feeling a bit more confident about the weather ahead.
Zack Naylor: And so that’s the reason I ask is because I’ve never lived in the UK, but I understand that the summers can be cloudy and gloomy. Is that right?
Phil Hesketh: Yeah, they definitely keep you on your toes. It’s like sunglasses and waterproofs at all times.
Zack Naylor: Good. Well, I’m glad to hear that you’re prepared anyway. Thanks again for jumping on to chat about this stuff. Particularly what you’re doing with Consent Kit, and how that pertains to research, I think, is a very useful topic, especially for the people who tend to listen to this show, which are mostly UX researchers, product researchers, even market researchers. Right. Let’s maybe zoom all the way out when we talk about consent and things like that with research. What does that actually even mean? So for folks who aren’t already thinking about that or maybe unfamiliar, what does that even mean?
Phil Hesketh: Yeah. So there’s actually a difference between just consent and informed consent, which I think is worth calling out. Consent is simply that you agree to the terms of some kind of engagement or relationship with someone, whereas informed consent is really a lot more detailed. It’s not just a simple question of, are you happy to take part? You need to know a bunch of stuff beforehand so that you can make a reasonable decision about whether or not you want to take part and whether you’re happy with what’s about to happen. So, for example, in the context of research, it’s things like: what is the project about? Why are you talking to them? What might they expect from the session? What kind of information are you going to record? What are you going to do with that information? Who else is going to see it? And then also, what are their rights under the various data protection laws which might be applicable to them? In Europe, for European citizens, that’s GDPR, for example. So what rights do they have, and then how can they actually exercise those rights if they want to?
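The elements Phil lists here could be thought of as a checklist that a consent form has to cover before it counts as informed. A minimal sketch in Python, with hypothetical field names (this is an illustration of the idea, not Consent Kit’s actual model):

```python
# Hypothetical sketch: the elements Phil lists, modelled as a checklist
# that a consent form must cover before it counts as "informed consent".
REQUIRED_ELEMENTS = {
    "project_purpose",       # what the project is about, why we're talking to them
    "session_expectations",  # what they can expect from the session
    "data_recorded",         # what information will be recorded
    "data_usage",            # what will be done with that information
    "data_audience",         # who else will see it
    "participant_rights",    # rights under applicable law (e.g. GDPR) and how to exercise them
}

def missing_elements(form_sections):
    """Return the informed-consent elements a consent form fails to cover."""
    return REQUIRED_ELEMENTS - set(form_sections)

# A form that only asks "are you happy to take part?" covers almost nothing:
print(sorted(missing_elements({"project_purpose"})))
```

A copied-over general form tends to fail this check on the specifics: what this particular study records and who sees it.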
Zack Naylor: Yeah, for sure. And GDPR is, I think, probably the best example. In the US there’s CCPA, which is very similar; it’s almost a US version of GDPR, in a simplified explanation, I suppose. I’m curious, how did this become such a central focus for you, that you said this is a challenge that we want to help people solve?
Phil Hesketh: Yeah, sure. So it actually started for me probably back in mid-2016, when I became really interested in design. I had just retrained and was doing a Masters at the time, and as part of the thesis for that, I was really looking at this opportunity to spend three or four months doing a deep dive into something. When I was trying to find a topic, I started to really look at design itself, where design was going and what was happening. And I noticed that there was a thing called the Design Index. The Design Index basically tracked, against the Standard & Poor’s 500 (S&P 500), 50 companies who were using design as a strategy, at a strategic level in terms of decision making within the organization. That’s like the highest level of the design maturity model. They tracked them over a period of ten years, and they saw that these companies massively outperformed everybody else; I think it was like 213% more than others. So I had this bet that design is really a strategic differentiator for businesses and organizations when they can get to that level. And I also started to think, well, where has that impact been the most? And I noticed that really it was in the consumer marketplace. If you look at the proliferation of the internet, that space is almost totally saturated; what the internet has done to that industry over the last ten or 20 years has been a huge disruption. And then I also looked at where it hadn’t gone: things like healthcare, education, government, financial services. All of these hadn’t really been affected at the time. And I just made a bet. I was like, okay, well, companies are either moving into this space to disrupt them, or companies who maybe can’t really go much further in the consumer space are going to diversify into this.
And if we then start to bring design as a strategy into that, we’re potentially dealing with more and more vulnerable people and more vulnerable audiences. And it was like, are we prepared as designers to do that? At the time, and it’s probably still the case, there’s not much of a formal education around things like consent, data management and so on. It’s still a bit like the Wild West in terms of how we’re doing stuff. And since 2018 we’ve had GDPR, and these data regulations are becoming more and more of a thing. Certainly in industries like healthcare and fintech, it’s much, much more regulated, so you have to be a lot more careful about how you approach things and what you do. So I started to really pay attention to design teams. I was really interested in the ethics of it, and in how these kinds of conversations happen and what’s going on. And as I was doing that research, I noticed that informed consent was really almost an afterthought. It was something that people would do; they would maybe get a consent form from a previous project, copy it over, and just sort of use that again. And then when GDPR came out, all of a sudden this kind of relationship with data became way, way more prevalent. It wasn’t just about ethics, in terms of are we doing the right thing; there was actually this kind of risk now for organizations who are not doing this. And it felt like informed consent, for me, sat right in the middle of all of that. Obviously there’s a huge ethical component to informed consent and all of the challenges that wrap around that. But there’s also a real opportunity, for example, to create psychological safety for people, to give them agency; it’s a legal mechanism for them to pull out or to change their mind, and all these kinds of things.
But also, yeah, there was this regulatory risk as well, which is like, actually, we need to be able to evidence the fact that we have permission to use the data in this way. And then beyond that, there are so many innovations now in terms of research repositories and things like this, and also this backdrop of democratizing research. There just felt like loads and loads of challenges, loads and loads of question marks, around this fairly innocuous form which sits in the middle of this process. Yeah.
Zack Naylor: Okay. Really interesting. I mean, there’s a lot that you shared there, and a couple points that I want to touch on a little bit deeper. First, with regard to consent versus informed consent, I think that makes enough sense. Right. Like, it’s one thing to gather consent, but it’s another to make sure that somebody really understands what they are consenting to, what their rights to the data are, and all those things. The one thing you touched on that I’m sure sounds familiar to people is: we got a consent form from a past research project, plug and play, reuse that, have somebody sign it again. I’m going to ask the obvious questions because I want to hear your answers to them.
Phil Hesketh: Right.
Zack Naylor: But what’s the problem with that? Why shouldn’t researchers do that if they are today?
Phil Hesketh: Yeah. I mean, it comes back to informed consent. A lot of the time what I see is a very general consent form, which covers a huge amount of processing that could potentially happen, and it’s not really particularly specific. When you look at the requirements that are specified in the GDPR, for example, for actually obtaining informed consent, you need to demonstrate a few things: not only what the person was asked and what was given in that form, but how the person was asked as well, and when. All of these things wrap around it. Really, if you have a very broad, general thing, there’s a pretty good chance that if it ever went to court, which may or may not be likely, it’s hard to tell, it would actually get thrown out, and they’d say, actually, this isn’t adequate, it doesn’t really meet the requirements. That’s probably the biggest thing. The other thing, from a more human aspect of it, is that it’s really an opportunity. A lot of the time we use really weird language.
Speaker UNK: Right.
Phil Hesketh: So you’ll say to someone, hey, do you want to come and do a usability test? And it’s like, what’s a usability test? If you’re not in our research bubble, we use these very specific terms: we want to interview you about this thing, whatever. And people get nervous, and people don’t really understand what these things are. And then if you imagine you bring eye tracking into that, or something else as well, people are freaking out before we’ve even begun. I think there’s probably a sliding scale as well. If you’re doing, like, a checkout experience or something like that, it’s one thing. But if you’re talking about someone’s experiences with an illness or something like that, or you’re trying to understand more discovery stuff in, like, a health space, you’re potentially collecting a lot more data, but also talking about things which may be either previously traumatic for someone or maybe quite difficult for someone to talk about. So I would say one of the main benefits of using this thing upfront is that you’re actually saying, look, this is what we’re going to talk about, and you can maybe take it in at your own pace. But you also have the ability to actually say no if you want to: actually, I’m not comfortable talking about this, or I’m not comfortable with what you’re going to do with this data afterwards. Obviously there’s the list of compliance risks there. But really, for me, the main benefit of not just taking a cookie cutter approach to reusing a previous consent form is that otherwise you’re missing a huge opportunity to actually address some of those anxieties, and to communicate really clearly and plainly with someone before they turn up to the session. And I’ve personally been in sessions before where we’ve been doing usability testing and someone’s really clammed up. They’re very closed, and then as soon as you get to the end of an interview and you’re like, okay, I’m going to stop recording now.
They just suddenly breathe this sigh of relief, and it’s okay. And then the answers start coming out, and they suddenly unwind and realize it’s not actually that big of a deal. Right. And I think one of the things with informed consent is you can get people closer to that more comfortable state before you start the actual session. That obviously means you could get better outcomes as well, and better research.
Zack Naylor: Yeah. That makes a ton of sense. Actually, that last story was really useful, I think, for folks to hear. One of the things that I used to do when I was doing research for other companies, if it was usability testing, I would actually sandwich that with some interviews for that very reason, because it happened all the time. Right. Like, when you’re doing research with somebody, especially in these contexts, you often don’t actually know who that person is, and you’re asking them questions that could feel very intimate or private for them, but that are going to help you make better decisions. And so that’s tough when you haven’t established maybe enough rapport, or sort of a level of comfort, with them. So that definitely makes a lot of sense. And I guess the two big points I pulled out of your answer: the first reason why you should do this is risk mitigation. Legally speaking, there are certain requirements that we have to meet, GDPR in the UK and the EU being, well, not one of them, the one, I would say, right. And reason number two is, actually, it sounds to me like it can help you do better research. It can help you build some of that confidence and relationship with that person so that they feel a little bit more comfortable getting right to the heart of the matter that you’re looking to learn.
Phil Hesketh: Yeah, definitely. I mean, just to build on that usability test as a term as well: one of the things we did is actually usability test our consent process with people, and we did consent very verbally, much like a conversation, where we were trying to design what the things are that people need to know and how we can communicate that best with them. And one of the main things that came back from usability testing was people saying, I think you’re going to test me, and I don’t want to look stupid. And it’s not about that at all. So we put this extra bit in: we’re not testing you, we’re testing the product, and if you can’t do something, or if something doesn’t make sense, that’s actually super useful for us to find out.
Zack Naylor: One thing you just said there to me was a very nice, subtle revelation: actually having a conversation about consent, rather than just handing somebody a form and saying, we need you to sign this so it’s okay if we record. That makes it feel very clinical. It makes it feel like you’re some kind of guinea pig, like we’re almost doing medical tests on you for a new shampoo or something like that. Right. But to your point, that’s a really subtle thing that can have a big impact: to say, we’re going to talk with this person. It’s not just this thing we’re going to sign; we’re going to talk with you about here’s what we’re doing, here’s why we’re doing it, here’s what we plan on learning, here’s what it’s not, here’s all the rights that you have on this. It just maybe helps put their mind at ease, and it’s not just asking them to read the fine print on something, which, let’s face it, either (a) nobody does, or (b) they don’t fully understand and then, to your point, maybe feel uncomfortable with. Right.
Phil Hesketh: Exactly. Yeah. And I think most researchers will know this stuff, right. They’re really good at building a bit of rapport, really good at breaking the ice, really good at explaining this. One of the trends I see with this democratization of research is we get a lot of people who are not researchers, who don’t have that sort of training or that background, coming into this. And I count myself amongst those people; I’m not a classically trained researcher, I’ve read a few books and got involved. When you describe informed consent, it’s almost like a checkbox, but I actually think this thing is a really big opportunity for a bigger part of what we’re doing. What I normally do when we run a session is obviously send the consent form, and I try to get it out at least a week ahead of time, because it gives them some time to think about it and read it; you don’t want them to be rushing through this thing. Normally we have these granular options in the consent form where you can say, I’m happy to be recorded on screen, or I’m happy for audio to be recorded, or whatever. At the beginning of the session, I’ll say, do you have any questions about the consent document? I’ll talk them through it, and then say, I know you’ve told us this, so this is what I’ve done in response to what you’ve told me, to show them that they have some control. And then, because you don’t know where the conversation is going to go, at the end of the session I’ll be like, okay, now we’ve finished talking about this, and we actually know what we’ve spoken about.
Was there anything that we said in that session that you weren’t comfortable with? It’s another chance, really, to be like, okay, yeah, this bit, and to redact that going forward. Because really, and maybe we’ll come on to this in a little more detail after, the consent form is almost like a distribution license for what you can do with that data going forward. And I think that matters in this kind of world of repos, which is an amazing thing, because we’re still trying to prove the worth of research a lot of the time in organizations; and when you think about design as a strategic decision-making tool within organizations, it makes a ton of sense. It’s also more ethical, because you’re not asking the same questions over and over again of people. So it makes total sense to have repos. But I think that presents a new bunch of challenges as well, because what happens to this data, and our responsibilities to that data, change. It’s not as simple as, hey, I’m going to delete this, no one else is going to see this. The nature has gone from a push, where I am the researcher and I am controlling who sees that and the publication of those findings within the organization, to a pull, where anyone in the organization can access it and see it, and they don’t necessarily know what they can and can’t do with it, right? Yeah.
Zack Naylor: For sure. So where you started on that was something that I wanted to ask, too. I mean, I think it seems obvious that, of course, you have to obtain consent before doing research with people. But in terms of doing it to this depth and this thoroughly, where does that fit in the process? Is it more than just simply upfront? Yeah.
Phil Hesketh: I think this is really why we built Consent Kit. It’s really difficult. You might get one or two people in an organization who are really hot on this and really good at it; they understand what they need to include in forms, and they can write them and so on. But when you start to scale that process up, it’s difficult for people to do that. And the main thing is consistency, really. With any kind of data governance and data management, you need to be logging what you’re doing consistently. You want to be asking consistently, and then also managing that data consistently, or deleting it afterwards. With the very early prototypes we were using, we were doing things like Google Forms and then spreadsheets, merging stuff together and trying to figure out something that would work for us. And a lot of the time that emphasis really fell on the planning stage of research, how you would put a research project together. But after the fact, we realized this thing lives on afterwards. Like, if you say, okay, our retention policy is, like, twelve months, for example, we’re going to keep your record for twelve months, you’ve got this twelve-month gap, which is really easy to forget about. Because once you’ve done that project, you’ve done the insights, you move on to the next project. We’re going super fast as researchers, and there’s a lot of pressure on us to continue to deliver insights; we want our work to have impact. And then maybe you switch projects, or you move teams, or on to something else; it’s really easy to drop this ball. And it’s such a simple thing: even after the fact, how can you remember to pick that up? Or if you leave the company or whatever, who else is picking that up, and who else is taking responsibility for that data that you’ve created on behalf of that person?
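The twelve-month gap Phil describes is essentially a scheduling problem: the deletion deadline is knowable the moment consent is collected, so it can be recorded then rather than remembered later. A hypothetical sketch of that idea (not Consent Kit’s actual data model; all names are illustrative):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch: record the retention deadline at collection time,
# so the deletion step can't silently fall through the cracks when
# projects change hands or people leave.
@dataclass
class ConsentRecord:
    participant: str
    project: str
    collected_on: date
    retention_days: int = 365  # e.g. a twelve-month retention policy

    @property
    def delete_by(self) -> date:
        return self.collected_on + timedelta(days=self.retention_days)

def records_due_for_deletion(records, today):
    """Records whose retention period has lapsed, regardless of who owns them now."""
    return [r for r in records if today >= r.delete_by]

records = [ConsentRecord("P-001", "checkout study", date(2021, 1, 10))]
print(records_due_for_deletion(records, date(2022, 2, 1)))  # past the 12-month window
```

Whoever is responsible for the data at that point can query this list on a schedule, rather than relying on the original researcher remembering a project they finished a year ago.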
Zack Naylor: Yeah, for sure. So what are the long-tail implications of that? I don’t claim to know GDPR law all that well, which is actually funny in and of itself, because it’s still so new that even the courts don’t seem to know GDPR law and how to interpret it as well as they claim to. But what are the long-tail implications? Right. So you obtain that consent, in the example you just gave us, and you’re able to use the data in the way they agreed to for twelve months. But what happens then? Like, what have you seen occur there?
Phil Hesketh: Yeah. I mean, typically you need to tell them how long you’re going to keep that data for, right. That’s one of the main things in the consent form. And then the expectation is that you delete that data after the fact. One of the strategies that I’m thinking about quite a bit at the moment is, okay, well, we’ll take this data and then we’ll just fully anonymize it, and at that point it becomes not subject to GDPR, because GDPR is predominantly about personally identifiable data. But the more you really look at anonymization, at how you can actually anonymize this stuff: say you have a recording of someone telling a story, or you have a snippet that’s an absolute bomber of a quote on video, and you want to put this quote in because it beautifully articulates the point. To really anonymize that is really difficult, and it also strips away the power of the storytelling. Even a voice is a unique identifier. You would end up with this kind of X-Files smoking-man muffled-voice version of your insights, or whatever that would be, in an organization, and I wonder, on one hand, if they would actually have the power to convey the story in the same way. But also, we would use anonymization as this reassurance: it’s okay, we’re going to anonymize your data. But when you really look at what that means, it’s so difficult to actually do. And the types of data that we’re collecting in this research can be classified as, like, special category data under the GDPR, because even if you just have a video of someone, I can tell your gender, I can tell your ethnicity, I can tell all these different things that we’re not even talking about, but they’re totally implicit just in the medium of how we’re capturing this data.
It’s very difficult to sort of mask that, and there’s a question of whether or not you should. So I think, for me, the better long-term strategy is to be more transparent upfront about what we’re actually going to do with this and how this is going to work. If people are comfortable with it, then just give them the agency to say so. At least then you’re not being devious in any way, or masking things from people.
Zack Naylor: Yeah, for sure. Really good answer there, really well explained. We have much bigger problems to tackle, but it would be a fun and also fairly useless startup where we could build something that just muffles people’s voices and gives the X-Files filter to all this stuff, and just say, yeah, isn’t that cool? You throw your audio and video in there. I don’t think that’s the right approach, but that would just be funny.
Phil Hesketh: I think there are, like, legs in that, though. Because there’s pseudonymization, and then there’s, like, true anonymization of data, where what you’re trying to do is minimize the likelihood of statistical linkage with other datasets. Because the thing with anonymization is, on its own, you’re like, I can’t tell who this person is; but you put one or two other datasets alongside it, and all of a sudden I actually can tell who this person is now, or I can narrow that person down to ten people or five people or whatever. And the more granularity you have on that, the closer you can get. It’s a bit like, remember the film The Fugitive from the 90s, with Harrison Ford? Kind of an old movie now, I guess, but it’s a great film. There’s a bit in it where he’s talking on the phone and they’re trying to catch this guy, and they hear a train in the background, and it’s an elevated train, a very unique sound. And it’s like, which cities in the US have elevated trains? It’s New York and wherever else is mentioned, but they narrow it down to three cities immediately. So right out of this entire enormous country, you’ve just narrowed it down to these places. And then it’s like, okay, now we look at where all the payphones are that are next to those things, and narrow it down even further. And it’s like, he’s been to one of these spots. So even just from that one piece of information in the background, which you’re not even thinking about, it’s actually possible to really narrow it down, so someone can re-identify them. And obviously, again, this is a sliding scale, right. If you’re doing usability testing with someone, it’s probably not a big risk, depending on what you’re testing, obviously. But if you’re doing more in-depth ethnographic stuff with people, it’s different.
I don’t think we can really say for sure that we can definitely anonymize this data. But what we can do is de-identify stuff. So that’s the blurring of a face, or redaction of names and things like that, where you start to take out the kind of clues, if you like, which link these pieces up.
Zack Naylor: So yeah.
Phil Hesketh: I mean, I’ve started saying now on my consent forms that I’m not actually going to anonymize your data; I’m just going to de-identify your data. But then we usually have a conversation about that, because, again, it’s a weird technical sort of legal clause. It just needs a bit more explanation.
Zack Naylor: Yeah. I was going to say, share that explanation. I’ve got to believe somebody goes, well, what does that actually mean? So how do you talk to them about that?
Phil Hesketh: Yeah. So basically, I just say it’s really difficult to properly anonymize your data and what you tell us; there’s still a risk that you could be identified. But what I’ll do to try and minimize that as much as possible is redact certain pieces of information. So, for example, I can blur your face, I can take your names out of it, if you talk about a place or whatever, we can do all that. But it really depends. I think it’s really contextual, based on the type of study that you’re doing, because you’ve got this kind of trade-off. Yes, you could do that with all the data you collect, but really, what’s the risk? So you’re making this risk judgment each time. And I think where this gets a bit tricky is that you’re potentially making decisions on behalf of the other person. Right. You’re removing their agency from them, and I think that’s a really important point to pull out, so we try to have a conversation with them about it. And going back to this idea of consent as a distribution license: one of the things we want to start exploring now is that we have these things, like consent cards, essentially, which can travel with data. So this can potentially map onto a snippet. If you’re looking at a video in a repo, you could potentially see what we can actually do with this thing. You could say, I’m going to share it with my product team, I’m going to share it with the wider company, or whatever, who else has access to this. But you could also say things like, okay, what would you like me to do to this thing? Would you like me to blur your face? Would you like me to redact your name? Or are you comfortable with all of that? But it gets really tricky, because I think people don’t always fully appreciate the consequences of what they’re agreeing to as well.
So one of the questions I used to ask is, are you happy for me to share this publicly? And someone was like, yeah, that’s fine. So I said, okay, how would you feel if this appeared on the side of a bus stop outside your home? And they were like, well, no, I wouldn’t like that. And it’s like, well, that’s what publicly means. You’ve just given us permission to essentially do that; are you actually comfortable with that? I think a lot of the time, again, going back to the point about plain language and usability testing, these things, like de-identification, anonymization, and how you share the data, a lot of that is wrapped up in words which might not necessarily land with people when they’re just thinking about it. So it’s like, how can you reframe stuff to make it as clear as possible?
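The “consent card” idea, granular permissions that travel with a snippet in a repository, could be sketched something like this. This is a hypothetical illustration of the concept, not Consent Kit’s API; every name here is invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "consent card": granular permissions the
# participant actually granted, attached to the snippet they apply to.
@dataclass
class ConsentCard:
    record_screen: bool = False
    record_audio: bool = False
    share_with_product_team: bool = False
    share_publicly: bool = False                  # "publicly" = the bus stop outside your home
    redactions: list = field(default_factory=list)  # e.g. ["blur_face", "redact_name"]

@dataclass
class Snippet:
    quote: str
    consent: ConsentCard

def can_share(snippet, audience):
    """Gate sharing on the permissions the participant gave, not on convenience."""
    if audience == "product_team":
        return snippet.consent.share_with_product_team
    if audience == "public":
        return snippet.consent.share_publicly
    return False  # unknown audiences get nothing by default

clip = Snippet("a beautifully articulated point",
               ConsentCard(record_audio=True, share_with_product_team=True))
print(can_share(clip, "product_team"))
print(can_share(clip, "public"))
```

Because the card rides along with the snippet, anyone finding the clip in a repo can see at a glance what they are allowed to do with it, without going back to the original researcher.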
Zack Naylor: Yeah. And again, I think having that conversation rather than it just being in a form is really, to me, at the heart of the difference between simple consent and informed consent. You're having a conversation with somebody, you're making sure they fully understand what they are consenting to, which is why you add the word informed to it. But it also just really helps them understand what's happening, instead of it being this very clinical-feeling situation. And with all of that, we're talking a lot about data collection. Of course, Aurelius being a repository, people put a lot of data in there, and we get questions all the time, and I'm sure you do too. So I want to ask you, because I want to hear your answer to this: how can we be GDPR compliant? How can we be CCPA compliant in how we're collecting data and how we're using it? Do you have tips for folks, or ways that you typically answer that when they ask you?
Phil Hesketh: Yeah. I can probably speak more to GDPR than CCPA, but under GDPR you have six lawful bases for processing any data. It's really straightforward when you look at the list: if you're going to do something with that data, if you're going to put it into a repo, if you're going to synthesize it, if you're going to share it with anyone, that all counts as processing data. That's what it means. So what's your lawful basis for that? One could be a contract that you have with someone, it could be whatever, but typically for us as researchers, it's informed consent. And the risk around that really is: would the consent form that we've asked them to sign stand up in court, given how we've actually used the data? So really, it's almost like a distribution license; this is the thing that's going to protect us. I don't have all the answers yet, but I think the push-pull dynamic of how we share data is changing, and there's this democratization of research as well. It's almost like, how do we democratize governance next? How can we equally distribute and share that out? The best thing we can think of right now is: can you get the consent forms as close to the data as possible? How can you be super transparent about what those permissions are? You don't want someone to go into Aurelius, find a really great insight, and then be like, oh, have we got permission to use this, and then need to go into another system, log in to Consent Kit, and see what we can actually do with it. You want it to live with that insight, but also in a way that's very simple to understand. We're humans too, and we're rushing, and all these other things. It has to be very simple, very easy to get.
And then I guess the other thing is, how can you share that accountability, or maybe democratize that accountability out? So there's something you could do where you could easily see or just check: do we need to delete this, or when do we need to delete this insight by? Can I actually use it in the context I want to on this project? Who's the person responsible for it, so that if I have a question I can go and ask them? All of these things tie back. And really, when you look at informed consent as a paper form, or a PDF, or a Word doc, it works as a format to convey information, setting aside accessibility and things like that. But it feels like it needs to do more than that now. That's really why we built Consent Kit: we need to take a step back and ask what the form is really doing, and how it can react to these challenges. It's not just transactional anymore, it's more relational. So what does that mean in the myriad of contexts we're going to use this thing in?
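To make the idea of permissions travelling with an insight concrete, here is a minimal sketch of what such a "consent card" might look like as a data structure. This is purely illustrative: the field names and values are hypothetical, not Consent Kit's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional, Set

@dataclass
class ConsentCard:
    """Permissions metadata that travels alongside a research snippet."""
    participant_ref: str                              # pseudonymous reference, never a real name
    allowed_audiences: Set[str] = field(default_factory=set)  # e.g. {"product_team"}
    redactions: Set[str] = field(default_factory=set)         # e.g. {"blur_face", "redact_name"}
    delete_by: Optional[date] = None                  # retention deadline, if any
    owner: str = ""                                   # who to ask if there's a question

    def allows(self, audience: str) -> bool:
        """Can this snippet be shared with the given audience?"""
        return audience in self.allowed_audiences

    def is_expired(self, today: date) -> bool:
        """Has the retention deadline passed, meaning the data should be deleted?"""
        return self.delete_by is not None and today > self.delete_by
```

A repository could then answer "can we share this publicly?" or "do we need to delete this?" at a glance, right next to the insight, instead of requiring a trip into a second system.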
Zack Naylor: It sounds like, and this is the way all things happen, it's sort of a snowball effect, but that's what happened, in my opinion, with UX research repositories: this place to centrally store, share and reuse research data.
Phil Hesketh: Right.
Zack Naylor: We were using a lot of tools that I would call just general business tools. You mentioned Word documents, PDFs, spreadsheets, whatever the case may be, to do that stuff. And then all of a sudden this new problem surfaced where, yes, we have that, and it's doing it, quote unquote, well enough, until you realize there are connections between these things that you can't make, or can't get easily without a bunch of additional effort. And it sounds to me that's what you're seeing and what's happening with consent now, particularly as people focus more and more on being compliant with respect to GDPR.
Phil Hesketh: Yeah, and I was actually super curious about this. One of the main things that we found really early doors came from a study we did across different organizations and the people within them, asking: how long does it take to do consent? Because you've got a number of jobs wrapped around that. There's obviously the writing of the consent form, or even just finding the consent form: you've got to ask someone, who asks someone else, and you're pulling other people off whatever projects they're doing. That might take 20 minutes, half an hour or whatever, or maybe longer to get a response back. Then once you've got the form, you need to rewrite it, maybe change the content, and it has to hit certain points that you need to hit. That requires a bit of skill and a bit of prior knowledge, but also, again, more time to think through that stuff. Then you've got to get this form out to people to sign, and then maybe chase the people who haven't signed it. You've got to go through this process of being in your inbox, going through email chains: have they signed it, have they not signed it, what's going on with that? And then after the fact, if you're keeping compliant with GDPR, you have to log the fact that all of these things happened: when did I ask this person, what did I ask them, what was the message I sent them? When you add all of these things up, we found that it was about 4 hours for every five participants who actually took part in research. If you think about it in those terms, it's wild. I think in design we've become really good at looking outwards, but when we start to look at our own processes, it's like, wow, this is actually really inefficient. And that's probably the biggest win.
We found it almost straight away by solving what are actually lots of quite small, relatively straightforward problems. But when you put them all together, it's like, that's half a day you could have back for synthesis. What kind of difference would that make for your research? It's crazy. Yeah.
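As a quick back-of-the-envelope check of Phil's figure (about 4 hours of consent admin per five participants, taken straight from the conversation), the per-participant overhead works out like this. The 20-participant study used to scale it up is a hypothetical example:

```python
# Figure from the conversation: roughly 4 hours of consent admin
# (finding, rewriting, sending, chasing, logging) per 5 participants.
admin_hours = 4
participants = 5

minutes_per_participant = admin_hours * 60 / participants
print(minutes_per_participant)  # 48.0 minutes of overhead per participant

# Scaled up to a hypothetical 20-participant study:
total_hours = 20 * minutes_per_participant / 60
print(total_hours)  # 16.0 hours, i.e. roughly two full working days
```

Which is exactly the "half a day back for synthesis" Phil describes for a typical five-person study.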
Zack Naylor: I'm actually really glad to hear that you did that and tried to quantify it in that way, because I think that as a UX industry, that's one of the things we do not do well enough, and I'll say that as a blanket statement across the board. This comes back to the larger conversation about UX research having a bigger seat at the table, having more strategic influence in the direction and decisions that are getting made. I wish more people would think that way, because the same thing happens on our end of the world. Look at all the tools you're using for data collection, synthesis, capturing, searching. Every time somebody comes to you and says, hey Phil, can you get me all the research we did about X, Y or Z topic, just think about how much time it takes you. And even if it were nothing other than the time, there's also the cost of the mental switch between all these different systems. It adds up really quickly, and if you were to look at that at the end of a quarter or the end of a year, I have a feeling the number would surprise you. It would surprise a lot of people. Yeah.
Phil Hesketh: I think that context switch in particular is massive. If I switch from one thing to another, it takes me probably 15 or 20 minutes, at least, to get my head back into what I was doing. And it's not like you get to stop doing the thing you were doing to do this other thing; you're expected to continue doing the other thing at the same time. And I think as well, with COVID and the switch to remote working, if I can even give someone ten minutes back, it's like a gift. That context switching, with everything else going on, even just going through email chains and stuff, the amount of cognitive load it takes is huge. So one of the biggest pieces of feedback we get from people is that it's just so easy to use and it surfaces everything for you, so you can see exactly what's going on at a glance. And I think really, with all the tools we're using, the more we can do that, the bigger the time and efficiency wins, and just head space as well, because what we do is kind of big stuff, right? You're taking in loads and loads of different bits of information and trying to craft that into a story, or into some kind of thing in your head. That's really difficult work to do, and if you keep getting distracted by, I need to be back in my inbox, has this person done it, have they not done it, whatever, that takes more and more time. It's really disruptive, I think, not only in efficiency in terms of time saved, but also in the opportunity cost: what could you have been doing that would have gotten you further in what you're actually there to do, which is obviously great research?
Zack Naylor: Yeah, of course. That opportunity cost to me is the biggest one. I don't know if that's just because of my own bias or perspective, of course, but I come back to synthesis all the time, because that is actually the part that gets cut entirely or dramatically reduced almost every single time in research. And we all complain about it: I wish I had more time to sit with the data and figure out what it meant, but we're getting pushed and pressured to drive towards the answer to the question, the insights, the next steps, the recommendations. And those all just get better the more time you have to really sit and marinate with the data and figure out what it is that you learned. But to your point, which is why I love that you brought this up, it's so critical to look at how our time is spent elsewhere. Yes, it seems like, okay, I'm using, quote unquote, free tools, which, by the way, are never free, you're paying for them in some way. But I'm using this free tool to do this thing that costs me ten minutes, and this free tool to do this other thing that costs me 20 or 30 minutes. And if you start to scale that times two, three, four, five, you've lost a day or two. Boy, wouldn't it have been nice to have that extra day or two to sit with the data you collected after you've done the research, to really figure out what it meant? What would that do for you? That's actually really hard to quantify, because it is sort of a squishy, more ephemeral type of thing, but the fact of the matter is, I would place a high wager on a significant impact, right?
Phil Hesketh: Yeah, absolutely. It's harder to see that, but it's really, what would you do with those extra 4 hours? What would that be? It's really one of the biggest wins that we can give people; if you're building any kind of technology, you're always trying to innovate back towards comfort. But to your point about scaling this as well: through the process of figuring something out, figuring out a workflow almost, by using these different tools, you build up a lot of tacit knowledge about how that workflow works. And that's the thing you need to transfer to someone else to scale it. When we were doing the very first early prototypes of Consent Kit, we were looking at, okay, let's use Google Forms, for example, let's use a spreadsheet, it works like this, you need to then add these bits in or whatever. And I found myself writing tons and tons of documentation and guidance. But it got to the point where people were like, we haven't got time to read all this stuff, I just want it to work, I just want to go. And that was one of the things that led us to thinking, okay, well, how do we actually scale this thing? It worked great for me; I could get it to work great for one person. But then you say, right, now I need to roll it out to a team of 20 researchers, and you hit a whole bunch of other problems. I think this was one of the main drivers for why we went down the product route: you can onboard people to it very easily. It's a process they can follow, and you can guide them through that with the design. Because in a sense, informed consent really is a process, not just a single document; there's all this other stuff around it. There are prompts to remind people to do things, remind them to delete the data, all that sort of stuff.
So we managed to get it down to the point where, literally, I'm doing an interview and I need a consent form, boom: 85% of the form is generated, and the remaining 15% is prompts saying, this is what you need to type into these bits. It's super fast and super easy, and the main thing is that it's consistent as well, so everybody is doing it in the same kind of way. I think that's really the thing: the scalability of it, and being able to do that without needing to retrain people to do this stuff. That's one of the big things about ResearchOps that's so exciting, that real recognition of, we're spending a lot of time doing this stuff and just figuring out process. And if we can have something where that's almost done for us, or figure out what those main processes are and automate a lot of them, or put something in place that we can distribute consistently across teams, that's huge. So, yeah.
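A minimal sketch of that "mostly generated, with prompts for the rest" idea, using Python's standard-library string.Template. The template text and field names here are invented for illustration; this is not what Consent Kit actually generates:

```python
from string import Template

# A consent form template: most of the text is fixed,
# a few fields remain as prompts for the researcher to fill in.
CONSENT_TEMPLATE = Template(
    "You are invited to take part in $study_name.\n"
    "We will record: $data_collected.\n"
    "Your data will be kept until $delete_by and then deleted.\n"
    "Questions? Contact $owner."
)

REQUIRED_FIELDS = ("study_name", "data_collected", "delete_by", "owner")

def generate_form(**fields):
    """Fill in the known fields; return the form text plus the prompts still needed."""
    missing = [name for name in REQUIRED_FIELDS if name not in fields]
    # safe_substitute leaves unfilled $placeholders visible, acting as prompts
    return CONSENT_TEMPLATE.safe_substitute(**fields), missing

form, todo = generate_form(study_name="Checkout usability study",
                           owner="research-ops@example.com")
print(todo)  # ['data_collected', 'delete_by'] -- prompts to fill before sending
```

The consistency Phil describes falls out of the template: every researcher starts from the same fixed text, and only the study-specific gaps vary.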
Zack Naylor: It sounds really cool. That's one of those things: take this really complicated, hard-to-understand thing and just make it easy for me. And honestly, it's one of those things you can't really measure: making this hard, complicated, maybe even time-consuming thing easy, so I can offload that mental space, not have to focus on it, and free that up to do something else. One of the things I wanted to ask you: as with most things in the world, but particularly most things we do in our profession, in UX the conversation used to be, how do I get somebody to care about UX? Then it was, how do I get somebody to care about UX research? Now it sounds like what you're tackling is, how do I get somebody to care about consent? I think most researchers, at least to some degree, care about consent, and if they know about informed consent, awesome. But what I would love to ask you is: how do I get somebody to care more about consent, to actually do this, if they don't today or don't recognize the need for it?
Phil Hesketh: Yeah. I think it's like an ethical maturity, almost, in any practice, not just in design but really in anything. On some level, you just need to learn how to do the thing first, and then as you level up, you become more and more interested in things like ethics, for example, or things like compliance or whatever. You might get to a point in research where something takes your interest and you're like, okay, I'm going to learn more about this. For me, with informed consent, it's just something that we have to do. But we actually did a survey on Twitter, one of those really short polls that you put out, because I was so curious about this: why do people get informed consent? Is it because they have to, because it's a legal thing, or is it because it's the right thing to do, and do people recognize that? And something like 97% of respondents came back and said, no, it's the right thing to do. So I think people do recognize that, because you're really talking about someone's rights, almost human rights, of privacy and what you're going to do with their data. And I think that's something which is more and more on people's radar as something we need to think about and care about. Inclusivity and things like that are super, super important; they have been forever, but these things seem to come in trends, if you like, becoming more and more important as we mature in our practices. And I think people really will develop that care, or maybe they do already, but there are barriers in the way, like, actually, I don't know how to do this.
So what we've been trying to do is really address those barriers and remove as much friction as possible, and not just for the researcher, but for the participant as well.
Zack Naylor: Yeah, totally. I'm really happy to hear you bring up that one point in particular, that it helps us be more equitable and inclusive in how we do research. Several episodes back now, we had Alba Villamil on our podcast, and she does a lot of research with underserved populations, more vulnerable populations. And I think informed consent is one of the avenues in which you can do two things as a researcher. Number one, you can build that comfort and rapport with somebody to actually have a real human conversation with them. But number two, it actually helps serve them better and helps them understand their rights in that situation, which in a lot of those underrepresented communities traditionally just hasn't been the case. So this kind of helps empower them too, which all goes back to the answer you got: it's just the right thing to do. And I suppose if somebody doesn't care as much about that answer, you can always say, well, you'd probably get fined a boatload if we don't do this properly, if we don't manage it properly after the fact. I guess you can use the old carrot and stick metaphor if you like.
Phil Hesketh: Yeah, although I hate selling it on that, because we should be doing this because we want to be better and want to be fair; I'd rather get people excited about it that way. But yeah, the fines are pretty massive with GDPR: it's 20 million euros or a percentage of your global turnover, or something like that. They're really significant.
Zack Naylor: Yeah, for sure. Well, this has been a very detailed, very thorough conversation about a topic that's not easy and not clear for a lot of folks, so this is pretty great. We're running up to the end of our time, and I ask this every episode, of course, so I want to ask the same thing of you. If I had temporary amnesia and just completely forgot everything we chatted about, and somebody said, alright, Phil, what was that episode all about, could you summarize it for folks? Yeah.
Phil Hesketh: The way that we're working is changing, and some of the things which enable us to do this work, the mechanics of the work almost, maybe haven't necessarily been changing to adapt to those new needs. Obviously there's a very human aspect to informed consent, which is super important and very central to it, the comprehension and things like that. But it's really, how do we adapt it, move it forward and evolve it, so we can continue to work in the ways that we're working and see the benefits of innovations in the field, and also in the products and tools we're using?
Zack Naylor: Very well summarized. And something I wanted to add, because you brought this up a couple of times: innovating in that space is really interesting to me. As we talked about with efficiencies and inefficiencies, creating efficiencies like this frees up mental space and energy to innovate elsewhere.
Phil Hesketh: Right.
Zack Naylor: And I think that's a really interesting thing to let sit with folks who are listening, something to consider as an opportunity. But I do need to be respectful of your time, because I know we're running out of it here. Is there anything you want to share with folks today that we didn't already have a chance to cover?
Phil Hesketh: I think accessibility would probably be the big one. I feel like I'm talking about this a lot at the moment. It always amazes me how inaccessible a lot of the products and services we're building are, and because of COVID this has been exacerbated even more, in that people have been physically restricted from going out, and if your only connection to the world is through a computer, it's something that is really important. When we're doing research, if you're asking people to sign this thing that represents their rights, and that's going to help towards their psychological safety, if you like, or their understanding of why you're there and what you're doing, making sure that's accessible is super important. Something like 20% of people have some kind of accessibility need, and by not ensuring accessibility we're really excluding a lot of people from the process. So yeah, I would just put that in at the end, I guess. It's something that's been on our radar from day one, and again, when we're not looking at our own processes, or not assessing our own processes much, it's something that can be overlooked, but it's obviously very important.
Zack Naylor: Yeah, totally awesome. This was a really good chat; I certainly learned a few things, even though I work in a similar space. So I really appreciate you coming on, sharing your advice, answering the questions and having a chat with me today.
Phil Hesketh: Well, thank you. Thank you for the opportunity; it's been really great. I'm a huge fan of the podcast also, and I look forward to future episodes.
Zack Naylor: Awesome, I appreciate you saying that. All right, everybody, we will see you next time. This podcast is brought to you by Aurelius, the research and insights tool that helps you analyze, search, and share all your research in one place so you can go from data to insights to action faster and easier. Check out Aurelius for yourself with a 30-day trial by going to aureliuslab.com. That's A-U-R-E-L-I-U-S-L-A-B dot com. If you enjoyed this episode, it would mean a lot if you would give us a review on iTunes to let others know what you think. You can catch all new episodes of the Aurelius podcast almost anywhere you listen to podcasts, like iTunes, Spotify and more. Stay up to date when new episodes come out by signing up for our email updates on our website.