The State of AI in Qualitative Research—How Researchers Can Benefit

AI-driven applications for qualitative research are being developed at the speed of light, coming out so quickly that it’s difficult for researchers to keep up. Author Renée Hopkins looks at the current state of AI in qualitative market research and how it’s benefiting researchers.

By Renée Hopkins, Qualitative Research Manager, Decision Analyst, Arlington, Texas, rhopkin@decisionanalyst.com 

Artificial intelligence (AI)-driven applications for market research are being developed at what seems like the speed of light, coming out so quickly that it’s difficult for working researchers to keep up. If you haven’t tried any AI applications because you want to wait and see which app will be the “winner”—don’t wait. There will not be one “winner”—different apps do different things, all of them fascinating and useful. 

In the last few months, I’ve seen many demos of AI applications and had many conversations with the people behind those apps, both via Zoom and at IIEX in April. Here’s what I’ve learned about the current state of AI in qualitative market research.  

Gains in speed, productivity, and efficiency are not the best reasons to adopt AI—its biggest value is in synergy. 

The first AI applications most of us have seen allow for easier ways to organize and analyze the data. These applications save time and, therefore, save money. Yet I heard more than once that saving time and money are value-adds but don’t take full advantage of the new technology’s capabilities. 

To me, it seems the benefit is the ability to take a “holistic” approach to market research. Jack Wilson from 2CV expresses it better: “I would argue that AI allows us to deliver ‘synergetic’ research—it brings elements of qualitative and quantitative research together to deliver new benefits.”1 

AI-driven applications make it easier to create innovative mixed-method approaches because they allow you to do qualitative research at scale. You can add qualitative to a quantitative research project without setting up the project in separate stages, including time-consuming hand-offs from qual to quant and, in some cases, back again. 

AI also makes it possible to do large-scale qualitative research projects that include quant approaches and question types. Clients increasingly ask for closed-ended questions to be included in qualitative studies, and AI allows for that without the need to dig through notes, transcripts, and videos to count how many participants liked concept X better than concept Y, or to average by hand the scores for every question where you asked participants to put a stake in the ground by rating their opinion on a 1-to-10 scale. 
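The kind of tallying described above is straightforward once responses are captured in a structured form. As a minimal illustration (the response records and field names below are hypothetical, invented for this sketch rather than taken from any particular AI platform), here is how concept preferences and 1-to-10 scores could be rolled up:

```python
from statistics import mean

# Hypothetical structured answers collected alongside qualitative interviews.
# Field names are illustrative only.
responses = [
    {"participant": "P1", "preferred_concept": "X", "appeal_score": 8},
    {"participant": "P2", "preferred_concept": "Y", "appeal_score": 6},
    {"participant": "P3", "preferred_concept": "X", "appeal_score": 9},
    {"participant": "P4", "preferred_concept": "X", "appeal_score": 7},
]

# Count how many participants preferred each concept.
preference_counts = {}
for r in responses:
    concept = r["preferred_concept"]
    preference_counts[concept] = preference_counts.get(concept, 0) + 1

# Average the 1-to-10 appeal scores across all participants.
average_appeal = mean(r["appeal_score"] for r in responses)

print(preference_counts)  # {'X': 3, 'Y': 1}
print(average_appeal)     # 7.5
```

The same logic applies at any scale, whether the structured answers come directly from survey questions or were extracted by an AI from open-ended transcripts.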

How might you design a research project if you knew you had the ability to easily quantify some findings? If you could do a qual follow-up on quant questions? AI can help find patterns and insights in structured data, unstructured data, and what I would call really unstructured data—online reviews, social media posts, etc. What if you could include this data in your analysis? AI is very good at finding patterns in lots of data. 

Adding some AI-moderated qualitative questions to a quant survey can also allow for more in-depth screening of participants who could be recruited into live, in-depth interviews with a human moderator. This can help weed out fake participants. It’s also a better way to select participants for follow-ups than simply including an open-ended question or two on the screener. 

The value of using AI applications is in the interaction between you and the AI—not in whether the AI has done a perfect job of coding and summarizing the data. 

AI “hallucinates” at times—but so do humans, at times. Human brains are not always logical and are subject to biases. Sometimes we think we see patterns and insights that are not in the data. Sometimes we miss patterns and insights that are in the data. 

Jack Bowen of Co.Loop says the output from Co.Loop’s AI is “hypothesis-led—outputs are suggestions and should be quickly validated.” Others I’ve talked to say that it’s definitely an interactive process. You shouldn’t simply trust the AI to give you the “correct” answers from the data. 

The AI will find insights you miss, and you will find insights it misses. The interactive process between humans and AI will still go much faster than analysis done by humans alone. 

AI is being trained not just on data, but on qualitative interviewing techniques. And one bot can coach another bot in real time. 

Some AI applications for qualitative research have been developed by teams that include anthropologists and linguists who can shape the AI’s ability to understand and use language and its ability to operate within a variety of contexts. I met someone from a company that not only used an AI trained in anthropology but also a second AI trained more specifically in anthropological methods that could “coach” the first AI as it conducted the interview by suggesting questions and follow-up probes. After I found out about that, I had to sit down for a minute and mull over the fact that some AI bots have had better training than probably many of us have had. 

AI moderation allows for private, conversational interactions—and many people prefer to be interviewed in conversational interfaces such as text and WhatsApp. 

There’s a cultural difference here. Some people believe that a conversation bot can’t possibly conduct an interview that would prompt a participant to respond fully, as they would during a face-to-face conversation. 

Yet, people do talk about sensitive subjects in texts. In fact, the more sensitive the subject, the more some people would much rather text about it than have a face-to-face or phone conversation. My own Millennial daughter prefers to text me about things she “doesn’t want to talk about on the phone.” Work groups communicate via Slack, Teams, and other conversational messaging applications. Whole families have ongoing text chats or WhatsApp threads. Many people are quite at home with this. 

“Consumers are intimidated by sharing truthful feelings in interviews and focus groups,” says Ben Jenkins, cofounder and CEO of Sympler. “The presence of another human with their biases and judgments can be quite off-putting and has hampered qualitative research’s aptitude for depth.”2 

Sympler’s AI moderators interview participants in the “private, intimate spaces of messaging (Instagram DMs, Snapchat, and Facebook Messenger), where people are at their most relaxed and emotionally honest, the platform is able to probe deeply and at scale.” Another company I talked to uses AI to interview participants on Slack. 

Some people believe that focus groups tend to be taken over by the louder, more extroverted participants, while the quieter ones aren’t heard. Some believe that having to look a stranger in the eye while you talk about personal matters is a deterrent to full expression. Clearly, opinions on this issue are mixed. 

The more AI is “grounded” in the project, the better the results. 

By “grounded,” I mean well-briefed or well-set-up. Those of us who already work with AI applications are becoming used to the need to set up the project carefully on the front end—including a brief with the research background and objectives and even the discussion guide. Most applications also allow you to identify the speakers and indicate which is the moderator. 

When working with the findings, you can create prompts that “query” the data, allowing you to view it from a variety of perspectives. I’ve learned that creating effective prompts can be difficult. One app, ResearchGOAT, helps with this by providing “lenses” for the AI to use as you work with the findings. One choice is “economist”: applying that lens surfaces the insights you would see if you looked at the findings through the lens of behavioral economics. This gives you different ways to explore the data without having to write the prompts yourself. 

Another approach is for the AI to be trained on a specific kind of research—concept testing, user experience, etc. I’ve seen a number of AI applications with this kind of structure “baked in.” Either way, the stronger the guardrails the AI must work within, the less likely you’ll get results that are wildly out of context. 
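To make the “lens” idea concrete, one plausible way such a feature could work under the hood is that each lens is simply a framing instruction prepended to the researcher’s question before it is sent to a language model. This is a speculative sketch: the lens texts, dictionary, and function below are invented for illustration and are not ResearchGOAT’s actual implementation.

```python
# Hypothetical lens instructions; each one frames how the model should read the data.
LENSES = {
    "economist": (
        "Analyze the findings through a behavioral-economics lens: "
        "look for biases, heuristics, and incentive effects."
    ),
    "designer": (
        "Analyze the findings through a design lens: "
        "look for pain points, unmet needs, and opportunity areas."
    ),
}

def build_lensed_prompt(lens: str, question: str, transcript: str) -> str:
    """Combine a lens instruction, the researcher's question, and the raw data
    into a single prompt string for a language model."""
    return (
        f"{LENSES[lens]}\n\n"
        f"Research question: {question}\n\n"
        f"Interview transcript:\n{transcript}"
    )

prompt = build_lensed_prompt(
    "economist",
    "Why do participants hesitate to switch brands?",
    "P1: I stick with what I know, even when it costs more...",
)
print(prompt.splitlines()[0])  # the economist framing leads the prompt
```

Whatever the real implementation, the design point stands: pre-built framings act as guardrails, so the researcher gets a disciplined perspective on the data without writing prompts from scratch.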

Focusing on the jobs that might be lost to AI is the wrong approach. 

It’s more helpful to think of AI as a technology that can take on some of your tasks but not your entire job. AI will change the way you do your job, but right now, it’s unlikely to take your job away. Consider which tasks within your job could be replaced or done better by AI. If you learn to use AI to handle the tasks it is best suited for while you spend your time on tasks better suited to humans—strategic thinking, higher-level creativity, and interpersonal skills, for example—you gain a real professional advantage. 

It’s worth noting that AI-driven features have been in applications for years, although they were usually referred to as “automation.” Examples: machine transcriptions, spelling and grammar checks, and even that annoying little animated paper clip that used to be in Microsoft Office applications. We are used to such assistance. 

The difference is that with generative functionality, AI has become powerful enough and ubiquitous enough that it will eventually change the way everything works, just like PCs did and like the internet did. 

Many AI research solutions have a la carte plans that keep researchers from being locked into any one platform and give them a variety of tools to try. 

At this time of rapid evolution, it’s good that qualitative researchers don’t necessarily have to tie themselves to one software solution. AI applications do not do exactly the same things, so it’s good to be able to have reasonably priced access to more than one. However, that does increase the overall learning curve. 

In addition to a la carte plans, many AI applications are very inexpensive right now because the entire software category is new. The learning curve might be steep—but at least it’s not prohibitively expensive in most cases. 

AI frees qualitative researchers to focus on their strengths—or, to focus on the “fun stuff.” 

I’ve been asking everyone what they see as the role of the qualitative researcher in the world of AI. Jill Meneilley of Digsite/Question Pro said, “Focus on the fun stuff.” She was not the only person to say something like this, but she phrased it best. Qualitative researchers using AI tools should be able to focus their attention on the research itself—the design, the analysis, and, of course, the client—without having to spend hours digging through transcripts. 

Make sure to follow privacy and security guidelines while using AI. 

The main security questions to consider with AI applications are where your project data is stored and how it is being used. 

Obviously, our clients want data from their research projects to be kept private. Qualitative researchers are obligated by law to keep participants’ private data secure and private. Ask potential AI vendors about their policies for data privacy and security. They are prepared to answer these questions. 

To help you work through these and more issues about choosing a vendor for AI-based services, I recommend ESOMAR’s newly released “20 Questions to Help Buyers of AI-Based Services for Market Research and Insights” (available for free from ESOMAR).3 

These basic guidelines should keep you in compliance with most current privacy legislation:4 

  • Project data should not be made available to train publicly available AI applications. Don’t just upload transcripts into ChatGPT.  
  • Project data must be stored on a secure server located in the country where you (and/or your client) are located. 
  • Project data should only be accessible to you, your company, and your client. 
  • When the project is over, participants’ personal data must be deleted completely from the AI vendor’s server. 


  1. Wilson, Jack. “Computer Does Qual: Avoiding AI-Overclaim and False Equivalency.” Greenbook, The Prompt blog, Dec. 13, 2023. (www.greenbook.org/insights/the-prompt-ai/computer-does-qual-avoiding-ai-overclaim-and-false-equivalency)
  2. Jenkins, Ben. “Chatbots Are the Future of Empathy: How Candor and Vulnerability Are Elicited from the ‘Stranger.’” Sympler blog, Nov. 15, 2023. (https://news.sympler.co/sympler-insights/chatbots-are-the-future-of-empathy-how-candor-and-vulnerability-are-elicited-from-the-stranger)
  3. Download at https://esomar.org/20-questions-to-help-buyers-of-ai-based-services
  4. If you haven’t looked into privacy regulations recently, you may be surprised at how much qualitative project-related data is subject to privacy legislation. I highly recommend the on-demand classes from QRCA’s Data Privacy in Qualitative Research Certificate 2023. (https://qualology.qrca.org/item/data-privacy-qualitative-research-certificate-2023-571614)

CREATING AN AI QUALITATIVE RESEARCH APP: Ross Mitchell Talks about How He Started ResearchGOAT 

ResearchGOAT is an AI application for qualitative research. Co-founder Ross Mitchell showed me how the app works and told me the story of how an Accenture consultant found himself in the AI business. 

Ross Mitchell, Co-founder, ResearchGOAT 

I was at Accenture, dealing with all sorts of clients and gathering all sorts of data on creating new designs or strategies. We always needed high-quality qualitative data. So, I explored AI and what I could do with it to optimize my own workflow. I was heading up design research for Accenture’s West Coast studio, so I went down the path of what could be automated in terms of the actual interview process. 

It just made sense—an interview is a conversation, and ChatGPT is a conversational agent. Could you reverse it in a coherent way and have it ask the questions? Early tests showed it was roughly possible. But there were a lot of subtleties involved to actually have it make sense for an interview over an extended period of time—30 minutes, 60 minutes, or beyond. An actual conversation is much more complex than a quant survey. 

Here’s how it works: first, the researcher sets up a project. They tell us what their questions are and why they’re doing the research. Essentially, we try to take it from there. We’ll do the recruitment if needed, or they can bring their own. Then, we carry out the research. Once the interview starts, every conversation is unique but within the bounds of the discussion guide. 

The interview looks like this (note: Sam is the name of the AI moderator). 

Ross: How are you? I’m great. I’m just getting started today. How are you doing, Sam? 

Sam: I’m an AI, so I don’t really have feelings, but I’m glad to hear you’re doing well, Ross. If you don’t mind, could you tell me a little bit about what kind of activities or interests you like to pursue in your free time? 

Ross: Sure. Well, since I live in LA, I do pretty typical LA types of hobbies—go hiking, go to the beach, try new restaurants. Things like that. 

Sam: Hiking and beach visits seem like great ways to enjoy the outdoors. Can you recall a recent experience when you did something that was especially memorable or exciting during these outdoor activities? 

Ross: A couple of weeks ago, I walked to or hiked to the Hollywood sign for the first time. I hadn’t done that before. It was memorable seeing all the tourists who were hiking up there with me. 

Sam: Sounds like an interesting experience. Why do you think the hike to the Hollywood sign left such a lasting impression on you? 

Ross: I don’t know. It was just something different. I mean, it’s obviously iconic. You live in a city for a long time, and you don’t actually ever do the tourist things. So yeah, I mean, it was just nice to actually experience the city through a tourist’s eyes. 

So, this was a very loose objective. The first question is usually asked almost exactly like it’s written. Subsequent questions are created by the AI based on the conversation and on the project objectives. It’s from the discussion guide but tailored to how we had already been speaking. We’re really trying to get examples. I’ll say I like hiking. It will say, okay, give me an example of your experience in hiking. So, that’s a lot of the training. Also, people make assertions—I like this, I don’t like that. We try to get at why they like or don’t like something. 

The goal is to make qualitative research easier and more scalable. At Accenture, dealing with client projects, there was always a push for data and information. But the research was always the first thing cut from the budget. We were charging management consultant rates far higher than a research agency would charge. So, the challenge was, how do we get high-quality data without having to send out a SurveyMonkey survey? 

Qual has always had the issue of scale. Clients will often say they love the depth and the stories and the quotes. But then they also want to scale it up somehow. So, to provide qual research that would fit into traditional research workflows, we would do fifteen 20-minute in-depth interviews. Then the client comes back and asks, “Hey, is this applicable nationally or internationally?” It would have been great to be able to add those conversations easily, get it done in a week, and not have to hire a translator. 

Project Design and Reporting Are Critical Roles for Researchers 

The researcher creates the discussion guide and a project briefing. If we were recruiting, you would be able to see those recruits on the interface. Once the research is underway, you can click in and see the participant attributes and their screener responses. Then, in terms of the individual analysis… for this particular participant, we entered research questions, then got them answered with supporting quotes. 

When you set up the project, you enter the types of general insights you’re looking for. This report pulls out those things and then gives you an overall summary of this individual’s interview. It shows you the results collectively again, answering the research questions that we entered in at the start. It provides quotes, key points, caveats, and recommendations. 

In answering all those questions, it looks [at the data] through lots of different lenses, pulling out key pain points that we heard from different user groups. It’s certainly not a quant segmentation, but it’s at the persona level: a description of the users who fit into those groups. 

If it’s a very design-focused project, we pull out design insights and design improvements, opportunity areas that we can then design around. There’ll be another button here to query the transcripts, asking questions in a ChatGPT kind of interface. What do people say about this? What are some quotes related to this? That sort of thing. You can ask whatever question you want. 

Every project is unique; people are going to have different stakeholders and different needs. I wanted to provide a lot of different, structured ways to look at the data so you don’t have to think through how to prompt the AI. These are provided by lenses the AI is programmed to use. It looks at the data through those lenses and pulls out what would be interesting from that perspective. Depending on the lens we ask it to use, we can make the AI behave as if it were a designer, or an engineer that knows every field—mechanical engineering, aerospace engineering. The consultant lens is essentially that of a management consultant or strategy-focused insights marketer, and the economist lens is a behavioral economics lens. 

In addition to writing the discussion guide and providing the information for the brief, I think there’s a big role for the researcher in building the reports. The AI would serve as their assistant… you could say, I’m building this section. What other details do I need here? What are some quotes that I can add here? Eventually, we want to have a PowerPoint creator, but right now, you can download the information you see on the screen as a report. 

I think that overall, every qualitative researcher is becoming much more of a manager. You still have to be a good sort of manager… working with your end client, understanding their needs. There’s still a lot of managing the AI, doing prompt engineering… translating what the client wants into something that will work. 

ResearchGOAT’s initial core customers, I think, are agencies—design agencies and advertising agencies. Obviously, we want larger companies to use the platform, too. To get them to try it, we’re doing a bottom-up approach where the product is inexpensive enough that you can essentially put it on a credit card and sort of fly under the radar of the official purchase process. 

The value proposition is obvious: We’re going to do qual research, do it fast, and keep the quality, not cut corners. But trust obviously comes after people try out the platform. We opened a freemium model so people can try it and see that it works. Because we are so totally new, we’re still figuring out pricing. Right now, it’s per project, ad hoc pricing. You don’t need a subscription. We’ll see how the market reacts. 

Five years from now… I think there will be more qualitative research done than there is now. Today, a lot of what should be qualitative research becomes quant because the client asks for scale and timelines. But a lot of that work doesn’t need to be quant. Projects where we’re just running simple surveys would be better if they were actual interviews to open things up. Then, hopefully, there will be a lot of those opportunities where we take 10 of those people and do a secondary in-person interview with them to really get more detail. 

DIY quant research ultimately just resulted in more quant research being done by people who otherwise wouldn’t have done research at all. You still see polls everywhere. Ideally, ResearchGOAT will result in more qualitative research being done by people who wouldn’t have done qualitative research at all. That’s a good thing.
