By Sebastian Murdoch-Gibson, Founder and CEO, QualRecruit, Ottawa, Ontario, Canada, firstname.lastname@example.org
Ian Ash, Co-Founder, Dig Insights; President, Upside, Toronto, Ontario, Canada, email@example.com
Joel Anderson, EVP, Advanced Analytics, Dig Insights, Toronto, Ontario, Canada, firstname.lastname@example.org
Last December, I kept getting tricked. It seemed every day I’d start reading an article, only to realize that the words in front of me were not of human origin. Rather, they were the output of ChatGPT, a freshly available large language model (LLM). While the trope of confronting unwitting readers with the reality that they’ve been consuming AI-generated content has (mercifully) ended, large language models and their transformative potential for the market research industry continue to loom large. To understand how they may shift the reality of our work and our industry, I spoke to two veteran researchers who have been applying large language models in an agency context since the release of GPT-3 in 2020.
Ian Ash is the co-founder of Dig Insights and president of Upside. Joel Anderson leads the AI and advanced analytics team at Dig Insights as executive vice president of that division. He has been testing applications for large language models while pursuing a Master of Science in applied artificial intelligence from the University of San Diego.
Sebastian: Tell us how you got started in consumer insights, how your roles have evolved, and how you currently interact with ChatGPT.
Ian Ash: We started Dig Insights back in 2010. Before that, I’d been in the insights industry for at least 10 years, working at other market research companies, mostly in analytics roles, as well as in client-side jobs, including at CIBC. Joel works with me on the analytics side of the organization. He is much deeper into AI, and he’s actually getting his Master’s in AI now. My own personal experience with ChatGPT is really just using it and playing with it since the broader release, plus the work I’ve done with Joel.
Joel Anderson: My background before my Master’s work, which is more recent, is in math and psychology, so I’ve always been interested in the intersection of those two areas. A lot of psychology is eliciting some sort of response from people and using that to test hypotheses; I was really interested in that kind of analysis. The applied version of that, where you can get a job, was in market research. So, I did a post-grad diploma at Georgian College. I saw it as doing eight months of school and getting a job. I started in advanced analytics 15 years ago and kind of made my way from there. I joined Dig at the end of 2012, a couple of years after it was founded, fairly early on. I was the eighth person hired and have been the analytics lead under Ian’s supervision ever since.
Sebastian: So, you were the eighth hired then. Where are you guys now?
Ian: We’re over 200. I can’t remember the exact number. I don’t know, somewhere probably around 210. You know, it fluctuates.
Joel: I joined Dig a little over 10 years ago and, in the last year, I’ve been doing my Master’s in AI. I had been using and monitoring GPT-3 since it came out (June 2020, I think), long before ChatGPT, applying it at work in all kinds of ways. The most interesting to you might be how we compared it to a human analyst for qual. We had it ingest and summarize a set of qual bulletin boards and asked it to write up the emergent themes. Meanwhile, someone on my advanced analytics team had GPT-3 produce a summary comparable to the report our qual team wrote on the same study. It nailed the same top three ideas and produced content fairly similar to our team’s. Now, you wouldn’t just deliver that to the client, I think. But, still, the proof of concept is that it’s extremely good at text analysis; text is its playground.
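The workflow Joel describes can be sketched in a few lines: gather the bulletin-board responses, build a single prompt asking for emergent themes, and send it to an LLM. The helper below is an illustration only; the sample posts are invented, and the commented-out API call uses the OpenAI Python SDK and a model name that are assumptions, not details from the interview.

```python
# Minimal sketch of the proof of concept: ask an LLM for the emergent
# themes in a set of qualitative bulletin-board posts.

def build_theme_prompt(posts, n_themes=3):
    """Assemble a single prompt asking the model for emergent themes."""
    joined = "\n---\n".join(posts)
    return (
        f"Below are {len(posts)} bulletin-board responses from a "
        f"qualitative study.\nIdentify the {n_themes} most prominent "
        "emergent themes, each with a short supporting summary.\n\n"
        + joined
    )

# Invented example posts; a real board would hold many more per participant.
posts = [
    "I only buy the brand my mother used; switching feels risky.",
    "Price matters less to me than knowing the ingredients are natural.",
    "A friend recommended it, and now I recommend it to others.",
]

prompt = build_theme_prompt(posts)

# Sending the prompt to a live model (shown with the OpenAI Python SDK;
# the client call and model name are assumptions):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

As with the teams’ own test, the model’s output would be a starting point for a human analyst to vet, not a client-ready deliverable.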
Sebastian: Before we get into the practical applications of ChatGPT, I want to get from both of you a sense of (using your imagination, thinking a little bit into the future) how you envision the emergence of large language model technology changing the nature of insights.
Ian: At least to me, and probably to a lot of people who were less on top of exactly where AI was going, it seemed like a startling leap forward, because we had expected AI to evolve stepwise. We expected it would handle the more menial tasks first, and that it would be a long time before it got to higher-functioning thought tasks. Instead, it leapfrogged over all of that and went straight to the higher-functioning, thought-based work. It doesn’t necessarily apply in the places where you might want it to automate the more mundane parts of the job; but, theoretically, it can replace some of the parts of the job that people tend to enjoy more.
AI can summarize a lot of unstructured data in really concise and intelligent ways. That’s going to be a challenge for every knowledge worker for sure. But definitely within market research as well because, to Joel’s point, we thought it would be a long time before we had commentary within presentations or summaries of large amounts of unstructured data coming from AI. But we’re actually there right now. Now it’s about how you incorporate this into your processes in a way that still has a human being to vet, review, and revise. Because, you know, the one thing that I’ve noticed about ChatGPT is that it will give a very good-sounding answer that is not always right.
It’s kind of like a human being in that way: if you don’t know the topic well, or you don’t know what you’re talking about, and you just take it at face value, you can actually end up with a very good-sounding, wrong answer.
Sebastian: It sounds like you’ve found a role for it in the activities we tend to think of as the higher-order tasks (analysis, moderating, reporting). Have you found a use for it in the operations side of things?
Ian: The application isn’t as obvious, right? Because ChatGPT is great. You ask a question, and it gives you an answer. It’s great at analyzing unstructured data, but it’s not necessarily designed to tackle unstructured tasks. So, you’re not going to say to it, “Create a PowerPoint report out of this, and make it look good.” Not yet.
Joel: It’s interesting, too, because something like Siri or Alexa is really the opposite. They’re not good at general understanding and unstructured random question-and-answer type stuff. But they are good at integrating directly with hardware to do tasks like setting a calendar or sending a text message.
Sebastian: What are you guys seeing in terms of the way end clients are talking about ChatGPT? How are their expectations, relative to our industry, changing?
Ian: My experience to this point is that when I mention to a client, “Why don’t we take your core ideas and supplement them with AI-generated ideas?” most of them haven’t thought of it, haven’t heard of it, and certainly haven’t tested it. The reality is, they can now do that themselves in ChatGPT; they don’t really need us to do it for them at all.
The big question for all of us who are offering services and/or have technology is: where’s the value in our integrations? Because if all we’re doing is something clients can do on their own, then why would they pay us? We were doing AI-generated ideas months and months ago, but somebody can now do that themselves using ChatGPT. However, if we had it integrated directly into our platform, so it was like, “Okay, I’ve got 10 ideas, and I press a button to add 10 more,” that’s useful, because anything that removes a layer of work on the client’s side, even if it’s just going into another application, could be seen as an assist.
Joel: One thing I would add. Ian, I totally agree that clients could do it themselves, but I suspect a lot of them won’t, because there’s always going to be a bit of a barrier to doing this type of work. Right now, ChatGPT is free to use, but it’s in a testing and beta phase; I don’t think it’ll be free forever. Anyone could have been using GPT-3 in the playground over a year ago, and very few people were. ChatGPT took everybody by storm. But I kind of see it like video: anybody can edit a video of their own wedding, but people still pay a videographer. You can take photos of your own family, but you still pay a photographer. I think there will always be a need for specialization in that area. There are tons of academic papers constantly being written and published about how to get the most out of GPT-3, and out of large language models in general.
Sebastian: I think the point’s well taken; we and our clients already are kind of on a level playing field in terms of many of the tools at our disposal. Our clients don’t have us write a screener because they can’t use Microsoft Word, right? They ask us to do these things because of our expertise; we understand how to employ the same tools in a way that’s going to produce a better result. I think the term that I’ve heard kicked around in relation to ChatGPT is “prompt engineering.” In a research context, do you think that kind of skill set is where we can continue to add value?
Ian: Well, yes and no. I think prompt engineering will become a very valuable skill. My concern is that there already are prompt marketplaces; you can already buy “101 prompts that’ll get you to a great term paper.” I think there will have to be a lot of strategy around where and how you’re adding value beyond things that will become so automated, or so universally accessible, that they don’t have much value anymore.
Sebastian: Where do you see these value-adds coming from? How will we, as an industry, continue to add value in an environment, maybe two years down the line, where ChatGPT gets more mature and more widely adopted?
Ian: That’s a really hard question because, as I said, it depends on where the large players decide to deploy, right? Let’s say Microsoft decided that PowerPoint’s main function was business strategy and built a slide-summary integration. Well, now you’ve basically given everybody an equal ability to have a team of analysts summarizing data for them. I don’t know if it’ll go that far anytime soon, or whether Microsoft will make that a focus; that’s just an example. It’s sort of like we need to search for the space where someone really big isn’t going to make it universally accessible to everybody. We also need to find spaces where you can integrate AI with your own tools, processes, or the deliverables your company provides in a unique way that automates things an individual wouldn’t be able to do on their own.
Joel: I think we’re at the beginning of this. I don’t think people are losing their jobs tomorrow. It’s like when spreadsheets were created. Before that, people were doing tabulation and manipulation in a much less efficient fashion. It’s not like we don’t have accountants anymore. They just are more efficient, and they can provide more types of analysis—the technology just unlocks the door to more things being done. I think that the jobs will change a little bit. There will be a somewhat gradual transition to overseeing AI doing some of the tasks we do currently. Who knows what 10 years from now is going to look like—it’s hard to predict.
Ian: I think you can look at quant as the example of what will also happen within qual: timelines became shorter, and there was downward pressure on price. It doesn’t mean that companies don’t still exist in the quant space; they obviously do. But it became a necessity that you had some technological advantage, or you were in serious danger of falling prey to companies that did. Even large companies, like Ipsos, realized eventually they had to go and build their own platforms. I think the same thing is going to happen to qual. The problem in the qual territory is that it’s much more fragmented with much smaller players. It’s really hard for a two- or three-person shop to say, “I’m going to compete with a company that’s automated far more of the process than I’m able to do and compete on price.”
Sebastian: As the market research industry looks forward, LLM technology seems certain to have a place in its future. I’m thankful to Joel and Ian for helping texture our picture of the AI-enabled future of insights.