Feature

How Getty Images Uses Data-Driven Insights to Shape Visual Culture with Dr. Rebecca Swift

Dr. Rebecca Swift shares how research and data-driven insights shape the future of visual content. She explains the evolution of Getty's VisualGPS methodology, which blends content analysis, semiotics, and consumer data to drive representation and authenticity in imagery. Swift also discusses the impact of AI-generated content, shifting trust in digital visuals, and Getty's approach to responsible AI tools. This conversation highlights the power of qualitative research in predicting trends and influencing visual storytelling in advertising and media.

By Zoë Billington
UX Research & Customer Insights
Los Angeles, California
zbillington@gmail.com


Dr. Rebecca Swift

This quarter, I spoke with Dr. Rebecca Swift, senior vice president of creative at Getty Images. Working closely with creators, art directors, and creative researchers globally, Swift plays a critical role in ensuring that Getty Images continually provides fresh, relevant creative content and insights that engage and inspire creatives and marketers around the world. Swift joined the photography industry over 20 years ago and was one of the founding members of the creative research team at Getty Images, introducing visual research methodology to the industry. In 2020, Swift was recognized as one of Ad Age's "20 Women to Watch" and as a Campaign magazine Female Frontier honoree. That same year, she was awarded "Most Influential Leader in UK Visual Media" at the Corporate Excellence Awards. Swift has a PhD in photography; her research expertise is in commercial creativity and the evolution of visual trends in advertising.

Zoë Billington: Let’s start as we always do with the 10,000-foot view of your career. How do you tell the story of your career and how you got to where you are today?

Rebecca Swift: I always talk about it as an accident. I did that classic thing of leaving university with an English literature degree, wanting to move into advertising, moving to London, and needing to find a job. I answered an ad for a photo agency that was looking for customer account executives. I imagined I would stay there for a couple of years and then move into advertising, because that's where I really wanted to be. But I stayed at the company, which eventually became Getty Images, and it was through working on the front line with customers that I started to think about how to take the information we had around the types of visuals that were popular at the time, or were used by certain industries, and create new content out of it. In short: how we could use sales data, which was retrospective, to predict forward.

We developed a team called the Creative Research Team to work on this. It straddled both the sales and marketing teams and the creative team. I ended up moving into that full time, and over a period of years, we developed the methodology that we now use, a combination of content analysis, a bit of semiotics, and data points from interviews with our customers. Then, as big data evolved, and we became an e-commerce business, we not only had the data of what was being downloaded and utilized, but we also had search data, so we had a good sense of people’s thought processes as they were searching on a massive scale.

More recently, in 2019 and 2020, just before COVID, we realized that there was a piece of the puzzle missing. We sell to businesses—massive businesses, small businesses, and freelancers working for businesses. So, essentially, it's a B2B market that we sell in. But we realized that we needed the consumer view, the general-population view. So, we started running surveys and doing image testing with consumers globally.

For example, if we have a hypothesis about a certain type of image, something we think could affect an image's efficacy, then we will test that with consumers. Right now, we're looking at things like: How easy are AI-generated images to identify? How do people come to AI content, and what is their reaction to it? What is the value of AI-generated content versus the spectrum of aesthetics you get from other mediums? By integrating all this, we created an infrastructure and methodology called VisualGPS, which includes the data we've always relied on (download data and search data from the last 25 years) and, more recently, consumer data.

VisualGPS has become the core of how we create content. We use it for content planning and for creating content at mass scale; we make 27 million assets a year. We work with a core group of 8,000 exclusive creators—producing their shoots, art directing them, being on set with them—to make content. We also do consulting when we're working with brands that are building a new visual library, or rebranding, or that have a certain project they want to work on. In those cases, we will take our VisualGPS data and use it to help them make decisions about which visuals to use in their campaigns, and then we go away and shoot it.

So that’s the history of the research methodology that we’ve had here. It started 30 years ago, but I’d say it’s become more robust in the last five years.

Zoë: What an incredible growth story. Can we talk a little more about what each of these components of the VisualGPS methodology entails, and what insight it brings to the business? Let’s start with the content analysis piece.

Rebecca: The problem with analyzing images and videos at scale is that you can break them down and categorize them to a certain degree, but an image has value beyond its constituent parts. For example, an image has to meet the brand guidelines of the company that's using it, there needs to be space for copy or whatever else they want to put onto the page, and the image has to have some kind of conceptual value.

So, we have a backend system where we can start to break down the millions of images into categories. We have a categorization system that assigns a level-one category and then breaks down into level two. Level one will be things like travel or lifestyle or business. Then underneath that, at level two, we're looking at composition, color palette, emotion, and photographic techniques.

Then there are factors that define whether an image is successful or not as a commercial image. That requires a deeper look beyond the two-level content analysis, and that's where the semiotics come in.

Content analysis is a difficult skill to train. When people come into our insights roles, they either come from research, your classic research agencies, or from some kind of fine art or even photographic background. Someone with both sides is a unicorn. So, we're very fortunate that the people we do have are really, really good.

Zoë: How about the semiotics piece? How do you approach and apply semiotics in this context?

Rebecca: Semiotics tends to come in when we are looking at a theme or hypothesis in a bit more depth. For example, we’ve done a lot of work around representation of underrepresented groups. If we were looking at disability, we would analyze where disability showed up in the category, and then we would go deeper into what each image is doing: How is disability shown? What is it representing? How is the person with a disability depicted in relation to other people and objects? We might even go deeper than that depending on which type of disability somebody has—whether it’s physical or they’re neurodiverse, for example. That’s how we can start to really delve in. But I have yet to come across any semiotics methodologies that allow large-scale commercial application. If anyone comes up with one, I’ll happily hear what they have to say.

Zoë: So, putting it all together, is there an example of something that your team picked up on through these combined analyses that drove a new kind of content offering?

Rebecca: Gosh, there are so many examples, but one that immediately springs to mind is when we first knew that brands wanted to diversify the body shapes within their advertising. There are great examples of brands, such as Dove and Aerie in the U.S., that spearheaded not only that research but those campaigns. When we delved into our content analysis, we saw that more images of people with larger bodies were being used and licensed for campaigns, but actually, the majority of those images were about being fit, getting fit, or eating healthily. In other words, the larger bodies were being used to talk about health and well-being, and not about having a family or being a business leader or going traveling. So, what was actually happening was that the depiction of larger bodies was still within this narrative of "you're big, and you're trying to do something about it." Knowing that, we pivoted in how we thought about body shape and created more content representing larger bodies in lots of different areas of life besides fitness.

When we looked at representations of disability, people with physical disabilities were usually seen alone and outside of society. Again, we saw an increase in depictions, but it wasn’t doing the work that it should be doing.

See, now you’ve opened the tap. Another example is when we were looking at transgender representation. I think every image that had been licensed of somebody who was transgender was by themselves, which is not helping the depiction of transgender people in the community. So, again, we made sure we brought more people into our images who identified as transgender and did so in the context of business and in friendship scenarios and in celebration images—using the opportunity to bring them out of that solitary “other” lifestyle into being a member of the community.

Zoë: What an important impact your work has had. Switching gears a little, to the topic du jour, how has the rise of generative AI impacted your role or focus in your role?

Rebecca: In terms of research, the biggest impact has been our focus on understanding the tolerance for AI-generated content.

We’re very involved in advocacy around the responsible development of AI models; we advocate for models that have been trained on licensed content, use released data, and don’t infringe on copyrights.

My team is also interested in the aesthetic value of AI content, where it's acceptable and not acceptable, and whether there is value in creating content through AI right now. People are experimenting because it's free or cheap. But, actually, the value of using an AI image for a brand is what we're interested in. There have been a lot of high-profile ad campaigns where the brand has said, "we've used AI to do this," but in terms of general advertising, we're still digging into how AI might be valued or not valued by the consumer. We don't allow AI-generated or modified content within our libraries on gettyimages.com, so we don't have proprietary data of our own to look at. But we're trying to understand it more through the consumer lens.

Zoë: Can you talk more about which brands might be finding value in AI-generated imagery versus those that are rejecting it, and why?

Rebecca: I'll tell you the industries that are not using it, such as healthcare and finance, because there's a lot of regulation about being truthful in advertising in those industries. Travel as well [is not using it], because there's an expectation that what you're looking at is what you will see when you arrive at your destination. The brands that are using AI are as expected; the automotive industry has been using CGI [computer-generated imagery] ever since I've been in the industry. So, AI is the next iteration of the technology that allows them to create the concept or the environment that the car is seen in. Then you've got brands like Coke, who took their own intellectual property (IP) at Christmastime and put all of the previous Christmas ads they had created into AI to create a new Christmas ad. There were mixed reactions to that, but it seemed to me like a fair use of their own IP. We've seen a little bit of it in the consumer-packaged-goods space as well. AI is seen as the new frontier for personalization in marketing. So, if you want to show somebody a Cadbury's bar, for example, with their name on it, you can use AI to mock that up and do it over and over again.

Zoë: Interesting how the cases of AI-generated imagery being used are for scenarios in which consumers are already used to highly edited imagery, or to reimagine images that the consumer has already seen before. So, there’s already a little bit of trust there to begin with.

Rebecca: Exactly, I think you've hit the nail on the head. The word "trust" is so important, and probably one of the most interesting things we've discovered through consumer research over the last two years is a shift from trusting what you see, to a certain degree, to coming to images with a sense of distrust and having to deconstruct whether an image is AI or not before you can trust it. That shift has happened really quickly. In response, companies like ours are building trust by not using any AI—taking away that step of having to deconstruct the image to determine whether there's AI in it—and by being very transparent about how we create our content, showing ourselves shooting on set. I expect to see a lot more brands doing that. Lastly, the labeling of AI is a big discussion within our industry, to build trustworthiness around what you're looking at.

Zoë: I understand Getty Images is also working on a new AI tool of its own. Can you tell me more about that, and how your research team has been involved in developing it?

Rebecca: We have not accepted AI-generated content onto our site, but we appreciate that our customers want to create their own AI-generated images, and we think it's important that the excitement around AI is around self-creation. We saw the Midjourney and DALL-E tools of the world coming over the horizon—[these are] AI models that were developed with content scraped from the web. So, we wanted to show that you could develop AI image generation responsibly, using AI tools that produce commercially safe content and remunerate the creators whose content they were trained on; and that's what we've done. We've built a tool with the ability not only to generate new content but also to modify content that you've licensed from us. If you want to fill in or remove objects, you can do that with our AI tools. But the imagery from this is only available to the customer creating it; it does not go into our library. We've also made it impossible to create anything that's not safe for work with our tool, and you can't recreate or copy brands or famous people.

My team has been very involved in building and testing the capabilities of the tool. Since it's a machine for creating synthetic content, we wanted to apply the same principles that we build into the images we humans have been creating for years, including the work we've done around driving representation in our imagery.

Zoë: Amazing how the research that you’ve already been doing for years was able to proactively inform this new offering. What advice would you offer our readers for being proactive in their work and careers?

Rebecca: We're in an era of quantitative data, and big data has been a buzzword for, I don't know, 15 years now. But I think there's now more realization that unless you put quantitative and qualitative together, you're not going to get the full human picture. I personally feel that the two should always go hand in hand; the key thing about qualitative data is that it gets you to foresight in a way that quantitative can't. Yes, you can do all sorts of statistical modeling, but really understanding human intention and human aspiration—meaning the things that we want to do and the things we desire to be in the future—I think you only get that from qualitative data. I have never, in all my years, been able to get to intention or future intention from quantitative data. I suspect that's because when you're looking at big data, everything becomes a bit more averaged out, and, taking my own research as an example, how people act on our site is not how they think they act or how they say they act. So, it's important to marry the two pieces of information to get closer to the truth and use that as a springboard for the next idea.

AI will help summarize a lot of big data into bite-sized chunks. But when it comes to interpreting what people are saying and how they're saying it, there are deficiencies. We know that it doesn't necessarily tell the truth, or it can hallucinate, so it's important to bring in the human eye.

Zoë: To wrap up, you have such an impressive bio. Is there an accomplishment or something along the way that you’re most proud of?

Rebecca: Definitely my PhD in photography. The reason I did it was to test our commercial research against academic research; my PhD was based on the research that we do at Getty Images, looked at over a long period of time. So, that was eight years of a lot of work. In terms of my work achievements, I'm most proud of the work we've done around depictions of women and bringing more female photographers into the community, through understanding why images of women were harmful and were not doing the work they should be doing. It was only in deconstructing the images, really understanding what each image says, and doing that at scale, that we saw how those images could do harm for female audiences. So, once we really understood where women were not represented, we could go out and rectify that.

Zoë: That’s a really inspiring story of how research can drive positive change at the societal level. Thanks so much for sharing your journey with us, Rebecca!