Feature

Usability Testing with Corner Store Owners in Pakistan: A How-To (and How-Not-To) Guide

When researching within the fast-and-loose world of corner store owners in Pakistan, Western design principles and user experience (UX) research methodologies often do not apply. Issues such as cramped spaces in stores, cracked phone screens, dialect differences, and a target audience of technology novices can derail your research. Based on personal experience, Usama Waheed (co-founder at dli5) explains why many traditional UX techniques do not work in these situations and then shares the methods he has developed to adapt to this market.

By Usama Waheed, Co-founder, dli5, 2022 QRCA Global Qualitative Researchers’ Award Recipient, Lahore, Pakistan, usama@dli5.com, Twitter: @designlikeim5, LinkedIn: www.linkedin.com/in/usama-waheed

Pakistan is at the forefront of a digital revolution. With the sizable Indian and Indonesian markets becoming increasingly saturated with funding dollars, investors are starting to turn their attention toward the world’s fifth-most populous country. In 2021, Pakistani start-ups in the digital space raised $300 million—which was more than in the previous six years combined.1

This burgeoning digital ecosystem requires the development of apps that the local population can understand and use. A population with low literacy rates presents a development challenge. While the local Pakistani teams building these products have education levels and skill sets comparable to those in Silicon Valley, much of their audience is relatively new to the online experience.

Within this context, it is critical to avoid the mistake of assuming that Western design principles will translate easily to a segment that can barely read English—and sometimes cannot read at all. So, how do you ensure that, in this market, your app will make sense to your users?

Enter Usability Testing

I like to think of usability testing as an interview with a prop. The prop is the app you are testing, and the usual guidelines of interviewing apply.

In a usability test, you present the app (or website) to a participant and ask them to complete a set of tasks, such as finding the customer support number or filtering search results. As they go about the task, you observe where they get stuck, what confuses them, and whether the interface and words are comprehended as intended.

Resources on how to conduct usability tests are plentiful, with Steve Krug’s Rocket Surgery Made Easy being an excellent starting point for markets with a high penetration of app-savvy users. But when testing with novice users out in the field, many of those assumptions and methods go out the window. Here, there are no comfortable offices, no calendars to send invites through, and no sophisticated multicamera recording setups. So, you have to learn to adapt to the nuances of this environment.

My design agency focuses on researching and testing digital products with low-literacy and novice users. For a recent project, we had to design a digital microlending app for corner store owners to help them order inventory with a buy now, pay later model. The app helps ease cash flow problems by paying for the stores’ inventory orders and allowing them to pay the company back over a period of three months.

To execute the design research for this app, we conducted three rounds of usability tests with about three to five users in each round. Between each round, we would go back to redesign elements of the app and present the new iteration to the next set of participants.

Based on my experiences, I developed the following pointers for conducting effective usability tests with low-literacy and/or novice technology users.

  1. Language Barriers

While literacy may be low in this Pakistani market, many participants tend to be bilingual; once you include regional dialects, a single conversation can involve several languages being spoken and understood simultaneously.

The variations in language and dialect make it harder to establish rapport and keep the conversation flowing. Even if you can understand the other person perfectly well in conversation, the nuance is sometimes lost under the constraints of having to speak a common dialect that neither the interviewer nor the interviewee is comfortable with.

Another challenge with usability testing in this environment is that some words simply do not exist in some languages. For example, “log out” in English has no direct translation in Urdu (the national language of Pakistan). So, most translated apps will retain “log out” in their designs, but it will be written in the Urdu script. For someone who is interacting with digital products for the first time, “log out” will make no intuitive sense until they are told what it does. As a result, testing the copywriting language used in the app takes on increased importance.
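Teams that keep such terms in a string table can cheaply put alternative phrasings in front of participants during testing. A minimal sketch of that idea, in Python; the variant names and Urdu strings here are illustrative assumptions, not copy from any real app:

```python
# Illustrative copy-testing table: each UI term gets several candidate
# phrasings to compare across usability sessions. The Urdu strings below
# are rough, hypothetical examples, not copy from an actual product.
COPY_VARIANTS = {
    "logout": {
        "transliterated": "لاگ آؤٹ",    # "log out" rendered in Urdu script
        "plain_language": "باہر نکلیں",  # a plain-Urdu alternative ("exit")
    },
}

def label_for(term: str, variant: str) -> str:
    """Return the candidate label shown to the participant this session."""
    return COPY_VARIANTS[term][variant]
```

Rotating variants across the three to five participants in each round makes it easy to see which phrasing needs no explanation.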

Then there is the issue of translating words on the fly. As any bilingual person will attest, when you are thinking in one language but speaking in another, you will often run into mental blocks where you simply cannot translate the thought well enough to convey exactly what you mean.

To overcome this, I have found it helps to translate your questions and key terms beforehand. Even if you know in your head what the question means, the flow of the conversation may be disrupted (and you may be perceived as an outsider) if you cannot find the right words at the right moment.

  2. Note-taking

When conducting usability tests in the field, it is useful to record the interview, so you can refer back to it in the office. But, in my testing experience, low-literacy users are often hesitant to provide consent for recording. Even when anonymity is assured, and the topic of conversation is benign, there seems to be inherent suspicion about anything being recorded. So, when recording the session is not an option, you have to rely on good old pen and paper to take notes. But that presents its own set of problems.

In these situations, a usability test tends to work best when there is one person leading the test and explaining tasks, while the other is observing and taking notes. But, in cramped stores and noisy outdoor spaces, there is often not enough space for two people, let alone three.

For people who are already not comfortable with technology, it is also intimidating to have multiple onlookers while they struggle with a task. Therefore, to avoid the embarrassment of not being able to complete the task, it is not uncommon for participants to dismiss the problems they are having and say, “I will figure this out later on my own.”

Hence, some tests must be done with just one tester (me), who also must simultaneously take notes. This can prove to be challenging in information-dense sessions where you have to keep your eyes on the screen to understand the participant’s interaction with the device, while also taking notes by hand and conversing in multiple languages/dialects.

In these situations, I find it best not to take notes at all during tests and instead be fully immersed in what is happening—although I do refer to my interview guide. Then, immediately at the end of the test, I record a quick voice memo of my thoughts and listen back at the end of the day to elaborate on them.

For longer tests, it helps to make one- or two-word notes as the test goes on, by way of a checkpoint reference to come back to and expand upon later.

  3. Scheduling and Time Management

In a world of smartphones, emails, and calendar apps, scheduling interviews with participants is relatively painless. But, when your participants come from segments that do not have these luxuries, scheduling and time management become a lot harder.

When we set out to do our first round of testing, I planned to conduct guerrilla research by walking into corner stores and politely asking the owners for five minutes of their time.

That did not work.

The problem was not that they did not have five minutes to spare. Most were happy, even excited to help. But because they were often the only person running the store, there were frequent interruptions as customers came in and needed assistance.

It was important that the product was tested in its natural context, so I could not ask them to come to our office either; and, in any case, they could not abandon their storefronts for that long. Eventually, I had to scrap most of our first round of testing because the participants simply could not focus on carrying the conversation for more than a few minutes at a time.

I tried again by scheduling 30 minutes with them in advance, at a time of their choosing, and making the incentives for participating clearer and up front. But even then, there were issues with availability. Sometimes the store owners would simply forget; other times the unpredictable nature of their workday meant they were just not available for an uninterrupted stretch of 30 minutes.

But from this frustration came an important user experience learning—we realized that we had to design for interruption. So, we incorporated elements in the app that gave high-level information at a glance and made completing tasks faster.
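One way to “design for interruption” is to persist a task’s partial state the moment the user steps away, so a half-finished order can be resumed after serving a customer instead of being restarted. A minimal sketch of that pattern; the `DraftStore` name and fields are hypothetical, not taken from the actual app:

```python
import json
import os

class DraftStore:
    """Saves in-progress form state to disk so an interrupted task can be
    resumed later. Hypothetical sketch; field names are illustrative."""

    def __init__(self, path: str):
        self.path = path

    def save(self, draft: dict) -> None:
        # Called whenever the app is backgrounded mid-task.
        with open(self.path, "w") as f:
            json.dump(draft, f)

    def load(self) -> dict:
        # Returns the saved draft, or an empty dict if none exists.
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def clear(self) -> None:
        # Called once the order is submitted successfully.
        if os.path.exists(self.path):
            os.remove(self.path)
```

For example, `DraftStore("order_draft.json").save({"items": ["milk", "rice"], "step": 2})` records where the owner left off; when they return, `load()` restores the half-finished order.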

While the app we were testing was highly context-dependent, this may not be true for all apps targeted at low-literacy users, some of whom may have more downtime available. In other words, financial services demand more time and concentration than social media apps do.

  4. Choose Your Device Wisely

To replicate the real-world experience of our users, we wanted to conduct our tests on a mobile device that could come close to what they used themselves, which in this case was a low-to-mid-range Android phone.

So, we went out and bought an Android phone in the low-end price bracket. But there were two problems.

First, these phones did not have enough memory (RAM) for our testing. While the actual, developed app would have run just fine on these phones, our testing was conducted with prototypes that required significantly more memory to run smoothly. As a result, the prototype tended to refresh (and interrupt the task) every few minutes. So, we ended up testing on a slightly higher-end device, which brought its own challenge: users were already afraid of handling a device that was not their own—and now it looked premium too, adding to their hesitation.

Second, we could not entirely replicate our users’ real-world experiences with our own devices. Regardless of which device we used, our shiny new device simply could not provide the same experience as a corner store owner’s smudgy, cracked display.

This was not a trivial consideration given that a significant portion of our users had messy workspaces and, as a result, messy hands. Their phone displays were much more difficult to touch or read than our squeaky-clean device. I would recommend spending extra time doing research on which low-end phone will support your prototype; buying a battered, used phone may not be a bad idea either.

  5. What to Say—and How to Say It

When testing with low-literacy users, it is important to try and bridge the class divide, and this can manifest itself in unexpected ways.

For starters, I noticed that many participants were eager to impress us with their knowledge of the app being tested. It is nice to give reassurances and nod along when this happens. But sometimes they would brush aside things that they did not know. As a usability researcher, it is precisely these parts of the app that provide the most valuable insights, so probing at these points is critical.

In these situations, you will want to choose your words very carefully: ask politely if they can go back and explain why they were not able to do the task they just skipped, but do not put them on the spot and make them feel incompetent—better to start off by asking why they “skipped” it rather than why they “couldn’t do it.”

Some participants tend to get defensive as well. I observed that sometimes when they struggled with a task, participants would try to convince us that they could figure it out later. Or, they would brush off the app as “easy” and consider it a personal affront that they were being “tested” on it—even when we made it clear that this was a test of the app, and not a test of their abilities. In these instances, where they were extremely defensive, I would consider that particular task a lost cause and quickly move on to the next one before the participant became too frustrated to continue.

It is also a good idea to disassociate yourself from the app while testing. I stress the fact that I am from an independent company that has no relation to the app’s developers so that they know they can talk freely. This also helps avoid complaints about pricing and customer service, because it is not unusual for participants to initially mistake you for a company representative. Sometimes I go so far as to criticize the app myself before asking them about it. Naturally, this introduces bias by priming participants to think negatively about the product. But, given that the point of a usability test is to uncover flaws with the app, I think it is preferable to introduce this bias than to have participants stay defensive about articulating any problems they may have with the app. In these situations, with low-literacy users and app newbies, I have found that framing the product in a negative light garners far more useful insights than staying perfectly neutral.

Conclusion

While I have shared my experiences here so that you may be better prepared for a similar project, the beauty and fun of researching in this segment is you keep getting thrown new curveballs. Even after several dozen interviews for a particular project, I never quite know what to expect for the next one, which keeps me on my toes and lays bare the utter folly of expecting to follow a script to the “T.”

Lastly, I share the usual disclaimer that small samples are not always generalizable to the larger population—especially when you are building products to be used by tens of millions of people. But, with this type of testing, you don’t need a large sample to improve the product’s design—which is, after all, the ultimate purpose of usability testing.

References:

  1. Faseeh Mangi, “Funding of Pakistani Startups Crosses $300 Million This Year,” Bloomberg, Dec. 8, 2021. bloomberg.com/news/articles/2021-12-09/funding-of-pakistani-startups-crosses-300-million-this-year