Continuous Behavioural User Research – Customer Insights as a Service

User Experience · Customer Experience · User Research · Insights · Change Management

September 17

Marc-Oliver, Lead UX Designer

At Appnovation, we recently started offering our clients user research on a retainer basis. The new service delivers critical insights more frequently, helping our clients better understand how their customers’ behaviour changes over time, and make informed design (and business) decisions faster, with more confidence. In this article, we share our overall approach to recurring, continuous, behavioural user research; how we effectively run experiments and conduct remote testing sessions in multiple markets simultaneously; and how we socialize the findings so they can feed into an organization’s change management process.

Many organizations underuse the power of their website when they only leverage it as a tool to publish news and stories, promote products and services, or as a touchpoint to manage peak times for their customer service centres. Some of you may be thinking that I’ve just described your own company. Unfortunately, I find this to be the state of affairs in far too many companies.

We always suggest to our clients that their website can do so much more when it comes to customer engagement, user-generated content, platform strategies and new digital services that monetize their most committed fan base. But did you know that the internet is also a great tool for getting to know your customers’ preferences and behaviours more thoroughly, with little to no effort? Here is why:

  • Behavioural user experiments often need large samples, and the internet makes large sample sizes readily available.
  • You can use your own website and social media channels to recruit test participants easily and often.
  • You can capture not only what your customers say (opinions) but, more importantly, what they do (behaviours).
  • Your website is built for iterative change — an experiment is an iterative change.
  • Probing a new design concept is one of these powerful experiments.
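To make the “large samples” point concrete: before running a behavioural experiment, it helps to estimate how many participants each variant actually needs. Here is a minimal sketch using the standard two-proportion power calculation (two-sided z-test); the 4% → 5% task-completion lift is a hypothetical example, not data from a real study:

```python
from math import sqrt, ceil
from statistics import NormalDist


def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate per-variant sample size for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_baseline + p_expected) / 2          # pooled proportion
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_expected * (1 - p_expected))) ** 2
    return ceil(numerator / (p_baseline - p_expected) ** 2)


# Hypothetical: detecting a lift from a 4% to a 5% task-completion rate
print(sample_size_per_variant(0.04, 0.05))  # several thousand users per variant
```

Numbers like these show why small, one-off lab samples can’t answer behavioural questions, while a reasonably trafficked website can.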

So, why do more organizations not take advantage of easy-to-conduct behavioural testing and experimentation? We asked ourselves the same question, and the answer might simply be that they are busy executing their established business models. Or perhaps they don’t want to talk to customers, because they assume that marketing and business analysts already have all the relevant information. Or they lack the resources or skill sets necessary to conduct behavioural user studies. Oftentimes, however, they simply need a bit of a push to get the ball rolling internally. The truth is, everybody can do some basic (behavioural) user research, and everybody should do it more often, more thoroughly, and with more confidence. This is our approach to continuous behavioural research.

Ditch focus groups and preference tests. Customers’ preferences and desired outcomes cannot be quantified in a reliable and valid way; statistical theory and psychology broadly agree on this.

What We've Learned so Far

Many companies we spoke to and worked with generally follow a one-off, project-based approach to (behavioural) user research. This entails engaging in such research perhaps once or twice a year, or whenever a critical new feature of their website is about to go live. At such a time, a user researcher gets hired (the one unbiased person in the room) to conduct qualitative and quantitative research. They pick a small sample of the current user base to run some formative and summative research on a piece of the product, just so they can validate the highest-paid person’s opinion within the company with a Likert scale. It might be common and practical, but here are a few things that are wrong with this approach:

  • Behaviour changes over time. People change over time.
  • A researcher's biases change over time.
  • Tech changes all the time and people are easily manipulated by tech.
  • Value changes and is non-linear.
  • Lastly, putting a number on something doesn’t make it quantifiable.

Instead of continuing with this one-off/project-based approach, we simply suggest that you carry out your research on a more frequent, continuing basis. Perhaps once a month or once a quarter. Let it become part of your company’s learning routine. Good researchers know that it’s not enough to take a measurement of something at a point in time. Taking a snapshot of your data won’t work because the things you are measuring are always in flux and are susceptible to outside influences. Instead, you must understand the system that is generating those data points.

Our Recommended Approach

When clients show interest in our new service, we usually start out by assessing their business goals, their product/website vision, and their strategy. We want to get a good picture of the business and product development stage they are currently in (ideation, development, traction, transition, growth). We also want to grasp their teams' capabilities and the things they have done in the past. We want to understand what has worked for them, and what has not; what data they collected already and how good/relevant this data is.

We continue by running a quick UX and tech audit of their website. We compile a rough feature and content inventory, then take two to three days to understand their current customer base, diverse audiences and markets. The goal here is to establish reference customers. Reference customers are real customers (not friends or family) who are running or using your product/website in production (not a trial). They have actually paid for some of your features, products or services. They are not hypothetical personas, and they should represent each target market. Having a list of your reference customers will help you recruit the right test candidates, phrase a precise research hypothesis and select appropriate research methodologies. We’ll come back to them later.

We finish the ‘warm-up’ phase with a stakeholder workshop. There, we openly share insights, thoughts and ideas, and start defining research goals and some compass success metrics that help guide future research initiatives for the next 12 months. This is also a good opportunity to select an ambassador from the company who can support us by organizing things, gathering information, connecting relevant people and passing on research results.

We treat the first 3 months as a trial, since there are many factors we have to consider to make the process eventually run smoothly, with little to no effort. Again: for research to happen on a regular basis, it needs to be easy to start, easy to facilitate, and easy to share. All findings need to be broken down into actionable items that cater to other departments’ schedules and their capacity to make quick changes to course-correct the business, products and services. Otherwise, what’s the point of collecting insights in the first place if you don’t implement the recommendations that come out of them?

“Not everything that can be counted counts. Not everything that counts can be counted.” 
William Bruce Cameron

Process & Tools

After having developed a holistic view of our client’s organization, their website and all the other digital touchpoints and services, we can come up with an initial recommendation for the research cadence and appropriate methodologies, based on a prioritized list of ‘unknowns’ and ‘treasured and riskiest assumptions’ that usually float around inside the company. We want to resolve these once and for all. We then schedule everything into a research calendar which we share company-wide, so everybody can join or call into live research sessions. Here are some of the things we want to understand with research:

  • What people really do with your products and services – from initial trial to replacing them with an alternative, and beyond.
  • What your customers’ content consumption patterns are, and how they change over time.
  • What prevents customers from using your website, services and products successfully over the whole product usage life cycle.
  • How well users adapt to changes, and how fast they learn new features.

Still, our goal is to make all this happen with as little effort as possible, so we generally recommend conducting remote, unmoderated and moderated research. This allows us to do research beyond local borders and be more inclusive. We support these qualitative methodologies with follow-up surveys that include forced-choice questions and cognitive interview techniques. At times we run indirect and implicit memory tests, or throw in a diary study.

Research tools these days have become much more accessible, and are no longer expensive, clunky to use, or difficult and time-consuming to set up. Here is a list of tools we use most often, mixed with tips, and insights from our failures and learnings:

Recruiting & Scheduling

  • Use your website and social media channels to recruit from your own customer fan base. Facebook and Amazon Mechanical Turk can be used for additional participant recruiting; they also allow you to segment participants by preferences and demographics. Pay Mechanical Turk workers respectful rates: you’ll get more from them, and it’s good karma too. We’ve seen too many companies lowballing these people, and that approach can sour quickly. Remember, you’re in this research for the long run.
  • We recommend a scheduling tool such as Calendly to manage an intense day of moderated user testing. Time zones are a bit of a challenge, so factor them in.
  • Intercom Chat (or a similar tool) is useful to engage with website visitors and start an initial conversation. It takes no time to set up, but use it only on carefully selected pages.
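On the time-zone point: when testing in multiple markets on the same day, it helps to render each session slot in every market’s local time before sending invites. A minimal sketch using Python’s standard-library `zoneinfo` (3.9+); the specific date and zones are placeholders for whatever your study actually covers:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# One moderated session slot, anchored in the researcher's time zone
# (hypothetical date and zone for illustration).
slot = datetime(2024, 9, 17, 16, 0, tzinfo=ZoneInfo("America/Vancouver"))

# Example markets — swap in the zones your participants live in.
markets = ["America/New_York", "Europe/London", "Asia/Singapore"]

for zone in markets:
    local = slot.astimezone(ZoneInfo(zone))
    print(f"{zone:20s} {local:%a %H:%M}")
```

A 16:00 slot in Vancouver lands at midnight in London and 07:00 the next morning in Singapore — exactly the kind of surprise you want to catch before the invites go out.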

Moderated & Unmoderated Remote Testing at Scale

  • We probe and test early concepts and design variants by building them out in a prototyping tool such as Axure, Principle or InVision. We then use Lookback to invite test participants, share tasks, and record and stream sessions to all our observers. These design concepts often look like real websites or mobile apps but have limited, selective functionality catering to the user task we are testing. This makes them easy to change and iterate on as findings come in. I wrote in detail how to set up automated, unmoderated remote user testing here. Use Trint to transcribe audio recordings.
  • Optimizely is great for live A/B and variant testing, but since you operate on a live web page, it takes a bit of developer commitment. We only use this when we need a large sample size.
  • Use Dovetail for a diary study. It’s easy to set up and people can self-report their experiences. At times, we simply use a WhatsApp group to loop test participants in and gather (behavioural) feedback regularly.
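For readers curious what A/B tools like Optimizely do under the hood: the core trick is deterministic bucketing, so a returning visitor always sees the same variant. This sketch is illustrative only — it is not Optimizely’s API, just the general hashing technique:

```python
import hashlib


def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user: the same user always
    gets the same variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]


# Stable across page loads and sessions — no cookie required:
assert assign_variant("user-42", "new-checkout") == \
       assign_variant("user-42", "new-checkout")
```

Because assignment depends only on the user ID and experiment name, you can compute it client- or server-side without storing state, and the split stays roughly even across a large sample.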

Moderated Testing

  • Occasionally, we need to leverage eye-tracking research techniques that allow us to get a less ‘primed’ perspective of what people are actually looking at when performing certain tasks on a website or mobile app. It takes a bit more preparation and resources, but can also be done on a regular basis. We usually recommend this be done four times a year, depending on the type of test stimulus and the hypothesis we need to validate or invalidate. I wrote about eye-tracking here.

Sharing Results & Change Management

  • We try to be as transparent and as nimble as possible, so we aim to share results while they are being collected, via live sessions and dedicated internal chat channels. So at the beginning of a research stretch, set up a channel and invite people who might be interested in joining sessions and are willing to distribute the findings company-wide.
  • Again, don’t bury your findings in a PDF that ends up collecting dust on a shelf, in a hidden folder or on a forgotten Confluence page. We often ask our clients for a dedicated physical space inside the office, where we can post or stream (via a monitor) our findings regularly, so they are always up to date.
  • Every 3 months, we host an internal workshop where we share our own interpretations and recommendations with a smaller group of people. We then also talk about some of the behavioural diagnosis tools you can use to make sense of the data you collect and avoid misinterpreting customer behaviour. We often see people mixing up what customers say with what they actually do, what is intentional and unintentional behaviour, and what is heavily influenced by previous experiences, the environment and other factors that shape people’s decisions and actions (schema theory). This is a great occasion to loop in people who need a new perspective from outside the company.

Summary

  • Understand your company's ability to perform research regularly – this will help set up the research cadence.
  • Know where you are in the dev/biz development cycle – this will help define research methodologies and how much information/insights you actually need (don’t collect data for the sake of it).
  • Establish a research calendar, dedicated team and reporting structure.
  • Start with a 3-month trial and adjust.
  • Happy testing!