Determine the True Impact of Your Facebook Ad Campaigns with Lift Attribution Testing

by Andrew Krebs-Smith | September 19

A Typical Scenario
Imagine that you’re currently in charge of paid Facebook advertising at a leading retailer…and it’s Monday morning.

You are now going through a report of your weekend ad spend. In this scenario, you are only allocating budget to two Facebook audiences: Cart Abandoners and New Customers. Here's what the Facebook data is telling you.

[Image: Facebook-reported weekend results by audience]

So you say, "This is great, we have a 7x ROI overall!" You put down your coffee and walk to the CFO's office to ask for a significant increase in your paid Facebook advertising budget. When you get there, you excitedly share your findings. However, your CFO isn't nearly as excited as you are. He says, "Hey, wait!" and proceeds to explain that his own recent purchase is part of the Cart Abandoner audience ROI, and that he thinks the reported number may not be right.

To explain his concern, he tells the story of that purchase. Yesterday, just as he was about to leave the office, he decided to buy something online. He quickly added the item to his cart, then realized he would miss his train, so he stopped what he was doing and left the office without completing the transaction. He thought to himself, "No problem, I'll just finish this when I get to the office tomorrow morning at 9am."

At 8am the next morning…

On his phone, on his way to the office, he sees a Facebook ad. Although he doesn't engage with the ad, he gets to the office and completes his purchase just as planned, at around 9am. The purchase, of course, shows up in Facebook as a Cart Abandoners purchase caused by the ad he saw at 8am. However, that ad had nothing to do with the purchase; it just happened to precede it.

The CFO's purchase is reflected in the 10x ROI of Cart Abandoners, but you now know that the 10x isn't real. Your CFO then makes clear that the budget increase is not approved.

So, you go back to your office and consult with your agency. They recommend running a lift test to determine just how much of the Cart Abandoner audience ROI you should actually take credit for. A plan is devised and the results are in. You are now ready to go back to your CFO and present a sound story backed by data.

You share new data with your CFO.

You now know that 70% of the revenue associated with Cart Abandoners would have happened anyway, regardless of your Facebook advertising campaign. That leaves 30% of the revenue that was actually caused by your ads. So it's true: there is an issue with the Facebook-reported numbers. Your CFO's purchase is clearly in the 70% that you will discount from Facebook's reporting.

The true ROI of your ads is only 3x for Cart Abandoners (not the reported 10x) and still 4x for New Customers. You are now in a much better position to ask your CFO for additional budget. It might not be quite what you were originally asking for, but these numbers are still really good and warrant scaling your spend.

[Image: lift test results by audience]

Understanding the real impact of your ads not only gives you confidence, it also helps you decide where to put your budget. For example, relying on the original Facebook data, you'd allocate more budget to Cart Abandoners, because that audience appears to have a much higher ROI. However, when you look at the post-lift-test data, you would actually want to allocate more spend towards New Customers, since it has the higher true ROI.

A Little Vocabulary

For the numbers and labels in the chart above, let's use the following nomenclature:

  • In-Platform ROI (or Facebook reported ROI) = Gross ROI
  • What % is Real = Incrementality % (the portion you can take credit for)

So here's the formula:

  • Gross ROI x Incrementality % = True ROI
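
As a quick sanity check of the formula, here is a minimal sketch in Python using the hypothetical figures from the scenario above (a 10x gross ROI at 30% incrementality for Cart Abandoners, and a 4x gross ROI at roughly 100% incrementality for New Customers):

    def true_roi(gross_roi, incrementality):
        """True ROI = Gross ROI x Incrementality %."""
        return gross_roi * incrementality

    print(true_roi(10, 0.30))  # Cart Abandoners: 3.0x true ROI
    print(true_roi(4, 1.00))   # New Customers: 4.0x true ROI
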
Who Can Run Lift Testing

There is a misconception that only companies with online-only businesses can successfully run lift tests. However, the short and simple answer is that everyone can run Facebook lift tests. The advanced measurement techniques required for lift testing are now available to all businesses, for both online and offline transactions.

For businesses looking to track offline conversions attributed to Facebook, there are a few different ways to go. In-store revenue can be measured using:

  • GPS (great for suburban locations)
  • CRM data (great for companies with a lot of PII that can be associated with conversion)
  • Older technologies using beacons

All of these tools were developed by Facebook for advertisers trying to understand how digital ads drive in-store, offline behavior. Social Fulcrum has also helped to develop parts of these core products over the past three years.

One important note: some of our best data on the impact of Facebook ads on revenue comes from companies leveraging Facebook ads to drive in-store behavior. These ad technologies are fairly new and under-adopted, which creates a short-term opportunity for retail-focused advertisers.

A Look at Facebook’s Traditional Attribution Process
We will use a few examples to look at Facebook’s traditional attribution process.
  • In these examples, we will assume a 7-day click and 1-day view attribution window.
  • This means that Facebook will take credit for any purchase that occurs within 1 day of viewing an ad or within 7 days of clicking an ad. In this case, Facebook always prioritizes last-click attribution. Beyond 7 days after a click, Facebook does not attribute any purchase behavior to the ads.
First, let’s look at examples where traditional Facebook attribution works:
  • A potential customer sees an ad Monday morning at 9am and at 8am on Tuesday they make a purchase. This is within Facebook’s attribution window, so Facebook attributes this purchase to the campaign. In this example, let’s imagine that the Facebook ad actually caused the purchase. This means that it’s completely correct that Facebook takes credit for the purchase.
  • A potential customer sees an ad Monday morning at 9am. However, they never make a purchase. Facebook correctly observes that this person makes no purchase, and correctly attributes no purchases to the campaign.
Now, let’s look at examples when traditional Facebook attribution breaks:
  • A potential customer sees an ad Monday morning at 9am. However, this particular customer needs to do some research before making a purchase. After researching the product, they decide to make the purchase 8 days after seeing the original ad. In this example, let's imagine that the ad actually caused the purchase. However, because of the attribution window we are using, Facebook will not attribute this delayed purchase to your campaign, and the campaign misses credit it deserved.
  • A potential customer already knew they were going to purchase this week. They see an ad after adding a product to their cart, and then when they purchase the next day, Facebook attributes this purchase to its ad spend. In this case, Facebook incorrectly took credit for this purchase. This purchase was going to happen anyway, regardless of the ad; the ad just happened to precede the purchase.

    For well-established brands that have reached a meaningful level of background purchase behavior and brand recognition, this is the most common way credit is incorrectly assigned to Facebook ads.

Attribution tells you if an ad preceded a purchase, NOT if it caused the purchase

Given the examples above, the traditional attribution process that Facebook uses can be described as follows:
  1. Choose attribution window
  2. Run ads
  3. Watch purchases
  4. If purchase is within the time window, give ad FULL credit
  5. If purchase is outside the time window, give ad ZERO credit
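
To make the rule concrete, here is a minimal sketch of window-based attribution in Python, assuming simplified timestamps and the 7-day click / 1-day view windows from the examples above (an illustration only, not Facebook's actual implementation):

    from datetime import datetime, timedelta

    CLICK_WINDOW = timedelta(days=7)  # full credit within 7 days of a click
    VIEW_WINDOW = timedelta(days=1)   # full credit within 1 day of a view

    def ad_gets_credit(purchase_time, click_time=None, view_time=None):
        """Return True if the purchase falls inside the attribution window."""
        if click_time and timedelta(0) <= purchase_time - click_time <= CLICK_WINDOW:
            return True  # click attribution is checked first
        if view_time and timedelta(0) <= purchase_time - view_time <= VIEW_WINDOW:
            return True
        return False     # outside the window: zero credit

    # The CFO-style example: ad viewed at 8am, purchase at 9am the same day -> full credit
    print(ad_gets_credit(datetime(2018, 1, 2, 9), view_time=datetime(2018, 1, 2, 8)))  # True
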
Key Issues with Facebook’s Traditional Attribution Process
  • We have to pick our attribution window arbitrarily. Unfortunately, there is no great way to make this decision in a thoughtful, data-driven way. The window is also one-size-fits-all on Facebook, which is challenging because the same attribution model should not be used for new customers vs. past customers, or across brands and products with completely different buying behaviors.
  • Standard attribution windows work poorly for high-consideration products with delayed purchases, such as life insurance or school enrollment.
  • Traditional attribution windows only help us measure behavior that followed an ad, not behavior that was inspired by an ad.
How Do We Guarantee That Facebook Is Correctly Taking Credit for Purchases It Actually Caused?

When we turn to lift testing, we are actively moving away from arbitrary, human choices around attribution. Rather than time-based attribution, we instead want to use a data-driven attribution model that can correctly award credit to Facebook ads over longer periods of time.

Lift testing lets us give correct, partial credit to purchases across a huge time window

We might be tempted to conclude that these two approaches are no different: time-based attribution uses a 100% multiplier for 7 days, and lift-based attribution instead uses a 70% multiplier across all of the days. Seems like another arbitrary decision, right?

Well no, that's actually not right! The whole point of a lift test is to use test design and data to determine what our X% discount should be over as much time as we can track. Since this discount is based on data, we can feel confident that lift-based attribution is more accurate than time-based attribution.
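
As a small illustration of the difference, here is a sketch using the 70% multiplier from the example above and made-up purchase lags, showing how the two models credit the same set of purchases:

    # Made-up example: days between ad exposure and purchase for five buyers
    purchase_lags_days = [1, 3, 6, 9, 20]

    # Time-based attribution: 100% credit inside the 7-day window, 0% outside it
    time_based_credit = sum(1.0 for lag in purchase_lags_days if lag <= 7)

    # Lift-based attribution: a data-derived multiplier applied to every purchase we can track
    incrementality = 0.70
    lift_based_credit = incrementality * len(purchase_lags_days)

    print(time_based_credit)  # 3.0 purchases credited
    print(lift_based_credit)  # 3.5 purchases credited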

How We Run Lift Tests
We will use one of our clients, Custom Ink, to demonstrate how lift testing worked for them.

The first step in running a lift test is to randomly divide your audience into a test group and a control group. The percentage split can vary based on audience size and budget; the main goal is to ensure statistical significance while minimizing the size of the hold-out audience.
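
Here is a minimal sketch of that first step, assuming a simple 80/20 split over a hypothetical list of user IDs (the mechanics inside Facebook's testing tools differ, but the idea is the same):

    import random

    def split_audience(user_ids, test_share=0.8, seed=42):
        """Randomly assign each user to the test (real ads) or control (PSA) group."""
        rng = random.Random(seed)  # fixed seed so the assignment is reproducible
        test, control = [], []
        for uid in user_ids:
            (test if rng.random() < test_share else control).append(uid)
        return test, control

    test_group, control_group = split_audience(range(100_000))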

To run a lift test, you show real ads to an exposure group, and Public Service Announcement (PSA) ads to a control group

The test group (in our example, 80% of the audience) will get real Custom Ink-branded ads. We show this portion of the audience Custom Ink ads and then track the revenue behavior attributed to those ads within Facebook. At this point, this is not a lift test; this is just regular digital advertising.

The key to making this a lift test is what we do with the remaining 20% of the audience (the control group).

The control group functions like a placebo group in a medical trial. In medical trials, the control group doesn't take the real medication; instead, it takes a placebo. The same is true for lift tests. The control group (the other 20%) gets PSA (Public Service Announcement) ads, rather than being held out and shown no ads at all.

By showing the control group a PSA, we can track their Custom Ink purchase behavior, since we served the PSAs from within our campaign. To be clear, we are not tracking donations to the charity shown in the PSA ad; we are tracking Custom Ink revenue after someone in the audience sees a PSA ad.

Facebook will not give you data on audiences you are not advertising to. That is why you need to spend a portion of your advertising budget on PSA ads rather than showing no ads at all; there is no other way to get data on the purchase behavior of your control group.

The control group data captures the purchase behavior of people who see PSA ads, which realistically represents what they would have done had we not advertised to them at all.

The comparison between these two groups represents the real impact of Custom Ink’s Facebook ads.

We observed that the Custom Ink ads claim 10 purchases per 1,000 people, whereas the PSA ads claim 3 purchases per 1,000 people. Taking the difference, we conclude that the Custom Ink ads are actually driving 7 incremental purchases per 1,000 people. This means that 70% of the 10 purchases are actually driven by Facebook ads (not the 100% that Facebook was reporting), and 30% of the purchases would've happened anyway, regardless of our campaign.
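
Here is that arithmetic spelled out in Python, using the purchase rates from the Custom Ink example above:

    test_rate = 10 / 1000    # purchases per person in the test group (real ads)
    control_rate = 3 / 1000  # purchases per person in the control group (PSA ads)

    incremental_rate = test_rate - control_rate    # 7 incremental purchases per 1,000 people
    incrementality = incremental_rate / test_rate  # 0.70 -> 70% of purchases caused by the ads
    baseline_share = control_rate / test_rate      # 0.30 -> 30% would have happened anyway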

This takes us from assigning 100% credit to purchases made within 7 days of an ad, to assigning 70% credit to purchases made within Facebook's full 28-day attribution window.

5 Key Elements of a Successful Lift Test Design

[Image: 5 key elements of a successful lift test design]

Results We Typically See

Based on lift tests run across many of our clients, we typically see a high-level trend: prospecting on Facebook is close to 100% incremental, while retargeting and retention are 25-50% incremental.

For most clients, we see a range of ROI between 3x and 25x, depending on campaign scale, client vertical, and type of targeting (prospecting, retargeting, retention).

Lift testing has opened up a valuable opportunity for us to work closely with our clients to understand the real impact of advertising on Facebook. It allows our clients to thrive in a world that isn't restricted by assumptions or arbitrary time-based attribution windows, but rather one that is accurate and driven by data.

About the author

Andrew Krebs-Smith

Helping B2C Retail/Ecomm companies test, measure, and scale digital marketing. Trying to fix the agency model so that agencies are accountable for every client media dollar they spend.