Embracing Change
We often get questions about how we built our unified measurement platform, Polaris. We thought that it would be helpful to share our thinking in this blog post.
The mobile advertising industry has always been a dynamic one, attracting brilliant minds who embraced its constant change and built businesses that wrestled with the messiness of advertising data to deliver ever-increasing value as they evolved. Now is no different. So we think it is important that, as we approach the privacy apocalypse and its measurement challenges, we commit once again to center ourselves, continue to embrace change, and innovate.
Let’s begin by taking stock of the positives. Firstly, this major industry shift offers increased user privacy, which is in alignment with our core values as end users ourselves and as a mobile ad tech company. Secondly, it allows the industry to move past the problems of last-click attribution and forge a new chapter of measurement innovation.
Thirdly, our unified measurement platform will ensure that you retain granular visibility so you can continue to optimize and execute successfully. You will find more detail on this later in this post.
Our Quest For The Future Of Measurement
The IDFA restrictions that rolled out in 2021 introduced a huge but welcome change to the mobile advertising ecosystem. Once a user upgrades to iOS 14, any iOS 14-compatible app that wants to access the device’s persistent identifier (IDFA), most often for retargeting, user profile linking, or measurement, needs to ask for the user’s permission (as shown in the example below).
This presented an interesting challenge for the industry, since so many players in the ecosystem rely on the IDFA to varying degrees. At MetricWorks, we’ve focused on how measurement data will flow in the post-IDFA world. Back in 2020, based on the spirit of Apple’s new terms, we decided to evaluate this challenge under the assumption that MMPs would no longer be able to send events with attributed channel, campaign, country, and publisher app information. In its current state, SKAdNetwork, Apple’s on-device measurement offering, does not seem to provide enough granularity, nor does it handle post-install events well, making retention and LTV prediction impossible.
Early in our search for a suitable measurement solution, we identified incrementality as an important piece of the puzzle. For those less familiar with the concept, incrementality refers to the incremental value directly caused by each advertising touchpoint in the user journey. This is impossible to measure with the last touch attribution model (Fig. 1 below), which credits the final touchpoint with 100% of the user’s value without considering that the user could have been acquired with fewer, more valuable touchpoints, or even with no advertising at all as an organic install.
A methodology known as incrementality testing is an ideal way to prove the causal relationship between advertising touchpoints (ad buys) and uplift. It prescribes a rigorous scientific process, similar to the randomized clinical trials used by pharmaceutical companies, in which a population of users is randomly split into a test group that is shown an ad and a control group that receives a placebo (often an unrelated public service announcement, or PSA). However, it can be costly: even though the control group only sees PSAs, you still have to pay for those impressions. An even bigger problem arrives with iOS 14: most forms of incrementality testing require a large list of device IDs so that the audience can be split, and iOS 14 makes that difficult to accomplish.
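To make the arithmetic concrete, here is a minimal sketch of how lift is typically computed from such a test/control split. All group sizes and install counts below are hypothetical.

```python
# Minimal sketch of computing lift from an incrementality test.
# The group sizes and install counts below are hypothetical.

test_users = 100_000        # users randomly assigned to see the real ad
control_users = 100_000     # users randomly assigned to see the PSA placebo
test_installs = 1_200
control_installs = 900

test_rate = test_installs / test_users            # conversion rate with ads
control_rate = control_installs / control_users   # baseline (organic) rate

incremental_installs = (test_rate - control_rate) * test_users
lift = (test_rate - control_rate) / control_rate

print(f"Incremental installs driven by ads: {incremental_installs:.0f}")
print(f"Relative lift over control: {lift:.1%}")
```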
Another powerful tool in the measurement toolkit that also considers incrementality is media mix modeling (MMM). This technique uses regression models to find correlations between ad spend and business value. As a top-down technique (Fig. 2), versus a bottom-up approach (Fig. 3) like last touch attribution that works at the device level, it eschews device IDs in favor of aggregated data and is therefore naturally aligned with user privacy. As you can see in Fig. 2 and Fig. 3 below, both top-down and bottom-up measurement techniques attempt to allocate the same users and app activity data (installs, opens, revenue) to the same four campaign/publisher app combinations in the proper proportions, but they come up with different answers.
Key Idea: Based on the measurement output of the two different methodologies, we can see that they somewhat agree on the value of ironSource campaign B, publisher C (red). However, ironSource campaign A, publisher A (purple) didn’t get credited much in the bottom-up last touch methodology (just 1 install with little revenue) while top-down shows that, even though it might not be getting the last touch, it is providing significant incremental value. On the flip side, bottom-up attribution gives Vungle campaign A, publisher A (orange) a solid amount of credit for last touches, but statistically, it is providing almost no incremental value. We would have acquired those users either through other campaigns or organically anyway.
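For readers who want to see the mechanics, below is a minimal sketch of what regression on aggregated data can look like, using an off-the-shelf linear model to map daily spend per campaign to daily installs. The data is synthetic and the model deliberately simplistic; a production media mix model would also account for seasonality, adstock, saturation, and the organic baseline in a more principled way.

```python
# Rough sketch of a top-down regression: daily spend per campaign -> daily installs.
# Data and coefficients are synthetic; real MMM adds seasonality, adstock,
# saturation curves, and richer baseline terms.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = 180

# Hypothetical daily spend for three campaigns (aggregated, no device IDs).
spend = rng.uniform(100, 1_000, size=(days, 3))

# Hypothetical "true" incremental installs per dollar, plus an organic baseline.
true_coefs = np.array([0.8, 0.1, 0.4])
installs = 200 + spend @ true_coefs + rng.normal(0, 25, size=days)

model = LinearRegression().fit(spend, installs)
print("Estimated organic baseline (intercept):", round(model.intercept_, 1))
print("Estimated incremental installs per dollar:", model.coef_.round(2))
```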
Coming back to the top-down media mix modeling (MMM) technique (Fig.2), let’s look at its advantages:
- Requires few inputs – mostly just a time series of spend and target outcome measurements.
- Robust to incongruities among ad channels, both online and offline, in terms of functionality and data availability.
- Can be used to predict the change in outcomes as a function of different spend inputs, which is quite handy for planning purposes, including what-if analysis.
- Can also be combined with optimization algorithms to work toward a given goal, which makes it useful for budget allocation (a sketch of both of these uses follows this list).
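To illustrate those last two points, here is a hedged sketch of what-if prediction and budget allocation on top of a fitted response model. The diminishing-returns curves and their coefficients are hypothetical stand-ins for whatever the regression step actually produces.

```python
# Sketch of what-if prediction and budget allocation on top of a response model.
# The response curves (diminishing returns per channel) are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted response curves: installs = coef * sqrt(spend) per channel.
coefs = np.array([12.0, 7.0, 9.0])

def predicted_installs(spend_per_channel):
    return float(np.sum(coefs * np.sqrt(spend_per_channel)))

# What-if analysis: compare two candidate spend plans.
plan_a = np.array([5_000.0, 3_000.0, 2_000.0])
plan_b = np.array([3_000.0, 3_000.0, 4_000.0])
print("Plan A installs:", round(predicted_installs(plan_a)))
print("Plan B installs:", round(predicted_installs(plan_b)))

# Budget allocation: maximize predicted installs for a fixed total budget.
total_budget = 10_000.0
result = minimize(
    lambda x: -predicted_installs(x),      # maximize by minimizing the negative
    x0=np.full(3, total_budget / 3),       # start from an even split
    bounds=[(0.0, total_budget)] * 3,
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - total_budget}],
)
print("Suggested split across channels:", result.x.round(0))
```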
MMM has its own problems though, which also eliminated it from contention in our quest for the future of measurement. When we attempted to apply it to mobile app advertising, some major issues became apparent including:
- Requires several years of data at a minimum due to aggregation at the week or month granularity (see the quick arithmetic after this list).
- Not used for quick decision making since it takes weeks or months to update a model with new data.
- Usually custom built for an individual advertiser by very expensive specialist consultants.
- Can only prove correlation, not causation (eating seafood may be highly correlated with personal wealth, but that doesn’t mean eating a lot of seafood will make someone more wealthy).
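A quick bit of arithmetic illustrates why aggregation granularity drives the data requirement, and previews why moving to daily data (discussed below) changes the picture. The history length is an arbitrary example.

```python
# Illustrative sample-size arithmetic: how many observations a regression model
# sees at each aggregation granularity. The history length is an arbitrary example.
years_of_history = 3

monthly_points = years_of_history * 12    # 36 observations
weekly_points = years_of_history * 52     # 156 observations
daily_points = years_of_history * 365     # 1,095 observations

print(f"Monthly granularity: {monthly_points} data points")
print(f"Weekly granularity:  {weekly_points} data points")
print(f"Daily granularity:   {daily_points} data points")
```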
While MMM itself wasn’t a perfect fit, we recognized early on that its underlying concepts held a lot of potential. Crucially, we found that most of the downsides could be mitigated through a combination of creative feature engineering and automated model validation via constant backtesting and live experimentation.
Keep in mind that the entire measurement process, including modeling and validation, must be completely automated in order to be scalable. No need to worry, though: our existing automation technology in UACC is being enhanced to make this a powerful reality for you.
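As a rough illustration of what constant backtesting can look like, the sketch below scores two candidate models on rolling historical splits and keeps the more accurate one. The candidate models, synthetic data, and error metric are all placeholders rather than a description of Polaris internals.

```python
# Sketch of automated backtesting: score candidate models on rolling historical
# splits and keep the most accurate one. Models and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
spend = rng.uniform(100, 1_000, size=(365, 4))   # daily spend for 4 campaigns
installs = 150 + spend @ np.array([0.5, 0.2, 0.9, 0.05]) + rng.normal(0, 30, 365)

candidates = {"ols": LinearRegression(), "ridge": Ridge(alpha=10.0)}
scores = {}

for name, model in candidates.items():
    errors = []
    # Each split trains on the past and predicts a later, unseen window.
    for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(spend):
        model.fit(spend[train_idx], installs[train_idx])
        preds = model.predict(spend[test_idx])
        errors.append(mean_absolute_percentage_error(installs[test_idx], preds))
    scores[name] = float(np.mean(errors))

best = min(scores, key=scores.get)
print("Backtest MAPE by model:", {k: round(v, 3) for k, v in scores.items()})
print("Selected model:", best)
```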
Our Solution For Mobile Measurement
Thank you for bearing with us and making it this far. Let’s now look at a high-level overview of our solution:
- We believe that measurement of campaigns at the country and publisher app level (when applicable) remains critical for UA decision-making.
- Compatibility with the current iOS 14 technical specifications and alignment with the spirit of the new rules are equally crucial for any future measurement solution, including ours.
- Regression models similar to those called for by media mix modeling can be augmented with techniques that address problems unique to mobile app advertising in order to finally deliver a measurement solution that considers incrementality.
- Using daily data helps solve the data volume issue inherent to MMM and allows the model to be updated more quickly so it can inform the types of fast decision-making required by modern UA teams.
- Backtesting allows us to automatically evaluate a wide array of approaches and select only the most accurate models based on their ability to predict known historical outcomes.
- Automated controlled experimentation enables live testing of incremental value predictions in the real world, so that models can be validated and improved by rejecting poor models and feeding results back into the next model (a sketch follows after this list).
- Based on the analysis outlined above, our unified measurement platform, Polaris, ties aggregated cohort activity to the channel, campaign, country, and publisher app dimensions, rather than relying on the MMP to do so.
- Our measurement module will be provided as an option at the app level so that advertisers can rely solely on MMP measurement for their Android apps, should they wish.
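To make the experimentation bullet above more concrete, here is a hedged sketch of a geo-holdout style validation: a campaign is paused in randomly chosen regions, the observed lift in the regions where it keeps running is measured, and the model is rejected if its predicted incremental value misses by too much. The regions, numbers, and tolerance are all hypothetical.

```python
# Sketch of automated live experimentation: pause one campaign in randomly
# chosen holdout regions, measure the observed lift versus those regions,
# and compare it against the model's predicted incremental value.
# The data, region split, and tolerance are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_regions = 20
is_holdout = rng.permutation(np.arange(n_regions)) < n_regions // 2  # paused here

# Hypothetical daily installs per region: a ~100-install baseline everywhere,
# plus ~15 installs contributed by the campaign where it keeps running.
baseline = rng.normal(100, 5, n_regions)
observed = baseline + np.where(is_holdout, 0, rng.normal(15, 3, n_regions))

observed_lift = observed[~is_holdout].mean() - observed[is_holdout].mean()
predicted_lift = 18.0   # hypothetical model estimate of incremental installs/region

relative_error = abs(predicted_lift - observed_lift) / observed_lift
TOLERANCE = 0.25        # hypothetical acceptance threshold

print(f"Observed lift per region: {observed_lift:.1f} installs")
print(f"Model error vs. experiment: {relative_error:.1%}")
print("Model validated" if relative_error <= TOLERANCE
      else "Model rejected; results feed into the next model")
```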
We hope the above was helpful, and we invite you to reach out with any questions so that you have a chance to help shape our solution.
Best,
The MetricWorks Team