The famous line, “Half of the money I spend on advertising is wasted; the trouble is I don’t know which half,” spoken a century ago by department store magnate John Wanamaker, has been quoted so often that it’s become a cliché.
Fifty years after men walked on the moon, we have self-driving cars, robots that vacuum, speakers that listen, and artificial intelligence powering everything. So why do so many brands still struggle to measure the response to their ads?
While digital marketing has made it easier to determine what’s working and what’s not, demand-generation channels like podcast, radio, and TV continue to leave marketers mystified. But they shouldn’t. In this week’s Influencer, we’ll show you how to effectively track media campaigns and scale your efforts quickly and efficiently.
With clock-like precision, new advertisers profess, “At our company, we’re very data-driven. Analytics is everything.” Yet when onboarding a new campaign, getting a company’s full participation and buy-in on the steps needed to lay a foundation for proper attribution is like pulling teeth. This is troubling. Without full engagement during the onboarding phase, proper measurement is much harder to achieve post-launch, and campaign effectiveness is forever called into question. Some try to get a signal from direct attribution, such as promo codes and vanity URLs, but direct attribution only tells part of the story, usually just 10 to 30 percent of the campaign’s actual impact. To truly measure campaign effectiveness, you must determine the campaign’s total conversions, which in turn determine your Return on Ad Spend (ROAS).
An advertiser can spend a small fortune on data scientists, econometric models, and the like to determine the impact of independent variables (media) on dependent variables (sales, traffic, etc.), but there is a surprising and arguably equally effective way to tease out performance. And it costs next to nothing to implement.
This preferred solution is the How Did You Hear About Us (HDYHAU) survey. While the survey is ideally placed as high in the user funnel as possible, most companies opt to run it immediately after conversion to preserve the pre-conversion user experience. An email survey to all or part of the new customer base is also an option, provided the number of weekly conversions is high enough to achieve statistical significance. Either way, without a proper survey in place, you’re taking a shot in the dark.
Here are some best practices for your survey:
1) Wording: “How Did You Hear About Us?” Randomize the answer choices (e.g., Podcast, Radio, Social Media) to eliminate bias.
2) Intercept: Place the survey immediately after the final transaction on the website. We don’t want to interfere with any point in the funnel before the final transaction, and the survey should be as simple as possible to ensure the highest response rate. Avoid drop-downs and open-entry fields; one-click buttons are best. The less complicated, the better.
3) Email: If an intercept survey is not possible, a daily post-purchase email survey is recommended. Weekly or monthly surveys are also useful, but user recall and response rates will not be as reliable as with a post-purchase survey emailed the same day.
4) Audience: Distribute the survey to all purchasers (or signups, if that is your KPI), or to enough randomized customers to gather statistically significant data. Finding your response rates may require some trial and error before landing on a minimum number of customers to email. Start with all of them, then work your way down as you better understand how many customers are needed to give you the necessary data set.
5) Baseline: Launch the survey at least two weeks before the start of the campaign to establish a baseline for survey responses. In other words, customers will select channels where you did not advertise and therefore could not have seen or heard you. If you establish a baseline of these false positives before you launch, your projected results will be much more accurate once the campaign gets rolling.
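The baseline adjustment above can be sketched in a few lines. This is a minimal illustration, not production code, and the rates are hypothetical:

```python
def baseline_adjusted_rate(campaign_rate: float, baseline_rate: float) -> float:
    """Subtract the pre-launch false-positive rate from the in-campaign
    survey rate for a channel. Rates are fractions (0.0144 == 1.44%)."""
    return max(campaign_rate - baseline_rate, 0.0)

# Hypothetical numbers: before launch, 0.40% of respondents already chose
# "Podcast" even though no podcast ads were running (false positives).
# During the campaign, 1.84% select "Podcast".
adjusted = baseline_adjusted_rate(0.0184, 0.0040)
print(f"{adjusted:.2%}")  # 1.44% attributable to the campaign itself
```

Clamping at zero simply guards against a channel dipping below its pre-launch baseline due to noise.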
Here’s how it works in a nutshell: let’s say 1.44% of HDYHAU survey respondents select “Podcast.” Assuming respondents are representative of all new conversions, we can extrapolate that podcasts drove approximately 1.44% of all new conversions. So we multiply that 1.44% by the total number of new subscribers to estimate the total number of podcast conversions. Then we divide that estimated total by the direct conversions (promo codes/vanity URLs) to establish a multiplier. Here’s a real-world example of the multiplier calculation in action:
Once the multiplier is calculated, we go back to the directly attributed numbers from the vanity URLs or promo codes and apply the multiplier accordingly.
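The whole multiplier methodology can be summarized in a short sketch. All of the figures below (subscriber counts, show names, direct conversions) are hypothetical, invented purely to show the arithmetic:

```python
def estimate_total_conversions(survey_rate: float, total_new_customers: int) -> float:
    """Extrapolate a channel's total conversions from its survey share."""
    return survey_rate * total_new_customers

def attribution_multiplier(estimated_total: float, direct_conversions: int) -> float:
    """Ratio of survey-estimated conversions to directly tracked ones
    (promo codes / vanity URLs)."""
    return estimated_total / direct_conversions

# Hypothetical month: 1.44% of survey respondents chose "Podcast",
# 50,000 new subscribers overall, 240 conversions tracked directly.
estimated = round(estimate_total_conversions(0.0144, 50_000))  # ~720
multiplier = attribution_multiplier(estimated, 240)            # 3.0x

# Apply the multiplier to each show's directly attributed conversions.
direct_by_show = {"Show A": 90, "Show B": 150}
adjusted = {show: n * multiplier for show, n in direct_by_show.items()}
print(adjusted)  # {'Show A': 270.0, 'Show B': 450.0}
```

In this sketch, a 3.0x multiplier means every promo-code redemption implies roughly two more conversions the code never captured, which is consistent with direct attribution catching only a fraction of a campaign’s true impact.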
With this multiplier methodology, we can calculate the efficacy of each show with high confidence (an average margin of error of 1.03%) and determine areas for optimization and scale. While the methodology is admittedly imperfect, we’ve used it to grow companies from small test budgets to successful offline campaigns generating millions of dollars per month in revenue. It might feel a bit antiquated, but the HDYHAU survey remains the most reliable practice available, at least until more credible attribution solutions enter the market.
Earlier this year, the podcast network Wondery partnered with Qualia to offer a pixel-based model that, they claim, empowers brand marketers to track consumer actions, both online and offline, at scale and with accuracy. In a recent three-week test, Wondery worked with an advertiser to compare its promo-code-based methodology against the Qualia pixel and attribution window. Using a 14-day attribution window, Qualia’s cross-device pixel registered 44% more conversions than were previously attributed from promo code redemptions. Below is a screenshot of their findings.
The concept is intriguing, and we plan to test their methodology in the coming months. There is no silver bullet in attribution (anyone telling you otherwise is a bald-faced liar), but we will look to corroborate our findings and continue to refine our own best practices. If the pixel proves useful, we can use it as another data point in calculating campaign performance.
While technology is still trying to solve the attribution issue, we’d argue that the survey is still a must–at least until all media is served digitally. By implementing this proven practice before launch, you can confidently determine campaign performance across the most challenging of media channels. Ultimately, if you’re going into 2019 and still feel like John Wanamaker, not knowing which ads are working, perhaps it’s time to find a new agency.