Autopsy of a Failed Campaign

As a Strategy Lead at Oxford Road, there is nothing quite as rewarding as bringing a client onto a new marketing channel and shattering their performance goals. It’s intoxicating. During my time here, I’ve been fortunate enough to be in the pilot’s seat as we skyrocketed offline advertising campaigns from small test flights into six and seven-figure-per-month performance marketing machines in a matter of months. While not all campaigns I’ve led have generated this kind of stratospheric growth, I’ve seen far more winners than losers. Success aside, it’s part of the human condition to dwell on failure, and there’s one campaign from years ago that haunts me to this day — a Podcast test that crashed and burned at take-off.

It’s said that success has many fathers, and failure is an orphan. On this campaign, like a reluctant guest on The Jerry Springer Show, the paternity test came in and it turns out this baby was mine. While the reasons for failure were not the result of negligence or malpractice, in hindsight there were at least five decisions that directly undermined the success of the campaign. As it stands, the client has the perception that offline advertising doesn’t work for them (they could be right, but we’ll never know for sure). Below I’ve detailed every mistake I made and shared what I would do now if I could turn back time.

My team and I launched the campaign in question on the heels of a few monumental wins in the Podcast space. I felt invincible. Perhaps in my cockiness, I failed to realize we were setting up this new account for failure. The stage was set as follows:

  • The client’s product category was crowded 
  • We didn’t push hard enough for a proper attribution survey 
  • We allowed the client to dictate copy decisions against our better judgement without pushing back 
  • Podcast hosts refused to try the product and we proceeded anyway
  • The overall test budget was too small and the client put too many restrictions on the content

Navigating a Crowded Category

Success is easier to find when you bring a completely new category to the marketplace. But when you’re 3rd, 4th, or in this case, 7th to market, it’s harder to stand out amongst the crowd. When you find yourself in a list of “me too’s”, you must find a distinct point of differentiation. Marketing guru Jack Trout defined this as the “Differentiating Idea” in his 2000 book Differentiate or Die. In short, the “Differentiating Idea” is a simple concept that separates you from your competition. The classic example is the real-life “It’s Toasted” campaign for Lucky Strike, fictionally re-created on the TV show Mad Men. In the show, Creative Director Don Draper uncovers something that everyone else was doing but no one was mentioning in their advertising, and the agency helps the client create its “Differentiating Idea”. If I could do it all over again, I would have fought like hell to uncover a true “Differentiating Idea” for this client. As it stands, they were one of more than a half dozen competitors in the space saying the exact same thing to the same people. The result was a campaign that didn’t stand out in any way, and ultimately didn’t hit the client’s KPI goals.

The Importance of Attribution and a Strong Offer

Unless you’re providing an unbelievable offer, results typically indicate that only 10% – 50% of people will actually jump through the attribution hoops we marketers put forth (i.e., vanity URLs and promo codes). The better your offer, the higher the percentage of total response that will be directly attributed. I use the hypothetical analogy of a car dealer who says, “anyone who shows up to our dealership this week and remembers the special password gets a free car”. In that example, it’s safe to say that close to 100% of respondents would remember the password. This particular campaign launched before the days of pixel tracking in Podcast. While this method of attribution is still in its infancy, pixel tracking is now filling in the gaps and might have helped this client see the large number of customers who responded to their ads without using vanity URLs or promo codes.

At Oxford Road, calculating the indirect response of a campaign without the use of tracking pixels has traditionally been accomplished using a post-purchase survey. Not only did the client in question lack the ability to implement a post-purchase survey, but the offer they advertised was the same as what customers would get by going to the main site. As a result, the trackable response from the campaign was likely much lower than what the campaign actually drove, but we couldn’t prove it. If you don’t have an attribution methodology in place to account for the total universe of respondents, you’re dead in the water. If I could do it again, I would have lain down on the train tracks to make sure that the campaign’s KPIs were adjusted accordingly or that a proper attribution path was implemented.
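For illustration only, here is a minimal sketch of the kind of back-of-the-envelope math a post-purchase survey makes possible. Every number below is hypothetical and does not come from this campaign; the point is simply how far trackable response can sit below the total response a campaign actually drives.

```python
# Hypothetical attribution math. None of these numbers come from the actual
# campaign; they only illustrate the gap between trackable and total response.

tracked_conversions = 200   # customers who used the vanity URL or promo code
survey_responses = 1000     # post-purchase survey completions
heard_on_podcast = 350      # respondents who said they heard the podcast ad

# Share of new customers the survey suggests came from the campaign.
survey_attribution_rate = heard_on_podcast / survey_responses   # 0.35

total_new_customers = 2000  # all new customers during the campaign window
estimated_total_response = total_new_customers * survey_attribution_rate  # ~700

# Judging the campaign on tracked conversions alone would understate its
# impact by roughly 500 customers in this made-up example.
print(f"Tracked: {tracked_conversions}, "
      f"survey-estimated total: {estimated_total_response:.0f}")
```

Without the survey (or a pixel), the 200 tracked conversions are the only number you can defend, which is exactly the trap this campaign fell into.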

Clients Shouldn’t Write Copy

They say “the client is always right”. The truth is, they’re usually wrong, especially when it comes to their advertising copy. At Oxford Road, we evaluate the elements that must be present in a performance marketing ad using our proprietary measurement tool, Audiolytics™. The tool brings objectivity and clarity to our clients’ messaging, using a binary evaluation methodology: an expert review of 9 key components and 71 key data points, or subcomponents, to ensure the messaging is designed for optimal performance.
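To make the idea of a binary checklist concrete, here is a purely illustrative sketch. The component names, weights, and scoring math below are invented for this example and are not the actual Audiolytics™ methodology; they only show how a pass/fail review of individual elements can roll up into a score out of 100.

```python
# Illustrative only: a toy pass/fail creative checklist. Component names and
# weights are invented and do NOT represent the real Audiolytics(TM) rubric.

checklist = {
    "Offer":         {"weight": 12, "present": False},  # missing, as in this campaign
    "Scarcity":      {"weight": 10, "present": False},  # also missing
    "CallToAction":  {"weight": 12, "present": True},
    "Credibility":   {"weight": 10, "present": True},
    # ...a real review would cover many more components and subcomponents
}

def score(items: dict) -> float:
    """Award full weight for each element the ad contains, zero otherwise."""
    earned = sum(c["weight"] for c in items.values() if c["present"])
    possible = sum(c["weight"] for c in items.values())
    return 100 * earned / possible

print(f"Creative score: {score(checklist):.0f} / 100")  # well short of a 90+ target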

We strive to have every ad we put on air achieve an Audiolytics™ score of at least 90 out of a possible 100, but this one fell far short. We’ve already established that the client wouldn’t provide an offer beyond what was already available on their main site (missing Audiolytics™ Key Component 6: Offer). Missing a key component like “Offer” alone is problematic enough, but it gets worse. This client insisted on writing their own version of the podcast script, in a way that could never achieve a passing Audiolytics™ grade. Their version of the copy missed an additional key component (Scarcity) and lacked several subcomponents. Despite our pushback, the client ultimately had their way. Perhaps I was overly optimistic, but in Podcast, even sub-par creative can work if the hosts really get behind the product they’re selling. If we could get the hosts to rally around this client’s product, we could save this test… or so I thought.

Host Endorsements Need to Be Authentic

The magic of Podcast advertising happens when the show host gives an authentic endorsement of how they’re using the product or service and tells their listeners why they should try it too. We generally accomplish this by sending each host free product and conducting onboarding calls to discuss ways they can communicate their experience to their listeners. With this campaign, the product required a little extra work on the hosts’ part in order to actually use it; there was no way around it. While I believed the hosts genuinely intended to do the homework, they ultimately never followed through, and we were left with flat reads. The lesson here is that if the hosts aren’t into what you’re selling, you may need to find new hosts. Bonus note: if hosts aren’t willing to do a little work to get your product or service for free, perhaps you need to rethink your business model.

A Podcast Test Needs to Be Diversified

While it may have been possible years ago to run a test campaign on just a few proven podcasts and get a quick read on performance, the current landscape requires a more diverse approach. Now, with over 700,000 podcasts available and new genres and subgenres popping up every week, the podcast space is more nuanced than ever. While this is great for those of us who listen to podcasts, it presents a challenge for marketers. To add insult to injury, this client had specific mandates on which shows we could and could not buy, and a very limited budget. No political podcasts. No comedy podcasts. And no show that might curse at any point in an episode. The result? We were launching a campaign with both arms tied behind our backs: a test that was too small, on a handful of podcasts that fit within the client’s tiny budget and criteria. Given the opportunity to do it all over again, I would advocate for a larger testing budget that would allow us to test a wider array of podcasts across multiple genres so we could find pockets of success. Unfortunately, we didn’t have that chance.

Lesson Learned

And there you have it — a recipe for failure in advertising, but I have to come clean. The truth is this “client” is actually an amalgamation of a few advertisers I’ve had over the past 7+ years here at Oxford Road. These things did happen, just not all at the same time with the same client (I’m sure if I let one client break all of these rules on one campaign, I wouldn’t have a job). Let the lessons learned from “my worst Podcast campaign” serve as a cautionary tale — any single one of these issues could send your podcast test off the rails and should be avoided at all costs. Happy podcasting!
