What baseball can teach advertising
If you haven't seen Moneyball or read the book, here's the synopsis: In 2002, Billy Beane, general manager of the Oakland A's, finds himself with a salary budget of $41 million, the lowest in the majors. The Yankees' budget that year was $125 million. But the A's go on to win 101 games, same as the Yankees, despite being outspent 3 to 1.
How does Beane do it? By discounting the collected wisdom of his experienced scouts and, with the help of a Harvard economics grad, fielding a team selected on two statistics: on-base percentage and slugging percentage.
In lay terms: Beane chose players who were good at getting on base. It didn't matter if they just hit singles, or got lots of walks, or had a peculiar stance that got them hit by many pitches. The cold, clear logic: the team that gets more players on base will score more runs and win more games. It worked, brilliantly.
But what can Beane and Moneyball teach advertising? Like baseball, advertising is loaded with all manner of stats, indices, and formulas, plus the informed judgment of agency professionals, all claiming to predict how an ad will perform in the real world.
And just like baseball, two key performance indicators stand high above the rest: First, an ad must get noticed; it's got to break through the considerable clutter--it's got to get on base. But to score, the ad must then persuade its target to take some specific action. It's no more, and no less, than that: to be effective, an ad must first break through and then persuade.
But unlike baseball, ads don't get to play a season or two in the minor leagues to establish a track record. They can, however, audition before a small audience before moving up to the big show. We're talking, of course, about advertising research. Here too, there are many players, many methods, several black boxes, and so forth.
Cutting through the research maze, a few suggestions, followed by a recommended equation: First, make sure the research methodology comes close to replicating how an ad is experienced in the real world (a clutter reel for TV, for example), and make sure the test ad is a close-in representation of the final ad. Then insist that the research service--whatever else it provides--delivers the data required to execute this equation:
Breakthrough × Persuasiveness = Effectiveness
Let's put the equation into a sentence: If 30% of the test audience takes notice of an ad, and 50% of this group is persuaded to take action, then the ad's potential effectiveness can be expressed as 15% (30% × 50% = 15%). If another ad is noticed by 40% but persuades only 25%, this ad would be considered 10% effective (40% × 25% = 10%). The former ad, in this case, would get to play.
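For readers who want to run the numbers themselves, the arithmetic above can be sketched in a few lines of Python (the ad labels and scores are illustrative examples, not data from any real test):

```python
def effectiveness(breakthrough: float, persuasiveness: float) -> float:
    """Effectiveness = breakthrough × persuasiveness.

    Both inputs are fractions of the test audience (0.0 to 1.0).
    """
    return breakthrough * persuasiveness

# Ad A: noticed by 30% of the audience; 50% of that group is persuaded.
ad_a = effectiveness(0.30, 0.50)
# Ad B: noticed by 40%, but persuades only 25% of those who notice it.
ad_b = effectiveness(0.40, 0.25)

winner = "A" if ad_a > ad_b else "B"
print(f"Ad A: {ad_a:.0%}, Ad B: {ad_b:.0%} -> Ad {winner} gets to play")
```

The point of the multiplication is that neither indicator alone decides the winner: Ad B breaks through more often, but Ad A's stronger persuasion makes it the more effective ad overall.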
Net, net: Building a great ad campaign is like building a great baseball team. Both require talent, experience, and money. But when the call has to be made as to which player takes the field, or which ad gets to run, you're served best by an objective evaluation against the most predictive indicators.
And in the case of advertising, real-world success is best predicted, in the final analysis, by an ad's ability to first break through and then persuade (breakthrough × persuasiveness = effectiveness).