I should start this rant with a question: “How lazy are you?” That’s the real question. I have yet to see a benefit from not figuring out which ad is best. If you only run a single ad, how can you possibly know whether it could have performed better with different text, or with different colors and imagery? How would you know?
You won’t know if you are getting the best bang for the buck unless you try more than one thing.
This seems like common sense, but it’s worth remembering that marketing is not an exact science. Getting it perfect the first time rarely happens, regardless of time spent, budget burned, or hair pulled out. Split testing will tell you the truth: more clicks and views will show you what your audience is looking for.
Another reminder is that what you ‘think’ people want to see isn’t always what they actually want to see. If you truly think that you know your audience and what will and will not work, then prove it. Split testing gives you tangible statistics to show your boss (or your pocketbook) what the right path is.
Google makes split testing rather easy and tracks everything, so knowing what worked and what didn’t is a matter of a few clicks. The more you spend on advertising, the more important split testing becomes. At $100, your risk is rather low, so a few percentage points of improvement don’t buy you much. At the $1,000 mark, a 20% effectiveness boost starts making a really big impact. Whether you choose to make your budget 20% more effective or spend that money on something else is up to you.
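If you want to see why budget size changes the calculus, here’s a quick back-of-the-napkin sketch. The $2 cost per click is a made-up number, purely for illustration:

```python
def extra_clicks(budget, cost_per_click, boost):
    """Clicks gained if split testing makes the budget `boost` more effective."""
    baseline_clicks = budget / cost_per_click
    return baseline_clicks * boost

# At a hypothetical $2 per click, a 20% boost means:
small = extra_clicks(100, 2.0, 0.20)    # 10 extra clicks on a $100 budget
large = extra_clicks(1000, 2.0, 0.20)   # 100 extra clicks on a $1,000 budget
print(small, large)
```

Ten extra clicks is a rounding error; a hundred extra clicks starts to pay for the testing itself.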
You are always welcome to throw your money at Google and hope for the best (Google doesn’t mind), but when I have to pull money out of my own pocket to pay for advertising, I want to make damn sure it is spent as effectively as possible… wouldn’t you?
Okay, rant over. Now let’s get into how you actually do this.
While you’re in Google AdWords, create a new ad campaign. I’m talking about image ads (also known as display ads), not text ads. You can still split test text ads; you just have fewer variables to consider (headline, description, display URL, and destination URL). Give the campaign a broad title like “CamelPoop Spring Ad run 1”.
Now that you have the campaign ready, create a new display ad within that campaign. Be a lot more specific in the title this time. “CamelPoop Spring 1 – white, bag, better”.
What I’m trying to do is break the actual ad down into its variables:
1. Color scheme
2. Imagery
3. Text
These are the variables we will adjust on each new version of the ad, varying a single one per ad run. Changing one variable at a time lets us see which factors actually affect our ad performance. We are not going to run all 27 possible combinations simultaneously. Instead, we run the first ad for a week to get a baseline of clicks.
You don’t even need to spend a lot of money here. Just put in $100 for one full week and see what it does. If you have a bigger budget, sure, drop $200–$300, but don’t blow all your milk money here. We want to save the lion’s share of the budget for the final, best ad. You may even need to lower your initial budget to conserve money for the final ad run.
Once you have a baseline, say 50 clicks, you can calculate how much you spent per click ($100 ÷ 50 clicks = $2 per click). With a baseline to measure against, we are going to change one factor in the equation and see what happens.
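The baseline math is simple enough to sketch, using the $100 / 50-click numbers above:

```python
spend = 100.0   # dollars spent over the week
clicks = 50     # clicks the ad received

cost_per_click = spend / clicks  # $2 per click
print(f"${cost_per_click:.2f} per click")
```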
Now create variations of the ad, changing the overall color scheme from white to green, blue, and red. When you’re done you should have three separate ads that are identical except for the color scheme. The color change may only be the background color, but that’s okay. Even subtle changes can make a measurable difference that you would never know about until you compare the statistics.
Run all three ads for a full week at $100 each (the same as the original white ad). Then compare the cost per click for each color. Was there a change? Did one color do better than the others?
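The comparison itself is just cost-per-click arithmetic. Here’s a rough sketch; all of the spend and click numbers below are invented for illustration:

```python
# Hypothetical results after one week of running each color variant.
results = {
    "white (baseline)": {"spend": 100.0, "clicks": 50},
    "green":            {"spend": 100.0, "clicks": 62},
    "blue":             {"spend": 100.0, "clicks": 44},
    "red":              {"spend": 100.0, "clicks": 55},
}

# Cost per click for each variant: dollars spent divided by clicks earned.
cpc = {color: r["spend"] / r["clicks"] for color, r in results.items()}

# The winner is the variant with the LOWEST cost per click.
winner = min(cpc, key=cpc.get)
print(winner)
```

In this made-up data, green wins; in yours, the AdWords reporting screen will tell you, but the logic is the same.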
Take the best performing color and use it for the next run, but now we are going to change the image. The initial run for the ad was a bag of manure. It was boring, but gave us a baseline to do better. Now we want to change up the image. I’m using three for this example, but if you can think of 12 good images to use, use them! You are only limited by your budget and imagination (mostly your budget). Just be sure to only change one factor at a time for each ad run.
Let’s change the imagery to reflect different states of mind:
1. Logic – maybe a chemical symbol that represents why camel poop is superior
2. Sentimental – tap into the vintage trend by showing a vintage manure bucket
3. Sex appeal – no, I don’t have a clue how to make poop sexy, but I’m sure someone does.
Run these ads for the same full week, $100 and compare the results.
Which image got more clicks? Was it the camel in a bikini or the black and white picture of a grandma and her camel?
Based on our winning ad, we now have a pared-down ad with the best performing color and image. Now duplicate that ad and start changing out the text. Again, we will use three examples, but more is fine:
1. Camel Poop promotes better plant health than cow manure
2. Camels Rock… they also poop
3. Hypoallergenic and non-toxic
Run the ads again for one full week at $100 per ad, so that it matches the original baseline run.
Check the stats and see how the ads did. This gives you the ability to confidently plunk down the rest of your budget on the ad that delivered the best results. It’s not just proof that you have a good ad; it’s how you get the best bang for your advertising buck.
P.S. This is really just the tip of the iceberg. You can keep split testing even more; maybe the new picture does better with a green background than the original picture did. You have to start somewhere, but there’s no real ending. You can split test till the camels come home. Just remember: at some point the percentage increase in clicks gets so small that it is not worth the time and effort to continue the split testing.