Did you know that Facebook Ads offers a Split Testing feature? Well, it’s time you knew – and here’s how to use it in your Facebook ad campaigns.
The best way to find out which ads perform best against different audiences and other conditions is to conduct A/B testing – also known as split testing. Until recently, you had to manually create your own A/B testing conditions and work with the resulting data on your own. Then, in March, Facebook launched an easier way to do the whole process: Split Testing.
With Split Testing, you can “simply and accurately test different components of your ad across devices and browsers”, saving time and money in the process. You will also get easy-to-understand results that will help you optimise your ads accordingly.
How Split Testing Works
Split Testing allows you to create “multiple ad sets and test them against each other.” This way, you can see which strategy works best: your audience is split into two or three random groups that are shown exactly the same creative. The groups don’t overlap, which ensures the test is done properly – i.e. each ad set is treated equally in the auction. Because the creative is the same across a test, the ad sets differ in just one respect – placement, delivery optimisation, or audience type – and only one variable can be tested at a time. So, you could test two or three different audiences against each other, but you wouldn’t be allowed to test a different placement at the same time.
Using the Split Testing feature roughly halves the ad creation process, as Facebook automatically duplicates your ads and changes just that one variable. The test is conducted, and after gathering results across different devices, the performance of the ad sets is compared. At the end of a test, you will get a notification and an email with easy-to-understand results. You don’t have to wait for a split test to be completed, though: Facebook will notify you as soon as enough data has been collected to declare a winning strategy, and you can use that information to optimise a campaign on the fly.
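Facebook doesn’t publish the internal mechanics of its test setup, but the core idea – randomly partitioning an audience into non-overlapping groups so each ad set competes on equal terms – can be sketched in a few lines. Everything below is illustrative, not part of any Facebook API:

```python
import random

def split_audience(user_ids, n_groups=2, seed=42):
    """Randomly partition an audience into non-overlapping groups.

    Shuffling once and then slicing guarantees every user lands in
    exactly one group, mirroring how a split test keeps its test
    cells disjoint so the ad sets never bid against each other
    for the same person.
    """
    ids = list(user_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    return [ids[i::n_groups] for i in range(n_groups)]

# An A/B/C test: three disjoint groups covering the whole audience.
groups = split_audience(range(1000), n_groups=3)
```

The groups together cover the entire audience, no user appears in more than one group, and the group sizes stay nearly equal – the three properties the article’s “non-overlapping groups” description relies on.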
What Objectives Does It Support?
Facebook’s Split Testing is available in Power Editor and Ads Manager, and it supports the Traffic, App Installs, Lead Generation, Conversions, Video Views, and Reach business objectives.
Split Testing Variables
There are three variables you can use: audience, delivery, and placement.
Audiences: You can only use Split Testing with saved audiences, so you will need to create new saved audiences (if you don’t have any yet) or use existing ones. When testing for audience, you choose two or three different saved audiences to test against each other, creating an A/B test or an A/B/C test respectively.
Delivery optimisations: If you’d like to find out which delivery optimisation works best for your ads, you can run a split test on different ones. So for example, you could test the effectiveness of one optimisation like conversions against the effectiveness of another like link clicks. However, you could also test conversions optimisation with a 1-day conversion window, conversions optimisation with a 7-day conversion window, and link clicks optimisation – all within the same test.
Placements: Want to find out which placement works best for your audience? Simply run a split test on placements. Facebook recommends testing custom placements against automatic placements, rather than custom placements against each other – so you could create an A/B test of automatic placement vs custom placement.
Budgeting For Split Testing
Now, for the big question: setting your budget. When you set up a split test, Facebook will suggest a budget likely to produce enough results to declare a winning strategy, but there is no minimum budget requirement. Choose a budget only if you are confident it can deliver the results you are expecting. Whatever your budget, you can have Facebook divide it equally between the ad sets or weight it towards one of them; both budget and reach will then be divided according to your choice.
Finally, set a schedule of between 3 and 14 days – the minimum and maximum recommended run-times for split tests. In any case, Facebook says that “a test winner can usually be determined in 14 days or sooner.”
Results And What To Do With Them
Hopefully, your results will show which ad set – and thus which variable strategy – gave you the lowest cost per result, always based on your choice of optimisation. When you do have a winner, Facebook will give it a confidence rating based on the percentage of times it expects the same result (at least the same winner) if you were to run the same split test again.
So, you will have to look at your results differently based on this information. There are three confidence measurements:
A clear winner
A winner is chosen with high confidence when it outperforms the other strategy by a wide margin. If you receive a 90% confidence rating that one audience is the winner over another, you can rest assured that it’s the right one: there is a 90% likelihood that the same result would be repeated if the test were run again.
A low-confidence winner
A low confidence rating of, say, 60% might still declare a winner, but there are two things you can do with a result like this. You could go with the ad set that gets the lowest cost per result (Facebook’s best practice), or run the test again with a longer schedule or a bigger budget – the extra data could change the confidence percentage.
No clear winner
Split tests that can’t call a winning strategy show close to a 50% or 33% split in confidence, depending on whether you ran an A/B test or an A/B/C test. What to do next? You could run the test again with a longer schedule and a bigger budget, as above, or test a different variable. When a test produces no clear winner, your strategies are probably too similar.
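Facebook doesn’t disclose the exact statistics behind its confidence ratings, but the intuition – the share of repeated runs of the same test that would produce the same winner – can be illustrated with a small Monte Carlo simulation. The function name, conversion rates, and impression counts below are all made up for illustration:

```python
import random

def simulate_confidence(rate_a, rate_b, impressions, runs=1000, seed=0):
    """Estimate how often ad set A beats ad set B on conversions
    across many simulated re-runs of the same test."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(runs):
        conv_a = sum(rng.random() < rate_a for _ in range(impressions))
        conv_b = sum(rng.random() < rate_b for _ in range(impressions))
        if conv_a > conv_b:
            a_wins += 1
    return a_wins / runs

# Clearly different strategies: the same winner emerges almost
# every time, so confidence is high.
high = simulate_confidence(0.05, 0.03, impressions=1000)

# Near-identical strategies: the "winner" flips between runs,
# so confidence hovers near a coin flip -- no clear winner.
low = simulate_confidence(0.040, 0.041, impressions=1000)
```

This matches the article’s advice: strategies that are genuinely different separate quickly, while strategies that are too similar leave the confidence split close to 50/50, and only a longer schedule or bigger budget (more simulated impressions here) can tell them apart.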
So, you’ve run your test (and maybe even repeated some), and you’ve got actionable results. What now? Well, you could create a new ad from the winning ad set by clicking on the relevant link in your results email, you could activate your winning ad sets within Ads Manager, or you could create a brand new campaign using what you have learned.
You could also just continue testing and narrowing down. For example, if you tested one audience against another (i.e. 18-34 vs 35-64) and got 18-34 as a clear winner, you might still want to test 18-34 vs 25-34 to see whether it keeps performing the same way. Remember, confidence alone is not used to determine a winner – Facebook will always declare the strategy with the lowest cost per result the winner. If you’re not happy with the confidence ratings, you can simply continue testing with a longer schedule and a bigger budget.
Is there something else you’d like to know about Split Testing Facebook ads? If so, just ask us in the comments!