One of the most groundbreaking benefits of being able to assign multiple designs to products in a single feed is that you can A/B test with ease without having to create new catalogs.
With different designs for each product, you can select which creative to use and test on the ad level.
We recommend testing multiple designs instead of just 2. Using at least 3 different designs gives you a higher chance of success, and lets you learn which design choices work for your audience.
You can find inspiration for what to test here.
The first step is to create the individual designs you'll be using for A/B testing, and add them to the feed automation as design variants.
If you haven't yet done so, click here and follow STEP 1 of this guide.
If this is your first time customising your Dynamic Product Ads and you want to test the difference between standard, unedited pictures and Confect designs, we recommend creating these designs:
Next, you'll have to duplicate the current ad set in your campaign once for each new design variant.
So if you're testing your current images against 2 new Confect designs, you'll want to duplicate the ad set three times (original + 2 duplicates).
Once you've duplicated the ad sets, you'll need to update the catalog design variants used in each ad. To do that, just follow STEPS 2 and 3 of this guide. Go to each ad, and make sure the right design variants are selected.
Double-check that the ad set has the correct design variant loading into Facebook, and name the ad sets accordingly for easy organisation.
To keep things organised, we recommend giving the Facebook ad for each variant the same name as the corresponding design variant in Confect.
With the hardest part behind you, all that's left to do is to select the ad sets you'll be using in this test (select the duplicates, not the original ad set).
With the correct ad sets selected, just click the "A/B Test" button to specify and set up your test.
Here it's important to select "Existing ad sets" for comparison.
For the key metric, we recommend "Cost per result", as this is among the most important metrics for eCommerce companies. However, this can change based on your advertising objective (e.g. clicks, leads, etc.).
We recommend running the test for a period of 2 weeks MINIMUM, and not drawing conclusions from tests with fewer than 30 results in each variant.
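As a rough sketch of that validity check (plain Python with hypothetical result counts, not a Confect or Facebook feature):

```python
# Minimal sketch: decide whether each A/B test variant has enough
# results to draw conclusions from. The 30-result threshold comes
# from the recommendation above; the variant numbers are made up.
MIN_RESULTS = 30

def has_enough_results(results: int, minimum: int = MIN_RESULTS) -> bool:
    """Return True once the variant has reached the minimum result count."""
    return results >= minimum

# Hypothetical result counts per design variant
variant_results = {
    "Original images": 48,
    "Confect design A": 35,
    "Confect design B": 22,
}

for name, results in variant_results.items():
    status = "ready to evaluate" if has_enough_results(results) else "keep running"
    print(f"{name}: {results} results -> {status}")
```

In this made-up example, "Confect design B" would still need more time before you compare it against the others.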
Once the A/B test has been set up, just click "Publish Test" to save your changes and begin testing.
Now the waiting game begins.
After the testing period has come to an end, compare the key metric(s) between the ad sets to find your winner.
Since all the creatives come from the same catalog, with the same degree of optimisation (which customer is interested in which products), the only variable creating the difference will be the design itself.
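The comparison above can be sketched in plain Python, assuming "Cost per result" as the key metric. The spend and result figures are hypothetical examples, not real campaign data:

```python
# Sketch: pick the winning design variant by lowest cost per result.
# Cost per result = total spend / number of results.

def cost_per_result(spend: float, results: int) -> float:
    """Average cost of one result (purchase, lead, etc.) for an ad set."""
    return spend / results

# ad set name -> (total spend in your currency, number of results)
# These numbers are invented for illustration.
ad_sets = {
    "Original images": (500.0, 40),
    "Confect design A": (500.0, 55),
    "Confect design B": (500.0, 47),
}

winner = min(ad_sets, key=lambda name: cost_per_result(*ad_sets[name]))
print(f"Winner: {winner}")
```

Because every ad set spent the same hypothetical budget, the variant with the most results also has the lowest cost per result.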
Lastly, after finding your winning design, it's good practice to reflect on which design choices correlate with higher performance.
You might, for example, find that the main difference between your best and worst performing designs is that the winning variant uses your logo while the losing one doesn't. In that case, it's a good sign that your target audience responds well to seeing your logo in the ad, and you should base your future designs on this learning.
Remember that the more variation there is between the designs (e.g. different backgrounds, text, elements), the more difficult it will be to judge exactly which change caused the higher, or lower, performance. Still, it can give you a feel for what type of creative makes your audience buy.