Split Testing to Maximise Campaign Performance


In email marketing, split testing is a way of comparing different versions of a single variable in an email to see which one is the most engaging. It’s so easy to do that there’s no reason not to be doing it, and it can really help you improve the performance of your campaigns.

We previously highlighted the difference between split testing and multivariate testing, for those who wanted clarification on the two methods. Multivariate testing is a little more advanced, but with Christmas fast approaching we urge you to take advantage of the testing tools available to improve open rates, engagement and sales.

The key rule of split testing is to only test one variable at a time. It’s important to follow this rule so you can pinpoint which variable caused the change in performance. If you want to test other variables, that’s no problem – you can run further split tests later.

How Many Versions Can You Test?

If you’re unfamiliar with split testing, you can run a simple A/B test to compare the results of two versions. If you have more than two versions to test, it is possible to run a split A/B/C/D/E test in Maxemail using the platform’s advanced functionality.
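To make the mechanics concrete, here’s a minimal Python sketch of how a sample list might be split into equal random variants. It’s illustrative only – not how Maxemail does it under the hood – and the subscriber addresses are made up.

```python
import random

def split_list(subscribers, n_variants, seed=42):
    """Randomly assign subscribers to n equal-sized test variants."""
    shuffled = subscribers[:]              # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # a fixed seed makes the split repeatable
    return [shuffled[i::n_variants] for i in range(n_variants)]

# Example: a five-way A/B/C/D/E split of a 10,000-subscriber sample list
sample = [f"subscriber_{i}@example.com" for i in range(10_000)]
variants = split_list(sample, 5)
print([len(v) for v in variants])  # [2000, 2000, 2000, 2000, 2000]
```

The random shuffle matters: if you split a list that’s sorted alphabetically or by sign-up date in order, each variant could end up with systematically different subscribers, which would skew the test.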


Be Significant

There’s no point in testing changes that barely differ: you won’t see much movement in the results, and you’ll need a far larger sample size to detect any difference at all. Use this as an opportunity to be creative and step away from the norm.

Sample Size

How many subscribers do you need to run your test on? That depends on how many subscribers you have to start with to create a decent-sized sample list. There isn’t a single answer to how big a sample size should be – you need to factor in how responsive your database is and how significant the changes are.

If you are testing ‘Subject Line A’ against ‘Subject Line B’, we usually say at least 2,500 subscribers per subject line. If you’re testing ‘Call to Action A’ against ‘Call to Action B’ in a targeted email, you could get away with under 1,000.

The best way to gauge this is to play with some numbers in a split A/B testing calculator, replacing ‘visitors’ with the number of subscribers on your sample list and ‘actions’ with the metric you are testing.
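If you’d rather see what those calculators are doing, the sketch below implements the standard two-proportion sample-size formula in Python. The baseline open rate and uplifts are hypothetical, and exact answers vary slightly between calculators depending on the assumptions they build in.

```python
from statistics import NormalDist

def sample_size_per_variant(base_rate, min_effect, alpha=0.05, power=0.8):
    """Subscribers needed per variant to detect an absolute uplift of
    min_effect over base_rate (standard two-proportion formula)."""
    p1, p2 = base_rate, base_rate + min_effect
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 5% significance
    z_beta = NormalDist().inv_cdf(power)           # 80% statistical power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return round(numerator / (p2 - p1) ** 2)

# A bold change: 20% baseline open rate, hoping for a 4-point lift
print(sample_size_per_variant(0.20, 0.04))  # about 1,700 per variant
# A timid change: the same baseline but only a 1-point lift
print(sample_size_per_variant(0.20, 0.01))  # about 25,600 per variant
```

Note how the timid change needs roughly fifteen times the sample – exactly why the ‘Be Significant’ advice above matters.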

Test Length

A big mistake many marketers make is expecting to reach a quick decision on the outcome of the test. Results gathered within an hour or two are unlikely to be reliable, so leave the test for a minimum of 24 hours before assessing the results. Some tests may take days or even a week to come through. Jump in too quickly and you run the risk of excluding the late openers of your email. It’s also worth looking at how long it takes people to open your email from the time it was sent.
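When you do come to assess the results, a simple two-proportion z-test will tell you whether the difference you’re seeing is likely to be real or just noise. The open counts below are hypothetical; swap in your own figures.

```python
from statistics import NormalDist

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test comparing two open rates; returns the p-value.
    A p-value below 0.05 is the conventional bar for calling a winner."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)  # pooled open rate
    se = (p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# After 24+ hours: Subject Line A opened by 520 of 2,500, B by 610 of 2,500
p_value = two_proportion_z_test(520, 2_500, 610, 2_500)
print(f"p = {p_value:.3f}")  # p ~ 0.002, well below 0.05: B's uplift is unlikely to be chance
```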

Repeat the Test

A common mistake is to test a subject line change on a newsletter where the learnings can’t be re-used. As the content of your emails changes, so do the words in your subject line, so you can’t reuse that specific test. In addition, it’s important to re-test to make sure the winner wins again – was it simply the email looking different from normal that caused the uplift?

About Rupert Adam

Marketing Manager at Emailcenter, the UK's largest independent ESP.
