A/B tests in digital marketing

Understanding A/B testing in marketing: when to use it, when to avoid, and how it differs from incrementality testing.

When to use A/B tests in marketing:

  1. Immediate feedback: Best for direct marketing, website tweaks, and e-commerce changes.
  2. Clear comparisons: Ideal when you have two distinct versions of a marketing tactic and want straightforward results.
  3. Tactical decisions: Perfect for minor alterations like adjusting messaging or refining targeting.

When A/B tests might not work:

  1. Beyond direct marketing: In broader marketing strategies, choosing the right success metric can be complex, leading to misunderstandings in A/B test results.

    Example: For a social media brand awareness campaign aiming to boost positive brand perception, just tracking website clicks might miss other impacts, such as increased positive brand discussions in comments and shares.

  2. Bigger strategy questions: A/B tests won’t provide insights on overarching decisions like budget allocation, brand strategy, or channel choice.
  3. Execution pitfalls: Poorly planned tests, or those that compromise on quality, can produce misleading results.
  4. Design concerns: Without thoughtful test design, resources may be wasted on inconclusive or unreliable insights.

How to move beyond A/B tests:

A/B tests can be slow, often requiring many conversions before results become clear. If you want not only to compare creatives but also to optimise new campaigns faster, SegmentStream can help.

This analytics and optimization tool doesn’t just wait for conversions; it quickly analyses a wide range of data points, from impressions, clicks, and CRM details to user behaviour on your website.


A/B testing vs. incrementality testing

While both involve comparison, A/B testing is about finding the better of two versions, while incrementality testing is about measuring the added value of a tactic.

A/B testing marketing example

A shoe store tests two homepage banners: one showing running shoes and another showcasing formal shoes. They divide visitors evenly between the two banners. After a week, the running shoes banner results in 20% more sales. The store then decides to feature running shoes prominently on its homepage.
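A result like "20% more sales" should also be checked for statistical significance before acting on it. A common way to do this (not something the article prescribes) is a two-proportion z-test; all visitor and sales figures below are hypothetical, chosen to match the 20% uplift in the example.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical week of traffic, split evenly:
# formal-shoes banner: 500 sales / 10,000 visitors (5.0%)
# running-shoes banner: 600 sales / 10,000 visitors (6.0%) -> 20% more sales
z = two_proportion_z(500, 10_000, 600, 10_000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

With these numbers z ≈ 3.1, so the uplift would be statistically significant; with much smaller traffic, the same 20% difference could easily be noise.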

Examples of A/B testing vs. incrementality testing:

A/B Testing:

A company has two landing pages: one with a video and another with text. They send half their traffic to each. The video page gets more sign-ups. They choose the video page.

Incrementality Testing:

A company runs a new ad campaign for half its users. After a week, those users buy 15% more than users who didn’t see the ads. The company concludes the ads increased purchases by 15%.

What are A/B tests in marketing attribution?

A/B tests compare results from two groups: one exposed to a marketing tactic (Test) and one not (Control) to determine the tactic’s impact.

How much data is required for an A/B test to be conclusive?

A conclusive A/B test requires a statistically significant sample size. This depends on factors like baseline conversion rates and the minimum detectable effect. For a reliable result, many tests need at least 1,000 conversions per variation.
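The required sample size can be estimated up front with the standard two-proportion power formula (α = 0.05 two-sided, 80% power). This is a conventional statistical sketch, not a SegmentStream method; the baseline rate and minimum detectable effect below are assumed for illustration.

```python
import math

def sample_size_per_variation(p_base, mde):
    """Visitors needed per variation for a two-proportion test.

    p_base: baseline conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    Uses z = 1.96 (two-sided alpha = 0.05) and z = 0.84 (power = 0.80).
    """
    z_alpha, z_beta = 1.96, 0.84
    p_var = p_base + mde
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# e.g. 5% baseline conversion, detect an absolute lift of 1 percentage point
print(sample_size_per_variation(0.05, 0.01))
```

Note how the result scales: halving the detectable effect roughly quadruples the required sample, which is why tests for small improvements are slow.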
