A/B Testing

Set up and analyze form experiments to find the version that converts best.


A/B testing lets you compare two or more versions of a form to determine which one performs better. Instead of guessing whether a shorter form or a different question order will improve conversions, you can run a controlled experiment and let the data decide.

BttrForm's built-in A/B testing handles traffic splitting, data collection, and statistical analysis, so you can focus on designing better forms rather than crunching numbers.

How A/B Testing Works

When you create an A/B test, BttrForm randomly assigns each new visitor to one of your form variants. A browser cookie ensures each visitor sees the same variant on every return visit. Responses are collected separately for each variant, and BttrForm calculates the statistical significance of any performance difference.

The testing flow follows four steps:

  1. Create variants – Duplicate your form and make changes to the copy.
  2. Configure the test – Set the traffic split and sample size.
  3. Run the experiment – BttrForm distributes traffic and collects data.
  4. Analyze results – Review the winner and apply the changes.

Creating an A/B Test

Step 1: Navigate to Experiments

Open the form you want to test, then go to Analytics > Experiments > New Experiment.

Step 2: Create Variants

Your original form is automatically set as Variant A (the control). Click Add Variant to create Variant B. BttrForm duplicates your form so you can modify it independently.

You can create up to four variants (A, B, C, D) in a single experiment, though we recommend testing only two at a time for faster, clearer results.

Pro Tip

Change only one element at a time between variants. If you change the headline, field order, and button text simultaneously, you will not know which change caused the improvement. Isolate variables for actionable insights.

Step 3: Make Your Changes

Common elements to test include:

  • Form title and description – Does a benefit-focused headline outperform a generic one?
  • Number of fields – Does removing optional fields improve completion rates?
  • Question order – Does starting with easy questions reduce drop-off?
  • Button text – Does "Get My Results" convert better than "Submit"?
  • Page structure – Does a single-page form outperform a multi-step form?
  • Field types – Do dropdowns perform better than radio buttons for the same question?

Step 4: Configure the Test

Set the following parameters before launching:

  Parameter              Description                                   Recommended Value
  Traffic Split          Percentage of traffic sent to each variant    50/50 for two variants
  Minimum Sample Size    Responses needed before declaring a winner    100 per variant minimum
  Primary Metric         The metric used to determine the winner       Completion Rate
  Test Duration          Maximum time the test will run                2-4 weeks

Sample Size Matters

Running a test with too few responses leads to unreliable results. BttrForm requires a minimum of 50 responses per variant before calculating significance, but we recommend at least 100 per variant for confidence in the results.
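To see why sample size matters, the textbook normal-approximation power formula estimates how many responses per variant are needed to reliably detect a given lift. This is a standard statistics sketch, not a BttrForm feature, and the completion rates below are illustrative:

```python
from math import ceil

def sample_size_per_variant(p_base: float, p_target: float) -> int:
    """Responses needed per variant to detect a lift from p_base to p_target.

    Textbook normal-approximation formula, with z-values fixed at the
    common choices: 95% confidence (two-tailed) and 80% power.
    """
    z_alpha, z_beta = 1.96, 0.84
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# A large lift is cheap to detect; a small one is not.
sample_size_per_variant(0.40, 0.60)  # roughly 95 responses per variant
sample_size_per_variant(0.40, 0.45)  # roughly 1,500 responses per variant
```

The takeaway: the 100-per-variant guideline is enough only for large differences. Small, subtle changes need far more traffic, which is one reason bold variations make better first tests.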

Step 5: Launch the Experiment

Click Start Experiment to begin. BttrForm will use your original form's share URL and automatically route visitors to the appropriate variant. You do not need to change or share different URLs.

Traffic Splitting

BttrForm uses cookie-based random assignment to split traffic between variants. Here is how it works:

  • When a new visitor loads your form URL, BttrForm generates a random number and assigns the visitor to a variant based on your configured split percentages.
  • A cookie is set so the visitor always sees the same variant on return visits.
  • The assignment is truly random – it does not factor in the visitor's device, location, or any other attribute.
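The assignment logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not BttrForm's actual implementation; the cookie name `bf_variant` and the function name are made up:

```python
import random

def assign_variant(cookies: dict, split: dict) -> str:
    """Return the visitor's variant, assigning one on the first visit.

    `split` maps variant names to traffic percentages, e.g. {"A": 50, "B": 50}.
    """
    # Returning visitors keep their original variant via the cookie.
    if "bf_variant" in cookies:
        return cookies["bf_variant"]

    # New visitors: draw a random number in [0, 100) and walk the
    # cumulative split until the draw falls inside a variant's slice.
    roll = random.uniform(0, 100)
    cumulative = 0.0
    chosen = list(split)[-1]  # fallback guards against rounding at 100.0
    for variant, pct in split.items():
        cumulative += pct
        if roll < cumulative:
            chosen = variant
            break
    cookies["bf_variant"] = chosen  # persist so return visits are consistent
    return chosen
```

Because the choice is stored in the cookie rather than recomputed on each visit, changing the split mid-test only affects visitors who have not yet been assigned.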

Adjusting the Split

For most tests, an even split (50/50) is ideal because it reaches statistical significance fastest. However, if you want to limit risk, you can use an uneven split:

  • 90/10 split – Sends most traffic to the proven control while testing a new variant with a small audience. Useful when the variant includes a major change you are not confident about.
  • 70/30 split – A moderate risk-reduction approach that still gathers variant data reasonably quickly.

You can adjust the traffic split while a test is running. Changes take effect immediately for new visitors, but existing visitors remain assigned to their original variant.

Analyzing Results

While the experiment is active, the results dashboard updates in real time. Navigate to Analytics > Experiments and click on your active test to view the results panel.

Metrics Compared

BttrForm compares the following metrics across variants:

  • Completion Rate – The primary metric for most tests.
  • Average Completion Time – Faster is generally better (less friction).
  • Drop-off Rate – Lower is better.
  • Field-level Completion – See if a specific field performs differently between variants.
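To make the rate definitions concrete, here is a small sketch that derives completion rate, drop-off rate, and average completion time per variant from raw response records. The record fields are hypothetical, not BttrForm's actual export format:

```python
# Hypothetical response records; field names are illustrative only.
responses = [
    {"variant": "A", "completed": True,  "seconds": 192},
    {"variant": "A", "completed": False, "seconds": 45},
    {"variant": "B", "completed": True,  "seconds": 168},
    {"variant": "B", "completed": True,  "seconds": 172},
]

def variant_metrics(rows, variant):
    """Completion rate, drop-off rate, and average completion time."""
    mine = [r for r in rows if r["variant"] == variant]
    done = [r for r in mine if r["completed"]]
    rate = len(done) / len(mine)
    avg = sum(r["seconds"] for r in done) / len(done)
    return {
        "completion_rate": rate,
        "drop_off_rate": 1 - rate,  # drop-off is the complement of completion
        "avg_seconds": avg,
    }
```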

Reading the Results Table

A typical results view looks like this:

  Metric              Variant A (Control)    Variant B    Difference
  Responses           487                    502          –
  Completion Rate     42.3%                  48.7%        +6.4%
  Avg. Time           3m 12s                 2m 48s       -24s
  Statistical Sig.    –                      95.2%        Significant

Understanding Statistical Significance

Statistical significance tells you whether the observed difference between variants is likely real or just due to random chance.

  • Below 90% β€” Not enough evidence. The difference could be random. Keep the test running.
  • 90-95% β€” Moderate confidence. The difference is likely real but consider gathering more data.
  • Above 95% β€” High confidence. You can be reasonably sure the winning variant genuinely performs better.
  • Above 99% β€” Very high confidence. The result is almost certainly real.

BttrForm uses a two-tailed Z-test for proportions to calculate significance. You do not need to understand the math β€” just look for the green "Significant" or "Not Significant" label in the results.
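For the curious, the calculation is straightforward to reproduce. The sketch below runs a two-tailed two-proportion z-test on numbers close to the example table above; the completion counts are back-calculated from the rounded percentages, so the result lands near, not exactly on, the 95.2% shown:

```python
from math import sqrt, erf

def significance(done_a: int, n_a: int, done_b: int, n_b: int) -> float:
    """Confidence (0-1) that two completion rates genuinely differ,
    via a two-tailed two-proportion z-test."""
    p_a, p_b = done_a / n_a, done_b / n_b
    # Pooled rate under the null hypothesis that the variants are equal.
    pooled = (done_a + done_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# 42.3% of 487 responses is about 206 completions;
# 48.7% of 502 is about 244 completions.
significance(206, 487, 244, 502)  # roughly 0.95
```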

Avoid Peeking

Do not make decisions based on early results. Statistical significance requires sufficient data. Checking results daily and stopping the test early when one variant "looks better" is a common mistake that leads to false conclusions. Let the test reach your minimum sample size.

Applying the Winner

Once a test reaches statistical significance:

  1. Click End Experiment on the results page.
  2. BttrForm will ask which variant to keep. Select the winner.
  3. All traffic is redirected to the winning variant immediately.
  4. The losing variants are archived (not deleted) so you can reference them later.

The winning variant replaces your original form. All historical response data from both variants is preserved and labeled by variant in the submissions table, so you always know which version generated each response.

Best Practices

  • Test one variable at a time to isolate what drives the improvement.
  • Set your hypothesis before starting – "I believe removing the phone number field will increase completion rate by 5%."
  • Let tests run to completion rather than stopping early based on preliminary data.
  • Document your learnings so future form designs benefit from past experiments.
  • Run sequential tests β€” once you find a winner, test the next hypothesis against the new baseline.

Pro Tip

Keep a testing log in your team workspace. Over time, patterns emerge (for example, "shorter forms always win for lead generation") that accelerate future form design without needing to test every decision.

Next Steps

  • Conversion Tracking – Use funnel data to identify which elements to test next.
  • Dashboard Overview – Monitor how your A/B test winners impact overall form performance over time.

