A/B testing lets you compare two or more versions of a form to determine which one performs better. Instead of guessing whether a shorter form or a different question order will improve conversions, you can run a controlled experiment and let the data decide.
BttrForm's built-in A/B testing handles traffic splitting, data collection, and statistical analysis, so you can focus on designing better forms rather than crunching numbers.
How A/B Testing Works
When you create an A/B test, BttrForm randomly assigns each new visitor to one of your form variants. A browser cookie records the assignment, so each visitor sees the same variant even if they return later. Responses are collected separately for each variant, and BttrForm calculates the statistical significance of any performance difference.
The testing flow follows four steps:
- Create variants – Duplicate your form and make changes to the copy.
- Configure the test – Set the traffic split and sample size.
- Run the experiment – BttrForm distributes traffic and collects data.
- Analyze results – Review the winner and apply the changes.
Creating an A/B Test
Step 1: Navigate to Experiments
Open the form you want to test, then go to Analytics > Experiments > New Experiment.
Step 2: Create Variants
Your original form is automatically set as Variant A (the control). Click Add Variant to create Variant B. BttrForm duplicates your form so you can modify it independently.
You can create up to four variants (A, B, C, D) in a single experiment, though we recommend testing only two at a time for faster, clearer results.
Step 3: Make Your Changes
Common elements to test include:
- Form title and description – Does a benefit-focused headline outperform a generic one?
- Number of fields – Does removing optional fields improve completion rates?
- Question order – Does starting with easy questions reduce drop-off?
- Button text – Does "Get My Results" convert better than "Submit"?
- Page structure – Does a single-page form outperform a multi-step form?
- Field types – Do dropdowns perform better than radio buttons for the same question?
Step 4: Configure the Test
Set the following parameters before launching:
| Parameter | Description | Recommended Value |
|---|---|---|
| Traffic Split | Percentage of traffic sent to each variant | 50/50 for two variants |
| Minimum Sample Size | Responses needed before declaring a winner | 100 per variant minimum |
| Primary Metric | The metric used to determine the winner | Completion Rate |
| Test Duration | Maximum time the test will run | 2-4 weeks |
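The minimum sample size you need depends on your baseline completion rate and the smallest lift you care about detecting. As an illustration, here is the standard two-proportion power calculation (not necessarily the exact formula BttrForm uses internally) at 95% significance and 80% power:

```python
import math

def sample_size_per_variant(baseline_rate, target_rate,
                            z_alpha=1.96, z_beta=0.8416):
    """Estimate visitors needed per variant to detect a lift from
    baseline_rate to target_rate (two-tailed test, 95% significance
    and 80% power by default). Standard textbook formula, shown for
    illustration only.
    """
    p_bar = (baseline_rate + target_rate) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(baseline_rate * (1 - baseline_rate)
                                      + target_rate * (1 - target_rate)))
    effect = target_rate - baseline_rate
    return math.ceil(numerator ** 2 / effect ** 2)

# Detecting a 5-point lift from a 40% baseline takes roughly 1,500+
# visitors per variant -- far more than the 100-response floor above.
n = sample_size_per_variant(0.40, 0.45)
```

Note that small expected lifts require dramatically more data: halving the detectable lift roughly quadruples the required sample per variant.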
Step 5: Launch the Experiment
Click Start Experiment to begin. BttrForm will use your original form's share URL and automatically route visitors to the appropriate variant. You do not need to change or share different URLs.
Traffic Splitting
BttrForm uses cookie-based random assignment to split traffic between variants. Here is how it works:
- When a new visitor loads your form URL, BttrForm generates a random number and assigns the visitor to a variant based on your configured split percentages.
- A cookie is set so the visitor always sees the same variant on return visits.
- The assignment is truly random – it does not factor in the visitor's device, location, or any other attribute.
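The assignment logic described above can be sketched in a few lines. This is an illustrative model, not BttrForm's actual implementation; the `existing_cookie` parameter stands in for the variant value read from the visitor's browser cookie:

```python
import random

def assign_variant(split, existing_cookie=None):
    """Pick a variant for a visitor.

    split: dict of variant name -> traffic percentage, e.g. {"A": 50, "B": 50}.
    existing_cookie: the variant stored on a previous visit, if any.
    Returns the variant name; the caller stores it back in the cookie.
    """
    if existing_cookie in split:       # returning visitor: keep assignment
        return existing_cookie
    roll = random.uniform(0, 100)      # the random number described above
    cumulative = 0.0
    for variant, percentage in split.items():
        cumulative += percentage
        if roll < cumulative:
            return variant
    return variant                     # guard against float rounding

# A new visitor lands on either side of a 50/50 split at random, while a
# returning visitor with a stored cookie always gets the same variant back.
```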
Adjusting the Split
For most tests, an even split (50/50) is ideal because it reaches statistical significance fastest. However, if you want to limit risk, you can use an uneven split:
- 90/10 split – Sends most traffic to the proven control while testing a new variant with a small audience. Useful when the variant includes a major change you are not confident about.
- 70/30 split – A moderate risk-reduction approach that still gathers variant data reasonably quickly.
You can adjust the traffic split while a test is running. Changes take effect immediately for new visitors, but existing visitors remain assigned to their original variant.
Analyzing Results
While the experiment is active, the results dashboard updates in real time. Navigate to Analytics > Experiments and click on your active test to view the results panel.
Metrics Compared
BttrForm compares the following metrics across variants:
- Completion Rate – The primary metric for most tests.
- Average Completion Time – Faster is generally better (less friction).
- Drop-off Rate – Lower is better.
- Field-level Completion – See if a specific field performs differently between variants.
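These per-variant metrics reduce to simple ratios over the raw counts. The function and field names below are illustrative, not BttrForm API calls; BttrForm computes all of this for you:

```python
def variant_metrics(views, completions, total_seconds_spent):
    """Derive the comparison metrics for one variant from raw counts.
    Illustrative formulas only.
    """
    completion_rate = completions / views
    return {
        "completion_rate": completion_rate,
        "drop_off_rate": 1 - completion_rate,
        "avg_completion_seconds": (total_seconds_spent / completions
                                   if completions else 0.0),
    }

# 423 completions out of 1,000 views, 81,216 total seconds spent:
# a 42.3% completion rate, 57.7% drop-off, and a 192s (3m 12s) average.
m = variant_metrics(views=1000, completions=423, total_seconds_spent=81216)
```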
Reading the Results Table
A typical results view looks like this:
| Metric | Variant A (Control) | Variant B | Difference |
|---|---|---|---|
| Responses | 487 | 502 | – |
| Completion Rate | 42.3% | 48.7% | +6.4% |
| Avg. Time | 3m 12s | 2m 48s | -24s |
| Statistical Sig. | – | 95.2% | Significant |
Understanding Statistical Significance
Statistical significance tells you whether the observed difference between variants is likely real or just due to random chance.
- Below 90% – Not enough evidence. The difference could be random. Keep the test running.
- 90-95% – Moderate confidence. The difference is likely real, but consider gathering more data.
- Above 95% – High confidence. You can be reasonably sure the winning variant genuinely performs better.
- Above 99% – Very high confidence. The result is almost certainly real.
BttrForm uses a two-tailed Z-test for proportions to calculate significance. You do not need to understand the math – just look for the green "Significant" or "Not Significant" label in the results.
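For intuition, a two-tailed two-proportion Z-test can be reproduced in a few lines. This is the standard textbook version with made-up counts; it may differ in detail from BttrForm's internal calculation:

```python
import math

def significance(completions_a, visitors_a, completions_b, visitors_b):
    """Two-tailed Z-test for a difference between two completion rates.
    Returns the z statistic and the confidence level (1 - p-value).
    """
    p_a = completions_a / visitors_a
    p_b = completions_b / visitors_b
    pooled = (completions_a + completions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-tailed p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, 1 - p_value

# Hypothetical test: 40% vs 46% completion on 500 visitors each lands
# at roughly 94-95% confidence -- close, but not yet "Significant".
z, confidence = significance(200, 500, 230, 500)
```

This also illustrates why sample size matters: the same 6-point lift observed on a few hundred visitors per variant may still fall short of the 95% threshold.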
Applying the Winner
Once a test reaches statistical significance:
- Click End Experiment on the results page.
- BttrForm will ask which variant to keep. Select the winner.
- All traffic is redirected to the winning variant immediately.
- The losing variants are archived (not deleted) so you can reference them later.
The winning variant replaces your original form. All historical response data from both variants is preserved and labeled by variant in the submissions table, so you always know which version generated each response.
Best Practices
- Test one variable at a time to isolate what drives the improvement.
- Set your hypothesis before starting – "I believe removing the phone number field will increase completion rate by 5%."
- Let tests run to completion rather than stopping early based on preliminary data.
- Document your learnings so future form designs benefit from past experiments.
- Run sequential tests – once you find a winner, test the next hypothesis against the new baseline.
Next Steps
- Conversion Tracking – Use funnel data to identify which elements to test next.
- Dashboard Overview – Monitor how your A/B test winners impact overall form performance over time.