A/B Testing | Dashboards & Reporting

Overview of the A/B Testing Experiments Dashboard & Reporting

Written by Christian Sokolowski

A/B TESTING DASHBOARDS

The A/B Testing experiments dashboard outlines the performance of your control widget alongside each of the variations you chose to test against it. If you have not yet set up your experiment, head over to the install instructions to get started!

The metrics you see on this dashboard are at the page level, NOT the widget level. If you are interested in widget-specific performance, head over to your reports page to view the classic reporting.

Page-level reporting is used for A/B testing on ecommerce sites because it captures the overall impact of each page variation. While widget-level reporting can provide insights into specific elements or features, page-level reporting shows how each variation affects the overall user experience and conversion rate of the page. This supports more effective optimization and decision-making at the page level, leading to improved performance and better user satisfaction.


VARIATION NAMES

These names are the aliases you chose during the installation process. If you did not set a specific alias for each widget or the control, they default to the widget name.


TRAFFIC

This is the traffic you elected to allocate to each variation type. The percentages you chose are applied randomly on each new visit, similar to flipping a weighted coin, so the actual traffic distribution among the variations may not be exactly equal. Randomizing the allocation this way is done to prioritize speedy loading of the widgets, providing a seamless user experience.
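To make the coin-flip idea concrete, here is a minimal TypeScript sketch of weighted random assignment. It is an illustration only, with hypothetical names and a made-up 50/25/25 split, not the platform's actual code:

    // Hypothetical sketch, not the platform's actual code: a weighted
    // random pick decides which variation a new visitor is shown.
    interface Variation {
      name: string;    // the alias shown on the dashboard
      traffic: number; // allocated share, e.g. 0.5 for 50%
    }

    function pickVariation(variations: Variation[]): Variation {
      const roll = Math.random(); // one uniform "coin flip" per new visitor
      let cumulative = 0;
      for (const v of variations) {
        cumulative += v.traffic;
        if (roll < cumulative) return v;
      }
      return variations[variations.length - 1]; // guard against rounding drift
    }

    // Example: a 50/25/25 split between the control and two variations.
    const chosen = pickVariation([
      { name: "Control", traffic: 0.5 },
      { name: "Variation A", traffic: 0.25 },
      { name: "Variation B", traffic: 0.25 },
    ]);

Because each visitor's assignment is an independent random draw, a 50/25/25 configuration will only approximate those percentages over many visitors, which is why the observed split is rarely exact.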


VISITORS

The total number of visitors (views) for each variation is tracked using a cookie set on page load. Once a visitor sees a particular variation, they will consistently see the same variation, unless they clear their cookies or open the page in a new private-browsing window. This cookie-based tracking gives visitors a consistent experience and allows each variation's performance to be measured accurately against the views it received.
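As a rough sketch of how such cookie-based stickiness can work (hypothetical names and cookie key, not the platform's actual implementation), in TypeScript:

    // Hypothetical sketch, not the platform's actual code: the chosen
    // variation is stored in a cookie so repeat page loads stay consistent.
    function getAssignedVariation(cookieName: string, assign: () => string): string {
      const match = document.cookie.match(
        new RegExp("(?:^|; )" + cookieName + "=([^;]*)")
      );
      if (match) {
        return decodeURIComponent(match[1]); // visitor was already assigned
      }
      const variation = assign(); // e.g. the weighted pick sketched above
      // Persist for 30 days; clearing cookies or private browsing resets this.
      document.cookie =
        cookieName + "=" + encodeURIComponent(variation) + "; path=/; max-age=2592000";
      return variation;
    }

    // Usage: assign once, then reuse the stored value on every later page load.
    const shown = getAssignedVariation("ab_variation", () => "Variation A");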


CONVERSION

The "Conversion Rate" option is used to determine the winning variation in an A/B test based on which variation, including the control, achieves the highest percentage of order conversions. In other words, it assesses the success of each variation by comparing the conversion rates, which represent the proportion of visitors who complete a desired action.

The conversion rate is based on page views of the widgets or controls, not on individual widget conversions. If you want to see widget conversion rates, visit the performance reporting section. The "winning" widget is chosen based on the page-load conversion rate shown on the A/B testing dashboard, not on which specific widget drove the most conversions during the A/B test.
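As a hypothetical illustration with made-up numbers: if the control received 2,000 page views that led to 40 orders (a 2.0% conversion rate) and a variation received 2,000 page views that led to 50 orders (2.5%), the dashboard would declare the variation the winner on conversion rate.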


REVENUE

The "Revenue" option is used to determine the winning variation based on the amount of revenue generated. The control and variations are compared in terms of the total revenue they generate during the test period.

The goal of using the "Revenue" option is to identify the variation that drives the most revenue, optimizing for the monetary impact of each variation rather than its conversion count.

The revenue is based on page views of the widgets or controls, not on individual widget revenue. If you want to see widget revenue, visit the performance reporting section. The "winning" widget is chosen based on the page-load revenue shown on the A/B testing dashboard, not on which specific widget made the most revenue during the A/B test.
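As a hypothetical illustration with made-up numbers: if page views of the control produce $4,800 in orders during the test period while page views of a variation produce $5,600, the variation wins on revenue, regardless of which individual widget converted more often.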


PER USER

The "Per User" metric refers to the revenue generated per user who has viewed a particular variation: the variation's total revenue divided by its number of visitors. It represents the average revenue contributed by each user exposed to that variation.
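For example, with made-up numbers, a variation that generated $5,600 in revenue across 2,000 visitors would show a "Per User" value of $5,600 / 2,000 = $2.80.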
