Our A/B testing platform is tailored for developers and experienced marketers.
Get an overview of the features that make us so valuable for your experimentation program.
Privacy compliance isn't just a feature - it's a prerequisite for serious experimentation! ABlyft was built with this in mind and continuously adapts to new requirements.
We are rock solid and rocket fast. With a super tiny snippet, a superb CDN, and the option to host the snippet yourself, we are faster than other solutions. Say goodbye to flickering and performance doubts!
From small agencies to large enterprises, it always stays fast and clear. With a strong focus on developers and an easy-to-use yet powerful visual editor, you can test any of your ideas.
ABlyft makes developing experiments as easy as possible. Whether you're a front-end developer or an experienced A/B testing marketer, you get the features you need from the platform.
If experiments could interfere with each other technically or in their hypotheses, make them mutually exclusive. Let experiments run in parallel without getting in each other's way, instead of waiting for one to finish.
By switching on debug mode you can follow ABlyft's processing and lifecycle in detail, so questions such as why an experiment did not activate are answered easily. Debug mode also bypasses caching, so changes take effect immediately, and it shows when goals are triggered.
Keep your code complete, with all comments, in the platform. ABlyft takes care of minifying it automatically. This ensures the best readability in the platform and the smallest possible snippet on the website.
Need a callback when an element appears in the DOM, or enters the viewport? Want to read URL parameters, work with cookies, or parse URLs? Easy-to-use methods are available for many such practical tasks.
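In the browser, such element callbacks would typically build on MutationObserver and IntersectionObserver, while the URL and cookie helpers wrap standard string and URL APIs. As a rough TypeScript sketch (the function names here are illustrative, not ABlyft's documented API):

```typescript
// Illustrative helpers in the spirit of the convenience methods described.
// These are generic sketches, not ABlyft's actual API.

// Read a single query parameter from a full URL string.
function getQueryParam(url: string, name: string): string | null {
  return new URL(url).searchParams.get(name);
}

// Parse a document.cookie-style string into a key/value map.
function parseCookies(cookieString: string): Record<string, string> {
  const jar: Record<string, string> = {};
  for (const pair of cookieString.split(";")) {
    const [key, ...rest] = pair.trim().split("=");
    if (key) jar[key] = decodeURIComponent(rest.join("="));
  }
  return jar;
}
```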
Upload assets (e.g. images) in one central location and use them in any experiment. Keep track of where each asset is used, and delete unused assets with one click. No more workarounds for icons, graphics, and backgrounds.
If you want to test two existing pages against each other, this is possible in an uncomplicated yet very flexible way. Among other things, you determine how existing and new URL parameters and hashes should be treated.
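One such treatment policy can be sketched as follows - carry the visitor's existing query parameters over to the variant URL, let the target URL's own parameters win on conflict, and keep the hash unless the target sets one. The function name and policy are illustrative, not ABlyft's actual behavior:

```typescript
// Build the redirect URL for a split-URL test under one example policy:
// original parameters are carried over, target parameters take precedence.
function buildRedirectUrl(currentUrl: string, variantUrl: string): string {
  const current = new URL(currentUrl);
  const target = new URL(variantUrl);
  for (const [key, value] of current.searchParams) {
    if (!target.searchParams.has(key)) target.searchParams.set(key, value);
  }
  target.hash = target.hash || current.hash; // keep the hash unless the target sets one
  return target.toString();
}
```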
If you test a lot, Goals, Audiences, and Pages that are no longer used accumulate over time. For these three types, there are two filtered views in addition to the overview of all items: one for elements that are in use and one for elements that no longer are. So you always know what is still up to date.
Be as specific as you like about where experiments should run and who should take part. In addition to targeting individual experiments, use higher-level rules that apply globally. Create environments for languages, staging websites, and much more.
Determine at the top level (before ABlyft starts working) whether something should be excluded in general: for example, never target the payment page, exclude all visitors using Internet Explorer 11, or don't run ABlyft at all if the visitor has objected to marketing cookies. The purposes are manifold - above all, this avoids errors and redundancies in targeting at the experiment level.
For example, if there is a staging system, or the website runs under different domains (e.g. .de, .at, .ch), and an experiment should run on all or only some of these environments, simply define this on the experiment. This way you don't have to create redundant projects that contain basically the same things (Audiences, Pages, Goals); instead, you define at the experiment level where it should run (e.g. at first only on staging, or only on the US and CA pages but not the DE page).
Define user groups and pages or page types once. Set primary goals and micro-conversions and use them in all experiments. Get a direct overview of what is in use, and where.
Pages that cannot be identified uniquely by their URL can also be identified via variables, entries in the data layer, or elements on the page. The same applies to user segments (audiences). Should an experiment only run on detail pages of sale products? Or only on expensive products? Check whether there is a sale badge, or read out the price.
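As an illustration, a page rule for "expensive sale product" could combine these signals like this. The data shape, the European price format, and the threshold are assumptions for the sketch; in practice the values would come from the data layer or the DOM:

```typescript
// Hypothetical snapshot of the signals a page rule might inspect.
interface PageSnapshot {
  pageType: string;       // e.g. from a dataLayer entry
  hasSaleBadge: boolean;  // e.g. presence of a sale-badge element
  priceText: string;      // e.g. innerText of the price element
}

// Parse a European-formatted price string like "1.299,00 €" into a number.
function parsePrice(text: string): number {
  const cleaned = text.replace(/[^\d.,]/g, "").replace(/\./g, "").replace(",", ".");
  return parseFloat(cleaned);
}

// A page qualifies if it is a product detail page, shows a sale badge,
// and the price is at or above the (assumed) threshold.
function isExpensiveSaleProduct(page: PageSnapshot, threshold = 500): boolean {
  return page.pageType === "product-detail"
    && page.hasSaleBadge
    && parsePrice(page.priceText) >= threshold;
}
```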
If required, determine the percentage of eligible visitors from the total traffic that should enter an experiment, as well as the ratio in which that traffic is split between the variants. You can also define a priority order for mutually exclusive experiments.
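A common way to implement such ratios is deterministic bucketing: hash a stable visitor ID into the unit interval and map it onto the variants' cumulative weights. The following is a generic sketch of that technique, not ABlyft's actual mechanism:

```typescript
// Map a stable visitor id to a pseudo-random but deterministic value in [0, 1).
function hashToUnit(id: string): number {
  let h = 0;
  for (let i = 0; i < id.length; i++) {
    h = (h * 31 + id.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return h / 0x100000000;
}

// Assign a variant according to weights, e.g. { control: 0.5, variant: 0.5 }.
// The same visitor id always lands in the same variant.
function assignVariant(visitorId: string, weights: Record<string, number>): string {
  const u = hashToUnit(visitorId);
  let cumulative = 0;
  for (const [name, weight] of Object.entries(weights)) {
    cumulative += weight;
    if (u < cumulative) return name;
  }
  return Object.keys(weights)[0]; // guard against floating-point rounding
}
```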
Data protection is now a must for the vast majority of companies. To ensure that it is not a showstopper for A/B testing, ABlyft meets and even exceeds the requirements of the toughest markets worldwide. For safer experimentation for your business.
ABlyft does not transmit or store any personal data on the platform. All data is kept in aggregated form and cannot be attributed to any individual visitor. No visitor data is stored separately either - no IP address, no user ID, etc.
A simple and secure self-hosting flow is available as a further option. No external resources are loaded, yet the snippet is always kept up to date automatically. Release processes are also easy to implement this way.
All data and files are stored on highly secure servers. The data center is certified to the highest standards and enjoys the trust of even the largest enterprise customers. In addition to the best loading times, this means your data is safe.
Every change to the snippet can be versioned via Git if required. This lets you see on GitHub or Bitbucket, at any time, what your snippet looked like, when, and what was changed - recorded by an independent party.
Set the lifetime of the cookies completely freely, so that it matches your requirements. Decide whether you want to test with visitors who have the Do Not Track setting enabled.
Secure your account with two-factor authentication. This way you can also meet the compliance guidelines of major customers or your own company.
For the storage of goals/events you can either add your own solution or send all tracking calls to your own service or server. We are happy to help you with this.
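As a sketch of what such a self-hosted tracking call might carry - deliberately without personal data, in line with the platform's privacy approach. All names, fields, and the endpoint here are hypothetical:

```typescript
// Hypothetical payload for a tracking call to your own service.
// Contains no IP address and no user ID, only the experiment context.
interface GoalEvent {
  experimentId: string;
  variantId: string;
  goalId: string;
  timestamp: number;
}

function buildGoalEvent(experimentId: string, variantId: string, goalId: string): GoalEvent {
  return { experimentId, variantId, goalId, timestamp: Date.now() };
}

// In the browser, the event could then be forwarded with e.g.:
// navigator.sendBeacon("https://tracking.example.com/events", JSON.stringify(event));
```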
All changes to experiments are stored in a change log, so you always have an overview of who changed what and when - from status changes to targeting adjustments.
Combine ABlyft with all your analysis tools. Keep the overview even with many tests, and test with many goals. Stay informed about everything that happens. ABlyft scales to the largest teams and the largest experimentation programs.
Pass information about the project, experiments, or variants to any web analytics tool. Use the existing integrations or create your own. Pass information to heatmap and session recording tools, to Google Tag Manager, or directly to Google Analytics and co.
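For Google Tag Manager, passing experiment information usually means pushing an event onto the page's data layer, which GTM's snippet initializes as a plain array on the page (it is modeled as a local array in this sketch). The event and key names are illustrative, not a fixed ABlyft schema:

```typescript
// Sketch: report an experiment decision to GTM's data layer.
const dataLayer: Record<string, unknown>[] = [];

function reportExperiment(experimentName: string, variantName: string): void {
  dataLayer.push({
    event: "abtest.decision", // hypothetical event name to trigger a GTM tag
    experiment: experimentName,
    variant: variantName,
  });
}
```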
Clear presentation of the metrics that matter for your experiment. Get a direct overview and drill down into the details. View cumulative or daily histories. Share a link to a read-only view of the report with stakeholders - and revoke access again with one click.
Optionally set an upper limit for a goal's value. The limit can be absolute or relative: for example, an absolute limit would be 1,000 USD, while a relative limit would be "average of the variant + 500%".
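The two limit types from the example can be expressed in a few lines. This is a sketch of the rule as described, not ABlyft's internal implementation:

```typescript
// Cap an outlier goal value against an absolute or relative upper limit.
type Limit =
  | { kind: "absolute"; value: number }
  | { kind: "relative"; percentAboveAverage: number };

function capGoalValue(value: number, variantAverage: number, limit: Limit): number {
  const max =
    limit.kind === "absolute"
      ? limit.value // e.g. 1,000 USD
      : variantAverage * (1 + limit.percentAboveAverage / 100); // e.g. average + 500%
  return Math.min(value, max);
}
```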
If you have a testing channel in Slack, just connect it to ABlyft. When something changes in an experiment - e.g. it is started or stopped - ABlyft posts the action and the editor directly to the channel. If you don't use Slack, or prefer to receive these updates by email, that is also possible.
Keep an overview of how many tested visitors were counted in the current billing period. See the consumption for individual months and how the visitors are distributed across the projects in your account.
Simply invite new colleagues to join your team. And if you work with several teams (e.g. customers), you can easily switch between accounts - so you only need one login for all of them.
ABlyft was created by A/B testers for A/B testers. And support, too, comes not from just anyone, but from great people who have been running experiments intensively for years. We know what we are doing - whether it's a small question or support across the complete experimentation workflow.
Support from people who only half understand what they are talking about? Help that has nothing to do with practice? Not with us. We support you on all points and know our platform under the hood. We value every hint and continuously develop ABlyft further for all customers.
A testing tool only makes sense if it is actually used. We have years of practical A/B testing experience with agencies and companies of all sizes and verticals. We support you with complex issues and, if required, take over everything from the design, implementation, and quality assurance of experiments to their evaluation. For full agility.