Optimize your AMP pages with amp-experiment


Whether you’re running an online news, travel, or e-commerce site, you’ve likely invested time in reviewing your site’s design and user journeys to make your experiences more useful to your users. Often this means running A/B-style experiments to learn which enhancements work best. To enable this in AMP, we’ve launched <amp-experiment>, a new AMP component that allows you to conduct user experience experiments on an AMP page.

How it works

You can now design experiments and specify how much traffic to drive to specific variations. AMP handles the traffic diversion on the client side and provides a way to collect data with either <amp-pixel> or <amp-analytics>.

There are three key steps to getting a content experiment set up on your AMP page:

  1. Configure the experiment
  2. Implement variations
  3. Collect the data

Configure the experiment

<amp-experiment> is a new custom element that you use to specify all experiment behaviors via a JSON configuration. Here’s a code sample that configures an experiment called “recommendedLinksExperiment”:


<amp-experiment>
  <script type="application/json">
    {
      "recommendedLinksExperiment": {
        "sticky": true,
        "variants": {
          "shorterList": 25.0,
          "longerList": 25.0,
          "control": 50.0
        }
      },
      "bExperiment": {…}
    }
  </script>
</amp-experiment>


The JSON configuration supports specifying the following attributes of one or several experiments:

  • Whether assignment to a given experiment is sticky: Should a given user always be assigned to the same experiment variants across pageviews? In the example above, the experiment is indeed sticky.
  • How much traffic to expose to each variant of a given experiment: Do you want a random 50% of users to see version A and 50% to see version B? What about 20% for each of versions A through E? In the sample above, we allocate 50% of users into the control experience and allocate 25% each into either an experience with a shorter list of recommendations or one with a longer list of recommendations.

Please consult the configuration documentation for other advanced settings like experiment dependencies (groups) and employing a user notification constraint when using the sticky setting.
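As a rough sketch of how those advanced settings fit into the JSON configuration (the notification id here is a placeholder, and the corresponding amp-user-notification element is assumed to exist elsewhere on the page):

```html
<amp-experiment>
  <script type="application/json">
    {
      "recommendedLinksExperiment": {
        "sticky": true,
        "consentNotificationId": "links-experiment-notice",
        "group": "recommendedLinksExperiment",
        "variants": {
          "shorterList": 25.0,
          "longerList": 25.0,
          "control": 50.0
        }
      }
    }
  </script>
</amp-experiment>
```

With consentNotificationId set, the user is only diverted into the experiment after accepting the referenced notification; see the configuration documentation for the exact semantics of these fields.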

Implement the variations

Next up, you need to implement how each variant in each experiment should behave. <amp-experiment> will expose an attribute on the <body> element for each variant the user has been assigned to. You can then use CSS to change styling or visibility to construct variants as you’d like users to experience them:


body[amp-x-recommendedLinksExperiment="control"] .extra-links {
  display: none;
}

In the above example, the "control" variant of the recommendedLinksExperiment hides ("display: none") the extra links that build the longer recommendation list, as indicated by the class name "extra-links". This gives exactly the list length we want to test as the experimental control experience.
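Each variant can get its own rule. For instance, the shorterList variant might trim the list further by also hiding a second block of links (the ".secondary-links" class here is hypothetical, not part of the example above):

```css
/* Hypothetical sketch: the shorterList variant hides both the extra links
   and an assumed second block of links to produce the shortest list */
body[amp-x-recommendedLinksExperiment="shorterList"] .extra-links,
body[amp-x-recommendedLinksExperiment="shorterList"] .secondary-links {
  display: none;
}
```

Users who fall outside all variants (the unallocated traffic) see the default markup with no variant attribute set on the body element.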

Collect the data

Finally, AMP takes the configuration and decides what variant to assign across all experiments and for all users. As users receive different experiences based on the experiment variants you’ve defined, you collect data so that you can measure the key metrics of interest such as button clicks or time spent.

<amp-experiment> exposes a couple of new reporting features. A new substitution variable called VARIANT lets you look up which variant of a given experiment was assigned to the user on a given pageview. If you’re running multiple experiments, you can use the VARIANTS variable to get the assigned variants across all defined experiments in a serialized format. Combining a user’s variant assignments with the data indicating how they behaved during their visit lets you judge the success of each variant.
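For example, a simple ping can carry the assigned variant back to your server via the VARIANT substitution (the endpoint URL and the "xp" query parameter below are placeholders, not part of the AMP API):

```html
<!-- Reports the assigned variant of recommendedLinksExperiment;
     https://example.com/ping and the "xp" parameter name are placeholders -->
<amp-pixel src="https://example.com/ping?xp=VARIANT(recommendedLinksExperiment)"></amp-pixel>
```

The same substitution variables can be used in <amp-analytics> request URLs if you prefer to fold variant reporting into your existing analytics configuration.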

Try it out!

The <amp-experiment> feature gives developers a handy tool to optimize their users’ experiences.

Please read the documentation for a full overview of features supported in this initial version and check out the sample at AMP By Example. Drop by GitHub and let us know your feedback and any ideas you have to enhance amp-experiment to be even more useful for the content experiments you’d like to run.

Posted by Rudy Galfi, Product Manager, AMP Project