This post is the second of a three-part series addressing best practices around implementing AMP:
- How to measure the success of your AMP pages
- AMP implementation best practices and common pitfalls (this article)
- AMP monetization best practices and common pitfalls (stay tuned!)
The previous article in this series described how to correctly measure the performance of your AMP pages. In this article, we discuss best practices and common pitfalls when building quality AMP pages. Building a good user experience is important because it improves user acquisition and retention. Implementing everything according to best practices also lets you analyze user behavior correctly and maintain the quality of your website through informed, data-driven decisions. Let’s take a deep dive into the following major areas:
- User Engagement
- Measuring User Behavior
- Monitoring and Troubleshooting
If you’d just like to see a list of common and important features that can be implemented in AMP, please jump to the table in the Appendix at the end of this post.
User engagement is closely correlated with revenue, and both are typically top-of-mind goals for publishers.
Let’s look into best practices that publishers can adopt to drive user engagement and how to avoid common pitfalls in your implementation of AMP.
Content and Feature Parity
It is common for the AMP and non-AMP pages of a website to differ in look and feel, content, and/or features. This can happen for a number of reasons (outdated content, outdated designs, budget, shifting priorities, etc.), but the reasons matter less than working to bridge the gap between the two. A disparity in content or features means that some users have a worse experience on your site than others.
Having content and feature differences could lead to:
- An inconsistent experience between pages, which can deter users from browsing AMP pages. For example, if a non-AMP page has a very different look and feel from an AMP page (e.g., personalized recommendations available only on the non-AMP page), users may get confused and opt to browse non-AMP pages instead.
- Poorer first impressions on AMP pages. If a page lacks engaging content, features, and calls to action (e.g., navigation, search, recommended articles), users will not spend time browsing the rest of the site. This results in lower engagement, which translates into fewer pageviews and lower revenue.
Ideally, users should see the same content, features, and calls to action on both types of pages.
Example: The web portal, Excite Japan, provides a great example of this feature and content parity:
AMP is constantly evolving to add new components and capabilities. For example, the <amp-script> component allows you to write custom JavaScript that runs in a Web Worker without compromising page performance. For a list of common and important features that can be implemented in AMP, please refer to the appendix.
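As a rough sketch (the script URL and the button are placeholders), an <amp-script> block might look like this:

```html
<!-- Load the amp-script extension -->
<script async custom-element="amp-script"
        src="https://cdn.ampproject.org/v0/amp-script-0.1.js"></script>

<!-- The custom JavaScript in hello-world.js (a hypothetical file) runs in a
     Web Worker and may manipulate the DOM subtree inside this element -->
<amp-script layout="container" src="https://www.example.com/hello-world.js">
  <button id="hello">Say hello</button>
</amp-script>
```

Note that, depending on where the script is served from, amp-script may additionally require a script hash declared in a `meta[name=amp-script-src]` tag; check the component documentation for the exact requirements.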
Keep users in the AMP experience to facilitate the onward journey
When possible, it is important to retain users in an AMP experience in order to drive pageviews and pages per session, and in turn maximize monetization opportunities. Some more complex user journeys might require directing a user to a non-AMP page. In these circumstances, a PWA combined with the <amp-install-serviceworker> component is highly recommended.
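A minimal sketch of installing a service worker from an AMP page (the service worker URL and fallback iframe page are placeholders):

```html
<!-- Load the amp-install-serviceworker extension -->
<script async custom-element="amp-install-serviceworker"
        src="https://cdn.ampproject.org/v0/amp-install-serviceworker-0.1.js"></script>

<!-- Installs the origin's service worker; data-iframe-src is used as a
     fallback when the page is served from an AMP cache, where the worker
     cannot be installed on the origin directly -->
<amp-install-serviceworker
  src="https://www.example.com/sw.js"
  data-iframe-src="https://www.example.com/install-sw.html"
  layout="nodisplay">
</amp-install-serviceworker>
```

With the service worker pre-installed, the user's first navigation from an AMP page to the origin can be served from cache, keeping the transition fast.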
Below are some good patterns for keeping a user in an AMP experience:
Method 1: Link menu/search/related articles to AMP pages
Example: La Iene from Italy shows AMP links for onsite search results:
Method 2: Call to action (CTA) to continue browsing
Example: The Indian publication NDTV added a “next article” CTA button at the bottom of all AMP pages to keep users in an AMP experience:
Method 3: Infinite scroll using amp-next-page
Implementing infinite scroll with amp-next-page turns an AMP page into an infinite scrolling experience by loading additional recommended content from the publisher’s website when the user reaches the end of an article. This helps to increase pageviews and offers a monetization opportunity, as an ad unit can be slotted between pages (see the launch announcement).
Example: Times of India uses version 1.0 of the component and shows ads between articles:
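A sketch of the inline configuration for version 1.0 of the component (URLs, titles, and images are placeholders):

```html
<!-- Load the amp-next-page extension (version 1.0) -->
<script async custom-element="amp-next-page"
        src="https://cdn.ampproject.org/v0/amp-next-page-1.0.js"></script>

<!-- Placed near the end of the article; each entry is fetched and appended
     as the user scrolls past the end of the current document -->
<amp-next-page>
  <script type="application/json">
    {
      "pages": [
        {
          "url": "https://www.example.com/article-2/amp/",
          "title": "Second recommended article",
          "image": "https://www.example.com/article-2-thumb.jpg"
        },
        {
          "url": "https://www.example.com/article-3/amp/",
          "title": "Third recommended article",
          "image": "https://www.example.com/article-3-thumb.jpg"
        }
      ]
    }
  </script>
</amp-next-page>
```

The component can also fetch recommendations from a remote endpoint instead of an inline list, which is useful when recommendations are personalized.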
As a best practice, A/B test any UI/UX and feature changes to assess their effectiveness. The <amp-experiment> component supports A/B and multivariate testing: it allows you to define customizable variants and configure how traffic is allocated, and it integrates with analytics so that the necessary data can be collected to perform statistical comparisons across variants.
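A sketch of a simple 50/50 experiment (the experiment and variant names are placeholders):

```html
<!-- Load the amp-experiment extension -->
<script async custom-element="amp-experiment"
        src="https://cdn.ampproject.org/v0/amp-experiment-0.1.js"></script>

<!-- Allocates 50% of traffic to each variant; the chosen variant is exposed
     as an attribute on <body>, e.g. amp-x-cta-color-test="red-cta" -->
<amp-experiment>
  <script type="application/json">
    {
      "cta-color-test": {
        "variants": {
          "control": 50,
          "red-cta": 50
        }
      }
    }
  </script>
</amp-experiment>
```

Variants can then be styled with CSS attribute selectors (e.g. `body[amp-x-cta-color-test="red-cta"] .cta { … }`), and the variant name can be reported through amp-analytics for comparison.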
To test AMP vs non-AMP, you can follow usual testing methodologies such as a split traffic test.
Additional resources:
- Case Study – How The Financial Times increased user engagement and revenue with better speed and UX
- News UX Playbook
Measuring User Behavior
Inaccurately measuring user behavior is a common pitfall that publishers fall into, but one that can be fixed easily. Setting up proper measurement is extremely important to get accurate data and to help you make decisions around your AMP strategy. Here are a few things to keep in mind:
We often hear publishers asking why the bounce rate for AMP pages is unusually high. Bounce rates for AMP pages tend to be inflated because user journeys from the AMP Cache to the origin are not properly tracked.
AMP Linker is a way to unify sessions between AMP Cache and your origin. In turn, this allows you to accurately track user metrics such as bounce rate and pages per session.
How it works for the scenario of AMP Cache to non-AMP navigation:
Steps to implement:
The steps to test and verify the implementation are detailed comprehensively in the first article in this series. If you need to test in the staging environment, take a look at “Option 3: Verify via Chrome Developer Tools” in the ‘Setup Verification’ section of this help center article.
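As a hedged sketch (the property ID is a placeholder, and this assumes Google Analytics via <amp-analytics>), AMP Linker is enabled on the AMP side and accepted on the origin side like so:

```html
<!-- On the AMP page: enable AMP Linker so the client ID survives
     navigation from the AMP Cache to your origin -->
<amp-analytics type="googleanalytics">
  <script type="application/json">
    {
      "vars": { "account": "UA-XXXXXXX-Y" },
      "linkers": { "enabled": true }
    }
  </script>
</amp-analytics>

<!-- On origin pages using analytics.js: accept the AMP client ID
     carried over by the linker parameter -->
<script>
  ga('create', 'UA-XXXXXXX-Y', 'auto', { 'useAmpClientId': true });
</script>
```

With this in place, a user who taps through from a cached AMP page to your origin is counted as one continuing session rather than two separate ones.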
In Google Analytics, it is common to use the data source dimension to compare AMP vs. non-AMP metrics. Because the data source dimension is scoped at the hit level, its value can only be associated with the page that sent it, which may result in inaccurate information as a user transitions from one data source to another. For example, if a visitor starts a session on an AMP document and then navigates to other pages on your domain built with other technologies, the final data source value will be “web”. This makes understanding user traffic to your domain more difficult and potentially confusing.
Custom dimensions are a more reliable way to compare AMP vs non-AMP metrics as you can combine hit-level and session-level scope dimensions for more accuracy.
There are two types of custom dimensions that can be useful to implement:
- Session level dimension to identify sessions that start from AMP
- Hit level dimension to track the number of hits to an AMP Cache vs the origin domain
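A sketch of sending a hit-level custom dimension from AMP (the index `cd1` and the value `amp` are assumptions; the dimension itself must first be created in the Google Analytics admin with the appropriate scope):

```html
<amp-analytics type="googleanalytics">
  <script type="application/json">
    {
      "vars": { "account": "UA-XXXXXXX-Y" },
      "extraUrlParams": {
        "cd1": "amp"
      },
      "triggers": {
        "trackPageview": {
          "on": "visible",
          "request": "pageview"
        }
      }
    }
  </script>
</amp-analytics>
```

Here `cd1` maps to custom dimension index 1 in the Measurement Protocol; the corresponding non-AMP pages would send a contrasting value such as `non-amp` so the two can be segmented in reports.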
For the detailed implementation steps, please refer to Part 1 of this series.
Measurement Parity with Non-AMP
In order to do an apples-to-apples comparison between AMP and non-AMP events, ensure that there is measurement parity between the two types of pages. For example, if you are measuring an important call-to-action button on non-AMP pages, the same button and click event should be measured on AMP as well, in order to properly compare performance.
The <amp-analytics> component supports various types of events and configuration which you can read about here. To learn more about which analytics vendors are supporting AMP, check the list of Analytics Vendors.
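For example, a click on a CTA button could be tracked with a click trigger (the selector, category, and action labels here are placeholders); the equivalent event would be tracked on the non-AMP page with your regular analytics snippet:

```html
<amp-analytics type="googleanalytics">
  <script type="application/json">
    {
      "vars": { "account": "UA-XXXXXXX-Y" },
      "triggers": {
        "ctaClick": {
          "on": "click",
          "selector": "#subscribe-cta",
          "request": "event",
          "vars": {
            "eventCategory": "engagement",
            "eventAction": "subscribe-click"
          }
        }
      }
    }
  </script>
</amp-analytics>
```

Keeping the category and action names identical across AMP and non-AMP pages makes the comparison trivial in reporting.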
After you’ve worked so hard to build a seamless user experience, you want to make sure that your pages are discoverable by users. Here are some top tips to do so:
- Ensure proper canonical linking
- Create AMP versions of the most popular pages on your site
- For pages with frequently updated content, set an appropriate value for max-age in the Cache-Control header
Treebo, for example, found that their AMP pages were being discovered after correcting their canonical linking:
“We were all set and made our first release and then waited for 3 long days for our amp pages to be cached by Google. But it wasn’t being cached. Now we didn’t know what went wrong. Digging deep we found out that since Google has not yet started indexing mobile pages and all our amp canonical links <link href=”https://www.treebo.com/hotels-in-mumbai/” rel=”canonical”> were part of our mobile site so that’s the reason they were not getting crawled!”
- If you’re using multiple languages, ensure proper hreflang linking
- Follow SEO and Structured data guidelines
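Canonical linking pairs each AMP page with its canonical counterpart (URLs here are placeholders):

```html
<!-- On the non-AMP (canonical) page: point to the AMP version -->
<link rel="amphtml" href="https://www.example.com/hotels-in-mumbai/amp/">

<!-- On the AMP page: point back to the canonical version -->
<link rel="canonical" href="https://www.example.com/hotels-in-mumbai/">
```

If an AMP page is standalone (there is no separate non-AMP version), its canonical link should point to itself.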
Monitoring and Troubleshooting
Regular monitoring and troubleshooting is needed to ensure consistent quality and discoverability of pages.
- Check analytics reports for abnormalities such as a sudden traffic drop or abnormal user behavior
  - Refer to Part 1 of this series on how to analyze AMP traffic.
- Check the Google Search Console AMP report for issues such as invalid pages or indexing issues
  - For a walkthrough of the Google Search Console report, watch this video.
- Set up CI integration for automated monitoring
  - Integrate the AMP Validator NPM packages into your build and test pipelines, and run scheduled checks in production.
If you do encounter issues during monitoring, here are some ways you can start the troubleshooting process:
- Check and fix validation errors
- Check and fix any Google-specific AMP Issues
- Ensure that canonical linking was implemented correctly
- Ensure that the page can be reached and loaded as Googlebot
- Look for patterns in Google Search Console reports
  - e.g., if all URLs under /category/world are affected, look for issues in the template
If all else fails, reach out to the Webmasters community forum.
We hope that this article has given you a comprehensive guide to making AMP successful for your users and business. In the coming weeks, we will dive deeper into monetization best practices.
Please let us know if you have any issues or feature requests here.
Appendix: Implementing common features on publisher websites in AMP: