There are certainly a number of AMP validators online already, and I personally use these two: search.google.com/search-console/amp and validator.ampproject.org/
However, for some reason, in certain cases they report different results (screenshot here).
Can anyone tell which one should be trusted? Meaning, which one is more authoritative?
Thanks,
Wadek
Both validators can be trusted. The problem in your case is that Search Console obeys your robots.txt (which restricts access to your AMP pages). Once you give the Google crawler access to your AMP pages, Google Search Console will show the same validation errors as the validator on ampproject.org.
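For illustration, here is a minimal robots.txt sketch; the /*/amp/ pattern is an assumption about the URL layout, so adapt it to your own site:

```
# Hypothetical sketch -- a broad rule like the commented one is the kind of thing
# that keeps Googlebot (and therefore Search Console) away from the AMP URLs:
#   User-agent: *
#   Disallow: /*/amp/
# Explicitly allowing the AMP paths for Googlebot restores access:
User-agent: Googlebot
Allow: /*/amp/
```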
Essentially, I'm concerned that a single user can be counted twice. Is there a best practice for this? I've tried googling, but I'm not sure I'm asking the right question with the right words. The platform is Sitecore.
Using the same property to track AMP and non-AMP pages will result in the same visitor being counted as multiple users. See here for Google's recommendation.
Though it looks like you can use the Google AMP Client ID API to work around this.
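If I remember the option names correctly, opting in on the non-AMP pages is a one-line change in the tracking snippet (the property ID below is a placeholder, and the AMP pages need the matching opt-in in their amp-analytics config as described in Google's docs):

```js
// analytics.js: opt the regular (non-AMP) pages into the AMP Client ID API
// so AMP and non-AMP visits to the same property can be stitched into one user.
ga('create', 'UA-XXXXXX-Y', 'auto', { useAmpClientId: true }); // placeholder property ID
ga('send', 'pageview');

// gtag.js equivalent:
// gtag('config', 'UA-XXXXXX-Y', { 'use_amp_client_id': true });
```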
Here are the two pages that the Google Search Console shows 2 Critical Errors for:
1. https://www.hillwebcreations.com/fix-product-markup-errors-avoid-google-manual-action/amp/
2. https://www.hillwebcreations.com/google-image-guidelines/amp/
Both show the same 2 issues (out of 80+ posts, which all use the exact same AMPforWP plugin, these are the only two with problems):
Details:
(1) The mandatory tag 'amphtml engine v0.js script' is missing or incorrect.
(2) The mandatory tag 'link rel=canonical' is missing or incorrect.
But then when I run the links through the AMP Test, both say "Valid AMP page. Page is eligible for AMP search features in Google search results".
https://search.google.com/test/amp?utm_source=wmx&utm_medium=link&utm_campaign=wmx-agg&id=KWGeOzUCXJ7fBP9azfDLIw
https://search.google.com/test/amp?utm_source=wmx&utm_medium=link&utm_campaign=wmx-agg&id=Josb4P-x6poP3B_96qRJhw
And oddly, one of the URLs has a strange "v" added to the end. Where does that come from?
https://www.hillwebcreations.com/fix-product-markup-errors-avoid-google-manual-action/amp/v/
Thanks in advance for any help to sort this out.
You may refer to this thread regarding your issue: The mandatory tag '%1' is missing or incorrect. It might be a conflict with another plugin that's modifying the output of the AMP pages. The best option is to disable that plugin for any URLs that end in /amp/. Based on this documentation, the fix is to add (or correct) the mandatory HTML tags.
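For reference, the two tags the errors refer to are part of the standard AMP boilerplate; in the page's <head> they look roughly like this (the canonical URL is just a placeholder):

```html
<head>
  <!-- (1) the mandatory AMP runtime script -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- (2) the mandatory canonical link, pointing at the non-AMP version of the page -->
  <link rel="canonical" href="https://www.example.com/my-post/">
</head>
```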
It sounds like Google Search Console has a different copy of your web page than the AMP Test tool. Google Search Console doesn't fetch your page live; it uses Google's last crawled version. It is possible that the page has changed since then, and this issue will resolve itself after some crawling time has passed.
Try refreshing the version that Google has cached in the AMP Cache by using the update-ping URL (include the full path to the page after the domain):
https://cdn.ampproject.org/update-ping/c/s/example.com/
See details here: https://developers.google.com/amp/cache/update-cache
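For example, for the first URL above the update-ping request would look something like this (the /s/ segment assumes the page is served over HTTPS):

```
curl -I "https://cdn.ampproject.org/update-ping/c/s/www.hillwebcreations.com/fix-product-markup-errors-avoid-google-manual-action/amp/"
```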
Is there a tool or a solution that automatically checks whether GTM (Google Tag Manager) tags are working properly on a page?
I don't need it to do anything else except retrieve the set of tags and tell me on which URLs there is a problem with a tag.
I can only find manual validation tools, and I would need to cover a large number of tags, so automation would be most helpful.
There are a few paid services that I can think of:
Observepoint
Tag Inspector
I'm sure there are others, but those are the ones I've used.
You can also use GTM's error tracking to log client-side JS errors. It doesn't really check which tags are firing, but it will let you know when your JavaScript is having problems.
Good blog post here: Using Google Tag Manager to log JavaScript errors in Google Analytics
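A rough sketch of that approach (the 'jsError' event name is just illustrative; on the GTM side you would pair it with a Custom Event trigger and a GA event tag):

```js
// Push uncaught JavaScript errors into the GTM dataLayer.
// A Custom Event trigger on "jsError" plus a GA event tag then forwards them to Analytics.
window.dataLayer = window.dataLayer || [];

window.addEventListener('error', function (event) {
  window.dataLayer.push({
    event: 'jsError',            // hypothetical event name for the GTM trigger
    errorMessage: event.message,
    errorUrl: event.filename,
    errorLine: event.lineno
  });
});
```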
What people are doing is basically taking the UA-XXXXXX code that you normally get with Analytics and generating hits against it. This is skewing my Analytics stats. On top of that, in Google Webmaster Tools, it's also causing this:
It looks like these pages, which carry my code (or at least the generated code), are somehow making Google Webmaster Tools think I have lots of 404s. This can't possibly be good for my rankings.
Anyone know if there is anything you can do to stop this?
Try making an async call from your server side using cURL. That way you will never expose your GA code.
I have not implemented it myself, but in theory it should work.
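A sketch of the idea, shown with Node.js rather than PHP cURL but following the same principle: the page calls your own endpoint, and the server forwards the hit to the GA Measurement Protocol, so the tracking ID never appears in the page source (the /track route and the ID are placeholders):

```js
// Node.js/Express sketch: proxy pageviews through your own server so the
// GA tracking ID stays server-side and is never exposed in the HTML.
const express = require('express');
const https = require('https');
const querystring = require('querystring');

const app = express();
const GA_TRACKING_ID = 'UA-XXXXXX-Y'; // placeholder: your real ID, kept on the server

app.get('/track', (req, res) => {
  const params = querystring.stringify({
    v: '1',                      // Measurement Protocol version
    tid: GA_TRACKING_ID,
    cid: req.query.cid || '555', // client ID supplied by the page (or a fallback)
    t: 'pageview',
    dp: req.query.page || '/'    // document path being tracked
  });
  // Fire-and-forget call to Google Analytics; respond to the browser immediately.
  https.get('https://www.google-analytics.com/collect?' + params, () => {});
  res.status(204).end();
});

app.listen(3000);
```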
Since you can filter by custom dimensions, you can set a "token" in a custom dimension on every page and, in your view settings, filter out any traffic that does not include the token.
Obviously this will not help against people who copy the code from your website (unless you also implement shahmanthan9s suggestion, which is a lot of work but will give you cleaner data), but it will work against drive-by shooters who randomly select UA IDs to send data to (which is the situation you refer to in your comment).
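A sketch of the token idea with analytics.js, assuming you have created a hit-scoped custom dimension at index 1 (the index, the token value, and the property ID are all placeholders):

```js
// Set a shared "token" in a custom dimension on every legitimate page, then add a
// view filter in GA that excludes any hit where dimension1 does not equal this value.
ga('create', 'UA-XXXXXX-Y', 'auto');
ga('set', 'dimension1', 'my-site-token-2a9f');
ga('send', 'pageview');
```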
I'm doing custom-rolled view tracking on my website, and I just realized that I totally forgot about search bots hitting the pages. How do I filter that traffic out of my view tracking?
Look at the user agents. It might seem logical to blacklist, that is, filter out all requests whose user-agent string contains "Googlebot" or another known search engine bot, but there are so many of them that it could well be easiest to just whitelist: only log visitors using a known browser.
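As a rough sketch, a server-side whitelist check might look something like this (the browser and bot patterns are deliberately crude and only illustrative):

```js
// Count the hit only if the user agent looks like a known browser family,
// rather than trying to enumerate every possible bot.
function isLikelyHuman(userAgent) {
  const knownBrowsers = /(Chrome|Firefox|Safari|Edg|Opera|MSIE|Trident)/i;
  const knownBots = /(bot|crawler|spider|slurp)/i; // quick extra sanity check
  return knownBrowsers.test(userAgent || '') && !knownBots.test(userAgent || '');
}

// e.g. in a request handler:
// if (isLikelyHuman(req.headers['user-agent'])) recordPageView(req.path);
```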
Another approach would be to use some JavaScript to do the actual logging (like Google Analytics does). Bots won't load the JS and so won't count toward your statistics. You can also do a lot more detailed logging this way because you can see exactly (down to the pixel - if you want) which links were clicked.
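A minimal sketch of that approach: a small script on the page sends a beacon to your own tracking endpoint, which bots that don't execute JavaScript will never hit (the /track-view endpoint is hypothetical):

```js
// Client-side view tracking: bots that don't run JavaScript never send this request.
document.addEventListener('DOMContentLoaded', function () {
  navigator.sendBeacon('/track-view', JSON.stringify({
    page: location.pathname,
    referrer: document.referrer
  }));
});
```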
You can check the user agent; there is a nice list here.
Or you could cross-check with the hits on robots.txt, since all the spiders should read that first and users usually don't.