I have written a page and need to test it locally.
How can I use Google's "Fetch as Google" feature in Webmaster Tools to see a development site served from my local machine?
(disclaimer: more of a comment than an answer)
This is an excellent question and there are amazingly few sources on the web for a solution.
Fetch as Googlebot - http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=158587
The Fetch as Googlebot tool lets you see a page as Googlebot sees it. This is particularly useful if you're troubleshooting a page's poor performance in search results. For example, if you use rich media files to display content, the page returned by the tool may not contain this content if Google can't crawl it effectively. You can choose to fetch a page as Google's regular web crawler sees it or, if you publish mobile content, as our mobile crawlers do.
I followed the link above and tried out User Agent Switcher but it doesn't accomplish what the asker is looking for. See this thread: chrispederick.com/forums/viewtopic.php?id=788
You can change the user agent settings to be the same as GoogleBot, for example, but I'm not sure if sites also change their appearance based on the headers the search bot sends. Changing the headers is beyond the scope of the extension, however.
And chrispederick.com/forums/viewtopic.php?id=259
Q: For example, if I put in GoogleBot, I'd like it to fully emulate Google's spider.
A: The User Agent Switcher has always been designed to be a simple, light-weight solution, so I'm not planning on adding anything like this.
In short, I don't think there is a solution. This would be a great opportunity for a Google app.
Do you mean you want to see how your site will react to the Google web crawler?
For this you could use Firefox with the User Agent Switcher add-on.
In order to test your localhost website with the official Google tools, you can use Ngrok as I described in this post: https://www.aymen-loukil.com/en/blog-en/how-to-test-localhost-website-with-google-seo-tools/
Fetch as Google can't be used directly with a non-verified domain in Google Search Console (Webmaster Tools). A trick to view it is to iframe your Ngrok URL in another, verified domain:
- You should have a website verified in Search Console
- Make an iframe that loads the Ngrok URL of your localhost webpage (a sketch follows below)
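A minimal sketch of that iframe page, assuming a hypothetical tunnel address (Ngrok assigns a new one each time you launch it):

<iframe src="https://abc123.ngrok.io/" width="100%" height="900"></iframe>

Put this on a page of the verified domain, then run Fetch as Google against that page; the crawler follows the iframe into your tunnelled localhost site.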
The Ngrok + Fetch as Google combination is great, but you will need to go through the verification process on the Google tools side each time you launch Ngrok.
In my case I just needed to check whether server-side rendering was working properly, so I went to Google Chrome's settings and disabled JavaScript:
Settings >> Advanced >> Content settings >> JavaScript >> Allowed (off)
This allowed me to check that the page was being rendered 100% on the server side (Next.js server-side rendering) and that no JavaScript rendering was running on the client side.
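The same check works without a browser: fetch the raw HTML, which never runs client-side JavaScript, and look for content you expect. The URL and search string here are placeholders:

$ curl -s https://example.com/some-article | grep "article headline"

If the text appears in the output, it was rendered on the server.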
My website is hosted on wix.com. Wix does not allow you to insert HTML code directly into a page of your web site. When I input HTML code, Wix inserts an iframe that is served from a different domain (filesusr.com). This iframe does not use Google Analytics tracking, so when the browser loads this iframe, GA believes my customer has "left" my web site and gone somewhere else. When the iframe loads, the original source of the traffic is lost.
From the research I've done, it seems this Wix feature does not work with GA traffic tracking, so there is no solution other than using a different hosting platform.
However, I'm sure you clever folk know otherwise!...
Right, Wix is notorious for being a "widget" based platform that does not play nice with custom code. However, the whole GA different-origin thing is such a common request that they implement the tracker directly themselves if you plug your GA ID into your site settings. Any reason you are not using this? - https://support.wix.com/en/article/adding-your-google-analytics-tracking-id-to-your-wix-site. They also claim to support other custom tracking snippets - make sure you are pasting it into the "Tracking & Analytics" section and not as a custom HTML widget.
If for some reason you can't or don't want to use the above methods, it used to be that you were just out of luck. There is a reason Wix is not as favored as other platforms by digital marketers who need to implement tracking code. However, if you are really determined, you could probably implement a very custom GA tracker, or any custom code, through their new feature called Corvid, which exposes internal APIs and extra coding features. How to do so is beyond the scope of this question, but the postMessage() method is the normal way to pass messages from a parent to a child (iframe) container. Or you could use wix-fetch, an internal version of the web fetch() API, to manually send a hit request to GA.
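As a rough illustration of the wix-fetch route, here is a minimal sketch that sends a pageview hit to the GA Measurement Protocol. It assumes a Universal Analytics property, and the tracking ID and client ID are placeholders:

// Wix (Corvid) page code; IDs below are placeholders
import { fetch } from 'wix-fetch';

export function sendPageview(path) {
    const params = new URLSearchParams({
        v: '1',               // Measurement Protocol version
        tid: 'UA-XXXXXXX-X',  // your GA tracking ID (placeholder)
        cid: '555',           // anonymous client ID (placeholder)
        t: 'pageview',        // hit type
        dp: path              // page path to record
    });
    return fetch('https://www.google-analytics.com/collect', {
        method: 'post',
        body: params.toString()
    });
}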
I'm having a few issues with making our site shareable on LinkedIn and I'm at a loss. The og: meta tags all look fine and the Facebook scraper picks them up fine, but the LinkedIn scraper does not, and the image etc. are not in a protected folder or anything like that.
When inspecting in the developer tools, the GET request to the url-preview?url= endpoint shows that the image etc. aren't there.
The image is less than 1 MB and all og: meta tags are obeyed. The only thing that may not be 100% right is the image ratio: it is not 1/4 or 4/1 (it's 2/1). But that is only a recommendation and not a hard and fast rule.
Does LinkedIn provide something similar to FB (https://developers.facebook.com/tools/debug/) where you can test the scraper and re-run it? Or is there another way to debug this? Any help appreciated.
https://www.hipla.co.uk is the page I'm trying to share.
cheers
It transpires LinkedIn doesn't offer a facility similar to FB's or Twitter's to test the og: meta tags and re-scrape the page. LinkedIn caches a page for 7 days and then scrapes it again. However, you can refresh the LinkedIn crawler's cache simply by appending GET params to the URL, i.e. https://www.hipla.co.uk?123.
I eventually figured out what our issue was. We were using a wildcard cert (multi-domain, so we could have a single SSL cert for multiple subdomains), which meant we had to set the server name in the Apache default-ssl.conf file, and we had a typo in it for the www instance. That gave the LinkedIn crawler an SSL error, which isn't debuggable (if that's a word) via LinkedIn, but it was spotted when we got an SSL error testing the Twitter metadata tags with the Twitter Card Validator. Hope this helps anyone else who has a typo in their SSL settings. Note that the SSL error was not visible in any browser; everything looked fine there.
I have a WordPress site with multiple plugins that has somehow, over time, ended up with the same Analytics tracking link (albeit in slightly different implementations) in a few places on the site.
I want to remove the extra ones so the site, touch wood, only uses a single tracking link throughout. Is there a website scanning tool or desktop application that will scan the site and help me find each location of this tracking link?
I used the Google Tag Assistant extension for Chrome.
I am having difficulty getting approved for AdSense. It seems there is not enough content, but I have many blog articles, no inappropriate content or copyright infringements, and I have the ad code in place within the footer.
I believe the issue may be caused by my site using client-side rendering (the Meteor JavaScript framework).
So this means that if I do:
$> curl http://www.dales-sports-media.com
I get mostly empty HTML (meta and html tags, but nothing in the body).
Sharing articles from my site to Facebook and Twitter seems to work fine.
Is it possible that Google's AdSense approval bot is unable to see the fully rendered page?
Has anyone successfully applied for an AdSense account with a Meteor web app?
Thanks,
Mick
What you need is Prerender, a service that renders and caches your page(s); bots are then served that version, so they get the full HTML body.
You should set up nginx in front of your Meteor app, so that nginx uses proxy_pass to forward traffic from port 80 to your Meteor app on, for example, localhost port 3000.
Then use this nginx config file as a guideline to set up Prerender: https://gist.github.com/thoop/8165802
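The core of that config, condensed into a sketch (the domain and token are placeholders, and the full gist has a longer crawler list plus static-file exclusions):

server {
    listen 80;
    server_name example.com;                           # placeholder domain

    location / {
        proxy_set_header X-Prerender-Token YOUR_TOKEN; # placeholder token
        set $prerender 0;
        # Known crawlers get the pre-rendered snapshot
        if ($http_user_agent ~* "googlebot|bingbot|twitterbot|facebookexternalhit|linkedinbot") {
            set $prerender 1;
        }
        resolver 8.8.8.8;
        if ($prerender = 1) {
            set $prerender "service.prerender.io";
            rewrite .* /$scheme://$host$request_uri? break;
            proxy_pass http://$prerender;
        }
        if ($prerender = 0) {
            # Everyone else goes straight to the Meteor app
            proxy_pass http://localhost:3000;
        }
    }
}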
If you're limited and can't install your own web server, make sure you've tried the spiderable package:
$ meteor add spiderable
I'm wondering whether it's possible to capture details from the web page a user previously visited, if my page was not linked from it.
What I am trying to achieve is to allow users to my site to find a page they like while browsing the web, and then navigate to a page on my site via a bookmark, which will add the URL (and possibly some other details like the page title) to a form which they can then submit to my site to add the page to a list of favourites there.
I am not really sure where to start looking for this. I wondered if I could use the HTTP referrer, but I think this may only work if there is a link to my page?
Alternatively, I am open to other suggestions as to how I could capture this data - a Firefox plugin? A page which users browse other sites in an iframe, with a skinny frame on top?
Thanks in advance for your suggestions.
Features like this are typically not allowed by browsers for security and privacy reasons. The iframe approach would work, but it's a common hacking technique, so it may well break or be flagged in the future.
The Firefox add-on is the best solution, but it requires users to install it manually.
Also, a bookmarklet could be used. While they are actively on the target page, the bookmarklet could send you the URL.
This example bookmarklet creates a TinyURL for the destination page. You could add it to your database or whatnot.
javascript:void(window.open('http://tinyurl.com/create.php?url='+encodeURIComponent(document.location.href)));
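In the same spirit, a sketch of a bookmarklet closer to what the question describes: it opens a (hypothetical) add-favourite form on your site with the current page's URL and title pre-filled. The endpoint and parameter names are placeholders:

javascript:void(window.open('https://example.com/favourites/new?url='+encodeURIComponent(document.location.href)+'&title='+encodeURIComponent(document.title)));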
If some other site links to yours and the user clicked that link to reach your site, you can access the referrer from the HTTP headers. How you get hold of the HTTP headers is language/framework specific; in .NET you would use Request.UrlReferrer, while other frameworks handle it differently.
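The client-side equivalent is document.referrer, which is empty when the user arrives via a bookmark or by typing the URL directly; that is exactly the limitation suspected in the question:

// Runs on your own page; referrer is an empty string for bookmark/direct visits
if (document.referrer) {
    console.log('User arrived from: ' + document.referrer);
}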
EDIT: After reading your question again, my guess is that what you're looking for is some sort of browser plugin. If I understand correctly, you want to give your clients the ability to bookmark a page, while they are on that page, which would somehow notify your site about what they're viewing. The cleanest way to achieve this would be a browser plugin. You can also do frame tricks, like the Digg bar.