Track ad link clicks but maintain SEO-friendly links? - asp.net

I have a web site that spits out links to third party sites. Now these third parties want MY site to track their clicks. How do I do this without ruining the SEO-friendly nature of a plain link?
Currently an ad link is just a plain anchor (the markup got stripped here; reconstructed from the redirect example below):
<a href="http://adsiteA.com">Come Visit Site A!</a>
I can easily change the links to something like this:
http://mysite.com/clicktracker.aspx?redirect=adsiteA.com
But won't that kill any search engine benefits of linking to their site? If not, I'll happily do it this way... What are my other options? An onmousedown script that hijacks the click and does a postback then redirect?

Do your third party sites want you to report on all the bots and spiders that have crawled your site and followed the links, or just "real" people?
If it's the latter, you could do something along the lines of what Google does for its search results.
Basically, you render the link out normally but add an onmousedown handler to it, so that a spider that doesn't use a mouse follows the standard link, while a normal browser fires the JS event first.
What you would end up with is something like this:
<a onmousedown="return trackMe(this)" href="http://example.com/">Come Visit Site A!</a>
And the trackMe method is then performing the redirect to the tracking page, which then issues a 302 redirect to the third party site.
You'd obviously want to check how this works for users navigating via the keyboard or similar (i.e. using Space or Return to follow the links).
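A minimal sketch of what trackMe might look like, reusing the clicktracker.aspx page from the question (the names are illustrative, not a definitive implementation):

function trackMe(link) {
  // Point the link at the tracking page just before the browser
  // follows it; the tracker logs the click and 302s to the ad site.
  // Spiders that don't fire mouse events still see the original href.
  link.href = '/clicktracker.aspx?redirect=' + encodeURIComponent(link.href);
  return true; // let the default navigation continue
}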

If they're paid links, Google says they're not supposed to benefit your advertiser's PageRank. (In fact you could get penalized for trying to subvert this)
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=66736
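In practice, complying means marking any paid or tracked link so it passes no PageRank, e.g. with rel="nofollow" (reusing the example tracker URL from the question):

<a href="http://mysite.com/clicktracker.aspx?redirect=adsiteA.com" rel="nofollow">Come Visit Site A!</a>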

Related

Google Analytics not tracking Google Ads and UTMs clicks correctly

I've identified a discrepancy between Google Ads clicks and Analytics sessions in Paid Search (about twice as many clicks as sessions). So I contacted Google Ads support and, after a long conversation, they sent me an email saying that my website structure uses redirections that make it lose parameters, and that I had to contact a developer to solve the problem because they don't give assistance on it. What they told me to tell the developer, exactly, was:
Loss of parameters by redirection
The website trendotrends.com is not holding navigation parameters because of the structure in which it was developed.
To verify this redirection, simply replicate the following steps: I accessed the link https://trendotrends.com/products/running-shoes?variant=15320930779194. After the site fully loaded, I added the &gclid=Tester123 parameter to the URL in the browser (so the final URL was https://trendotrends.com/products/running-shoes?variant=15320930779194&gclid=Tester123) and hit Enter. To tell whether there is a redirect: the normal behavior would be for the URL to remain the same (with &gclid=Tester123 at the end), but in this case the parameter disappears (and hence the attribution). This link was just an example; the same can be verified on several other products of the site.
They also said I can't use manual tagging (UTMs) instead of automatic tagging in Google Ads because those redirections are also going to spoil the UTMs.
I don't use any redirections on my website, and I have also tested with UTMs; there's a discrepancy in the Google Analytics data for those as well.
But before I contact a developer and invest in this fix, I would like to know: has anyone else experienced this? Does Google's answer fit this problem? And is there a way to fix it without being an expert?
Thanks in advance.
The issue here isn't really that there's a redirect (301); it's a state change. There is JavaScript on the page that essentially rewrites the URL before the GA code can parse it.
Are you able to change to a different theme and test if this happens with that theme?
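If you want to confirm that before paying anyone, one cheap diagnostic (a sketch, assuming the theme rewrites the URL via the History API) is to paste this into the browser console on a product page, then re-add &gclid=Tester123 and watch the log:

// Wrap the History API so any script that rewrites the URL is logged,
// letting you see exactly when the gclid parameter gets dropped.
['pushState', 'replaceState'].forEach(function (fn) {
  var original = history[fn];
  history[fn] = function (state, title, url) {
    console.log(fn + ' ->', url);
    return original.apply(history, arguments);
  };
});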

How can I track button clicks on a site that can't use Google Analytics?

I will start by saying that I have fair experience in HTML, but please keep the technical terms to a bare minimum. Pretend you're explaining it to a child. :-)
I used Wix.com to make my site (Wix is a place to easily design websites and has little HTML capability, since it's all based on being able to design a site with no HTML knowledge). You can add a Google Analytics tracking code, so I can see the number of clicks on the site, but that's about all. Apparently you can't change the code to be able to see button clicks on the site, etc. (or maybe you can?...)
This is what I need above anything else:
On the site are a few "sign up now" buttons. When someone clicks it, they go to a signup page on an external site. I need to be able to track who clicks these buttons and when.
Ideally it's all tracked within Google, but apparently that doesn't work on Wix.
Priorities:
Somehow it works with Google Analytics on Wix. It would have to be some way to track clicks with Analytics without putting code on the site itself; I don't know if or how that would work.
If not Google, is there a simple 3rd party analytics site that could track the number of clicks on these buttons to external pages? It would be best if I could get the IP addresses of the clickers as well.
This is fairly easy. Try customerlabs.co/google-analytics-event-tracking, which can help you send data about users when they click, using event tracking.
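For reference, if you can run any JavaScript on the page at all, a classic analytics.js event is a one-liner; this is a generic sketch (the selector, category, and label are made-up placeholders, not Wix-specific):

// Send a GA event when a "sign up now" button is clicked.
document.querySelector('.signup-button').addEventListener('click', function () {
  ga('send', 'event', 'Signup', 'click', 'sign up now button');
});

Note that Google Analytics won't report individual visitors' IP addresses, though.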
Wix supports 3 types of goal tracking for your site: Destination, Duration and Pages/Screens per session. Currently, Wix doesn't support an Event tracking goal.

Adding Google's standard search (not custom) to my website

My intention is to embed Google results in my website. I don't want to customise the domain/s on which the search is performed or anything, just a 'bog standard' Google search based on search parameters I pass it.
2 questions:
How do I display Google results on my website in response to search criteria entered into a textbox I have?
Is there any legislation I need to take into account?
I know my second question sounds rather strange but I'm aware that what I'm appearing to do here is present content driven by Google as though it's my own so want to avoid breaching any copyright or 'same-origin policy' type thing.
What I've Tried/Ways I Know I Could Achieve This
Screen scraping Google's response to a simple web request with the necessary query parameters (but seems a bit excessive)
Google's custom search (but I don't want to customise anything)
I've tagged this question for some more context.
As it is mentioned here, you can use your own XML parser to customize the display for your search users, with an HTTP request like this:
GET /search?q=bill+material&output=xml&client=test&site=operations
But it has a limit on the number of requests per day; 500 or 1000, I guess.
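A rough browser-side sketch of consuming that XML (the host is a placeholder, the query string is the one above, and the <R>/<T>/<U> element names assume the Search Appliance-style XML output):

// Fetch the XML results and print each result's title and URL.
fetch('https://search.example.com/search?q=bill+material&output=xml&client=test&site=operations')
  .then(function (response) { return response.text(); })
  .then(function (xmlText) {
    var doc = new DOMParser().parseFromString(xmlText, 'text/xml');
    // In this format each result is an <R> element with <T> (title)
    // and <U> (URL) children.
    doc.querySelectorAll('R').forEach(function (r) {
      console.log(r.querySelector('T').textContent,
                  r.querySelector('U').textContent);
    });
  });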
Custom Search can be configured to include the entire Web in its results:
1. From the Google Custom Search homepage, click New search engine.
2. In the Sites to Search box, enter at least one valid URL (e.g. www.google.com).
3. Click Create.
4. On the next page, under Optional next steps, click Edit.
5. On the Basics tab, under Search Preferences, select "Search the entire web but emphasize included sites".
6. Click Save Changes.
7. In the left-hand menu, under Control Panel, click Sites.
8. Delete the site you entered during the initial setup process.

Will Googlebot follow _escaped_fragment_ HTTP redirect?

I have an ajaxified website, and I want all my content to be crawlable. I have a photo gallery, which only loads the photo using ajax, without refreshing the whole page. My root URL is this:
http://mysite/photos
and whenever a photo thumbnail is clicked, it displays the photo and the hash becomes #!/photo/photoid/phototitle; when you search by criteria it becomes, for example, #!/photos/f-number/1.8/iso/640 for photos shot at f/1.8 and ISO 640 (more criteria can be appended the same way).
When a user opens a URL like http://mysite/photos/#!/photos/f-number/1.8/iso/640, the landing page uses JavaScript to redirect them to http://mysite/photos/f-number/1.8/iso/640 (without the hashbang), and that page in turn loads http://mysite/Dynamic/PhotoThumbnails.aspx?f-number=1.8&iso=640 via AJAX (yes, JavaScript looks at the location path and parses it according to that format). For the first case (a link to a photo itself rather than a search), the page loads the photo itself (along with some extra tables showing technical info about the photo), again using only JavaScript, from the URL http://mysite/Dynamic/RenderPhoto.aspx?ID=123 (where 123 is the ID of the photo).
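The landing-page redirect is essentially just this, simplified:

// If the URL arrives with a #! fragment, bounce to the clean path so
// the server-side pages can handle it.
if (window.location.hash.indexOf('#!') === 0) {
  window.location.replace(window.location.hash.slice(2));
}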
Given this information, my problem is simple: I am planning to (on my masterpage load event) redirect all requests with _escaped_fragment_s to the appropriate RenderPhoto or PhotoThumbnails page, by parsing the _escaped_fragment_ at server side. Will that work? My main concerns are:
Will Google follow the HTTP redirect? (301 or 302)
Will I get into any trouble (such as being removed from the index) because I am not showing exactly the same content to Google? (A browser will load a side navigation bar, all the fancy CSS styles, the visually-nice-looking page, etc., and then load the real content into a pane on that page, whereas Google will be getting only the "true" content. My base page, sidebar content, thumbnail list page, and photo renderer are COMPLETELY different pages which implement their OWN logic, so I cannot ever merge them.)
If there is a risk of being removed for the reasons above, what are my alternatives (no, I cannot merge the pages, it is NOT an option)? Do you recommend taking regular snapshots of the pages, caching them, and serving those to Googlebot?
Here is the current BETA of my website (yeah I know about lots of bugs), just to give you the idea how it will work: http://canpoyrazoglu.com/photos
I'm on ASP.NET 4.0, and using jQuery, if it helps.
A new answer to an old question: yes, it will follow it. However, you may end up with both the clean and the #! URLs in the index. Check this out (from the Google Developer Guides):
Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results.
This is the Google Developer Guide link:
https://developers.google.com/webmasters/ajax-crawling/docs/faq#redirects
Yes, I'm pretty sure it will follow a redirect. The Facebook open graph debugger does, and this blog post advocates implementing redirects: http://www.yearofmoo.com/2012/11/angularjs-and-seo.html

re-rendering a site within an iframe?

I want to make a site where the user can basically navigate the web from within an iframe. The catch is that I'd like to be able to have more control over what is rendered within the iframe. Specifically,
I'd like to be able to filter out images or text, disable forms etc.
I'd also like to be able to gather feedback such as what links the users clicked on.
Question 1:
Is this even possible using a standard back-end scripting language (like PHP), with HTML and JavaScript on the frontend?
Question 2:
Would I first need to grab the source of the site before it is rendered, then do whatever manipulation is necessary, and finally re-render it somehow?
Question 3:
Could somebody please explain the programming flow that would occur here (assuming it's possible)?
I think you would probably want to grab the source of the site (with server-side code) before rendering it; you might run into cross-site scripting issues if you try to do it with JavaScript. Your iframe would load a page like render.php and pass the address of the page to render as a querystring parameter. Then use regular expressions to find elements in the HTML that render.php downloads from that address, rewrite the HTML as necessary, and write it all out to the iframe.
Rewrite links so that the user is taken to a page you control and then redirected on to the target site, if you want to track where people are going. Example: a link in the page needs to go to google.com. You would send the user to tracker.php?target=http://google.com. You control tracker.php, so you can log each load of this page and then redirect the user to the target site.
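That tracker flow, sketched in Node.js rather than PHP just to make the moving parts concrete (the port and names are illustrative; render.php would rewrite each href to point at this endpoint):

// Minimal stand-in for tracker.php: log the click, then 302 onward.
const http = require('http');
const { parse } = require('url');

http.createServer(function (req, res) {
  const target = parse(req.url, true).query.target;
  console.log(new Date().toISOString(), 'click ->', target); // your "log"
  res.writeHead(302, { Location: target });
  res.end();
}).listen(8080);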
Update:
Another possible solution is to use Apache or other server to proxy the target website. There are modules like mod_proxy for this. There may also be modules that let you parse the HTML or you could roll your own.
I should point out that even the best solutions offered to your question will be somewhat brittle if you do not have full control over the target site. You will want to have lots of error handling or alerting.
You can have a look at this. It uses iframes really well, and you could maybe even use the library it provides.
