Web Analytics and Content Reuse - tridion

We have created a button template with which an editor can associate a web analytics tag. The issue is that we would like to reuse this button component on the same page, but still be able to differentiate where on the page each instance is coming from.
Has anyone encountered this issue before? We're looking for some insight into the problem and perhaps some ways to solve it without modifying the template.
FYI, we are using Tridion 2009 SP1.
Thanks!
Updated with HTML
<a href="/security.jsp" onClick="trackCustomLink('tttt:p:apply-now','Link','onClick');">
<img src="/images/GICs/applynow-button.gif" border="0" alt="Apply Now" /></a>

Assuming your button is rendered with a component template, it seems like you just need to use the ordinal position of the component on the page. Can you specify what kind of templates you are using (VBScript, or modular with DWT or C#)? Based on that we may be able to give you some code samples.
Graham Bird has a good article about OrdinalPosition with VBScript at: http://www.grahambird.co.uk/2011/01/ordinalposition/
The idea behind this is that you add the ComponentPresentation.OrdinalPosition value to your analytics code, so each instance of the button on the page reports a distinct position.

Related

How to get the XPATH or CSS selector from dynamically loaded website to follow links?

This is a dynamically-loaded website https://www.gelbeseiten.de/suche/hotels/n%c3%bcrnberg.
I'm trying to follow every link from the results. I found //article[@class='mod mod-Treffer']/a to follow the search result links. But the problem is that this XPath works only for a couple of links; for the rest I can't find any selector, probably because the other links are loaded by JS. I'm not familiar with this kind of dynamic website, so I don't know how to get the selector from it. Any suggestions will be highly appreciated.
I will post this as an answer, without actually giving you the code, as it might help you more in the long term.
First, load that page in a browser with JavaScript disabled (you can disable JS in the browser directly, or use an extension like uBlock Origin, etc. - look it up).
You will notice that only the first 2 hotels load fully; the rest are loaded dynamically by JavaScript (which in this case is disabled). There are 13 hits for the //article[@class='mod mod-Treffer']/a selector, while there are more hotels on that page.
However, each hotel is wrapped in an <article> tag, and that tag has a data-realid="[...]" attribute. The URL for each hotel would be https://www.gelbeseiten.de/gsbiz/{data-realid}.
This is how you can get all those hotels' profile links.
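If you do want a starting point, a minimal Python sketch of that approach might look like the following (assuming requests and BeautifulSoup are acceptable; the data-realid attribute and URL pattern come from the answer above and have not been verified against the live site):

import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://www.gelbeseiten.de/suche/hotels/n%c3%bcrnberg"

html = requests.get(SEARCH_URL, headers={"User-Agent": "Mozilla/5.0"}).text
soup = BeautifulSoup(html, "html.parser")

# Each hit is wrapped in an <article ... data-realid="..."> element;
# build the profile URL described above from that attribute.
links = [
    "https://www.gelbeseiten.de/gsbiz/" + article["data-realid"]
    for article in soup.find_all("article", attrs={"data-realid": True})
]
print(links)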

Data being hidden and class regenerated when scraping web page using Beautiful Soup

I am trying to pull pricing data from a website, but each time the page is loaded, the class is regenerated to a different sequence of letters, and the price is hidden instead of showing as a number. Is there a technique that I can use to bypass this in any way? Thanks! Here is the HTML as it appears when I inspect the element:
<div class="zlgJQq">$</div>
<div class="qFwqmC hkVukg2 njGalW"> </div>
Your help would be much appreciated!
Perhaps that website is actively discouraging you from scraping their data. That would explain the apparently random class names. You might want to read their terms of use to be sure that it's OK to scrape their site.
However, if the raw HTML does not contain the price data but it is visible when the page is rendered, then it's likely that JavaScript is being used to insert the prices after the page has loaded. You could try enabling the developer tools in your browser and monitoring the network activity while the page is loading. That might reveal that the site is using dynamic Ajax queries to populate the price data, and you could then write code to interact with the Ajax resource directly.
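As a purely hypothetical sketch (the URL and JSON field names below are placeholders, not taken from any real site), calling such an Ajax resource directly in Python could look like this:

import requests

# Placeholder URL: substitute whatever XHR request you actually see in the
# browser's Network tab while the page is loading.
AJAX_URL = "https://www.example.com/api/product/12345/price"

response = requests.get(AJAX_URL, headers={"User-Agent": "Mozilla/5.0"})
response.raise_for_status()

data = response.json()      # assumes the endpoint returns JSON
print(data.get("price"))    # field name is a guess; inspect the real payload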
It's also possible that the price data is embedded somewhere in the HTML, possibly obfuscated, and then loaded dynamically by JavaScript.
That's just a couple of suggestions. You will need to analyse the site to see whether automated scraping is feasible. If you can let us know what website you're dealing with then someone might be able to suggest something more specific.

New UI SiteEdit Implementation

I have implemented New UI SiteEdit in Tridion 2011 SP1. When I create a page without components in it, I am able to edit the page. If I insert a component, I am no longer able to edit the page. Please help with this issue.
When changing a page in the New UI (Experience Manager, or XPM), the page is checked out. What you might be seeing for other users is expected behavior: other users should not be able to edit the page in the CME or within XPM.
You should also be restricted from editing the page's content even as the same user in a different session (e.g. viewing the page from another browser).
When editing the page with the same user and session, you should be able to add multiple components. The page is checked out. Editing content on the page should be "editing components," rather than the page itself.
Let us know if you're seeing something else.
This can be a result of a syntax error in your inline editing commands (i.e. the JSON syntax inside HTML comments). Normally you would use the OOTB building blocks that generate this for you; however, in some extreme scenarios, this syntax is written out by hand. I suspect that you may have the latter scenario. Verify your component and component field command syntax.

Facebook comments: CSS doesn't work

I'm developing a website and I decided to use Facebook comments to provide commenting functionality. But, unfortunately, I ran into some problems.
While trying to customize the look of a news page, e.g. http://buchman.pcspace.pl/aktualnosci/ept-snowfest-podsumowanie-czwartego-dnia.html, I am not able to apply any CSS: everything appears correctly in the HTML source, but the rendered view doesn't change.
What's wrong?
According to this blog post, the new fb:comments no longer supports custom CSS.
After long research I noticed that while loading the fb frame, it doesn't use the CSS given as a parameter.
That's why none of it works.

Host .NET app inside HTML web page

The main page of our website is HTML. The powers that be want to put an ASP.NET calendar on this main page. Is there any better way to accomplish this than to use an iframe?
Start with this...
<div id="calendar">
<a href="/calendar/">View our calendar</a>
</div>
Then use an AJAX request to replace the link with the actual calendar... works whether or not JavaScript is enabled / successfully fires!
Here is the jQuery way of getting the calendar...
$("#calendar").load("/ajax/calendar/");
I recommend that the first URL (the link to the calendar) point to a full page containing the calendar, and that the second URL (/ajax/calendar/) return just the HTML for the calendar, to make it faster and less bulky...
All URLs are fictitious and any resemblance to your real URLs is purely coincidental.
If you control IIS, you could just map .html to the ASP.NET handler and add your asp:Calendar wherever you want.
If you're able to use JavaScript, then I would recommend one of the plethora of JavaScript calendars out there.
http://jqueryui.com/demos/datepicker/
