How can we pass on referrer details to Adobe SiteCatalyst?

Our website is a vertical search engine and we refer a lot of traffic offsite to partners' sites.
We recently switched our website over to serve all traffic via HTTPS. We realised this might confuse some of our partners if they were looking at referrer stats and saw a drop in traffic attributed to us. Therefore, at the same time, we added the Content-Security-Policy: referrer origin header, and we can see that the referrer is correctly passed along by the browser.
Generally this is working fine, but we have had complaints from users of Adobe SiteCatalyst (previously Omniture) who are no longer able to attribute traffic as being referred from us. We don't have access to SiteCatalyst to test this out. How does SiteCatalyst track referral traffic, and is there a way to view all traffic split by different sources/referrers?

I don't know if this accounts for everything, since I don't have full context on either your end or your users' end, but here are some thoughts that might help.
By default, Adobe Analytics tracks referrer from document.referrer. This can be overridden by setting s.referrer.
In general, depending on how your site directs visitors to the other site and on browser security/privacy settings, document.referrer may or may not have a value. For example, Internet Explorer's default security/privacy settings suppress document.referrer on dynamically generated popup windows (e.g. window.open() calls).
So, and again, this is just speculation because I don't know the full context, you may need to work something out with your users, e.g. explicitly passing the referring URL as a query param to the target page and having your users populate s.referrer with it if it exists. Something along the lines of:
if (!document.referrer) {
  s.referrer = s.Util.getQueryParam('refURL');
}
Note: s.Util.getQueryParam is a utility function in the Adobe Analytics AppMeasurement library that returns the value of the specified query param, or an empty string if it doesn't exist. If your users are still using the legacy H code, they should use the s.getQueryParam plugin instead. Or use whatever home-brewed method of reading a query param from the URL, since JavaScript doesn't have a built-in function for it; a minimal sketch follows.
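If neither helper is available, a home-brewed reader could fill the gap. A minimal sketch, assuming the refURL param from above (getQueryParam here is our own illustrative function, not part of AppMeasurement; modern browsers could use URLSearchParams instead):

// Minimal sketch of a query-param reader.
function getQueryParam(name) {
  var match = new RegExp('[?&]' + name + '=([^&#]*)').exec(window.location.search);
  return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : '';
}

if (!document.referrer) {
  s.referrer = getQueryParam('refURL');
}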

Related

Website with https, Google Analytics for long time with http... can I change it?

My website's URL is "secured" with SSL (httpS://mywebsite.nl).
However, I found out that, for a long time, my Google Analytics property and view's "Default URL" has been set to the "non-secured" http://mywebsite.nl.
I have two questions:
Did I miss data because I used http instead of https in the property and view's Default URL?
Can I CHANGE the http to httpS (in the Google Analytics property/view) without problems, or do I lose historical data because of that? (This probably also depends on the answer to Q1...) Or should I ADD a new property and/or view with an https Default URL?
Thanks!
You didn't.
And you don't lose the historical data; feel free to change it.
That "Default URL" is there for your convenience; you can do anything with it. It's just what GA uses to form full URLs from page paths, instead of using the hostname dimension there.
Also, GA is gracious enough to warn you whenever you are about to make significant changes to your core data.

Generate GET Request with No User Agent

I have a website that has been experiencing errors because of null references due to poorly coded logic around the user agent. Basically, there has been a slew of incoming requests that contain no user agent, which leads to null reference exceptions in the user agent tracking (it contained a call to Request.UserAgent.ToLower()). I am correcting this logic to avoid the error condition. Since I'm certain these requests are coming from specialized tools and not ordinary users, I'm also blocking empty user agents via URL rewrite rules.
I need to test both of these changes. However, I can't seem to find a user agent spoofer that will let me generate a simple GET request with NO USER AGENT. All of the tools I have tried allow a custom agent string, but they won't let that string be left empty, and there are no options I can find to tell them to send no user agent.
So my question is, what tools are available, for a Windows-based system, that I can use to emulate a browser request with NO USER AGENT so that I can verify that my changes are working properly?
I believe that value is coming from the request headers. If so, just try Fiddler: go to the Composer tab. By default it adds a User-Agent header to the request, but when you delete it in the Composer, it disappears from the request.
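If a repeatable script is more convenient than a GUI tool, note that Node.js (which runs on Windows) does not add a User-Agent header to outgoing requests unless you set one. A minimal sketch, with the URL as a placeholder for your own test page:

var http = require('http');

// Node's http client sends no User-Agent header by default,
// so this GET arrives at the server with the header absent.
http.get('http://localhost/yourpage.aspx', function (res) {
  console.log('Status:', res.statusCode);
}).on('error', console.error);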

SSL: my session becomes null on first login

I use SSL on my e-commerce web site. At first, the browser always asks "Do you want to show this website's content?" on every page, and when I redirect to the My Cart page the browser shows a similar alert: "This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the safety of the entire webpage... Yes... No...". After I click Yes, all my session values become null. Do you have any suggestions for me?
The problem is that your secure page is accessing resources (scripts, images, etc.) from locations that are not secure. For example, if you reference a JavaScript file (say, jQuery) from a non-secure site (say, Google) then certain browsers (like IE) will display this message. You need to search through your references and find these; in other words, searching for src="http or something along those lines will pull up the non-secure references.
Depending on what you are referencing, you can move those items to your own site so that they are now "secure". Also, in some cases changing a reference from src="http to src="https can resolve the problem.
Once you resolve this alert, you can check again to see whether you still have session issues, as there could be other problems to address.
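As a quick way to spot such references at runtime, a console sketch along these lines (assuming a browser that supports querySelectorAll) lists the non-HTTPS src/href references on the current page:

// List elements whose src or href still points at plain http.
['src', 'href'].forEach(function (attr) {
  var nodes = document.querySelectorAll('[' + attr + '^="http:"]');
  for (var i = 0; i < nodes.length; i++) {
    console.log(nodes[i].tagName, nodes[i].getAttribute(attr));
  }
});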

HTTP Referrer Gotchas?

I need to ensure that my webpage is always within an iframe owned by a 3rd party. This third party refers to our landing page using src="../index.php".
Now my question is: if I use the referrer to ensure that the page was requested either by me or by the third party, and force a reload of the third-party site if not, are there any big gotchas I should be aware of?
For example, are there certain common browsers that don't follow the referrer rules?
Thank you.
Also, note that the header is spelled REFERER because it somehow got misspelled in the spec. That was my very first REFERER gotcha.
You can't use referrer to "ensure" that the webpage is always being called from somewhere else because of referrer spoofing.
Referrers are not required, so if a browser doesn't supply one you'll get yourself into an endless redirect loop. The referrer is effectively "voluntary", just like cookies, Java, and JavaScript.
However, you could keep a log of IP address and time last redirected. Prune the log of anything over 5 minutes old and never redirect more than once per 5 minutes. You should catch 99.9% of users out there but avoid an infinite redirect loop for the rest. The log cannot rely on anything in the browser (that's the original problem), so no cookie and no session; a simple two-column database table should suffice. A sketch of this follows.
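A minimal sketch of that throttle (shouldRedirect and FIVE_MINUTES are illustrative names; an in-memory Map stands in for the two-column table):

// Allow at most one redirect per IP per 5 minutes.
var FIVE_MINUTES = 5 * 60 * 1000;
var lastRedirect = new Map(); // stand-in for the (ip, last-redirect-time) table

function shouldRedirect(ip) {
  var now = Date.now();
  // Prune entries older than 5 minutes.
  lastRedirect.forEach(function (time, key) {
    if (now - time > FIVE_MINUTES) lastRedirect.delete(key);
  });
  if (lastRedirect.has(ip)) return false; // redirected within the last 5 minutes
  lastRedirect.set(ip, now);
  return true;
}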
Because of referrer manipulation, the only way you could really do this is to directly authorize the request.
You could restrict requests to a set of IP addresses, if you want to be lax, or require that the including client/system has an authentication cookie for requests shown in the iframe.
Even well-known formats may change...
Google apparently has changed its referrer URL format. From "An upcoming change to Google.com search referrals; Google Analytics unaffected" (April 14, 2009):
Starting this week, you may start seeing a new referring URL format for visitors coming from Google search result pages. Up to now, the usual referrer for clicks on search results for the term "flowers", for example, would be something like this:
http://www.google.com/search?hl=en&q=flowers&btnG=Google+Search
Now you will start seeing some referrer strings that look like this:
http://www.google.com/url?
sa=t&source=web&ct=res&cd=7
&url=http%3A%2F%2Fwww.example.com%2Fmypage.htm
&ei=0SjdSa-1N5O8M_qW8dQN&rct=j
&q=flowers
&usg=AFQjCNHJXSUh7Vw7oubPaO3tZOzz-F-u_w
&sig2=X8uCFh6IoPtnwmvGMULQfw
(See also Google is changing its referrer URLs from /search into /url. Any known issues?)
Be aware that Internet Explorer (all versions) specifically OMITS the HTTP referrer whenever a user navigates to a link as a result of JavaScript (bug report). For example:
function doSomething(url) {
  // save some data to the session
  // ...
  location.href = url; // IE will NOT pass the HTTP referrer on this navigation
}
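A workaround that is often suggested (worth verifying against the IE versions you care about) is to navigate via a real anchor click, which IE does treat as a referrer-passing navigation:

// Sketch: create a real link and click it so IE sends the Referer header.
function navigateWithReferrer(url) {
  var a = document.createElement('a');
  a.href = url;
  document.body.appendChild(a);
  a.click(); // anchors support click() in IE
}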

ASP.NET application exhibits strange behaviour through firewall

This problem has been solved thanks to your suggestions. See the bottom for details. Thanks very much for your help!
Our ASP.NET website is accessed from several specific and highly secure international locations. It has been operating fine, but we have added another client location which is exhibiting very strange behaviour.
In particular, when the user enters search criteria and clicks the search button the result list returns empty. It doesn't even show the '0 results returned' text, so it is as if the Repeater control did not bind at all. Similar behaviour appears in some, but not all, other parts of the site. The user is able to log in to the site fine and their profile information is displayed.
I have logged in to the site locally using exactly the same credentials as them and the site works well from here. We have gone through the steps carefully so I am confident it is not a user issue.
I bind the search results in the Page_Load of the search results page the first time it is loaded (the criteria are in the query string), i.e.
if (!IsPostBack) {
    BindResults();
}
I can replicate exactly the same behaviour locally by commenting out the BindResults() method call.
Does anybody know how the value of IsPostBack is calculated? Is it possible that their highly secure firewall setup would cause IsPostBack to always return true, even when it is a redirect from another page? That could be a red herring, as the problem might be elsewhere, but it does replicate the result exactly.
I have no access to the site, so troubleshooting is restricted to giving them instructions and asking for them to tell me the result.
Thanks for your time!
Appended info: The client is behind a Microsoft ISA 2006 firewall running default rules. The site has been added to the Internet Explorer trusted sites list and tried in Firefox and Google Chrome, all with the same result.
SOLUTION: The winner for me was the suggestion to use Fiddler. What an excellent tool that no web developer should be without. Using this I was able to strip various headers from the request until I reproduced the problem. There were actually two factors that caused this bug, as is so often the case with such confusing issues.
Factor one – Where possible the web application uses GZIP compression as supported by all major browsers. The firewall was stripping off the header that specifies GZIP decompression support (Accept-Encoding: gzip, deflate).
Factor two – A bug in my code meant that some processing was bypassed when the content was being sent uncompressed. This problem was not noticed before because the application is used by a limited audience, all of which supported GZIP decompression.
If they're at all tech-savvy, I would have them download Fiddler or something similar, capture the entire HTTP session, and then send you the saved session. Maybe something in there will stick out.
Meanwhile, see if you can get an install of ISA Server (an evaluation install, if you have to, or one from MSDN if you have or know anyone with a sub) and see if you can replicate it locally.
Is it possible the client has disabled JavaScript and it's not picking up the __EVENTTARGET form value?
It might be some sort of proxy which creates a GET request out of a given POST request...
I am not sure how the IsPostBack is calculated, but my guess would be that it checks the HTTP request to see if it's a POST or a GET...
Ohh, yeah. It's definitely NOT __EVENTTARGET, BTW...
I know this since Ra-Ajax does NOT pass any of those parameters to the server, and Ra-Ajax requests are still processed as IsPostBack requests...
Location, location, location. Check the user's culture. Normally that causes issues.
Could you create a test POST page that passes the same things your search page does, and in the Page_Load write back all of the posted values to make sure they are getting passed, particularly the __VIEWSTATE?
foreach (string key in Request.Form)
{
    Response.Write("<br>" + key + "=" + Request.Form[key]);
}
Then ask one of the users to forward back what they see on that test page.
EDIT: There is documentation that some firewalls can corrupt the VIEWSTATE and some methods to get around it: View State Overview
Check the IIS logs to see if the request even makes it to your server. The ISA setup might be caching the initial request and serving that up in the succeeding requests.
