Occasionally-unreliable Date field of HTTP response - http

We're interested in the rough timezone of the user, give or take a few hours, and for various reasons we don't trust the user's device clock.
One cheap and easy thing we tried is to send a GET (a HEAD would work too) to (e.g.) google.com and look at the Date field of the response header. This is, per the HTTP standard, always in GMT. It may fall foul of caching, so we added a ?rng=XXXXXX to the end of the URL.
However, in some cases the Date field seems to be way off. Like, days off. The previous and next requests get the correct date. Maybe I just need to add more digits to my cache-busting rng field, but could something else be going on? Are there any flaws or downsides to this plan, given that we don't care about second-level accuracy?

Given our player install base, the frequency of these checks, and the weakness of our random suffix, we're now fairly sure that Google (or someone in between) was able to cache the responses successfully.
It seems like a good rule not to use HTTP headers for authoritative time unless you control the server and can send non-cacheable responses.
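For reference, the approach can be sketched in a few lines of Python using only the standard library. The URL and the size of the cache-buster are illustrative assumptions; `parsedate_to_datetime` handles the GMT format the standard mandates:

```python
import secrets
from email.utils import parsedate_to_datetime
from urllib.request import Request, urlopen

def parse_http_date(value):
    # HTTP Date headers are always GMT; this yields a timezone-aware datetime.
    return parsedate_to_datetime(value)

def fetch_server_time(url="https://www.google.com/"):
    # ~128-bit random suffix: far harder to cache than a short digit string.
    buster = secrets.token_urlsafe(16)
    req = Request(f"{url}?rng={buster}", method="HEAD")
    with urlopen(req, timeout=5) as resp:
        return parse_http_date(resp.headers["Date"])

# The parsing step on its own:
dt = parse_http_date("Tue, 15 Nov 1994 08:12:31 GMT")
```

Even with a long suffix, an intermediary that ignores query strings can still serve a stale response, which is why controlling the server remains the safer rule.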

Related

How do websites change content that has already been labeled with very long expiry dates?

What would happen if the server sets a very far away expiry date for a resource (e.g. 20 years later) and then after some new requirements emerge it decides that the resource must be changed? For example, the CSS files of some websites seem to have such long expiry dates. Is there another header that the website can send to cancel an exact previous cached resource?
No. Once you have set a very long expiry date, you have no way of knowing whether clients will keep using the cached copy right up until that date.
The way to “change” something generally falls into two categories:
Create a unique filename so it's versioned. For example styles.57ab85ca183.css instead of styles.css. Or perhaps styles.css?v=12345. This requires you to refer to the specific version in your code, so it does add a little complexity, but there are tools for this.
Have a short expiry date. This is generally what people do with the main page (as it’s not possible to change that location with a versioned path).
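The first option can be sketched in Python: hashing the file's contents into its name means any change automatically produces a new, never-before-cached URL. The 11-character digest length just mirrors the styles.57ab85ca183.css example above; real build tools do this for you:

```python
import hashlib

def versioned_name(name, content):
    # e.g. versioned_name("styles.css", css_bytes) -> "styles.<digest>.css"
    # Same content always yields the same name; any edit yields a new one.
    stem, _, suffix = name.rpartition(".")
    digest = hashlib.sha256(content).hexdigest()[:11]
    return f"{stem}.{digest}.{suffix}"

name = versioned_name("styles.css", b"body { color: red }")
```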

How to Sample Adobe Analytics (Omniture) Data

I can't find anything on the web about how to sample Adobe Analytics data. I need to integrate Adobe Analytics into a new website with a ton of traffic, so the stakeholders want to sample the data to avoid exorbitant server calls. I'm using DTM, but not sure if that will help or be a non-factor. Can anyone either point me to some documentation or give me some direction on how to do this?
Adobe Analytics does not have any built-in method for sampling data, neither on their end nor in the js code.
DTM doesn't offer anything like this either. It doesn't have any (exposed) mechanisms in place to evaluate all requests made to a given property (container); any rules that extend state beyond "hit" scope are cookie based.
Adobe Target does offer the ability to output code based on a % of traffic, so you can achieve sampling this way, but really, you're just trading one server call cost for another.
Basically, your only solution would be to create your own server-side framework for conditionally outputting the Adobe Analytics (or DTM) tag, to achieve sampling with Adobe Analytics.
Update:
@MichaelJohns' comment below:
We have a file that we use as a boot strap file to serve the DTM file.
What I think we are going to do is use some JS logic and cookies
around that to determine if a visitor should be served the DTM code.
Okay, well maybe I'm misunderstanding what your goal here is (but I don't think I am), but that's not going to work.
For example, if you only want to output tracking for 50% of visitors, how would you use javascript and cookies alone to achieve this? In order to know that you are only filtering out 50%, you need to know the total # of people in play. By themselves, javascript and cookies only know about ONE browser, ONE person. They have no way of knowing anything about the other visitors unless you have some sort of shared state between all of them, like keeping track of a count in a database server-side.
The best you can do solely with javascript and cookies is that you can basically flip a coin. In this example of 50%, basically you'd pick a random # between 1 and 100 and lower half gets no tracking, higher half gets tracking.
The problem with this is that it is possible for the pendulum to swing 100% one way or the other. It is the same principle as flipping a coin 100 times in a row: it is entirely possible that it can land on tails all 100 times.
In theory, the trend over time should show an overall average of 50/50, but this has a major flaw: you may go one month with a ton of traffic, another month with very little. Or you could have a week with very little traffic followed by one day with a lot. And you really have no idea how that's going to manifest over time; you can't know which way your pendulum is swinging unless you ARE actually recording 100% of the traffic to begin with. The effect of all this is that it will absolutely destroy your trended data, which is the core principle of making any kind of meaningful analysis.
So basically, if you really want to reliably output tracking to a % of traffic, you will need a mechanism in place that does in fact record 100% of traffic. If I were going to roll my own homebrewed "sampler", I would do this:
In either a flatfile or a database table I would have two columns, one representing "yes", one representing "no". And each time a request is made, I look for the cookie. If the cookie does NOT exist, I count this as a new visitor. Since it is a new visitor, I will increment one of those columns by 1.
Which one? It depends on what percent of traffic I am wanting to (not) track. In this example, we're doing a very simple 50/50 split, so really, all I need to do is increment whichever one is lower, and in the case that they are currently equal, I can pick one at random. If you want to do a more uneven split, e.g. 30% tracked, 70% not tracked, then the formula becomes a bit more complex. But that's a different topic for discussion (also, there are a lot of papers and documents and wikis out there, published by people a lot smarter than me, that can explain it a lot better than I can!).
Then, if it is fated that I incremented the "yes" column, I set the "track" cookie to "yes". Otherwise I set the "track" cookie to "no".
Then in my controller (or bootstrap, router, whatever all requests go through), I would look for the cookie called "track" and see if it has a value of "yes" or "no". If "yes" then I output the tracking script. If "no" then I do not.
So in summary, process would be:
Request is made
Look for cookie.
If cookie is not set, update database/flatfile incrementing either yes or no.
Set cookie with yes or no.
If cookie is set to yes, output tracking
If cookie is set to no, don't output tracking
Note: Depending on language/technology of your server, cookie won't actually be set until next request, so you may need to throw in logic to look for a returned value from db/flatfile update, then fallback to looking for cookie value in last 2 steps.
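The process above can be sketched in Python. The two counters live in an in-memory object purely for illustration (as described, they would go in a database or flat file), and a plain dict stands in for the visitor's cookies:

```python
import random

class Sampler:
    """Shared state for the yes/no split; in production these counters
    would live in a flat file or a database table."""

    def __init__(self, tracked_fraction=0.5):
        self.tracked_fraction = tracked_fraction
        self.yes = 0  # new visitors assigned tracking
        self.no = 0   # new visitors assigned no tracking

    def assign_new_visitor(self):
        total = self.yes + self.no
        if total == 0:
            choice = random.choice(["yes", "no"])
        elif self.yes / total < self.tracked_fraction:
            choice = "yes"  # below target: track this visitor
        else:
            choice = "no"
        if choice == "yes":
            self.yes += 1
        else:
            self.no += 1
        return choice

def handle_request(sampler, cookies):
    # New visitor (no cookie): assign a bucket and set the cookie.
    if "track" not in cookies:
        cookies["track"] = sampler.assign_new_visitor()
    # Returning visitor: the cookie decides.
    return cookies["track"] == "yes"  # True => output the tracking script
```

Because the counters are shared across all visitors, the split can never drift the way independent per-visitor coin flips can.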
Another (more general) note: In general, you should beware of sampling. It is true that some tracking tools (most notably Google Analytics) sample data. But the thing is, they initially record all of the data, and then use complex algorithms to sample from there, including excluding/exempting certain key metrics from being sampled (like purchases, goals, etc.).
Just think about that for a minute. Even if you take the time to set up a proper "sampler" as described above, you are basically throwing out the window the data proving that people are doing key things on your site - the important things that help you decide where to go as far as giving visitors a better experience. So now the only way around that is to start recording everything internally and factoring those things into whether or not to send the data to AA.
But all that aside.. Look, I will agree that hits are something to be concerned about on some level. I've worked with very, very large clients with effectively unlimited budgets, and even they worry about hit costs racking up.
But the bottom line is you are paying for an enterprise-level tool. If you are concerned about the cost from Adobe Analytics as far as your site traffic goes, maybe you should consider moving away from Adobe Analytics towards a different tool like GA, or some other tool that doesn't charge by the hit. Adobe Analytics is an enterprise-level tool that offers a lot more than most other tools, and it is priced accordingly. No offense, but IMO that's like leasing a Mercedes and then cheaping out on the quality of gasoline you use.

Efficiently webscraping a website without an api?

Considering that most languages have webscraping functionality either built in, or made by others, this is more of a general web-scraping question.
There is a site from which I would like to pull information on about 6 different pages. This normally would not be that bad; unfortunately, though, the information on these pages changes roughly every ten seconds, which could mean over 2,000 queries an hour (which is simply not okay). There is no API for the website I have in mind either. Is there any efficient way to get the amount of information I need without flooding them with requests, or am I out of luck?
At best, the site might return an HTTP 304 Not Modified response when you make a conditional request - indicating that you need not download the page body, as nothing has changed. If the site is set up to do so, this might help decrease bandwidth, but it would still require the same number of requests.
If there's a consistent update schedule, then at least you know when to make the requests - but you'll still have to ask (i.e.: make a request) to find out what information has changed.
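A minimal Python sketch of such a conditional request, assuming the site emits ETag or Last-Modified validators (the cache here is just a dict; a real poller would persist it between runs):

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen

def conditional_headers(cache_entry):
    # Send back whichever validators the previous response gave us.
    headers = {}
    if cache_entry.get("etag"):
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

def fetch_if_changed(url, cache_entry):
    """Return (changed, body). A 304 means the cached copy is still
    current: no body is transferred, but the request itself still counts."""
    req = Request(url, headers=conditional_headers(cache_entry))
    try:
        with urlopen(req, timeout=10) as resp:
            cache_entry["etag"] = resp.headers.get("ETag")
            cache_entry["last_modified"] = resp.headers.get("Last-Modified")
            return True, resp.read()
    except HTTPError as err:
        if err.code == 304:
            return False, None
        raise
```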

Advice needed on REST URL to be given to 3rd parties to access my site

Important: This question isn't actually really an ASP.NET question. Anyone who knows anything about URLs can answer it. I just happen to be using ASP.NET routing, so I included that detail.
In a nutshell my question is :
"What URL format should I design that I can give to external parties to get to a specific place on my site, and that will be future-proof? [I'm new to creating these 'REST' URLs.]"
I need an ASP.NET routing URL that will be given to a third party for tracking marketing campaigns. It is essentially a 'gateway' URL that redirects the user to a specific page on our site which may be the homepage, a special contest or a particular product.
In addition to trying to capture the referrer, I will need to receive a partnerId, a campaign number, and possibly other parameters. I want to provide a route to do this, BUT I want to get it right the first time because obviously I can't easily change it once it's being used externally.
How does something like this look?
routes.MapRoute(
    "3rd-party-campaign-route",
    "campaign/{destination}/{partnerid}/{campaignid}/{custom}",
    new
    {
        controller = "Campaign",
        action = "Redirect",
        custom = (string)null // optional so we need to set it null
    }
);
campaign : we possibly don't want the word 'campaign' in the actual link, since users will see it in the URL bar. I might change this to something cryptic like 'c'.
destination : dictates which page on our site the link will take the user to. For instance PR to direct the user to products page.
partnerid : the ID for the company that we've assigned - such as SO for Stack overflow.
campaignid : campaign id such as 123 - unique to each partner. I have realized that I think I'd prefer for the 3rd-party company to be able to manage the campaign ids themselves rather than us providing a website to 'create a campaign'. I'm not completely sure about this yet though.
custom : custom data (optional). I can add further custom data parameters without breaking existing URLs.
Note: the reason I have 'destination' is because the campaign ID is decided upon by the client, so they need to also tell us where the destination of that campaign is. Alternatively they could 'register' a campaign with us. That may be a better solution to avoid people putting in random campaign IDs, but I'm not overly concerned about that, and I think this system gives more flexibility.
In addition we want to know perhaps which image they used to link to us (so we can track which banner works best). I THINK this is a candidate for a new campaignid as opposed to a custom data field, but I'm not sure.
Currently I am using a very primitive URL such as http://example.com?cid=123. In this case the campaign ID needs to be issued to the third party, and it just isn't a very flexible system. I want to move immediately to a new system for new clients.
Any thoughts on future-proofing this system? What may I have missed? I know I can always add new formats, but I want to use this format as much as possible, if that is a good idea.
This URL:
"campaign/{destination}/{partnerid}/{campaignid}/{custom}",
...doesn't look like a resource to me, it looks like a remote method call. There is a lot of business logic here which is likely to change in the future. Also, it's complicated. My gut instinct when designing URLs is that simpler is generally better. This goes double when you are handing the URL to an external partner.
Uniform Resource Locators are supposed to specify, well, resources. The destination is certainly a resource (but more on this in a moment), and I think you could consider the campaign a resource. The partner is not a resource you serve. Custom is certainly not a resource, as it's entirely undefined.
I hear what you're saying about not wanting to have to tell the partners to "create a campaign," but consider that you're likely to eventually have to go down this road anyway. As soon as the campaign has any properties other than the partner identifier, you pretty much have to do this.
So my first two conclusions are that you should probably get rid of the partner ID and derive it from the campaign. Get rid of custom, too, and use query string parameters instead, should it be necessary. It is appropriate to use query string parameters to specify how to return a resource (as opposed to the identity of the resource).
Removing those yields:
"campaign/{destination}/{campaignid}",
OK, that's simpler, but it still doesn't look right. What's destination doing in between campaign and campaign ID? One approach would be to rearrange things:
"campaign/{campaignid}/{destination}",
Another would be to use Astoria-style indexing:
"campaign({campaignid})/{destination}",
For some reason, this looks odd to a lot of people, but it's entirely legal. Feel free to use other legal characters to separate campaign from the ID; the point here is that a / is not the only choice, and may not be the appropriate choice.
However...
One question we haven't covered yet is what should happen if/when the user submits a valid destination, but an invalid campaign or partner ID. If the correct response is that the user should see an error, then all of the above is still valid. If, on the other hand, the correct response is that the user should be silently taken to the destination page anyway, then the campaign ID is really a query string parameter, not a part of the resource. Perhaps some partners wouldn't like being given a URL with a question mark in it, but from a purely REST point of view, I think that's the right approach, if the campaign ID's validity does not determine where the user ends up. In this case, the URL would be:
"campaign/{destination}",
...and you would add a query string parameter with the campaign ID.
I realize that I haven't given you a definite answer to your question. The trouble is that most of this rests on business considerations which you are probably aware of, but I'm certainly not. So I'm more trying to cover the philosophy of a REST-ful URL, rather than attempting to explain your business to you. :)
I think the URL rewriting is getting out of hand a little bit lately. Not everything belongs to the URL. After all, a URL is supposed to describe a resource that can be searched for, discovered or manipulated and it seems to me that at least the partner ID and the custom fields from above are not part of the resource.
Not to mention that at some point you would like to actually keep the partner ID constant across multiple campaigns, and that means that it is orthogonal to the particular places they need to visit. If you keep these as parameters, you will allow your partners to uniformly access multiple resources on your website, while still reliably identifying themselves, so you can track their participation in any of your campaigns.
It looks like you've covered all of your bases. The only suggestion I have is to change
{custom}
to
{*custom}
That way, if you ever need to accept further parameters, you don't have to take the chance that old URLs will get a 404. For example:
If you have a URL that looks like:
campaign/PR/SO/123
and you decide in the future that you would like to accept a fourth and fifth parameter:
campaign/PR/SO/123/blah/foo
then the first URL will still be valid, because you're using a wildcard character in {*custom}. "blah/foo" would be passed as a string to your action. To get those extra two parameters, you would simply split the custom argument in your action by '/'. Add some friendly error handling if they don't exist and you've successfully changed the amount of information you can receive with a campaign URL without completely breaking URLs already in the wild.
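The split-with-fallback step is language-agnostic; here is a small Python sketch of the idea (the answer's code is ASP.NET, so this only illustrates parsing the wildcard tail):

```python
def parse_custom(custom, expected=2):
    # "blah/foo" -> ["blah", "foo"]; older URLs with fewer (or no)
    # segments are padded with None instead of breaking.
    parts = custom.split("/") if custom else []
    return (parts + [None] * expected)[:expected]
```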
Why not use URL encoded variables instead of routes? They're a lot more flexible - you can add any new features in the future while still maintaining 100% backwards compatibility. Admittedly, it's a little more trouble to type manually, but if there's all those parameters anyway, it's already no picnic.
http://mysite.com/page?campaign=1&dest=products&pid=15&cid=25
To me, this is much more indicative of what is really going on. Using paths implies that a resource exists at that location. But really you're just providing a web service with various parameters, and this model captures that much more clearly. And in the future, you can add more parameters effortlessly. You can also default parameters if they are missing without messing anything up.
Not sure of the code in ASP, but it should be trivial to implement.
I think I'd look at doing it the way that SO does its questions.
"campaign/{campaign-id}/friendly-name-of-campaign"
Create a mapping in your database when the campaign is created that associates all the data you need with an automatically generated id. The friendly name could be assigned basically the same way as a question is on SO -- by the user -- but you could also have an approval process that makes sure that it meets your requirements and is distinct from any existing campaign names. Your tracking company can track by the id and you can correlate that with your associated data with a simple look up.
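A sketch of the friendly-name step in Python; the slug rules here (lowercase, alphanumerics, hyphens) are an assumption modeled on how SO formats question URLs:

```python
import re

def slugify(name):
    # "Big Summer Launch!" -> "big-summer-launch"
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def campaign_url(campaign_id, friendly_name):
    # The id does the lookup; the slug exists only for human readability.
    return f"campaign/{campaign_id}/{slugify(friendly_name)}"
```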
What you have looks good for your needs. The other posts here have good points, but they may not be suitable for you. One thing that you could consider for future-proofing your links is to put a version number somewhere in there.
"campaign/{version}/{destination}/{partnerid}/{campaignid}/{custom}"
This way if you decide to completely change your format you can up the version to 2.0 (or whatever) and still keep track of the old links coming in.
I would do
/c/{destination}/{partnerid}/{campaignid}/?customvar=s
You should think about the hierarchy of the first parameters; you already have that managed quite well. Path segments should only be used where there is a hierarchy.
From your description, destination seems to be the broadest parameter, partnerid only works within a destination, and campaignid is specific to a partner.
When you really need to add custom parameters I would go for query variables (they are not forbidden in REST), because these are not part of the hierarchy.
You also shouldn't try to be too RESTful here. After all, it's for a campaign and for redirecting to a final resource. So the URL you want to design here is not really a specific resource in the terms of REST.
Create a URL called http://mysite.com/gateway
Return an HTML form, tell your partners to fill in the form and POST it. Redirect based on the form values.
You could easily provide your partners with the javascript to do the GET and POST. Should be trivial.
The most important thing I have learned about REST URLs, which is usually buried deep in some book or article:
The URL should point to a resource, and the following ?querystring should carry all the scoping information needed. Don't mix those two, or you will have a design that's very hard to work with.
Other than that, I fully agree with Craig Stuntz.

How to decode google gclids

Now, I realise the initial response to this is likely to be "you can't" or "use analytics", but I'll continue in the hope that someone has more insight than that.
Google AdWords with "autotagging" appends a "gclid" (presumably "Google click ID") to the link that sends you to the advertised site. It appears in the web log since it's a query parameter, and it's used by Analytics to tie that visit to the ad/campaign.
What I would like to do is to extract any useful information from the gclid in order to do our own analysis on our traffic. The reasons for this are:
Stats are imperfect, but if we are collating them, we know exactly what assumptions we have made, and how they were calculated.
We can tie the data to the rest of our data and produce far more accurate stats wrt conversion rate.
We don't have to rely on javascript for conversions.
Now it is clear that the gclid is base64 encoded (or some close variant), and some parts of it vary more than others. Beyond that, I haven't been able to determine what any of it relates to.
Does anybody have any insight into how I might approach decoding this, or has anybody already related gclids back to campaigns or even accounts?
I have spoken to a couple of people at google, and despite their "don't be evil" motto, they were completely unwilling to discuss the possibility of divulging this information, even under an NDA. It seems they like the monopoly they have over our web stats.
By far the easiest solution is to manually tag your links with Google Analytics campaign tracking parameters (utm_source, utm_campaign, utm_medium, etc.) and then pull out that data.
The gclid is dependent on more than just the adwords account/campaign/etc. If you click on the same adwords ad twice, it could give you different gclids, because there's all sorts of session and cost data associated with that particular click as well.
Gclid is probably not 100% random, true, but I'd be very surprised and concerned if it were possible to extract all your Adwords data from that number. That would be a HUGE security flaw (i.e. an arbitrary user could view your Adwords data). More likely, a pseudo-random gclid is generated with every impression, and if that ad is clicked on, the gclid is logged in Adwords (otherwise it's thrown out). Analytics then uses that number to reconcile the data with Adwords after the fact. Other than that, there's no intrinsic value in the gclid number itself.
In regards to your last point, attempting to crack or reverse-engineer this information is explicitly forbidden in both the Google Analytics and Google Adwords Terms of Service, and is grounds for a permanent ban. Additionally, the TOS that you agreed to when signing up for these services says that it is not your data to use in any way you feel like. Google is providing a free service, so there are strings attached. If you don't like not having complete control over your data, then there are plenty of other solutions out there. However, you will pay a premium for that kind of control.
Google makes nearly all their money from selling ads. Adwords is their biggest money-making product. They're not going to give you confidential information about how it works. They don't know who you are, or what you're going to do with that information. It doesn't matter if you sign an NDA and they have legal recourse to sue you; if you give away that information to a competitor, your life isn't worth enough to pay back the money you will have lost them.
Sorry to break it to you, but "Don't be Evil" or not, Google is a business, not a charity. They didn't become one of the most successful companies in the world by giving away their search algorithm to the first guy who asked for it.
The gclid parameter is encoded in Protocol Buffers, and then in a variant of Base64.
See this guide to decoding the gclid and interpreting it, including an (Apache-licensed) PHP function you can use.
There are basically 3 parameters encoded inside it, one of which is a timestamp. The other 2 as yet are not known.
As far as understanding what these other parameters mean—it may be helpful to compare it to the ei parameter, which is encoded in an extremely similar way (basically Protocol Buffers with the keys stripped out). The ei parameter also has a timestamp, with what seem to be microseconds, and 2 other integers.
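Given the description above (URL-safe base64 around protocol buffer varints), the decoding step can be sketched in Python. The wire-format parsing is standard protobuf; which field numbers carry the timestamp and the two unknown integers is not established here, so the example payload used below is synthetic, not a real gclid:

```python
import base64

def websafe_b64decode(token):
    # gclids use URL-safe base64 with the trailing '=' padding stripped.
    return base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))

def _read_varint(data, i):
    # Protobuf varints: 7 payload bits per byte, high bit = "more follows".
    shift = value = 0
    while True:
        byte = data[i]
        i += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return value, i

def decode_varints(data):
    # Walk the wire format, collecting varint (wire type 0) fields only.
    fields, i = {}, 0
    while i < len(data):
        tag, i = _read_varint(data, i)
        field, wire_type = tag >> 3, tag & 7
        if wire_type != 0:
            break  # this sketch doesn't handle length-delimited fields
        fields[field], i = _read_varint(data, i)
    return fields
```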
FYI, I just posted a quick analysis of some gclid data from my sites in this post. There definitely is some structure to the gclid, but it is difficult to decipher.
I think you can get all the goodies linked to the gclid via Google's AdWords API. Specifically, you can query the Click Performance Report.
https://developers.google.com/adwords/api/docs/appendix/reports#click
I've been working on this problem at our company as well. We'd like to be able to get a better sense of what our AdWords are doing but we're frustrated with limitations in Analytics.
Our current solution is to look in the Apache access logs for GET requests using the regex:
.*[?&]gclid=([^$&]*)
If that exists, then we look at the referer string to get the keyword:
.*[?&]q=([^$&]*).*
An alternative option is to change your Apache web log to start logging the __utmz cookie that Google sets, which should have a piece for the keyword in utmctr. Google "__utmz cookie" and you should be able to find plenty of information.
How accurate is the referer string? Not 100%. Firewalls and security appliances will strip it out. But parsing it out yourself does give you more flexibility than Google Analytics. It would be a great feature to send the gclid to AdWords and get data back, but that feature does not look like it's available.
EDIT: Since I wrote this we've also created our own tags that are appended to each destination url as a request parameter. Each tag is just an md5 hash of the text, ad group, and campaign name. We grab it using regex from the access log and look it up in a SQL database.
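A Python version of the extraction; the character classes are tightened slightly from the Apache regexes above (`[^&\s]` stops at whitespace, so the trailing ` HTTP/1.1` of a request line isn't captured):

```python
import re
from urllib.parse import unquote_plus

GCLID_RE = re.compile(r"[?&]gclid=([^&\s]*)")
KEYWORD_RE = re.compile(r"[?&]q=([^&\s]*)")

def extract_click(request_line, referer):
    # Returns (gclid, keyword); either may be None if absent.
    gclid = GCLID_RE.search(request_line)
    keyword = KEYWORD_RE.search(referer or "")
    return (
        gclid.group(1) if gclid else None,
        unquote_plus(keyword.group(1)) if keyword else None,
    )
```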
This is a non-programmatic way to decode the GCLID parameter. Chances are you are simply trying to figure out the campaign, ad group, keyword, placement, ad that drove the click and conversion. To do this, you can upload the GCLID into AdWords as a separate conversion type and then segment by conversion type to drill down to the criteria that triggered the conversion. These steps:
In AdWords UI, go to Tools->Conversions->Add conversion with source "Import from clicks"
Visit the AdWords help topic about importing conversions https://support.google.com/adwords/answer/7014069 and create a bulk upload file with your GCLID values, assigning the conversions to your new "Import from clicks" conversion type
Upload the conversions into AdWords in Tools->Conversions->Conversion actions (Uploads) on left navigation
Go to campaigns tab, Segment->Conversions->Conversion name
Find your new conversion name in the segment list; this is where the conversion came from. Continue this same process on the ad groups and keywords tabs until you know the GCLID's originating criteria
Well, this is no answer, but the approach is similar to how you'd tackle any cryptography problem.
Possibility 1: They're just random, in which case, you're screwed. This is analogous to a one-time pad.
Possibility 2: They "mean" something. In that case, you have to control the environment.
Get a good database of them. Find gclids for your site, and others. Record all times that all clicks occur, and any other potentially useful data
Get cracking! As you have started already, regress your collected data against what you know, and see if you can find patterns using decryption techniques
Start scraping random gclid's, and see where they take you.
I wouldn't hold out much hope for this to be successful, but I do wish you luck!
Looks like my rep is weak, so I'll just post another answer rather than a comment.
This is not an answer, clearly. Just voicing some thoughts.
When you enable auto tagging in Adwords, the gclid params are not added to the destination URLs. Rather they are appended to the destination URLs at run time by the Google click tracking servers. So, one of two things is happening:
The click servers are storing the gclid along with Adwords entity identifiers so that Analytics can later look them up.
The gclid has the entity identifiers encoded in some way so that Analytics can decode them.
From a performance perspective it seems unlikely that Google would implement anything like option 1. Forcing Analytics to "join" the gclid to Adwords IDs seems exceptionally inefficient at scale.
A different approach is to simply look at the referrer data which will at least provide the keyword which was searched.
Here's a thought: Is there a chance the gclid is simply a cryptographic hash, a la bit.ly or some other URL shortener?
In which case the contents of the hashed text would be written to a database, and replaced with a unique id.
After all, the gclid is shortening a bunch of otherwise long text.
Take this example:
www.example.com?utm_source=google&utm_medium=cpc
Is converted to this:
www.example.com?gclid=XDF
just like a URL shortener.
One would need a substitution cipher in order to reverse-engineer the cryptographic hash... not an easy task: https://crypto.stackexchange.com/questions/300/reverse-engineering-a-hash
Maybe some deep digging into logs, looking for patterns, etc...
I agree with Ophir and Chris. My feeling is that it is purely a serial number / unique click ID, which only opens up its secrets when the Analytics and Adwords systems talk to each other behind the scenes.
Knowing this, I'd recommend looking at the referring URL and pulling as much as possible from this to use in your back end click tracking setup.
For example, I live in NZ, and am using Firefox. This is a search from the Firefox Google toolbar for "stack overflow":
http://www.google.co.nz/search?q=stack+overflow&ie=utf-8&oe=utf-8&aq=t&client=firefox-a&rlz=1R1GGLL_en-GB
You can see that: a) I'm using the .nz domain, b) my keyword was "stack+overflow", c) I'm running Firefox.
Finally, if you also stash the full landing page URL, you can store the GCLID, which will tell you the visitor came from paid search, whereas if it doesn't have a GCLID, then the user must have come from natural search (if auto-tagging is enabled, of course).
This would theoretically allow you to then search for the keyword in your campaign and figure out which ad group they came from. Knowing the creative would probably be impossible, though, unless you split-test your landing URLs or tag them somehow.
