I'm trying to run a multivariate test across 3 pages. Google's documentation says this should be fine as long as you use the same section names; it doesn't care much about the URL.
I have checked in Google Website Optimizer, and it says all my scripts are set up correctly.
The strange thing is that it DOES work: it replaces the content. But it never saves a cookie, so when you get to the next page, or even just refresh the same page, you might get a different variant, which obviously shouldn't happen.
I'm pulling my hair out here; any help would be appreciated.
I had the exact same problem. I've run several tests with GWO and never had this issue. However, with my latest MVT, at first I wasn't seeing GWO even set the cookie. Then, it started setting the cookie but was not able to read the cookie.
Although I am running a sitewide MVT, I am not running it across multiple domains. However, in order to get GWO to set and read the cookie correctly, I found that I needed to use some of the code required for sitewide tests that span multiple domains. Here is the Google article on cross-domain tracking:
http://www.google.com/support/websiteoptimizer/bin/answer.py?hl=en&answer=151978
According to the Google Website Optimizer API, GWO sets this property by default:
gwoTracker._setDomainName("auto");
For whatever reason, in my experience this was not happening, and making this call explicitly in my code fixed the problem! The call needs to be added to both your Tracking Script and your Conversion Script.
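For reference, here's a minimal sketch of where the call goes in the tracking script; the account number and test path below are placeholders for your own values, and the conversion script gets the same _setDomainName line:

var gwoTracker = _gat._getTracker("UA-XXXXXX-X"); // placeholder: your GWO account
gwoTracker._setDomainName("auto");                // the explicit call that fixed the cookie problem
gwoTracker._trackPageview("/0123456789/test");    // placeholder: your test's page path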
Again, I've never experienced this with any of my other GWO tests, so it was kind of weird to have it happen out of the blue, but explicitly making this call fixed the problem for me. Good luck.
Previously working code that downloads a CSV file from our site now fails. Chrome, Safari and Edge don't display anything helpful except "Blob Blocked", but Firefox shows a stack trace:
Uncaught TypeError: Location.href setter: Access to 'blob:http://oursite.test/7e283bab-e48c-a942-928c-fae0907fdc82' from script denied.
followed by a stack dump from googletagmanager.
This appears to be a fault in the Tag Manager code introduced in the last couple of weeks.
The fault appears in all browsers and is resolved immediately by commenting out Tag Manager. The problem was reported by a customer on the production system, and then reproduced both on staging and locally. The customer advised they had used the export function successfully 2 weeks ago.
The real question is: do Google maintain a public-facing issues log for things like Tag Manager?
This isn't really about GTM as a library; it's about poor user implementation. It's not up to Google to check for user-introduced conflicts with the rest of the site's functionality.
What you could do is go into GTM and see what has been released in the past two weeks. Inspect things and look for anything that could interfere with the site's functionality. At the same time, do the opposite: review all the front-end changes introduced during this time frame by the web-dev team.
The main thing to watch for is unclosured JS deployed in custom HTML tags. Junior GTM implementation specialists like to use the global scope, claiming global variable names, often after the page has loaded, thus overwriting the front-end's variables.
Sometimes people deploy minified, unclosured code to the DOM, chaotically claiming short variable names, to the same effect.
This is likely the easiest and most common way for GTM to break a front-end, though there are certainly many other ways; see the sketch below.
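As an illustration (the variables here are made up), this is the kind of custom HTML tag that clobbers the page's globals, followed by an IIFE-wrapped version that doesn't:

<script>
// Risky: `total` and `i` are implicit globals and will overwrite
// any page variables with the same names.
total = 0;
for (i = 1; i <= 3; i++) { total += i; }
</script>

<script>
// Safer: wrap the tag's code in an IIFE so nothing leaks out.
(function () {
  var total = 0;
  for (var i = 1; i <= 3; i++) { total += i; }
})();
</script>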
If this doesn't help, there's another easy way to debug it: make a new workspace from Default (or whatever is live), go into preview mode, and confirm that the issue still happens. Now start disabling the most recently created tags one by one and pinpoint which one causes the issue.
Let us know what it was.
The solution was to replace the previous Tag Manager code with the latest recommended snippet.
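For anyone else landing here, the standard container snippet at the time of writing looks like this (GTM-XXXX is a placeholder for your container ID), and it goes as high in the <head> as possible:

<!-- Google Tag Manager -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-XXXX');</script>
<!-- End Google Tag Manager -->

with the matching <noscript> iframe immediately after the opening <body> tag. Diffing this against what's deployed is a quick way to spot a stale snippet.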
I was just about to set up a 2nd GA property that I would implement in my Staging environment. I figured I'd do the same with GTM and just export/import containers from Stage to Production whenever necessary. I also figured I'd populate the Tracking-ID dynamically based on hostname. No big deal.
But then I stumbled across Environments for GTM. The first bit I read said that using this feature would solve the problem of moving code across environments. To me this implied that the snippet code would remain the same in all environments, that there would be no need to change any values (dynamically, via build script, manually, or otherwise), and that GTM was smart enough to deploy the right container(s) to the right place(s) at the right time(s). That sounds great, I'll do it.
Now that I'm getting into that process, I'm learning (if I'm understanding correctly) that each environment does in fact have to have a separate snippet. So now I'm back to where I started: having to dynamically add values to the snippets based on domain name (which indicates stage vs. production). Without that, every time the file containing the snippet is pushed between environments, it will contain the wrong values. I guess using Environments still takes out the export/import process for containers (which, don't get me wrong, is nice), but having to change those values is a pain.
Is this the long and short of it? Do I have this right? Is there any way around having to change code in the web page (or template), by doing it somehow through GTM instead? I'm guessing not, since the snippet is the base of GTM's functionality, but I figured I'd ask.
Further complicating things is that I was planning to use a WordPress plugin, Google Tag Manager for WordPress, to add the GTM code. In that case, all I can even change is the Tracking-ID, which actually stays the same; it's the other values that change that I have no control over with the plugin. Is anyone aware of a way to inject new values into the snippet that the plugin writes to the page?
The snippet for an environment has the same GTM ID, but a token for the environment name is attached to the GTM URL. If you use any kind of build system, it should be possible to set or change the token according to the server you deploy to. Personally, I am not convinced that environments are really useful.
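To make that concrete: the environment snippet differs from the live one only in the query string of the gtm.js URL, so a build step needs to template just that fragment. A rough sketch, where DEPLOY_TARGET and the token values are made-up placeholders (the real gtm_auth/gtm_preview values come from the GTM Environments UI):

// Inside the container snippet, where j.src is assembled:
var envToken = {
  production: '', // the live snippet carries no extra parameters
  staging: '&gtm_auth=PLACEHOLDER_AUTH&gtm_preview=env-3&gtm_cookies_win=x'
}[DEPLOY_TARGET]; // DEPLOY_TARGET would be injected by your build system
j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + dl + envToken;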
If all you need is different values for tracking IDs, you can implement a lookup table variable that takes the hostname variable as input and returns the respective tracking ID for live or staging. Then use that instead of hardcoding the tracking ID into your tag.
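For illustration, here's the same mapping expressed as a GTM Custom JavaScript variable (the hostnames and IDs are made up); a native Lookup Table variable is the cleaner, code-free way to configure exactly this:

function() {
  // Map each hostname to its GA property ID (placeholder values).
  var ids = {
    'www.example.com': 'UA-12345678-1',     // production
    'staging.example.com': 'UA-12345678-2'  // staging
  };
  return ids[{{Page Hostname}}] || 'UA-12345678-2'; // default to staging
}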
I recently ran into some trouble with Google DFP that I'm hoping others have seen before.
We have a site that's served via SSL and it contains some Google DFP ad tags. Google DFP's debugging console shows no errors in the tags or our implementation of them. (i.e. the tags themselves are fine)
However, the ads are getting served via different methods. Some of the iframes get served as FriendlyFrames and some get served as SafeFrames. The SafeFrame ads appear correctly. The FriendlyFrame ads don't show up.
It appears that the FriendlyFrame ads are running afoul of some sort of browser security measure (likely because the pages are served via SSL).
I looked into this in the DFP docs but haven't found anything that explains how to solve the issue. There is a setForceSafeFrame method available that I've tried, but it doesn't actually seem to do anything:
https://developers.google.com/doubleclick-gpt/reference#googletag.PassbackSlot_setForceSafeFrame
I've set up a test page demonstrating the issue here:
https://methnen.com/ad-test
There should be 5 separate ads on the page. If you get all of them, refresh the page until at least one ad doesn't show. The broken ads are being served as FriendlyFrames.
Really hoping someone knows what the heck is going on.
FYI, and possibly helpful for anyone else who runs into this at a later date:
Turns out the Ad Ops person hadn't set things up on their end with enough inventory to fill all of the slots, and there was nothing wrong with the tagging at all. The empty FriendlyFrames are apparently what DFP serves up when it decides it doesn't have anything to fill a given slot.
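Related tip: if you'd rather not render an empty frame when DFP has nothing to fill a slot, GPT can collapse the slot's div instead. A minimal sketch (the ad unit path and div ID are placeholders):

googletag.cmd.push(function() {
  googletag.pubads().collapseEmptyDivs(); // collapse slot divs that come back unfilled
  googletag.defineSlot('/1234/placeholder-unit', [728, 90], 'div-gpt-ad-1')
           .addService(googletag.pubads());
  googletag.enableServices();
});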
Try forcing all ads to render in SafeFrames:
googletag.pubads().setForceSafeFrame(true);
More about it here https://developers.google.com/doubleclick-gpt/reference#googletag.PassbackSlot_setForceSafeFrame
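Note that for the call to have any effect it has to run before the ad request is made, e.g. in the page-level setup; a minimal sketch:

googletag.cmd.push(function() {
  // Must precede enableServices()/display() to affect the ad request.
  googletag.pubads().setForceSafeFrame(true);
  googletag.enableServices();
});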
I am using webapp2 on GAE. When I call self.redirect to some page, like below:
self.redirect(some_url)
the page it returns looks cached; I have to refresh/reload the page to get the latest data.
Is there any cache setting for webapp2, or do I have to set some properties on the response for that page? Please advise.
In my project I've fixed that by adding time.sleep(0.1) just before the self.redirect('someurl') call.
Not sure if it is the best way to solve the problem, but pages started to show the most recent info.
Edit: beware of the consistency issue.
Check out Lindsay's answer below. Using time.sleep(0.1) might give you the expected result in a local environment, but you cannot trust it in a production environment. If you really need results to be strongly consistent, use an ancestor query, not time.sleep(0.1).
My guess is that this is happening because the earlier page is updating an entity that is then being accessed on the later page by means of a non-ancestor query. A non-ancestor query provides eventual-consistency rather than strong-consistency, so the problem is not that the page isn't being refreshed, but that it's showing what the data looked like before the update was completed. When you refresh, or add a call to time.sleep(), you may be providing enough time for the datastore to catch up, especially during testing. However, in production, your sleep may not be long enough in all cases, and the same is true of a page-refresh.
If you check your application and find out that you are using a non-ancestor query, and therefore your problem is indeed eventual-consistency vs strong-consistency, a Google search will show you that many pages discuss that topic; here's one: https://cloud.google.com/developers/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore#ftnt_ref1.
The simplest solution seems to be to create an entity group and use an ancestor query, though that comes with a possible performance hit and a limitation of one update per second per entity group.
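To illustrate the entity-group approach, here's a minimal ndb sketch; the model and key names are made up:

from google.appengine.ext import ndb

class Message(ndb.Model):
    text = ndb.StringProperty()

# All Messages created with this parent key live in one entity group.
board_key = ndb.Key('Board', 'main')
Message(parent=board_key, text='hello').put()

# Ancestor queries are strongly consistent, so this sees the write
# above immediately, with no eventual-consistency lag.
recent = Message.query(ancestor=board_key).fetch(20)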
I got the same problem. I did a trick which is not good, but it helped me anyway: render a temporary view file, then do an HTML redirect:
<meta http-equiv="refresh" content="0.5;URL='/'">
Hope it helps. Anyone with a better answer?
Do you call return immediately after the self.redirect(some_url)? It may be falling through to other code that renders a page.
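That is, the handler should look something like this (the handler and URL are made up for illustration):

import webapp2

class SaveHandler(webapp2.RequestHandler):
    def post(self):
        # ... write the updated entity ...
        return self.redirect('/list')
        # Without the `return`, execution falls through to whatever
        # rendering code follows and can produce the stale-looking page.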
This might fall under the category of "you can't", but I thought it might be prudent to at least see if there is something I can do about this.
According to FireBug, the major bottleneck in my page loading times seems to be a gap between the loading of the html and the loading of Google adsense and analytics. Notice in the screenshot below that the initial GET only takes 214 ms, and that adsense + analytics loading takes roughly 130 ms combined. However, the entire page load time is 1.12 seconds due to that large pause in between the initial GET and the adsense/analytics loading.
If it makes any difference at all, the site is running off of the ASP.NET MVC RC1 stack.
(Screenshot of the Firebug timeline: http://kevinwilliampang.com/pics/firebug.jpg)
Update: After removing AdSense and Analytics, I'm still seeing a slow response time. Hovering over the initial GET request, I see the following timings: 96ms Receiving Data, 736ms DOMContentLoaded (event), 778ms 'load' (event). I'm guessing, then, that the performance hit comes from my own jQuery JavaScript with processing tied to the $(document).ready() event?
You should place your analytics code at the bottom of the page so that everything else loads first. Other than that, I don't think there's much you can do.
Edit: Actually, I just found this interesting blog post on a way to speed up Analytics by hosting your own urchin.js file. Maybe it's worth a look.
I've never seen anything like that using Firebug on Stack Overflow and we use Analytics as well.
I just ran a trace, and I see the request for
http://www.google-analytics.com/__utm.gif?...
happening directly after the DOMContentLoaded event (the blue line). So I'd suspect AdSense first. Have you tried disabling that?
As it goes, I happen to have rather heavily researched this just this week. Long story short, you are screwed. As others have said, the best you can do is put it at the bottom of the list of requests and make the rest of your code depend on ready rather than onload events; jQuery is really good here. Some portion of the JS is static, so you could clone it locally if you keep an eye on it for maintenance purposes.
The Google code isn't quite as helpful as it could be in this area*, but it's their ballgame, and anything you do to change it is going to be both complex and risky. In theory, wrapping it with a non-blocking script call in the header is possible, but it would be unlikely to gain you a benefit given the additional abstraction, and ultimately with AdSense your payload is an HTML source, not script.
* It's possible Google have a good reason, but nothing I can deduce from the code they expose.
Probably not anything you can do aside from putting those includes right before the closing body tag, if you haven't already. JavaScript includes block parallel HTTP requests, which is why they should be kept out of <head>.
Surely Google's servers will be the fastest part of the loading, given that your ISP and most ISPs will have a local cache of the files too?
You could inject the script into the head on page load, perhaps, but I'm not sure how that affects urchin.js.
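For what it's worth, here's a minimal sketch of that kind of injection; whether urchin.js behaves when loaded this way is exactly the open question (the account ID is a placeholder):

// Inject the tracker script without blocking the initial page render.
(function () {
  var s = document.createElement('script');
  s.src = 'http://www.google-analytics.com/urchin.js';
  s.async = true;
  s.onload = function () {
    _uacct = 'UA-XXXXX-X'; // placeholder account ID
    urchinTracker();       // must run only after urchin.js has loaded
  };
  document.getElementsByTagName('head')[0].appendChild(s);
})();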
Could be that your page simply takes that long to parse? It seems nothing network-related is happening. It simply waits around a second before the adsense/analytics requests are even fired off.
I don't suppose you have a few hundred tables on the page or something? ;)
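One quick way to test that theory (assuming jQuery, per your update): wrap your ready handler's work in Firebug's console timers and see whether it accounts for the gap:

$(document).ready(function () {
  console.time('ready work');    // Firebug logs the elapsed time
  // ... your existing initialization here ...
  console.timeEnd('ready work');
});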