Why does Google Analytics request a GIF file?
Is it because the GIF allows access to more data than JavaScript alone? Is it to get the IP address of the user?
Google's JavaScript has to transmit the details of your page view to their servers somehow. Ajax cannot be used across domains, so the only way to submit the information is to request a file from a Google server, passing the necessary information in the query string. To facilitate this, some form of content must be requested using a standard HTML tag. The easiest way to do this without impacting the page itself is to load a 1x1 transparent GIF. Note that in the case of the Google script (and others), this image isn't actually added to the page. It's merely loaded via a JavaScript statement:
var img1 = new Image();
img1.src = 'http://path/to/file.gif?otherinfohere';
This loads the image without adding it to the page. The information could also be loaded using a script tag like so:
<script src="http://path/to/script.js?otherinfohere" type="text/javascript"></script>
However, users are more likely to have JavaScript blocked than images, so the safer route is to request an image. As to why they would use a GIF instead of, say, a JPG: a GIF is safer in case a rogue browser actually adds the image to the page. A transparent GIF is unlikely to negatively impact the layout, whereas a 1x1 JPG would leave a one-pixel dot somewhere on the page.
Edit: To add to my comment about users having blocked JavaScript, a GIF request containing static information can be added inside a noscript tag to allow basic tracking even in the event that JavaScript is disabled. To my knowledge, GA doesn't do this, but some other web analytics providers do.
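As a rough sketch of that fallback (the pixel URL and query parameters here are invented for illustration, not any vendor's actual endpoint):

<noscript>
  <img src="https://stats.example.com/pixel.gif?page=%2Fpricing&ref=homepage" width="1" height="1" alt="" />
</noscript>

Since no script runs, only values that can be baked into the HTML at page-generation time (the page URL, a referrer, a campaign id) can be sent this way.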
Even with JavaScript enabled, Analytics requests a GIF file. If you look at the GET parameters of the image request, it contains a lot of information about the browser: stuff like utmsr=1280x1024 (the screen size). Google Code has a list of the parameters.
It uses the image request to send information about the browser without an XMLHttpRequest.
Now, to actually answer the original question, Google is probably doing this to get around cross-domain XMLHttpRequest restrictions.
http://www.perlmonks.org/?node_id=7974
The smallest transparent GIF is 43 bytes.
http://garethrees.org/2007/11/14/pngcrush/
The smallest transparent PNG-24 (which older browsers can't display anyway) is 67 bytes.
http://www.techsupportteam.org/forum/digital-imaging-photography/1892-worlds-smallest-valid-jpeg.html
The smallest (opaque) JPEG is 134 bytes.
The math is simple: a bigger file means more bandwidth, and at this scale, more cost.
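To put hypothetical numbers on it: at a billion pixel requests per day, the 91-byte difference between the smallest GIF (43 bytes) and the smallest JPEG (134 bytes) works out to roughly 91 GB of extra traffic every day, before any protocol overhead.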
- You can use the __utm.gif tracker without JavaScript (with some server-side help).
- You can use it in an email message (with some programmatic help before sending the email).
- Urchin was developed before AJAX was popular (2005).
- It has nothing to do with cross-domain restrictions; they could have used JSONP for that.
Tag 1: Retargeting
Fire on all pages of the site
For Image format use:
<img height="1" width="1" style="border-style:none;" alt=""
src="https://trck.adsway.org/track/pxl/?adv=ou2fddz&yy=0:r1wen4w&fmt=4"/>
I got this, so I created a Custom HTML tag that fires on "All Pages" and put the image DOM element inside it, but I am not sure if this is the only thing we need. Aren't we supposed to add some kind of JavaScript code in order for this to work? I wasn't provided with any JS script tag or any documentation, so I have no idea whether pixel tracking needs some kind of JS file to make the tracking work. Thanks.
You don't need a JS script in order for retargeting to work. Many vendors will provide a simple image pixel. You'll basically send a ping to the vendor each time a page is loaded.
JS would allow you to do more advanced things like reading cookies and global variables to collect more information. But you could reach out to whoever sent you the tag and see if a JS script is provided. If not, you should be good to go with just the image pixel.
You do not need additional code. Image requests are a time-honoured method of sending information across domain boundaries. On the receiving end there will be a script that writes the information from the query parameters ("?adv=ou2fddz" etc.) to some kind of data store, where it can then be processed. This was in use long before JavaScript-based tracking became common, because it is simple and reliable. In fact, even many JavaScript-based tags generate an image request; they merely collect information, set cookies, or create user IDs etc. via JavaScript before they send it off. So this should work fine as is.
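To make the receiving end concrete, here is a minimal sketch in Node.js with Express (the route and the logging are invented for illustration; a real vendor's endpoint will look different):

// Serve a tiny transparent 1x1 GIF and record the query parameters.
const express = require('express');
const app = express();

const PIXEL = Buffer.from(
  'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

app.get('/track/pxl/', (req, res) => {
  console.log('hit:', req.query);        // in practice: write to a data store
  res.set('Content-Type', 'image/gif');
  res.set('Cache-Control', 'no-store');  // discourage caching of the pixel
  res.send(PIXEL);
});

app.listen(3000);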
What you could do better is not to use a Custom HTML tag, since those cause extra work every time the pixel fires. GTM has a tag type "Custom Image". You save your users' browsers a little work if, instead of using Custom HTML, you create a Custom Image tag and insert the URL from the "src" attribute of the image:
[Screenshot of the Custom Image tag configuration omitted; the URL is cut off there, but you need to use the full src attribute.]
The cache-busting option will add a random parameter. Browsers might decide to cache the image, which means it would be sent only once to the tracking server and on the next occasion would come from the browser cache. The random parameter makes this a different URL every time, and thus not cached (the parameter usually has no ill effects on the tracking).
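Outside of GTM you can do the same thing by hand; a quick sketch (reusing the pixel URL from the question, with a made-up parameter name):

var pixel = new Image();
// 'cb' is an arbitrary cache-busting parameter; the random value makes the
// URL unique, so the browser cannot serve the request from cache.
pixel.src = 'https://trck.adsway.org/track/pxl/?adv=ou2fddz&fmt=4'
          + '&cb=' + Math.floor(Math.random() * 1e9);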
I have a WordPress website running and I am using the W3 Total Cache plugin to make the site load faster. When I scan the site in Google PageSpeed Insights, I notice I am getting inconsistent scan results. I have a Facebook Messenger chat floating on the webpage and a Google map. Since these two gave me the "Reduce the impact of third-party code" warning, I made changes so that they are loaded only after the DOM has loaded completely. I actually used jQuery setTimeout for this, and I managed to remove the warning from the result by doing so. But now and then I notice the same warning coming back, even though I have made adjustments. If I scan the site two or three times in quick succession the warning may go away, but it is back again if I try after a while.
These are the results of frequent scans. Do you have any idea what could be going wrong here? I have spent a lot of time searching but couldn't get my head around it.
With the classic HTTP/1.0 Hypertext Transfer Protocol, resources like JavaScript, CSS, HTML, images, etc. are loaded in request/response pairs, meaning the browser sends a request for a resource (be it CSS, JavaScript, etc.) and waits for the response to come back before it requests another resource. Even though they are loaded in request/response pairs, the pairs are not always going to follow the same strict sequence, due to randomness in network latency, server response time, the load the server is currently experiencing, etc.
With HTTP/2 and HTTP/3, the newer versions of the HTTP protocol, instead of waiting for a response to come back before sending another request, requests can be sent all at once. I checked your website and saw that it is using HTTP/2 and HTTP/3. Since requests can be sent all at once with these protocols, that can contribute to a degree of "inconsistency" as well, among other things. Even with HTTP/1.x there's always a degree of randomness, since many factors play into it: the server response time will differ, the network latency will differ, etc.
To illustrate this, if you are using the Chrome browser, open "Developer Tools" by clicking the three dots in the very top right corner of the browser, then "More Tools", then "Developer Tools". Alternatively, press Ctrl+Shift+I on Windows or Command+Option+I on Mac. Then go to the "Network" tab and refresh the page. Each time you refresh, the resources are loaded in a slightly different sequence:
[Screenshots omitted.] Using the Google Tag Manager UA-174548329-1 JavaScript as an example (I know it's probably not the Google map), in one load it was the 4th resource requested; after refreshing the page, the same script was loaded as the 11th resource.
When the page is being loaded, or when you run it through Google's PageSpeed Insights, the main thread is sometimes busy and sometimes not, due to that randomness in requests and responses. Your main thread is also constructing the DOM and doing a lot of work; sometimes it's blocked by render-blocking resources such as JavaScript.
JavaScript is always going to block the critical rendering path by default. Without seeing your setTimeout implementation it's hard to say exactly how you are delaying your JavaScript, but it's safe to assume it probably doesn't help clear the critical rendering path. Instead of setTimeout, you should use defer or async.
You can read more about the critical rendering path here. The main thread is the main process your browser runs to do most of the work of processing and rendering the CSS, JavaScript, and HTML on a page. The critical rendering path is "the sequence of steps the browser goes through to convert the HTML, CSS, and JavaScript into pixels on the screen" (quoted from Critical Rendering Path): the sequence in which your JavaScript, HTML, CSS, images, and other resources are downloaded and rendered. Optimizing the critical rendering path requires a lot of knowledge and is no easy job. However, there are two attributes you can try in the script tag, namely "async" and "defer", to control when your JavaScript is executed.
Take a look at the script-loading diagram from Growing with the Web (image omitted; it compares when HTML parsing, script download, and script execution happen for a plain <script>, <script async>, and <script defer>).
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/loading-third-party-javascript/?utm_source=lighthouse&utm_medium=unknown
As you can see, you can try putting the async or defer attribute in your script tags and see if it helps.
With the 'async' attribute on the script tag, your JavaScript will be executed as soon as it has downloaded. The blue bar under <script async> in the diagram shows that the script is downloaded while the HTML is being parsed: the green bar and the blue bar run in parallel. As soon as the download finishes, the script is executed, and at that point HTML parsing is paused until the script finishes executing. Without the 'async' attribute, HTML parsing is paused (or blocked) while the script is being downloaded and executed.
With the 'defer' attribute on the script tag, you are deferring the execution of your JavaScript until the DOM has finished parsing. The script is downloaded as soon as the browser encounters it, but the download doesn't block the HTML parsing.
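In markup form that is simply (the script URLs here are placeholders, not real files):

<script async src="https://example.com/third-party-analytics.js"></script>
<script defer src="https://example.com/third-party-widget.js"></script>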
In summary, you can use the 'async' attribute on your third-party scripts to 'unblock' your main thread to a certain degree: they will be downloaded in the background while your DOM is being rendered, which speeds the main thread up a bit. One caveat is that the execution itself is still render-blocking. A very important thing to note is that with 'async' you should be prepared for some possibly erratic page behaviour: more 'inconsistencies' might happen, because the JavaScript can now execute at any time in the rendering path, and if something needs to happen before or after the script, you might break that flow and logic.
Or you can use the 'defer' attribute on your third-party scripts to tell them to execute only after the DOM has loaded completely. This speeds things up only a little, because the download can now happen in parallel with HTML parsing (versus a default script tag without defer or async), but the execution still takes its toll on the main thread.
As per Google's support document, in the section "How do you load third-party script efficiently?", here are a few ways:

- Load the script using the async or defer attribute to avoid blocking document parsing.
- Consider self-hosting the script if the third-party server is slow.
- Consider removing the script if it doesn't add clear value to your site.
- Consider Resource Hints like <link rel=preconnect> or <link rel=dns-prefetch> to perform a DNS lookup for domains hosting third-party scripts (see the example after this list).
Other methods:
Check out how to compress, minify, or combine various JavaScript files into one file (if you are serving JavaScript as separate files). Use GZIP compression for your JavaScript and CSS. Also check out how to load third-party scripts via a CDN (Content Delivery Network / Content Distribution Network), among other things.
Updated Aug 12, 2020:
In response to your comment: since you mentioned that your third-party scripts come from plugins and you can't add the 'async' or 'defer' attribute to the script tags yourself, you can consider adding this before your other scripts:
<script>
// If your script tag has an id, use either one below:
document.getElementById("your_script_tag_id").async = true;
document.getElementById("your_script_tag_id").defer = true;
// If your script tag has a class name, use either one below:
document.getElementsByClassName("your_script_tag_class_name")[0].async = true;
document.getElementsByClassName("your_script_tag_class_name")[0].defer = true;
// To apply it to all script tags at once, loop over them and use either
// .async or .defer:
var scripts = document.getElementsByTagName("script");
for (var i = 0; i < scripts.length; i++) {
  scripts[i].async = true;   // or: scripts[i].defer = true;
}
</script>
You can also check out Async JavaScript; this plugin allows you to defer or async your JavaScript, including the third-party scripts.
From what I can see you have set the "delay" to 3 seconds on the Facebook Messenger chat. However, your site often takes a lot longer than this to load the initial content.
Your site will often not have loaded the "above the fold" content within 3 seconds due to things like network latency, load on your server etc.
For this reason the Facebook Messenger chat script is getting loaded at a point where the CPU may or may not be busy. For things like "Total Blocking Time" this is important, as that metric listens for the CPU's first quiet period to work out when the page is usable.
For working out the "impact of third-party code", PageSpeed looks at whether the CPU is working while trying to render the above-the-fold content. That is why it sometimes shows an impact and other times does not: sometimes your above-the-fold content has loaded sufficiently before Facebook Messenger is initialised.
Additionally, you have to consider when your main JS file containing the timeout is loaded; sometimes it will be loaded sooner depending on latency etc., so this affects the time at which the fbDiv is added as well.
There is a lot to cover here, so to simplify the answer (there is an awful lot one could explain about why this happens): either increase the delay on the Facebook Messenger chat, or only load it on a button click.
For example, you could have a button that says "Chat with us" and use its click event to load Facebook Messenger (and hide the "Chat with us" button). This would be my recommendation.
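A minimal sketch of that approach (the SDK URL and element id are assumptions; check Facebook's current customer chat documentation for the exact snippet):

<button id="chat-btn">Chat with us</button>
<script>
document.getElementById('chat-btn').addEventListener('click', function () {
  var s = document.createElement('script');
  s.src = 'https://connect.facebook.net/en_US/sdk/xfbml.customerchat.js';
  document.body.appendChild(s);   // start loading Messenger only now
  this.style.display = 'none';    // hide the button once chat is loading
});
</script>

This way the Messenger script never competes with your above-the-fold content, so the "third-party code" warning should go away for good.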
Alternatively, looking at the load speed of your site, you could set the delay to about 7 seconds, and it would then (probably) be consistent.
I created a clickable tag cloud generator. The tool generates a nice image, which is the actual tag cloud; to make it clickable and hover-able (interactive), the tool (essentially a method in a class) also returns some HTML.
Since the image and the HTML are both generated in the same action method in my MVC project, I am wondering whether to return a ViewResult (with the HTML) or a FileResult (with the image). I do not want to use the session, and I have <sessionState mode="Off"> in my app.
Right now I have a partial solution, where I save the image to the filesystem and send back the HTML ViewResult with an <img> tag in it pointing to the saved image. This obviously will not work with concurrent users (users may overwrite each other's files and interfere with each other).
Essentially, what is the best way to send the image and HTML to the browser, without using server-side session? And without using an elaborate filesystem-based store for the images?
I'm aware of <img src="data:..." />, but since it does not work in IE7 and below, and since the image is quite big, it's not an option.
Thanks in advance!
Your image must be identifiable in some unique way. Perhaps a time stamp of when the tag cloud was changed? Or a hash of the tags in that cloud?
Put your image and image map into storage (a database, a cache, or a blob store) keyed by that identifier, and have the <img> src point to an action that serves the image by key.
Mike Brown
I'm looking at embedding YouTube videos in a webpage (a Drupal webpage, if that helps), but I need to figure out what people will see if their business/workplace/country blocks YouTube access.
Does it show 'video no longer available'? Does it show nothing at all? Does it add a class or ID to the embedded HTML to let CSS or a scripting language know that there is an error?
I would like to be able to swap the embed code out for a GIF or something else, so users who can't access YouTube are not left with whatever YouTube decides to show them.
Any tips would be great.
I tried editing the hosts file to test this myself, but it wouldn't take for some reason.
Cheers.
This can be achieved using JavaScript.
In your script, request a resource that is located on YouTube. Since it's JavaScript running in the client's browser, the request will come from the client and not from your website.
If the request fails, the client has no access to YouTube.
Did I mention that relying on external resources you can't control is bad?
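A rough sketch of such a probe (the video id, element id, and fallback image path are placeholders):

<script>
// Probe YouTube reachability by loading a video thumbnail.
var probe = new Image();
probe.onload = function () {
  // YouTube is reachable: leave the embed in place.
};
probe.onerror = function () {
  // YouTube appears blocked: swap the embed for a fallback image.
  document.getElementById('player').innerHTML =
    '<img src="/images/video-unavailable.gif" alt="Video unavailable">';
};
probe.src = 'https://img.youtube.com/vi/VIDEO_ID/default.jpg';
</script>

Note that a blocked request may hang for a while before onerror fires, so you may prefer to show the fallback by default and swap the embed in from onload instead.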
Is there some method to take a screen capture (browser window content only) from the browser with JavaScript, an embedded Flash object, etc., so that a full-quality image of the page content can be saved or printed? Or is there an alternative approach?
I have a web app (ASP.NET 3.5) with Google Maps and other client-side Ajax operations, like a custom tile server. I have been trying to implement a way for the user to print good-quality captures of the webpage.
I have used the basic window.print(), but in both IE and FF there are many artifacts within the Google Maps, and some items, such as the popped-up bubble, don't print. I have experimented with saving to PDF through CutePDF (just to post an example here), and the quality through window.print() is low too.
For example, a screenshot taken with the FireShot add-on is perfect and is exactly what I want the client to have. However, that is FF-only, and I can't ask the clients to install add-ons/ActiveX controls in their browsers.
Have a look at this example zip file (4 MB) containing:

1. An example screenshot using FireShot (an example of what I want to achieve through an HTML/JS button within the page)
2. The Firefox window.print() result (CutePDF used to save as PDF)
3. The IE window.print() result (CutePDF used to save as PDF)

Note that in 2 and 3 the little bubble is not printed, even when open.
For now, I have added a function on my site to go fullscreen and guide the user to take a screenshot, or to call the window.print() function.
I am still looking for a method to print/capture my page.
Are there any Flash/ActiveX controls that I can include in my page to provide a quality print mechanism?
Thanks again for all the help but I still need more. :)
Thank you in advance.
http://rapidshare.com/files/311849636/Print_examples.zip.html
You'll go to all that work only to find that a simple app like SnagIt will do the job. See "Building a SnagIt Screen Capture Plugin".
The only way to reliably provide a high-quality print version of what's on screen in a rich web application is to use the client side, say JavaScript, to send the server precise information about the current state (where bubbles are, etc.) and use that to generate an image that mimics the positioning. Convert that image to a PDF or what have you, then send it to the client for download.
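A small sketch of the client half of that idea (the endpoint and payload shape are invented; map is assumed to be a Google Maps instance):

// Gather the current map state and POST it for server-side rendering.
var state = {
  center: map.getCenter().toString(),  // Maps API: current map centre
  zoom: map.getZoom(),                 // Maps API: current zoom level
  bubbles: []                          // positions/content of open info bubbles
};
var xhr = new XMLHttpRequest();
xhr.open('POST', '/render-print-image', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(state));

The server then rebuilds the view from that state (for example, a static map plus overlays) and returns a printable image or PDF.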
Google also has a Static Maps API that might give you good results. I looked into it myself once, and only didn't go with it because (at the time) there were limitations on how many points it could support in a polyline.
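A Static Maps request is just an image URL, along these lines (the coordinates and size are placeholder values, and you also need to append your API key parameter):

https://maps.googleapis.com/maps/api/staticmap?center=40.71,-74.00&zoom=12&size=600x400&path=40.71,-74.00|40.72,-74.01

so a server-side print renderer could embed the returned image directly in the generated PDF.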
I don't think this is possible. It would be quite a security risk to be able to capture the user's screen through scripting (imagine bad sites capturing screen information).
No, there is not, though it would come in handy at times for bug reporting, etcetera.
You will probably get the best result by creating a separate version of the page as a PDF and generating it server-side. It's no quick fix by any means, but you'll get superb print quality and total control over everything. The map part will probably be a bit tricky, though, as you need to get the map as a bitmap on the server somehow, and if it's not in Flash on the client I don't know how you'd do that.