I'm setting a cookie during the HTTP GET request for .html pages with embedded images. I'm expecting the browser to return the cookie when fetching the embedded images, but apparently that doesn't happen for the first embedded image.
Is this how it's supposed to work, or am I missing something?
Make sure the domain name matches your domain and that you've set a valid expiration date/time for it. These are the two most common mistakes.
It would help if we knew how you were setting the cookies. Note that NRNR's response is a bit misleading: he/she's right about the domain, but there's no requirement to set an expiration. However, you will get varying results unless you explicitly set a path too, even if it's just '/'.
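For what it's worth, here is a minimal sketch of setting a cookie with an explicit Path=/ (using Node's built-in http module; the cookie name "session" and its value are just placeholders):

import * as http from "http";

http.createServer((req, res) => {
  // Path=/ makes the cookie valid for every path on the site, so
  // image requests like /images/logo.png will carry it back too.
  res.setHeader("Set-Cookie", "session=abc123; Path=/");
  res.end("<html><body><img src='/images/logo.png'></body></html>");
}).listen(8080);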
Browsers vary a lot in how they handle all sorts of things, including cookies, so I wouldn't be too surprised if there are browsers out there which start retrieving additional content before the response headers for the referencing HTML page are processed. That's not how it's supposed to work, though.
C.
Related
I can see this: site.com/assets/css/screen.css?954d46d92760d5bf200649149cf28ab453c16e2b. What is this random alphanumeric value after the question mark? I don't think the server is using it as a value, so what is it about?
Edit: also, on refreshing the page, the alphanumeric value is the same.
It is there to prevent the browser from caching the CSS. Some browsers, Internet Explorer in particular, keep a local copy of a CSS file once it has been requested.
When a request is given to a server as:
site.com/assets/css/screen.css?skdjhfk
site.com/assets/css/screen.css?5sd4f65
site.com/assets/css/screen.css?w4rtwgf
site.com/assets/css/screen.css?helloWd
The server at site.com sees only:
site.com/assets/css/screen.css
And gives the latest version. But when the HTML page asks the browser to fetch the CSS as site.com/assets/css/screen.css, only the first request actually goes to the site.com server; later requests are served from the browser's cache, even though the content may well have changed in the meantime. So programmers generally append a ?and-some-random-text, called a query string. This forces the browser to get a new copy from the server.
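To make that concrete, here is a small sketch of the idea (a timestamp works as well as random text; the function name bustCache is made up for illustration):

function bustCache(url: string): string {
  // Date.now() changes on every call, so each page load produces a
  // distinct URL and the browser fetches a fresh copy from the server.
  return url + "?" + Date.now();
}

// e.g. https://site.com/assets/css/screen.css?1717171717171
console.log(bustCache("https://site.com/assets/css/screen.css"));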
Some more detailed explanation:
It is a well-known problem that IE caches too much HTML, even when giving a Cache-Control: no-cache or Last-Modified header to every page.

This behaviour is really troubling when working with query strings to get dynamic information, as IE considers it to be the same page (i.e.: http://example.com/?id=10) and serves the cached version.

I've solved it by adding either a random number or a time string to the query string (as others have done), like this: http://example.com/?id=10&t=2009-08-06_13:12:56, which I just ignore server-side.

Is there a better option? Is there another, cleaner way to accomplish this? I'm aware that POST isn't cached, but it is semantically correct to use GET here.
Reference: Random Querystring to avoid IE caching
I have a JSON resource, let's call it /game/1, which is being publicly cached with a long duration. Based on some client-side logic, I occasionally want to refresh this resource (for instance, when I know something should be happening server-side: a game ending, in my case).
Once refreshed, I would like all downstream caches to update with the new content, so any requests to /game/1 will fetch the refreshed content. Appending a querystring with a random parameter won't work in this case.
I have tried adding the following headers on the request, which seems to work in a temperamental fashion in browsers other than IE:
headers['Cache-Control'] = 'max-age=0, no-cache';
headers['Pragma'] = 'no-cache';
Using these headers, Chrome seems to sometimes refresh the content, presumably based on some internal heuristics.
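In modern terms, a sketch of how those headers might be attached with fetch (the cache: "reload" option is an assumption here; it additionally asks the browser to bypass its own HTTP cache):

async function refreshGame(): Promise<unknown> {
  const res = await fetch("/game/1", {
    cache: "reload", // skip the browser's local HTTP cache
    headers: {
      "Cache-Control": "max-age=0, no-cache",
      "Pragma": "no-cache",
    },
  });
  return res.json();
}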
Does anyone have any better ideas for what I'm trying to achieve?
Try setting <meta http-equiv="expires" content="0">.
Setting the 'expires' meta tag to zero should force the browser to reload everything on each page visit. Forcing constant cache deletion will obviously slow down page loading (if all browsers obey it!), but maybe that's an acceptable trade-off. It won't help with downstream caches, however, so it's far from a complete solution.
I'm trying to understand the best Cache-Control value to set for static content (images, CSS, JavaScript). The issue is that my JavaScript/CSS is still very much in development, and whenever I make a change I want people to see it immediately (they shouldn't have to clear their cache).
What's the best way to go about this? Should I add a ?version=1000202210 after each static request so the browser knows it's new?
Yes, a long expiration date + fingerprinting brings you maximal browser caching and, at the same time, the flexibility to propagate changes immediately. Google Page Speed has a good explanation. You can put the fingerprint either in the query string or in the path of the assets. It doesn't really matter how you do it, as long as the URL changes when you want the resource to be fetched again.
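A sketch of what the fingerprinting can look like server-side (assuming Node; the function name fingerprintUrl is made up for illustration):

import { createHash } from "crypto";
import { readFileSync } from "fs";

function fingerprintUrl(filePath: string, publicUrl: string): string {
  // Hash the file's bytes: the URL changes only when the content does,
  // so a far-future Cache-Control header on the asset stays safe.
  const digest = createHash("sha1").update(readFileSync(filePath)).digest("hex");
  return publicUrl + "?" + digest;
}

// e.g. /assets/css/screen.css?954d46d9...
console.log(fingerprintUrl("assets/css/screen.css", "/assets/css/screen.css"));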
This is happening in multiple versions of Safari, including 5.x
It will post __EVENTTARGET=&__EVENTARGUMENT= but nothing for __VIEWSTATE=
This is only happening in Safari, and only on one page of our site.
I can't reproduce it - we've spent days trying to.
The viewstate isn't overly huge on this page.
Thanks!
We ran into a lot of viewstate problems with version 3. Safari sets limits to the amount of data that can appear in any one field that gets posted back to the server.
The way we got around our problems was to set viewstate to span multiple input controls.
You can do this in the system.web / pages section of the web.config. For example:
<system.web>
<pages maxPageStateFieldLength="500" />
</system.web>
You might have to play with the value. I can't remember what the limits are for the various versions of Safari. A few people have said 1 KB, but if I remember correctly from our testing, some versions were only passing around 500 bytes.
Another option is to store viewstate server side. You can see an example of this here. You should also read this blog about potential issues. We did try this path and eventually abandoned it as it conflicted with some other encryption things we were doing.
(taking a different tack from the previous answer)
To sum up what we know thus far:
only safari
only a particular page
there is a device called StrangeLoop in the mix which removes viewstate on the way out and puts it back in when the page is posted back. It does so through some type of token value.
A couple of questions:
First, is this limited to just a particular customer or set of people? I ask because it might be important that it's "only" Safari.
Second, does the StrangeLoop device have some type of timeout value or traffic limit where its token cache is garbage collected?
I can envision a scenario where a particular client goes to this page and sits for a while (10 minutes... longer?). In the meantime, either a timeout value is met or the amount of traffic you have forces the StrangeLoop device to throw the viewstate for this particular client out. Then, when they go ahead and post back, the device has no viewstate to inject back into the HTML stream.
It seems to me that in order for you to not have any viewstate at all, the device itself must not be injecting it. The only reason I can come up with for that would be if the token value wasn't sent by Safari (unlikely, as it has to be quite small) or the device couldn't locate a match in its cache table.
Does the device have any sort of logging or metrics where you can see if it can't match an incoming token value?
A similar idea: if this page has some AJAX going on, does the device send a different token back for each request, or does a single client browser retain the token for the entire browsing session? If it sends a different token, then it might be that Safari isn't properly updating itself client-side with the new token value. Although this path ought to be pretty easy to duplicate.
If the server doesn't send the Content-Type header, how does the browser tell which kind of content it got? For example, when I get the SO logo with Chrome, the image displays intact, even though the server doesn't state its type (at least, not explicitly).
Most browsers do content sniffing if the type is not explicitly declared in the HTTP header. They are looking for specific signatures they know and thereby guess the media type.
See the section Determining the type of a new resource in a browsing context in the HTML 5 specification or this Draft of Content-Type Processing Model for some examples.
It can guess the content type by inspecting the file.
For example, PNG files have "PNG" among the first 4 bytes.
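As a toy illustration of that kind of sniffing (real browsers follow a much longer signature table; these three signatures are well-known magic numbers):

function sniffImageType(bytes: Uint8Array): string | undefined {
  const startsWith = (sig: number[]) => sig.every((b, i) => bytes[i] === b);
  if (startsWith([0x89, 0x50, 0x4e, 0x47])) return "image/png";  // \x89 P N G
  if (startsWith([0xff, 0xd8, 0xff])) return "image/jpeg";       // JPEG SOI marker
  if (startsWith([0x47, 0x49, 0x46, 0x38])) return "image/gif";  // G I F 8
  return undefined;
}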
Different browsers handle it in different ways.
Internet Explorer guesses based on content. In fact, it has often ignored Content-Type headers, instead using its own guess.
Some browsers also take the extension into account.