In my application the user has the option to modify the CSS of their site.
This is unlikely to change that often, but when it does we need to ensure that they, and their site visitors, see the results instantly.
We do record the date and time that the user updated their CSS so an easy solution would be to just append a timestamp to the url.
However, I would like to know if I can set the cache headers programmatically to force the browser to re-request the CSS file if it changes.
If you include a hash in your URL, e.g.
http://server.example.com/styles/css.css?hash
it will get reloaded whenever the hash changes, because the browser fetches it from a new URL:
Version 1:
<link rel="stylesheet" type="text/css" href="styles/css.css?hash=v1" />
Version 2:
<link rel="stylesheet" type="text/css" href="styles/css.css?hash=v2" />
Client caching is a client matter; let clients do as they see fit. A new URL means the resource has changed, so it will be reloaded. Keeping the same URL and relying on cache control headers may lead you into a world of pain, because of varying client implementations.
If you rely on cache control headers (Last-Modified, Expires, ETag), you cannot be sure that your CSS will get refreshed when it changes:
- aggressive browser (or proxy) caching may ignore those headers;
- you may serve v1 on May 1st with an expiration date of June 1st, update it to v2 on May 15th, and your clients will have to wait 15 days to get the new version.
With the URL hash, the worst-case scenario is that the client does not put your CSS in cache, but the user experience is not altered, as they always get the latest version.
With expiry date or last modified date, worst case is that the client gets an old version, and this will alter the user experience :)
Thanks to Mathieu's response I'm using a combination of Output caching and version numbers to handle cache invalidation.
Output cache profile:
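A minimal sketch of what such a profile might look like in web.config (the profile name, duration, and parameter are illustrative, not the original settings):
<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <!-- Long cache lifetime; the ?v= timestamp busts it whenever the CSS changes -->
        <add name="UserCss" duration="31536000" location="Any" varyByParam="v" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>
The action serving ~/assets/usercss can then be decorated with [OutputCache(CacheProfile = "UserCss")].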
I created the following extension method for appending the time stamp:
public static string AppendTimeStamp(this string src, DateTime lastModified)
{
if (string.IsNullOrEmpty(src))
return src;
return string.Format("{0}?v={1}", src, lastModified.ToString("yyyyMMddHHmmss"));
}
Usage:
<link rel="stylesheet" href="@Url.Content("~/assets/usercss").AppendTimeStamp(CustomizationSettings.LastModified)"/>
In cases where you don't want to ever cache the file you can just pass DateTime.UtcNow as the last modified date.
When implementing the post-redirect-get pattern in a web application, it is common for the final step in your server code to look something like this (pseudocode):
if (postSuccessful)
{
redirect("/some-page?success=true")
}
That is, the redirect URL has some kind of success parameter in the query string so that you know when to display a nice looking "Your form has been submitted!" message on your page. The problem with this is that the success=true persists in the query string when it's only needed to initialize the page. If the user refreshes the page or bookmarks it, they will receive a false success message even though no additional POST has taken place.
Is there an elegant solution to this that doesn't involve using JavaScript to eliminate success=true from both the query string and the browser history? This solution works, but definitely adds complexity to a page's load process.
You can use server-side logic to implement this feature, without any JavaScript. The steps are listed below:
1. When the POST is successful, redirect to /some-page with the current timestamp:
if (postSuccessful)
{
    redirect("/some-page?success=true&timestamp=1559859090747")
}
2. When the server receives the GET /some-page?success=true&timestamp=1559859090747 request, compare the timestamp parameter with the current time and check whether it falls within the last 3 seconds (you can adjust this window to suit the network environment).
3. If the timestamp parameter is within the last 3 seconds, this GET /some-page?success=true request is the result of the server redirect. If not, it is more likely the result of the user refreshing the page or opening a bookmark.
4. In the server code that handles GET /some-page, render different HTML according to the result of step 3: display the success message only when the current request is the result of the server redirect.
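A minimal sketch of steps 2-4 in an ASP.NET MVC-style action (the action name, view wiring, and the 3-second window are illustrative assumptions, not from the original post):
// GET /some-page?success=true&timestamp=1559859090747
public ActionResult SomePage(bool success = false, long timestamp = 0)
{
    // Treat the request as a fresh redirect only if the timestamp is within
    // the last 3 seconds; a refresh or bookmarked visit carries a stale value.
    long now = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
    bool justSubmitted = success && now - timestamp >= 0 && now - timestamp < 3000;

    ViewBag.ShowSuccessMessage = justSubmitted;  // the view renders the message conditionally
    return View();
}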
I was able to successfully use external authentication with Datazen via HttpWebRequest from code-behind in VB.NET, but I am unclear how to use this with an iframe or even a div. I'm thinking maybe the authorization cookie/token isn't following the iframe around? The Datazen content starts to load correctly, but then it redirects back to the login page as if it's not being authenticated. I'm not sure how to do that part; this stuff is pretty new to me and any help would be greatly appreciated!
Web page errors include:
-OPTIONS url send # jquery.min.js:19b.extend.ajax # jquery.min.js:19Viewer.Controls.List.ajax # Scripts?page=list:35Viewer.Controls.List.load # Scripts?page=list:35h.callback # Scripts?page=list:35
VM11664 about:srcdoc:1
XMLHttpRequest cannot load http://datazenserver.com/viewer/jsondata. Response for preflight has invalid HTTP status code 405Scripts?page=list:35
load(): Failed to load JSON data. V…r.C…s.List {version: "2.0", description: "KPI & dashboard list loader & controller", url: "/viewer/jsondata", index: "/viewer/", json: null…}(anonymous function) # Scripts?page=list:35c # jquery.min.js:4p.fireWith # jquery.min.js:4k # jquery.min.js:19r # jquery.min.js:19
Scripts?page=list:35
GET http://datazenserver.com/viewer/login 403 (Forbidden)(anonymous function) # Scripts?page=list:35c # jquery.min.js:4p.fireWith # jquery.min.js:4k # jquery.min.js:19r # jquery.min.js:19
' ''//////////////////////////////////
Dim myHttpWebRequest As HttpWebRequest = CType(WebRequest.Create("http://datazenserver.com/"), HttpWebRequest)
myHttpWebRequest.CookieContainer = New System.Net.CookieContainer()
Dim authInfo As String = Session("Email")
myHttpWebRequest.AllowAutoRedirect = False
myHttpWebRequest.Headers.Add("headerkey", authInfo)
myHttpWebRequest.Headers.Add("Access-Control-Allow-Origin", "*")
myHttpWebRequest.Headers.Add("Access-Control-Allow-Headers", "Accept, Content-Type, Origin")
myHttpWebRequest.Headers.Add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
Dim myHttpWebResponse As HttpWebResponse = CType(myHttpWebRequest.GetResponse(), HttpWebResponse)
Response.AppendHeader("Access-Control-Allow-Origin", "*")
' Create a new 'HttpWebRequest' Object to the mentioned URL.
' Assign the response object of 'HttpWebRequest' to a 'HttpWebResponse' variable.
Dim streamResponse As Stream = myHttpWebResponse.GetResponseStream()
Dim streamRead As New StreamReader(streamResponse)
frame1.Page.Response.AppendHeader("Access-Control-Allow-Origin", "*")
frame1.Page.Response.AppendHeader("headerkey", authInfo)
frame1.Attributes("srcdoc") = "<head><base href='http://datazenserver.com/viewer/' target='_blank'/></head>" & streamRead.ReadToEnd()
You might have to do more of this client-side, and I don't know whether you'll be able to because of security concerns.
External authentication in Datazen looks something like this:
User-Agent           | Proxy                 | Server
---------------------|-----------------------|------------------------------------
1. /viewer/home  --> | 2. Append header  --> | 3. Check cookie (not present)
                 <-- | 5. Forward        <-- | 4. Redirect to /viewer/login
6. /viewer/login --> | 7. Append header  --> | 8. Append cookie
                 <-- | 10. Forward       <-- | 9. Redirect to /viewer/home
11. /viewer/home --> | 12. Append header --> | 13. Check cookie (valid)
                 <-- | 15. Forward       <-- | 14. Give content
16. ................ Whatever the user wanted ..............................
So even though you're working off a proxy with a header, you're still getting a cookie back that it uses.
Now, that's just context.
My guess, from your description of the symptoms, is that myHttpWebResponse should have a cookie set (DATAZEN_AUTH_TOKEN, I believe), but it's essentially getting thrown out--you aren't using it anywhere.
You would need to tell your browser client to append that cookie to any subsequent (iframe-based) requests to the domain of your Datazen server, but I don't believe that's possible due to security restrictions. I don't know a whole lot about CORS, though, so there might be a way to permit it.
I don't know whether there's any good way to do what you're looking to do here. At best, I can maybe think of a start to a hack that would work, but I can't even find a good way to make that work, and you really wouldn't want to go there.
Essentially, if you're looking to embed Datazen in an iframe, I would shy away from external authentication. I'd shy away from it regardless, but especially there.
But, if you're absolutely sure you need it over something like ADFS, you'll need some way to get that cookie into your iframe requests.
The only way I can think to make this work would be to put everything on the same domain:
www.example.com
datazen.example.com (which is probably your proxy)
You could then set a cookie from your response that stores some encrypted (and likely expiring) form of Session("Email"), and passes it back down in your html.
That makes your iframe relatively simple, because you can just tell it to load the viewer home. Something to the effect of:
<iframe src="//datazen.example.com/viewer/home"></iframe>
In your proxy, you'll detect the cookie set by your web server, decrypt the email token, ensure it isn't expired, then set a header on the subsequent request onto the Datazen server.
This could be simplified at a couple places, but this should hold as true as possible to your original implementation, as long as you can mess with DNS settings.
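A rough sketch of that proxy step, assuming something ASP.NET-like sits in front of the Datazen server (the cookie name and the DecryptEmailToken/IsExpired helpers are hypothetical; only the "headerkey" header comes from your original code):
// Hypothetical proxy-side hook that runs before forwarding a request to Datazen.
void AddExternalAuthHeader(HttpRequest incoming, HttpWebRequest outgoing)
{
    HttpCookie cookie = incoming.Cookies["EMAIL_TOKEN"];    // set earlier by www.example.com
    if (cookie == null) return;                             // no token: Datazen will redirect to login

    string email = DecryptEmailToken(cookie.Value);         // hypothetical helper
    if (email == null || IsExpired(cookie.Value)) return;   // hypothetical expiry check

    // External authentication: pass the user identity in the configured header.
    outgoing.Headers.Add("headerkey", email);
}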
I suppose another version of this could involve passing a parameter to your proxy, and sharing some common encryption key. That would get you past having to be on the same domain.
So if you had something like:
var emailEncrypted = encrypt(Session("Email") + ":somesalt:" + DateTime.UtcNow.ToString("O"));
Then used whatever templating language you want to set your iframe up with:
<iframe src="//{{ customDomain }}/viewer/home?emailkey={{ emailEncrypted }}"></iframe>
Then your proxy detected that emailkey parameter, decrypted it, and checked for expiration, that could work.
Now you'd have a choice to make on how to handle this, because Datazen will give you a 302 to /viewer/login to get a cookie, and you need to make sure to pass the correct emailkey on through that.
What I would do, you could accept that emailkey parameter in your proxy, set a completely new cookie yourself, then watch for that cookie on subsequent requests.
Although at that point, it would probably be reasonable to switch your external authentication mode to just use cookies. That's probably a better version of this anyway, assuming this is the only place you use Datazen, and you'd be safe to change something so fundamental. That would substantially reduce your business logic.
But, you wouldn't have to. If you didn't want to change that, you could just check for the cookie, and turn it into a header.
You should do (1), but just for good measure, one thing I'm not sure on, is whether you can pass users directly to /viewer/login to get a cookie from Datazen. Normally you wouldn't, but it seems like you should be able to.
Assuming it works as expected, you could just swap that URL out for that. As far as I know (although I'd have to double-check this), the header is actually only necessary once, to set up the cookie. So if you did that, you should get the cookie, then not need the URL parameter anymore, so the forced navigation would be no concern.
You'll, of course, want to make sure you've got a good form of encryption there, and the expiration pattern is important. But you should be able to secure that if you do it right.
I ended up just grabbing the username and password fields and entering them in with JavaScript. But this piece helped me a ton. You have to make sure you set
document.domain = 'basedomain.com';
in JavaScript on both sites in order to access the iframe contents, or else you'll run into cross-domain issues.
I'm developing a webgame. As part of the game, you start out with a limited set of features, and you unlock more of them as you play.
For instance, you unlock /fields as part of step 3 in the tutorial. But what if you just navigate to /fields in the address bar?
I'm trying to work out what would be the best status code to respond with.
403 seems ideal since the user is forbidden from accessing the page until they unlock it.
404 also makes sense since the page technically "doesn't exist" until it is unlocked and also prevents users from being able to tell the difference between a page that doesn't exist and one that they just haven't unlocked yet.
But in both cases I've had some users report issues with the browser caching the 403/404 result and not letting them access the page even after unlocking it, unless they purge the cache entirely.
I'm wondering if I should keep using 403 or 404, or should I use an unused 4XX code such as 442 with a custom statusText, or even jokingly send HTTP/1.1 418 I'm A Teapot in response to a user poking around where they shouldn't be.
I need a good, solid reason why one option should be used over the others.
tl;dr: 409 Conflict would be an option, but you may still have problems with caching; in that case a cache-buster to force a reload will work.
Long explanation
Perhaps a 409 Conflict status code would make sense:
10.4.10 409 Conflict
The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required.
Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can't complete the request. In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type.
It would make sense, because the resource is only available after the user did the tutorial. Before that the resource is in an «invalid» state. And the user is able to resolve this conflict by completing the tutorial.
Later I investigated the case a little more and I discovered that the devil is in the detail. Let's read the specification for 403 Forbidden and 404 Not Found.
10.4.4 403 Forbidden
The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.
Important is the specification that «the request SHOULD NOT be repeated». A browser which never re-requests a 403 page might do the right thing. However, let's continue with 404:
10.4.5 404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
[omitted]
Now we have a problem! Why would your 404 pages be cached if the specification allows them to be temporary?
Perhaps in your setup you have caching configured not correctly for your 403 and 404 pages. If this is so, please consult this answer on StackOverflow. It gives a detailed answer about caching 4xx pages.
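If your error pages are indeed being cached, the essence of the fix is to send explicit no-cache headers along with the status code. A minimal, ASP.NET-flavoured sketch (your stack may differ; the header values are the usual belt-and-braces set, not something prescribed by the linked answer):
Response.StatusCode = 403;  // or 404, depending on which code you settle on
Response.AppendHeader("Cache-Control", "no-cache, no-store, must-revalidate");
Response.AppendHeader("Pragma", "no-cache");  // for HTTP/1.0 clients and proxies
Response.AppendHeader("Expires", "-1");       // an already-expired date for older browsers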
If you don't want to mess with caching headers, use a so-called cache-buster and pass the system time like this (assuming PHP as your web language):
<a href="/fields?<?php echo time(); ?>">
This produces URLs like /fields?1361948122, increasing every second. It's a variant of the solution proposed by Markus A.
I assume the querystring 1361948122 is ignored by your resource. If it is not, pass the cache-buster in a querystring parameter instead, for example t=1361948122 and make sure that the parameter t is not evaluated by your resource.
In terms of the intended purpose of the HTTP error codes, I would definitely go with 403 Forbidden, because the page does exist (404 is out), but the user is forbidden from accessing it for now (and this restriction is not due to a resource conflict, like concurrent modification, but due to the user's account status, i.e. 409 is out as well in my opinion). Another sensible option based on its intended purpose could have been 401, but as nalply already noted in his comment, this code triggers some, if not all, browsers to display a login dialog, as it implies that using the standard web-authentication mechanism can resolve the issue. So, it would definitely not be an option for you here.
Two things seem a little "misfitting" in the description of 403, so let me address them:
Authorization will not help ...: This only talks about the authorization mechanism inside the HTTP protocol and is meant to distinguish 403 from 401. This statement does not apply to any form of custom authorization or session state management.
... the request SHOULD NOT be repeated ...: A request must always be seen in the session context, so if the session context of the user changes (he unlocks a feature) and then he retries accessing the same resource, that is a different request, i.e. there is no violation of this suggestion.
Of course, you could also define your own error code, but since it probably won't be reserved in any official way, there is no guarantee that some browser manufacturer isn't going to intentionally or accidentally use exactly that code to trigger a specific (debugging) action. It's unlikely, but not disallowed.
418 could be OK, too, though. :)
Of course, if you would like to specifically obscure the potential availability of features, you could also decide to use 404 as that is the only way to not give a nosy user any hints.
Now, to your caching issue:
Neither one of these status codes (403, 404, 409, 418) should trigger the browser to cache the page against your will more than any other. The problem is that many browsers simply try to cache everything like crazy to be extra snappy; Opera is the worst here in my opinion. I've pulled my hair out many times over these things. It SHOULD be possible to work it all out with the correct header settings, but I've had situations where either the browser, the server, or some intermediate proxy decided to ignore them and break my page anyway.
The only sure-fire way that I have found so far that absolutely positively guarantees a reload is to add a dummy request parameter like /fields?t=29873, where 29873 is a number that is unique for every request you make within any possibly relevant time scales. On the server, of course, you can then simply ignore this parameter. Note that it is not enough to simply start at 1 when your user first opens your page and then count up for following requests, as browsers might keep the cache around across page-reloads.
I do my web-development in Java (both server and client-side using GWT) and I use this code to generate the dummy "numbers":
private static final char[] base64chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_.".toCharArray();

private static int tagIndex = 0;

/**
 * Generates a unique 6-character tag string that is guaranteed to not repeat
 * for about 400 days, if this function is, on average, not called more often
 * than twice every millisecond.
 *
 * @return the tag string
 */
public static String nowTag() {
    int tag = (int) ((System.currentTimeMillis() >>> 5)); // adjust
    char[] result = new char[6];
    result[5] = base64chars[(tagIndex++) & 63];
    result[4] = base64chars[tag & 63];
    tag >>>= 6;
    result[3] = base64chars[tag & 63];
    tag >>>= 6;
    result[2] = base64chars[tag & 63];
    tag >>>= 6;
    result[1] = base64chars[tag & 63];
    tag >>>= 6;
    result[0] = base64chars[tag & 63];
    return new String(result);
}
It uses the system's clock in combination with a counter to be able to provide up to about two guaranteed unique values every ms. You might not need this speed, so you can feel free to change the >>> 5 that I marked with "adjust" to fit your needs. If you increase it by 1, your rate goes down by a factor of two and your uniqueness time-span doubles. So, for example, if you put >>> 8 instead, you can generate about 1 value every 4 ms and the values should not repeat for 3200 days. Of course, this guarantee that the values will not repeat will go away if the user messes with the system clock. But since these values are not generated sequentially, it is still very unlikely that you will hit the same number twice. The code generates a 6-character text-string (base64) rather than a decimal number to keep the URLs as short as possible.
Hope this helps. :)
I feel there is no need to return an error code at all. Instead, just display a message like
You have to be Level XX to access this page or something funny like Come back when you grow up
with a 200 OK status itself, so there will be no caching problem and the objective is still achieved.
For some reason, IE6/7 is caching the AJAX call that returns a JSON result set.
My page makes the call, gets back a JSON result, and injects it into the page.
How can I force IE6/7 to make this call instead of using a cached return value?
You might want to add
Cache-Control: no-cache
to your HTTP response headers when you're serving the JSON, to tell the browser not to cache the response.
In ASP.NET (or ASP.NET MVC) you can do it like this:
Response.Headers.Add("Cache-Control", "no-cache");
You can change your settings in IE, but the problem most likely lies on your server. You can't go out and change all your users' browser settings, but if you want to at least check it in your own browser, go to Internet Options -> General (tab) -> Browsing history (section) -> Settings (button) -> "Every time I visit the webpage".
Make sure you set it back, though, at some point.
To fix it on the server, have a look at http://www.mnot.net/cache_docs/
Using curl (with Cygwin) for debugging is a great way to figure out what's actually being sent across the wire.
If cache-control doesn't work for you (see DrJokepu's answer), according to the spec the content from any URL with a query string should be non-cacheable, so you might append a pointless query parameter to your request URL. The value doesn't matter, but if you really want to be thorough you can append the epoch value, e.g.:
var url = "myrealurl?x=" + (new Date()).getTime();
But this is a hack; really this should be solved with proper caching headers at the server end.
In the controller action that returns a JsonResult, you need to set the response headers to avoid caching:
ControllerContext.HttpContext.Response.AddHeader("Cache-Control", "no-cache");
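In context, a minimal sketch of what such an action might look like (the action name and payload are illustrative):
public JsonResult GetLatestData()
{
    // Tell the browser (IE6/7 in particular) not to reuse a cached copy of this JSON.
    ControllerContext.HttpContext.Response.AddHeader("Cache-Control", "no-cache");

    var payload = new { updatedAt = DateTime.UtcNow };  // illustrative data
    return Json(payload, JsonRequestBehavior.AllowGet);
}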
I want to be able to run a little script that I can populate with a list of URLs and it pulls in and checks when the page was last updated? Has anyone done this?
I can only find a manual way of doing this using JavaScript by pasting this into the browser URL field
javascript:alert(document.lastModified)
Any ideas greatly received :)
The following will step through an array of URLs and display the last modified date or, if it's not present, the date of the server request.
string[] urls = { "http://boflynn.net", "http://slashdot.org" };

foreach (string url in urls)
{
    System.Net.HttpWebRequest req =
        (System.Net.HttpWebRequest) System.Net.WebRequest.Create(url);

    // Note: LastModified falls back to the time of the request when the
    // server does not send a Last-Modified header.
    using (System.Net.HttpWebResponse resp =
        (System.Net.HttpWebResponse) req.GetResponse())
    {
        Console.WriteLine("{0} - {1}", url, resp.LastModified);
    }
}
If you use urllib2 (or perhaps httplib might be better still) in a Python script, you can inspect the headers that are returned for the Last-Modified field.
It depends on what you mean by "last updated". Sure, there is the Last-Modified HTTP header, but it can be very misleading. For example, if the page is being served up dynamically, there is a good chance that this field will be the current time, even if the content of the page itself (the part useful to humans) has not been updated in a rather long time. This page itself is a good example of this phenomenon.
If you are truly interested in the last time the content was updated, then I don't have an immediate answer.