I have been struggling with this question for several days.
What is the age of an object stored in a cache? In particular, I am wondering about CloudFront. If I specify
Cache-Control: max-age=60, public
when I upload it at T = N, and then 10 minutes go by, is the official age of my entity 600 seconds? In other words, is the age of an entity NOW - T?
For example, if I push a file into S3 with an expiration of 30 minutes, and then I do not push a new version of it for 1 week, will CloudFront re-validate (a GET with If-Modified-Since) that entity on every request? Or will it re-validate the entity every 30 minutes and set the age back to 0 after each re-validation?
I have looked here for a definition of age, but I am not quite sure what the answer is:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
The max-age directive specifies the time, in seconds, after which the client should consider the content at the given URL stale.
This means that if you have a max-age of 10 minutes and fetch the resource at instant t, then if you need the content at, for instance, t + 9 minutes, it will not be fetched again. But its expiry time remains unchanged on the client side.
Which means that at any instant after t + 10 minutes, the content will be fetched again, and its expiry time recalculated (since it may have changed in the meantime).
At least, that is how clients should operate.
max-age=0 is the equivalent of "always revalidate that resource before using it again".
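For concreteness, here is a minimal sketch (Kotlin; CachedEntry and its fields are my own names, not any particular cache's API) of the freshness check a cache performs: a stored response is served from cache while its current age is below max-age, and a successful revalidation resets the age to zero.
import java.time.Duration
import java.time.Instant

// Hypothetical cache entry: when the response was stored and the max-age it carried.
data class CachedEntry(val storedAt: Instant, val maxAgeSeconds: Long)

fun isFresh(entry: CachedEntry, now: Instant): Boolean {
    // Age counts up from the moment the response was stored; a successful
    // revalidation (e.g. a 304 to an If-Modified-Since GET) resets it to zero.
    val currentAgeSeconds = Duration.between(entry.storedAt, now).seconds
    return currentAgeSeconds < entry.maxAgeSeconds
}

fun main() {
    // The question's numbers: max-age=60, checked 10 minutes (600 s) later.
    val entry = CachedEntry(storedAt = Instant.now().minusSeconds(600), maxAgeSeconds = 60)
    println(isFresh(entry, Instant.now())) // false: an age of 600 s exceeds max-age=60
}
With the question's numbers the entry is long stale, so a cache would revalidate it against the origin before serving it, and a successful revalidation would set the age back to 0.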
Related
How would you design a fungible token that expires in Corda (in both state design and flow design)? A token that cannot be used any more once its expiration date has passed, and that consequently becomes historic once the expiration is detected in a flow.
Any idea is welcome. Thank you!
Extend FungibleToken and add an expiration date field (of type Instant).
Extend the contract so that:
A rule expects the Move and Redeem commands to have a time-window included in the transaction.
The time-window's "until time" must be earlier than the expiration date.
Inside your flow, add a time-window to the Move or Redeem transaction. When the notary receives the transaction, it will accept or reject it depending on whether it arrived within the supplied time-window. If you set the time-window to "from now until one minute from now", you are telling the notary to accept the transaction only if it receives it within that minute, which means your flow must sign locally, verify, and collect signatures within that window (the one minute is just an example; you can set the time-window to whatever you want).
The notary is the time-stamping authority, so if it accepts your transaction, it attests that the transaction was received at a certain time (i.e. before the token's expiration date).
You can read about time-windows (an explanation, an exercise, and a solution) here.
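To make the contract rules concrete, here is a minimal sketch (Kotlin; ExpiringToken and ExpiringTokenContract are illustrative stand-ins for "FungibleToken plus an expiry field", not actual tokens-SDK classes):
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.requireThat
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction
import java.time.Instant

// Hypothetical state: a token-like state carrying the expiration date field.
data class ExpiringToken(
    val expiry: Instant,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

class ExpiringTokenContract : Contract {
    // Sketch of the Move/Redeem checks only; Issue is omitted for brevity.
    override fun verify(tx: LedgerTransaction) {
        val token = tx.inputsOfType<ExpiringToken>().single()
        // Rule 1: the transaction must carry a time-window.
        val timeWindow = tx.timeWindow
            ?: throw IllegalArgumentException("Move/Redeem must include a time-window")
        val until = timeWindow.untilTime
            ?: throw IllegalArgumentException("The time-window needs an until time")
        // Rule 2: the window must close before the token expires.
        requireThat {
            "The time-window must end before the expiration date" using (until < token.expiry)
        }
    }
}
In the flow, you would attach the window before signing, along the lines of builder.setTimeWindow(TimeWindow.between(Instant.now(), Instant.now().plusSeconds(60))).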
For staticCompressionIgnoreHitFrequency, you enter True to disable the behavior where a static file is compressed only if it is hit a certain number of times within a time period. Can anyone tell me the number of hits required and the time period within which those hits must occur?
See the system.webServer/serverRuntime section for the default values:
frequentHitThreshold for the number of hits
frequentHitTimePeriod for the period
The defaults are 2 hits in 10 seconds.
But rather than fiddling with these, you may want to just use the staticCompressionIgnoreHitFrequency parameter if you want to disable this behavior completely. That way static content is compressed and cached from the outset and won't switch between static and dynamic compression. This also means that the ETag in the response doesn't vary.
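For reference, the relevant bits of configuration look roughly like this (a sketch from memory, showing the defaults; staticCompressionIgnoreHitFrequency requires IIS 8.5 or later and lives in applicationHost.config):
<system.webServer>
  <!-- compress a static file only after 2 hits within 10 seconds (the defaults) -->
  <serverRuntime frequentHitThreshold="2" frequentHitTimePeriod="00:00:10" />
  <!-- IIS 8.5+: skip the hit-frequency check and always compress static content -->
  <httpCompression staticCompressionIgnoreHitFrequency="true" />
</system.webServer>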
I have a program that needs to do something exactly every hour. The catch is that the time needs to be relative to the remote server, which is not synchronised with a time server and is, in fact, about 6 seconds ahead (!). There is no way for me to change that server.
All I have is access to the HEAD response headers of the web server, which include a handy Date field (that's how I found out about the discrepancy).
Question: regardless of the language (I use nodeJS, but that's not the point), what would you do to calculate a precise offset between my server and the remote server?
I am especially worried about network latency: I have the following variables:
Local server time
Time when request was sent
Time when the response with the Date header arrived
Remote server time
However, the remote server time was generated when the server received the request -- something that might have taken up to 1 second. And, the time when the response arrived needs to take into account the time it took to receive it...
Right now I am compensating with half the round-trip time, i.e. (time response arrived - time request was sent) / 2. However, it feels lame.
Is there a better, established way to deal with this?
Hmm, I know this kind of problem, though I never had the limitation of not being able to change one of the two 'actors'. I would say the half-round-trip approximation, (time response arrived - time request was sent) / 2, feels OK. If you care more about it, you could experiment with the approximation in a 'benchmark' kind of way:
Don't make one synchronization request; make 10 in sequence, then eliminate the 3 lowest and the 3 highest offsets and average the remaining 4.
Or:
Don't make one synchronization request, but a burst of 10 from 10 different threads. This should, in theory, cancel out the client-side (local) time it takes to create the requests, and should block (if it blocks at all) on the server (remote) side. But this would involve some maths, and I think it's more trouble than it's worth.
P.S. The number 10 is arbitrary (and hopefully the remote server doesn't ban/block you for making too many requests :)
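Here is a minimal sketch of the whole thing (Kotlin for illustration, since the question says the language doesn't matter; the URL is a placeholder):
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Instant
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

// One sample: note local send/receive times around a HEAD request and parse the
// server's Date header, assuming the header was generated at the midpoint of the
// round trip (the same half-RTT estimate discussed above).
fun sampleOffsetMillis(client: HttpClient, uri: URI): Long {
    val request = HttpRequest.newBuilder(uri)
        .method("HEAD", HttpRequest.BodyPublishers.noBody())
        .build()
    val sent = Instant.now().toEpochMilli()
    val response = client.send(request, HttpResponse.BodyHandlers.discarding())
    val received = Instant.now().toEpochMilli()
    val date = response.headers().firstValue("Date").orElseThrow()
    val serverMillis = ZonedDateTime.parse(date, DateTimeFormatter.RFC_1123_DATE_TIME)
        .toInstant().toEpochMilli()
    return serverMillis - (sent + received) / 2
}

fun main() {
    val client = HttpClient.newHttpClient()
    val uri = URI.create("https://example.com/") // placeholder for the remote server
    // Several samples, median taken, to discard outliers as suggested above.
    val offsets = (1..10).map { sampleOffsetMillis(client, uri) }.sorted()
    println("Estimated clock offset: ${offsets[offsets.size / 2]} ms")
}
Bear in mind that the Date header only has one-second resolution, so even a perfect estimate carries up to about half a second of quantization error on top of the network noise.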
I'm using Amazon S3 to serve static assets for my website. I want browsers to cache these assets for as long as possible. What metadata headers should I include with my assets?
Cache-Control: max-age=???
Generally one year is advised as a standard max value. See RFC 2616:
To mark a response as "never expires," an origin server sends an Expires date approximately one year from the time the response is sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one year in the future.
Although that applies to the older Expires header, it makes sense to apply it to Cache-Control too in the absence of any explicit guidance from the standards. It's as long as you should generally need anyway, and picking an arbitrarily longer value could break some user agents. So:
Cache-Control: max-age=31536000
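For what it's worth, here is how you might set that header when uploading to S3 (a sketch assuming the AWS SDK for Java v2, written in Kotlin; the bucket and key names are made up):
import software.amazon.awssdk.core.sync.RequestBody
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.PutObjectRequest
import java.nio.file.Paths

fun main() {
    val s3 = S3Client.create()
    val request = PutObjectRequest.builder()
        .bucket("my-assets-bucket")               // hypothetical bucket
        .key("css/site.css")                      // hypothetical key
        .contentType("text/css")
        .cacheControl("public, max-age=31536000") // one year, per the answer above
        .build()
    s3.putObject(request, RequestBody.fromFile(Paths.get("site.css")))
}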
Consider not storing it for "as long as possible," and instead settling for as long as reasonable. For instance, it's unlikely you'd need to cache it for longer than, say, 10 years... am I right?
The RFC discusses max-age here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3
Eric Lawrence says that prior to IE9, Internet Explorer would treat as stale any resource with a Cache-Control: max-age value over 2147483648 (2^31) seconds, approximately 68 years (http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx).
Other user agents will of course vary, so try to choose a number that is unlikely (rather than likely!) to cause an overflow. A max-age greater than 31536000 (one year) makes little sense, and informally this is considered a reasonable maximum value.
The people who created the recommendation of a maximum 1-year cache lifetime did not think it through properly.
First of all, if a visitor is being served an outdated cached file, then what benefit is there in having it suddenly load a fresh version after exactly 1 year? If a file has a 1-year TTL, then from a functional perspective it obviously means the file is not intended to change at all.
So why would one need more than 1 year?
1) Why not? It doesn't serve any purpose to tell the visitor's browser "hey, this file is 1 year old, it might be an idea to check if it has been updated".
2) CDN services. Most content delivery networks use the cache headers to decide how long to serve a file efficiently from the edge. With a 1-year cache control, the CDN will at some point start re-requesting unchanged files from the origin server, and the edge cache will have to be re-populated, causing slower loads for clients and unnecessary calls to the origin.
What is the point of a 1-year maximum? Which browsers will choke on a value set higher than 31536000?
I use an asp.net [WebMethod] to push a .net object back to the Ajax call on a browser.
One of the properties of the object is of a DateTime type.
When it arrives at the browser, the time is seven hours before the time that is stored in SQL Server.
Okay, so my browser is in Peru (GMT-5) and the server is in Germany (currently GMT+2), that's where the 7 hours come from.
As a fix, I send the client's UTC offset with the Ajax request
d = new Date();
d.getTimezoneOffset();
then on the server I figure out the offset there:
// get the local time zone info
TimeZoneInfo tz = TimeZoneInfo.Local;
// get the base offset in whole hours (note: this drops half-hour zones such as UTC+5:30)
int offset = tz.BaseUtcOffset.Hours;
// add one hour if we are currently in daylight saving time
if (tz.IsDaylightSavingTime(DateTime.Now))
{
    offset++;
}
Now I can fix the time field in my object before it is sent to the browser.
My real question is, how does the serializer know about the 7 hours?
The HTTP request doesn't include any time information.
Do I ask too much if I want the exact time as stored in the database?
Update:
Here's an example, the date in the database is: 2009-Oct-15 22:00
There is no TimeZone information attached to that.
When I call my WebMethod on my dev machine, where client and server are obviously in the same time zone, the JSON from the server is:
{"d":{"TheDate":"\/Date(1255662000000)\/"}}
The JSON from the remote server in Germany is:
{"d":{"TheDate":"\/Date(1255636800000)\/"}}
There is a difference of 7 hours in the JSON as seen in Firebug. At this point there is no JavaScript involved yet.
One idea I had is that asp.net attaches a TimeZone to a session but that doesn't seem to be the case.
To answer the OP's question: the timezone information is implicit in the conversion to the JSON /Date()/ format, because it is relative to UTC. For example, on my server here in NY, if I return DateTime.Parse("1/1/1970"), it comes back as /Date(18000000)/, i.e. 5 hours, which is the number of milliseconds between that local time and 1/1/1970 UTC, since the conversion says, "hey, it's 1/1/1970 00:00:00 here in NY, so it must be 1/1/1970 05:00:00 back over in Greenwich" (New York is UTC-5 in January).
Now, if a client in California received this date notation and simply instantiated a JavaScript Date from the milliseconds (e.g. new Date(18000000)), the browser would say, "hey, here is a date object which I know is relative to UTC, and I know I am 8 hours behind Greenwich, so it must be 12/31/1969 21:00:00."
So this is a pretty clever way to deal with time, so that it is "correct" in all time zones, and such that all localization is handled by the user's browser. Unfortunately, we are often dealing with just a raw date that we don't want to be time zone relative (say, a birthday). If we need to keep the date the same, there are two ways that I know of.
The first, as you have done above, is to adjust the time (although I think you need to do it at the browser, too, if you want it to work in any time zone).
The other way would be to return it as a string, formatted already. This is the method I normally employ, but I am normally working with US clients (so I can return MM/DD/YYYY, for example, and they don't get mad at me for being American).
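To see the UTC-relative encoding concretely, here is a small check (Kotlin; the millisecond values are the two from the question above):
import java.time.Instant

fun main() {
    // The two values from the question's JSON payloads:
    val devMachine = Instant.ofEpochMilli(1255662000000L)   // /Date(1255662000000)/
    val germanServer = Instant.ofEpochMilli(1255636800000L) // /Date(1255636800000)/
    println(devMachine)    // 2009-10-16T03:00:00Z, i.e. 2009-Oct-15 22:00 at GMT-5
    println(germanServer)  // 2009-10-15T20:00:00Z, i.e. 2009-Oct-15 22:00 at GMT+2
    // Same wall-clock database value, serialized relative to two different zones:
    println((devMachine.toEpochMilli() - germanServer.toEpochMilli()) / 3_600_000) // 7
}
Both payloads decode to the same local wall-clock value, 2009-Oct-15 22:00; the 7-hour gap is exactly the difference between GMT-5 and GMT+2.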
To avoid weird bugs and having to deal with these kinds of issues, you should always deal in UTC and convert to local time at the last possible moment.
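A minimal illustration of that discipline (Kotlin; the zones are chosen to match the question's scenario):
import java.time.Instant
import java.time.ZoneId

fun main() {
    // Store and transmit UTC; attach a zone only when rendering for a user.
    val stored = Instant.parse("2009-10-15T20:00:00Z")
    println(stored.atZone(ZoneId.of("America/Lima")))  // 2009-10-15T15:00-05:00[America/Lima]
    println(stored.atZone(ZoneId.of("Europe/Berlin"))) // 2009-10-15T22:00+02:00[Europe/Berlin]
}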
How are you examining the DateTime once it arrives in the browser? Are you sure the raw serialized format is not including the offset as part of the DateTime object, in which case it could be reconstituted at the other end in local time?
I've just run into the same issue. It seems that the JSON serializer returns dates in the system's default time zone (essentially disregarding whatever time zone the DateTime is in).
e.g.:
On my dev machine, it returns the dates in my machine's time zone (Pacific); on our production machine, the JSON dates are in UTC, which is the time zone the server is set to.
To solve this, in our case we had to manually add the hours offset, minutes offset, and daylight saving time offset on the client via JavaScript.