For a GET request whose response is consistently the same (it has a query parameter too), I'm returning this response header:
Cache-Control: max-age=3600, private
I'm creating a URLSession with this configuration, essentially ensuring my cache is large. The URLSession persists for the life of the app, and all requests go through it.
var configuration = URLSessionConfiguration.default
let cache = URLCache(memoryCapacity: 500_000_000, diskCapacity: 1_000_000_000)
configuration.urlCache = cache
configuration.requestCachePolicy = .useProtocolCachePolicy
I create a data task with the completion handler:
let task = session.dataTask(with: request) { data, response, error in
//... my response handling code here
}
task.resume()
There isn't anything complicated or unusual in what I'm doing. I'm now trying to optimise some calls with a caching policy.
So I have 2 issues:
Requests whose responses carry a no-cache directive are still being saved into the URLCache, which I've confirmed by inspecting the URLCache. Why is it caching them when the response explicitly states no-cache and I've told the session to use the protocol cache policy?
On the request where I do want to use the cache, when I call session.dataTask on subsequent occasions, it never serves from the cache. It always hits the server again, which I can confirm from the server logs.
What piece of this am I missing here?
Related
Considering that the server responds with the following headers:
Cache-Control: public
Expires: <EXPIRATION DATE>
ETag: <HASH VALUE>
Both <EXPIRATION DATE> and the <HASH VALUE> are not changed if the underlying resource is not actually updated.
Am I correct to expect the following:
all intermediate proxy-servers (including the CDN) will consider this resource public and safe to cache.
all intermediate proxy-servers (including the CDN) as well as browsers will consider this resource fresh until the <EXPIRATION DATE> and will return it from the cache without accessing the network. However, after the <EXPIRATION DATE> they will all use the HTTP validation mechanism with every request to check whether the resource is outdated.
So, if the resource is updated after the <EXPIRATION DATE> I can safely expect that all the clients will receive the fresh version of the resource with the next request (because HTTP validation will fail due to the ETag's change)?
I'm interested both from the standards perspective (RFCs) as well as from the real life perspective (e.g. known browser and proxy quirks).
I would like my resource to be fresh for e.g. one day from the time the file is actually updated on the server, and to always be returned from the cache. However, after one day, I would like all clients to receive a fresh copy only if the file has actually changed (using the HTTP validation mechanism).
As Kevin's comment says:
in terms of the standard your analysis is correct
It's difficult to answer in terms of "known browser and proxy quirks" without knowing your engineering requirements. It sounds like you may be serving static content; consider services like S3 and CloudFront.
For this design, from your expectations:
browsers will consider this resource fresh until the <EXPIRATION DATE> and will return it from the cache without accessing the network
Most browsers will still reach out to the network when a resource is directly referenced, even when it's still fresh in their cache. That should be a conditional request, but it's still network traffic. (The immutable Cache-Control extension may help.)
Any cache may evict the resource; for one CDN:
If a file in an edge location isn't frequently requested, CloudFront might evict the file
If your intention is to reduce load on your origin server, it's a good strategy. You're making correct use of Expires, Cache-Control: public and ETag, assuming you're also handling conditional requests correctly. In practice, you should:
be ready for browsers to make more than a single request in a 24 hour period
be ready to tune your CDN and confirm that it's respecting those headers, and that all requests lead to the same cache key
expect more than a single request per day to your origin server
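The validation flow described above can be sketched server-side. This is a minimal illustration in plain JavaScript, not tied to any particular framework; the function name, resource object, and ETag values are made up for demonstration:

```javascript
// Decide whether a request can be answered with 304 Not Modified.
// `requestHeaders` and `resource` are plain objects (a sketch, not a real server).
function respondToConditionalGet(requestHeaders, resource) {
  const clientEtag = requestHeaders['if-none-match'];
  if (clientEtag && clientEtag === resource.etag) {
    // Validation succeeded: the client's cached copy is still good.
    return { status: 304, body: null };
  }
  // No validator sent, or validation failed (the resource changed):
  // return the full body with a fresh ETag for the next validation.
  return { status: 200, body: resource.body, headers: { ETag: resource.etag } };
}

const resource = { etag: '"abc123"', body: '{"hello":"world"}' };

// First request: no validator, so the full 200 response is sent.
console.log(respondToConditionalGet({}, resource).status); // 200

// Revalidation after expiry: the ETag still matches, so only a 304 goes out.
console.log(respondToConditionalGet({ 'if-none-match': '"abc123"' }, resource).status); // 304
```

This is exactly why an unchanged ETag after the expiration date keeps origin traffic cheap (a 304 with no body), while a changed ETag makes validation fail and delivers the fresh copy.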
I created a login front end and finished it.
As usual, my go-to for AJAX was Axios. My code is as follows:
const baseUrl = 'http://localhost:5000/project/us-central1/api';
Axios.post(
  `${baseUrl}/v1/user/login`,
  { ...data },
  {
    headers: {
      Authorization: 'Basic auth...'
    }
  },
).then(r => console.log(r)).catch(e => console.log(e));
Now when I try to send a request to my local Firebase Cloud Function,
I get a 400 Bad Request.
After inspecting the request, I wondered why it wasn't sending any preflight request, which (to the best of my knowledge) it should. Instead I saw a header named Sec-Fetch-Mode. I've searched around, but everything I found is a bit abstract, and I can't figure out why my request still fails.
Is there anything I'm missing in my Axios config?
My FE is running on a VS Code plugin named Live Server (http://127.0.0.1:5500).
Also, my Firebase Cloud Function has CORS enabled:
// cloud function express app
cors({
  origin: true
})
Any insights would be very helpful.
The OPTIONS request is actually being sent, because you are sending a cross-origin request with an Authorization header, which makes it a non-simple request. It doesn't show in the developer tools because of a feature/bug in Chrome 76 and 77. See Chrome not showing OPTIONS requests in Network tab for more information.
The preflight request is a mechanism that allows cross-origin requests to be denied on the browser side if the server is not CORS-aware (e.g. old and unmaintained), or if it explicitly wants to deny cross-origin requests (in both cases, the server won't set the Access-Control-Allow-Origin header). What CORS does could be done on the server side by checking the Origin header, but CORS protects the user at the browser level. It blocks disallowed cross-origin requests before they are even sent, thus reducing network traffic and server load, and preventing old servers from receiving any cross-origin request by default.
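To make the mechanism concrete, here is a sketch of what a CORS-aware server returns for a preflight. This is an illustration in plain JavaScript, not the actual cors() middleware; the function name and header list are examples:

```javascript
// Answer an OPTIONS preflight: only echo the CORS headers back when the
// Origin is allowed. Without Access-Control-Allow-Origin in the response,
// the browser blocks the actual cross-origin request.
function preflightResponse(originHeader, allowedOrigins) {
  if (!allowedOrigins.includes(originHeader)) {
    return { status: 204, headers: {} }; // no CORS headers: browser will block
  }
  return {
    status: 204,
    headers: {
      'Access-Control-Allow-Origin': originHeader,
      'Access-Control-Allow-Methods': 'POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Authorization, Content-Type',
    },
  };
}
```

With origin: true, the cors() middleware effectively reflects whatever Origin it receives, which is the permissive version of this check.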
On the other hand, Sec-Fetch-Mode is one of the Fetch metadata headers (Sec-Fetch-Dest, Sec-Fetch-Mode, Sec-Fetch-Site and Sec-Fetch-User). These headers are meant to inform the server about the context in which the request has been sent. Based on this extra information, the server is then able to determine if the request looks legitimate, or simply deny it. They exist to help HTTP servers mitigate certain types of attacks, and are not related to CORS.
For example, the good old <img src="https://mybank.com/giveMoney?amount=9999999&to=evil#attacker.com"> attack could be detected on the server side because Sec-Fetch-Dest would be set to "image" (this is just a simple example, implying that the server exposes GET endpoints with unsafe cookies for money operations, which is obviously not the case in real life).
In conclusion, fetch metadata headers are not designed to replace preflight requests, but rather to coexist with them, since they fulfill different needs. And the 400 error likely has nothing to do with either of these; more likely, the request does not comply with the endpoint's specification.
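The kind of server-side check described above could look like this. It is only a sketch: the blocking policy, header values, and path are invented for illustration, not a recommendation:

```javascript
// Deny suspicious requests using Fetch metadata headers.
// Example policy: block cross-site image loads of a sensitive endpoint.
function shouldBlock(headers, path) {
  const site = headers['sec-fetch-site']; // e.g. 'cross-site', 'same-origin'
  const dest = headers['sec-fetch-dest']; // e.g. 'image', 'document'
  const sensitive = path.startsWith('/giveMoney');
  return sensitive && site === 'cross-site' && dest === 'image';
}

// A cross-site <img> pointed at the sensitive endpoint gets rejected;
// a same-origin request to the same path does not.
console.log(shouldBlock({ 'sec-fetch-site': 'cross-site', 'sec-fetch-dest': 'image' }, '/giveMoney')); // true
console.log(shouldBlock({ 'sec-fetch-site': 'same-origin', 'sec-fetch-dest': 'image' }, '/giveMoney')); // false
```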
You are missing a dot on your spread operator, this is the correct syntax:
{ ...data }
Note the three dots before “data”.
Please see the use of spread operators with objects here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax
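For reference, a quick illustration of what the object spread does (the property names here are made up):

```javascript
const data = { email: 'user@example.com', password: 'hunter2' };

// { ...data } creates a shallow copy: every own enumerable property
// of `data` is copied into the new object literal.
const payload = { ...data };

console.log(payload.email);    // 'user@example.com'
console.log(payload === data); // false: it is a new object
```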
What is the best way to implement the following scenario in a React Native app?
Make an HTTP request to the server, get a JSON response and an ETag header.
Save this JSON response in a way that will persist even after the app is restarted by the user.
Whenever this HTTP request is repeated, send an If-None-Match header.
When you get a "Not Modified" response, use the version in the persisted cache.
When you get a "Successful" response (meaning the response has changed), invalidate the persisted cache, save the new response.
Does React Native have a component that does these things out of the box? If not, what is the most common way people use to handle this?
React Native's fetch() API follows the HTTP caching spec and provides this feature: when you hit a 304, the old 200 response is found in the cache and reused.
Details:
https://github.com/heroku/react-refetch/issues/142
As answered at: https://stackoverflow.com/a/51905151
React Native's fetch API bridges to NSURLSession on iOS and okhttp3 on Android. Both of these libraries strictly follow the HTTP caching spec. The caching behavior will depend primarily on the Cache-Control and Expires headers in the HTTP response. Each of these libraries has its own configuration you can adjust, for example to control the cache size or to disable caching.
And this: How to use NSURLSession to determine if resource has changed?
The caching provided by NSURLSession via NSURLCache is transparent, meaning when you request a previously cached resource NSURLSession will call the completion handlers/delegates as if a 200 response occurred.
If the cached response has expired, NSURLSession will send a new request to the origin server, but will include the If-Modified-Since and If-None-Match headers, using the Last-Modified and ETag entity headers from the cached (though expired) result; this behavior is built in, and you don't have to do anything besides enable caching. If the origin server returns a 304 (Not Modified), NSURLSession will transform this into a 200 response for the application (making it look like you fetched a new copy of the resource, even though it was still served from the cache).
Oof. It's been over a year. I assume you know by now that this is a resounding "no," right? You'll have to parse the response headers to grab the ETag and store it on the device (you're not using the browser), then add the header to subsequent requests after retrieving it from your storage mechanism of choice.
I just found this because I was looking to see if anybody had done this in React, let alone React Native, and I'm not seeing anything.
Whelp, time to roll up my sleeves and invent this thing...
Okay, this is my current solution, not production-tested yet. I'd love your feedback, Googlers.
I use Axios, but if you don't, you can still implement this around whatever wrapper you have around fetch (unless you use native fetch!).
import api from 'your api wrapper.js'
api.etags = new Map(); // Map, not Set: we need key/value lookups
api.cache = new Map();
api.addRequestTransform(request => {
  const etag = api.etags.get(request.url);
  if (etag) {
    // The request header is If-None-Match;
    // HTTP_IF_NONE_MATCH is the server-side CGI variable name.
    request.headers['If-None-Match'] = etag;
  }
});
// or whatever you use to wrap your HTTP client
api.addResponseTransform(response => {
  if (
    response.status === 304 &&
    response.headers &&
    response.headers.etag &&
    api.cache.has(response.headers.etag)
  ) {
    console.log('%cOVERRIDING 304', 'color:red;font-size:22px;');
    response.status = 200;
    response.data = api.cache.get(response.headers.etag);
  } else if (response.ok && response.headers && response.headers.etag) {
    api.cache.set(response.headers.etag, response.data);
    api.etags.set(response.config.url, response.headers.etag);
  }
});
What we are doing here is saving the response body into api.cache and the ETags into api.etags, then sending the ETag with the request every time.
We could upgrade this to also remember the correct status code, or save the ETags to disk; I don't know. What do you think? :)
I'm using okhttp and Retrofit to call a REST service. The data that is returned from that service is stored in my Android app inside an SQLite database.
Whenever I call the REST api, if the data hasn't changed (determined by either ETag or Last-Modified header) I want to have the Retrofit callback do nothing (data in DB is ok). Otherwise I want to download the updated JSON from the REST service and update the database (via the onSuccess method of my callback).
The okhttp examples on caching all set up disk caches for the responses; I just need to cache/store the ETag/Last-Modified time of each request (and not the whole response).
Should I be doing this through a custom Cache implementation that I pass to okhttp, or is there a better interface I should be using with okhttp or Retrofit?
Once I have the implementation setup do I just need to handle the 304 "errors" in my onFailure callback and do nothing?
To know whether you've got a 304 as a response, you can catch it in the onResponse callback as follows:
okhttp3.Response networkResponse = response.raw().networkResponse();
if (networkResponse != null && networkResponse.code() == 304) {
// Do what you want to do
}
Note that networkResponse() is null when the response was served entirely from the cache, hence the null check. At least, this is how it works when you are using Retrofit 2 and OkHttp 3. I'm not sure about earlier versions, but I'd guess it's much the same. You can always look for a 304 response by setting breakpoints in the response handling.
Is there an HTTP status code to instruct a client to perform the same request again?
I am facing a situation where the server has to "wait" for a lock to disappear when processing a request. But by the time the lock disappears, the request might be close to its timeout limit. So instead, once the lock clears, I would like to instruct the client to just perform the same request again.
The best I can come up with is an HTTP 307 to the same location, but I'm worried that some browsers might not buy into this (redirect loop detection).
The correct response, when a server is unable to handle a request, is 503 Service Unavailable. When the condition is temporary, as it is in your case, you can set the Retry-After header to let the client know how long it should wait before trying again.
However, this will not force the browser to perform the request again - this is something you would need to handle yourself in javascript. For example, here is how you might perform a retrying ajax POST request in jquery:
function postData() {
$.ajax({
type: 'POST',
url: '503.php',
success: function() {
/*
Do whatever you need to do here when successful.
*/
},
statusCode: {
503: function(jqXHR) {
var retryAfter = jqXHR.getResponseHeader('Retry-After');
retryAfter = parseInt(retryAfter, 10);
if (!retryAfter) retryAfter = 5;
setTimeout(postData, retryAfter * 1000);
}
}
});
}
Note that the above code only supports a Retry-After header where the retry delay is specified in seconds. If you want to support dates, that would require a little more work. In production code I would also recommend a counter of some sort to make sure you don't keep retrying forever.
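Supporting both forms of Retry-After (delay-seconds or an HTTP date) could look like this. It's a sketch; the 5-second fallback is an arbitrary choice, not part of any spec:

```javascript
// Parse a Retry-After header value into a delay in milliseconds.
// Per the HTTP spec the value is either a number of seconds or an HTTP date.
function retryDelayMs(retryAfter, now = Date.now()) {
  if (!retryAfter) return 5000; // arbitrary fallback: 5 seconds
  // parseInt safely distinguishes the two forms because HTTP dates
  // begin with a day name ("Wed, ..."), which parses to NaN.
  const seconds = parseInt(retryAfter, 10);
  if (!isNaN(seconds)) return seconds * 1000;
  const date = Date.parse(retryAfter);
  if (!isNaN(date)) return Math.max(0, date - now);
  return 5000; // unparseable header: fall back
}

console.log(retryDelayMs('120')); // 120000
```

You would then feed the result straight into setTimeout, as in the jQuery example above.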
As for using a 307 status code to repeat the request automatically, I don't think that is a good idea. Even if you add a retry parameter to get around the browser loop detection (which feels like a horrible hack), it's still not going to work on a POST request. From RFC2616:
If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user.
While some browsers are known to ignore this requirement, it's definitely not correct, and isn't something you would want to rely on.
And if you're not using a POST request, you almost certainly should be. Remember that a GET request should not have any side effects, and by default the response will be cached. From the description of your problem, it sounds very much like your request is likely to be doing something that has side-effects.
Use the 307 redirect, but add a retry counter:
http://path/to/server?retry=3
This will make the URL different on each retry, preventing loop detection. And the server could check for retry hitting a limit, and abort with an error when that happens, so the user doesn't wait forever.
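The retry-counter idea can be sketched as a small helper on the server side (the URL, parameter name, and limit here are just examples):

```javascript
// Build the next redirect target, bumping the retry counter.
// Returns null once the limit is reached, so the server can abort
// with an error instead of redirecting forever.
function nextRetryUrl(urlString, maxRetries = 3) {
  const url = new URL(urlString);
  const retry = parseInt(url.searchParams.get('retry') || '0', 10);
  if (retry >= maxRetries) return null;
  url.searchParams.set('retry', String(retry + 1));
  return url.toString();
}

console.log(nextRetryUrl('http://example.com/server?retry=2')); // 'http://example.com/server?retry=3'
console.log(nextRetryUrl('http://example.com/server?retry=3')); // null: give up
```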