I'm new to Cache-Control header implementation and I need someone to point out any mistakes and/or misunderstandings I have about the effects of cache control on Firebase Cloud Functions.
**My understanding & expectation of Cache-Control on Firebase Functions:**
When the Cache-Control header has been successfully set using the Express response object (confirmed by checking Chrome's Network tab), then regardless of whether it is on localhost or the production server, the Firebase HTTPS function (not a callable function) should not be invoked again after the first load until the cache expires.
Am I right? But after a few rounds of testing, it seems like my cloud function on localhost still consistently gets invoked (confirmed by server console logging) regardless of how many times I refresh my web browser. Below are my current HTTP headers:
**General:**
Request URL: http://localhost:5005/otk-web-solutions?id=B0Y0jp2x83WVYzWrpg5y
Request Method: GET
Status Code: 304 Not Modified
Remote Address: 127.0.0.1:5005
Referrer Policy: strict-origin-when-cross-origin
**Response Headers:**
Access-Control-Allow-Origin: *
cache-control: public, max-age=432000, s-maxage=432000
content-length: 9688
content-type: text/html; charset=utf-8
date: Mon, 05 Apr 2021 11:52:20 GMT
etag: W/"25d8-TxL0Q+ujhzDjys8IJ1mLigY7jT8"
vary: Origin, Accept-Encoding, Authorization, Cookie
x-powered-by: Express
**Request Headers:**
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en;q=0.9,en-US;q=0.8,zh;q=0.7
Cache-Control: max-age=0
Connection: keep-alive
Cookie: _ga=GA1.1.816734993.1603107580; _gid=GA1.1.223745218.1617606982; __atuvc=20%7C12%2C15%7C13%2C23%7C14; __atuvs=606aec5f76521aab00a
DNT: 1
Host: localhost:5005
If-None-Match: W/"25d8-TxL0Q+ujhzDjys8IJ1mLigY7jT8"
sec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"
sec-ch-ua-mobile: ?0
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
**Query String Parameters:**
id: B0Y0jp2x83WVYzWrpg5y
From the Firebase documentation:

You can, though, configure caching behavior for dynamic content. For example, if a function generates new content only periodically, you can speed up your app by caching the generated content for at least a short period of time.

You can also potentially reduce function execution costs because the content is served from the CDN rather than via a triggered function.
Could it be that the Cache-Control header has no effect on localhost and only takes effect on the Firebase CDN, meaning caching will only work once we've deployed to the production server and responses are served from the cloud CDN? Is there a right way to set up a test to see how effective the Cache-Control header is at saving Firebase Cloud Functions execution costs?
Please advise, thanks a lot!
From my understanding of the documentation, and after checking your test, you will only be able to save costs if the content is served from the CDN.
Even if your request is answered with a 304 Not Modified status code, you're still making the request and invoking the function as long as you're not going through the Firebase Hosting CDN.
So, to test whether you can save costs by not invoking the function many times, you should set up Firebase Hosting and run that same test to see whether a response served from the CDN still invokes the function.
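As a rough sketch of what that test could look like (assuming the v1 `firebase-functions` API and a function exposed through Firebase Hosting; this is not the poster's actual code):

```js
// functions/index.js: minimal sketch of a cacheable HTTPS function.
const functions = require('firebase-functions');

exports.app = functions.https.onRequest((req, res) => {
  // This log line is the invocation marker: once the CDN has cached
  // the response, repeat requests within the TTL should not print it.
  console.log('Function invoked');

  // s-maxage is the directive the Firebase Hosting CDN honors;
  // max-age governs the browser cache.
  res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
  res.status(200).send(`<h1>Generated at ${new Date().toISOString()}</h1>`);
});
```

With a Hosting rewrite such as `"rewrites": [{ "source": "**", "function": "app" }]` in firebase.json, deploy and hit the Hosting URL (not the raw function URL, and not the local emulator): the log line should appear only on CDN cache misses.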
Related
I have some React code that renders content dynamically via React.createElement. As such, CSS is applied via an object. Elements in that dynamic generation can have a background image pointing to a public AWS S3 bucket.
It seems that every time my components re-render, the background images are fetched again from S3. This delays the page render. I have the S3 Cache-Control metadata set on all the objects. Here are the request and response headers for a background image load:
Response headers:
Accept-Ranges: bytes
Cache-Control: public, max-age=604800
Content-Length: 52532
Content-Type: application/octet-stream
Date: Sun, 06 Feb 2022 05:57:32 GMT
ETag: "f29655808a5f80627d9ea7f44058a5e3"
Last-Modified: Sun, 06 Feb 2022 05:55:10 GMT
Server: AmazonS3
x-amz-meta-filetype: IMAGE
Request headers:
Accept: image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,hi;q=0.8
Cache-Control: no-cache
Connection: keep-alive
Host: <bucket-name>s3.amazonaws.com
Pragma: no-cache
Referer: https://<my-domain>.com/
sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: cross-site
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36
I can see in the Network tab that the images are being loaded multiple times, and it also shows data transfers happening every time. What am I doing wrong here? Can someone please help me find the root cause? Thanks.
Could it be a forgotten "Disable cache" option selected in the Network tab of the dev tools? Because it seems the server responds with the correct cache headers.
If the images are optimized and not huge, using a base64 data URL is a good solution.
const getBase64FromUrl = async (url) => {
  // Download the image and read the body as a Blob.
  const data = await fetch(url);
  const blob = await data.blob();
  // Convert the Blob into a base64-encoded data URL.
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.readAsDataURL(blob);
    reader.onloadend = () => resolve(reader.result);
    reader.onerror = reject;
  });
};
const image = await getBase64FromUrl('the image url'); // note the await: the helper returns a promise
While creating the element you can then use the resolved value in the style object (React style objects use camelCase):

backgroundImage: `url(${image})`
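For the re-render symptom specifically, here is a hypothetical sketch of wiring this into a React.createElement setup (component and prop names are made up; getBase64FromUrl is the helper above): resolve the data URL once and keep it in state, so re-renders reuse the stored string instead of refetching.

```js
import React, { useEffect, useState } from 'react';

function BackgroundCard({ imageUrl }) {
  const [bg, setBg] = useState(null);

  useEffect(() => {
    // Runs once per URL; later re-renders reuse the stored data URL.
    getBase64FromUrl(imageUrl).then(setBg);
  }, [imageUrl]);

  return React.createElement('div', {
    style: bg ? { backgroundImage: `url(${bg})` } : undefined,
  });
}
```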
In addition, we rarely serve from S3 directly; you should probably use CloudFront as a proxy (a sketch follows this list) to:

- reduce GET requests
- reduce bandwidth charges
- cache at the CDN
- get better control of cache headers
- hide your real S3 URL
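If it helps, a CloudFront-in-front-of-S3 setup can be sketched with the AWS CDK (v2). This is only an illustration under those assumptions, with made-up construct names, not a drop-in config:

```js
const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const cloudfront = require('aws-cdk-lib/aws-cloudfront');
const origins = require('aws-cdk-lib/aws-cloudfront-origins');

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ImageCdnStack');

// In practice you would import the existing public bucket
// rather than create a new one.
const bucket = new s3.Bucket(stack, 'ImagesBucket');

new cloudfront.Distribution(stack, 'ImagesCdn', {
  defaultBehavior: {
    origin: new origins.S3Origin(bucket),
    // Caches at the edge while respecting the objects' Cache-Control metadata.
    cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
  },
});
```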
The reason you're seeing a network request is probably that you're sending the Cache-Control: no-cache header in your request.
As seen here:
The no-cache response directive indicates that the response can be stored in caches, but the response must be validated with the origin server before each reuse, even when the cache is disconnected from the origin server.

Cache-Control: no-cache

If you want caches to always check for content updates while reusing stored content, no-cache is the directive to use. It does this by requiring caches to revalidate each request with the origin server.

Note that no-cache does not mean "don't cache". no-cache allows caches to store a response but requires them to revalidate it before reuse. If the sense of "don't cache" that you want is actually "don't store", then no-store is the directive to use.
See here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#response_directives
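The distinction is easy to see from script with the Fetch API's cache option (placeholder URL; run in a module or the browser console, where top-level await works):

```js
// 'no-cache': a stored copy may be reused, but only after revalidating
// with the origin (the conditional request you see in the Network tab).
const revalidated = await fetch('https://example.com/bg.png', { cache: 'no-cache' });

// 'force-cache': reuse any stored copy without revalidating; no request
// goes out at all when the cache already has the resource.
const reused = await fetch('https://example.com/bg.png', { cache: 'force-cache' });
```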
Here is what the full request for a cached asset looks like in my Network tab when the asset returns 304 Not Modified from the validation request (served from S3, in a background: url context).
I have a CORS-enabled Spring Boot API that runs on Google Cloud Run and a Vue.js front end that runs on Firebase and uses Axios to make the calls to the back end.
The problem is that when the front end wants to access the back end (Browser --> Google Cloud), it fails with:
Access to XMLHttpRequest at 'https://<backend>' from origin 'https://<frontend>' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
If I access a LOCAL back end from the LOCAL front end (also in the browser), it works: the error above is not shown in the browser console and I get the data.
If I make the OPTIONS or GET call from Postman to the Google Cloud back end, it works.
I noticed that with Postman I need to include the Authorization header in the OPTIONS request, sending the Bearer token to Google, to make it work. The browser does not send any Authorization header in the OPTIONS call, even if I add withCredentials: true to the Axios config like this:
const response = await axios({
  method: 'post',
  withCredentials: true,
  url: 'https://<backend>',
  headers: {
    'Authorization': 'Bearer ' + gCloudToken
  },
  data: {
    // data...
  }
});
Isn't it a security problem to send the token in a header? I mean, everyone can see the headers and could then fake a call to the server.

Can anybody show how to send the Authorization header in the OPTIONS call via Axios, or explain how to handle this problem correctly?
UPDATE 1:
The request from the browser looks like this:
OPTIONS /path/to/api HTTP/2
Host: <backend>-ew.a.run.app
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Access-Control-Request-Method: POST
Access-Control-Request-Headers: authorization,content-type
Referer: https://frontend.web.app/
Origin: https://frontend.web.app
Connection: keep-alive
And this is the response:
HTTP/2 403 Forbidden
date: Tue, 07 Jul 2020 23:57:27 GMT
content-type: text/html; charset=UTF-8
server: Google Frontend
content-length: 320
alt-svc: h3-29=":443"; ma=2592000,h3-27=":443"; ma=2592000,h3-25=":443"; ma=2592000,h3-T050=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
X-Firefox-Spdy: h2
As you can see, no CORS headers (e.g. access-control-allow-origin) are present.
The cause of this issue is that, by not allowing unauthenticated calls, the CORS preflights are always rejected with a 403 error: the browser never attaches the Authorization header to a preflight request, so Cloud Run's IAM layer rejects the OPTIONS call before it ever reaches your code.
There is already a feature request for Cloud Run to support CORS preflights together with authentication.
The workaround I see so far is to allow unauthenticated calls on the Cloud Run service and implement the authentication in your own code; however, this can have security disadvantages.
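The poster's back end is Spring Boot, but the pattern is easiest to show as a generic Express-style middleware sketch (verifyToken stands in for whatever validation you implement): let the unauthenticated preflight through, then check the Bearer token on the actual request yourself.

```js
function corsAndAuth(req, res, next) {
  res.set('Access-Control-Allow-Origin', 'https://frontend.web.app');
  res.set('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  res.set('Access-Control-Allow-Methods', 'POST, OPTIONS');

  if (req.method === 'OPTIONS') {
    // Browsers never attach Authorization to the preflight,
    // so it must succeed without authentication.
    return res.status(204).end();
  }

  const token = (req.get('Authorization') || '').replace(/^Bearer /, '');
  if (!verifyToken(token)) {  // verifyToken: your own validation logic
    return res.status(401).end();
  }
  next();
}
```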
I am getting the following error message in my Angular/ASP.NET Web API project.
XMLHttpRequest cannot load http://localhost:7291/api/products. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:7305' is therefore not allowed access. The response had HTTP status code 500.
I know it has something to do with CORS not being implemented correctly, but I'm not sure what I'm doing wrong. I'm following a tutorial, and as far as I can tell I've got everything right.

Here's the info from the Network tab in Chrome's dev tools.
Remote Address:[::1]:7291
Request URL:http://localhost:7291/api/products
Request Method:GET
Status Code:500 Internal Server Error
Response Headers: (not expanded)
Request Headers:
Accept:application/json, text/plain, */*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Host:localhost:7291
Origin:http://localhost:7305
Referer:http://localhost:7305/
User-Agent:Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.93 Safari/537.36
Here is where I map the API url.
(function () {
    "use strict";

    angular
        .module("common.services", ["ngResource"])
        .constant("appSettings", {
            serverPath: "http://localhost:7291/"
        });
}());
And this is how I'm setting up the EnableCorsAttribute:

[EnableCorsAttribute("http://localhost:7305", "*", "*")]
Can anyone see what I'm doing wrong? If more code is needed please let me know. Thanks.
First step: use a tool like Postman or Fiddler against your service endpoint to verify basic functionality.
It looks as if this is not a CORS issue at all, due to the 500 response:
Status Code:500 Internal Server Error
A CORS issue on a normally successful response usually manifests as a status of 0 or -1 in the client, but what you are experiencing is an error on the server side. I have seen this in my own code, and I suspect that your server-side implementation only injects the CORS headers at the end of processing the request; since processing aborted abnormally, the CORS headers never made it in.
Once you have confirmed functionality, make an OPTIONS request against your endpoint to verify CORS:
OPTIONS / HTTP/1.1
Host: localhost:7291
Cache-Control: no-cache
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Then inspect the headers of the response. If CORS is enabled correctly on the server, you should see Access-Control-Allow-* headers similar to these:

Access-Control-Allow-Methods: GET, POST, PUT, PATCH, DELETE, OPTIONS
Access-Control-Allow-Origin: http://localhost:7305
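If you prefer scripting the check, the same preflight can be reproduced from Node with its built-in http module (the endpoint and origin below are the poster's; adjust as needed):

```js
const http = require('http');

// Simulate the browser's preflight against the local API.
const req = http.request({
  host: 'localhost',
  port: 7291,
  path: '/api/products',
  method: 'OPTIONS',
  headers: {
    'Origin': 'http://localhost:7305',
    'Access-Control-Request-Method': 'GET',
  },
}, (res) => {
  console.log(res.statusCode);
  console.log(res.headers['access-control-allow-origin']);   // expect http://localhost:7305
  console.log(res.headers['access-control-allow-methods']);
});
req.end();
```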
I am developing a Grails + BlazeDS server with a Flex AIR client and am stuck on this error:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly
Google searches didn't help, as I see some differences between my situation and the ones described.

I get the issue only when the Flex client interacts with the server via HTTPS.
Flex client:
<s:ChannelSet id="userChannel">
<s:SecureAMFChannel uri="https://localhost:8443/Con/messagebroker/amfpolling" />
</s:ChannelSet>
A button click in the UI triggers the login method:
loginResult.token = channelSet.login(usernameInput.text, passwordInput.text);
And it finishes with a DuplicateSessionDetected exception. :(

After investigating the network monitor logs, I found that the JSESSIONID cookie received from the server is not set on subsequent requests to the server:
Response from the server (operation: client_ping):
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=F58F1ADA97E70915EF9E6E4EE1AEBE00; Path=/; Secure
Content-Type: application/x-amf
Content-Length: 173
Date: Sun, 23 Feb 2014 10:17:00 GMT
Flex Message (flex.messaging.messages.AcknowledgeMessageExt) clientId = EA18E8B9-951F-6F87-7B47-48B8B202EE75 correlationId = 7D2782C1-C8A5-41A3-2055-5E3F771424C8 destination = null messageId = EA18E8F6-9E0E-1FE4-0D26-6F0E602F5C5E timestamp = 1393150620542 timeToLive = 0 body = null hdr(DSMessagingVersion) = 1.0 hdr(DSId) = EA18E8B9-950B-4B42-EF70-369D656BA3F2
And the next request to the server (the login operation) goes out without the JSESSIONID cookie:
POST /Conn/messagebroker/amfsecure HTTP/1.1
Referer: app:/BlazeDSClient.swf
Accept: text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, text/css, image/png, image/jpeg, image/gif;q=0.8, application/x-shockwave-flash, video/mp4;q=0.9, flv-application/octet-stream;q=0.8, video/x-flv;q=0.7, audio/mp4, application/futuresplash, */*;q=0.5
x-flash-version: 12,0,0,68
Content-Type: application/x-amf
Accept-Encoding: gzip,deflate
User-Agent: Mozilla/5.0 (Windows; U; en) AppleWebKit/533.19.4 (KHTML, like Gecko) AdobeAIR/4.0
Host: localhost
Content-Length: 299
Flex Message (flex.messaging.messages.CommandMessage) operation = login clientId = null destination = auth messageId = 7B47BBF2-08C0-0E41-5D88-5E3F76FA4882 timestamp = 0 timeToLive = 0 ***not printing credentials***
and the server answers with a new session cookie:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=03BD8347F9E9511C299B717DD55625C9; Path=/; Secure
Content-Type: application/x-amf
Content-Length: 535
Date: Sun, 23 Feb 2014 10:17:01 GMT
Flex Message (flex.messaging.messages.ErrorMessage) clientId = null correlationId = 7B47BBF2-08C0-0E41-5D88-5E3F76FA4882 destination = auth messageId = EA18F4A7-C80D-103B-F8D0-58B6F148F142 timestamp = 1393150621768 timeToLive = 0 body = null code = Server.Processing.DuplicateSessionDetected message = Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly. details = null rootCause = null body = null extendedData = null
And again: when using the non-secure protocol, everything is OK; the session cookie is sent to the server on the login operation as expected.

I have little experience in Flex development and didn't find any way to set the session cookie when triggering the channel login request. Can you help resolve this issue?
Thanks!
Gotcha!!

It's unbelievable, but the cause of the DuplicateSessionDetected exception was the Network Monitor tool of Flash Builder. After switching it off, the exception no longer occurred. I think the Monitor has issues when acting as a proxy over a secure protocol.
Surely this question is already dead, but I have something to say in this regard for future readers.

The Flash Player (including Flex) does not transmit the default JSESSIONID in the request, and cannot do so until you have set SameSite=None on the JSESSIONID cookie.

I have faced the problem where the JSESSIONID cookie is dropped from the request, and I have discovered that it is because modern browsers (Chrome > 80) do not allow the Flash/Flex Player to access the JSESSIONID cookie if the cookie does not carry the SameSite=None and Secure attributes. In other words, the cookie from the trace above would need to be issued as `Set-Cookie: JSESSIONID=...; Path=/; Secure; SameSite=None`.
Please read the announcement from Adobe here.
More to read about the new cookie policy:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite
https://medium.com/adobetech/adobe-experience-cloud-cookie-updates-for-google-chrome-19ad67cf1598
https://digiday.com/media/what-is-chrome-samesite/
Do not perform the client_ping operation before trying the secure ChannelSet login. By pinging the server you create one ChannelSet (by default Flash creates one for you), and then you open another one via the .login operation. Try this with a restarted server (a fresh instance), or else you will keep creating more sessions.
I followed this tutorial, http://symfony.com/doc/current/cookbook/controller/error_pages.html, and have error.html.twig and error.json.twig within app/Resources/TwigBundle/views/Exception/.

Even though the Content-Type of the request is set to application/json, all errors default to the HTML version of the error page.
The format of the route is also defined:
http://symfonyinstall/api/v1/users.json
Request Header:
Accept: application/json
Content-Type: application/json
Connection: keep-alive
Origin: chrome-extension: //rest-console-id
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19
Response Header:
Status Code: 404
date: Sun, 29 Apr 2012 06:54:35 GMT
Content-Encoding: gzip
X-Powered-By: PHP/5.3.10
Transfer-Encoding: chunked
Connection: keep-alive
Server: nginx
Content-Type: text/html; charset=UTF-8
cache-control: no-cache
I'm out of ideas... and I really need a JSON version of the errors for my API to work.
I just hit the same problem, and although your question is quite old and Symfony has bumped a few versions since then, the problem is still relevant, so even if you don't need the answer any more, maybe somebody else will.

Your original problem was probably caused by the error described here, but there wasn't much activity on it after the initial post. Since then the entire codebase has been updated, so while the same symptoms have reappeared, the error is not related. I am posting this here because anybody looking for an answer these days will probably find this question (as I did :).
Even when returning a JsonResponse directly from the kernel exception handler, it will still have Content-Type: text/html; charset=UTF-8. This stumped me so much that I used netcat to make a manual request without any smart software in between, and it turns out that the response in such a case actually has two different Content-Type headers:
HTTP/1.0 500 Internal Server Error
Connection: close
X-Powered-By: PHP/5.5.9-1ubuntu4.17
Content-Type: text/html; charset=UTF-8
Cache-Control: private, must-revalidate
Content-Type: application/json
pragma: no-cache
expires: -1
X-Debug-Token: 775c55
X-Debug-Token-Link: http://127.0.0.1:8000/_profiler/775c55
Date: Thu, 27 Oct 2016 23:08:31 GMT
Now, a double Content-Type header is not something you see every day. It turns out this is implemented in the Symfony\Component\Debug\ExceptionHandler class, which is only used in debug mode. In order to be as robust as possible, it first renders the standard Symfony error page that describes the thrown exception. The rendered content is not sent back directly; instead, it leverages PHP's output buffering feature to buffer and store the produced output. It then attempts to produce a custom error page from the framework, and if that fails, the previously prepared message is sent.

Output buffering, however, works only for the message content and not for the headers, which are always sent directly. This problem only appears in the debug environment, and unusual content types on error are mostly a web API concern, where debug mode is arguably of little use. That makes the exposure surface relatively small, but it can still become a problem when an application that offers both a web API and an end-user interface needs to be tested.

Solving this without modifying internal Symfony files doesn't seem possible: the output control sits deep within the Symfony kernel and doesn't offer any configuration. Anyway, I am not convinced of the benefits of such a solution. Can anyone explain to me what could happen inside a custom exception handler that would make the default handler's buffered output useless when the custom one fails? Maybe user code messing with ob_* functions?