I am trying to get the user's browser to stop making OPTIONS requests once it learns that my API server allows cross-origin requests, but adding the standard headers doesn't seem to affect it. Is this possible? These are the response headers my server returns (a sketch of the nginx side follows the list):
Access-Control-Allow-Credentials:true
Access-Control-Allow-Headers:Authorization,Content-Type,Request,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since
Access-Control-Allow-Methods:POST,GET,OPTIONS,PUT,DELETE
Access-Control-Allow-Origin:http://example.com
Cache-Control:max-age=31536000
Connection:keep-alive
Content-Length:0
Content-Type:text/plain charset=UTF-8
Date:Thu, 13 Aug 2015 18:25:42 GMT
Expires:Fri, 12 Aug 2016 18:25:42 GMT
Server:nginx/1.4.6 (Ubuntu)
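Headers like these are typically emitted from nginx with one add_header directive per header. A trimmed sketch (the /api/ location and the shortened header list are illustrative, not the actual configuration):
location /api/ {
    # One add_header per CORS response header; values trimmed for brevity.
    add_header Access-Control-Allow-Origin      "http://example.com";
    add_header Access-Control-Allow-Methods     "POST,GET,OPTIONS,PUT,DELETE";
    add_header Access-Control-Allow-Headers     "Authorization,Content-Type,Origin";
    add_header Access-Control-Allow-Credentials "true";
}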
From http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.2:
9.2 OPTIONS
[…]
Responses to this method are not cacheable.
I am having trouble convincing Firefox 71 to cache a large (>4 MB) image. Both in the developer tools (where the request is logged) and during normal use (judging by the loading delay), I can see that the image is loaded every time the page is accessed.
Although I thought I provided all the necessary response headers, Firefox is not sending If-Modified-Since or If-None-Match request headers.
These are the HTTP headers my server is sending:
$ HEAD https://😉/image.png
200 OK
Cache-Control: public, max-age=31536000, immutable
Connection: close
Date: Sat, 04 Jan 2020 19:52:20 GMT
Accept-Ranges: bytes
ETag: "564cd5fb-4484b0"
Server: nginx/1.14.0 (Ubuntu)
Content-Length: 4490416
Content-Type: image/png
Last-Modified: Wed, 18 Nov 2015 19:48:11 GMT
Client-Date: Sat, 04 Jan 2020 19:52:20 GMT
Client-Peer: 😛
Client-Response-Num: 1
Client-SSL-Cert-Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Client-SSL-Cert-Subject: /CN=😉
Client-SSL-Cipher: ECDHE-RSA-CHACHA20-POLY1305
Client-SSL-Socket-Class: IO::Socket::SSL
The web page loads the image via JavaScript:
let mapImg = new Image();
mapImg.src = 'image.png';
I believe I did everything according to the documentation, and I wonder whether I made some wrong combination of response headers, encryption, compression, and loading method.
I have a quick question. I've already read RFC 2616, Section 14.22, about the Host HTTP header, but I still don't understand what should be changed in httpd.conf or the web server's configuration file. Please correct me if I'm wrong.
Look at the following two HTTP GET requests I sent to an Apache server. The first is a GET using HTTP/1.0, the other a GET using HTTP/1.1. See the output:
HTTP/1.0 200 OK
Date: Thu, 24 Oct 2013 03:46:22 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Vary: *
Last-Modified: Fri, 10 Aug 2012 20:22:30 GMT
ETag: "17c815b-3b-50256d86"
Accept-Ranges: bytes
Content-Length: 59
Connection: close
Content-Type: text/html
<html>
<body>
<center>webli7</center>
</body>
</html>
HTTP/1.1 400 Bad Request
Date: Thu, 24 Oct 2013 04:04:40 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1
16e
The HTTP protocol version is negotiated dynamically, not set in configuration files. The client sends a request specifying the highest protocol version it supports; the server must then respond with either that version or any earlier version it prefers. Since Apache supports HTTP/1.1, it should match the version provided by the client exactly.
There is a flag you can set in Apache's configuration to force HTTP/1.0 responses in certain situations, even when the browser requested HTTP/1.1. It exists to work around bugs in the HTTP/1.1 handling of some very old browsers; today you should not need to touch it.
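For reference, that workaround is applied per request through special environment variables, along these lines (the browser pattern shown is only an example):
BrowserMatch "^Mozilla/2" downgrade-1.0 force-response-1.0
Here downgrade-1.0 tells Apache to treat the request as HTTP/1.0, and force-response-1.0 makes it send an HTTP/1.0 response line to matching clients.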
As for your error, I would suggest making sure that your GET request includes the Host: header. This header is required in HTTP/1.1 yet optional in HTTP/1.0, and omitting it would certainly result in a 400 error.
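A minimal well-formed HTTP/1.1 request therefore looks like this (the host name is just a placeholder), with an empty line terminating the header block:
GET / HTTP/1.1
Host: www.example.com
Connection: close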
This is an example response from my Amazon S3 bucket.
$ curl -I http://amazon_bucket/image.jpg
HTTP/1.1 200 OK
x-amz-id-2: Tmr9SynKe8ztlB/Jix1hNrclwyc/k4NVHyqK3B0vNKUoPFIxfzwALi0XQRwEjhzO
x-amz-request-id: DCFDBCF510988AFB
Date: Wed, 27 Mar 2013 13:06:34 GMT
Cache-Control: public, max-age=2629000
Expires: Wed, 26 Mar 2014 23:00:00 GMT
Last-Modified: Wed, 27 Mar 2013 13:00:19 GMT
ETag: "52dd53ea738c7824b3f67cfea6a3af2a"
Accept-Ranges: bytes
Content-Type: image/jpeg
Content-Length: 627046
Server: AmazonS3
I would expect the browser to cache the image and serve it from cache. Instead, when I reload the page, my browser makes a request, which yields a 304 Not Modified response. Why is it acting as if the must-revalidate option had been passed? Why isn't the browser serving the image directly from cache? These are the options I've configured on the image from my S3 client:
Cache-Control: public, max-age=2629000
Expires: Wed, 26 Mar 2014 23:00:00 GMT
Is there some other option I should be passing to the S3 files? It might be a dumb answer, but I see that the requests my browser makes to get these pictures all have the following headers:
Cache-Control:no-cache
Pragma:no-cache
Why is my browser sending those?
I was hitting refresh, and apparently this always triggers an If-Modified-Since request. If you visit the page normally, the asset is served from the browser cache.
I have an ASP.NET / MVC web app which, when running locally, produces these headers:
Cache-Control:public, max-age=3
Connection:Close
Content-Encoding:gzip
Content-Length:287122
Content-Type:text/javascript
Date:Thu, 26 Jul 2012 21:21:26 GMT
ETag:K5fBpkMM+t9XPl07ydQ54pR6bg8=
Expires:Thu, 26 Jul 2012 21:21:29 GMT
Server:ASP.NET Development Server/10.0.0.0
X-AspNet-Version:4.0.30319
X-AspNetMvc-Version:3.0
But when running on AppHarbor behind their proxy, the headers I get are these:
Cache-Control:public
Connection:keep-alive
Content-Encoding:gzip
Content-Length:287122
Content-Type:text/javascript
Date:Thu, 26 Jul 2012 21:22:49 GMT
ETag:K5fBpkMM+t9XPl07ydQ54pR6bg8=
Expires:Thu, 26 Jul 2012 21:22:41 GMT
Server:nginx
AppHarbor is stripping the max-age portion of my Cache-Control header and stomping on my Date header with one I'm not synchronized with.
My goal is to serve JavaScript via a CDN with a very short max-age so that changes can be rolled out quickly. Changing the URL frequently is not an option.
Does anyone know how to fix this?
A colleague who examined the RFC more closely noticed that the common one-line form of this header, as used by servers and browsers:
Cache-Control: public, max-age=X
isn't actually valid. Splitting the two parts of the header into two lines like so:
Cache-Control: public
Cache-Control: max-age=X
works! So in my .NET code, this:
response.Cache.SetCacheability(HttpCacheability.Public);
response.Cache.SetMaxAge(MaxAge);
becomes this:
response.AddHeader("Cache-Control", "public");
response.AddHeader("Cache-Control", "max-age=" + (int)MaxAge.TotalSeconds);
and now I can get a max-age out of AppHarbor.
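To verify that the directive survives the proxy, one quick check (the URL is a placeholder for your AppHarbor app) is to inspect the live response headers with curl, which should now print both Cache-Control lines:
$ curl -sI https://example.apphb.com/scripts/app.js | grep -i '^cache-control'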
I have a specific set of HTTP response headers I'm trying to recreate in ASP.NET.
Here is how it looks in Fiddler (Raw):
HTTP/1.1 200 OK
Content-Length: 570746
Content-Type: audio/wav
Last-Modified: Wed, 19 May 2010 00:44:38 GMT
Accept-Ranges: bytes
ETag: "379d676ecf6ca1:3178"
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Date: Tue, 05 Oct 2010 18:35:18 GMT
Here is how it looks on the Headers tab (same data, different view).
I am trying to recreate the same set of headers (with different values, of course) in code, on an ASP.NET page. The biggest problem is the cache settings and the ETag: the response usually shows some "private" or similar cache setting and no ETag value, even though I'm trying to set it explicitly with
Response.Cache.SetETag
Have you tried something like this:
// Note: ETag header values include the surrounding double quotes.
if (Response.Headers["ETag"] == null)
    Response.AddHeader("ETag", "\"379d676ecf6ca1:3178\"");
else // writing to Response.Headers requires the IIS7+ integrated pipeline
    Response.Headers["ETag"] = "\"379d676ecf6ca1:3178\"";
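If the page's cache policy is what keeps overriding the explicit header, another option is to drive everything through the policy object. A sketch, assuming (as the question suggests) that the default private policy is what suppresses the ETag; this runs in page code-behind with System.Web in scope:
// Make the policy public first; the default emits Cache-Control: private.
Response.Cache.SetCacheability(HttpCacheability.Public);
// ETag values keep their surrounding double quotes.
Response.Cache.SetETag("\"379d676ecf6ca1:3178\"");
Response.Cache.SetLastModified(new DateTime(2010, 5, 19, 0, 44, 38, DateTimeKind.Utc));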