I'm using Brave right now (but I could reproduce the issue in Chrome as well).
Scenario:
I have a control panel which is served via Cloudflare over HTTPS. It needs to make a call to a service running locally at http://localhost:39031. The local server is served by go-chi and uses its CORS middleware. When the local server receives the preflight request, it returns 200 back to the control panel.
2022-08-25T14:59:30+02:00 [DEBUG] Handler: Preflight request%!(EXTRA []interface {}=[])
2022-08-25T14:59:30+02:00 [DEBUG] Preflight response headers: [map[Access-Control-Allow-Credentials:[true] Access-Control-Allow-Methods:[GET] Access-Control-Allow-Origin:[*] Access-Control-Max-Age:[300] Vary:[Origin Access-Control-Request-Method Access-Control-Request-Headers]]]
2022/08/25 14:59:30 "OPTIONS http://localhost:39031/hc HTTP/1.1" from 127.0.0.1:43378 - 200 0B in 132.937µs
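For reference, the server is wired up roughly like this (a minimal sketch; the go-chi/cors options are assumed from the logged preflight response headers rather than copied from the real code):

package main

import (
    "log"
    "net/http"

    "github.com/go-chi/chi/v5"
    "github.com/go-chi/cors"
)

func main() {
    r := chi.NewRouter()
    // Options inferred from the preflight response headers above.
    r.Use(cors.Handler(cors.Options{
        AllowedOrigins:   []string{"*"},
        AllowedMethods:   []string{"GET"},
        AllowCredentials: true,
        MaxAge:           300, // matches Access-Control-Max-Age: 300
    }))
    // /hc is the health-check endpoint the control panel calls.
    r.Get("/hc", func(w http.ResponseWriter, _ *http.Request) {
        w.WriteHeader(http.StatusOK)
    })
    log.Fatal(http.ListenAndServe(":39031", r))
}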
So it should be fine, but I get a message like this in the browser (which is a warning):
A cross-origin resource sharing (CORS) request was blocked because the response to the associated preflight request failed, had an unsuccessful HTTP status code, and/or was a redirect.
To fix this issue, ensure all CORS preflight OPTIONS requests are answered with a successful HTTP status code (2xx) and do not redirect.
What makes no sense is that the GET request itself otherwise goes through, and I get the result back properly. Could it be because of the disabled Shields?
I don't see what I did to cause this behavior. I also sent an OPTIONS request via Postman, and the response satisfies the requirements of a preflight response. I have already read nearly everything about preflight responses and their handling.
This is the Preflight Request:
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
Access-Control-Request-Method: GET
Access-Control-Request-Private-Network: true
Connection: keep-alive
Host: localhost:39031
Origin: https://sandbox.XXX.com
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: cross-site
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36
Here is a question I have been researching for some time now.
I have a redirect that does not seem to be respecting a Set-Cookie header in a 302 redirect.
Here are the response and request headers, which I captured with Wireshark.
HTTP/1.1 302 Moved Temporarily\r\n
Connection: close\r\n
Location: http://192.168.1.1:8888/home/\r\n
Set-Cookie: foo=test_data; Domain=192.168.1.1; Path=/\r\n
\r\n
GET /home/ HTTP/1.1\r\n
Host: 192.168.1.1:8888\r\n
Connection: keep-alive\r\n
Upgrade-Insecure-Requests: 1\r\n
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36\r\n
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\r\n
Accept-Encoding: gzip, deflate\r\n
Accept-Language: en-US,en;q=0.8\r\n
DNT: 1\r\n
\r\n
I sanitized the content just a bit, but nothing critical should have been modified. The point is that no matter which browser I use, the cookie 'foo' is not included in the GET request following the 302. From what I have read, this is not the expected behavior. Am I incorrect in believing this? Is there something that I am missing or doing wrong with the 302?
In the question, the Cookie header does not appear in the redirected HTTP request (GET http://192.168.1.1:8888/home/). The root cause is that the cookie foo=test_data never exists in the browser: when it is delivered by the server in the Set-Cookie response header, the browser rejects it, because its Domain attribute does not cover the server that set it.
According to MDN:
A cookie belonging to a domain that does not include the origin server should be rejected by the user agent. The following cookie will be rejected if it was set by a server hosted on originalcompany.com.
Set-Cookie: qwerty=219ffwef9w0f; Domain=somecompany.co.uk; Path=/; Expires=Wed, 30 Aug 2019 00:00:00 GMT
For a more precise description, see RFC 6265, section 4.1.2.3.
This is designed this way for a good reason. If every server could set cookies for any domain, it would be trivial to wipe out other websites' cookies, which would be a disaster for the internet.
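To illustrate the fix (a sketch; the question does not say which host issued the 302, so the exact layout is assumed): the cookie has to be set either without a foreign Domain attribute or by 192.168.1.1 itself, for example on the /home/ response:

HTTP/1.1 200 OK
Set-Cookie: foo=test_data; Path=/

With no Domain attribute, this becomes a host-only cookie for 192.168.1.1 and will be sent on subsequent requests to it.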
I would like to make a file download resumable using byte-range requests.
The problem is that my existing download action responds to a POST request, and I would like to keep it that way.
But it seems from my early tests that Chrome turns interrupted POST requests for file downloads into GET requests when the user tries to resume and thus the resuming of the download fails.
Am I missing something?
Is this something related to the HTTP specs that only allow GET requests to be resumed?
Or is it simply a design flaw in Chrome (and maybe other browsers as well) that makes it forget the original HTTP method used?
UPDATE:
Here are the request/response data:
Initial POST request:
POST http://localhost:35547/Download?Guid=396b4697-e275-4396-818c-548bf8c0a281 HTTP/1.1
Host: localhost:35547
Connection: keep-alive
Content-Length: 0
Cache-Control: max-age=0
Origin: http://localhost:35547
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://localhost:35547/File/396b4697-e275-4396-818c-548bf8c0a281
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
Cookie: __RequestVerificationToken=LuPgM05MHrsuyskgfhsrHVUs; ASP.NET_SessionId=gfiulghfuygisghkf; .ASPXAUTH=FGDJHGDHSDFB15AFDE6371CGJHDFGFBHD; fileDownload=true
Initial response (to the request above):
HTTP/1.1 200 OK
Cache-Control: private, s-maxage=0
Content-Type: application/zip
Server: Microsoft-IIS/7.5
X-AspNetMvc-Version: 5.2
Content-Disposition: attachment; filename="FILE-396b4697e2754396818c548bf8c0a281.zip"
X-AspNet-Version: 4.0.30319
Set-Cookie: fileDownload=true; path=/
X-Powered-By: ASP.NET
Date: Wed, 09 Nov 2016 11:13:50 GMT
Content-Length: 1885473
PK.......... ZIP file data .............................................
After the interruption, this is the request that the browser does on resume (notice the GET method used):
GET http://localhost:35547/Download?Guid=396b4697-e275-4396-818c-548bf8c0a281 HTTP/1.1
Host: localhost:35547
Connection: keep-alive
Referer: http://localhost:35547/File/396b4697-e275-4396-818c-548bf8c0a281
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
Cookie: __RequestVerificationToken=.............
(Some data from security-related cookies have been shortened and altered)
Am I missing something?
It depends on how you analyzed the behavior of Google Chrome. The ideal way is to use a proxy, or a packet sniffer such as Wireshark, to see which request method Chrome uses in the subsequent request.
Is this something related to the HTTP specs that only allow GET requests to be resumed?
As of now, there is no mention in the HTTP protocol specification that only GET requests can be resumed.
Or is it simply a design flaw in Chrome (and maybe other browsers as well) that makes it forget the original HTTP method used?
Yes, it is a flaw in Google Chrome. Make sure you check it on the latest version of Chrome with all update patches applied, and also check it in other browsers.
For more information about the HTTP protocol, refer to https://www.ietf.org/rfc/rfc2616.txt.
For serving partial responses, see byte serving: https://en.wikipedia.org/wiki/Byte_serving
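For illustration, a resumed download over byte serving would look roughly like this on the wire (a sketch: the offset is hypothetical, the sizes are taken from the response in the question, and the server must support range requests, typically advertised via Accept-Ranges: bytes):

GET /Download?Guid=396b4697-e275-4396-818c-548bf8c0a281 HTTP/1.1
Host: localhost:35547
Range: bytes=1048576-

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Range: bytes 1048576-1885472/1885473
Content-Length: 836897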
Edit
For more up-to-date information regarding HTTP, refer to:
https://www.rfc-editor.org/rfc/rfc7230
I wrote a small HTTP server in Java with Netty at its core. The problem is that mobile browsers don't show some images when a data compression proxy is enabled (Data Saver in Chrome, the data savings mode in Opera).
On a PC everything is OK. On a mobile phone connected to Wi-Fi (compression disabled) everything is OK as well, but when the phone is on the GSM network (no Wi-Fi), some images are not visible. The same happens in Opera for Android: when compression is disabled all pictures are visible; when it is enabled, some are not.
Example:
http://tdpol.ru/template/imgs/404.jpg <- this image is not visible
http://tdpol.ru/template/imgs/map-sm.jpg <- this image is visible
Both images are returned to clients in the same way; the response is 200 (OK) and the headers are:
CONTENT_TYPE, "image/jpeg"
CONNECTION, HttpHeaderValues.CLOSE
CONTENT_LENGTH, response.content().readableBytes()
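In code that is roughly the following (a sketch, assuming Netty 4.1's HttpHeaderNames/HttpHeaderValues constants, which the CONNECTION line above suggests):

response.headers().set(HttpHeaderNames.CONTENT_TYPE, "image/jpeg");
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
response.headers().set(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());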
The ChannelHandlerContext is then written to with retain(), then closed, and the decoder is destroyed:
ctx.writeAndFlush(response.retain()); // asynchronous: returns before the write actually completes
ctx.close();                          // may close the channel before the flush has finished
decoder.destroy();
What should I implement for compression proxies to operate normally? I tried setting a 304 response code.
Here are the headers for the failing picture with compression enabled (via the GSM network):
Request header: Host: tdpol.ru
Request header: Connection: keep-alive
Request header: Cache-Control: max-age=0
Request header: Upgrade-Insecure-Requests: 1
Request header: User-Agent: Mozilla/5.0 (Linux; Android 5.0; ASUS_Z00AD Build/LRX21V) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.36
Request header: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Request header: Referer: http://tdpol.ru/ca2
Request header: Accept-Encoding: gzip, deflate, sdch
Request header: Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
Request header: Cookie: _ym_uid=1472566224516803345
Response header: content-type: image/jpeg
Response header: connection: close
Response header: content-length: 52249
And here are the headers for the same picture with compression disabled (via Wi-Fi):
Request header: Host: tdpol.ru
Request header: Connection: keep-alive
Request header: Cache-Control: max-age=0
Request header: Upgrade-Insecure-Requests: 1
Request header: User-Agent: Mozilla/5.0 (Linux; Android 5.0; ASUS_Z00AD Build/LRX21V) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.36
Request header: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Request header: Referer: http://tdpol.ru/ca2
Request header: Accept-Encoding: gzip, deflate, sdch
Request header: Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
Request header: Cookie: _ym_uid=1472566224516803345
Response header: content-type: image/jpeg
Response header: connection: close
Response header: content-length: 52249
// Here Chrome also requests favicon.ico:
Request header: Host: tdpol.ru
Request header: Connection: keep-alive
Request header: Pragma: no-cache
Request header: Cache-Control: no-cache
Request header: User-Agent: Mozilla/5.0 (Linux; Android 5.0; ASUS_Z00AD Build/LRX21V) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.98 Mobile Safari/537.36
Request header: Accept: */*
Request header: Referer: http://tdpol.ru/template/imgs/404.jpg
Request header: Accept-Encoding: gzip, deflate, sdch
Request header: Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
Request header: Cookie: _ym_uid=1472566224516803345
Response header: content-type: image/ico
Response header: connection: close
Response header: content-length: 1150
As far as I can see, they are identical; the only difference is that without compression, Chrome requests a favicon.ico file after getting the image.
No cache-validation requests (checking whether the file was modified) are present in either case, so it's not a sync fault.
P.S. I've tried repacking the image (the original was saved with Photoshop; I tried Paint in case the proxy's resampler fails to handle it) and renaming it (in case the proxy has a broken copy of it in its cache); nothing helped.
OK, I figured it out.
The proxy is not the one to blame; it is just used on SLOWER (this is the important part) connections by default.
The problem was the latency between writeAndFlush and close: on high-speed direct connections the 50 KB file was flushed before the forced channel close, while on slower connections only part of the file was transferred, so the proxy was unable to compress it. A test over a slow direct connection confirmed that only part of the image arrived.
Removing the close and the reset fixed the situation.
So the close must be performed through the ChannelFuture (i.e., in the ChannelFuture's completion listener), as sketched below.
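A minimal sketch of the fixed write path (assuming Netty 4; variable names are taken from the snippet in the question):

// Close the channel only once the flush has actually completed.
ChannelFuture f = ctx.writeAndFlush(response.retain());
f.addListener(ChannelFutureListener.CLOSE);
// Likewise, destroy the decoder only after the write is done.
f.addListener(future -> decoder.destroy());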
I am facing a strange issue with CORS on nginx. CORS works fine for everything except one scenario: when the server responds with a 403 HTTP response.
Basically, when I log in with correct credentials, the CORS request works fine; however, when I provide wrong credentials, the (backend) server responds with a 403 status and I get the following error:
"NetworkError: 403 Forbidden - http://mydomain.com/v1/login"
login
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://mydomain.com/v1/login. This can be fixed by moving the resource to the same domain or enabling CORS.
If the credentials are correct I don't get this error and everything works perfectly.
I have done the configuration for enabling CORS and it seems to be working fine for everything else.
Request Headers
User-Agent:Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0
Referer:http://abc.mydomain.com/
Pragma: no-cache
Origin: http://abc.mydomain.com
Host: www.mydomain.com
Content-Type: application/json;charset=utf-8
Content-Length: 74
Connection: keep-alive
Cache-Control: no-cache
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Accept: application/json, text/plain, */*
Response Headers
Server: nginx/1.4.1
Date: Tue, 10 Jun 2014 05:28:30 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 76
Connection: keep-alive
An option for nginx (>= 1.7.5) is to specify the always parameter in add_header:
If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
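For example (a sketch; the origin variable and header list here are assumed to mirror your existing CORS configuration):

add_header Access-Control-Allow-Origin $http_origin always;
add_header Access-Control-Allow-Headers 'Content-Type, Origin, Accept, Cookie' always;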
I assume that you are using the add_header directive to add CORS headers in your nginx configuration.
The nginx module reference states about add_header:
Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307.
To fix the problem, you could use the ngx_headers_more module to set CORS headers on error responses as well:
more_set_headers 'Access-Control-Allow-Origin: $http_origin';
more_set_headers 'Access-Control-Allow-Headers: Content-Type, Origin, Accept, Cookie';
I tried to make a GET request for the Netflix home page from the command prompt, because the response returned to me was always a 301/302. So I connected to Netflix via the following:
telnet signup.netflix.com 80
Then the request I made was:
GET / HTTP/1.1
Host: signup.netflix.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
This was copied exactly from LiveHTTPHeaders when I visit Netflix; however, I removed the Cookie part because I don't know where the browser (Firefox) gets those values.
Netflix responds with a 302 redirect.
Why don't I get a 200 OK status code? Is it because I'm not sending any cookies?
It's doing a redirect to https://signup.netflix.com/?tcw=2. That is, it wants you to resend the request with the tcw=2 variable in the query string and, more importantly, with the cookie it just gave you through the Set-Cookie header.
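To follow the redirect by hand, the next request would look roughly like this (a sketch: the Location is https, so plain telnet on port 80 won't do and something like openssl s_client is needed; the cookie placeholder stands for whatever Set-Cookie returned):

openssl s_client -connect signup.netflix.com:443

GET /?tcw=2 HTTP/1.1
Host: signup.netflix.com
Cookie: <name>=<value from the Set-Cookie header>
Connection: close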