I'm having some problems with CORS definitions, and I have a question (not about CORS in general - that I'm fine with - just about the official specification and usage):
According to the IETF, if the Origin header is passed and if it is a URL, that URL must be fully serialized, and must include scheme and host (and optionally port). From https://www.rfc-editor.org/rfc/rfc6454#section-7.1:
The Origin header field has the following syntax:
origin = "Origin:" OWS origin-list-or-null OWS
origin-list-or-null = %x6E %x75 %x6C %x6C / origin-list
origin-list = serialized-origin *( SP serialized-origin )
serialized-origin = scheme "://" host [ ":" port ]
; <scheme>, <host>, <port> from RFC 3986
At least, I think I have understood that correctly.
The W3C CORS recommendation also says that the format of the Access-Control-Allow-Origin header must follow the same format. From http://www.w3.org/TR/cors/#access-control-allow-origin-response-header:
Access-Control-Allow-Origin = "Access-Control-Allow-Origin" ":" origin-list-or-null | "*"
and links to the Origin header page.
However, I have seen numerous examples (both here on SO and elsewhere) which show ACAO headers without the scheme (i.e. not an exact 'mirror' of the Origin header), e.g. they show this being passed in the request:
Origin: http://www.example.com
and this as the 'correct' response:
Access-Control-Allow-Origin: www.example.com
So is that ACAO header valid? I thought that the ACAO header had to be an exact mirror of the Origin header value (or '*' or 'null').
If I respond with an ACAO header which doesn't include the scheme, should the User Agent accept it? Or is it on a UA-by-UA basis? What if the Origin includes a port number - do I need to include that in the ACAO response header, with or without the scheme?
As you mentioned, RFC 6454 defines the syntax of an origin without ambiguity:
origin = "Origin:" OWS origin-list-or-null OWS
origin-list-or-null = %x6E %x75 %x6C %x6C / origin-list
origin-list = serialized-origin *( SP serialized-origin )
serialized-origin = scheme "://" host [ ":" port ]
and the CORS W3C recommendation explicitly refers to the same definition.
Access-Control-Allow-Origin = "Access-Control-Allow-Origin" ":" origin-list-or-null | "*"
So the following header is not valid:
Access-Control-Allow-Origin: www.example.com
and must not be accepted by the user agent. The valid response that mirrors the request in your example would be:
Access-Control-Allow-Origin: http://www.example.com
The RFC is explicit about this:
When generating an Origin header field, the user agent MUST meet the
following requirements:
Each of the serialized-origin productions in the grammar MUST be
the ascii-serialization of an origin.
This is particularly important because of the same-origin policy:
The same-origin policy is one of the cornerstones of security for
many user agents, including web browsers.
Concerning the second part of the question, about the port number, the ASCII serialization of an origin algorithm states:
If the port part of the origin triple is different from the
default port for the protocol given by the scheme part of the
origin triple:
Append a U+003A COLON code point (":") and the given port, in base ten, to result.
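To make the port rule concrete, here is a minimal sketch of that serialization step in Python. It is an illustration of the algorithm quoted above, not a complete RFC 6454 implementation (it skips the "null" and non-ASCII host cases); the function name and the port table are my own.

```python
# Default ports per scheme, used to decide whether the port is serialized.
DEFAULT_PORTS = {"http": 80, "https": 443}

def serialize_origin(scheme, host, port):
    """Serialize an origin triple per RFC 6454: the port is appended
    only when it differs from the scheme's default port."""
    result = scheme + "://" + host
    if port != DEFAULT_PORTS.get(scheme):
        result += ":" + str(port)
    return result
```

So an Origin of `http://www.example.com:8080` must be mirrored with the port, while the default port 80 is omitted entirely.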
I want to set a conditional response header based on a header I receive from the upstream.
For some reason the map always falls through to the default value.
Configuration:
The upstream service decides whether a header called x-no-iframe-protection should exist.
main nginx:
map $http_x_no_iframe_protection $x_frame_options {
yes "";
default "SAMEORIGIN";
}
server {
...
add_header X-Frame-Options $x_frame_options;
...
}
No matter what I try, I get both headers:
$ curl -v myhost
...
< x-no-iframe-protection: yes
< x-frame-options: SAMEORIGIN
...
Just to clarify: I use x-no-iframe-protection only as a trick to remove x-frame-options in specific cases. I'm OK with it staying in the response (although it is not needed once nginx has evaluated it).
Anyway, how can I make nginx pick it up so that the header value is replaced?
An HTTP transaction contains request headers and response headers. From the context of your question you are setting the value of a response header based on the value of another response header (which was received from upstream).
Nginx stores request headers in variables with names beginning with $http_ and response headers in variables with names beginning with $sent_http_.
In addition, response headers received from upstream may also be stored in variables with names beginning with $upstream_http_.
In your configuration you use the variable $http_x_no_iframe_protection (a request header), whereas you should be using either $sent_http_x_no_iframe_protection or perhaps $upstream_http_x_no_iframe_protection.
All of the Nginx variables are documented here.
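Putting that together, a version of the map keyed on the upstream response header might look like this. This is an untested sketch based on the configuration from the question; note that $upstream_http_ variables are only populated once the upstream response has been received:

```
map $upstream_http_x_no_iframe_protection $x_frame_options {
    yes     "";
    default "SAMEORIGIN";
}

server {
    ...
    # An empty value causes add_header to omit the header entirely.
    add_header X-Frame-Options $x_frame_options;
    ...
}
```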
Try using $upstream_http_x_no_iframe_protection to access the upstream response header.
I am using OpenResty as a proxy server, which may change the response from upstream. The header_filter_by_lua* directive is executed before body_filter_by_lua*, but I change the Content-Length in body_filter_by_lua*, and by that time the headers have already been sent.
So how can I set the correct Content-Length when the upstream response is changed in body_filter_by_lua*?
Thank you!
From https://github.com/openresty/lua-nginx-module#body_filter_by_lua:
When the Lua code may change the length of the response body, then it is required to always clear out the Content-Length response header (if any) in a header filter to enforce streaming output, as in
location /foo {
# fastcgi_pass/proxy_pass/...
header_filter_by_lua_block { ngx.header.content_length = nil }
body_filter_by_lua 'ngx.arg[1] = string.len(ngx.arg[1]) .. "\\n"';
}
I expect that nginx would fall back to chunked transfer encoding (http://greenbytes.de/tech/webdav/rfc2616.html#chunked.transfer.encoding) in this case (I didn't test).
I use Nginx + lua module and body_filter_by_lua directive.
The nginx-lua docs say:
When the Lua code may change the length of the response body, then it is required to always clear out the Content-Length response header (if any) in a header filter to enforce streaming output.
ngx.header.content_length = nil
Could it break keepalive connections?
Could it break requests on unreliable channels?
How will the client know that all data has been read from the server?
Why doesn't nginx force Transfer-Encoding: chunked for these responses?
Update.
As a temporary solution I convert the response to chunked encoding via
ngx.header['Content-Type'] = "text/html"
ngx.header['Content-Length'] = nil
ngx.header['Transfer-Encoding'] = 'chunked'
and in the body filter phase:
-- Length of the current chunk (skip empty chunks, which would
-- otherwise emit a premature zero-length terminator).
if #ngx.arg[1] > 0 then
    local hexlen = string.format("%x", #ngx.arg[1])
    ngx.arg[1] = hexlen .. "\r\n" .. ngx.arg[1] .. "\r\n"
end
-- Last chunk: send the final zero-length chunk sequence.
if ngx.arg[2] then
    ngx.arg[1] = ngx.arg[1] .. "0\r\n\r\n"
end
Update 2.
Use ngx.location.capture!
Nginx is serving only static files, yet some of the file names contain '?'. Yes, the question mark.
All URLs that contain '?' yield 404 even though the file actually exists, e.g.
> GET /foo?lang=ar.html HTTP/1.1
...
...
< HTTP/1.1 404 Not Found
While a file named foo?lang=ar.html does exist in the expected location.
> GET /foo%3flang=ar.html HTTP/1.1
...
...
< HTTP/1.1 200 OK
How do I write a rewrite directive so that every '?' is rewritten to %3F?
You should URL-encode the path to escape special characters such as ? and =.
Specifically, the name of the file you have to request, once encoded, is this:
foo%3Flang%3Dar.html
In JavaScript you can URL-encode the filename with the encodeURIComponent() function; in PHP you have urlencode().
You MUST encode the ? as %3F before making the HTTP call to nginx.
The reason is that the URL RFC (RFC 3986) reserves the ? character (specifically, see section 3.3). Consequently nginx will, correctly, interpret an unescaped ? character as the end of the path part of the URL.
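As an illustration of that encoding, Python's standard urllib.parse.quote with an empty safe set percent-encodes every reserved character, producing exactly the filename form shown above:

```python
from urllib.parse import quote

# Percent-encode every reserved character in the filename,
# including '?' (-> %3F) and '=' (-> %3D).
filename = "foo?lang=ar.html"
encoded = quote(filename, safe="")
print(encoded)  # foo%3Flang%3Dar.html
```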
I am using nginx as a proxy in front of an Apache server.
Here is my config:
location ~ ^/subsite/(.*)$ {
proxy_pass http://127.0.0.1/subsite/$1?$query_string;
}
The problem is that if I send a request containing %20, like mywebsite.com/subsite/variable/value/title/Access%20denied/another/example, the %20 is decoded to a literal space, and Apache ignores everything after /title/Access.
Any idea?
I was able to solve a similar issue: we have an API that requires the search terms to be part of the URL path. Passing the captured value directly to the proxy_pass directive caused it to throw a 502 even though the request was properly URL-encoded.
Here's the solution we came up with:
location ~ /api/search(/.*) {
set $query $1;
proxy_pass http://127.0.0.1:3003$query;
}
The "set" directive seems to keep the URL encoding intact (or re-encodes what the regex captured in $1).
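Another approach worth noting: when proxy_pass is given no URI part at all, nginx forwards the original request URI unmodified, still percent-encoded, which also sidesteps the re-encoding problem. A sketch, assuming the upstream expects the same path as the client sent:

```
location /api/search {
    # No URI after host:port, so nginx passes the original,
    # still-encoded request URI through to the upstream.
    proxy_pass http://127.0.0.1:3003;
}
```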