Docker will not send auth headers over HTTP - nginx

I'm setting up a private Docker registry with NGINX in front for authentication. Both run in containers, which are linked. The nginx image I'm using is jwilder/nginx-proxy. I can ping the registry just fine:
$ http zite.com:5000/v1/_ping
HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: keep-alive
Content-Length: 2
Content-Type: application/json
Date: Thu, 02 Apr 2015 12:13:32 GMT
Expires: -1
Pragma: no-cache
Server: nginx/1.7.11
X-Docker-Registry-Standalone: True
But pushing an image gives me:
FATA[0001] HTTP code 401, Docker will not send auth headers over HTTP
I've tried marking the registry as insecure but to no avail:
--insecure-registry zite.com:5000
I have been able to get this setup running without NGINX in the middle.
My NGINX config file is (where 'dockerregistry' is the name of the linked container):
upstream dockerregistry {
    server dockerregistry:5000;
}

server {
    listen 80;
    server_name zite.com;

    proxy_set_header Host $http_host;
    client_max_body_size 0;

    location / {
        proxy_pass http://dockerregistry;
        auth_basic "Docker Registry";
        auth_basic_user_file /etc/nginx/dockerregistry_users;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://dockerregistry;
    }
}
I think I've read almost every article about this setup, but one thing I cannot figure out is whether HTTP-only access to a private Docker registry is a no-go at all. Is it possible to get it working, or do I have to use SSL certificates? If so, does anyone know a good guide for this setup?

Yes, you need SSL if you want to use (basic) authentication against your registry; there is no way around that.
This was a deliberate design decision: the reasoning was that basic authentication over plain HTTP would give a false sense of security, while the credentials would really be transmitted in the clear and be extremely easy to compromise.
Not allowing for false security was indeed on purpose (though a questionable move, judging by the number of people confused by it).
About setting up SSL, I would just go with the example nginx files in the repo:
https://github.com/docker/docker-registry/tree/master/contrib/nginx
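For reference, a minimal sketch of what the question's config looks like with TLS termination added (the certificate paths are placeholders, not taken from the linked repo):

upstream dockerregistry {
    server dockerregistry:5000;
}

server {
    listen 443 ssl;
    server_name zite.com;

    # placeholder paths; point these at your real certificate and key
    ssl_certificate     /etc/nginx/ssl/zite.com.crt;
    ssl_certificate_key /etc/nginx/ssl/zite.com.key;

    proxy_set_header Host $http_host;
    client_max_body_size 0;  # image layers can be arbitrarily large

    location / {
        proxy_pass http://dockerregistry;
        auth_basic "Docker Registry";
        auth_basic_user_file /etc/nginx/dockerregistry_users;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://dockerregistry;
    }
}

With TLS terminated at nginx, the Docker daemon talks to the registry over HTTPS and will happily send the basic auth headers.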

Related

SSE event data gets cut off when using Nginx

I am implementing a web interface using React and Flask. One component of this interface is a server-sent event (SSE) stream that is used to update data in the front end whenever it is updated in the database. This data is quite large, and one event could contain over 16000 characters.
The React front end uses a reverse proxy to the Flask back end in order to forward API requests to it. When accessing the back end directly, the SSEs work fine and the data is pushed as expected. However, when using Nginx to serve the reverse proxy, something weird happens. It seems like nginx buffers and chunks the event stream and does not send the result until it has accumulated around 16000 characters. If the data from an event is smaller than this, the front end has to wait until more events are sent. If the data from an event is larger, the new-lines that tell the EventSource that a message has been received aren't part of the event. This results in the front end receiving event X only when event X+1 is sent (that is, when the new-lines actually appear in the stream).
This is the response header when using nginx:
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 19 Nov 2020 13:10:49 GMT
Content-Type: text/event-stream; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
This is the response header when running flask run and accessing the port directly (I'm going to use gunicorn later but I tested with flask run to make sure nginx was the problem and not gunicorn):
HTTP/1.0 200 OK
Content-Type: text/event-stream; charset=utf-8
Cache-Control: no-transform
Connection: keep-alive
Connection: close
Server: Werkzeug/1.0.1 Python/3.7.6
Date: Thu, 19 Nov 2020 13:23:54 GMT
This is the nginx config in sites-available:
upstream backend {
    server 127.0.0.1:6666;
}

server {
    listen 7076;

    root <path/to/react/build/>;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api {
        include proxy_params;
        proxy_pass http://backend;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding off;
    }
}
This config is based on the answer mentioned here.
As you can see, both proxy_buffering and chunked_transfer_encoding are off, so I don't understand why this is happening. I have also tried changing the buffer sizes, but without any luck.
Can anybody tell me why this is happening? How do I fix it such that using nginx results in the same behaviour as when I don't use it? Thank you.
The above-mentioned configuration actually did work. However, the server I was using contained another nginx configuration that was overriding mine. When the SSE-specific parameters were added to that configuration as well, things started working as expected. So the configuration in this question was correct all along.
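For anyone hitting the same symptom, these are the SSE-specific directives from the question's own config; they have to end up in whichever nginx location actually handles the request, not just in your own file:

location /api {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;             # don't hold events back in nginx
    proxy_cache off;
    chunked_transfer_encoding off;
}

Alternatively, the back end can send an X-Accel-Buffering: no response header, which disables proxy buffering for that one response without touching the nginx config.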

Is it possible to set nginx response cookies to HttpOnly without needing to rebuild?

The intended production environment will be utilising an AWS EKS nginx ingress controller, so it would be preferable not to require a bespoke build of nginx.
For local development the docker image https://hub.docker.com/r/lautre/nginx-cookie-flag has been installed, which should have the cookie-flag module pre-installed.
Both methods suggested in the example at https://geekflare.com/httponly-secure-cookie-nginx/ have been tried, but don't seem to be working:
http {
    ...
    proxy_cookie_path / "/; HTTPOnly; Secure";
    ...
}

And:

server {
    ...
    proxy_cookie_path / "/; HTTPOnly; Secure";
    ...
}
Specifically, the cookie "atlassian.xsrf.token" is never flagged as HttpOnly; it is generated by a Jira plugin within the web app: https://confluence.atlassian.com/adminjiracloud/using-the-issue-collector-776636529.html
Questions:
Most examples found are the same as the above; is the external module the only solution available?
Does the nginx Plus version have this module baked in, allowing it to be referenced by default?
You can also solve this using the add_header directive and manually setting the cookie.
Example:
location / {
    add_header Set-Cookie 'MyCookie=SomeValue; Path=/; HttpOnly; Secure';
    proxy_pass http://1.2.3.4;
}
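One caveat worth noting: add_header appends an extra Set-Cookie header to the response; it does not modify cookies set by the backend, so it won't add flags to a cookie such as atlassian.xsrf.token that the upstream application issues itself.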
No need to compile nginx; just use:
proxy_cookie_flags ~ secure httponly;
You might need to update your version of nginx: this directive was not yet available as of nginx 1.12; it was added in nginx 1.19.3.
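A sketch of where the directive goes (the upstream address is a placeholder, not from the question):

location / {
    proxy_pass http://backend;             # placeholder upstream
    proxy_cookie_flags ~ secure httponly;  # flag every cookie the backend sets
}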

NGINX forwarding SMTP auth credentials to next server?

I'm using NGINX's SMTP relay capabilities, as described here, to proxy a commercial SMTP server, effectively "white-labelling" the relay address (including the cert) while preserving authentication.
I configured NGINX as follows:
mail {
    server_name smtp.proxy.mydomain.net;
    auth_http 127.0.0.1:9000/auth;
    proxy_pass_error_message on;
    xclient off;

    smtp_capabilities "8BITMIME" "STARTTLS" "PIPELINING" "ENHANCEDSTATUSCODES";

    starttls on;
    ssl_certificate /etc/letsencrypt/live/smtp.proxy.mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/smtp.proxy.mydomain.net/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_session_cache shared:SSL:10m;

    server {
        listen 587;
        protocol smtp;
        smtp_auth login plain;
    }
}
The cert is being used by NGINX when the client requests STARTTLS, and the client (in my case swaks) sends the AUTH LOGIN credentials to NGINX.
NGINX then calls the mail_auth_http_module fine. I have a simple Python Flask app that returns headers indicating that auth is always accepted, plus the backend server address. You can see the auth server's response to a curl request here:
$ curl -v localhost:9000/auth
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9000 (#0)
> GET /auth HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.61.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 2
< Auth-Status: OK
< Auth-Server: 52.214.232.65
< Auth-Port: 587
< Server: Werkzeug/0.15.2 Python/3.6.8
< Date: Tue, 07 May 2019 23:10:29 GMT
<
* Closing connection 0
NGINX is then attempting onward delivery to the server, as expected. I can see it's hitting the correct server endpoint.
Unfortunately, in my case the server requires (and will always require) the same login credentials that the client originally presented. At this point delivery fails with a "5.7.1 Authorization required", because NGINX is not supplying them onward.
I suspect that NGINX assumes that, because it has called the auth module, the server will not require further credentials, and therefore does not supply them.
Is there a way to get NGINX to "pass through" the Auth credentials to the server?
Since nginx 1.19.4, native SMTP backend authentication (SMTP AUTH) is supported.
See: http://nginx.org/en/docs/mail/ngx_mail_proxy_module.html#proxy_smtp_auth
You can enable backend auth with the proxy_smtp_auth on; directive.
For example:
server {
    server_name smtp.company.com;
    listen 587;
    protocol smtp;
    proxy_smtp_auth on; # <- enable native SMTP AUTH
    smtp_auth plain login cram-md5;
    starttls on;
}
According to the nginx mailing list, nginx doesn't pass the AUTH command to the backend for SMTP.
There is an nginx patch that does this for Postfix, but it's not official.
Alternatively, you can try this OpenResty solution.

Nginx reverse proxy subdirectory rewrites for sourcegraph

I'm trying to have a self-hosted Sourcegraph server served on a subdirectory of my domain, using a reverse proxy to add an SSL cert.
The goal is to have http://example.org/source serve the Sourcegraph server.
My rewrites and reverse proxy look like this:
location /source {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;
    rewrite ^/source/?(.*) /$1 break;
    proxy_pass http://localhost:8108;
}
The problem I am having is that upon calling http://example.org/source I get redirected to http://example.org/sign-in?returnTo=%2F
Is there a way to rewrite the response of sourcegraph to the correct subdirectory?
Additionally, where can I debug the rewrite directive? I would like to follow the changes it does to understand it better.
-- Edit:
I know my rewrite-based approach is probably wrong, so I'm trying the sub_filter module right now.
I captured the response of Sourcegraph using tcpdump and analyzed it using Wireshark, so I am at:
GET /sourcegraph/ HTTP/1.0
Host: 127.0.0.1:8108
Connection: close
Upgrade-Insecure-Requests: 1
DNT: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Referer: https://example.org/
Accept-Encoding: gzip, deflate, br
Accept-Language: de,en-US;q=0.9,en;q=0.8
Cookie: sidebar_collapsed=false;
HTTP/1.0 302 Found
Cache-Control: no-cache, max-age=0
Content-Type: text/html; charset=utf-8
Location: /sign-in?returnTo=%2Fsourcegraph%2F
Strict-Transport-Security: max-age=31536000
Vary: Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Trace: #tracer-not-enabled
X-Xss-Protection: 1; mode=block
Date: Sat, 07 Jul 2018 13:59:06 GMT
Content-Length: 58
Found.
Using rewrite here causes extra processing overhead and is totally unnecessary.
proxy_pass works like this:
If you proxy_pass to a naked URL, i.e. nothing at all after the domain/IP/port, the full client request URI gets appended and passed to the proxy.
Add anything, even just a slash, to the proxy_pass URL, and whatever you add replaces the part of the client request URI that matches the URI of that location block.
So if you want to lose the /source part of your client request, it needs to look like this:
location /source/ {
    proxy_pass http://localhost:8108/;
    .....
}
Now requests will be proxied like this:
example.com/source/ -> localhost:8108/
example.com/source/files/file.txt -> localhost:8108/files/file.txt
It's important to point out that Nginx isn't just dropping /source/ from the request; it's substituting the entire proxy_pass URI. That's not as clear when it's just a trailing slash, so to better illustrate, if we change proxy_pass to proxy_pass http://localhost:8108/graph/; then the requests are processed like this:
example.com/source/ -> localhost:8108/graph/
example.com/source/files/file.txt -> localhost:8108/graph/files/file.txt
If you are wondering what happens when someone requests example.com/source (without the trailing slash): this works provided you have not set the merge_slashes directive to off, as Nginx will add the trailing / to proxied requests.
If you have Nginx in front of another webserver running on port 8108 and serve its content by proxy_pass-ing everything from a subdirectory, e.g. /subdir, you might run into the issue that the service on port 8108 serves HTML pages that include resources, call its own APIs, etc. based on absolute URLs. These calls will omit the /subdir prefix and thus won't be routed to the service on port 8108 by nginx.
One solution is to make the webserver on port 8108 serve HTML that includes the base href attribute, e.g.
<head>
    <base href="https://example.com/subdir">
</head>
which tells a client that all links are relative to that path (see https://www.w3schools.com/tags/att_base_href.asp)
Sometimes this is not an option, though: maybe the webserver is something you just spin up from an external docker image, or maybe you just don't see a reason to tamper with a service that runs perfectly well standalone. A solution that only requires changes to the nginx in front is to use the Referer header to determine whether the request was initiated by a resource located under /subdir. If it was, you can rewrite the request to be prefixed with /subdir and then redirect the client to that location:
location / {
    if ($http_referer = "https://example.com/subdir/") {
        rewrite ^/(.*) https://example.com/subdir/$1 redirect;
    }
    ...
}

location /subdir/ {
    proxy_pass http://localhost:8108/;
}
Or something like this, if you prefer a regex to let you omit the hostname:
if ($http_referer ~ "^https?://[^/]+/subdir/") {
    rewrite ^/(.*) https://$http_host/subdir/$1 redirect;
}

Error with IP and Nginx as reverse proxy

I configured my Nginx as a simple reverse proxy.
I'm just using a basic setting:
location / {
    proxy_pass foo.dnsalias.net;
    proxy_pass_header Set-Cookie;
    proxy_pass_header P3P;
}
The problem is that after some time (a few days) the site behind nginx becomes inaccessible. Indeed, nginx tries to call a bad IP (the site behind nginx is at my home behind my box, and I'm using a dyn-dns service because my IP is not fixed). The dyn-dns name is always valid (I can reach my site directly), but for some obscure reason Nginx gets stuck on the old address.
So, as said, nginx just gives me a 504 Gateway Time-out after some time. It looks like the error appears when my IP at home changes.
Here is a sample of the error log:
[error] ... upstream timed out (110: Connection timed out) while connecting to upstream, client: my.current.ip, server: myreverse.server.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://my.old.home.ip", host: "myreverse.server.com"
So do you know why nginx is using the IP instead of the domain name?
If the proxy_pass value doesn't contain variables, nginx resolves domain names to IPs while loading the configuration and caches them until you restart/reload it. This is quite understandable from a performance point of view.
But in the case of a dynamic DNS record change, this may not be desired. So two options are available, depending on the license you possess.
Commercial version (Nginx+)
In this case, use an upstream block and specify which domain name needs to be resolved periodically by a specific resolver. Record TTLs can be overridden using the valid=time parameter. The resolve parameter of the server directive forces the domain name to be resolved periodically.
http {
    resolver X.X.X.X valid=5s;

    upstream dynamic {
        server foo.dnsalias.net resolve;
    }

    server {
        server_name www.example.com;

        location / {
            proxy_pass http://dynamic;
            ...
        }
    }
}
This feature was added in Nginx+ 1.5.12.
Community version (Nginx)
In that case, you will also need a custom resolver, as in the previous solution. But to work around the unavailability of the upstream solution, you need to use a variable in your proxy_pass directive. That way nginx will use the resolver too, honoring the caching time specified with the valid parameter. For instance, you can use the domain name as a variable:
http {
    resolver X.X.X.X valid=5s;

    server {
        server_name www.example.com;
        set $dn "foo.dnsalias.net";

        location / {
            proxy_pass http://$dn;
            ...
        }
    }
}
Then, you will likely need to add a proxy_redirect directive to handle redirects.
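A minimal sketch of what that might look like in the same location block (the exact mapping is an assumption, not part of the original answer):

location / {
    proxy_pass http://$dn;
    # rewrite the Location header of backend redirects so they point
    # at this server instead of at the upstream host
    proxy_redirect http://$dn/ /;
}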
Maybe check this out http://forum.nginx.org/read.php?2,215830,215832#msg-215832
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
In such a setup, the IP address of "foo.example.com" will be looked up dynamically and the result will be cached for 5 minutes.
