I am using the NGINX SMTP relay capabilities described here to proxy a commercial SMTP server, effectively "white-labelling" the relay address (including the certificate) while preserving authentication.
I have configured NGINX as follows:
mail {
    server_name smtp.proxy.mydomain.net;
    auth_http 127.0.0.1:9000/auth;
    proxy_pass_error_message on;
    xclient off;
    smtp_capabilities "8BITMIME" "STARTTLS" "PIPELINING" "ENHANCEDSTATUSCODES";
    starttls on;
    ssl_certificate /etc/letsencrypt/live/smtp.proxy.mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/smtp.proxy.mydomain.net/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_session_cache shared:SSL:10m;

    server {
        listen 587;
        protocol smtp;
        smtp_auth login plain;
    }
}
The certificate is used by NGINX when the client requests STARTTLS, and the client (swaks, in my case) sends its AUTH LOGIN credentials to NGINX.
NGINX then calls the auth endpoint (ngx_mail_auth_http_module) successfully. I have a simple Python Flask app that returns headers indicating that auth is always accepted, along with the backend server address. You can see the auth server's response to a curl request here:
$ curl -v localhost:9000/auth
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9000 (#0)
> GET /auth HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.61.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 2
< Auth-Status: OK
< Auth-Server: 52.214.232.65
< Auth-Port: 587
< Server: Werkzeug/0.15.2 Python/3.6.8
< Date: Tue, 07 May 2019 23:10:29 GMT
<
* Closing connection 0
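For completeness, a minimal Flask sketch that produces headers like those above would look roughly like this (illustrative only, not my exact code):

#!/usr/bin/env python3
# Illustrative auth_http handler sketch: always accepts the login and points
# nginx at a fixed backend SMTP server.
from flask import Flask, make_response

app = Flask(__name__)

@app.route('/auth')
def auth():
    resp = make_response('OK')
    resp.headers['Auth-Status'] = 'OK'              # accept the login
    resp.headers['Auth-Server'] = '52.214.232.65'   # backend SMTP server
    resp.headers['Auth-Port'] = '587'               # backend SMTP port
    return resp

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=9000)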
NGINX then attempts onward delivery to the backend server, as expected; I can see it hitting the correct server endpoint.
Unfortunately, in my case the backend requires (and will always require) the same login credentials that the client originally presented. At this point delivery fails with a "5.7.1 Authorization required", because NGINX does not supply them onward.
I suspect NGINX assumes that, because it has already called the auth module, the backend will not require further credentials, and therefore does not send them.
Is there a way to get NGINX to "pass through" the Auth credentials to the server?
Since nginx 1.19.4, native SMTP backend authentication (SMTP AUTH) is supported.
See: http://nginx.org/en/docs/mail/ngx_mail_proxy_module.html#proxy_smtp_auth
You can enable backend auth with the proxy_smtp_auth on; directive.
For example:
server {
    server_name smtp.company.com;
    listen 587;
    protocol smtp;

    proxy_smtp_auth on;   # <- enable native SMTP AUTH
    smtp_auth plain login cram-md5;
    starttls on;
}
According to the nginx mailing list, nginx does not pass the AUTH command to the backend for SMTP.
There is an nginx patch that does this for Postfix, but it is not official.
Alternatively, you can try this OpenResty solution.
Related
I'm sometimes on a very restrictive network that only allows HTTP/HTTPS on ports 80/443. I have an OpenVPN server set up and ready, and some services behind Nginx Proxy Manager. I now want to set up a Squid HTTP proxy for OpenVPN behind Nginx. I can't use sslh because HTTP is only allowed on port 80 and HTTPS on 443. If I make a default config for Nginx:
server {
    set $forward_scheme http;
    set $server "http_proxy";
    set $port 3128;

    listen 80;
    listen [::]:80;
    server_name squid.domain.tld;

    access_log /data/logs/proxy-host-41_access.log proxy;
    error_log /data/logs/proxy-host-41_error.log warn;

    location / {
        include conf.d/include/proxy.conf;
    }

    include /data/nginx/custom/server_proxy[.]conf;
}
For Squid I have:
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_port 3128 accel allow-direct
http_access allow all
The proxy works when used standalone. The request path is from Nginx to Squid.
If I try the official OpenVPN Android client, I get HTTP code 400 and nothing in the log.
I can't think of anything else that would explain why it doesn't work.
It would have worked if I had compiled nginx with HTTP CONNECT protocol support.
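For reference, a rough sketch of what that could look like, assuming nginx were rebuilt with the third-party ngx_http_proxy_connect_module (the proxy_connect* directives below come from that module, not from stock nginx, and the resolver address and ports are placeholders):

server {
    listen 80;
    server_name squid.domain.tld;

    resolver 1.1.1.1;              # the module resolves CONNECT targets itself
    proxy_connect;                 # allow the HTTP CONNECT method
    proxy_connect_allow 443 1194;  # ports clients may tunnel to (1194 = OpenVPN, assumed)

    location / {
        # anything that is not a CONNECT request can still be handed to Squid
        proxy_pass http://http_proxy:3128;
    }
}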
I've been facing some issues with nginx and PUT redirects:
Let's say I have an HTTP service sitting behind an nginx server (assume HTTP 1.1).
The client does a PUT /my/api with Expect: 100-continue.
My service does not send a 100 Continue, but instead sends a 307 redirect to another endpoint (in this case, S3).
However, nginx, for some unknown reason, sends a 100 Continue before serving the redirect, so the client proceeds to upload the whole body to nginx before the redirect is served. This causes the client to effectively transfer the body twice, which isn't great for multi-gigabyte uploads.
I am wondering if there is a way to:
1. Prevent nginx from sending 100 Continue unless the service actually sends it.
2. Allow requests with an arbitrarily large Content-Length without having to set client_max_body_size to a large value (to avoid 413 Request Entity Too Large).
Since my service only sends redirects and never sends 100 Continue, the request body is never supposed to reach nginx. Having to set client_max_body_size and wait for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
I've been able to do this with Apache, but not with nginx. Apache used to have the same behavior before it was fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - I am wondering if nginx has the same issue.
Any pointers appreciated :)
EDIT 1: Here's a sample setup to reproduce the issue:
An nginx listening on port 80, forwarding to localhost on port 9999
A simple HTTP server listening on port 9999, that always returns redirects on PUTs
nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    server {
        listen 80;
        server_name frontend;

        keepalive_timeout 75s;
        keepalive_requests 100;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:9999/;
        }
    }
}
I'm running the above with
docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
Simple python3 HTTP server.
#!/usr/bin/env python3
import sys
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()
Test results:
Uploading directly to the Python server works as expected. The Python server does not send a 100 Continue on PUTs; it directly sends a 307 redirect before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
<
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
* Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
>
Doing the same thing through nginx fails with 413 Request Entity Too Large, even though the body should never need to go through nginx.
After adding client_max_body_size 1G; to the config, the result is different, but nginx still tries to buffer the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
Notice how nginx sends an HTTP/1.1 100 Continue.
With this simple Python server, the request subsequently fails because the Python server closes the connection right after serving the redirect, which causes nginx to return the 502 due to a broken pipe:
127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
So as far as I can see, this seems exactly like the Apache issue https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is now addressed in newer versions). I am not sure how to circumvent this with nginx.
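A partial mitigation would be to disable the body-size check and request buffering so nginx at least streams the body to the upstream instead of buffering it (sketch below, adapted from the config above); as far as I can tell, though, nginx still answers the Expect with its own 100 Continue:

location / {
    client_max_body_size 0;          # 0 disables the size check entirely
    proxy_request_buffering off;     # stream the request body to the upstream
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9999/;
}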
I have the following API: https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev/user/${user-id} which proxies to a Lambda function.
When the user hits /user/1234, the function checks whether 1234 exists and returns the info for that user, or a redirect to /users.
What I want is to create a redirect with nginx. For SEO, I want a simple 302: return 302 the-url. If someone goes to mySite.com, it should redirect to https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev.
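Roughly, a minimal form of what I am after (a sketch, not my exact config):

server {
    listen 80;
    server_name mysite.com;

    location / {
        # plain 302 to the API Gateway stage URL
        return 302 https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev$request_uri;
    }
}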
No matter what I do, I always receive a 403 with the following:
x-amzn-errortype: MissingAuthenticationTokenException
x-amz-apigw-id: QrFd6GByoJHGf1g=
x-cache: Error from cloudfront
via: 1.1 dfg35721fhfsgdv36vs52fa785f5g.cloudfront.net (CloudFront)
I would appreciate any help.
If you are using a reverse proxy setup in nginx, add the line below to the config file and then reload or restart nginx.
proxy_set_header Host $proxy_host;
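For example, in the proxy location block (a sketch; the surrounding configuration is assumed):

location / {
    proxy_pass https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev;
    proxy_set_header Host $proxy_host;   # send the API Gateway host, not the client's Host header
    proxy_ssl_server_name on;            # send SNI during the upstream TLS handshake
}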
I ran into the same issue trying to run a proxy on Nginx towards an API Gateway that triggers a Lambda function on AWS. When I read the Nginx error logs, I noticed that it had to do with the SSL version Nginx was using to connect to API Gateway; the error was the following:
*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
I managed to fix it by adding this line:
proxy_ssl_protocols TLSv1.3;
I attach the complete Nginx configuration in case anyone wants to build a static IP proxy on Nginx that redirects traffic towards a Lambda function:
server {
    listen 443 ssl;
    server_name $yourservername;

    location / {
        proxy_pass https://$lambdafunctionaddress;
        proxy_ssl_server_name on;
        proxy_redirect off;
        proxy_ssl_protocols TLSv1.3;
    }

    ssl_certificate /home/ubuntu/.ssl/ca-chain.crt;
    ssl_certificate_key /home/ubuntu/.ssl/server.key;
}
Also, it is important to make sure that all the required information is included in the request:
curl -X POST https://$yourservername/env/functionname -H "Content-Type: application/json" -H "x-api-key: <yourapikey>" -d $payload
I'm setting up a private Docker registry with NGINX in front for authentication, both running in linked containers. The nginx image I'm using is jwilder/nginx-proxy. I can ping the registry just fine:
>http zite.com:5000/v1/_ping
HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: keep-alive
Content-Length: 2
Content-Type: application/json
Date: Thu, 02 Apr 2015 12:13:32 GMT
Expires: -1
Pragma: no-cache
Server: nginx/1.7.11
X-Docker-Registry-Standalone: True
But pushing an image gives me:
FATA[0001] HTTP code 401, Docker will not send auth headers over HTTP
I've tried marking the registry as insecure but to no avail:
--insecure-registry zite.com:5000
I have been able to get this setup running without NGINX in the middle.
My NGINX config file is (where 'dockerregistry' is the name of the linked container):
upstream dockerregistry {
    server dockerregistry:5000;
}

server {
    listen 80;
    server_name zite.com;

    proxy_set_header Host $http_host;
    client_max_body_size 0;

    location / {
        proxy_pass http://dockerregistry;
        auth_basic "Docker Registry";
        auth_basic_user_file /etc/nginx/dockerregistry_users;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://dockerregistry;
    }
}
I think I've read almost every article about this setup, but one thing I cannot figure out is whether HTTP-only access to a private Docker registry is a no-go at all. Is it possible to get it working, or do I have to use SSL certificates? If so, does anyone know a good guide for this setup?
Yes, you need SSL if you want to use (basic) authentication against your registry (and there is no way around that).
This was a deliberate design decision: the reasoning was that basic authentication over plain http would give a false sense of security, while credentials would really be transmitted in the clear and be extremely easy to compromise.
Not allowing for false security was indeed on purpose (though a questionable move, judging by the number of people being confused by that).
About setting up SSL, I would just go with the example nginx files in the repo:
https://github.com/docker/docker-registry/tree/master/contrib/nginx
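As a rough starting point, the same proxy config from the question, moved behind TLS (a sketch; the certificate paths are placeholders and the contrib files above are more complete):

server {
    listen 443 ssl;
    server_name zite.com;

    ssl_certificate /etc/nginx/ssl/zite.com.crt;        # placeholder path
    ssl_certificate_key /etc/nginx/ssl/zite.com.key;    # placeholder path

    proxy_set_header Host $http_host;
    client_max_body_size 0;

    location / {
        proxy_pass http://dockerregistry;
        auth_basic "Docker Registry";
        auth_basic_user_file /etc/nginx/dockerregistry_users;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://dockerregistry;
    }
}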
I have set up a POP3 reverse proxy that is being used to serve multiple domains. I was thinking of passing the hostname of the request to the auth script as a custom header, but I don't know how.
The relevant section of the nginx.conf file is:
mail {
    server_name mail.example.com;
    auth_http 10.169.15.199:80/auth_script.php;
    auth_http_timeout 5000;
    proxy on;
    proxy_pass_error_message on;
    pop3_capabilities "LAST" "TOP" "USER" "PIPELINING" "UIDL";

    server {
        protocol pop3;
        listen 110;
        pop3_auth plain;
        auth_http_header X-Auth-Port 110;
        auth_http_header User-Agent "Nginx POP3/IMAP4 proxy";
        auth_http_header my_hostname $host;
    }
}
I tried with this:
auth_http_header my_hostname $host;
expecting nginx to replace $host with the actual hostname, but that does not happen: the auth script receives $_SERVER[MY_HOSTNAME] = '$host' (the literal string).
Is there any way I can accomplish this?
The only way to get the host during auth is to authenticate as user#hostname.tld and split the hostname part out of the Auth-User header in the auth script.
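For illustration only (the original auth script is PHP; this sketch is Python and the domain-to-backend mapping is an assumption), the idea is to split the domain out of the Auth-User header that nginx sends to the auth_http endpoint:

#!/usr/bin/env python3
# Sketch of an auth_http handler that splits "user#hostname.tld" into the
# real login and the domain, then picks a backend per domain.
from flask import Flask, request, make_response

app = Flask(__name__)

# hypothetical mapping from domain to backend POP3 server
BACKENDS = {'mail.example.com': '10.0.0.10', 'mail.other.tld': '10.0.0.20'}

@app.route('/auth_script')
def auth():
    raw_user = request.headers.get('Auth-User', '')
    # "alice#mail.example.com" -> user "alice", domain "mail.example.com"
    user, _, domain = raw_user.partition('#')

    resp = make_response('')
    resp.headers['Auth-Status'] = 'OK'
    resp.headers['Auth-Server'] = BACKENDS.get(domain, '10.0.0.10')
    resp.headers['Auth-Port'] = '110'
    return resp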
If you want to proxy auth to multiple domains, I wrote a module in Perl.