I'm trying to deploy an application in Kubernetes that connects to an S3 object bucket. The S3 bucket is exposed by a storage API secured with a self-signed certificate.
The application that is supposed to connect to this bucket is already packaged in a container, and it's not easy to modify it to include the CA of the S3 bucket. The only thing I can change is the S3 endpoint it talks to.
Since all of this is deployed in Kubernetes, I thought it would be possible to deploy an nginx pod to act as a proxy, negotiate SSL with the bucket, and expose it through a Kubernetes service.
I searched Google and found an article that explains how to use nginx as a proxy.
This is my nginx configuration:
http {
    default_type text/html;
    #access_log /;
    sendfile on;
    keepalive_timeout 65;

    proxy_cache_path /tmp/ levels=1:2 keys_zone=s3_cache:10m max_size=500m
                     inactive=60m use_temp_path=off;

    server {
        listen 80;

        # Configure your domain name here:
        server_name _;

        # Configure your Object Storage bucket URL here:
        set $bucket "myobjectstoragebucket.int.company.net";

        # This configuration provides direct access to the Object Storage bucket:
        location / {
            resolver 1.1.1.1;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Connection "";
            proxy_set_header Authorization '';
            proxy_set_header Host $bucket;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-meta-server-side-encryption;
            proxy_hide_header x-amz-server-side-encryption;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            proxy_intercept_errors on;
            add_header Cache-Control max-age=31536000;
            proxy_ssl_verify off;
            proxy_pass http://$bucket;
        }
    }
}
It doesn't seem to be working. The client logs show that it tries to connect to the upstream endpoint (myobjectstoragebucket.int.company.net) and fails SSL verification.
In the client I've configured the Kubernetes service of this nginx proxy as the endpoint, and that part seems to work, since the request reaches the S3 bucket.
Is the idea even possible? Sorry if this is nonsense; I don't know much about NGINX or S3.
Thanks for the help.
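To make the idea concrete, this is roughly what I'm aiming for: the application talks plain HTTP to the nginx Service inside the cluster, and nginx performs the TLS handshake with the bucket itself. A minimal sketch only (the resolver address is a placeholder for the cluster DNS IP, and proxy_ssl_verify is off purely because the upstream certificate is self-signed):

server {
    listen 80;
    server_name _;

    set $bucket "myobjectstoragebucket.int.company.net";

    location / {
        resolver 10.96.0.10;          # cluster DNS Service IP (placeholder)
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $bucket;

        proxy_ssl_server_name on;     # send SNI to the storage API
        proxy_ssl_verify off;         # upstream certificate is self-signed
        proxy_pass https://$bucket;   # https:// makes nginx negotiate TLS with the bucket
    }
}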
I have a Nuxt.js application on an Ubuntu 20.04 server. I use Nginx to serve my Nuxt application as follows:
server {
    client_max_body_size 300M;
    root /var/www/app/dist;
    server_name example.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
My Nuxt application runs with npm on port 8080, the Nginx reverse proxy passes user requests to localhost:8080, and everything works fine.
Now I want to access the JS service worker file (named p-sw.js), but when I try to access it via my website address (https://example.com/p-sw.js), the Nginx reverse proxy returns a 404. The file is in the dist folder (see the Nginx configuration).
Can anybody explain how to set up the Nginx reverse proxy so it keeps working as before, but also serves the service worker file when I enter its address (https://example.com/p-sw.js) in the browser?
Finally, I solved it!
The Nginx config must look like this:
upstream backend {
    server localhost:3000;
}

server {
    server_name example.com;
    client_max_body_size 300M;
    root /var/www/app/dist;

    location /p-sw.js {
        # Serve the service worker from the root directory; fall back to the backend if missing
        try_files $uri @backend;
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location @backend {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In this configuration I solved the issue by defining a location /p-sw.js block for the service worker file: try_files serves it straight from the root directory and only falls back to the named @backend location if the file is missing. The Nuxt routes keep using the same proxy_pass!
I have a frontend app hosted on Firebase Hosting. I also have a backend API running on a DigitalOcean droplet. Nginx is installed on the droplet and routes requests either to the frontend app or to the backend API. My nginx configuration file looks like the following:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name myapp.com *.myapp.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://150.101.64.193:80;
        proxy_read_timeout 90;
        proxy_redirect http://150.101.64.193:80 https://myapp.com;
    }

    location /api/ {
        proxy_pass http://localhost:5000;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:5000 https://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Firebase tells me to copy two TXT records into my DNS settings, but I could only get that to work if I mapped the domain name directly to the frontend app. Instead, my DNS settings map the domain name to the IP address of the droplet. The proxy on the droplet should then forward the request to either the frontend or the backend depending on the route, e.g.
www.myapp.com/blah redirects to Firebase app
www.myapp.com/api/blah redirects to API
Currently Firebase reports that my custom domain needs setup because there are no corresponding TXT records. This is the first time I have tried to deploy a web app, so I am unsure whether this setup will work.
If you are using NGINX to proxy traffic to a Firebase Hosting site, you likely want to proxy that traffic to the shared domain e.g. <site>.web.app instead of re-proxying back to the same domain that is serving traffic.
We don't recommend putting proxies in front of Firebase Hosting as that defeats the purpose of Firebase Hosting's global CDN, but it should work.
You could also use Cloud Functions or Cloud Run to build your API surface directly instead of proxying to an NGINX backend.
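As a rough sketch of that first suggestion (myapp-12345.web.app is a placeholder for the actual Hosting site), the droplet's location / block might look something like:

location / {
    # Proxy to the Hosting shared domain instead of back to myapp.com
    proxy_set_header Host myapp-12345.web.app;   # Hosting routes requests by Host header
    proxy_ssl_server_name on;                    # send SNI so the correct certificate is served
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://myapp-12345.web.app;
}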
I apologize in advance, I'm extremely new to Nginx.
I have two VPC-based AWS Elasticsearch domains, which we'll call dev and prod. I want both domains to be inaccessible from the open internet, but available to some networks outside the VPC. To that end, I set them up as VPC-based Elasticsearch domains and planned to use a reverse proxy accessible only from the networks I want. I've set up the dev cluster, which has no authentication, using an NGINX reverse proxy with the following config:
events {
}

http {
    server {
        listen 80;
        server_name kibana-dev.[domain name];

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/;
            proxy_redirect https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/ https://kibana-dev.[domain name]/;
        }

        location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/app/opendistro-alerting) {
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
        }
    }
}
This works fine.
For the prod domain, however, I'm running into an issue. I want all users, even those that use the proxy, to have to authenticate with AWS Cognito (so I don't just want to, for example, create an access policy with an IP exception for the proxy's IP address, as that bypasses Cognito).
I have used a similar NGINX config for my "prod" Elasticsearch instance, but with no luck. The Cognito login page redirects to the VPC-based URL after authentication. I've tried manually adding my proxy's URL to the Cognito app's Callback URLs, but it still redirects by default to the VPC-based URL. I've also tried manually changing the redirect URI in the Cognito URL to refer to my proxy, but I've found that after authenticating I'm redirected to the Cognito login page again - perhaps a header or something isn't getting through?
How can I (or can I even) get this working in Nginx, so that users can access the "prod" Elasticsearch domain while still being required to authenticate with AWS Cognito?
Thank you!
Doh! I should have read the documentation more carefully. AWS provides an example Nginx conf file for a Kibana proxy with Cognito:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_plugin/kibana redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_plugin/kibana {
        # Forward requests to Kibana
        proxy_pass https://$kibana_host/_plugin/kibana;

        # Handle redirects to Cognito
        proxy_redirect https://$cognito_host https://$host;

        # Update cookie domain and path
        proxy_cookie_domain $kibana_host $host;
        proxy_cookie_path / /_plugin/kibana/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://$cognito_host;

        # Handle redirects to Kibana
        proxy_redirect https://$kibana_host https://$host;

        # Update cookie domain
        proxy_cookie_domain $cognito_host $host;
    }
}
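Note that $kibana_host and $cognito_host are not defined in that example; they stand for the Elasticsearch VPC endpoint and the Cognito user-pool auth domain. One way to fill them in (the hostnames and resolver address below are placeholders) is to set them near the top of the server block:

    resolver 10.0.0.2 valid=30s;   # VPC DNS resolver (placeholder); needed because proxy_pass uses variables
    set $kibana_host vpc-mydomain-abc123.eu-west-1.es.amazonaws.com;
    set $cognito_host mydomain.auth.eu-west-1.amazoncognito.com;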
I'm stuck configuring a simple reverse proxy on AWS.
Since we have one host (an nginx reverse proxy) serving public access, I decided to follow the rules and created the following configuration.
server {
    listen 9990;
    server_name project-wildfly.domain.me;
    access_log /var/log/nginx/wildfly.access.log;
    error_log /var/log/nginx/wildfly.error.log;
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    root /var/www/;
    index index.html index.htm;

    location /console {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/console;
    }

    location /management {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/management;
    }
}
This will serve the admin console and I'm able to log in with the user. Then this message appears:
Access Denied
Insufficient privileges to access this interface.
Nothing shows up in the error log. Thanks for any hint!
I had the same issue when configuring WildFly 15 with nginx 1.10.3 as a reverse proxy.
The setup was very similar to the first post, redirecting /management & /console to wildflyhost:9990.
I was able to access the console directly via :9990, and when comparing the network traffic between direct and nginx-proxied access, I noticed that the Origin and Host headers were different.
So in my case the solution was to force the Origin and Host headers in nginx to something WildFly expects. I couldn't find this solution elsewhere, so I'm posting it here for future reference, even though the thread is old.
location /.../ {
    proxy_set_header Host $host:9990;
    proxy_set_header Origin http://$host:9990;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass_request_headers on;
    proxy_pass http://wildflyhost:9990;
    ...
}
Maybe you need to turn on the management module.
Try this: sh standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 &
My nginx version is 1.4.6:
root@jung-digital:~# nginx -v
nginx version: nginx/1.4.6 (Ubuntu)
I've set up a reverse proxy to a server with version 1.8.0 on it, as confirmed by hitting an invalid path on that server.
However, when I attempt to use the reverse proxy, it shows an HTML page saying:
404 Not Found
nginx/1.4.1 (Ubuntu)
What in the world is going on? Neither my reverse proxy server nor the target server for the proxy is using nginx 1.4.1.
For those curious, here are the relevant sections from my nginx.conf:
upstream ireport_dyndns {
    server ireport.somedomain.org;
}

...

server {
    listen 80;
    server_name ireport.somedomain2.com;
    access_log /var/log/nginx/ireport.access.log;
    root /var/www/ireport.somedomain2.com/dist;
    index index.html index.htm;

    location /api/ {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'POST,GET,OPTIONS';
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-proxy true;
        proxy_pass http://ireport_dyndns/api/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}
I discovered what must be a bug in nginx. The proxied server had a configuration change that checks the Host header, and my proxy_pass settings were sending the wrong Host, so the proxied server was returning a 404.
The response from the proxied server specifies nginx 1.8.0 in the headers but 1.4.1 in the body.
Bug in nginx.
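For anyone hitting the same 404: since the upstream virtual host matches on the Host header, the proxy needs to send the name the upstream expects rather than the client's host. A sketch of the change (assuming the upstream vhost is ireport.somedomain.org, as in the upstream block above):

    location /api/ {
        proxy_pass http://ireport_dyndns/api/;
        # Send the Host the upstream vhost expects, instead of the client's host ($http_host)
        proxy_set_header Host ireport.somedomain.org;
    }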