Firebase custom domain 'Needs Setup' with nginx proxy server - firebase

I have a frontend app hosted on Firebase Hosting and a backend API running on a DigitalOcean droplet. nginx is installed on the droplet and proxies each request either to the frontend app or to the backend API. My nginx configuration file looks like the following:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name myapp.com *.myapp.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://150.101.64.193:80;
        proxy_read_timeout 90;
        proxy_redirect http://150.101.64.193:80 https://myapp.com;
    }

    location /api/ {
        proxy_pass http://localhost:5000;
        proxy_read_timeout 90;
        proxy_redirect http://localhost:5000 https://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Firebase tells me to copy two TXT records into my DNS settings, but I could only get that to work if I mapped the domain name directly to the frontend app. Instead, my DNS settings map my domain name to the IP address of the droplet. The proxy on the droplet should then forward each request to either the frontend or the backend depending on the route, e.g.
www.myapp.com/blah goes to the Firebase app
www.myapp.com/api/blah goes to the API
Currently Firebase reports that my custom domain needs setup because there are no corresponding TXT records. This is the first time I have tried to deploy a web app, so I am unsure whether this setup will work.

If you are using NGINX to proxy traffic to a Firebase Hosting site, you likely want to proxy that traffic to the shared domain e.g. <site>.web.app instead of re-proxying back to the same domain that is serving traffic.
We don't recommend putting proxies in front of Firebase Hosting as that defeats the purpose of Firebase Hosting's global CDN, but it should work.
You could also use Cloud Functions or Cloud Run to build your API surface directly instead of proxying to an NGINX backend.
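As a rough illustration of the first point, the droplet's catch-all location could proxy non-API traffic to the Hosting shared domain rather than back to the custom domain. This is only a sketch under the assumption that the Hosting site is named myapp (so it is served at myapp.web.app); Firebase Hosting routes requests by the Host header, so the header has to be rewritten to that shared domain and SNI sent for the upstream TLS handshake:

location / {
    proxy_pass https://myapp.web.app;        # assumed Hosting site name
    proxy_set_header Host myapp.web.app;     # Hosting routes by the Host header
    proxy_ssl_server_name on;                # send SNI matching the upstream host
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

With this approach the custom domain keeps pointing at the droplet and is never connected in the Firebase console, which matches the caveat above about bypassing Hosting's CDN.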

Related

nginx reverse proxy for application

I use nginx as a reverse proxy with a domain name. I have some applications published on IIS and I want to proxy a different location name for each application.
For example:
Domain name on nginx:
example.com.tr
Application endpoints for the app:
1.1.1.1:10
1.1.1.2:10
upstream for app in nginx.conf:
upstream app_1 {
    least_conn;
    server 1.1.1.1:10;
    server 1.1.1.2:10;
}

server {
    listen 443 ssl;
    server_name example.com.tr;

    proxy_set_header X-Forwarded-Port 443;

    ssl_certificate /etc/cert.crt;
    ssl_certificate_key /etc/cert.key;

    location /app_1/ {
        proxy_pass http://app_1/;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-REAL-SCHEME $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        access_log /etc/nginx/log/access.log;
        error_log /etc/nginx/log/error.log;
    }
}
When I try to access example.com.tr/app_1/, I can reach the application, but not all of its data loads.
When I inspected the site, many of the application's requests failed:
they were all sent to example.com.tr/uri instead of example.com.tr/app_1/uri. How can I fix this?
Thanks,
You need a transparent path proxy setup, meaning NGINX should pass the requested URI upstream without removing the matched location prefix from it.
proxy_pass http://app_1;
Removing the trailing slash tells NGINX not to rewrite the path. Using an upstream definition is great, but make sure you also enable keepalive.
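As a rough sketch of how the upstream and keepalive pieces might fit together (the server addresses are the ones from the question; the keepalive pool size of 32 is an arbitrary assumption):

upstream app_1 {
    least_conn;
    server 1.1.1.1:10;
    server 1.1.1.2:10;
    keepalive 32;                        # assumed pool size: idle connections kept per worker
}

location /app_1/ {
    proxy_pass http://app_1;             # no trailing slash: the /app_1/ prefix is passed through
    proxy_http_version 1.1;              # upstream keepalive requires HTTP/1.1
    proxy_set_header Connection "";      # and an empty Connection header
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}

Note that because the /app_1/ prefix is now passed through unchanged, the IIS applications have to actually serve their content under that path (for example via a matching virtual directory); otherwise the upstream will answer with 404s.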

WSO2 Api Manager url context

I'm using wso2am version 3.2.0 and trying to configure a reverse proxy using NGINX, following the documentation below:
https://apim.docs.wso2.com/en/latest/install-and-setup/setup/setting-up-proxy-server-and-the-load-balancer/configuring-the-proxy-server-and-the-load-balancer/
I want to access the devportal and publisher with a new URL context like https://{domain-name}/wso2am/devportal and https://{domain-name}/wso2am/publisher. My nginx configuration file is as follows:
server {
    listen 443 ssl;
    server_name {domain-name};

    proxy_set_header X-Forwarded-Port 443;

    ssl_certificate /etc/nginx/ssl/{cert_name};
    ssl_certificate_key /etc/nginx/ssl/{key_name};

    location /wso2am/ {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://<server-ip>:9443;
    }

    access_log /etc/nginx/log/am/https/access.log;
    error_log /etc/nginx/log/am/https/error.log;
}
With this configuration, when I try to access https://{domain-name}/wso2am/devportal, I am redirected to the carbon login URL (https://{domain-name}/carbon/admin/login.jsp) with a 404 Not Found error. Should I change any values in carbon.xml or in some other files to get this working?
Note: I tried changing the <WebContextRoot> in the carbon.xml file to /wso2am, but when I restarted the server the value was overwritten with the default / again.
Where should I add the /wso2am context path in the carbon configuration?

Subdomain with SSL certificates can't be reached

I have an ASP.NET Core application running on a Linux server on Google Cloud. The home page is displayed perfectly, but when I click a button to navigate to a subdomain I get "This site can’t be reached" with "ERR_NAME_NOT_RESOLVED". I don't see any errors in the application log, so I suspect the call never even reaches the application.
When the application runs on a local Windows machine, all the subdomains are reachable and work completely.
I've also tried running the application on a local Linux machine and the results are the same: everything works fine.
On the server, subdomains are only reachable when no SSL certificate has been requested for them.
For example, when I create a subdomain subdomain.example.com, I can reach it. Once it works, I request a certificate for that subdomain; this works as well and the subdomain is then secured. But when I create a new subdomain and repeat the previous steps, the earlier subdomain stops working while the new one works.
Besides all this, subdomains work perfectly when no certificate is requested, but then they are "unsecured".
The certificates are requested using GoDaddy.
This is the Nginx configuration:
server {
    server_name example.com *.example.com;

    location / {
        proxy_pass https://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /activityhub {
        proxy_pass https://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /votehub {
        proxy_pass https://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl;
    ssl_certificate /root/letsencrypt/dehydrated/certs/example.com/fullchain.pem;
    ssl_certificate_key /root/letsencrypt/dehydrated/certs/example.com/privkey.pem;
}

server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }
}
The idea is a site where users can create different subdomains for their organisations. When a user presses a button, a subdomain such as organisation.example.com is created to display their own organisation. At the moment the application creates the subdomain on https://organisation.localhost:5001; this should then automatically be displayed, secured, on the website under a DNS name like https://organisation.example.com. The domains requested for the SSL certificate are example.com and *.example.com.
I expect all the subdomains to work with the SSL certificate, without any of them being unsecured or unreachable. All those subdomains should work via the wildcard.

Reverse Proxy to VPC-Based AWS Elasticsearch Domain Without Bypassing AWS Cognito

I apologize in advance - I'm extremely new to Nginx.
I have two VPC-based AWS Elasticsearch domains, which we'll call dev and prod. I want both domains to be inaccessible from the open internet, but available from some networks outside the VPC. To that end, I set them up as VPC-based Elasticsearch domains and planned to use a reverse proxy accessible only from the networks I wish. I've set up the dev cluster, which has no authentication, using an NGINX reverse proxy with the following config:
events {
}

http {
    server {
        listen 80;
        server_name kibana-dev.[domain name];

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/;
            proxy_redirect https://[vpc id].[vpc region].es.amazonaws.com/_plugin/kibana/ https://kibana-dev.[domain name]/;
        }

        location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/app/opendistro-alerting) {
            proxy_pass https://[vpc id].[vpc region].es.amazonaws.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
        }
    }
}
This works fine.
For the prod domain, however, I'm running into an issue. I want all users, even those that use the proxy, to have to authenticate with AWS Cognito (so I don't just want to, for example, create an access policy with an IP exception for the proxy's IP address, as that bypasses Cognito).
I have used a similar NGINX config for my "prod" Elasticsearch instance, but with no luck. The Cognito login page redirects to the VPC-based URL after authentication. I've tried manually adding my proxy's URL to the Cognito app's Callback URLs, but it still redirects by default to the VPC-based URL. I've also tried manually changing the redirect URI in the Cognito URL to refer to my proxy, but I've found that after authenticating I'm redirected to the Cognito login page again - perhaps a header or something isn't getting through?
How can I (if it's even possible) set this up in Nginx, so that users can access the "prod" Elasticsearch domain while still being required to authenticate with AWS Cognito?
Thank you!
Doh! I should have read the documentation more carefully. AWS provides an example Nginx conf file for a Kibana proxy with Cognito:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_plugin/kibana redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_plugin/kibana {
        # Forward requests to Kibana
        proxy_pass https://$kibana_host/_plugin/kibana;

        # Handle redirects to Cognito
        proxy_redirect https://$cognito_host https://$host;

        # Update cookie domain and path
        proxy_cookie_domain $kibana_host $host;
        proxy_cookie_path / /_plugin/kibana/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://$cognito_host;

        # Handle redirects to Kibana
        proxy_redirect https://$kibana_host https://$host;

        # Update cookie domain
        proxy_cookie_domain $cognito_host $host;
    }
}
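One note on the example above: $kibana_host and $cognito_host are placeholders from the AWS documentation rather than variables nginx defines for you, so before reloading you would substitute your VPC Elasticsearch endpoint and your Cognito hosted UI domain. The values below are purely hypothetical, only to show the shape of the substitution:

proxy_pass https://vpc-prod-xxxxxxxx.us-east-1.es.amazonaws.com/_plugin/kibana;
proxy_pass https://my-user-pool-domain.auth.us-east-1.amazoncognito.com;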

How to set the real ip in a request going from nginx to a backend server

I have my backend servers fronted with nginx. When a user sends a request to my backend, it hits NginX and is then routed to the backend server. There, I publish some stats, one of which is the client IP. In my setup, it's the NginX IP that gets published as the client IP. Is there a way, and a config, to set the real IP of the client?
Following is my config.
server {
    listen 8280;
    server_name my.server.com;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://myserver_http/;
    }

    access_log /mnt/var/log/nginx/myserver/access.log;
    error_log /mnt/var/log/nginx/myserver/error.log;
}
In order to forward the real client IP, add the following inside your location block:
proxy_set_header X-Real-IP $remote_addr;
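A minimal sketch of the location block with that header added (the other directives are the ones from the question's config). The backend then has to read the X-Real-IP header, or alternatively the X-Forwarded-For header that is already being set, instead of the socket peer address when it publishes its stats:

location / {
    proxy_set_header X-Real-IP $remote_addr;                       # real client address
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # full proxy chain
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header Host $http_host;
    proxy_read_timeout 5m;
    proxy_send_timeout 5m;
    proxy_pass http://myserver_http/;
}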
