Have NGINX reverse proxy on a virtual host - nginx

This might be a dumb question, but I'm kind of new to NGINX. What I'm trying to do is this:
I want a virtual host to reverse proxy another service running on the same machine on port 10000, so I have a file called jg1 inside the /sites-available folder and it looks like this:
server {
    server_name jg1.example;
    listen 80;

    access_log /var/log/nginx/jg1.log;
    error_log /var/log/nginx/jg1error.log;

    location / {
        proxy_pass http://127.0.0.1:10000/;
        proxy_set_header Host $host;
    }
}
As you can see, all I need is for any browser on my computer to respond when I hit http://jg1.example/ and show whatever I'm serving at http://localhost:10000, but it's not doing anything at all. By the way, the files jg1.log and jg1error.log do get created; I put those directives there just to confirm nginx was actually reading the config file.

Ugh, never mind.
I needed to add jg1.example to my /etc/hosts file as well, duh! That made it work.
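For reference, the missing piece was just a hosts entry like the one below (assuming everything runs on the local machine; adjust the IP if nginx lives elsewhere):
# /etc/hosts
127.0.0.1   jg1.example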

Related

nginx reverse proxy to different applications based on an index in the host name

Previously, I had a single staging environment reachable behind the DNS name staging.example.com. Behind this address is an nginx proxy with the following config. Note that my proxy redirects either:
- to an (S3-backed) CloudFront distribution (app1), or
- to a load balancer, forwarding the host name (and let's assume my ALB is able to pick the appropriate app based on the host name) (app2).
server {
    listen 80;
    listen 443 ssl;
    server_name staging.example.com;

    location / {
        try_files /maintenance.html @app1;
    }

    location ~ /(faq|about_us|terms|press|...) {
        try_files /maintenance.html @app2;
    }

    [...] # Lots of similar config that redirects either to app1 or app2

    # Application hosted on S3 + CloudFront
    location @app1 {
        proxy_set_header Host app1-staging.example.com;
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net;
    }

    # Application hosted behind a load balancer
    location @app2 {
        proxy_set_header Host app2-staging.example.internal;
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass https://staging.example.internal;
    }
}
Now, my team needs a couple more staging environments. We are not yet ready to transition to docker deployments (the ultimate goal of being able to spawn a complete infra per branch that we need to test... is a bit overkill given our team size), so I'm trying to pull off a few tricks instead so we can easily get a couple more staging environments using roughly the same nginx config.
Assume I have created a few more DNS names with an index_i, like staging1.example.com and staging2.example.com. So my nginx proxy will receive requests with a host header that looks like staging#{index_i}.example.com.
What I'm thinking of doing:
- For my S3 + CloudFront app, I'm thinking of nesting my files under [bucket_id]/#{index_i}/[app1_files] (previously they were directly in the root folder [bucket_id]/[app1_files]).
- For my load balancer app, let's assume my load balancer knows where to dispatch https://staging#{index_i}.example.com requests.
I'm trying to pull off something like this:
# incoming host: staging{index_i}.example.com
server {
    listen 80;
    listen 443 ssl;
    server_name
        staging.example.com
        staging1.example.com
        staging2.example.com # I can list them manually, but is it possible to have something like `staging*.example.com`?
        ;

    [...]

    location @app1 {
        proxy_set_header Host app1-staging$index_i.example.com; # Note the extra index_i here
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net/$index_i; # Here proxy_passing to a subfolder named index_i
    }

    location @app2 {
        proxy_set_header Host app2-staging$index_i.example.internal; # Note the extra index_i here
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass http://staging$index_i.example.internal; # Here I am just forwarding the host header basically
    }
}
So ultimately my questions are:
- When my nginx server receives a connection, can I extract the index_i variable from the request host header (using maybe some regex)?
- If yes, how can I effectively implement the app1 and app2 blocks with index_i?
After looking at several other questions, I was able to come up with this config that works perfectly: it is indeed possible to extract said variable using a regex in the host name.
On the downside, for my static single-page applications, to make it work with S3, I had to create one bucket per "staging index" (because of the way static hosting on S3 works with website hosting / a single index.html to be used on 404). This in turn made it impossible to work with a single CloudFront distribution in front of my (previously single) S3 bucket.
Here is an example of using a proxy with a create-react-app frontend and a server-side rendering app behind an ALB:
server {
    listen 80;
    listen 443 ssl;
    server_name ~^staging(?<staging_index>\d*)\.myjobglasses\.com$;

    location @create-react-app-frontend {
        proxy_pass http://staging$staging_index.example.com.s3-website.eu-central-1.amazonaws.com;
    }

    location @server-side-rendering-app {
        # Now Amazon Application Load Balancer can redirect traffic based on ANY HTTP header
        proxy_set_header EXAMPLE-APP old-frontend;
        proxy_pass https://staging$staging_index.myjobglasses.com;
    }
}
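One thing worth noting (my addition, not part of the original answer): named locations are only reachable via directives like try_files or error_page, so something inside the same server block still has to route requests into them, along the lines of the maintenance fallbacks from the earlier config. A rough sketch, with illustrative paths:
location / {
    try_files /maintenance.html @create-react-app-frontend;
}

location ~ /(faq|about_us|terms|press) {
    try_files /maintenance.html @server-side-rendering-app;
}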

How to configure nginx to expose multiple services on Jelastic?

Through Jelastic's dashboard, I created an environment: I just clicked "New environment", then I selected nodejs and added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. This is the default, so there is nothing to do there.
In addition to this, I would like port 8080 (or any port other than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server block serving the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;

    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/;
    keepalive 100;
}
If I now browse my environment's port 8080, then I see... the nodejs app. If I try any port other than 80 or 8080, I see nothing. Putting another server_name doesn't help. I tried several things but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;

    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the IP of the nodejs app with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any other port than 80).
Could someone enlighten me please?
After all those failures, I tried another ansatz. Assume the address of my env is example.com/. What I tried above was to get mailhog to work when calling example.com:5000, which I failed to do. Then I tried to make mailhog available through a call to example.com/mailhog. In order to do that, I got rid of all my modifications above and completed the current server block in nginx-jelastic.conf with:
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response without a body, when I should have received:
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put the following manifest that exhibits the second problem with the nginx location.
The full list of locations for your case is the following
(please pay attention to the URIs in the upstreams, they are different):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /css    { proxy_pass http://172.25.2.128:8025; }
location /js     { proxy_pass http://172.25.2.128:8025; }
location /images { proxy_pass http://172.25.2.128:8025; }
That works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
- endpoints - maps the container's internal port to a random external one via the Jelastic Shared LB
- Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".

Some sites cannot be proxied? How is this behaviour achieved?

I was having trouble configuring an nginx reverse proxy within my development environment when I stumbled on a behaviour that I do not quite get.
So nginx is listening on port 8080. When I make a request to my development server, I can access it at
localhost:8080
with the following directives:
server {
    listen 8080;
    server_name site.com;

    location / {
        proxy_pass http://localhost:3000/;
        proxy_redirect off;
    }
}
But when I put a known website like Google or Apple in the proxy_pass directive, the behaviour is different. I cannot access e.g. apple.com as localhost:8080 with the following directives - I am immediately pushed to the real website and not to localhost:
server {
    listen 8080;
    server_name site.com;

    location / {
        proxy_pass http://apple.com/;
        proxy_redirect off;
    }
}
What is that behaviour called and how is it achieved? Can you guys point me in the right direction to understanding this? Thanks.
This is the correct behavior for the proxy service; you can find the docs here: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
There is some general information regarding proxies here: https://en.wikipedia.org/wiki/Proxy_server
Example: if you want to go to http://apple.com/apple-card/, you can point to localhost:8080/apple-card and you will be redirected to the requested path.
I'm using proxies with docker containers just to route the requests to the correct application using different ports.
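To name what is going on (my explanation, not part of the original answer): sites like apple.com reply with an absolute HTTP redirect (e.g. a 301 to https://www.apple.com/), and with proxy_redirect off that Location header reaches your browser unchanged, so you leave localhost. A hedged sketch of keeping the browser on the proxy by rewriting the Location header instead:
server {
    listen 8080;

    location / {
        # send the Host header the upstream expects instead of "localhost:8080"
        proxy_set_header Host apple.com;
        proxy_pass https://apple.com/;

        # rewrite absolute redirects issued by the upstream back to the proxy address
        # (hostnames here are illustrative; add one line per redirect target you see)
        proxy_redirect https://apple.com/ http://localhost:8080/;
        proxy_redirect https://www.apple.com/ http://localhost:8080/;
    }
}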

Running gogs as nginx subdomain

I am trying to run gogs off of my nas. I run other stuff off my nas, so I decided to make gogs a subdomain. Here is what I tried:
/etc/nginx/sites-enabled/default:
server {
    listen 80;
    server_name gogs.nas.me;

    location / {
        proxy_pass http://127.0.0.1:3237;
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}
I do not have a domain name for it, but I have nas.me pointing towards 192.168.0.120 in /etc/hosts.
When I go to gogs.nas.me, I get "gogs.nas.me's server DNS address could not be found." When I go to nas.me, I get the index of my nas. What am I doing wrong?
Edit: I also tried using nas.me/gogs, which worked but all of the assets did not get the /gogs prefix so I got 404s on everything but /.
You should add gogs.nas.me to the file /etc/hosts like this:
192.168.0.120 nas.me gogs.nas.me
Both names should be on the same line, otherwise there will be wrong behavior (see this answer - unix.stackexchange.com/a/102663).
Usually, adding a new entry to /etc/hosts does not require restarting anything - but just to be safe, you can do a restart.
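Regarding the nas.me/gogs attempt from the edit (an addition on my part, not from the original answer): when Gogs is served under a sub-path, it has to know about that prefix so asset URLs are generated with it. A sketch, assuming the standard Gogs app.ini options and the port 3237 from the question:
# custom/conf/app.ini (Gogs side)
[server]
ROOT_URL = http://nas.me/gogs/

# nginx side
location /gogs/ {
    proxy_pass http://127.0.0.1:3237/;
    proxy_set_header Host $host;
}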

Nginx Proxy for PlayFramework from port to path prefix

I have a Play application listening on a local port :9000. There are other applications running.
I would like to serve this application at a path like:
http://myhost/this-play-app -> localhost:9000
So that other apps could be nested at other paths.
I've tried the basic proxy_pass but it doesn't seem to work.
server {
    listen 80;
    server_name myhost;

    # MMC Tool
    # ----------------------------------------------------
    location /this-play-app {
        proxy_pass http://localhost:9000;
    }
}
The Play app seems to forward to the root. Is there a way to trick the Play app into working within the /this-play-app path?
Like /this-play-app/some-controller instead of /some-controller?
Thanks
Using apps in folders isn't a comfortable idea - you would at least need to prepare some dedicated config and change it each time the location changes.
Instead, as others suggested, you should use subdomains; in this case each app behaves exactly the same as in the root domain, and even if you need/want to change that domain later, all you'll need is a change in nginx's config.
A typical nginx config looks like:
upstream your_app {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name your-app.domain.com;

    location / {
        proxy_pass http://your_app;
    }
}
Most probably, on a VPS or shared host you'll need to add the subdomain via some kind of admin panel - on localhost you just need to add the subdomain to the hosts file.
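For example, on a local machine that hosts entry might look like this (the subdomain name is illustrative):
# /etc/hosts
127.0.0.1   your-app.domain.com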
Edit: if using a subdomain is not possible anyway (pity), you can still work around it with config. In nginx, use (as you did in the question):
...
location /this-play-app {
    proxy_pass http://your_app;
}
...
and then add this line into your application.conf (Play 2.1+)
application.context = "/this-play-app"
Or this in case of Play 2.4+ (info)
play.http.context = "/this-play-app"
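To spell out how the two pieces fit together (my summary, not part of the original answer): because proxy_pass points at the upstream without a URI part, nginx forwards the path unchanged, and the context setting makes Play expect that prefix. Roughly:
# browser requests   http://myhost/this-play-app/some-controller
# nginx matches      location /this-play-app
# nginx forwards     /this-play-app/some-controller to 127.0.0.1:9000 unchanged
#                    (proxy_pass http://your_app; has no URI part, so no rewriting)
# Play serves it     because play.http.context = "/this-play-app" prefixes all routes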

Resources