Through Jelastic's dashboard, I created this:
I just clicked "New environment", selected nodejs, and added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. That is the default behaviour, so there is nothing to do there.
In addition, I would like port 8080 (or any port other than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server serving the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;

    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/; keepalive 100;
}
If I now browse my environment's port 8080, I see ... the nodejs app. If I try any port other than 80 or 8080, I see nothing. Putting another server_name doesn't help. I have tried several things, but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;

    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the IP of the nodejs app with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any port other than 80).
Could someone enlighten me please?
After all those failures, I tried another ansatz. Assume my env's address is example.com/. What I tried above was to get mailhog to work upon calling example.com:5000, which I failed to do. So I then tried to make mailhog available through a call to example.com/mailhog. To do that, I got rid of all my modifications above and completed the current server in nginx-jelastic.conf with
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response without a body, when I should have received
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put the following manifest that exhibits the second problem with the nginx location.
The full locations list for your case is the following (please pay attention to the URIs in the upstreams, they are different):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /css {
    proxy_pass http://172.25.2.128:8025;
}

location /js {
    proxy_pass http://172.25.2.128:8025;
}

location /images {
    proxy_pass http://172.25.2.128:8025;
}
This works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
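The note about the URIs matters because of how nginx rewrites paths: when proxy_pass includes a URI part, the portion of the request matching the location prefix is replaced by that URI; without one, the request path is forwarded unchanged. A minimal sketch of the two cases, using the same backend address as above:

```nginx
# /mailhog/api/v2/messages -> backend receives /api/v2/messages
# (the matched prefix "/mailhog/api" is replaced by the "/api" in proxy_pass)
location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
}

# /css/app.css -> backend receives /css/app.css unchanged
# (no URI part in proxy_pass, so the request path is passed as-is)
location /css {
    proxy_pass http://172.25.2.128:8025;
}
```

This is also why the styling was broken before: the page requests /css/... and /js/... at the root, which the single /mailhog location did not cover.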
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
- endpoints - maps the container's internal port to a random external one via the Jelastic Shared LB
- Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".
Related
This might be a dumb question, but I'm kind of new to NGINX. What I'm trying to do is this: I want a virtual host to reverse proxy another service running on the same machine on port 10000. So I have a file called jg1 inside the /sites-available folder, and it looks like this:
server {
    server_name jg1.example;
    listen 80;
    access_log /var/log/nginx/jg1.log;
    error_log /var/log/nginx/jg1error.log;

    location / {
        proxy_pass http://127.0.0.1:10000/;
        proxy_set_header Host $host;
    }
}
As you can see, all I need is for any browser on my computer to respond when I hit http://jg1.example/ and show whatever I'm serving at http://localhost:10000, but it's not doing anything at all. By the way, the files jg1.log and jg1error.log do get created; I put those there just to see if nginx was actually reading the config file.
Ugh, never mind.
I needed to add jg1.example to my /etc/hosts file as well, duh! That made it work.
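For anyone hitting the same thing: the server_name only selects a server block once a request actually reaches nginx; the browser still needs DNS (or a hosts entry) to resolve the name to the machine nginx runs on. The hosts entry in question would look like this:

```
127.0.0.1   jg1.example
```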
Previously, I had a single staging environment reachable behind a DNS name staging.example.com. Behind this address is an nginx proxy with the following config. Note that my proxy redirects either
- to an (S3-backed) CloudFront distribution (app1), or
- to a load balancer, forwarding the host name (and let's assume my ALB is able to pick the appropriate app based on the host name) (app2).
server {
    listen 80;
    listen 443 ssl;
    server_name
        staging.example.com
    ;

    location / {
        try_files /maintenance.html @app1;
    }

    location ~ /(faq|about_us|terms|press|...) {
        try_files /maintenance.html @app2;
    }

    [...] # Lots of similar config that redirects either to app1 or app2

    # Application hosted on S3 + CloudFront
    location @app1 {
        proxy_set_header Host app1-staging.example.com;
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net;
    }

    # Application hosted behind a load balancer
    location @app2 {
        proxy_set_header Host app2-staging.example.internal;
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass https://staging.example.internal;
    }
}
Now, my team needs a couple more staging environments. We are not yet ready to transition to docker deployments (the ultimate goal of being able to spawn a complete infra per branch that we need to test... is a bit overkill given our team size) and I'm trying to pull out some tricks instead so we can easily get a couple more staging environments using roughly the same nginx config.
Assume I have created a few more DNS names with an index_i, like staging1.example.com, staging2.example.com. So my nginx proxy will receive requests with a host header that looks like staging#{index_i}.example.com.
What I'm thinking of doing:
- For my S3 + CloudFront app, I'm thinking of nesting my files under [bucket_id]/#{index_i}/[app1_files] (previously they were directly in the root folder [bucket_id]/[app1_files]).
- For my load balancer app, let's assume my load balancer knows where to dispatch https://staging#{index_i}.example.com requests.
I'm trying to pull off something like this:
# incoming host: staging{index_i}.example.com
server {
    listen 80;
    listen 443 ssl;
    server_name
        staging.example.com
        staging1.example.com
        staging2.example.com # I can list them manually, but is it possible to have something like `staging*.example.com`?
    ;

    [...]

    location @app1 {
        proxy_set_header Host app1-staging$index_i.example.com; # Note the extra index_i here
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net/$index_i; # Here proxy_passing to a subfolder named index_i
    }

    location @app2 {
        proxy_set_header Host app2-staging$index_i.example.internal; # Note the extra index_i here
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass http://staging$index_i.example.internal; # Here I am just forwarding the host header basically
    }
}
So ultimately my questions are:
- When my nginx server receives a connection, can I extract the index_i variable from the request host header (maybe using some regex)?
- If yes, how can I effectively implement the app1 and app2 blocks with index_i?
After looking at several other questions, I was able to come up with this config, which works perfectly: it is possible to extract the said variable using a regex in the host name.
On the downside, for my static single-page applications, to make this work with S3 I had to create one bucket per "staging index" (because of the way static hosting on S3 works with website hosting / a single index.html to be used on 404). This in turn made it impossible to work with a single CloudFront distribution in front of my (previously single) S3 bucket.
Here is an example of using a proxy with a create-react-app frontend and a server-side-rendering app behind an ALB:
server {
    listen 80;
    listen 443 ssl;
    server_name ~^staging(?<staging_index>\d*)\.myjobglasses\.com$;

    location @create-react-app-frontend {
        proxy_pass http://staging$staging_index.example.com.s3-website.eu-central-1.amazonaws.com;
    }

    location @server-side-rendering-app {
        # Amazon Application Load Balancer can redirect traffic based on ANY HTTP header
        proxy_set_header EXAMPLE-APP old-frontend;
        proxy_pass https://staging$staging_index.myjobglasses.com;
    }
}
I was having trouble configuring an nginx reverse proxy within my development environment when I stumbled on a behaviour that I do not quite get.
So nginx is listening on port 8080. When I make a request to my development server, I can access it at
localhost:8080
with the following directives:
server {
    listen 8080;
    server_name site.com;

    location / {
        proxy_pass http://localhost:3000/;
        proxy_redirect off;
    }
}
But when I put a known website in the proxy_pass directive, like google or apple, the behaviour is different. I cannot access e.g. apple.com as localhost:8080 with the following directives; I am immediately pushed to the real website and not the localhost:
server {
    listen 8080;
    server_name site.com;

    location / {
        proxy_pass http://apple.com/;
        proxy_redirect off;
    }
}
What is that behaviour called, and how is it achieved? Can you guys point me in the right direction to understanding this? Thanks.
This is the correct behavior for the proxy service; you can find the docs here: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
There is some general information regarding proxies here: https://en.wikipedia.org/wiki/Proxy_server
Example: if you want to go to http://apple.com/apple-card/, you can point your browser at localhost:8080/apple-card and you will be redirected to /requested_path.
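Part of what you are seeing is the upstream site itself: apple.com answers with an HTTP redirect to its canonical https://www.apple.com/ address, and since proxy_redirect off tells nginx not to touch the Location header, your browser follows that redirect straight to the real site. A sketch of the opposite setup, where nginx rewrites matching redirects back to the proxy (hostnames taken from the question; redirects to other hosts, like www.apple.com, would need their own rewrite lines):

```nginx
location / {
    proxy_pass http://apple.com/;
    # Rewrite Location/Refresh headers from the upstream so matching
    # redirects stay on localhost:8080 instead of leaving the proxy.
    proxy_redirect http://apple.com/ http://localhost:8080/;
}
```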
I'm using proxies with docker containers just to route the requests to the correct application using different ports.
In the long run what I'm trying to do is to be able to connect to any domain through any port, for example, mysite.com:8000 and then through Nginx have it get routed to an internal ip through the same port. So for example to 192.168.1.114:8000.
I looked into iptables although I'm planning on having multiple domains so that really doesn't work for me in this case (feel free to correct me if I'm wrong). I made sure that the internal ip and port that I'm trying to access is connectable and running and also that the ports I'm testing with are accessible from outside my network.
Here's my Nginx config that I'm currently using:
server {
    set $server "192.168.1.114";
    set $port $server_port;

    listen 80;
    listen 443;
    listen 9000;
    server_name mysite.com;

    location / {
        proxy_pass http://$server:$port;
        proxy_set_header Host $host:$server_port;
    }
}
Currently what happens is that when I send a request it just times out. I've been testing using port 80 and also port 9000. Any ideas on what I might be doing wrong? Thanks!
EDIT:
I changed my config file to look like the following:
server {
    listen 9000;
    server_name _;

    location / {
        add_header Content-Type text/html;
        return 200 'test';
    }
}
I keep getting the same exact error. The firewall is turned off, so it just seems like Nginx isn't listening on port 9000. Any ideas on why that might be the case?
The most effective way would be to have three separate server directives, one for each port. That way, the upstream server isn't dynamic, so Nginx knows it can keep long-lived connections open to each one.
If you really don't want to do this, you might be able to get around it by doing something like this:
proxy_pass http://upstream_server.example:$server_port;
$port doesn't exist, but $server_port does, so that should work. (It's not $port because there are two ports for each connection: the server port and the client port, which are $server_port and $remote_port, respectively.)
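The three-server-block variant from the first paragraph would look roughly like this (IP and ports taken from the question; one block per port, each with a fixed upstream):

```nginx
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://192.168.1.114:80;
        proxy_set_header Host $host:$server_port;
    }
}

server {
    listen 9000;
    server_name mysite.com;

    location / {
        proxy_pass http://192.168.1.114:9000;
        proxy_set_header Host $host:$server_port;
    }
}
# ...and one more block per additional port
```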
I've been hitting a wall for 3 days on this. Allow me to explain the matter:
We have a domain named demo1.example.com. We want demo1.example.com:90 to proxy_pass to 123.123.123.123:90, but not any other vhost on the server, like demo2.example.com.
What I mean is that the port should only work for that vhost; if someone tries to access demo2.example.com:90, it should not work. Currently, it does the proxy_pass for any vhost on port 90.
I hope I have explained the situation and that there is an actual solution for this.
Here's my current code:
server {
    listen ip:80;
    server_name subdomain.url.here;
    # ... and other normal server stuff for port 80
}

server {
    listen ip:90;

    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}
I will really appreciate any help.
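One direction to look at (a sketch, assuming the goal is simply to reject every other Host on port 90): the port-90 server block above has no server_name, so it acts as the default for that port. Giving it a server_name and adding a catch-all default server on the same port would restrict it:

```nginx
server {
    listen ip:90;
    server_name demo1.example.com;

    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}

# Catch-all for any other Host on port 90
server {
    listen ip:90 default_server;
    return 444;  # close the connection without sending a response
}
```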