How to access minicron using Nginx at a specified URL

I've installed minicron from https://jamesrwhite.github.io/minicron/ on an Ubuntu AWS server with Nginx. When I don't specify a directory in the location block, the primary URL is redirected to the Cron page as expected. However, if I add 'minicron' to the location as you see below, I get a "Sinatra doesn't know this ditty" error. I'm at a loss, and Google hasn't been any help for either of these problems. My main goal is just to get minicron to load at a separate subdomain or directory than the main URL. Any help would be appreciated.
http {
    ...
    server {
        # The port you want minicron to be available on; with nginx, port 80
        # is implicit but it's left here for demonstration purposes
        listen 80;
        # The host you want minicron to be available at
        server_name *.com;
        location /minicron {
            # Pass the real IP address of the user to minicron
            proxy_set_header X-Real-IP $remote_addr;
            # minicron defaults to running on port 9292; if you change
            # this you also need to change your minicron.toml config
            proxy_pass http://127.0.0.1:9292;
        }
    }
}

Currently the URI /minicron is being sent upstream to http://127.0.0.1:9292/minicron. Your previous test would have hit http://127.0.0.1:9292/.
You need to rewrite the URI before sending it upstream. Maybe:
location /minicron/ {
    ...
    proxy_pass http://127.0.0.1:9292/;
}
location = /minicron {
    rewrite ^ /minicron/ last;
}
See this and this for details.
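For reference, here is a minimal sketch of the full server block with that change applied, reusing the placeholder server_name and headers from the question (whether minicron itself generates links that stay under /minicron/ is a separate concern):
server {
    listen 80;
    server_name *.com;
    # Serve minicron under the /minicron/ sub-path; the trailing slash on
    # proxy_pass strips the /minicron/ prefix before the request reaches minicron
    location /minicron/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:9292/;
    }
    # Map the bare /minicron onto /minicron/ so both forms work
    location = /minicron {
        rewrite ^ /minicron/ last;
    }
}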

Related

Nginx rewrite with proxy_pass

I recently had a requirement in Nginx to rewrite a URL and then forward it on to another backend server, using a dynamic proxy_pass address. I've tried a few things but have not had much luck so far. For example, this is the kind of setup I have in my nginx.conf file:
server {
    listen 443;
    server_name scheduler.domain-name;
    rewrite ^scheduler(.*)/(.*)/(.*) $2$1$3; # scheduler.domain.local/changepass/report?target=service
    ...
    location / {
        proxy_pass $to-rewrite-address:9443; # changepass.domain.local/report?target=service
        ...
    }
}
Essentially, I just need to use a rewritten URL variable to forward the request on a different port, but can't seem to get this to work.
I've done quite a bit of searching but haven't found the solution to this as of yet, though I understand that the DNS resolver has to be set when using a proxy_pass variable (Dynamic proxy_pass to $var with nginx 1.0).
I'd be grateful if anyone could advise on how the above could be achieved; many thanks.
Assuming your endpoint is always specified as the first part of your URI, here is an example configuration that should work:
server {
    listen 443;
    server_name scheduler.domain-name;
    resolver <your resolver for domain.local>;
    ...
    location ~ ^/(?<endpoint>changepass|endpoint2|endpoint3|...)(?<route>/.*) {
        proxy_pass http://$endpoint.domain.local:9443$route;
    }
}
I'm using named capturing groups here for better readability; this location block is equivalent to
location ~ ^/(changepass|endpoint2|endpoint3|...)(/.*) {
    proxy_pass http://$1.domain.local:9443$2;
}
I'm not sure whether query arguments will be preserved with this construction; if they are not, change
proxy_pass http://$endpoint.domain.local:9443$route;
to
proxy_pass http://$endpoint.domain.local:9443$route$is_args$args;

nginx reverse proxy to different applications based on an index in the host name

Previously, I had a single staging environment reachable at staging.example.com/. Behind this address is an nginx proxy with the following config. Note that my proxy routes requests either:
- to an S3 bucket behind a CloudFront distribution (app1), or
- to a load balancer, forwarding the host name (and let's consider that my ALB is able to pick the appropriate app based on the host name) (app2).
server {
    listen 80;
    listen 443 ssl;
    server_name
        staging.example.com
    ;
    location / {
        try_files /maintenance.html @app1;
    }
    location ~ /(faq|about_us|terms|press|...) {
        try_files /maintenance.html @app2;
    }
    [...] # Lots of similar config that routes either to app1 or app2
    # Application hosted on S3 + CloudFront
    location @app1 {
        proxy_set_header Host app1-staging.example.com;
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net;
    }
    # Application hosted behind a load balancer
    location @app2 {
        proxy_set_header Host app2-staging.example.internal;
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass https://staging.example.internal;
    }
}
Now, my team needs a couple more staging environments. We are not yet ready to transition to Docker deployments (the ultimate goal of being able to spawn a complete infra per branch that we need to test... is a bit overkill given our team size), so I'm trying to pull a few tricks instead so that we can easily get a couple more staging environments using roughly the same nginx config.
Assume I have created a few more DNS names with an index_i, like staging1.example.com and staging2.example.com. So my nginx proxy will receive requests with a host header that looks like staging#{index_i}.example.com.
What I'm thinking of doing:
- For my S3 + CloudFront app, I'm thinking of nesting my files under [bucket_id]/#{index_i}/[app1_files] (previously they were directly in the root folder [bucket_id]/[app1_files]).
- For my load balancer app, let's assume my load balancer knows where to dispatch https://staging#{index_i}.example.com requests.
I'm trying to do something like this:
# incoming host: staging{index_i}.example.com
server {
    listen 80;
    listen 443 ssl;
    server_name
        staging.example.com
        staging1.example.com
        staging2.example.com # I can list them manually, but is it possible to have something like `staging*.example.com`?
    ;
    [...]
    location @app1 {
        proxy_set_header Host app1-staging$index_i.example.com; # Note the extra index_i here
        proxy_pass http://d2c72vkj8qy1kv.cloudfront.net/$index_i; # Here proxy_passing to a subfolder named index_i
    }
    location @app2 {
        proxy_set_header Host app2-staging$index_i.example.internal; # Note the extra index_i here
        proxy_set_header X-ALB-Host $http_host;
        proxy_pass http://staging$index_i.example.internal; # Here I am just forwarding the host header basically
    }
}
So ultimately my questions are:
- When my nginx server receives a connection, can I extract the index_i variable from the request host header (maybe using some regex)?
- If yes, how can I effectively implement the app1 and app2 blocks with index_i?
After looking at several other questions, I was able to come up with this config that works perfectly: it is possible to extract that variable using a regex in the host name.
On the downside, for my static single page applications, to make it work with S3, I had to create one bucket per "staging index" (because of the way static hosting on S3 works with website hosting / a single index.html to be used on 404). This made it in turn impossible to work with a single Cloudfront distribution in front of my (previously single) s3.
Here is an example of using a proxy with a create-react-app frontend and a server-side rendering app behind an ALB:
server {
    listen 80;
    listen 443 ssl;
    server_name ~^staging(?<staging_index>\d*).myjobglasses.com$;
    location @create-react-app-frontend {
        proxy_pass http://staging$staging_index.example.com.s3-website.eu-central-1.amazonaws.com;
    }
    location @server-side-rendering-app {
        # Now Amazon Application Load Balancer can redirect traffic based on ANY HTTP header
        proxy_set_header EXAMPLE-APP old-frontend;
        proxy_pass https://staging$staging_index.myjobglasses.com;
    }
}

How to configure nginx to expose multiple services on Jelastic?

Through Jelastic's dashboard, I created an environment: I just clicked "New environment", selected Node.js, and added a Docker image (of MailHog).
Now, I would like port 80 of my environment to serve the Node.js application. This is the default behaviour, so there is nothing to do for that.
In addition to this, I would like port 8080 (or any other port than 80, like port 5000 for example) of my environment to serve MailHog, hosted in the Docker container. To do that, I added the following lines to nginx-jelastic.conf (right after the first server block serving the Node.js app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;
    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/;
    keepalive 100;
}
If I now browse my environment's 8080 port, then I see ... the nodejs app. If I try any other port than 80 or 8080, I see nothing. Putting another server_name doesn't help. I tried several things but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;
    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the IP of the Node.js app with that of my MailHog service, then MailHog runs on port 80. I don't understand how I can make the Node.js app run on port 80 and the MailHog service on port 5000 (or any other port than 80).
Could someone enlighten me please?
After all those failures, I tried another approach. Assume my environment's URL is example.com/. What I tried above was to get MailHog to work when calling example.com:5000, which failed. Then I tried to make MailHog available through a call to example.com/mailhog. In order to do that, I got rid of all my modifications above and completed the current server block in nginx-jelastic.conf with
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, I get something on the page, but not exactly what I want: it's MailHog's page without any styling. Also, when I call MailHog's API through example.com/mailhog/api/v2/messages, I get a successful response with an empty body, when I should have received
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put the following manifest that exhibits the second problem with the nginx location.
The full locations list for your case is the following
(please pay attention to the URIs in the proxy_pass targets; they are different):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection " upgrade";
}
location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection " upgrade";
}
location /css {
    proxy_pass http://172.25.2.128:8025;
}
location /js {
    proxy_pass http://172.25.2.128:8025;
}
location /images {
    proxy_pass http://172.25.2.128:8025;
}
That works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
- endpoints - maps a container's internal port to a random external port via the Jelastic Shared LB
- Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".
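As an illustration (a sketch, not from the original answer), if you keep the Shared LB you could reuse the upstream from the question on one of the ports that is already open by default, for example 8686, assuming nothing else in nginx-jelastic.conf is bound to it:
server {
    # 8686 is one of the ports opened by default through the Shared LB
    listen *:8686;
    listen [::]:8686;
    server_name _;
    location / {
        proxy_pass http://mailhog_upstream;
    }
}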

Nginx: Listen to Specific host's specific port and do proxy_pass for that host:port only

I've been hitting a wall for 3 days on this. Allow me to explain the matter:
We have a domain named demo1.example.com. We want demo1.example.com:90 to proxy pass to 123.123.123.123:90, but not any other vhost on the server, like demo2.example.com.
What I mean is that the port should only work for that vhost: if someone tries to access demo2.example.com:90, it should not work. Currently, nginx is doing the proxy_pass for any vhost on port 90.
I hope I have explained the situation and that there is an actual solution for this.
Here's my current code:
server {
    listen ip:80;
    server_name subdomain.url.here;
    # ... and other normal server stuff for port 80
}
server {
    listen ip:90;
    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}
I would really appreciate any help.
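A minimal sketch of one way to scope this (an assumption about your setup, not from the original thread): give the port-90 server block an explicit server_name, and add a default catch-all server on the same port that rejects every other host.
server {
    listen ip:90;
    server_name demo1.example.com; # only this vhost is proxied
    location / {
        proxy_pass http://123.123.123.123:90;
        proxy_set_header Host $host:$server_port;
    }
}
# Any other host name arriving on port 90 hits this default server
server {
    listen ip:90 default_server;
    server_name _;
    return 444; # close the connection without sending a response
}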

Nginx: how to add /something to a uri and still keep it working

I have an nginx instance running. My config is something like the following.
server {
    listen 80;
    listen 443;
    location / {
        ...
        proxy_pass http://127.0.0.1:8080;
        ...
        proxy_redirect http://127.0.0.1:8080 example.com;
    }
}
I have some software running on port 8080, and I want the user to enter example.com/somepath and be proxied to the root of 127.0.0.1:8080 through my domain. The software should receive all URLs without /somepath, but the browser should still show /somepath in the address bar.
I am quite new, so sorry for the basic question; I could not find any relevant info on how to do this exactly. I tried rewrite rules and location /mysoftware { tests, with no luck.
The client browser uses /somepath/... to access /... in the application. This means that nginx must rewrite the URI before passing it upstream.
The proxy_pass directive has a basic rewrite capability. See this document for details. For example:
location /somepath/ {
    proxy_pass http://127.0.0.1:8080/;
    ...
}
Alternatively, you might use a rewrite ... break statement. See this document for details. For example:
location /somepath {
    rewrite ^/somepath/?(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:8080;
    ...
}
The difficult part is preventing your application from breaking out of /somepath. The proxy_redirect directive can handle the 3xx responses from your application, but the location of resource files (.css and .js) and the targets of hyperlinks can cause problems for applications that are not aware they need to stay inside a subdirectory.
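As a sketch of the proxy_redirect part (the addresses are the placeholders from the question, and the cookie line is an optional extra, not something the answer above requires):
location /somepath/ {
    proxy_pass http://127.0.0.1:8080/;
    # Rewrite the Location/Refresh headers of 3xx responses so that
    # redirects issued by the backend stay under /somepath/
    proxy_redirect http://127.0.0.1:8080/ /somepath/;
    # Optionally keep cookie paths inside the sub-path as well
    proxy_cookie_path / /somepath/;
}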
