nginx/haproxy performance with many lines of configuration

I'm planning to build an environment that can programmatically set up child servers and sandbox them using nginx/HAProxy. First I would ensure *.example.com points to nginx/HAProxy. Then, for example, I would set up app x to serve only from x.example.com, and to allow app x to talk to a specific method of app y, I would add the following config:
server {
    server_name x.example.com;
    location /y/allowed/method/ {
        proxy_pass http://y.example.com;
    }
}
(And the corresponding HAProxy config if I were to use it.)
My question is: how many server and location blocks like this could I include in a given instance of nginx or HAProxy while still maintaining high performance? I know I could move access restrictions up a layer into the applications themselves, though I'd prefer to keep them at the network layer.
Edit:
The answer is in the comments below. Essentially, as long as the config fits in RAM, performance won't be noticeably affected.

You should generate an nginx config with many server blocks (one per domain), like this:
server {
    server_name x.example.com;
    location /y/allowed/method/ {
        proxy_pass http://y.example.com;
    }
}
Reference:
http://nginx.org/en/docs/http/server_names.html
http://nginx.org/en/docs/http/request_processing.html
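One knob worth knowing about when the server_name list grows very large: nginx stores server names in hash tables, and with thousands of entries the defaults may need raising. The values below are illustrative, not tuned recommendations:

```nginx
http {
    # raise these if nginx fails to start with a
    # "could not build server_names_hash" error
    server_names_hash_max_size    4096;
    server_names_hash_bucket_size 128;
}
```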


Nginx reverse proxy without defining server_name?

I need to access a web server in a private network that has no direct access from the outside. Opening router ports etc. is not an option.
I'm trying to solve this with a Raspberry Pi in that network, which I can manage via upswift.io.
Among other things, upswift allows temporary remote access to a given port over URLs like
http://d-4307-5481-nc7nflrh26s.forwarding.upswift.io:56947/
This will map to a port that I can define.
With this, I can access a VNC server on the Pi, start a browser there, and access the web server I need.
But I hope to find a more elegant way, where I can access the site from my local browser and the Pi does not need to run a desktop.
As far as I found out, this can be done with a reverse proxy like nginx.
I found a lot of tutorials on it, but I'm stuck at one point:
After installing nginx and accessing its default index page from my local browser through the temporary upswift.io URL, I can't get it to work as a reverse proxy.
I think my conf needs to look like:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://192.x.x.2;
    }
}
Where example.com would be the name or IP under which the device is accessed.
Now, this does not work for me, as that name is dynamic.
So I wonder if there's a way to configure nginx so it does not need that name. I would expect that to be possible, as the default web server config works without it too. Are reverse proxies different in that regard?
Or is there a better way than a reverse proxy to do what I want?
You could define it as the default server block, which handles requests regardless of the host name:
server {
    listen 80 default_server;
    server_name _;
    location / {
        proxy_pass http://192.x.x.2;
    }
}

Super basic question on linking domain to server

I have a super basic question. I have a GoDaddy account set up with subdomain xxx.mydomain.com. I also have some services running in an AWS instance on xxx.xxx.xxx.xxx:7000. My question is: how do I configure things so that when people visit xxx.mydomain.com, it goes to xxx.xxx.xxx.xxx:7000?
I am not talking about domain forwarding. In fact, I also hope to do the same for yyy.mydomain.com, linking it to xxx.xxx.xxx.xxx:5000. I am running Nginx on xxx.xxx.xxx.xxx. Maybe I need to configure something there?
You want a reverse proxy.
Add two A-records to your DNS configuration to map the subdomains to the IP address of the AWS instance. With GoDaddy, put xxx / yyy in the "Host" field and the IP address in the "Points to" field. (more info)
Since you already have Nginx running, you can use it as a reverse proxy for the two subdomains. To do so, add two server blocks to Nginx's configuration file. A very simple setup could look like this:
http {
    # ...
    server {
        server_name xxx.mydomain.com;
        location / {
            proxy_pass http://localhost:7000;
        }
    }
    server {
        server_name yyy.mydomain.com;
        location / {
            proxy_pass http://localhost:5000;
        }
    }
}
You might want to rewrite some headers depending on your services/applications (more info). Also, consider using Nginx for SSL termination (more info).
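A sketch of what such header rewrites often look like. Which headers to set depends on what the backend expects, so treat these as a starting point rather than a recipe:

```nginx
server {
    server_name xxx.mydomain.com;
    location / {
        proxy_pass http://localhost:7000;
        # forward the original host and client address to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```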

Use Nginx to proxy to multiple sites under one domain name

I have Jenkins and SonarQube containers running on a server. Is it possible to use Nginx to connect to each under the same domain name? So for example my.domain.com/jenkins hits the Jenkins container and my.domain.com/sonar hits SonarQube?
My initial guess at the setup is something like this.
server {
    listen 80;
    server_name my.domain.com;
    location /sonar {
        proxy_pass http://sonarqube:9000/;
    }
    location /jenkins {
        proxy_pass http://jenkins:8080/;
    }
}
The issues I keep running into involve the subsequent calls made after the initial page loads. Is there a way to keep the /sonar/ and /jenkins/ piece in all subsequent calls?
You need to make those applications aware of the different context path, so that they generate links correctly. For Jenkins, specify --prefix=/jenkins when starting it; for SonarQube, set the environment variable SONAR_WEB_CONTEXT=/sonar when starting it.
See:
https://www.jenkins.io/doc/book/installing/initial-settings/
https://docs.sonarqube.org/latest/setup/environment-variables/#header-2
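Once both applications know their context path, the nginx side can pass the prefixed URI through unchanged. A sketch under that assumption (container names and ports are the ones from the question):

```nginx
server {
    listen 80;
    server_name my.domain.com;
    # no URI part on proxy_pass: /jenkins/... and /sonar/...
    # are forwarded to the backends as-is
    location /jenkins/ {
        proxy_pass http://jenkins:8080;
    }
    location /sonar/ {
        proxy_pass http://sonarqube:9000;
    }
}
```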

Use nginx to proxy request to two different services?

Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd-party services by accepting POST requests, and then posting event callbacks to that 3rd-party service at a URL. Trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta - falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet, which someone else classified as a bug, but it feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robining requests to apache, that will apparently send the same request to both apache servers, which sounds promising. Trouble is, it sends it to the same path on both servers. In my case, the two services will have different paths (/b and /c), and neither is the same path as the inbound request (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create two local server blocks, each of which proxy_passes to a different path (/b and /c), and point an upstream at both:
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}
location / {
    proxy_pass http://local;
}
server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}
server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}
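Note that an upstream like this load-balances: each request goes to one of the two local servers, not both. To genuinely duplicate every request, nginx 1.13.4+ ships the mirror module. A minimal sketch, using the host names and paths from the question:

```nginx
server {
    listen 80;
    location /a {
        # fire a copy of the request at the internal mirror location;
        # the mirror's response is discarded
        mirror /mirror-c;
        proxy_pass http://bar.com/b;
    }
    location = /mirror-c {
        internal;
        proxy_pass http://baz.com/c;
    }
}
```

The client only ever sees the response from bar.com/b; baz.com/c receives the duplicate but its answer is thrown away, which fits the fire-and-forget webhook use case described above.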

Domain name and port based proxy

I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web sites, so the best I could figure was to run a LAMP container to replicate the current situation, and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good, I can privately access those services from my Docker host. The trouble comes when I want to publicly restrict access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. A lot of reading after, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx -- which I had never used before -- because someone told me it's better than Apache at dealing with lots of small tasks and requests -- not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and I produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}
server {
    server_name service-a.bar.com;
    listen 80;
    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}
upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}
server {
    server_name www.foo.com;
    listen 5000;
    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
Which somewhat works, until I access http://blog.bar.com:5000 which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block from being the default server for port 5000, set up another server block on the same port and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
See this document for more.
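One such technique (my suggestion, not part of the original answer) is to close the connection outright using nginx's non-standard 444 status code instead of serving an empty root:

```nginx
server {
    listen 5000 default_server;
    # 444 is nginx-specific: drop the connection
    # without sending any response at all
    return 444;
}
```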