I have a jenkins and sonarqube container running on a server. Is it possible to use Nginx to connect to each under the same domain name? So for example my.domain.com/jenkins hits the jenkins container and my.domain.com/sonar hits sonarqube?
My initial guess at the setup is something like this.
server {
    listen 80;
    server_name my.domain.com;

    location /sonar {
        proxy_pass http://sonarqube:9000/;
    }

    location /jenkins {
        proxy_pass http://jenkins:8080/;
    }
}
The issues I keep running into involve the subsequent calls made after the initial page loads. Is there a way to keep the /sonar/ and /jenkins/ piece in all the calls made?
You need to make those applications aware of the different context so that they generate links correctly. For Jenkins, specify --prefix=/jenkins when starting Jenkins; for SonarQube, set the environment variable SONAR_WEB_CONTEXT=/sonar when starting SonarQube.
See:
https://www.jenkins.io/doc/book/installing/initial-settings/
https://docs.sonarqube.org/latest/setup/environment-variables/#header-2
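Once both applications know their context path, the proxy should pass the prefix through rather than strip it: a proxy_pass URL with a trailing slash (as in the question) replaces the matched /jenkins or /sonar prefix, while one without a path forwards the URI unchanged. A sketch of the adjusted configuration, reusing the container names and ports from the question:

server {
    listen 80;
    server_name my.domain.com;

    location /sonar {
        # SonarQube started with SONAR_WEB_CONTEXT=/sonar:
        # no trailing slash, so /sonar/... reaches the backend intact
        proxy_pass http://sonarqube:9000;
    }

    location /jenkins {
        # Jenkins started with --prefix=/jenkins
        proxy_pass http://jenkins:8080;
    }
}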
I need to access a webserver in a private network that has no direct access from outside. Opening router ports etc. is not an option.
I am trying to solve this with a Raspberry Pi in that network, which I can manage via upswift.io.
Amongst other things, upswift allows temporary remote access to a given port over URLs like
http://d-4307-5481-nc7nflrh26s.forwarding.upswift.io:56947/
This will map to a port that I can define.
With this, I can access a VNC server on the Pi, start a browser there, and access the webserver I need.
But I hope to find a more elegant way, where I can access the site from my local browser and where the Pi does not need to run a desktop.
As far as I found out, this can be done with a reverse proxy like nginx.
I found a lot of tutorials on it, but I am stuck at one point:
After installing nginx and accessing its default index page from my local browser through the temporary upswift.io URL, I can't get it to work as a reverse proxy.
I think my conf needs to look like
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
Where example.com would be the name or IP under which the device is accessed.
Now, this won't work for me, as that name is dynamic.
So I wonder if there's a way to configure nginx so that it does not need that name. I would expect that to be possible, as the default webserver config works without it too. Are reverse proxies different in that regard?
Or is there a better way than a reverse proxy to do what I want?
You could try defining it as a default server block:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
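If the backend application cares about the original Host header or the client address, forwarding them explicitly may also help; a minimal extension of the same idea, keeping the placeholder backend IP from the question:

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
        # Pass the original Host header and client IP on to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}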
I'm not sure I can even describe what I want or my current problem because, honestly, I don't know exactly what I am doing. Currently I have arenu.com.br going to an nginx server reverse-proxying to port 8080. Here's what I followed on my server: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04
I have a simple node express hello.js file listening at port 8080 like this:
const express = require('express')
const app = express()

// Register the static middleware before starting the server
app.use(express.static('public'))

app.listen(8080, () => console.log('listening at 8080'))
inside my 'public' folder there is an index.html page which is what you see when you go to the website.
The only way I found how to make other subdomains point to other static files is through the nginx settings like this:
server {
    location /snakegame/ {
        proxy_pass http://localhost:8080/snakegame/index.html;
    }
}
Here I am accessing another index.html file, located at /public/snakegame/index.html, by proxying arenu.com.br/snakegame to it. This setup, however, is not versatile at all: I have to manually add every single subfolder to the settings every time, and multiple problems occur. One of the problems, as an example, is this:
When accessing my server directly at port 8080 (http://142.93.144.178:8080/snakegame/index.html), the JavaScript on that page works perfectly (use arrow keys to control the snake). But when accessing the same file through this reverse proxy method, the JavaScript does not work: https://arenu.com.br/snakegame/ (you can even look at the console to see that it can't access the JS file).
Is there a better way for nginx to access multiple subfolders and static content easily? What should I do?
I believe I had the same kind of problem just a few days ago. There is little documentation about this, as most people prefer to define their subfolders manually in the nginx server config. In my case, I was using LXD containers and needed to pass a domain through the hypervisor to the LXD container. To do this, I had to change the config for the nginx hypervisor container (proxy_pass):
location /snakegame/ {
    proxy_pass http://127.0.0.1:8080/snakegame/;
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://127.0.0.1:8080/snakegame/;
}
I have changed the IP to match your requirements. I believe what happens is that any request to the main index (in your case index.html) is forwarded, but the rest is sent on to the nginx proxy; the try_files directive with the @backend named location seems to fix this. I'm not going to claim I know why :).
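An alternative worth considering: for purely static content you can skip the Node proxy entirely and let nginx serve the files from disk, so new subfolders need no extra configuration. A sketch, assuming the app's public folder lives at /var/www/arenu/public (a hypothetical path; adjust to the real one):

server {
    listen 80;
    server_name arenu.com.br;

    root /var/www/arenu/public;

    location / {
        # Try the file, then a directory index, then fall back to Node
        try_files $uri $uri/index.html @node;
    }

    location @node {
        proxy_pass http://localhost:8080;
    }
}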
I have a server with CentOS, and there I will have at least 4 Golang applications running, every one of them is a different site that I should be able to access in the browser with domain/subdomains as follows:
dev00.mysite.com
dev01.mysite.com
dev02.mysite.com
dev03.mysite.com
So, I need to configure some kind of software that routes each request to the correct Golang process. Every site will run on a different port, so for example if someone calls dev00.mysite.com I should be able to send that request to the process for the dev00 site (this is for development purposes, not production). So, here I'm starting to believe that I need Nginx or Caddy, as I have read, but I have no experience with either of them.
Can someone confirm that this is the way to solve the problem? And where can I find some example configurations of either of those servers proxying to Golang applications?
Also, in the future, if I have a lot (really a lot) of domains running on the same server, which of those servers is better? Which handles high load better?
Yes, Nginx can solve your problem:
Start a web server for each site using the Go standard library (or Caddy).
Route requests to the right Go application using Nginx:
Example Nginx configuration:
server {
    listen 80;
    server_name dev00.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8000;
        ...
    }
}

server {
    listen 80;
    server_name dev01.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8001;
        ...
    }
}
I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web sites, so the best I could figure was to run a LAMP container to replicate the current situation and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good; I can privately access those services from my Docker host. The trouble comes when I want to publicly restrict access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. A lot of reading later, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx -- which I had never used before -- because someone told me it's better than Apache at dealing with lots of small tasks and requests -- not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
Which somewhat works, until I access http://blog.bar.com:5000 which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block being the default server for port 5000, set up another server block using the same port, and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
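One such technique, for example, is nginx's special 444 status code, which closes the connection without sending any response at all:

server {
    listen 5000 default_server;
    # 444 is nginx-specific: close the connection without replying
    return 444;
}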
See this document for more.
I'm planning to build an environment that can programmatically set up child servers and sandbox them using nginx/haproxy. First I would ensure *.example.com points to nginx/haproxy. Then, for example, I would set up app x to only serve from x.example.com; to allow app x to talk to a specific method of app y, I would add the following config:
server {
    server_name x.example.com;

    location /y/allowed/method/ {
        proxy_pass http://y.example.com;
    }
}
(And the corresponding haproxy config if I were to use haproxy.)
My question is: how many servers and locations like this could I include in a given instance of nginx or haproxy while still maintaining high performance? I know I can move access restrictions up a layer into the applications themselves, though I'd prefer them at the network layer.
Edit:
Answer is in the comments below. Essentially, if the config can fit in RAM, performance won't be affected.
You should generate nginx config with many server blocks (one per domain) like this:
server {
    server_name x.example.com;

    location /y/allowed/method/ {
        proxy_pass http://y.example.com;
    }
}
Reference:
http://nginx.org/en/docs/http/server_names.html
http://nginx.org/en/docs/http/request_processing.html