nginx multiple virtual hosts - error - too many files? - http

We use nginx to host multiple virtual hosts.
The setup is mixed: some sites are only available over http:// and others are served over https://.
Whenever a new customer arrives with a new homepage, we create a new config file for that virtual host (domain).
Everything has worked correctly so far.
Today we created two new config files on the nginx server, copied them to sites-enabled and did an nginx reload.
Now none of the sites work any more.
In the browser we get an error saying the site is not available.
In the nginx error.log we see the message:
*2948... no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 178...., server 0.0.0.0:443
The virtual host config file we create looks like this:
server {
    listen 80;
    server_name example.de;
    return 301 http://www.$http_host$request_uri;
}
server {
    listen 80;
    server_name *.example.de;
    location / {
        access_log off;
        proxy_pass http://example.test.de;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
We only get the error if we create a new virtual host file in sites-enabled. If we copy the same code into an existing virtual host file, it works correctly and all the other sites work again.
Any ideas why it doesn't work when we create a new file?
We deleted the new file and created it again, but we always get the same effect, with the same message in the error log.
I don't know whether it is important, but we have 196 files in the sites-enabled directory. If we create a new one the error comes back; if we delete the file and write the code into an existing file, it works correctly?!
We don't think this is an SSL error; we suspect the number of files is the problem?!
We want to keep creating a new virtual host config file for each customer rather than appending the config to an existing file.
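The error message itself suggests a concrete check: some server block is answering on port 443 without an ssl_certificate defined. One way to hunt for it is to dump the merged configuration with nginx -T and grep it; the inlined two-server config below is purely illustrative so the sketch runs anywhere:

```shell
# On the real server you would run:  nginx -T | grep -nE 'listen .*443|ssl_certificate'
# Illustrative stand-in for that dump:
conf='server { listen 443 ssl; server_name a.example.de; }
server { listen 443 ssl; ssl_certificate /etc/ssl/b.crt; server_name b.example.de; }'
# Print every line that opens an SSL listener or defines a certificate,
# with line numbers, so blocks missing a cert stand out.
printf '%s\n' "$conf" | grep -nE 'listen .*443|ssl_certificate'
```

Any server block that matches the listen pattern but has no ssl_certificate of its own (or inherited from an enclosing level) is a candidate for the handshake error.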

Related

502 error bad gateway with nginx on a server behind corporate proxy

I'm trying to install a custom service on one of our corporate servers (the kind of server that is not connected to the internet; all traffic has to pass through a corporate proxy).
This proxy has been set up with the classic export http_proxy=blablabla in the .bashrc file, among other things.
Now the interesting part: I'm trying to configure nginx to redirect all traffic from server_name to the local URL localhost:3000.
Here is the basic configuration I use. Nothing too tricky.
server {
    listen 443 ssl;
    ssl_certificate /crt.crt;
    ssl_certificate_key /key.key;
    server_name x.y.z;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When I try to access the server_name from my browser, I get a 502 error (an nginx error page, so the request does hit my server).
When I try to access the local URL from the server itself with curl --noproxy '*' https://localhost:3000, it works. (I have to pass the --noproxy '*' flag because of the export http_proxy=blablabla in the .bashrc file; without it, the localhost request is sent to our distant proxy and the request fails.)
My guess is that this has to be related to the corporate proxy configuration, but I might be mistaken.
Do you have any insights that you could share with me about this?
Thanks a lot!
PS: the issue is not related to any kind of SSL configuration; that part is working great.
PS2: I'm not a sysadmin, so all these issues are confusing.
PS3: the server I'm working on is RHEL 7.9.
It had nothing to do with the proxy; I found my solution here:
https://stackoverflow.com/a/24830777/4991067
Thanks anyway
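The linked answer is not quoted above, so what follows is an assumption: on RHEL, one frequent cause of a 502 from proxy_pass while curl to the upstream works locally is SELinux denying nginx outbound connections, and it is worth checking regardless of what the link says. A guarded sketch of the usual check and fix (the httpd_can_network_connect boolean covers nginx too, since it runs under the httpd SELinux policy):

```shell
# Guarded so the sketch is safe to run on systems without SELinux tooling.
if command -v getsebool >/dev/null 2>&1; then
    getsebool httpd_can_network_connect          # prints on/off
    # setsebool -P httpd_can_network_connect 1   # persistent fix (needs root)
else
    echo "SELinux tools not installed; skipping check"
fi
```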

How to reverse proxy an Nginx server to different web apps based on username and password

Context:
I have two identical Python web apps running on http://localhost:6001/ and http://localhost:6002/ respectively. Both apps serve the same endpoints, but they process input data (from users) slightly differently. We can consider them a dev instance and a staging instance.
Now I want to add an Nginx web server as a reverse proxy in front of both of these websites. There are two reasons for this:
I don't want users to access these two ports directly. I want to hide them from the public and make the apps accessible only via the port on which the Nginx server is listening. Let's say it is listening on port 7000.
I want to add a username/password setup to authenticate users (with the help of auth_basic and proxy_pass provided by Nginx).
Current Setup:
First I tried implementing basic auth for one app. I have a file named pyapp under /etc/nginx/sites-available and linked it into /etc/nginx/sites-enabled/. The configuration file looks like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 7000;
    server_name _;
    location / {
        auth_basic "basic auth";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:6001/;
        include /etc/nginx/proxy_params;
    }
    location ^~ /static {
        proxy_pass http://127.0.0.1:6001/static/;
    }
    location ^~ /healthz {
        proxy_pass http://127.0.0.1:6001/healthz/;
    }
    location ^~ /vendor {
        proxy_pass http://127.0.0.1:6001/vendor/;
    }
    location /stream { # this is a socket
        proxy_pass http://127.0.0.1:6001/stream/;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 86400;
    }
}
With this setup, when I enter http://127.0.0.1:7000/ directly in my browser, I get a prompt for a username and password. I enter credentials present in the .htpasswd file (which I created), and I get to see the app running on port 6001, while the address bar still shows http://127.0.0.1:7000/. So far so good.
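As an aside, creating the .htpasswd entries does not require the apache2-utils htpasswd tool; openssl can generate a compatible entry (username1 and the password here are placeholders):

```shell
# Emit one .htpasswd line using the APR1 (md5-based) scheme,
# which nginx's auth_basic understands.
printf 'username1:%s\n' "$(openssl passwd -apr1 'secret')"
```

Append the output line to /etc/nginx/.htpasswd, one line per user.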
Question:
Is there a way to do a conditional proxy_pass based on the username and password entered? Basically, when a user visits http://127.0.0.1:7000/:
if username1 and password1 are entered, load the app from port 6001
if username2 and password2 are entered, load the app from port 6002
I'm aware that we can get the name of the authenticated user from $remote_user, and that a variable (something like $app_port) could store the app's port (the idea being to replace the hardcoded 6001 with that variable and set it to 6001 or 6002 based on $remote_user). But I'm new to Nginx and don't know whether this is actually possible, because I couldn't find much information about it.
It would be of great help if anyone could point me to the right documentation. Thanks a ton!!
PS: Here is a similar question that is close to what I want to do. Please let me know if I can provide more relevant information which improves clarity.
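The $app_port idea from the question can be sketched with a map on $remote_user (an untested sketch; username2 and the two ports are the question's own placeholders). Because proxy_pass then contains a variable, the target is written as an IP literal so no resolver directive is needed:

```nginx
map $remote_user $app_port {
    default   6001;   # username1, and any other authenticated user
    username2 6002;
}
server {
    listen 7000;
    location / {
        auth_basic "basic auth";
        auth_basic_user_file /etc/nginx/.htpasswd;
        # $remote_user is populated once auth_basic succeeds, so the map
        # is evaluated with the authenticated name.
        proxy_pass http://127.0.0.1:$app_port;
    }
}
```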

Change document root on RStudio_AMI

It is on an Amazon server, so I checked the following posts:
Changing Apache document root on AWS EC2 does not work
and
How to edit httpd.conf file in AMAZON EC2
or, in general: How do I change the root directory of an apache server?
Well, the information provided has not helped me so far.
The only file I could find in the /etc/apache2 folder is the following:
Edit: The content of the config file is:
"Alias /javascript /usr/share/javascript/
Options FollowSymLinks MultiViews
"
I asked two months ago on his site, http://www.louisaslett.com/RStudio_AMI/, but didn't get an answer.
My question: how can I change the document root on an RStudio AMI server, so that I can move the RStudio login page away from the root directory to, say, domain.com/login, and have a landing page + other folders at the root (domain.com)?
Thank you for your help!
Edit:
Following the answer from Frédéric Henri:
Here is the content of my rstudio.conf file.
location / {
    proxy_pass http://localhost:8787;
    proxy_redirect http://localhost:8787/ $scheme://$host/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
    access_log /var/log/nginx/rstudio-access.log;
    error_log /var/log/nginx/rstudio-error.log;
}
Assuming I have the index.html file at /home/idx/index.html, how would I change the file then?
The following didn't work for me:
proxy_pass http://localhost/home/idx;
proxy_redirect http://localhost/home/idx/ $scheme://$host/;
Or:
proxy_pass /home/idx;
proxy_redirect /home/idx/ $scheme://$host/;
And where would I configure the redirect for my RStudio login?
Thank you!
You are right, and you would be looking in the right place if you were using the apache2/httpd web server; but the RStudio AMI uses the nginx web server, so all configuration is stored in /etc/nginx.
You can review Configure nginx with multiple locations with different root folders on subdomain to see how to work with the conf file.
Your current configuration mainly defines three locations:
http://<web_server_ip>/
The conf file used for this case is /etc/nginx/RStudioAMI/rstudio.conf. It processes all requests and forwards them to http://localhost:8787, where RStudio is running.
http://<web_server_ip>/julia
The conf file used for this case is /etc/nginx/RStudioAMI/julia.conf. It processes all requests and forwards them to http://localhost:8000, where Julia is running.
http://<web_server_ip>/shiny
The conf file used for this case is /etc/nginx/RStudioAMI/shiny.conf. It processes all requests and forwards them to http://localhost:3838, where Shiny is running.
For example, you could have the main location (simply / pointing to a specific folder) and change rstudio.conf to handle http://<web_server_ip>/rstudio.
EDIT
where would I configure the redirect for my rstudio login
If you want the RStudio login page to be accessible from http://<server>/rstudio (for example), you would need to change /etc/nginx/RStudioAMI/rstudio.conf:
location /rstudio/ {
    proxy_pass http://localhost:8787/;
    proxy_redirect http://localhost:8787/ $scheme://$host/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
    access_log /var/log/nginx/rstudio-access.log;
    error_log /var/log/nginx/rstudio-error.log;
}
If you want the main http://<server>/index.html to point to /home/idx/index.html, you need to change /etc/nginx/sites-enabled/RStudioAMI.conf and define a main location pointing to your root element:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80 default_server;
    index index.html;
    location = / {
        root /var/www/html;
    }
    include /etc/nginx/RStudioAMI/*.conf;
}
Note: any time you make a change to an nginx conf file, you need to restart nginx, with /etc/init.d/nginx restart.
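It can save an outage to validate the configuration before restarting; a guarded sketch (guarded so it is safe to run even where nginx is absent):

```shell
# nginx -t parses the config and reports errors without touching the
# running server; only restart when the test passes.
if command -v nginx >/dev/null 2>&1; then
    nginx -t && /etc/init.d/nginx restart
else
    echo "nginx not installed; nothing to test"
fi
```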

Nginx seems to be reverting to a directory of files?

I'm trying to run Ghost on my own VPS, mostly for the learning experience (and here we are).
When I SSH in and start/restart nginx, my blog URL shows the blog I'm trying to host; but after I exit and leave it alone for a while, it starts showing an index of files:
Index of /
HEAD
branches/
cgi-bin/
config
description
hooks/
info/
objects/
refs/
I'm not exactly sure where that directory listing is coming from or what's going on, despite hours of digging into the documentation.
EDIT: Here is the [url].conf file located in /etc/nginx/conf.d:
server {
    listen 80;
    server_name [url].com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }
}
There's nothing in /etc/nginx/sites-available or /etc/nginx/sites-enabled.

Elasticsearch head plugin not working through nginx reverse proxy

I have Elasticsearch with the head plugin installed, running on a different server. I have also set up an nginx reverse proxy for my ES instance. The configuration looks like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name es.mydomain.net;
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:9200;
        }
    }
}
Hitting http://es.mydomain.net/ works fine and I get a 200 response. However, if I hit http://es.mydomain.net/_plugin/head/, I seemingly get a blank page. Note that the page loads fine if I access the head plugin directly, without the reverse proxy, via http://SERVERIP:PORT/_plugin/head/.
EDIT:
After doing some more debugging, I saw a net::ERR_CONTENT_LENGTH_MISMATCH error in the console for the page. Looking at nginx's log to see what the error was, I came upon the true culprit:
2015/05/27 16:26:48 [crit] 29765#0: *655 open() "/home/web/nginx/proxy_temp/6/00/0000000006" failed (13: Permission denied) while reading upstream, client: 10.183.6.63, server: es.mydomain.com, request: "GET /_plugin/head/dist/app.js HTTP/1.1", upstream: "http://127.0.0.1:9200/_plugin/head/dist/app.js", host: "es.mydomain.com", referrer: "http://es.mydomain.com/_plugin/head/"
I googled this particular error, and it seems it can happen because the worker process runs as nobody, and the folder it is trying to read/write may not have the right permissions. Still looking into this; I will update with an answer when found.
EDIT 2: Removed unnecessary information to make the issue more direct.
I was able to work out two solutions to get around the permission problem, so I'll present them both.
One thing to know about my nginx setup is that I did not use sudo to install it. I unarchived the tar file, configured, and ran make install, so it was residing in /home/USERNAME/nginx/.
The issue was that starting nginx created a worker process running as "nobody", which then tried to read/write in /home/USERNAME/nginx/proxy_temp/, which it did not have permission to do. Solutions on the web said to just chown the temp folders to nobody, but that wasn't really appropriate in my particular case, since we were inside USERNAME's home directory.
Solution 1:
Add user USERNAME; to the top of nginx.conf, so that the worker process runs as the specified user. This no longer led to a permission issue, as USERNAME has permission to read/write the desired temp folders.
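Solution 1 amounts to a single directive at the top level of nginx.conf (USERNAME being a placeholder for the account that owns /home/USERNAME/nginx/):

```nginx
# Run worker processes as this user instead of the default "nobody".
user USERNAME;
```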
Solution 2:
Add proxy_temp_path to the server config. With this, you can specify a folder that the nobody process can create and has read/write permission for. Note that you might still run into permission issues if the other *_temp folders are used by your nginx server.
server {
    listen 80;
    server_name es.mydomain.net;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:9200;
        proxy_temp_path /foo/bar/proxy_temp;
    }
}
I personally preferred solution 1, as it applies to all the server blocks, and I would not have to worry about the other *_temp folders once the conf file gets more complex.
You have to install the head plugin on all ES nodes.
