Nginx: running a new app alongside an existing website

My server already has a Rails app running on Nginx on port 80, and now I want to set up GitLab on the same server.
Currently I have to access GitLab at myserver:1987/.
I want to access GitLab at myserver/gitlab instead.
How can I achieve this?
nginx.conf
http {
    include /opt/nginx/sites-enabled/*;
}
...
server {
    listen 80;
    server_name localhost;
    passenger_enabled on;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        index index.html index.htm;
        root /home/poc/projects/zeus/public;
        passenger_enabled on;
    }
}
sites-enabled/gitlab.conf
# GITLAB
# Maintainer: @randx
# CHUNKED TRANSFER
# It is a known issue that Git-over-HTTP requires chunked transfer encoding [0], which is not
# supported by Nginx < 1.3.9 [1]. As a result, pushing a large object with Git (i.e. a single large file)
# can lead to a 411 error. In theory you can get around this by tweaking this configuration file and either
# - installing an old version of Nginx with the chunkin module [2] compiled in, or
# - using a newer version of Nginx.
#
# At the time of writing we do not know if either of these theoretical solutions works. As a workaround
# users can use Git over SSH to push large files.
#
# [0] https://git.kernel.org/cgit/git/git.git/tree/Documentation/technical/http-protocol.txt#n99
# [1] https://github.com/agentzh/chunkin-nginx-module#status
# [2] https://github.com/agentzh/chunkin-nginx-module

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen *:1987;      # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name dqa-test;   # e.g., server_name source.example.com;
    server_tokens off;      # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # Increase this if you want to upload large attachments
    # or if you want to accept large git objects over http
    client_max_body_size 5m;

    # individual nginx logs for this gitlab vhost
    access_log /opt/nginx/logs/gitlab_access.log;
    error_log /opt/nginx/logs/gitlab_error.log;

    location / {
        # serve static files from defined root folder
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;     # Some requests take more than 30 seconds.
        proxy_connect_timeout 300;  # Some requests take more than 30 seconds.
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://gitlab;
    }

    error_page 502 /502.html;
}
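One way to approach this (a minimal sketch, not a tested setup): add a location /gitlab block to the existing port-80 server and proxy it to the gitlab upstream defined above. Note that GitLab itself also has to be told it lives under a sub-path (the relative_url_root setting in gitlab.yml), otherwise its redirects and asset URLs will point at the root; if that proves troublesome, a dedicated subdomain is usually the easier route.

# inside the existing "listen 80" server block from nginx.conf
location /gitlab {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;

    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    # "gitlab" is the unix-socket upstream from sites-enabled/gitlab.conf
    proxy_pass http://gitlab;
}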

Related

Nginx response error 404 when processing a specific url

I have updated my gems and I have lost my old nginx config. I'm setting up a new config in nginx.conf. My new Nginx version is 1.17.3. The home page loads and navigation from the home page also works. But if I type a specific URL directly into my browser, Nginx responds with a 404.
I don't remember what I'm missing. My nginx.conf file:
events {
    worker_connections 1024;
}

http {
    upstream api.development {
        # Path to Puma SOCK file, as defined previously
        server unix:/tmp/puma.sock fail_timeout=0;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    # set client body size to 10M
    client_max_body_size 10M;
    gzip on;

    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /path-to-root/app;
        index index.html index.htm;

        # Proxy requests to backend API
        location /api {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            rewrite ^/api(.*) /$1 break;
            proxy_pass http://api.development;
        }
    }

    include servers/*;
}
I think I have already got it working by adding: try_files $uri $uri/ /index.html =404;
My concern is that with this line it works in the development environment; in production, however, I haven't updated the server yet, and I don't see any try_files enabled in my config file. So I don't understand whether this is what I should do. I'm not skilled at all in nginx config.
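For reference, a typical single-page-app fallback looks like the sketch below (assuming the same root as above). The trailing =404 is usually dropped, since /index.html is itself the final fallback:

server {
    listen 80 default_server;
    root /path-to-root/app;
    index index.html index.htm;

    location / {
        # serve the exact file or directory if it exists,
        # otherwise hand the route to the front-end entry point
        try_files $uri $uri/ /index.html;
    }
}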

Unable to locate root directory of a web app in NGINX?

I'm struggling with an NGINX-based web app; I need to find the root directory that's being served. It's a subdomain, and a simple nano /etc/nginx/sites-available/app.refridge.com shows the following contents.
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
    listen *:80;
    server_name app.refridge.com;

    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

server {
    listen *:443 ssl;
    server_name app.refridge.com;

    access_log /var/log/nginx/app.refridge.com-access.log;
    error_log /var/log/nginx/app.refridge.com-error.log;

    # SSL configuration
    ssl_certificate /etc/nginx/ssl/STAR_refridge_com-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/star_refridge_com.key;
    # listen 443 ssl default_server;
There's no root defined for either port 80 or 443, but the website still loads. Is there anything I'm missing? I need to find the files so I can make a backup; that's why I'm asking.
Any help would be appreciated.
P.S. It's a DigitalOcean droplet.
UPDATE: I think there's a reverse proxy set up:
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header X-Forwarded-For $remote_addr;
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    # try_files $uri $uri/ =404;
}
Assuming port 3000, this is a Node.js application, but there should still be files somewhere that I can access to make a backup.
Thanks
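One way to track the files down (a sketch assuming shell access to the droplet; <PID> is a placeholder for the real process ID) is to find the process listening on port 3000 and inspect its working directory:

# find the PID of whatever is listening on port 3000
sudo lsof -i :3000
# or: sudo ss -tlnp | grep :3000

# a process's current working directory usually points at the app
sudo ls -l /proc/<PID>/cwd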

How do I get Nginx to return 444 if the request doesn't match a path?

Short version:
I want to use NGINX as a reverse proxy so that a client accessing the public facing URL gets served API data from the internal Gunicorn server sitting behind the proxy:
external path (proxy) => internal app
<static IP>/ABC/data => 127.0.0.1:8001/data
I'm not getting the location mapping correct.
Long version:
I am setting up NGINX for the first time and am attempting to use it as a reverse proxy for a REST API served by Gunicorn. The API is served at 127.0.0.1:8001 and I can access it from the server and get the appropriate responses, so I believe that piece is working correctly. It's running persistently using Supervisord.
I'd like to access one of the API endpoints externally at <static IP>/ABC/data. On the Gunicorn server, this endpoint available at localhost:8001/data. Eventually I'd like to serve other web apps through NGINX with roots like <static IP>/foo, <static IP>/bar, etc. Each of these web apps would be from an independent Python app. But currently, when I try to access the endpoint externally, I get a 444 error code, so I think I am not configuring NGINX correctly.
I put together my first attempt at an NGINX config from the config posted on the Gunicorn site. Instead of a single config, I've split it into a global config and a site-specific one. My global config at /etc/nginx/nginx.conf looks like:
user ops;
worker_processes 1;
pid /run/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # set to 'on' if nginx worker_processes > 1
    use epoll;
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;
    sendfile on;
    server_tokens off;

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Then my site specific configuration that is in /etc/nginx/sites-available (and is symlinked in /etc/nginx/sites-enabled) is:
upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response

    # for UNIX domain socket setups
    # server unix:/tmp/gunicorn_abc_api.sock fail_timeout=0;

    # for a TCP configuration
    server 127.0.0.1:8001 fail_timeout=0;
}

server {
    # use 'listen 80 deferred;' for Linux
    # use 'listen 80 accept_filter=httpready;' for FreeBSD
    listen 80 deferred;
    client_max_body_size 4G;

    # set the correct host(s) for your site
    server_name _;

    keepalive_timeout 100;

    # path for static files
    #root /path/to/app/current/public;

    location /ABC {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        proxy_pass http://app_server;
    }

    # error_page 500 502 503 504 /500.html;
    # location = /500.html {
    #     root /path/to/app/current/public;
    # }
}
The configs pass service nginx checkconfig, but I end up seeing the following in my access log:
XXX.XXX.X.XXX - - [09/Sep/2016:01:03:18 +0000] "GET /ABC/data HTTP/1.1" 444 0 "-" "python-requests/2.10.0"
I think I've somehow not configured the routes properly. Any suggestions would be appreciated.
UPDATE:
I have it working now with a few changes. I commented out the following block:
server {
    # if no Host match, close the connection to prevent host spoofing
    listen 80 default_server;
    return 444;
}
I can't figure out how to get the behavior of returning 444 unless there is a valid route. I'd like to, but I'm still stuck on this part. This block seems to eat all incoming requests. I've also changed the app config to:
upstream app_server {
    server 127.0.0.1:8001 fail_timeout=0;
}

server {
    # use 'listen 80 deferred;' for Linux
    # use 'listen 80 accept_filter=httpready;' for FreeBSD
    listen 80 deferred;
    client_max_body_size 100M;

    # set the correct host(s) for your site
    server_name $hostname;

    keepalive_timeout 100;

    location /ABC {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        rewrite ^/ABC/(.*) /$1 break;
        proxy_pass http://app_server;
    }
}
Basically I seem to have had to explicitly set server_name and also use rewrite to get the correct mapping to the app server.
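An alternative worth noting (a sketch, untested here): when proxy_pass carries a URI part, nginx substitutes the matched location prefix itself, so the rewrite can often be dropped:

location /ABC/ {
    # the trailing "/" on proxy_pass makes nginx replace the
    # matched /ABC/ prefix, so /ABC/data is forwarded as /data
    proxy_pass http://app_server/;
}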
This works fine for me; it returns 444 (hangs up the connection) only if no other server name is matched:
server {
    listen 80;
    server_name "";
    return 444;
}
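To sketch how the two pieces fit together (203.0.113.10 stands in for the real static IP): the catch-all only leaves real traffic alone if the application's server block actually matches the Host header clients send, which for an API reached by bare IP is the IP itself:

# dropped: any request whose Host matches no server_name below
server {
    listen 80 default_server;
    server_name "";
    return 444;
}

# matched: requests addressed to the IP clients actually use
server {
    listen 80;
    server_name 203.0.113.10;

    location /ABC {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://app_server;
    }
}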

Why is Nginx routing all traffic to one subdomain?

I am new to nginx. I am trying to install GitLab alongside an existing PHP project which is currently being served by Apache on port 80. My plan is to get them both working side by side on port 90 and then turn off Apache, switching both projects to Nginx on port 80.
Okay. The problem is that both subdomains are being captured by the server block for my PHP project, which should only serve requests for db.mydomain.com. For the PHP project I have a file called ccdb symlinked into /etc/nginx/sites-enabled. It contains:
server {
    server_name db.mydomain.com;
    listen 90; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/ccdb;
    index index.html index.htm index.php;
}
However, for some reason, traffic to git.mydomain.com is being served from /var/www/ccdb even though I have another file, called gitlab, symlinked alongside that one with this content:
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen 90; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name git.mydomain.com; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
NOTE: I am accessing the two domains from an OS X machine on the same local network, which has entries in its /etc/hosts file like so:
192.168.1.100 db.mydomain.com
192.168.1.100 git.mydomain.com
Try to use:
server_name git.mydomain.com:90;
... and:
server_name db.mydomain.com:90;
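To see which server block actually answers on port 90, independent of the /etc/hosts entries, one can send explicit Host headers from the client (a debugging sketch using the IP from the question):

# ask port 90 for each vhost by name and compare the responses
curl -v -H "Host: git.mydomain.com" http://192.168.1.100:90/
curl -v -H "Host: db.mydomain.com" http://192.168.1.100:90/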

Gitlab clone over http fails to authenticate from external network

I have Gitlab 5.2 + Nginx installed on a local machine in my university. Cloning over http works for machines within the internal network, but trying to clone from a machine on an external network results in a "fatal: Authentication failed" message, even though the exact same credentials are supplied. (I use the same credentials as the ones I use to log in to Gitlab via the web interface.)
The Gitlab web interface is accessible from external networks. It is only the clone over http that fails (clone over ssh is not possible because port 22 is blocked)
Here are some lines from the relevant configuration files:
from config/gitlab.yml
host: mydomain
port: 80
https: false
Here are the relevant lines from the nginx config file:
server {
    listen *:80 default_server; # In most cases *:80 is a good idea
    server_name mydomain; # e.g., server_name source.example.com;
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 2000; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 2000; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gitlab;
    }
}
Note: I have added this line to /etc/hosts: 127.0.0.1 mydomain, but it doesn't really help (based on https://github.com/gitlabhq/gitlabhq/issues/3483#issuecomment-15783597).
Any ideas on what the issue might be/how I might debug this?
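One way to narrow it down (a debugging sketch; namespace/repo.git is a placeholder for a real repository path) is to replay git's smart-HTTP handshake with curl from both an internal and an external machine and compare:

# -u prompts for the password; a 401 here with known-good credentials
# points at the auth layer (gitlab / gitlab-shell) rather than at nginx
curl -v -u username "http://mydomain/namespace/repo.git/info/refs?service=git-upload-pack"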
I believe this is fixed in 5.3, so try updating. See:
https://github.com/gitlabhq/gitlabhq/blob/master/CHANGELOG#L41
https://github.com/gitlabhq/gitlabhq/blob/master/config/gitlab.yml.example#L151
