Filebeat + nginx: determining the application name via the URI context root

I currently have Filebeat reading nginx logs and pushing them to Logstash. I am trying to determine which application a log entry comes from by looking at the URI context root (not sure if this is the correct way to do it), but the problem is when there is no context root: Logstash just parses whatever value comes right after the host.
Here is my nginx config.
server {
listen 443;
server_name MyServer.com;
access_log /var/log/nginx/access_dev.log main if=$loggable;
error_log /var/log/nginx/error_dev.log;
ssl on;
ssl_certificate ssl/bundle.pem;
ssl_certificate_key ssl/wildcard.key;
ssl_session_timeout 5m;
proxy_ssl_verify off;
ssl_protocols TLSv1.2;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:!aNULL;
ssl_prefer_server_ciphers on;
add_header X-Forwarded-For $host;
add_header X-Forwarded-Proto $scheme;
add_header Host $host;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
location / {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header Host $host;
proxy_pass https://app01.domain.local:443/;
}
location /applicationOne {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header Host $host;
proxy_read_timeout 120s;
proxy_pass https://app02.domain.local:443;
}
}
Is it possible to add a variable to a specific location block and say what the application name is? So for location / I would add, let's say, "Portal", and then the nginx log would have "Portal" at the end?
Here is my current nginx.conf
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" -- $sent_http_x_username';
map $request_uri $loggable {
default 1;
~*\.(ico|css|js|gif|jpg|jpeg|png|svg|woff|woff2|ttf|eot)$ 0;
}
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Here is an example of what I am asking for, with "Portal" at the end of the log line:
198.143.37.12 - - [31/Jul/2019:10:44:13 -0400] "GET /nosession HTTP/1.1" 200 3890 "-" "Safari/14607.2.6.1.1 CFNetwork/978.0.7 Darwin/18.6.0 (x86_64)" "199.83.71.22" "Portal"
And if that is not possible, are there any other ideas on how to solve this?

I went with the approach of adding a header and then logging that header.
In my server block:
location / {
add_header x-application "App Name";
}
And in nginx.conf, $sent_http_x_application was added to the log_format.
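Putting those pieces together, here is a minimal sketch of the two fragments involved (the x-application header name and the "Portal" value come from the question and answer above; "ApplicationOne" is just an illustrative name for the second app):
# nginx.conf, inside the http block: append the header to the log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" -- '
                '$sent_http_x_username "$sent_http_x_application"';
# site config: tag each location with its application name
location / {
    add_header x-application "Portal";
    proxy_set_header Host $host;
    proxy_pass https://app01.domain.local:443/;
}
location /applicationOne {
    add_header x-application "ApplicationOne";
    proxy_set_header Host $host;
    proxy_pass https://app02.domain.local:443;
}
One caveat: add_header only attaches the header to 2xx and 3xx responses unless you append the always parameter (add_header x-application "Portal" always;), so error responses would otherwise log the field as "-".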

Related

What is the relationship of server's setting in both nginx.conf and proxy.conf?

I am very new to NGINX.
In my project, a server is defined in both etc/nginx/nginx.conf and etc/nginx/conf.d/proxy.conf, and etc/nginx/conf.d/proxy.conf is included from nginx.conf.
I do not understand the relationship between the server settings in these two files. For example, in nginx.conf the server block has listen 80; listen [::]:80;, while in proxy.conf the server block has listen 80 proxy_protocol;.
In the above example, which setting will be used for real traffic?
Does the server block in proxy.conf overwrite the server block in nginx.conf, or are the settings from proxy.conf merged into the server block in nginx.conf?
Please find the full conf files as below:
etc/nginx/conf.d/proxy.conf
client_max_body_size 500M;
server_names_hash_bucket_size 128;
upstream backend {
server unix:///var/run/puma/my_app.sock;
}
server {
listen 80 proxy_protocol;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
large_client_header_buffers 8 32k;
set_real_ip_from 10.0.0.0/8;
real_ip_header proxy_protocol;
location / {
proxy_http_version 1.1;
proxy_set_header X-Real-IP $proxy_protocol_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_pass http://backend;
proxy_redirect off;
# Enables WebSocket support
location /v1/cable {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade "websocket";
proxy_set_header Connection "Upgrade";
proxy_set_header X-Real-IP $proxy_protocol_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}
}
}
etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 ;
listen [::]:80 ;
server_name localhost;
root /usr/share/nginx/html;
location / {
}
}
}
Nginx selects a server block to process a request based on the values of the listen and server_name directives.
If a matching server name cannot be found, the default server for that port will be used.
In the configuration in your question, the server block in proxy.conf is encountered first, so it becomes the de facto default server for port 80.
The server block in nginx.conf will only match requests that use the correct host name, i.e. http://localhost.
See this document for details.
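To make the selection concrete, here is a minimal illustration (placeholder blocks, not your real configuration) of how nginx picks a server block on one port:
# Which block handles a request is decided by matching the Host header
# against server_name; if nothing matches, the default server for that
# address:port is used. Without an explicit default_server flag, the
# first block declared for that address:port acts as the default.
server {
    listen 80 default_server;
    server_name _;
    return 200 "default server\n";
}
server {
    listen 80;
    server_name localhost;   # only requests with Host: localhost land here
    return 200 "localhost server\n";
}
You can verify which block answers by sending requests with different Host headers, for example with curl -H "Host: localhost" http://server-ip/.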

NGINX configuration for CORS and proper HTTPS rerouting

Hi, I'm quite inexperienced with NGINX and am having difficulty understanding why things aren't working as expected. I'm trying to test an API that I made with a docker container, which is being run with the command: docker run -d -v $(pwd):/app -p 8080:8000 --rm wiseeast/ya_bot.
I'm able to make a POST request with Postman to http://ffpr.isi.edu:8080/api, but the same request made via AJAX in JavaScript returns the apparently common No 'Access-Control-Allow-Origin' header is present on the requested resource. error. I tried to get around this by enabling CORS on my server (which I control) by adding add_header 'Access-Control-Allow-Origin' '*' always; but it didn't resolve the issue. What is also bugging me is that with Postman I can make a successful POST request to http://ffpr.isi.edu:8080/api but not to https://ffpr.isi.edu:8080/api.
Also, I have a rerouting issue that I feel should be straightforward given what I've read, but it isn't working. I have a webpage properly rerouting http://ffpr.isi.edu to https://ffpr.isi.edu, but the rest of the rerouting doesn't work. For instance, http://ffpr.isi.edu:5050/ loads insecurely through port 80 and won't reroute to https://ffpr.isi.edu:5050/. On the other hand, https://ffpr.isi.edu:5050/ won't open at all and times out.
Here is my full nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
upstream frontend {
server 0.0.0.0:8000;
}
upstream ased_api {
server 0.0.0.0:5000;
}
upstream ya_bot {
server 0.0.0.0:8080;
}
upstream yesand {
server 0.0.0.0:5050;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
return 301 https://$host$request_uri;
}
# Settings for a TLS enabled server.
#
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name ffpr.isi.edu;
ssl_certificate "/etc/nginx/ssl/ffpr_isi_edu_cert.cer";
ssl_certificate_key "/etc/nginx/ssl/ffpr_isi_edu.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
add_header 'Access-Control-Allow-Origin' '*';
proxy_pass http://frontend;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location /api {
proxy_pass http://ased_api;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location /ya_bot {
proxy_pass http://ya_bot;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
location /yesand {
add_header 'Access-Control-Allow-Origin' '*';
proxy_pass http://yesand;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name _;
root /usr/share/nginx/html;
ssl_certificate "/etc/nginx/ssl/ffpr_isi_edu_cert.cer";
ssl_certificate_key "/etc/nginx/ssl/ffpr_isi_edu.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
I've been suffering with these issues for so long, any pointers are greatly appreciated!!
In my experience, adding add_header 'Access-Control-Allow-Origin' '*'; on the proxy machine did not fix the problem.
However, setting the Access-Control-Allow-Origin header from the backend API as a response header did work. For example, you can run the following Go code in the backend API handler (proxy-host-name is a placeholder for the origin you want to allow):
(*w).Header().Set("Access-Control-Allow-Origin", "proxy-host-name")
As for the redirect issue, you don't need two separate server blocks; try this instead in nginx.conf:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name ffpr.isi.edu;
ssl_certificate "/etc/nginx/ssl/ffpr_isi_edu_cert.cer";
ssl_certificate_key "/etc/nginx/ssl/ffpr_isi_edu.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
if ($scheme != https) {
return 301 https://$host$request_uri;
}
}
I hope this helps.

NGINX: How do I remove a port when performing a reverse proxy?

I have an Nginx reverse proxy set up which is being used as an SSL offload for several servers such as confluence. I've got it successfully working for taking http://confluence and https://confluence but when I try to redirect http://confluence:8090, it tries to go to https://confluence:8090 and fails.
How can I remove the port from the URL?
The config below is a bit trimmed but maybe helpful? Is the $server_port bit in the headers causing the problem?
server {
listen 8090;
server_name confluence;
return 301 https://confluence$request_uri;
}
server {
listen 443 ssl http2;
server_name confluence;
location / {
proxy_http_version 1.1;
proxy_pass http://confbackend:8091;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $server_name:$server_port;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Upgrade $http_upgrade; #WebSocket Support
proxy_set_header Connection $connection_upgrade; #WebSocket Support
}
}
Seems like a lot of answers here involve http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect but I find no solace in that confusing mess.
I also would have thought you'd have a single server but I was trying the advice from https://serverfault.com/questions/815797/nginx-rewrite-to-new-protocol-and-port
I tried messing with the port_in_redirect off; option but maybe I was using it wrong?
EDIT 1: Add conf files
The files below are modifications from the Artifactory nginx setup. I used their setup initially and added additional conf files (in ./conf.d/) for other RP endpoints.
Confluence.conf
server {
listen 8090 ssl http2;
server_name confluence.domain.com confluence;
## return 301 https://confluence.domain.com$request_uri;
proxy_redirect https://confluence.domain.com:8090 https://confluence.domain.com;
}
server {
## add ssl entries when https has been set in config
ssl_certificate /data/rpssl/confluence.pem;
ssl_certificate_key /data/rpssl/confluence_unencrypted.key;
## server configuration
listen 443 ssl http2;
server_name confluence.domain.com confluence;
add_header Strict-Transport-Security max-age=31536000;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
access_log /var/log/nginx/confluence-access.log timing;
error_log /var/log/nginx/confluence-error.log;
client_max_body_size 0;
proxy_read_timeout 1200;
proxy_connect_timeout 240;
location / {
proxy_http_version 1.1;
proxy_pass http://backendconfluence.domain.com:8091;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $server_name:$server_port;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Upgrade $http_upgrade; # WebSocket Support
proxy_set_header Connection $connection_upgrade; # WebSocket support
}
}
nginx.conf
# Main Nginx configuration file
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;
events {
worker_connections 2048;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
variables_hash_max_size 1024;
variables_hash_bucket_size 64;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;
types_hash_max_size 2048;
types_hash_bucket_size 64;
proxy_read_timeout 2400s;
client_header_timeout 2400s;
client_body_timeout 2400s;
proxy_connect_timeout 75s;
proxy_send_timeout 2400s;
proxy_buffer_size 32k;
proxy_buffers 40 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 250m;
proxy_http_version 1.1;
client_body_buffer_size 128k;
map $http_upgrade $connection_upgrade { #WebSocket support
default upgrade;
'' '';
}
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format timing 'ip = $remote_addr '
'user = \"$remote_user\" '
'local_time = \"$time_local\" '
'host = $host '
'request = \"$request\" '
'status = $status '
'bytes = $body_bytes_sent '
'upstream = \"$upstream_addr\" '
'upstream_time = $upstream_response_time '
'request_time = $request_time '
'referer = \"$http_referer\" '
'UA = \"$http_user_agent\"';
access_log /var/log/nginx/access.log timing;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Your problem is the STS header:
add_header Strict-Transport-Security max-age=31536000;
When you add the STS header, the first request to http://example.com:8090 generates a redirect to https://example.com.
That https://example.com response then returns the STS header, and the browser remembers that example.com must always be served over HTTPS, no matter what; the port makes no difference.
Now when you make another request to http://example.com:8090, STS kicks in and converts it to https://example.com:8090, which is your problem here.
Because a given port can only serve either http or https, you can't use 8090 both to redirect http to https and to redirect https on 8090 to https on 443.
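If you still want requests that arrive on port 8090 to end up at https://confluence.domain.com without the port, one option, sketched here reusing the certificate paths from your Confluence.conf and assuming that certificate is valid for the name, is to terminate TLS on 8090 as well and redirect from there:
server {
    listen 8090 ssl http2;
    server_name confluence.domain.com confluence;
    ssl_certificate /data/rpssl/confluence.pem;
    ssl_certificate_key /data/rpssl/confluence_unencrypted.key;
    # Browsers that have cached the STS policy will request https://confluence:8090,
    # so terminate TLS here and send them on to the portless URL.
    return 301 https://confluence.domain.com$request_uri;
}
The trade-off is the one described above: plain http://confluence:8090 will no longer work against this listener, since that port now speaks only TLS.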

Fastly error when reverse proxying

I am trying to set up a caching reverse proxy for the Contentful CDN using Nginx.
My configuration is as follows:
http {
proxy_cache_path /my/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 443 ssl;
server_name my.domain.com;
server_tokens off;
root /my/www/client;
index index.html;
try_files $uri $uri/ /index.html;
auth_basic "Private Property";
auth_basic_user_file /pass/.htpasswd;
ssl_certificate /my/ssl/cert.crt;
ssl_certificate_key /my/ssl/key.key;
ssl_dhparam /etc/ssl/certs/LS-dhparam.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECD$
location /spaces {
index index.html index.htm;
charset utf-8;
proxy_pass http://cdn.contentful.com/spaces/spaceId/entries?access_token=accessTokenValue;
proxy_ignore_headers Cache-Control Expires;
proxy_cache STATIC;
proxy_cache_valid 200 10m;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_connect_timeout 90s;
proxy_send_timeout 90s;
proxy_read_timeout 90s;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 90s;
}
}
}
Where, of course, spaceId and accessTokenValue are the appropriate values.
I then trigger a request on https://my.domain.com/spaces/&content_type=myContent&locale=en
Which returns this error: Fastly error: unknown domain: my.domain.com. Please check that this domain has been added to a service.
It works when I try to use the full url being: http://cdn.contentful.com/spaces/spaceId/entries?access_token=accessTokenValue&content_type=myContent&locale=en
Is there something wrong with my nginx configuration or is there something from contentful's side?
You need to remove
proxy_set_header Host $host;
for this to work. When you proxy_pass to http://cdn.contentful.com/spaces/spaceId/entries?access_token=accessTokenValue&$query_string, you want the host name to remain cdn.contentful.com rather than be replaced with your own.
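A sketch of the relevant location block with that change (the upstream URL is the one from your question; setting Host explicitly to the CDN's own name has the same effect as removing the directive, since the default is the proxied host):
location /spaces {
    proxy_pass http://cdn.contentful.com/spaces/spaceId/entries?access_token=accessTokenValue;
    # Fastly routes requests by Host header, so it must see cdn.contentful.com,
    # not my.domain.com, or it answers with "unknown domain".
    proxy_set_header Host cdn.contentful.com;
    proxy_cache STATIC;
    proxy_cache_valid 200 10m;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}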

How do I make nginx accept unencrypted http traffic on the local network?

I'm running the nginx bundled with GitLab, and it has an SSL cert, but the cert is only for the public domain, so nginx won't accept traffic that isn't encrypted, and because of this I can't access my server from the local network (which is my home network). Is there a way I can change this so that nginx will accept unencrypted traffic only on the local network?
EDIT: Similar to this question.
here is my nginx config:
user gitlab-www gitlab-www;
worker_processes 4;
error_log stderr;
pid nginx.pid;
daemon off;
events {
worker_connections 10240;
}
http {
log_format gitlab_access '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
log_format gitlab_ci_access '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
log_format gitlab_mattermost_access '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
include /opt/gitlab/embedded/conf/mime.types;
include /var/opt/gitlab/nginx/conf/gitlab-http.conf;
}
here is the gitlab-http config:
upstream gitlab {
server unix:/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket fail_timeout=0;
}
upstream gitlab-workhorse {
server unix:/var/opt/gitlab/gitlab-workhorse/socket;
}
## Redirects all HTTP traffic to the HTTPS host
server {
listen 0.0.0.0:80;
listen [::]:80;
server_name git.team2roblox.tk;
server_tokens off; ## Don't show the nginx version number, a security best practice
return 301 https://git.team2roblox.tk:443$request_uri;
access_log /var/log/gitlab/nginx/gitlab_access.log gitlab_access;
error_log /var/log/gitlab/nginx/gitlab_error.log;
}
server {
listen 0.0.0.0:443 ssl spdy;
listen [::]:443 ssl spdy;
server_name git.team2roblox.tk;
server_tokens off; ## Don't show the nginx version number, a security best practice
root /opt/gitlab/embedded/service/gitlab-rails/public;
## Increase this if you want to upload large attachments
## Or if you want to accept large git objects over http
client_max_body_size 250m;
## Strong SSL Security
## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
ssl on;
ssl_certificate /etc/gitlab/ssl/cert.pem;
ssl_certificate_key /etc/gitlab/ssl/fullchain.pem;
# GitLab needs backwards compatible ciphers to retain compatibility with Java IDEs
ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 5m;
## Individual nginx logs for this GitLab vhost
access_log /var/log/gitlab/nginx/gitlab_access.log gitlab_access;
error_log /var/log/gitlab/nginx/gitlab_error.log;
location / {
## Serve static files from defined root folder.
## @gitlab is a named location for the upstream fallback, see below.
try_files $uri /index.html $uri.html @gitlab;
}
location /uploads/ {
## If you use HTTPS make sure you disable gzip compression
## to be safe against BREACH attack.
gzip off;
## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_pass http://gitlab;
}
## If a file, which is not found in the root folder is requested,
## then the proxy passes the request to the upstream (gitlab unicorn).
location @gitlab {
## If you use HTTPS make sure you disable gzip compression
## to be safe against BREACH attack.
gzip off;
## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_pass http://gitlab;
}
location ~ ^/[\w\.-]+/[\w\.-]+/gitlab-lfs/objects {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location ~ ^/[\w\.-]+/[\w\.-]+/(info/refs|git-upload-pack|git-receive-pack)$ {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location ~ ^/[\w\.-]+/[\w\.-]+/repository/archive {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location ~ ^/api/v3/projects/.*/repository/archive {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# Build artifacts should be submitted to this location
location ~ ^/[\w\.-]+/[\w\.-]+/builds/download {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# Build artifacts should be submitted to this location
location ~ /ci/api/v1/builds/[0-9]+/artifacts {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location @gitlab-workhorse {
client_max_body_size 0;
## If you use HTTPS make sure you disable gzip compression
## to be safe against BREACH attack.
gzip off;
## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://gitlab-workhorse;
}
## Enable gzip compression as per rails guide:
## http://guides.rubyonrails.org/asset_pipeline.html#gzip-compression
## WARNING: If you are using relative urls remove the block below
## See config/application.rb under "Relative url support" for the list of
## other files that need to be changed for relative url support
location ~ ^/(assets)/ {
root /opt/gitlab/embedded/service/gitlab-rails/public;
gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}
error_page 502 /502.html;
}
Take a look at iptables or whatever firewall you have. The details may differ if you're using a different OS.
On the NGINX server (assuming you're running a Linux derivative), use iptables to allow connections from the local network only and block everything else. The first rule below covers the local network CIDR range, which might be different for your network; the second covers the loopback range; the last drops everything else.
iptables -A INPUT -p tcp --dport 80 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -s 127.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP
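For reference, a similar source-based restriction can be expressed inside nginx itself. This is only a rough sketch: it reuses the 192.168.0.0/24 range from the iptables rules above, and, like those rules, it only controls who may use port 80; it does not by itself make GitLab serve plain HTTP to the local network. Also keep in mind that the bundled GitLab nginx config is regenerated by gitlab-ctl reconfigure, so hand edits may be overwritten.
# at http level: classify clients by source address
geo $lan_client {
    default        0;
    192.168.0.0/24 1;
    127.0.0.0/8    1;
}
# in the port-80 server block
server {
    listen 0.0.0.0:80;
    listen [::]:80;
    server_name git.team2roblox.tk;
    if ($lan_client = 0) {
        return 444;   # close the connection for non-local clients
    }
    return 301 https://git.team2roblox.tk:443$request_uri;
}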

Resources