I am trying to host TensorBoard on a Heroku instance and, to secure it, I have put nginx in front of it using the Nginx-Buildpack.
The idea is that TensorBoard will serve the app on port 6006, and nginx will proxy that port to the external port Heroku provides in $PORT.
When I start the app, I get the following error:
TensorBoard attempted to bind to port 6006, but it was already in use
My config files are as follows:
Procfile
web: bin/start-nginx tensorboard --logdir="/app/" --host=http://127.0.0.1 --port=6006
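For reference, TensorBoard's --host flag expects a bare address rather than a URL, so the intended line is presumably:

web: bin/start-nginx tensorboard --logdir="/app/" --host=127.0.0.1 --port=6006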
config/nginx.conf.erb
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;

    #upstream app_server {
    #    server unix:/tmp/nginx.socket fail_timeout=0;
    #}

    server {
        listen <%= ENV["PORT"] %>;
        server_name http://127.0.0.1;
        keepalive_timeout 5;
        root /app;
        port_in_redirect off;
        #index index.html index.htm;

        location = / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:6006;
        }
    }
}
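The commented-out upstream block reflects the buildpack's usual convention, where the app listens on a unix socket and nginx proxies to it. A minimal sketch of that wiring, assuming the backing app can bind a unix socket, would be:

upstream app_server {
    server unix:/tmp/nginx.socket fail_timeout=0;
}

server {
    listen <%= ENV["PORT"] %>;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://app_server;
    }
}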
Related
I am using nginx as a reverse proxy to my backend (a Java app with Spring Boot). Overall (avg, p50, p90, p95, p99 latencies) it performs well. But from time to time I see latency spikes of around 100-200 milliseconds. When I enabled the access logs, I saw that the upstream response time (upstream_response_time) is very low even though the request time (request_time) is high. For example,
[25/Apr/2020:18:28:17 +0000] "XXX" XXX - request="POST /v1/composite-monitoring-data HTTP/1.1" status=429 request_time=0.081 trace_id="Root=1-5ea48141-2f8e07a4c7c71a1360d9c5f5" request_length=9864 bytes_sent=979 body_bytes_sent=623 upstream_addr=127.0.0.1:5000 upstream_status=429 upstream_response_time=0.004 upstream_connect_time=0.000 upstream_header_time=0.004 user_agent="okhttp/3.10.0" current_time_msec=1587839297.256
...
[25/Apr/2020:18:28:17 +0000] "XXX" XXX - request="POST /v1/composite-monitoring-data HTTP/1.1" status=429 request_time=0.084 trace_id="Root=1-5ea48141-51f0d12a6f7c4b0651f6ef42" request_length=20534 bytes_sent=979 body_bytes_sent=623 upstream_addr=127.0.0.1:5000 upstream_status=429 upstream_response_time=0.000 upstream_connect_time=0.000 upstream_header_time=0.000 user_agent="okhttp/3.10.0" current_time_msec=1587839297.278
Also here is my nginx.conf file:
user nginx;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;

worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    default_type application/json;

    sendfile on;
    tcp_nopush off;
    tcp_nodelay on;

    keepalive_timeout 300;
    keepalive_requests 10000;

    client_body_timeout 15;
    client_header_timeout 15;
    client_body_buffer_size 4m;
    client_max_body_size 4m;

    log_format main '[$time_local] "$http_x_forwarded_for" $remote_addr - '
                    'request="$request" status=$status request_time=$request_time trace_id="$http_x_amzn_trace_id" '
                    'request_length=$request_length bytes_sent=$bytes_sent body_bytes_sent=$body_bytes_sent '
                    'upstream_addr=$upstream_addr '
                    'upstream_status=$upstream_status '
                    'upstream_response_time=$upstream_response_time '
                    'upstream_connect_time=$upstream_connect_time '
                    'upstream_header_time=$upstream_header_time '
                    'user_agent="$http_user_agent" '
                    'current_time_msec=$msec';

    access_log /var/log/nginx/access.log main;

    upstream http_backend {
        server 127.0.0.1:5000;
        keepalive 1024;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name _ localhost;

        location /v1 {
            proxy_pass http://http_backend/v1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Request-Start $msec;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
            keepalive_timeout 300;
            keepalive_requests 10000;
        }

        location /ping {
            proxy_pass http://http_backend/ping;
        }
    }
}
What might cause this big difference between the request time and the upstream response time? Is there anything I have misconfigured, or something I still need to configure?
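For context on the two variables: $request_time is measured from the first bytes read from the client until the last bytes are sent to it (and the log entry is written), while $upstream_response_time covers only the exchange with the backend, so a large gap usually means time spent on the client side or in nginx itself rather than in the upstream. If blocking log writes are a suspect, one hedged, low-risk tweak is buffered access logging:

# sketch: buffer access-log writes so a slow disk does not stall workers
# (buffer/flush values are illustrative assumptions)
access_log /var/log/nginx/access.log main buffer=64k flush=5s;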
I have a scenario where nginx sits in front of an Artifactory server.
Recently, while pulling a large number of docker images in a for loop, all at the same time (the first test was with 200 images, the second with 120), access to Artifactory got blocked: nginx was busy processing all the requests and users could not reach it.
My nginx server is running with 4 CPU cores and 8192 MB of RAM.
I have tried to improve the handling of files in the server by adding the below:
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
This made it a bit better (though of course pulls of images of 1 GB+ take much more time, due to the chunk size) - still, access to the UI caused a lot of timeouts.
Is there something else I can do to improve nginx performance whenever a bigger load is pushed through it?
I think my last option is to increase the size of the machine (more CPUs) as well as the number of nginx worker processes (8 to 16).
The full nginx.conf file follows below:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 19000;
}

http {
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    gzip on;
    gzip_disable "msie6";

    sendfile on;
    sendfile_max_chunk 512k;
    tcp_nopush on;

    set_real_ip_from 138.190.190.168;
    real_ip_header X-Forwarded-For;

    log_format custome '$remote_addr - $realip_remote_addr - $remote_user [$time_local] $request_time'
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent"';

    server {
        listen 80 default;
        listen [::]:80 default;
        server_name _;
        return 301 https://$server_name$request_uri;
    }

    ###########################################################
    ## this configuration was generated by JFrog Artifactory ##
    ###########################################################

    ## add ssl entries when https has been set in config
    ssl_certificate /etc/ssl/certs/{{ hostname }}.cer;
    ssl_certificate_key /etc/ssl/private/{{ hostname }}.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    ## server configuration
    server {
        listen 443 ssl;
        server_name ~(?<repo>.+)\.{{ hostname }} {{ hostname }} _;

        if ($http_x_forwarded_proto = '') {
            set $http_x_forwarded_proto $scheme;
        }

        ## Application specific logs
        access_log /var/log/nginx/{{ hostname }}-access.log custome;
        error_log /var/log/nginx/{{ hostname }}-error.log warn;

        rewrite ^/$ /webapp/ redirect;
        rewrite ^//?(/webapp)?$ /webapp/ redirect;
        rewrite ^/(v1|v2)/(.*) /api/docker/$repo/$1/$2;

        chunked_transfer_encoding on;
        client_max_body_size 0;

        location / {
            proxy_read_timeout 900;
            proxy_max_temp_file_size 10240m;
            proxy_pass_header Server;
            proxy_cookie_path ~*^/.* /;
            proxy_pass http://{{ appserver }}:8081/artifactory/;
            proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
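Not an authoritative fix, but a few directives that are commonly tuned for exactly this kind of large-artifact proxying are sketched below; the values are illustrative assumptions to be validated against the actual workload:

# raise the per-worker file-descriptor limit to match worker_connections
worker_rlimit_nofile 65536;

http {
    # stream large docker blobs instead of spooling them to temp files
    proxy_buffering off;
    # likewise for large uploads (docker pushes)
    proxy_request_buffering off;
    # cut TLS handshake cost across many parallel pulls
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}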
Thanks for the tips.
Cheers,
Ricardo
Hi, I am trying to set up nginx to work as a reverse proxy to an application that I am running on a Tomcat server. When I access my application over http it works fine, but when I access it over https I get a 502 error.
Here follows my nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log notice;

    gzip on;
    gzip_disable "msie6";

    rewrite_log on;

    server {
        ssl on;
        listen 80;
        listen 443 ssl;
        server_name myapp.local;
        ssl_certificate max.local.crt;
        ssl_certificate_key server.key;
        #ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        #ssl_ciphers RC3:HIGH:!aNULL:!MD5;
        #ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        keepalive_timeout 60;

        error_log /var/log/nginx/hybris.log;
        rewrite_log on;

        set $my_port 9001;
        set $my_protocol "http";

        if ($scheme = https) {
            set $my_port 9002;
            set $my_protocol "https";
        }

        location / {
            if ($http_user_agent ~ "Chrome") {
                # just a proof of concept
                return 301 http://$host/AE/en;
            }
            if ($http_user_agent ~ "Firefox") {
                # just a proof of concept
                return 301 http://google.com/;
            }
        }

        location /AE/en {
            proxy_pass $scheme://10.0.2.2:$my_port;
            proxy_set_header Host $host;
        }

        location ~(?:/..)?/_ui/(.*) {
            proxy_pass http://10.0.2.2:9001/_ui/$1;
            proxy_set_header Host $host;
        }
    }
}
When using https you are changing the port and also scheme for connecting to the tomcat server - this does not really make sense. You would only use https for a backend server if it is in another datacenter, not within a local network. It should work fine if you remove the $my_port and $my_protocol definitions and change your /AE/en location block to
location /AE/en {
    proxy_pass http://10.0.2.2:9001;
    proxy_set_header Host $host;
}
I think you need to create two server sections: one listening on port 80 and the other listening on port 443, which is for https.
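A hedged sketch of that two-block layout, reusing the names from the question (certificate paths and the backend address are taken from the config above, not verified):

server {
    listen 80;
    server_name myapp.local;
    # send plain-http clients to the https server block
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name myapp.local;
    ssl_certificate max.local.crt;
    ssl_certificate_key server.key;

    location /AE/en {
        # terminate TLS here and talk plain http to tomcat on the local network
        proxy_pass http://10.0.2.2:9001;
        proxy_set_header Host $host;
    }
}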
I am using nginx (in front of gunicorn) to serve static files for a Flask app.
Static files in the default static folder are working fine:
<link rel="stylesheet" href="{{ url_for('static', filename='css/fa/font-awesome.min.css') }}" />
However, for other static files, access to which I want to restrict to logged-in users only, I am using a static folder served by Flask:
application_view = Blueprint('application_view', __name__, static_folder='application_static')
app.register_blueprint(application_view)
In the HTML I reference one of these restricted static files like this:
<link rel="stylesheet" href="{{ url_for('application_view.static', filename='css/main.css') }}" />
Then in application/application_static I have the restricted static files. This works fine on a local Flask install; however, when I deploy to a production machine where nginx serves the files under /static, I get a "NetworkError: 404 Not Found - website.com/application_static/main.css".
Any ideas on how to configure nginx to handle this issue?
conf.d/mysitename.conf file:
upstream app_server_wsgiapp {
    server localhost:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name www.mysitename.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}

server {
    server_name www.mysitename.com;
    listen 443 ssl;
    #other ssl config here

    access_log /var/log/nginx/www.mysitename.com.access.log;
    error_log /var/log/nginx/www.mysitename.com.error.log info;

    keepalive_timeout 5;

    # nginx serves up static files and never sends to the WSGI server
    location /static {
        autoindex on;
        alias /pathtositeonserver/static;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://app_server_wsgiapp;
            break;
        }
    }

    # this section allows nginx to reverse proxy for websockets
    location /socket.io {
        proxy_pass http://app_server_wsgiapp/socket.io;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
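For completeness, one hedged way to make the restricted files work in production is to let requests under the blueprint's static path fall through to the app server instead of nginx's static root, along these lines (the /application_static prefix is an assumption based on the 404 URL above):

location /application_static {
    # do not serve these from disk; let Flask enforce the login check
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://app_server_wsgiapp;
}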
gunicorn will still be running the old code unless you reload it.
Either stop and restart gunicorn, or send a HUP signal to the gunicorn master process.
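A minimal sketch of that reload; the pidfile path is an assumption, adjust it to however gunicorn was started:

# send SIGHUP so the gunicorn master re-forks workers with the current code
kill -HUP "$(cat /run/gunicorn.pid)"

If gunicorn runs under systemd, sudo systemctl restart gunicorn achieves the same.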
I have 2 servers, A and B; on server A I have nginx installed.
Below is my config file, located at /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    upstream sendforward {
        server Server_IP_B:9000;
    }

    server {
        #access_log off;
        server_name my_server_name;
        listen 443;
        large_client_header_buffers 4 16k;
        error_log /var/log/nginx/error.log;

        location / {
            proxy_pass http://sendforward;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
    }
}
When I send a message to server A on port 443, it is written to the access_log file, but the message is not forwarded to server B.
I also checked with the Ubuntu command nc -l 9000 and with Wireshark, filtering on tcp.port==9000.
I don't understand why this is happening or what I am missing in the configuration.
Thanks in advance.
Have you tried defining your upstream before your proxy_pass?
I also think you should remove the trailing slash from your proxy_pass,
so http://sendforward instead of http://sendforward/.
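To narrow this down, a hedged check along the lines of what was already tried:

# on server B: listen on the forward port
nc -l 9000

# on server A: send a plain-http request through nginx
# (the server block listens on 443 without "ssl", so use http://, not https://)
curl -v http://127.0.0.1:443/

If nginx logs the request but server B sees nothing, the upstream/proxy_pass wiring is the place to look; if curl itself fails, the client is probably speaking https to a plain-http listener.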