Perhaps someone can help me with this issue.
I am reverse-proxying a website through nginx built with --with-http_sub_module.
It works fine on many sites, except for one picture on one of the websites.
The strange thing is that another proxied site has a picture that works fine, and I have no clue why.
Picture with the chunked-encoding error: 59.6 kB
Picture that works fine: 43.3 kB
I can see that the picture is there, but only for a few milliseconds, and then I get the error: net::ERR_INCOMPLETE_CHUNKED_ENCODING
First of all, my nginx.conf:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
#proxy_set_header X-Real-IP $remote_addr;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And here is my proxy_config:
{
proxy_buffering off;
proxy_buffers 8 24k;
proxy_buffer_size 2k;
proxy_http_version 1.1;
proxy_headers_hash_max_size 1024;
proxy_headers_hash_bucket_size 128;
sub_filter_types *;
access_log off;
proxy_redirect http https;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection close;
proxy_set_header Accept-Encoding "";
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Does anyone know anything about this?
I've tried every option from the http_sub_module documentation, but nothing seems to work.
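For completeness, the narrowest variant I still plan to test limits the substitution to HTML only, so binary responses such as images pass through the filter untouched (a sketch with placeholder hostnames, not yet verified):
# only rewrite HTML; images and other binary types are streamed through unmodified
sub_filter_types text/html;                        # replaces the catch-all "sub_filter_types *"
sub_filter 'old.example.com' 'new.example.com';    # placeholder pattern, not my real hosts
sub_filter_once off;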
Related
I'm trying to deploy Tomcat and nginx on a single AWS EC2 instance. I have 3 instances, and on each instance I want to deploy nginx and Tomcat. Below is my configuration file:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
}"
/etc/nginx/conf.d/application.conf
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /var/lib/tomcat9/webapps/ROOT;
index deploy.html;
location /admin {
try_files $uri $uri/ /deploy.html;
}
location /admin/admin-portal {
alias /opt/tomcat/webapps/ROOT/;
rewrite /admin-portal/(.*) /$1 break;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:8080/;
}
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
}
My goal is: when I hit http://IP/ or http://IP/admin, it should serve deploy.html, and when I hit http://IP/admin/admin-portal, it should open the Tomcat server.
NOTE: Both cases work, except that when I hit http://IP/admin/admin-portal, only the HTML page opens; the CSS/PNG/JS files get a 404 Not Found error.
/opt/tomcat/webapps/ROOT/ is the path for all of Tomcat's static files (CSS/JS/PNG, etc.).
Can anyone help me with this?
Try hitting the complete URL of your EC2 instance:
<instanceip>:8080/admin/admin-portal/
Alternatively, you can add a trailing "/" to the location:
location /admin/admin-portal/
and then hit the URL as
<instance-ip>:8080/admin/admin-portal
Now you don't need to add the "/" at the end.
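A sketch of what that location could look like with the trailing slash, reusing the proxy headers from the question's config (same backend assumed):
location /admin/admin-portal/ {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # with a trailing slash on both the location and proxy_pass,
    # /admin/admin-portal/foo.css is forwarded to the backend as /foo.css
    proxy_pass http://localhost:8080/;
}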
Here's my nginx config (using nginx 1.16.1):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 100000;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format upstream_time '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"'
'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';
##
# Logging Settings
##
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log upstream_time;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
#Nginx cache
proxy_cache_path /nginx_cache/product levels=1:2 keys_zone=product_cache:100m max_size=20g inactive=2d use_temp_path=off;
#Serve HTML, JS, CSS & Go requests
server {
client_max_body_size 102M;
listen 443 ssl http2;
server_name example.com;
root /html;
index /;
error_page 404 /404.html;
error_page 500 /500.html;
error_page 502 =503 /maintenance.html;
location = /404.html {
add_header x-nginx-cache-status $upstream_cache_status always;
}
location ~^/([a-zA-Z0-9/]+)$ {
set $product_id $1;
rewrite ^ /product?id=$product_id break;
proxy_cache product_cache;
proxy_http_version 1.1;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_key product-$product_id;
add_header x-nginx-cache-key product-$product_id always;
add_header x-nginx-cache-status $upstream_cache_status always;
proxy_cache_valid 200 404 1d;
proxy_cache_bypass $nocache;
proxy_ignore_headers Cache-Control; #force cache
proxy_ignore_headers Set-Cookie;
proxy_intercept_errors on;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header REQUEST_URI $request_uri;
proxy_pass http://go:2053;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
}
}
}
With this configuration, I can see that the 404 responses are actually a cache HIT after the first request; however, they are not saved in the nginx_cache folder as configured.
Other requests which are 200 responses are cached appropriately and saved to disk as expected.
I've tried also adding the same caching config to the location = /404.html block, but that did not have any effect on whether the file was saved to disk.
I am guessing this has to do with overriding the error page by using proxy_intercept_errors and error_page, so nginx is no longer caching it using the parameters I set.
Is there a way to achieve this?
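One thing I still want to try is dropping proxy_intercept_errors and error_page for this location, so that the upstream's own 404 body is what gets cached and written to disk. A sketch of that variant, untested and trimmed down to the cache-related directives:
location ~ ^/([a-zA-Z0-9/]+)$ {
    set $product_id $1;
    rewrite ^ /product?id=$product_id break;
    proxy_cache product_cache;
    proxy_cache_key product-$product_id;
    proxy_cache_valid 200 404 1d;
    # no proxy_intercept_errors / error_page here, so the backend's 404 response body
    # is stored in the cache and served on subsequent requests
    proxy_pass http://go:2053;
}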
I have below nginx config which is running into this error while trying to start the nginx:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 164;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
upstream qe {
server qe.domain.com:443;
}
upstream staging {
server staging.domain.com:443;
}
upstream beta {
server mydomain.com:443;
server mydomain-beta.com:443;
}
# map to different upstream backends based on header
map $http_x_server_select $pool {
default "staging";
qe "qe";
beta "beta";
}
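# e.g. a request sent with "X-Server-Select: qe" is proxied to the qe upstream,
# "X-Server-Select: beta" to the beta upstream, and anything else falls back to staging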
server {
listen 80;
server_name 100.0.0.0 ec2.instance.compute-1.amazonaws.com;
location / {
proxy_pass https://$pool;
#standard proxy settings
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
}
}
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
Everything seems to be in place, but I am still seeing this error. Am I missing something? It is definitely not the curly braces causing it, as the braces are all balanced, but I am not sure what is causing this error.
The error points to the last line of the config file, where there is some commented-out config that came by default when I installed nginx, but I don't think that is the reason, as I also tried removing it.
Update: I removed everything from my config file and am still getting the same error. I am confused about what is going on now.
I am using nginx with Heroku and I want to enable http_gzip_static_module to serve pre-compressed files. I compress my files manually, so I have, for example:
bundle.js
bundle.js.gz
I cannot make this work. If I enable gzip on, dynamic compression works. I am not really familiar with nginx, and I am using configs that I found on the internet for use with Heroku; or rather, I am using a Heroku buildpack that says this is supported.
For now, only compression is important to me; I would remove the extra noise if I knew what was unimportant. Is there something I should change? This is my config file:
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
use epoll;
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
error_log logs/nginx/error.log;
error_log logs/nginx/error_extreme.log emerg;
error_log logs/nginx/error_debug.log debug;
error_log logs/nginx/error_critical.log crit;
http {
charset utf-8;
include mime.types;
# # - Add extra mime types
types{
application/x-httpd-php .html;
}
default_type application/octet-stream;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;
# # - Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
# # - Enable open file cache
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# # - Configure buffer sizes
client_body_buffer_size 16k;
client_header_buffer_size 1k;
# # - Responds with 413 http status ie. request entity too large error if this value exceeds
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# # - Configure Timeouts
client_body_timeout 12;
client_header_timeout 12;
# # - Use a higher keepalive timeout to reduce the need for repeated handshake
keepalive_timeout 300;
# # - if the request is not completed within 10 seconds, then abort the connection and send the timeout errror
send_timeout 10;
# # - Hide nginx version information
server_tokens off;
# # - Static gzip compression (serve pre-compressed .gz files)
gzip_static on;
#gzip off;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
#gzip_min_length 20;
#gzip_buffers 4 16k;
#gzip_comp_level 9;
gzip_proxied any;
#Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript ;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
gzip_types text/javascript ;
gzip_types text/plain;
gzip_types text/xml;
gzip_types text/html;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
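# keep "Connection: upgrade" when the client actually asked for a protocol upgrade
# (e.g. WebSockets); for ordinary requests fall back to closing the connection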
upstream nodebeats {
server unix:/tmp/nginx.socket fail_timeout=0;
keepalive 32;
}
server {
listen <%= ENV['PORT'] %>;
server_name _;
root "/app/";
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://nodebeats;
}
location /api {
proxy_pass http://nodebeats;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /dist {
alias "/app/app-dist";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /offline {
alias "/app/public/offline";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /scripts {
alias "/app/node_modules";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
}
}
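For what it's worth, my understanding is that gzip_static only applies when nginx itself serves a file from disk, so it can only take effect in the alias locations (/dist, /offline, /scripts), not for responses proxied to the Node upstream. A trimmed-down sketch of what I am testing, assuming bundle.js and bundle.js.gz both live under /app/app-dist:
location /dist {
    alias "/app/app-dist";
    # if bundle.js.gz exists next to bundle.js and the client sends "Accept-Encoding: gzip",
    # nginx serves the .gz file directly with "Content-Encoding: gzip"
    gzip_static on;
    expires 1M;
    add_header Vary Accept-Encoding;
}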
I set up nginx and Unicorn on Ubuntu 14.04 to serve my Rails app, but when I access my domain, Chrome responds with 'connection refused'.
I don't know why.
How can I resolve this problem?
Here are my nginx.conf and Unicorn files.
【nginx.conf】
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
#include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
【myapp-unicorn】(/etc/nginx/site-enabled/myapp-unicorn)
upstream myapp.com {
#my rails app
server unix:/var/www/rails/myapp/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name myapp.com;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 35M;
proxy_pass http://myapp.com;
}
}
【unicorn.rb】(/var/www/rails/myapp/config/unicorn.rb)
worker_processes 2
listen File.expand_path('tmp/sockets/unicorn.sock', ENV['RAILS_ROOT'])
stderr_path File.expand_path('log/unicorn.log', ENV['RAILS_ROOT'])
stdout_path File.expand_path('log/unicorn.log', ENV['RAILS_ROOT'])
preload_app true
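# On a restart, the block below asks the old master process to quit once the new one is up,
# so deploys roll over onto the same socket without downtime.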
before_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
old_pid = "#{ server.config[:pid] }.oldbin"
unless old_pid == server.pid
begin
Process.kill :QUIT, File.read(old_pid).to_i
rescue Errno::ENOENT, Errno::ESRCH
end
end
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
Thank you for your patience with my poor English.
Update:
This is unicorn.log (/var/www/rails/myapp/log/unicorn.log):
I, [2015-09-05T18:17:20.590239 #10832] INFO -- : Refreshing Gem list
I, [2015-09-05T18:17:22.099133 #10832] INFO -- : unlinking existing socket=/var/www/rails/myapp/tmp/sockets/unicorn.sock
I, [2015-09-05T18:17:22.099389 #10832] INFO -- : listening on addr=/var/www/rails/myapp/tmp/sockets/unicorn.sock fd=11
I, [2015-09-05T18:17:22.115503 #10832] INFO -- : master process ready
I, [2015-09-05T18:17:22.118878 #10836] INFO -- : worker=0 ready
I, [2015-09-05T18:17:22.127008 #10839] INFO -- : worker=1 ready
This is the nginx log (/var/log/nginx/error.log):
The nginx error log is empty...
If your error is
connect() failed (111: Connection refused) while connecting to upstream
try making Unicorn listen on
listen /tmp/sockets/unicorn.sock
instead of
listen File.expand_path('tmp/sockets/unicorn.sock', ENV['RAILS_ROOT'])
because sometimes nginx cannot read that socket file due to permissions; it is safer to keep the socket file inside the /tmp folder. Also point your nginx upstream to
server unix:/tmp/sockets/unicorn.sock fail_timeout=0;
instead of
server unix:/var/www/rails/myapp/tmp/sockets/unicorn.sock fail_timeout=0;
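Putting the nginx side of both suggestions together, it could look roughly like this (a sketch; the upstream name myapp_unicorn is just an example, and it assumes Unicorn now listens on /tmp/sockets/unicorn.sock):
upstream myapp_unicorn {
    server unix:/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name myapp.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 35M;
        proxy_pass http://myapp_unicorn;
    }
}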
Please reply if you still get the error.
Happy Deployment ;)