With nginx/0.7.65 I'm getting this error on line 4. Why doesn't it recognize the server directive?
#### CHAT_FRONT ####
server {
    listen 7000 default deferred;
    server_name example.com;
    root /home/deployer/apps/chat_front/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### CHAT_STORE ####
server {
    listen 7002 default deferred;
    server_name store.example.com;
    root /home/deployer/apps/chat_store/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### LOGIN ####
server {
    listen 7004 default deferred;
    server_name login.example.com;
    root /home/deployer/apps/login/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### PERMISSIONS ####
server {
    listen 7006 default deferred;
    server_name permissions.example.com;
    root /home/deployer/apps/permissions/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### SEARCH ####
server {
    listen 7008 default deferred;
    server_name search.example.com;
    root /home/deployer/apps/search/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### ANALYTICS ####
server {
    listen 7010 default deferred;
    server_name analytics.example.com;
    root /home/deployer/apps/analytics/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
The server directive must be contained in the context of the http module. Additionally, you are missing the top-level events module, which has one obligatory setting, and a number of stanzas that belong in the http block of your config. While the nginx documentation is not particularly helpful for creating a config from scratch, there are working examples there.
Source: nginx documentation on the server directive
Adding a top-level entry got around the problem:
events { }
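Putting the two answers together, a minimal complete nginx.conf looks something like this (paths taken from the question; only the first server block is shown):

```nginx
# events is mandatory at the top level; server blocks live inside http
events { }

http {
    server {
        listen 7000;
        server_name example.com;
        root /home/deployer/apps/chat_front/current/public;
    }

    # ...the remaining server blocks from the question go here as well
}
```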
Try adjusting the line endings and the encoding of your configuration file. In my case, ANSI encoding and possibly "Linux-style" line endings (LF only, not CR+LF) were required.
I know it is a rather old question. However, the recommendation of the accepted answer (that the server directive must be contained in the context of the http module) may be confusing, because there are plenty of examples (on the nginx website and in this Microsoft guide) where the server directive is not contained in an http block.
Possibly the answer of sd z ("I rewrote the *.conf file and it worked") comes down to the same cause: there was a *.conf file with incorrect encoding or line endings, which was fixed when the file was rewritten.
I ran into an issue similar to Hoborg's, whose comment pointed me in the right direction. In my case, I had copied and pasted a config from another post to use for a Docker container hosting an Angular app. It was the default config plus one extra line. However, when I pasted, a few extra bytes (EF BB BF, a UTF-8 byte-order mark) were included at the beginning of the paste. The IDE I was using (Visual Studio) did not display anything to indicate these extra bytes, so it wasn't apparent that they existed. Removing them resolved the issue, which I did using a hex editor (HxD) because it was convenient at the time. The Windows-style line endings did not seem to be an issue in my case.
The default config had server as the outermost directive, so the top answer and the error message were indeed confusing. That part remained as it was originally in my case, and the original file (without the 3 extra bytes) had not thrown the error.
I rewrote the *.conf file and it worked.
I wish to view a cached static webpage on nginx through this URL:
http://localhost:8087/mycache/welcome_page.html
The welcome_page.html is kept in this location on Windows:
C:\nginx-1.22.1\html\welcome_page.html
The tricky part is that I have a reverse proxy set up using an upstream with backend Tomcat servers.
Despite specifying a location block for /mycache/, the request goes to the backend Tomcat and thus fails with the error "the page you are looking for is currently unavailable" instead of nginx serving the cached HTML file.
An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
Faithfully yours, nginx.
Below is my nginx configuration:
Can you please suggest how I can fix the problem?
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream tomcatcluster {
        server 127.0.0.1:8181;
        server 127.0.0.1:8282;
    }

    server {
        listen 8087;
        server_name localhost;

        location /mycache/ {
            root C:\nginx-1.22.1\html;
            index index.html index.htm;
        }

        location / {
            proxy_pass http://tomcatcluster;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Can you please suggest how we can get this to work?
If you are manually adding the pages to C:\nginx-1.22.1\html, remove the proxy_pass line from the location /mycache/ block:
location /mycache/ {
    root C:\nginx-1.22.1\html;
    index index.html index.htm;
}
Otherwise, let me know if what you are trying to achieve is to set the proxy_temp_path.
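If the file actually sits directly under C:\nginx-1.22.1\html (not inside a mycache subfolder), another option (a sketch, not part of the original answer) is alias, which strips the location prefix when mapping to the filesystem; nginx on Windows also accepts forward slashes in paths:

```nginx
# /mycache/welcome_page.html -> C:/nginx-1.22.1/html/welcome_page.html
location /mycache/ {
    alias C:/nginx-1.22.1/html/;
    index index.html index.htm;
}
```

With root, the same request would instead be mapped to C:/nginx-1.22.1/html/mycache/welcome_page.html.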
I have a website using Play! framework with multiple domains proxying to the backend, example.com and example.ca.
I have all http requests on port 80 being rewritten to https on port 443. This is all working as expected.
But when I type http://example.com:443 into the address bar, I'm served nginx's default error page, which says
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
I'd like to serve my own error page for this, but I just can't seem to get it working. Here's a snippet of my configuration.
upstream my-backend {
    server 127.0.0.1:9000;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location / {
        proxy_pass http://my-backend;
    }

    error_page 400 502 error.html;
    location = /error.html {
        root /usr/share/nginx/html;
    }
}
It works when my Play! application is shut down, but when it's running it always serves up the default nginx page.
I've tried adding the error page configuration to another server block like this
server {
    listen 443;
    ssl off;
    server_name example.com;
    error_page [..]
}
But that fails with the browser complaining about the certificate being wrong.
I'd really ultimately like to be able to catch and handle any errors which aren't handled by my Play! application with a custom page, or pages. I'd also like this solution to work if the user manually enters the site's IP into the address bar instead of the server name.
Any help is appreciated.
I found the answer to this here: https://stackoverflow.com/a/12610382/4023897.
In my particular case, where I want to serve a static error page under these circumstances, my configuration is as follows
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location = /error.html {
        root /usr/share/nginx/html;
        autoindex off;
    }

    location / {
        proxy_pass http://my-backend;
    }

    # If they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$host:$server_port/error.html;
}
I'm trying to set up basic caching in my OpenResty nginx web server. I have tried a million different combinations from many different tutorials, but I can't get it right. Here is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/openresty.pid;
worker_rlimit_nofile 30000;

events {
    worker_connections 20000;
}

http {
    proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m max_size=100m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    add_header X-Cache $upstream_cache_status;

    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/openresty/access.log;
    error_log /var/log/openresty/error.log;

    include ../sites/*;

    lua_package_cpath '/usr/local/lib/lua/5.1/?.so;;';
}
And here is my server configuration:
server {
    # Listen on port 8080.
    listen 8080;
    listen [::]:8080;

    # The document root.
    root /var/www/cache;

    # Add index.php if you are using PHP.
    index index.php index.html index.htm;

    # The server name, which isn't relevant in this case, because we only have one.
    server_name cache.com;

    # Redirect server error pages to the static page /50x.html.
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/cache;
    }

    location /test.html {
        root /var/www/cache;
        default_type text/plain;
        try_files $uri /$uri;
        expires 1h;
        add_header Cache-Control "public";
        proxy_cache cache;
        proxy_cache_valid 200 301 302 60m;
    }
}
Caching should work fine, but there is nothing in error.log or access.log, the cache directory is empty, and the X-Cache header with $upstream_cache_status does not even show up when I fetch the headers with curl (curl -I). My nginx (OpenResty) build was not configured with --without-http_proxy_module, so the proxy module is there. I have no idea what I am doing wrong; please help.
You didn't define anything that can be cached: proxy_cache works together with proxy_pass.
The add_header defined inside the http block is overridden by the add_header directives defined at a lower level (the location block in your config). Here is the relevant snippet from the documentation for add_header:
There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.
If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
So you cannot see the X-Cache header as expected.
This should be a quick fix.
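As a sketch of a location that can actually be cached (the upstream address here is hypothetical): proxy_cache only applies to proxied responses, and repeating add_header at this level keeps X-Cache from being masked:

```nginx
location /test.html {
    proxy_pass http://127.0.0.1:8081;  # hypothetical upstream
    proxy_cache cache;
    proxy_cache_valid 200 301 302 60m;

    # add_header at this level replaces any inherited ones,
    # so X-Cache has to be restated here to stay visible
    add_header X-Cache $upstream_cache_status;
    add_header Cache-Control "public";
    expires 1h;
}
```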
So for some reason I still can't get a request that is greater than 1MB to succeed without returning 413 Request Entity Too Large.
For example with the following configuration file and a request of size ~2MB, I get the following error message in my nginx error.log:
*1 client intended to send too large body: 2666685 bytes,
I have tried setting the configuration that is set below and then restarting my nginx server but I still get the 413 error.
Is there anything I am doing wrong?
server {
    listen 8080;
    server_name *****/api; (*omitted*)

    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    charset utf-8;
    client_max_body_size 500M;
    sendfile on;
    send_timeout 300s;

    listen 443 ssl;

    location / {
        try_files $uri @(*omitted*);
    }

    location @parachute_server {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/(*omitted*)/(*omitted*).sock;
    }
}
Thank you in advance for the help!
I'm surprised you haven't received a response, but my hunch is that you already have it set somewhere else in another config file.
Take a look at nginx - client_max_body_size has no effect
Weirdly, it works after adding the same setting, "client_max_body_size 100M", in all of the blocks: http, server, and location.
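For example, declaring it once at the http level (a sketch; the value and ports are illustrative) makes it the default for every server and location below, so a stray lower-level override is easier to spot:

```nginx
http {
    client_max_body_size 100M;  # inherited by all servers/locations below

    server {
        listen 8080;

        location / {
            # inherits 100M unless client_max_body_size
            # is set again at this level
        }
    }
}
```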
I'm following the html5boilerplate nginx setup for the most part, but everything keeps breaking when I include expires.conf.
HTML5BP for Nginx setup: https://github.com/h5bp/server-configs/tree/master/nginx
I've changed only a few very minor things, which I'll put below. When I include conf/expires.conf, however, everything returns 404.
As a side note, I don't think it's just HTML5BP either; I also followed this guide and images broke there too (under the heading for Nginx tip #5, static assets expire).
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log off;
    log_not_found off;
    expires 360d;
}
I'm using nginx on a server behind an EC2 elastic load balancer.
Here is my /etc/nginx/sites-enabled/example.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        root /var/www/html/example.com;
        index index.html index.htm;
    }

    # Specify a charset
    charset utf-8;

    # Custom 404 page
    error_page 404 /404.html;

    include conf/base.conf;
}