I have been getting the nginx error:
413 Request Entity Too Large
I have been able to update my client_max_body_size in the server section of my nginx.conf file to 20M and this has fixed the issue. However, what is the default nginx client_max_body_size?
The default value of the client_max_body_size directive is 1 MiB.
It can be set in the http, server and location contexts; as in most cases,
the directive in a nested block takes precedence over the same directive in its ancestor blocks.
Excerpt from the ngx_http_core_module documentation:
Syntax: client_max_body_size size;
Default: client_max_body_size 1m;
Context: http, server, location
Sets the maximum allowed size of the client request body, specified in
the “Content-Length” request header field. If the size in a request
exceeds the configured value, the 413 (Request Entity Too Large) error
is returned to the client. Please be aware that browsers cannot
correctly display this error. Setting size to 0 disables checking of
client request body size.
Don't forget to reload the configuration
with nginx -s reload or service nginx reload, prefixed with sudo if needed.
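For example, since the nested context wins, you can set a site-wide limit in the server block and raise it only for a specific upload endpoint. A minimal sketch (the /upload path and the values are illustrative, not taken from the question):
server {
    listen 80;
    client_max_body_size 20M;        # applies to every location in this server block
    location /upload {
        client_max_body_size 100M;   # overrides the server-level limit for this path only
    }
}
Requests outside /upload stay capped at 20M.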
Pooja Mane's answer worked for me, but I had to put the client_max_body_size directive inside the http section.
The nginx default value for client_max_body_size is 1 MB.
You can update this value in three different ways:
1. Set in http block which affects all server blocks (virtual hosts).
http {
...
client_max_body_size 100M;
}
2. Set in server block, which affects a particular site/app.
server {
...
client_max_body_size 100M;
}
3. Set in location block, which affects a particular directory (uploads) under a site/app.
location /uploads {
...
client_max_body_size 100M;
}
You can increase the body size in the nginx configuration file:
sudo nano /etc/nginx/nginx.conf
client_max_body_size 100M;
Restart nginx to apply the changes.
sudo service nginx restart
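It is also worth validating the edited configuration before the restart; assuming a standard installation, nginx's built-in syntax check reports errors without touching the running server:
sudo nginx -t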
You have to increase client_max_body_size in the nginx.conf file; this is the basic step. But if your backend is Laravel (or another PHP app), you may have to change the php.ini file as well, typically the upload_max_filesize and post_max_size directives; it depends on your backend. The nginx file location and directive are below.
sudo vim /etc/nginx/nginx.conf
After opening the file, add this to the http section:
client_max_body_size 100M;
This works for the new AWS Linux 2 environment. To fix this, you need to wrap the configuration into your deployment bundle. If you're using Docker, you should have a zip file (mine is called deploy.zip) that contains your Dockerrun.aws.json. If you don't, it's easy to create one; just zip your deploy via
zip -r deploy.zip Dockerrun.aws.json
With that, you now need to add a .platform folder as follows:
APP ROOT
├── Dockerfile
├── Dockerrun.aws.json
├── .platform
│ └── nginx
│ └── conf.d
│ └── custom.conf
You can name custom.conf whatever you want, and you can have as many files as you want. Inside custom.conf, you simply need to place the following:
client_max_body_size 50M;
Or whatever value you want for your config. With that, update your zip:
zip -r deploy.zip Dockerrun.aws.json .platform
And deploy. Your Nginx server will now respect the new directive.
More details here: https://blog.benthem.io/2022/04/05/modifying-nginx-settings-on-elasticbeanstalk-with-docker.html
I have a somedomain.com.conf file under /etc/nginx/sites-available in Linux (RHEL). If I want to host a web app/site, do I just edit the same file or create a new configuration file for nginx? I edited this file and it works, but I'm trying to find the right way to do this. Is the convention to create a new config file for each site/app you host?
server {
listen 80;
server_name mysite.com;
charset utf-8;
root /var/www/mysite-folder;
index index.html index.htm;
location / {
root /var/www/mysite-folder;
try_files $uri /index.html;
}
}
I thought RHEL-based systems don't use the sites-enabled/sites-available mechanism at all (as opposed to Debian-based distros). Of course, the most common approach is to use a separate file for each hosted domain name (possibly including its subdomains). All those files are included from the top-level configuration file /etc/nginx/nginx.conf at the http context level; the Debian packages usually have
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
lines in that file while RHEL packages usually have only the single
include /etc/nginx/conf.d/*.conf;
line. As you can see, files in the sites-enabled directory may be named any way, while files in the conf.d directory must have the .conf extension to be included (and you can rename a file to something like <domain>.off to temporarily exclude it from the nginx configuration). Which directory to use for your vhost configuration is up to you (I personally prefer /etc/nginx/conf.d/ since it is the more universal way). There is a big "Difference in sites-available vs sites-enabled vs conf.d directories" thread on this subject on ServerFault (the whole question is more suited to ServerFault than StackOverflow; please ask this kind of question there next time).
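As a concrete illustration (the example.com name is hypothetical), a per-site file on a Debian-style layout is activated with a symlink into sites-enabled, while on RHEL it is simply dropped into conf.d; in either case, test and reload afterwards:
# Debian-style layout
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
# RHEL-style layout
sudo cp example.com.conf /etc/nginx/conf.d/
# verify syntax and apply
sudo nginx -t && sudo nginx -s reload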
I have some Slate docs as a website and would like to serve them on the internal server through a subdomain: internal-docs.mysite.com. For the record, accessing mysite.com shows the "nginx is running properly" page.
I've created a config file with following path and name: /etc/nginx/sites-available/internal-docs.mysite.com:
server {
listen 80;
server_name internal-docs.mysite.com;
root /var/www/docs-internal;
index index.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
}
And of course, I've put the files in /var/www/docs-internal. Then I made a symlink to the above config file in the /etc/nginx/sites-enabled dir:
internal-docs.mysite.com -> ../sites-available/internal-docs.mysite.com
Then I reloaded nginx with nginx -s reload, but I get a "this site can't be reached" error when accessing the URL.
The setup and configuration look correct to me (according to the guidelines I've followed), so I'm at a bit of a dead end...
First, double-check that you have a listen directive in the server block. Try the following:
server {
listen 80;
server_name internal-docs.mysite.com;
root /var/www/docs-internal;
index index.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
}
If that does not work, check:
That the Nginx user has read permission on the site content. For example, if your Nginx user is www and you have root access, do the following:
# su www
$ cat /var/www/docs-internal/index.html
If that fails, ensure the location has correct ownership and permissions. Note that for a user to be able to browse a directory, that directory must have the execute bit set for that user or user group.
That the Nginx user has read permission on the file ../sites-available/internal-docs.mysite.com. For example, if your Nginx user is www and you have root access, do the following:
# su www
$ cat /etc/nginx/sites-available/internal-docs.mysite.com
If that fails, ensure that the config files have correct ownership. Note: normally the Nginx master process runs as root, and that process spawns worker processes that run as the Nginx user, so permissions on config files are unlikely to be the problem.
That maybe your config file name should end with ".conf" (on my server I have the line include conf.d/*.conf;, so nginx will NOT load any config file ending with ".com").
That Nginx actually includes files from ../sites-enabled/ (where your symlink lives) in its main config file. Maybe it does not, and instead only looks in the conf.d directory (the default).
That you can do a ping and nslookup on the subdomain. If you cannot, then you have to fix that first (DNS, firewall...).
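Two quick checks that often narrow this down (hostnames taken from the question): confirm the configuration parses and reloads, then request the site directly from the server while forcing the Host header, which takes DNS out of the picture:
sudo nginx -t && sudo nginx -s reload
curl -H "Host: internal-docs.mysite.com" http://127.0.0.1/
If the curl succeeds but the browser does not, the problem is DNS or firewall rather than nginx.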
For the sake of others - the configuration I wrote was correct, and my problem was in 2 things:
I had to remove the listen 80 directive, since another configuration file already specifies that nginx should listen on port 80. One should not tell nginx twice to listen on the same port, even if it's in two separate configuration files.
Permissions on the /var/www/docs-internal folder. Opening a folder requires the x (execute) permission, while opening a file requires the r (read) permission. I had to set the appropriate permissions on all the folders in this hierarchy so that the content could be opened globally (by everyone), which is basically what accessing it from the browser amounts to.
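For reference, granting world read on the files and the execute (traverse) bit on every directory in the path looks roughly like this; a sketch only, adjust to your own security requirements:
sudo chmod o+x /var /var/www /var/www/docs-internal
sudo find /var/www/docs-internal -type d -exec chmod o+x {} \;
sudo find /var/www/docs-internal -type f -exec chmod o+r {} \;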
I'm using Docker to serve my simple WordPress website: an nginx container and a WordPress container. Simple setup:
upstream wordpress_english {
server wordpress_en:80;
}
server {
listen 80;
server_name my_domain.com www.my_domain.com;
location / {
proxy_pass http://wordpress_english;
}
}
Problem: Static files (css, js and images) are not loaded.
The output from the browser console shows a 404:
http://wordpress_english/wp-content/themes/twentyfifteen/genericons/genericons.css?ver=3.2
It's easy to spot the problem: the browser looks for the static files at wordpress_english (the nginx upstream name) instead of my_domain.com.
How can I fix this problem?
This is not a nginx problem, but a WordPress problem.
Solution:
In wp-config.php, add the following two lines:
define('WP_HOME','http://my_domain.com');
define('WP_SITEURL','http://my_domain.com');
During the WordPress installation, WordPress automatically sets WP_HOME to the nginx upstream name. The above solution overrides that default.
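A complementary nginx-side tweak, not part of the original answer but commonly combined with it, is to pass the real Host header through to the WordPress container so that WordPress sees my_domain.com rather than the upstream name:
location / {
    proxy_pass http://wordpress_english;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}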
Seems to be an issue in your nginx config file.
When declaring your server my_domain, you give location / a proxy_pass to wordpress_english. I don't know nginx well, but I don't see any root or path declared for the my_domain server, so its content effectively comes from wordpress_english. It seems normal that the browser looks for files in wordpress_english and not on your server (in fact, I guess it does ask your server, but your server tells it to look at wordpress).
I'm not sure about this because I don't know nginx and proxy_pass well.
With the Synology DSM 6 update, we now have to use Nginx instead of Apache. By default, the Nginx configuration doesn't allow WordPress permalinks (they generate 404s).
I read that the idea is to transform /uri into /?p=$uri and put this configuration in the "location" section of the nginx server config.
Where exactly should I put this configuration in DSM 6?
Have you tried the user config? Just copy your working:
/etc/nginx/app.d/server.webstation-vhost.conf
to:
/usr/local/etc/nginx/sites-enabled/httpd-vhost.conf-user
and rename the server.webstation-vhost.conf to server.webstation-vhost.conf.old or something and restart nginx (nginx -s reload)
Or better yet, remove your virtual host(s) from Web Station. The only catch is that you then need to manually update your SSL certs when they expire instead of using the web interface.
Actually, you can add custom directives easily, without modifying the DSM behavior.
Take a look at the content of /usr/local/etc/nginx/sites-enabled/httpd-vhost.conf-user, to see where the custom configuration has to be stored:
server {
[...]
server_name NAME
[...]
include /usr/local/etc/nginx/conf.d/778943ad-0dc4-40ae-bb7f-7b2285e3203b/user.conf*;
}
Then, you just have to create the file /usr/local/etc/nginx/conf.d/778943ad-0dc4-40ae-bb7f-7b2285e3203b/user.conf.wordpress-permalink with the following content:
location / {
try_files $uri $uri/ /index.php?$args;
}
and restart nginx:
synoservicecfg --restart nginx
It will not break future DSM updates (since it is a supported customization).
I want to configure my staging environment in Elastic Beanstalk to always disallow all spiders. The nginx directive would look like this:
location /robots.txt {
return 200 "User-agent: *\nDisallow: /";
}
I understand that I would want to create a file under the .ebextensions/ folder, such as 01_nginx.config, but I'm not sure how to structure the YAML inside it such that it would work. My goal is to add this location directive to existing configuration, not have to fully replace any existing configuration files which are in place.
There is an approach which uses the more recent .platform/nginx configuration extension on Amazon Linux 2 (as opposed to older AMIs).
The default nginx.conf includes configuration partials at two points. One is immediately inside the http block, so you can't place additional location blocks there, because that's not syntactically legal. The second is inside the server block, though, and that's what we need.
This second location's partial files are included from a special sub-directory, .platform/nginx/conf.d/elasticbeanstalk. Place your location fragment here to add location blocks, like so:
# .platform/nginx/conf.d/elasticbeanstalk/packs.conf
location /packs {
alias /var/app/current/public/packs;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
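Applied to the robots.txt goal from the original question, the same pattern would look like this (the file name is arbitrary):
# .platform/nginx/conf.d/elasticbeanstalk/robots.conf
location /robots.txt {
    return 200 "User-agent: *\nDisallow: /";
}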
I wanted to do the same thing. After a lot of digging, I found 2 ways to do it:
Option 1. Use an ebextension to replace the nginx configuration file with your custom configuration
I used this option because it is the simplest one.
Following the example given by Amazon in Using the AWS Elastic Beanstalk Node.js Platform - Configuring the Proxy Server - Example .ebextensions/proxy.config, we can see that they create an ebextension that creates a file named /etc/nginx/conf.d/proxy.conf. This file contains the same content as the original nginx configuration file. Then, they delete the original nginx configuration file using container_commands.
You need to replace the Amazon example with the contents of your current nginx configuration file. Note that the nginx configuration files to be deleted in the container command must be updated too. The ones I used are:
nginx configuration file 1: /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
nginx configuration file 2: /etc/nginx/conf.d/webapp_healthd.conf
Therefore, the final ebextension that worked for me is as follows:
/.ebextensions/nginx_custom.config
# Remove the default nginx configuration generated by elastic beanstalk and
# add a custom configuration to include the custom location in the server block.
# Note that the entire nginx configuration was taken from the generated /etc/nginx/conf.d/webapp_healthd.conf file
# and then, we just added the extra location we needed.
files:
/etc/nginx/conf.d/proxy_custom.conf:
mode: "000644"
owner: root
group: root
content: |
upstream my_app {
server unix:///var/run/puma/my_app.sock;
}
log_format healthd '$msec"$uri"'
'$status"$request_time"$upstream_response_time"'
'$http_x_forwarded_for';
server {
listen 80;
server_name _ localhost; # need to listen to localhost for worker tier
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
location / {
proxy_pass http://my_app; # match the name of upstream directive which is defined above
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /assets {
alias /var/app/current/public/assets;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
location /public {
alias /var/app/current/public;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
location /robots.txt {
return 200 "User-agent: *\nDisallow: /";
}
}
container_commands:
# Remove the default nginx configuration generated by elastic beanstalk
removeconfig:
command: "rm -f /opt/elasticbeanstalk/support/conf/webapp_healthd.conf /etc/nginx/conf.d/webapp_healthd.conf"
Once you deploy this change, you have to reload the nginx server. You can connect to your server using eb ssh your-environment-name and then run sudo service nginx reload
Option 2. Use an ebextension to modify the nginx configuration file generator, so that it includes your custom locations in the final nginx configuration file
The second option is based on this post: jabbermarky's answer in Amazon forums
He explains this method very well in his answer, so I encourage you to read it if you want to implement it. If you are going to implement this answer, you need to update the location of the nginx configuration file generator.
Note that I have not tested this option.
In summary, he adds a shell script to be executed before the nginx configuration file is generated. In this shell script, he modifies the nginx configuration file generator to include the server block locations he wants in the generated nginx configuration file. Finally, he adds a file containing the locations he wants in the server block of the final nginx configuration file.
It seems that the mentioned approaches don't work anymore. The new approach is to place nginx .conf files into a subfolder in .ebextensions:
You can now place an nginx.conf file in the .ebextensions/nginx folder to override the Nginx configuration. You can also place configuration files in the .ebextensions/nginx/conf.d folder in order to have them included in the Nginx configuration provided by the platform.
Source
This does not require a restart of nginx either, as Elastic Beanstalk will take care of that.
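Assuming that layout, a fragment placed there could, for example, raise the request body limit (the file name is arbitrary; client_max_body_size is a safe example because it is valid in both the http and server contexts):
# .ebextensions/nginx/conf.d/01_custom.conf
client_max_body_size 50M;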
Mmmmm! .ebextensions!
Your easiest option is probably to create a shell script that changes your configuration, and then run it. I don't really know nginx, but try something along the lines of:
files:
"/root/setup_nginx.sh" :
mode: "000750"
owner: root
group: root
content: |
#!/bin/sh
# Configure for NGINX
grep robots.txt <your_config_file> > /dev/null 2>&1
if [ $? -eq 1 ] ; then
cat << EOF >> <your_config_file>
location /robots.txt {
return 200 "User-agent: *\nDisallow: /";
}
EOF
# Restart any services you need restarting
fi
container_commands:
000-setup-nginx:
command: /root/setup_nginx.sh
I.e. first create a shell script that does what you need, then run it.
Oh, and be careful that there are no tabs in your YAML! Only spaces are allowed... Check the log file /var/log/cfn-init.log for errors...
Good luck!
For Amazon Linux 2, use this path in your bundle and zip these folders together:
.platform/nginx/conf.d/elasticbeanstalk/000_my_custom.conf
This is what's working for me:
files:
"/etc/nginx/conf.d/01_syncserver.conf":
mode: "000755"
owner: root
group: root
content: |
# 10/7/17; See https://github.com/crspybits/SyncServerII/issues/35
client_max_body_size 100M;
# SyncServer uses some http request headers with underscores
underscores_in_headers on;
# 5/20/21; Trying to get the load balancer to respond with a 503
server {
listen 80;
server_name _ localhost; # need to listen to localhost for worker tier
location / {
return 503;
}
}
container_commands:
01_reload_nginx:
command: pgrep nginx && service nginx reload || true
This is achievable using .ebextensions config files; however, I'm having difficulty getting nginx to restart after a change to its configuration files.
# .ebextensions/nginx.config
files:
"/etc/nginx/conf.d/robots.conf":
mode: "000544"
owner: root
group: root
content: |
location /robots.txt {
return 200 "User-agent: *\nDisallow: /";
}
encoding: plain
Now, I've done something similar to add a hook script that restarts nginx, but for some odd reason it's not executing:
"/opt/elasticbeanstalk/hooks/appdeploy/enact/03_restart_nginx.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
. /opt/elasticbeanstalk/containerfiles/envvars
sudo service nginx restart
ps aux | grep nginx > /home/ec2-user/nginx.times.log
true
encoding: plain