I'm trying to run a server using nginx with SSLv3 and the ciphers RC4-SHA:RC4-MD5 (I need exactly these ciphers).
I was able to do this on Ubuntu 16.04 using the OpenSSL 1.0.2u source plus the latest nginx source (nginx-1.19.6). I built nginx using this command:
./configure --with-http_ssl_module --with-openssl=/path/to/openssl-1.0.2u --with-openssl-opt=enable-ssl3 --with-openssl-opt=enable-ssl3-method --with-openssl-opt=enable-weak-ssl-ciphers
The nginx config I used is:
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_protocols SSLv3;
    ssl_ciphers RC4-SHA:RC4-MD5:@SECLEVEL=0;
    ssl_certificate /path/to/server-chain.crt;
    ssl_certificate_key /path/to/server.key;

    server_name server.name.net;
    underscores_in_headers on;
    proxy_pass_request_headers on;

    location / {
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9000;
    }
}
After setting up the nginx config file, everything worked perfectly. I was able to retrieve the SSL certificate with this command from an Ubuntu 14.04 machine:
openssl s_client -connect MyIP:443 -ssl3 -cipher RC4-SHA:RC4-MD5
I tried to do the same thing building nginx against the OpenSSL 1.1.1i source with the same configure options, but after setting up the nginx conf file, when I run the openssl s_client -connect... command I get this error:
CONNECTED(00000003)
140420793624224:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:339:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1612540521
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
In the nginx error.log file I got this:
SSL_do_handshake() failed (SSL: error:141FC044:SSL routines:tls_setup_handshake:internal error) while SSL handshaking, client: 192.168.1.10, server: 0.0.0.0:443
Did something change with OpenSSL 1.1.1? Am I missing any configure options needed to enable SSLv3 + RC4-SHA:RC4-MD5?
Thanks for any tips!
In the end I was able to fix this!
I downloaded the latest OpenSSL source (1.1.1i) and the latest nginx source (1.19.6).
I compiled and installed OpenSSL with the following commands:
./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers
make
sudo make install
I edited the openssl.cnf file (/usr/local/ssl/openssl.cnf), adding
openssl_conf = default_conf
at the beginning of the file and adding
[default_conf]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
CipherString = ALL:@SECLEVEL=0
at the bottom of the file. This enables the old ciphers (I needed RC4-SHA and RC4-MD5).
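Before rebuilding nginx, you can sanity-check that the rebuilt OpenSSL really exposes the RC4 suites; a quick test (this assumes the freshly installed binary, e.g. /usr/local/bin/openssl, is the one on your PATH):
# lists the RC4 suites at security level 0; an "Error in cipher list"
# means the build was not configured with enable-weak-ssl-ciphers
openssl ciphers -v 'RC4-SHA:RC4-MD5:@SECLEVEL=0'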
Then I compiled and installed nginx with the following commands:
./configure --with-http_ssl_module --with-ld-opt="-L/usr/local"
make
sudo make install
After configuring nginx for the SSL certificates, I was able to fetch them using the openssl s_client... command!
SSL ciphers in nginx need to be supported by your OpenSSL version. From the OpenSSL changelog for 1.0.2h and 1.1.0:
RC4 based libssl ciphersuites are now classed as "weak" ciphers and are disabled by default. They can be re-enabled using the enable-weak-ssl-ciphers option to Configure.
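A quick way to test whether a given OpenSSL build still knows these suites (a sketch; the || branch just makes the failure explicit):
# a matching build prints the cipher names; one compiled without
# enable-weak-ssl-ciphers fails with "Error in cipher list"
openssl ciphers RC4 || echo "no RC4 ciphersuites in this build"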
I'm trying to figure out how to use the Porkbun Let's Encrypt Files with Nginx.
They have generated a zip file with the following files for me to use:
domain.cert.pem, intermediate.cert.pem, private.key.pem, public.key.pem
From this site https://wbxpress.net/install-porkbun-ssl-nginx-wordpress/
I've worked out that
ssl_certificate is domain.cert.pem
ssl_certificate_key is private.key.pem
But for my needs I have to specify the ssl_trusted_certificate as well.
Can anybody point me in the right direction?
If you used certbot you will get these files: README, cert.pem, chain.pem, fullchain.pem, privkey.pem
ssl_certificate should point to fullchain.pem
ssl_certificate_key should point to privkey.pem
ssl_trusted_certificate should point to chain.pem
From what I see, the Porkbun-generated files are just renamed; they map like this:
fullchain.pem -> domain.cert.pem
privkey.pem -> private.key.pem
chain.pem -> intermediate.cert.pem
cert.pem -> public.key.pem
So you would do this for the files given by Porkbun:
ssl_certificate should point to domain.cert.pem
ssl_certificate_key should point to private.key.pem
ssl_trusted_certificate should point to intermediate.cert.pem
Basically fullchain.pem is just made up of cert.pem + chain.pem concatenated together. See here for more information: Generate CRT & KEY ssl files from Let's Encrypt from scratch
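If you ever want to verify that locally, the concatenation is literally just the following (file names as in the certbot layout; the Porkbun names per the mapping above):
# rebuild fullchain from its two parts
cat cert.pem chain.pem > fullchain.pem
# with the Porkbun names, this should confirm the mapping
cat public.key.pem intermediate.cert.pem | diff - domain.cert.pem && echo "mapping confirmed"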
Personally, I would not use their generated files, because you would have to replace them manually every 90 days. Best to use another option like certbot, which lets you renew automatically or do it 'manually' via a cronjob. Good luck!
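PS: for the cron route, a minimal sketch of a crontab entry, assuming certbot is installed and nginx runs under systemd:
# attempt renewal daily at 03:00; reload nginx only after a successful renewal
0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"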
When I try to start the nginx server with this configuration I get an error
nginx: [emerg] no ssl_client_certificate for ssl_client_verify
My configuration looks like this:
# HTTPS server
server {
    listen 4443;
    server_name localhost;

    ssl on;
    ssl_certificate /home/user/conf/ssl/server.pem;
    ssl_certificate_key /home/user/conf/ssl/server.pem;
    ssl_protocols TLSv1.2;
    ssl_verify_client optional;
    ssl_trusted_certificate /home/user/ssl/certs/certificate_bundle.pem;

    include conf.d/api_proxy.conf;
}
As per the error, I should use the ssl_client_certificate directive, but per the documentation, if I don't want to send the list of certificates to clients, I should use ssl_trusted_certificate.
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate
Can someone help me figure out what I am missing?
The answer is in the error message itself:
nginx: [emerg] no ssl_client_certificate for ssl_client_verify
If you disable ssl_verify_client in your configuration, the error will disappear on the next start of nginx. It appears that ssl_trusted_certificate is only sufficient on its own; as soon as client verification comes into play, nginx requires ssl_client_certificate as well.
I would personally prefer enabling client verification to mean: "client certificates are verified if presented; clients are given no information about which client certificates or authorities the web server trusts". But I can also see the advantage, when troubleshooting TLS handshakes, of having this information available. From a security perspective, only metadata is presented to the client via openssl s_client; there is no public key or other signature-critical information that a malicious client could use to, for example, attempt to clone or reconstruct the CA.
For example, running the following against a local instance of nginx with your configuration:
openssl s_client -key client.key -cert client.crt -connect localhost:443
... would show data similar in structure to the following in the response:
Acceptable client certificate CA names
/CN=user/OU=Clients/O=Company/C=Location
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
In my opinion, the value of the above is confined to troubleshooting.
Your DN structures are (ideally) not security-relevant, especially if the context is an Internet-facing web service.
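For completeness, if you want to keep client verification enabled, the directive nginx insists on is ssl_client_certificate. A minimal sketch using the paths from the question (note that the CAs in this file are exactly what gets advertised to clients):
ssl_client_certificate /home/user/ssl/certs/certificate_bundle.pem;
ssl_verify_client optional;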
I am trying to do basic auth on Nginx. I have version 1.9.3 up and running on Ubuntu 14.04 and it works fine with a simple html file.
Here is the html file:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
    "Some shoddy text"
</body>
</html>
And here is my nginx.conf file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name 192.168.1.30;

        location / {
            root /www;
            index index.html;
            auth_basic "Restricted";
            auth_basic_user_file /etc/users;
        }
    }
}
I used htpasswd to create two users in the "users" file under /etc (username "calvin" password "Calvin", and username "hobbes" password "Hobbes"). It's encrypted and looks like this:
calvin:$apr1$Q8LGMfGw$RbO.cG4R1riIfERU/175q0
hobbes:$apr1$M9KoUUhh$ayGd8bqqlN989ghWdTP4r/
All files belong to root:root. The server IP address is 192.168.1.30 and I am referencing it directly in the conf file.
It all works fine if I comment out the two auth lines and restart nginx, but if I uncomment them, I do indeed get the username and password prompt when I try to load the site, but immediately thereafter I get an Error 500 Internal Server Error, which persists until I restart nginx.
Can anybody see what I'm doing wrong here? I had the same behaviour on the standard Ubuntu 14.04 apt-get version of nginx (1.4.something), so I don't think it's the nginx version.
Not really an answer to your question, as you are using MD5. However, as this thread pops up when searching for the error, I am attaching this to it.
Similar errors happen when bcrypt is used to generate passwords for auth_basic:
htpasswd -B <file> <user> <pass>
Since bcrypt is not supported by auth_basic at the moment, mysterious 500 errors appear in the nginx error log (usually found at /var/log/nginx/error.log). They look something like this:
*1 crypt_r() failed (22: Invalid argument), ...
At present the solution is to generate a new password using MD5, which is the default anyway.
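For example, an apr1/MD5 entry for the calvin user from the question can be generated like this (a sketch; -n prints the result to stdout, -b takes the password on the command line, -m forces MD5):
htpasswd -nbm calvin Calvin | sudo tee -a /etc/users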
Edited to address the MD5 issues brought up by @EricWolf in the comments:
MD5 has its problems for sure; some context can be found in the following threads:
Is md5 considered insecure?
Is md5 still considered secure for single use authentications?
Of the two, the speed issue can be mitigated by using fail2ban: banning on failed basic auth makes online brute-forcing impractical (guide). You can also use long passwords to fortify things a bit, as suggested here.
Other than that it seems this is as good as it gets with nginx...
I had goofed up when initially creating a user. As a result, the htpasswd file looked like:
user:
user:$apr1$passwdhashpasswdhashpasswdhash...
After deleting the blank user, everything worked fine.
I was running Nginx in a Docker environment and I had the same issue. The reason was that some of the passwords were generated using bcrypt. I resolved it by using nginx:alpine.
Do you want a MORE secure password hash with nginx basic_auth? Do this:
echo "username:"$(mkpasswd -m sha-512) >> .htpasswd
SHA-512 is not considered nearly as good as bcrypt, but it's the best nginx supports at the moment.
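If mkpasswd (from the whois package) isn't installed, OpenSSL 1.1.1+ can produce the same SHA-512 crypt format; a sketch:
# prompts for the password, like mkpasswd does
echo "username:"$(openssl passwd -6) >> .htpasswd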
I just stick the htpasswd file under /etc/nginx myself.
Assuming it is named htcontrol, then ...
sudo htpasswd -c /etc/nginx/htcontrol calvin
Follow the prompt for the password and the file will be in the correct place.
location / {
    ...
    auth_basic "Restricted";
    auth_basic_user_file htcontrol;
}
You can also write auth_basic_user_file /etc/nginx/htcontrol; but the first variant works for me.
I just had the same problem. After checking the log as suggested by @Drazen Urch, I discovered that the file had root:root permissions; after changing them to forge:forge (I'm using Forge with Digital Ocean) the problem went away.
Well, just use the correct RFC 2307 syntax:
passwordvalue = schemeprefix encryptedpassword
schemeprefix = "{" scheme "}"
scheme = "crypt" / "md5" / "sha" / altscheme
altscheme = "x-" keystring
encryptedpassword = encrypted password
For example, the SHA-1 entry for password helloworld and user admin will be:
admin:{SHA}at+xg6SiyUovktq1redipHiJpaE=
I had the same error because I wrote {SHA1}, which goes against the RFC syntax. When I fixed it, everything worked like a charm. {sha} will not work either; only {SHA} is correct.
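For reference, such an entry can be generated with standard tools; this sketch should reproduce the admin line above:
printf 'admin:{SHA}%s\n' "$(printf '%s' helloworld | openssl dgst -sha1 -binary | base64)"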
First, check out your nginx error logs:
tail -f /var/log/nginx/error.log
In my case, I found the error:
[crit] 18901#18901: *6847 open() "/root/temp/.htpasswd" failed (13: Permission denied),
The /root/temp directory is one of my test directories and cannot be read by nginx. After changing it to /etc/apache2/ (following the official guide https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/), everything works fine.
===
Running ps shows that the nginx worker processes are maintained by the user www-data. I had tried chown www-data:www-data /root/temp to make sure www-data could access the file, but it still did not work. To be honest, I don't have a very deep understanding of Linux file permissions, so in the end I moved it to /etc/apache2/ to fix this. After testing, you can also put the .htpasswd file in other directories under /etc (like /etc/nginx).
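The underlying issue is that /root is typically mode 0700, so the www-data worker can never traverse into it no matter who owns the file itself. A sketch of moving the file somewhere world-traversable instead (paths taken from this answer):
sudo install -o root -g root -m 644 /root/temp/.htpasswd /etc/nginx/.htpasswd
# then point the config at the new location:
# auth_basic_user_file /etc/nginx/.htpasswd;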
I too was facing the same problem while setting up authentication for Kibana. Here is the error from my /var/log/nginx/error.log file:
2020/04/13 13:43:08 [crit] 49662#49662: *152 crypt_r() failed (22:
Invalid argument), client: 157.42.72.240, server: 168.61.168.150,
request: "GET / HTTP/1.1", host: "168.61.168.150"
I resolved this issue by adding the authentication entry like this:
sudo sh -c "echo -n 'kibanaadmin:' >> /etc/nginx/htpasswd.users"
sudo sh -c "openssl passwd -apr1 >> /etc/nginx/htpasswd.users"
You can refer to this post if you are trying to set up Kibana and hit this issue:
https://medium.com/@shubham.singh98/log-monitoring-with-elk-stack-c5de72f0a822?postPublishedType=repub
In my case, I was supplying a plain-text password with the -p flag, and coincidentally my password started with a $ character.
So I updated my password, and the error was gone.
NB: other people's answers helped me a lot in figuring out my problem. I am posting my solution here in case anyone gets stuck in a rare case like mine.
In my case, I had auth_basic set up to protect an nginx location that was served by a proxy_pass configuration.
The configured proxy_pass location wasn't returning a successful HTTP 200 response, which caused nginx to respond with an Internal Server Error after I had entered the correct username and password.
If you have a similar setup, ensure that the proxy_pass location protected by auth_basic returns an HTTP 200 response once you have ruled out username/password issues.
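A quick way to tell the two failure modes apart is to hit the backend directly and then go through nginx (the port and credentials here are placeholders; substitute your own):
# bypass nginx: a non-200 here means the upstream itself is the problem
curl -i http://127.0.0.1:9000/
# through nginx with credentials: a 500 here but 200 above points at the proxy setup
curl -i -u myuser:mypass http://localhost/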
nginx keeps saying client intended to send too large body. Googling and reading the manual pointed me to client_max_body_size. I set it to 200m in nginx.conf as well as in the vhost conf and restarted nginx a couple of times, but I'm still getting the error message.
Did I overlook something? The backend is php-fpm (post_max_size and upload_max_filesize are set accordingly).
Following the nginx documentation, you can set client_max_body_size 20m (or any value you need) in the following contexts: http, server, location.
NGINX large uploads are finally working successfully on hosted WordPress sites (as per suggestions from nembleton & rjha94).
I thought it might be helpful to someone if I added a little clarification to their suggestions. For starters, please be certain you have included your increased upload directive in ALL THREE separate definition blocks (server, location & http). Each should have a separate line entry. The result will look something like this (where the ... reflects other lines in the definition block):
http {
    ...
    client_max_body_size 200M;
}
(in my ISPconfig 3 setup, this block is in the /etc/nginx/nginx.conf file)
server {
    ...
    client_max_body_size 200M;
}
location / {
    ...
    client_max_body_size 200M;
}
(in my ISPconfig 3 setup, these blocks are in the /etc/nginx/conf.d/default.conf file)
Also, make certain that your server's php.ini file is consistent with these NGINX settings. In my case, I changed the setting in php.ini's File_Uploads section to read:
upload_max_filesize = 200M
Note: if you are managing an ISPconfig 3 setup (my setup is on CentOS 6.3, as per The Perfect Server), you will need to manage these entries in several separate files. If your configuration is similar to one in the step-by-step setup, the NGINX conf files you need to modify are located here:
/etc/nginx/nginx.conf
/etc/nginx/conf.d/default.conf
My php.ini file was located here:
/etc/php.ini
I had continued to overlook the http {} block in the nginx.conf file. Apparently, overlooking this had the effect of limiting uploads to the 1M default. After making the associated changes, you will also want to restart your NGINX and PHP FastCGI Process Manager (PHP-FPM) services. With the above configuration, I use the following commands:
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
As of March 2016, I ran into this issue trying to POST json over https (from python requests, not that it matters).
The trick is to put client_max_body_size 200M; in at least two places, http {} and server {}:
1. The http block
Typically in /etc/nginx/nginx.conf
2. The server block in your vhost
For Debian/Ubuntu users who installed via apt-get (and other distro package managers that install nginx with vhosts by default), that's /etc/nginx/sites-available/mysite.com; for those who do not have vhosts, it's probably your nginx.conf or in the same directory as it.
3. The location / block in the same place as 2
You can be more specific than /, but if it's not working at all, I'd recommend applying this to / and then, once it's working, being more specific.
Remember: if you have SSL, you will need to set the above for the SSL server and location too, wherever that may be (ideally the same as 2.). I found that if your client tries to upload over http and you expect them to get 301'd to https, nginx will actually drop the connection before the redirect because the file is too large for the http server, so the limit has to be set in both.
Recent comments suggest there is an issue with this on SSL with newer nginx versions, but I'm on 1.4.6 and everything is good :)
You need to apply the following changes:
Update php.ini (find the right ini file with phpinfo();) and increase post_max_size and upload_max_filesize to the size you want:
sed -i "s/post_max_size =.*/post_max_size = 200M/g" /etc/php5/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 200M/g" /etc/php5/fpm/php.ini```
Update the nginx settings for your website and add a client_max_body_size value in your location, http, or server context:
location / {
    client_max_body_size 200m;
    ...
}
Restart NginX and PHP-FPM:
service nginx restart
service php5-fpm restart
NOTE: Sometimes (in my case, almost every time) you need to kill the php-fpm processes if the service command did not refresh them properly. To do that, you can get the list of processes (ps -elf | grep php-fpm) and kill them one by one (kill -9 12345), or use the following command to do it for you:
ps -elf | grep php-fpm | grep -v grep | awk '{ print $4 }' | xargs kill -9
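If pkill is available on your system, the same thing in one step:
pkill -9 php-fpm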
Please check whether you are setting the client_max_body_size directive inside the http {} block and not inside the location {} block. I have set it inside the http {} block and it works.
Someone correct me if this is bad, but I like to lock everything down as much as possible, and if you've only got one target for uploads (as is usually the case), then just target your changes at that one file. This works for me on the Ubuntu nginx-extras mainline 1.7+ package:
location = /upload.php {
    client_max_body_size 102M;
    fastcgi_param PHP_VALUE "upload_max_filesize=102M \n post_max_size=102M";
    (...)
}
I had a similar problem recently and found out that client_max_body_size 0; can solve such an issue. This sets client_max_body_size to no limit. But the best practice is to improve your code, so there is no need to increase this limit.
I met the same problem, but found that it had nothing to do with nginx. I am using Node.js as the backend server behind nginx as a reverse proxy, and the 413 code was triggered by the Node server. Node uses koa to parse the body, and koa limits the urlencoded length:
formLimit: limit of the urlencoded body. If the body ends up being larger than this limit, a 413 error code is returned. Default is 56kb.
Setting formLimit to a bigger value can solve this problem.
Assuming you have already set client_max_body_size and the various PHP settings (upload_max_filesize / post_max_size, etc.) from the other answers, then restarted or reloaded NGINX and PHP without any result, run this...
nginx -T
This will surface any unresolved errors in your NGINX configs. In my case, I struggled with the 413 error for a whole day before realizing there were other unresolved SSL errors in the NGINX config (wrong paths for certs) that needed to be corrected. Once I fixed the issues reported by 'nginx -T' and reloaded NGINX, eureka! That fixed it.
I'm setting up a dev server to play with that mirrors our outdated live one, I used The Perfect Server - Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3)
After experiencing the same issue, I came across this post and nothing was working. I changed the value in every recommended file (nginx.conf, ispconfig.vhost, /sites-available/default, etc.)
Finally, changing client_max_body_size in my /etc/nginx/sites-available/apps.vhost and restarting nginx is what did the trick. Hopefully it helps someone else.
In case you are using Kubernetes, add the following annotations to your Ingress:
annotations:
  nginx.ingress.kubernetes.io/client-max-body-size: "5m"
  nginx.ingress.kubernetes.io/client-body-buffer-size: "8k"
  nginx.ingress.kubernetes.io/proxy-body-size: "5m"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Confirm the changes were applied:
kubectl -n <namespace> describe ingress <ingress-name>
References:
Client Body Buffer Size
Custom max body size
I had the same issue: the client_max_body_size directive was being ignored.
My silly error was that I had put a file inside /etc/nginx/conf.d that did not end in .conf. Nginx does not load such files by default.
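A handy way to catch this class of problem is to dump the fully resolved configuration and grep for the directive; if nothing shows up, nginx never loaded your file:
sudo nginx -T | grep client_max_body_size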
If you tried the above options without success, and you're using IIS (iisnode) to host your Node app, putting this code in web.config resolved the problem for me.
Here is the reference: https://www.inflectra.com/support/knowledgebase/kb306.aspx
Also, you can change the allowed length; the value below allows 2 GB. Modify it to your needs.
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="2147483648" />
  </requestFiltering>
</security>
The following config worked for me. Notice I only set client_max_body_size 50M; once, contrary to what others are saying...
File: /etc/nginx/conf.d/sites.conf
server {
    listen 80 default_server;
    server_name portal.myserver.com;
    return 301 https://$host$request_uri;
}

server {
    resolver 127.0.0.11 valid=30s;

    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /secret/portal.myserver.com.crt;
    ssl_certificate_key /secret/portal.myserver.com.pem;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server_name portal.myserver.com;
    client_max_body_size 50M;

    location /fileserver/ {
        set $upstream http://fileserver:6976;
        proxy_pass $upstream;
    }
}
If you are using the Windows version of nginx, you can try killing all nginx processes and restarting it.
I encountered the same issue in my environment and resolved it with this solution.