My goal is to see the newest log messages on a web page.
I know I could use tail -f <file> to follow the latest log lines in a terminal.
But today I want to configure nginx so that I can see the same output on the web.
For example, when I access http://192.168.1.200/nginx (my nginx host),
I should see the files under /var/log/nginx:
Index of /nginx/
-------------------------------------------------
../
access.log 08-Aug-2019 16:43 20651
error.log 08-Aug-2019 16:43 17810
And when I access http://192.168.1.200/nginx/access.log,
I should see the same output as tail -f /var/log/nginx/access.log in a terminal (i.e. it should update dynamically).
/etc/nginx/conf.d/log.conf
server {
    listen 80;
    root /var/log/nginx;

    location /nginx {
        root /var/log/;
        default_type text/plain;
        autoindex on;
    }
}
This is my config, but there are two points that don't meet my requirements:
I want to reach the log page at /log/access.log, not /nginx/access.log.
When I access the log page, it is static rather than updating live.
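For the first point, a sketch of the usual approach: use alias instead of root, so the /log prefix is mapped onto /var/log/nginx rather than appended to the root path. Note that plain nginx still serves the file statically, so this does not by itself give tail -f behaviour:

location /log/ {
    alias /var/log/nginx/;    # /log/access.log -> /var/log/nginx/access.log
    default_type text/plain;
    autoindex on;
}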
I am trying to do basic auth on nginx. I have version 1.9.3 up and running on Ubuntu 14.04, and it works fine with a simple HTML file.
Here is the html file:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
"Some shoddy text"
</body>
</html>
And here is my nginx.conf file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name 192.168.1.30;

        location / {
            root /www;
            index index.html;
            auth_basic "Restricted";
            auth_basic_user_file /etc/users;
        }
    }
}
I used htpasswd to create two users in the "users" file under /etc (username "calvin" with password "Calvin", and username "hobbes" with password "Hobbes"). It's encrypted and looks like this:
calvin:$apr1$Q8LGMfGw$RbO.cG4R1riIfERU/175q0
hobbes:$apr1$M9KoUUhh$ayGd8bqqlN989ghWdTP4r/
All files belong to root:root. The server IP address is 192.168.1.30, and I am referencing it directly in the conf file.
It all works fine if I comment out the two auth lines and restart nginx. If I uncomment them, I do get the username and password prompt when I try to load the site, but immediately afterwards I get a 500 Internal Server Error, which persists until I restart nginx.
Can anybody see what I'm doing wrong here? I had the same behaviour with the standard Ubuntu 14.04 apt-get version of nginx (1.4.something), so I don't think it's the nginx version.
Not really an answer to your question, since you are using MD5, but as this thread pops up when searching for the error, I am attaching this to it.
Similar errors happen when bcrypt is used to generate passwords for auth_basic:
htpasswd -B -b <file> <user> <pass>
Since bcrypt is not supported by auth_basic at the moment, mysterious 500 errors show up in nginx's error log (usually /var/log/nginx/error.log), looking something like this:
*1 crypt_r() failed (22: Invalid argument), ...
At present the solution is to generate a new password using MD5, which is htpasswd's default anyway.
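For reference, a minimal sketch of regenerating entries with the MD5 (apr1) scheme that auth_basic does accept; the file path and usernames echo the question above:

# create the file and add the first user (-c overwrites, -m forces MD5)
htpasswd -c -m /etc/users calvin
# append further users without -c
htpasswd -m /etc/users hobbes

Or, without apache2-utils, openssl's apr1 is the same scheme (it prompts for the password):

printf 'hobbes:%s\n' "$(openssl passwd -apr1)" | sudo tee -a /etc/users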
Edited to address MD5 issues as brought up by @EricWolf in the comments:
MD5 has its problems for sure; some context can be found in the following threads:
Is md5 considered insecure?
Is md5 still considered secure for single use authentications?
Of the two, the speed issue can be mitigated with fail2ban: banning on failed basic auth makes online brute-forcing impractical (guide). You can also use long passwords to fortify things a bit, as suggested here.
Other than that it seems this is as good as it gets with nginx...
I had goofed up when initially creating a user. As a result, the htpasswd file looked like:
user:
user:$apr1$passwdhashpasswdhashpasswdhash...
After deleting the blank user, everything worked fine.
I was running nginx in a Docker environment and had the same issue. The reason was that some of the passwords had been generated using bcrypt. I resolved it by switching to the nginx:alpine image.
Do you want a MORE secure password hash with nginx basic_auth? Do this:
echo "username:"$(mkpasswd -m sha-512) >> .htpasswd
SHA-512 is not considered nearly as good as bcrypt, but it's the best nginx supports at the moment.
I just stick the htpasswd file under /etc/nginx myself.
Assuming it is named htcontrol, then ...
sudo htpasswd -c /etc/nginx/htcontrol calvin
Follow the prompt for the password and the file will be in the correct place.
location / {
    ...
    auth_basic "Restricted";
    auth_basic_user_file htcontrol;
}
You can also use auth_basic_user_file /etc/nginx/htcontrol;, but the first variant works for me.
I just had the same problem. After checking the log as suggested by @Drazen Urch, I discovered that the file had root:root permissions; after changing them to forge:forge (I'm using Forge with Digital Ocean), the problem went away.
Well, just use the correct RFC 2307 syntax:
passwordvalue = schemeprefix encryptedpassword
schemeprefix = "{" scheme "}"
scheme = "crypt" / "md5" / "sha" / altscheme
altscheme = "x-" keystring
encryptedpassword = encrypted password
For example, the {SHA} entry for password helloworld and user admin would be:
admin:{SHA}at+xg6SiyUovktq1redipHiJpaE=
I had the same error because I wrote {SHA1}, which is against the RFC syntax. When I fixed it, everything worked like a charm. {sha} will not work either; only {SHA} is correct.
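If you want to generate such a {SHA} entry yourself, a minimal sketch with openssl, reusing the admin/helloworld example above (the .htpasswd path is hypothetical):

printf 'admin:{SHA}%s\n' "$(printf 'helloworld' | openssl dgst -binary -sha1 | openssl base64)" >> /etc/nginx/.htpasswd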
First, check out your nginx error logs:
tail -f /var/log/nginx/error.log
In my case, I found the error:
[crit] 18901#18901: *6847 open() "/root/temp/.htpasswd" failed (13: Permission denied),
The /root/temp directory is one of my test directories and cannot be read by nginx. After changing it to /etc/apache2/ (following the official guide https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/), everything worked fine.
===
After executing the ps command, we can see that the nginx worker processes run as the www-data user. I had tried chown www-data:www-data /root/temp to make sure www-data could access the file, but it still didn't work. To be honest, I don't have a very deep understanding of Linux file permissions, so I changed the location to /etc/apache2/ to fix this in the end. After testing, you can also put the .htpasswd file in other directories under /etc (like /etc/nginx).
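For diagnosing this kind of failure, namei from util-linux walks the path and shows where traversal breaks; a sketch against the path from the error above:

namei -l /root/temp/.htpasswd
# /root is typically mode 0700, so the www-data worker cannot descend into it,
# no matter who owns /root/temp or the .htpasswd file itself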
I too was facing the same problem while setting up authentication for Kibana. Here is the error in my /var/log/nginx/error.log file:
2020/04/13 13:43:08 [crit] 49662#49662: *152 crypt_r() failed (22:
Invalid argument), client: 157.42.72.240, server: 168.61.168.150,
request: "GET / HTTP/1.1", host: "168.61.168.150"
I resolved the issue by recreating the credentials like this:
sudo sh -c "echo -n 'kibanaadmin:' >> /etc/nginx/htpasswd.users"
sudo sh -c "openssl passwd -apr1 >> /etc/nginx/htpasswd.users"
You can refer to this post if you are trying to set up Kibana and hit this issue:
https://medium.com/@shubham.singh98/log-monitoring-with-elk-stack-c5de72f0a822?postPublishedType=repub
In my case, I was supplying a plain-text password with the -p flag, and coincidentally my password started with a $ character.
I updated my password, and the error was gone.
NB: other people's answers helped me a lot in figuring out my problem. I am posting my solution here in case anyone gets stuck in a rare case like mine.
In my case, I had auth_basic protecting an nginx location that was served by a proxy_pass configuration.
The configured proxy_pass location wasn't returning a successful HTTP 200 response, which caused nginx to respond with an Internal Server Error after I had entered the correct username and password.
If you have a similar setup, ensure that the proxy_pass location protected by auth_basic returns an HTTP 200 response once you have ruled out username/password issues.
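For reference, a minimal sketch of the shape of such a setup (the upstream address and location are hypothetical):

location /app/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    # the upstream must itself answer successfully once auth passes,
    # or nginx can still return a 5xx after the login prompt
    proxy_pass http://127.0.0.1:8080/;
}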
G'day.
I have Fedora 21 and HHVM version 3.7. My issue, unfortunately, is that I can start the service and access my pages without issue. However, if I keep refreshing the page, HHVM crashes; checking the status shows it has failed, and the HHVM error log returns:
Unable to open pid file /var/run/hhvm/pid for write
I can restart the service and it works fine, but after only a handful of requests it crashes as above.
PHP-FPM is not running, and nothing except HHVM is running on port 9000.
Here is some config info
HHVM - server.ini
; php options
pid = /var/run/hhvm/pid
; hhvm specific
hhvm.server.port = 9000
hhvm.server.type = fastcgi
hhvm.server.source_root = /srv/www
hhvm.server.default_document = index.php
hhvm.log.level = Error
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
HHVM - service
[Unit]
Description=HipHop Virtual Machine (FCGI)
[Service]
ExecStart=/usr/bin/hhvm --config /etc/hhvm/server.ini --user hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9000
PrivateTmp=true
[Install]
WantedBy=multi-user.target
NGINX - site file
##NGINX STUFF
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index bootstrap.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
##MORE NGINX STUFF
So from the info provided, is there any hint as to what could be the issue?
Cheers guys.
This is a very simple permission issue, just as your log says: the user HHVM runs as has no access to the pid directory, so it cannot write the pid file.
sudo chmod -R 777 /var/run/hhvm
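A less permissive alternative (assuming HHVM runs as the hhvm user, as the unit file above suggests) is to hand the directory to that user instead of opening it to everyone; note that /var/run is usually a tmpfs, so either fix may need repeating after a reboot (see the next answer):

sudo mkdir -p /var/run/hhvm
sudo chown -R hhvm:hhvm /var/run/hhvm
sudo chmod 755 /var/run/hhvm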
I had the same problem on Ubuntu.
HHVM Unable to read pid file /var/run/hhvm/pid for any meaningful pid after reboot
Another problem when you have a lot of requests can be the max open files limit. When you go over the limit, HHVM crashes. Normally you should see that error in your log, and you can increase the limit.
https://serverfault.com/questions/679408/hhvm-exit-after-too-many-open-files
Here is my question on ServerFault.
Say I have a directory structure like this:
/index
/contact
/view_post
All three are executables that just output HTML, using something basically like the echo-cpp program from the FastCGI examples.
The documentation I've read only shows how to have one program that parses the request URI and dispatches to different handlers based on it. I want each of these to be a separate program instead of parsing the request URI and serving the page based on that.
So if I went to localhost/index, the index program would be run with its input (POST data), and its output would go back to nginx to serve the page.
I'm not sure if fcgi is even the right tool for this, so if something else would work better then that is fine.
You can do it with nginx and FastCGI. The simplest way is to use spawn-fcgi.
First you will need to set up your nginx.conf. Add the following inside the server {} block:
location /index {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}

location /contact {
    fastcgi_pass 127.0.0.1:9001;
    include fastcgi_params;
}

location /view_post {
    fastcgi_pass 127.0.0.1:9002;
    include fastcgi_params;
}
Restart nginx, and then run your apps listening on the same ports declared in nginx.conf.
Assuming your programs are in the ~/bin/ folder:
~ $ cd bin
~/bin $ spawn-fcgi -p 9000 ./index
~/bin $ spawn-fcgi -p 9001 ./contact
~/bin $ spawn-fcgi -p 9002 ./view_post
Now requests to localhost/index will be forwarded to your index program, and its output will go back to nginx to serve the page. The same goes for contact and view_post.
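If you would rather not dedicate a TCP port per program, spawn-fcgi can also bind Unix sockets with -s, paired with a matching fastcgi_pass; a sketch for the index program (the socket path is arbitrary):

~/bin $ spawn-fcgi -s /tmp/fcgi-index.sock ./index

location /index {
    fastcgi_pass unix:/tmp/fcgi-index.sock;
    include fastcgi_params;
}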
nginx keeps saying client intended to send too large body. Googling and reading the manual pointed me to client_max_body_size. I set it to 200m in nginx.conf as well as in the vhost conf and restarted nginx a couple of times, but I'm still getting the error message.
Did I overlook something? The backend is php-fpm (post_max_size and upload_max_filesize are set accordingly).
Following the nginx documentation, you can set client_max_body_size 20m (or any value you need) in the following contexts:
context: http, server, location
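For example, a minimal sketch placing it at server level, where it applies to every location in that vhost (names are placeholders):

server {
    listen 80;
    server_name example.com;
    client_max_body_size 20m;
    ...
}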
NGINX large uploads are successfully working on hosted WordPress sites, finally (as per suggestions from nembleton & rjha94)
I thought it might be helpful for someone if I added a little clarification to their suggestions. For starters, be certain you have included your increased upload directive in ALL THREE separate definition blocks (server, location & http). Each should have a separate line entry. The result will look something like this (where the ... reflects other lines in the definition block):
http {
    ...
    client_max_body_size 200M;
}
(in my ISPconfig 3 setup, this block is in the /etc/nginx/nginx.conf file)
server {
    ...
    client_max_body_size 200M;
}
location / {
    ...
    client_max_body_size 200M;
}
(in my ISPconfig 3 setup, these blocks are in the /etc/nginx/conf.d/default.conf file)
Also, make certain that your server's php.ini file is consistent with these NGINX settings. In my case, I changed the setting in php.ini's File_Uploads section to read:
upload_max_filesize = 200M
Note: if you are managing an ISPconfig 3 setup (my setup is on CentOS 6.3, as per The Perfect Server), you will need to manage these entries in several separate files. If your configuration is similar to one in the step-by-step setup, the NGINX conf files you need to modify are located here:
/etc/nginx/nginx.conf
/etc/nginx/conf.d/default.conf
My php.ini file was located here:
/etc/php.ini
I had kept overlooking the http {} block in the nginx.conf file. Apparently, overlooking it had the effect of limiting uploads to the 1M default. After making the associated changes, be sure to restart your nginx and PHP FastCGI Process Manager (PHP-FPM) services. On the above configuration, I use the following commands:
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
As of March 2016, I ran into this issue trying to POST JSON over HTTPS (from Python requests, not that it matters).
The trick is to put client_max_body_size 200M; in at least two places, http {} and server {}:
1. the http block
Typically in /etc/nginx/nginx.conf
2. the server block in your vhost.
For Debian/Ubuntu users who installed via apt-get (and other distro package managers which install nginx with vhosts by default), that's /etc/nginx/sites-available/mysite.com; for those who do not have vhosts, it's probably your nginx.conf or a file in the same directory as it.
3. the location / block in the same place as 2.
You can be more specific than /, but if it's not working at all, I'd recommend applying this to / and then, once it's working, being more specific.
Remember: if you have SSL, you will also need to set the above for the SSL server and location blocks, wherever those may be (ideally the same as 2.). I found that if your client tries to upload over http and you expect it to get 301'd to https, nginx will actually drop the connection before the redirect, because the file is too large for the http server; so the setting has to be in both.
Recent comments suggest there is an issue with this on SSL with newer nginx versions, but I'm on 1.4.6 and everything is good :)
You need to apply the following changes:
Update php.ini (find the right ini file with phpinfo();) and increase post_max_size and upload_max_filesize to the size you want:
sed -i "s/post_max_size =.*/post_max_size = 200M/g" /etc/php5/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 200M/g" /etc/php5/fpm/php.ini
Update the nginx settings for your website and add a client_max_body_size value in your location, http, or server context:
location / {
    client_max_body_size 200m;
    ...
}
Restart NginX and PHP-FPM:
service nginx restart
service php5-fpm restart
NOTE: Sometimes (in my case, almost every time) you need to kill the php-fpm processes if they weren't refreshed properly by the service command. To do that, you can get the list of processes (ps -elf | grep php-fpm) and kill them one by one (kill -9 12345), or use the following command to do it for you:
ps -elf | grep php-fpm | grep -v grep | awk '{ print $4 }' | xargs kill -9
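pkill can do the same in one step, matching on the command line with -f rather than parsing ps output:

sudo pkill -9 -f php-fpm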
Please check whether you are setting the client_max_body_size directive inside the http {} block and not inside a location {} block. I have set it inside the http {} block and it works.
Someone correct me if this is bad, but I like to lock everything down as much as possible, and if you've only got one target for uploads (as is usually the case), then just target your changes to that one file. This works for me on the Ubuntu nginx-extras mainline 1.7+ package:
location = /upload.php {
    client_max_body_size 102M;
    fastcgi_param PHP_VALUE "upload_max_filesize=102M \n post_max_size=102M";
    (...)
}
I had a similar problem recently and found out that client_max_body_size 0; can solve such an issue. It sets client_max_body_size to no limit. But the best practice is to improve your code, so there is no need to increase this limit.
I met the same problem, but I found it had nothing to do with nginx. I am using Node.js as the backend server with nginx as a reverse proxy, and the 413 code was triggered by the Node server. Node uses koa to parse the body, and koa limits the urlencoded length:
formLimit: limit of the urlencoded body. If the body ends up being larger than this limit, a 413 error code is returned. Default is 56kb.
Setting formLimit to a bigger value can solve this problem.
Assuming you have already set client_max_body_size and the various PHP settings (upload_max_filesize / post_max_size, etc.) per the other answers, then restarted or reloaded nginx and PHP without any result, run this:
nginx -T
This will show you any unresolved errors in your nginx configs. In my case, I struggled with the 413 error for a whole day before I realized there were some other unresolved SSL errors in the nginx config (wrong paths for certs) that needed to be corrected. Once I fixed the issues reported by nginx -T and reloaded nginx, the 413 error was gone.
I'm setting up a dev server to play with that mirrors our outdated live one. I used The Perfect Server - Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3).
After experiencing the same issue, I came across this post, and nothing was working. I changed the value in every recommended file (nginx.conf, ispconfig.vhost, /sites-available/default, etc.).
Finally, changing client_max_body_size in my /etc/nginx/sites-available/apps.vhost and restarting nginx is what did the trick. Hopefully it helps someone else.
In case you are using Kubernetes, add the following annotations to your Ingress:
annotations:
  nginx.ingress.kubernetes.io/client-max-body-size: "5m"
  nginx.ingress.kubernetes.io/client-body-buffer-size: "8k"
  nginx.ingress.kubernetes.io/proxy-body-size: "5m"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Confirm the changes were applied:
kubectl -n <namespace> describe ingress <ingress-name>
References:
Client Body Buffer Size
Custom max body size
Had the same issue that the client_max_body_size directive was ignored.
My silly error was that I had put a file inside /etc/nginx/conf.d which did not end in .conf. nginx will not load such files by default.
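The fix is just a rename (the file name here is hypothetical):

sudo mv /etc/nginx/conf.d/mysite /etc/nginx/conf.d/mysite.conf
sudo nginx -t && sudo service nginx reload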
If you tried the above options without success, and you're using IIS (iisnode) to host your Node app, putting this code in web.config resolved the problem for me.
Here is the reference: https://www.inflectra.com/support/knowledgebase/kb306.aspx
Also, you can change the allowed length; as written below it is 2 GB. Modify it to your needs.
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="2147483648" />
  </requestFiltering>
</security>
The following config worked for me. Notice I only set client_max_body_size 50M; once, contrary to what others are saying...
File: /etc/nginx/conf.d/sites.conf
server {
    listen 80 default_server;
    server_name portal.myserver.com;
    return 301 https://$host$request_uri;
}

server {
    resolver 127.0.0.11 valid=30s;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /secret/portal.myserver.com.crt;
    ssl_certificate_key /secret/portal.myserver.com.pem;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    server_name portal.myserver.com;
    client_max_body_size 50M;

    location /fileserver/ {
        set $upstream http://fileserver:6976;
        proxy_pass $upstream;
    }
}
If you are using the Windows version of nginx, you can try killing all nginx processes and restarting to see whether that helps.
I encountered the same issue in my environment and resolved it with this solution.
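A minimal sketch on Windows, assuming the stock nginx.exe process name and that you start nginx from its install directory:

taskkill /F /IM nginx.exe
start nginx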