My goal is to see the newest log messages on a web page.
I know I can use tail -f <file> to follow the latest log lines in a terminal.
But today I want to configure nginx so that I can see the same result on the web.
For example, when I access http://192.168.1.200/nginx (my nginx host), I can see the files under /var/log/nginx:
Index of /nginx/
-------------------------------------------------
../
access.log 08-Aug-2019 16:43 20651
error.log 08-Aug-2019 16:43 17810
And when I access http://192.168.1.200/nginx/access.log,
I can see the same result as tail -f /var/log/nginx/access.log in a terminal (and it is dynamic).
/etc/nginx/conf.d/log.conf
server {
    listen 80;
    root /var/log/nginx;

    location /nginx {
        root /var/log/;
        default_type text/plain;
        autoindex on;
    }
}
This is my config, but there are two points that don't meet my requirements:
I want to access the log page via /log/access.log, not /nginx/access.log (a possible alias-based mapping is sketched below).
When I access /log/access.log, the page is static.
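For the first point, here is a minimal sketch of an alias-based mapping, assuming the goal is simply to expose /var/log/nginx under the /log/ prefix (nginx only serves the file contents at request time, so this alone does not make the page refresh like tail -f):

location /log/ {
    alias /var/log/nginx/;      # /log/access.log -> /var/log/nginx/access.log
    default_type text/plain;    # show logs as plain text instead of downloading them
    autoindex on;               # directory listing at /log/
}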
This is very puzzling:
I'm on RHEL 7.5, trying to run Nginx 1.14.0 under Supervisor 3.3.4. The ultimate aim is to serve a Django site.
My "/etc/init.d/supervisord" looks like this:
#!/bin/sh
...
# Source init functions
. /etc/rc.d/init.d/functions
prog="supervisord"
prog_bin="/bin/supervisord -c /etc/supervisord.conf"
PIDFILE="/var/run/$prog.pid"
start()
{
    echo -n $"Starting $prog: "
    daemon $prog_bin --pidfile $PIDFILE
    sleep 1
    [ -f $PIDFILE ] && success $"$prog startup" || failure $"$prog startup"
    echo
}
... # "stop", "restart" functions, etc.
"/etc/supervisord.conf" looks like this:
[unix_http_server]
file=/var/run//supervisor.sock
[supervisord]
logfile=/var/log/supervisor/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/var/log/supervisor
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run//supervisor.sock
[include]
files = /etc/supervisor/conf.d/*.conf
"/etc/supervisor/conf.d/" has just one file in it: nginx.conf:
[program:nginx]
user=root
command=/usr/sbin/nginx -c /path/to/site/etc/nginx.conf
autostart=true
autorestart=true
startretries=3
redirect_stderr=True
Invoking the above command directly with sudo /usr/sbin/nginx -c /path/to/site/etc/nginx.conf & is successful. It starts right up and I can see the nginx processes with ps -ef.
But if I start supervisord like this:
$ sudo /etc/init.d/supervisord restart
It fails to launch Nginx:
$ sudo cat /var/log/supervisor/nginx-stdout---supervisor-tqI97D.log
nginx: [emerg] open() "/path/to/site/etc/nginx.conf" failed (13: Permission denied)
Permissions to read that file are good all the way down. Of course, the path is not actually called "/path/to/site/etc/nginx.conf", but there's an "x" for all users on every directory and an "r" for all users on the conf file itself:
$ namei -om /path/to/site/etc/nginx.conf
f: /path/to/site/etc/nginx.conf
dr-xr-xr-x root root /
drwxr-xr-x root root path
drwxr-xr-x root root to
drwxrwxr-x user1 group1 site
drwxrwxr-x user1 group1 etc
-rw-r--r-- root group1 nginx.conf
How can there be an error on an "open()" operation for this file? I've tried changing the "user" to root in "/etc/supervisor/conf.d/nginx.conf" and/or in "/etc/supervisord.conf" but the result is always the same.
Could the fact that SELinux is enabled make a difference? It's currently activated:
$ getenforce
Enforcing
If it helps, the nginx.conf file that can't be opened looks like this:
user nginx;
daemon off;
error_log /path/to/site/var/log/nginx-error.log warn;
pid /path/to/site/var/run/nginx.pid;
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /path/to/site/var/log/nginx-access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    upstream app_server {
        server unix:/path/to/site/var/run/my-django.socket fail_timeout=0;
    }

    server {
        listen 8000;
        server_name xxx.xxx.xxx.xxx;
        charset utf-8;

        location /media {
            alias /path/to/site/htdocs/media;
        }

        location /static {
            alias /path/to/site/htdocs/static;
        }

        location / {
            uwsgi_pass app_server;
            include /path/to/site/etc/uwsgi_params;
        }
    }
}
Does anyone have any ideas?
I suggest you read up on SELinux to really grasp the whole concept before doing anything to a production machine.
The httpd_t context permits NGINX to listen on common web server ports, to access configuration files in /etc/nginx, and to access content in the standard docroot location (/usr/share/nginx). It does not permit many other operations, such as proxying to upstream locations or communicating with other processes through sockets.
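As a concrete example of the proxying restriction mentioned above, upstream/proxy connections from the httpd_t domain are typically gated by SELinux booleans such as httpd_can_network_connect, which you can inspect and, only if you decide it is appropriate, enable persistently:

getsebool httpd_can_network_connect                # usually off by default
sudo setsebool -P httpd_can_network_connect on     # -P persists the change across reboots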
Once you are a king at SELinux (hah), check out NGINX's guide on Modifying SELinux Settings.
Let us know, good luck!
On SELinux-enabled systems, processes and files have security labels. The SELinux policy contains the allowed access patterns between these labels. Access is denied if there is no rule allowing it.
The security labels on your nginx files seem to be incorrect: the AVC error message says that access was denied for a process in the httpd_t domain trying to access a file with the default_t label. To allow access, you need to assign suitable security contexts to your nginx files. Possible contexts are documented in the httpd_selinux man page. For configuration files, an appropriate context would be httpd_config_t, and for user content perhaps httpd_user_content_t.
You can manually apply a new security context to a file using the chcon tool. After you have decided which security contexts to use, you should save them in the file context database using semanage. Otherwise automatic relabeling will label the files incorrectly.
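A minimal sketch of what that could look like for the config path from the question (httpd_config_t is taken from the httpd_selinux man page; adjust the paths and types to your own layout):

# one-off relabel of the config file
sudo chcon -t httpd_config_t /path/to/site/etc/nginx.conf

# persist the rule in the file context database, then relabel from it
sudo semanage fcontext -a -t httpd_config_t "/path/to/site/etc(/.*)?"
sudo restorecon -Rv /path/to/site/etc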
I've written a more detailed answer on the topic on related Unix & Linux Stack exchange question Configure SELinux to allow daemons to use files in non-default locations.
I am new to nginx, trying to serve static content with it, and getting a 403 error. I have a server config like this:
server {
    listen 8000;
    server_name localhost;
    root /Users/ismayilmalik/Documents/github/nginx-express;

    location / {
        index index.html;
    }
}
I have executed commands below:
chmod -R 755 /nginx-express
chmod -R 644 /nginx-express/*.*
And the folder has drwxr-xr-x rights. What's wrong here?
Please go to your nginx error logs to get details.
Run this command to show last errors:
tail -20 /var/log/nginx/error.log
It's good to go through the error logs (e.g. /var/log/nginx/error.log). I had a similar problem once; the cause was the user nginx was running as.
If nginx is running as www, then www will not have access to ismayilmalik's folders unless you also grant access to the /Users/ismayilmalik home folder, but that is not secure. The best solution would be to run nginx as ismayilmalik if you want to serve files from your home folder through nginx.
I finally solved it. Actually nginx had all the permissions it needed to serve static content from:
/Users/ismayilmalik/Documents/github/nginx-express;
The reason was that, on startup, nginx could not create the error.log file in its root directory. After manually creating the file it worked fine. I am using macOS, and to find the logs folder I executed the command below, which shows the configure options nginx was built with (including the default log paths):
nginx -V
BTW, before this I had changed the nginx user from nobody to admin in the main config file, like below:
user [username] [usergroup];
By default the nginx master process runs as root and the worker processes run as nobody.
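If you are not sure which user the worker processes are actually running as, a quick way to check (works on both Linux and macOS) is:

ps aux | grep '[n]ginx'    # the first column shows the user of the master and worker processes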
nginx is compiled with Brotli enabled. In my nginx.conf
http {
    ...
    brotli_static on;
}
My .br files are located on a server that nginx reaches via proxy_pass.
location / {
    ...
    proxy_pass http://app;
}
And .br files have been generated on that app server:
$ ls -lh public/js/dist/index.js*
-rw-r--r-- 1 mike wheel 1.2M Apr 4 09:07 public/js/dist/index.js
-rw-r--r-- 1 mike wheel 201K Apr 4 09:07 public/js/dist/index.js.br
Pulling down the uncompressed file works:
wget https://example.com/js/dist/index.js
Pulls down the 1,157,704-byte uncompressed file.
wget -S --header="accept-encoding: gzip" https://example.com/js/dist/index.js
Pulls down a 309,360-byte gzipped file.
But:
wget -S --header="accept-encoding: br" https://example.com/js/dist/index.js
Still gets the full 1,157,704-byte uncompressed file.
I had hoped brotli_static would proxy the .br file requests too - sending something like a GET request to the backend for the .br equivalent resource - but this doesn't seem to work.
Can brotli_static work through proxy_pass?
Based on a comment from Maxim Dounin (an nginx core engineer) about gzip_static - which I imagine brotli_static behaves similarly to - brotli_static only handles files, not HTTP resources:
That is, gzip_static is only expected to work when nginx is about to return regular files.
So it looks like combining brotli_static with proxy_pass isn't possible.
Your nginx config file needs a section to tell it to serve the static content folder. You don't want your app server to do that.
I believe you'll need to place it before the location / so that it takes precedence.
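For example, a minimal sketch of that kind of static location, assuming the built assets (including the pre-generated .br files) are copied to a local directory such as /srv/app/public on the nginx host (the path and the /js/ prefix here are placeholders):

location /js/ {
    root /srv/app/public;   # serves /srv/app/public/js/... from local disk
    brotli_static on;       # lets the brotli module pick up the pre-built .br files
}

location / {
    proxy_pass http://app;
}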
I've got a custom setup of nginx and php-fpm on arch linux. I'll post my configs below. I think I've read the documentation for these two programs front to back about 6 times by now, but I've reached a point where I simply can't squeeze any more information out of the system and thus have nothing left to google. Here's the skinny:
I compiled both nginx and php from scratch (I'm very familiar with this, so presumably no problems there). I've got nginx set up to serve things correctly, which it does consistently: php files get passed through the unix socket (which is both present and read-/write-accessible to the http user, which is the user that both nginx and php-fpm run as), while regular files that exist get served. Calls for folders and calls for files that don't exist are both sent to the /index.php file. All permissions are in order.
The Problem
My pages get served just fine until there's a php error. The error gets dumped to nginx's error log, and all further requests for pages from that specific child process of php-fpm return blank. They do appear to be processed, as evidenced by the fact that subsequent calls to the file with errors continue to dump error messages into the log file, but both flawed and clean files alike are returned completely blank with a 200 status code.
What's almost wilder is that I found if I then just sit on it for a few minutes, the offending php-fpm child process doesn't die, but a new one is spawned on the next request anyway, and the new process serves pages properly. From that point on, every second request is blank, while the other request comes back normal, presumably because the child processes take turns serving requests.
My test is the following:
// web directory listing:
mysite/
--index.php
--bad_file.php
--imgs/
----test.png
----test2.png
index.php:
<?php
die('all cool');
?>
bad_file.php*:
<?php
non_existent_function($called);
?>
* Note: I had previously posted bad_file.php to contain the line $forgetting_the_semicolon = true, but found that this doesn't actually produce the error I'm talking about (this was a simplified example that I've now implemented on my own system). The above code, however, does reproduce the error, as it produces a fatal error instead of a parse error.
test calls from terminal:
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/index.php # Redirected to / by nginx
curl -i dev.mysite.com/imgs # "all cool"
curl -i dev.mysite.com/imgs/test.png # returns test.png, printing gibberish
curl -i dev.mysite.com/nofile.php # "all cool"
curl -i dev.mysite.com/bad_file.php # blank, but error messages added to log
curl -i dev.mysite.com/ # blank! noooooooo!!
curl -i dev.mysite.com/ # still blank! noooooooo!!
#wait 5 or 6 minutes (not sure how many - probably corresponds to my php-fpm config)
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
#etc....
nginx.conf:
user http;
worker_processes 1;
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type text/plain;
    sendfile on;
    keepalive_timeout 65;
    index /index.php;

    server {
        listen 127.0.0.1:80;
        server_name dev.mysite.net;
        root /path/to/web/root;
        try_files /maintenance.html $uri @php;

        location = /index.php {
            return 301 /;
        }

        location ~ .php$ {
            include fastcgi_params;
            fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

        location @php {
            include fastcgi_params;
            fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        }
    }
}
php-fpm.conf:
[global]
pid = run/php-fpm.pid
error_log = log/php-fpm.log
log_level = warning
[www]
user = http
group = http
listen = var/run/php-fpm.sock
listen.owner = http
listen.group = http
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 3
php.ini available upon request.
In Summary
All pages are served as expected until there's a php error, at which point all subsequent requests to that particular php-fpm child process are apparently processed, but returned as completely blank pages. Errors that occur are reported and continue to be reported in the nginx error log file.
If anyone's got any ideas, throw them at me. I'm dead in the water til I figure this out. Incidentally, if anyone knows of a source of legitimate documentation for php-fpm, that would also be helpful. php-fpm.org appears to be nearly useless, as does php.net's documentation for fpm.
Thanks!
I've been messing with this since yesterday and it looks like it's actually a bug with output buffering. After trying everything, reading everything, going crazy on it, I finally turned off output buffering and it worked fine. I've submitted a bug report here.
For those who don't know, output buffering is a setting in php.ini that keeps php from sending output down the line as soon as it is produced. Not a totally crucial feature. I switched it from 4096 to Off:
;php.ini:
...
;output_buffering = 4096
output_buffering = Off
...
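If you want to confirm that the FPM pool actually picked up the new value, one rough way to check (the probe file name is made up here, and the restart command varies by system) is to restart php-fpm and query the effective setting through the web server:

# restart php-fpm however your system does it, e.g.:
sudo systemctl restart php-fpm

# drop a throwaway probe into the web root and request it through nginx
echo '<?php var_dump(ini_get("output_buffering"));' > /path/to/web/root/ob_check.php
curl -i dev.mysite.com/ob_check.php   # should no longer report 4096 once buffering is off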
Hope this helps someone else!
I am running CentOS 6 with nginx. It is currently running perfectly, and I am trying to password protect my admin directory. I can successfully log in. However, I get a 403 Forbidden when I try to view the main index page (index.php) in the directory.
2013/04/18 02:10:17 [error] 17166#0: *24 directory index of "/usr/share/ngin/html /somedir/" is forbidden, client: XXX, server: mysite.com, request: "GET /somedir/ HTTP/1.1", host: "mysite.com"
I have double checked permissions on the ".htpasswd" file. It belongs to "root:root" with chmod 640. I have also tried setting ownership to "nginx:nginx" and the error still persists.
This is how I am getting htpasswd working:
location ~ ^/([^/]*)/(.*) {
    if (-f $document_root/$1/.htpasswd) {
        error_page 599 = @auth;
        return 599;
    }
}

location @auth {
    auth_basic "Password-protected";
    auth_basic_user_file $document_root/$1/.htpasswd;
}
Though the question is pretty old, I must put my solution here to help others. This very problem was a real pain for me.
I have probably read (and implemented/tried) almost all the threads available online to date, but none of them solved this "403 Forbidden" nginx issue altogether.
I will write down the steps from the beginning (to password-protect access to my site):
1> We will create a hidden file called .htpasswd in /etc/nginx:
sudo sh -c "echo -n 'username:' >> /etc/nginx/.htpasswd"
2> Now add an encrypted password for that username:
sudo sh -c "openssl passwd -apr1 >> /etc/nginx/.htpasswd"
This will ask you to enter a password and confirm it.
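At this point the file should contain a single username:hash line; the -apr1 option produces hashes that start with $apr1$:

cat /etc/nginx/.htpasswd
# username:$apr1$...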
3> Now we need to set up nginx to check our newly created .htpasswd before serving any content:
location / {
    try_files $uri $uri/ /index.php?$query_string; # as per my configuration
    auth_basic "Authorized access only";
    auth_basic_user_file .htpasswd;
}
4> Finally, restart the server for the changes to take effect:
sudo service nginx restart
Now browse the URL.
Please note: I didn't alter any permissions. By default, the file permissions for .htpasswd are set at creation time and will look something like this:
-rw-r--r-- 1 root root 42 Feb 12 12:22 .htpasswd
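To sanity-check the setup from a terminal, a request without credentials should come back 401 and one with credentials should succeed (the host and credentials below are placeholders):

curl -i http://mysite.com/                       # expect HTTP/1.1 401 Unauthorized
curl -i -u username:password http://mysite.com/  # expect a normal 200 response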
Read the error carefully. You are missing an index.html or similar.