Is it possible to have a multi-line log in nginx? - nginx

I'm sending logs to an nginx server and want to dump these logs to a file. When sending one log at a time, I was able to do this using the NginxEchoModule to force nginx to read the body like so:
http {
    log_format log_dump '$request_body';

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        access_log /logs/dump log_dump;

        location /logs {
            echo_read_request_body;
        }
    }
}
This works fine when I send one log at a time:
POST /logs HTTP/1.1
Host: www.example.com
123456 index.html was accessed by 127.0.0.1
POST /logs HTTP/1.1
Host: www.example.com
123457 favicon.ico was accessed by 127.0.0.1
However when I try to batch logs (to avoid both connection overhead and HTTP header overhead):
POST /logs HTTP/1.1
Host: www.example.com
123456 index.html was accessed by 127.0.0.1
123457 favicon.ico was accessed by 127.0.0.1
This is what shows up in my log file:
123456 index.html was accessed by 127.0.0.1\x0A123457 favicon.ico was accessed by 127.0.0.1
Now my assumption is that because one nginx log line is intended to be one line, it's encoding my new-line characters to ensure this. Is there a way to allow multi-line nginx logs?

Actually got the answer from one of the more experienced engineers at my work this time:
log_format log_dump escape=none '$request_body';
This requires nginx 1.13.10 or later; the escape=none parameter stops nginx from escaping the newline characters in the logged value:
$> curl http://localhost/logs -d "Words
dquote> More words"
$> cat /logs/dump
Words
More words
$>
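Put together, the config from the question with the escape parameter applied might look like this (a sketch; paths and ports as in the original):

```nginx
http {
    # escape=none (nginx >= 1.13.10) writes $request_body verbatim,
    # newlines included, instead of escaping them as \x0A.
    log_format log_dump escape=none '$request_body';

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        access_log /logs/dump log_dump;

        location /logs {
            echo_read_request_body;
        }
    }
}
```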

Related

Nginx server block not functioning as expected

I'm using this config file with nginx:
server {
    listen 80;
    server_name harrybilney.co.uk;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 80;
    server_name kyra-mcd.co.uk;

    location / {
        proxy_pass http://localhost:8080;
    }
}
Which is stored in /etc/nginx/sites-available. The server block for the domain kyra-mcd.co.uk works perfectly as expected, but the server block for harrybilney.co.uk does not, and my browser cannot find the server for harrybilney.co.uk.
Both domains are hosted with GoDaddy and have the exact same DNS settings pointing towards my static IP (IPv4 and IPv6 with A and AAAA records).
Can anyone explain why I'm having this issue? I've tried changing the config but am having no luck. I understand this is a very basic config file for nginx, but for now I'm just trying to get both domains working on my one static IP before I add in anything complex.
Having both server blocks in a single file is not a problem at all.
Here is a default.conf file:
server {
    listen 80;
    server_name harrybilney.co.uk;

    location / {
        return 200 "$host\n";
    }
}

server {
    listen 80;
    server_name kyra-mcd.co.uk;

    location / {
        return 200 "Host should match kyra-mcd.co.uk = $host\n";
    }
}
Test and reload your config by issuing sudo nginx -t && sudo nginx -s reload
The curl test:
$# curl -H "Host: kyra-mcd.co.uk" localhost
Host should match kyra-mcd.co.uk = kyra-mcd.co.uk
$# curl -H "Host: harrybilney.co.uk" localhost
harrybilney.co.uk
As you can see both servers are in a single file and the server_name taking care of finding the correct server-block based on the Host header.
Checking your DNS one more time is worth it:
kyra-mcd.co.uk. 600 IN A 90.255.228.109
harrybilney.co.uk. 3600 IN A 90.255.228.109
Looks good to me as well. So the traffic should hit the server.
So your configuration looks good to me. Make sure everything is actually loaded by issuing sudo nginx -T.
curl is working on my end, so it looks like the problem is related to DNS on your end. Can you confirm curl is working from your end as well?
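One more thing that might be worth ruling out, since both domains publish AAAA records: the server blocks above only bind IPv4 (listen 80). If the failing browser happens to connect over IPv6, nothing is listening there. A sketch with an explicit IPv6 listener added:

```nginx
server {
    listen 80;
    listen [::]:80;   # also accept IPv6 connections (AAAA record traffic)
    server_name harrybilney.co.uk;

    location / {
        proxy_pass http://localhost:8080;
    }
}
```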

Nginx+Gunicorn - reverse proxy not working

I am trying to set up a Python Flask application on a server following this guide: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04. I have this working on my local machine by following the guide. However, when I try the same config on the actual server, I run into an issue proxying requests back to the gunicorn server. I am able to serve static content from Nginx with no problem, but when the static content makes a web service call back to Nginx, it should be proxied to the gunicorn server.
For example when I try to make the call 'http://example.com/rest/webService', I would expect Nginx to pass anything starting with /rest/ back to gunicorn. The error below is all I can see in the error logs about what is happening:
2019/01/18 12:48:18 [error] 2930#2930: *18 open() "/var/www/html/rest/webService" failed (2: No such file or directory), client: ip_address, server: example.com, request: "GET /rest/webService HTTP/1.1", host: "example.com", referrer: "http://example.com/"
Here is the setup for python_app:
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.html;

    location ^/rest/(.*)$ {
        include proxy_params;
        proxy_pass http://unix:/home/username/python_app/python_app.sock;
    }
}
The only change to my nginx.conf file was to change 'include /etc/nginx/sites-enabled/*' to 'include /etc/nginx/sites-enabled/python_app'.
Please let me know if you have any ideas at all on what I may be missing! Thanks!
Not a solution, but some questions....
If you run
sudo systemctl status myproject
Do you see confirmation that gunicorn is running, and which socket it is bound to?
And does
sudo nginx -t
come back clean, with no diagnostics?
About the regex in the nginx location block: I don't see anything similar in the guide. I see that you're trying to capture everything after "rest/", but looking at the nginx documentation, I think you'd have to use $1 to reference the captured part of the URL. Can you try without the "^/rest/(.*)$" and see whether nginx finds anything?
Is the group that owns your directory a group that nginx is part of? (A lot of setups use www-data.)
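To make the point about the location block concrete: without a ~ modifier, nginx treats ^/rest/(.*)$ as a literal prefix string, and since request URIs always start with /, it never matches, so requests fall through to the static root (hence the open() error). A sketch of the two usual alternatives, with the socket path from the question:

```nginx
# Plain prefix match -- simplest, no regex needed:
location /rest/ {
    include proxy_params;
    proxy_pass http://unix:/home/username/python_app/python_app.sock;
}

# Or, if the capture is really needed, a regex location with the ~ modifier:
# location ~ ^/rest/(.*)$ { ... }
```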

xmlrpc attack on nginx configured with HTTPS redirection

I am trying to avoid nasty xmlrpc attack with the following configuration:
server {
    listen 443 ssl default deferred;
    server_name myserver.com;
    ...
}

server {
    listen 80;
    server_name myserver.com;

    location /xmlrpc.php {
        deny all;
        access_log off;
        log_not_found off;
        return 444;
    }

    return 301 https://$host$request_uri;
}
Apparently the location block is not working, since requests to /xmlrpc.php get redirected, as shown by the logs:
[02/Jun/2016:11:24:10 +0000] "POST /xmlrpc.php HTTP/1.0" 301 185 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
How can I discard all requests to /xmlrpc.php right away without having them redirected to HTTPS?
An XML-RPC attack is a common attack in which an attacker constantly calls your xmlrpc.php file with random authentication credentials.
Utilities like Fail2ban are quite effective at banning offending IPs and protecting your WordPress site against common xmlrpc attacks.
First of all, disable XML-RPC if you are not posting content from outside. Add the following line of code to your theme's functions.php file:
add_filter( 'xmlrpc_enabled', '__return_false' );
Add the following in your .conf file
location = /xmlrpc.php {
    deny all;
    access_log off;
    log_not_found off;
}
Then, you need to install Fail2Ban on your server
apt-get install fail2ban iptables
or
yum install fail2ban
After installation, edit the jail.conf file:
vim /etc/fail2ban/jail.conf
Inside jail.conf, add the following lines:
[xmlrpc]
enabled = true
filter = xmlrpc
action = iptables[name=xmlrpc, port=http, protocol=tcp]
logpath = /var/log/apache2/access.log
bantime = 43600
maxretry = 2
This will read the access.log file (provide the actual path of your access log) and look for failed attempts. If it detects more than 2 failed attempts, the attacker's IP address is added to your iptables rules.
Now, we have to create a filter for fail2ban. Type this in terminal
cd /etc/fail2ban/filter.d/
vim xmlrpc.conf
Inside this filter file paste the following definition
[Definition]
failregex = ^<HOST> .*POST .*xmlrpc\.php.*
ignoreregex =
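Before restarting, you can sanity-check the regex offline. A rough sketch (the log line and IP address below are made up), substituting a plain IP pattern for fail2ban's <HOST> tag:

```shell
# Hypothetical log line in the combined-log format fail2ban will be reading
# (the client IP comes first, where the filter's <HOST> tag expects it):
line='203.0.113.5 - - [02/Jun/2016:11:24:10 +0000] "POST /xmlrpc.php HTTP/1.0" 403 185'

# <HOST> is fail2ban syntax; substituting a plain IP pattern lets grep
# check the rest of the expression:
echo "$line" | grep -E '^[0-9.]+ .*POST .*xmlrpc\.php' && echo "filter matches"
```

fail2ban also ships a proper tester for this: fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/xmlrpc.conf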
Now, just restart the fail2ban service
service fail2ban restart or /etc/init.d/fail2ban restart
Watch the log like this:
tail -f /var/log/fail2ban.log
You will also see entries accumulate in your iptables rules; banned clients get a connection-refused error. Use
watch iptables -L
to monitor continuously. It should immediately block the xmlrpc attack, and you should see lots of entries in your iptables.
If there are plugins which depend on XML-RPC, you can whitelist your own IP in the config file.
You can try it this way:
location ^~ /xmlrpc.php {
    deny all;
    access_log off;
    log_not_found off;
    return 444;
}
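A likely reason the original config redirects everything: a return at server level is executed during the server rewrite phase, before nginx selects a location, so the location /xmlrpc.php block is never reached. One sketch of a fix is to keep the deny block and move the redirect into a catch-all location instead:

```nginx
server {
    listen 80;
    server_name myserver.com;

    location = /xmlrpc.php {
        access_log off;
        log_not_found off;
        return 444;   # drop the connection without a response
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```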

Nginx redirect to non-https address fails in Firefox

On a website which currently is serving on HTTPS, I want to redirect some pages to HTTP and not use SSL on them. In nginx I configured it like this:
server {
    listen 80;
    server_name example.com;

    location / {
        root /var/some/where;
    }

    location /secure {
        return 301 https://example.com$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.com;
    include ssl_params;

    location /secure {
        root /var/some/where/secure;
    }

    location / {
        return 301 http://example.com$request_uri;
    }
}
Using curl I can see everything is fine as follow:
$ curl -sIL https://example.com/ | grep HTTP
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK
$ curl -sIL http://example.com/ | grep HTTP
HTTP/1.1 200 OK
But when I try to open HTTPS url in Firefox, I'll get this error:
The page isn't redirecting properly.
Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
This problem can sometimes be caused by disabling or refusing to accept cookies
Using a private window, opening the URL over HTTP works the first time. But as soon as I refresh the page, it is redirected to the HTTPS scheme and the error appears again.
Do you have a certificate set up? None showing in your config.
The way it works is to open the secure connection (using the certificate) then return the content (including any redirect). It is not possible to redirect before creating the secure connection (as that would be a huge security risk to https if it was possible).
The output of curl -i -L on the target domain includes Strict-Transport-Security: max-age=15768000, so an HSTS header is set, and by definition it:
tells your browser to always connect to a particular domain over HTTPS. Attackers aren't able to downgrade connections, and users can't ignore TLS warnings.
That explains the loop: Firefox upgrades every request for the domain back to HTTPS, the server answers with a 301 to HTTP, and the browser upgrades it again.
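If the HTTP pages are really wanted, the HSTS policy has to be withdrawn before browsers will follow the redirect; a sketch, assuming the header is sent from this HTTPS server block:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    include ssl_params;

    # max-age=0 tells browsers that already cached the HSTS policy to
    # drop it on their next HTTPS visit; 'always' sends the header on
    # all response codes, including the 301s.
    add_header Strict-Transport-Security "max-age=0" always;

    ...
}
```

Note that clients only see this header over HTTPS, so users who never revisit an HTTPS page keep the cached policy until max-age expires.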

nginx not responding to GET requests

nginx is not responding to any requests and I'm not sure why. Here is what my file in sites-available looks like (it is symlinked from sites-enabled):
server {
    listen 80;
    server_name 127.0.0.1;
    access_log /srv/www/logs/access_log;
    error_log /srv/www/logs/error_log;
    root /srv/www/public_html;

    location / {
        index index.html;
    }
}
If I try to access 127.0.0.1 or localhost, the browser (Firefox) just tells me that is "loading" for a very long time.
Nginx is listening on port 80 when I run netstat -lpn:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 10952/nginx
I have tried to telnet 127.0.0.1 80 and then typed the following:
GET /index.html HTTP/1.1
Host: 79.124.59.177
<blank line>
However, there is no response to this. Any ideas or suggestions? I'm completely stumped.
I think you should set Host to "127.0.0.1", which represents the name of the server; it should be the same as the server_name in the config file, I guess.
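The manual telnet test can also be scripted; a sketch (nc is an assumption, and the Host value follows the suggestion above) that additionally sends Connection: close, so the server closes the connection after responding instead of holding it open for keep-alive, which can look like an endless hang when testing by hand:

```shell
# Build a raw HTTP/1.1 request. Header lines end in \r\n and the request
# ends with a blank line; 'Connection: close' asks the server to respond
# and then close the connection.
request="$(printf 'GET /index.html HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n')"

# Send it to the local server (re-adding the terminating blank line):
#   printf '%s\r\n\r\n' "$request" | nc 127.0.0.1 80
```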