Print upstream URL in nginx access logs - nginx

I have a group of upstream servers, and I have been trying to print, in the access logs, the upstream URL that each request went to. I tried $proxy_host and $upstream_addr, but neither solved my issue: $proxy_host prints the upstream name, not the URL, and $upstream_addr prints the IP address. Is there a way to print the URL in the access logs?
upstream backend {
    server backend1.example.com:8080 weight=90;
    server backend2.example.com:8080 weight=10;
}
location /test {
    set $foo backend;
    proxy_pass https://$foo;
}
With $proxy_host, the access log prints "proxy_host=backend". If I use $upstream_addr, it prints the IP, e.g. "upstream_addr=10.1.0.0".
Is there a way to print whether the request was routed to backend1.example.com or backend2.example.com?
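As far as I know, nginx has no built-in variable for the resolved upstream hostname, so one workaround is to map $upstream_addr back to a name at log time. A minimal sketch, assuming the backends resolve to the placeholder addresses shown (adjust them to your environment):
# http context: translate the upstream address back into a hostname
map $upstream_addr $upstream_name {
    default               "unknown";
    "~^10\.1\.0\.1:8080"  backend1.example.com;   # placeholder IP
    "~^10\.1\.0\.2:8080"  backend2.example.com;   # placeholder IP
}
log_format upstream_log '$remote_addr "$request" upstream=$upstream_addr name=$upstream_name';
access_log /var/log/nginx/access.log upstream_log;
Since map is evaluated lazily when $upstream_name is used, this picks up the address nginx actually proxied to for each request.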

Related

haproxy rewrite uri keep backend header host

Using a rewrite in HAProxy 1.8, I need to redirect a URI to another domain (host) but keep the Host header in the request.
Example:
www.mysite.com/api -> 104.4.4.4/api (rewrite) -> result www.mysite.com/api (response)
I ran a lot of tests with various HAProxy parameters and had some success, but with one problem.
This is my current scenario:
backend site1
    acl path_to_rw url_beg /api
    acl mysite hdr(host) -i www.mymainsite.com
    http-request set-header Host www.mymainsite.com if mysite path_to_rw
    reqirep ^Host Host:\ host_to_forward/api if mysite path_to_rw
    cookie SERVERID insert indirect nocache maxlife 1h
    server site1 myhost:80 check cookie site1
My backend is an IIS server, and my rewrite works. But I get the error below:
"HTTP Error 400. The request hostname is invalid"
It seems that my backend does not accept the Host header that I send. Has anybody run into this problem before?
I managed to fix this problem with a simple combination of ACLs and the "use_backend" directive.
E.g., for the Host header www.mysite.com, with the path to the application at the other origin being /api:
acl myhost hdr(host) -i www.myhost.com
acl path_api url_reg -i /API(.*)
use_backend be_origin_servers if myhost path_api

backend be_origin_servers
    server myserver1 10.10.10.10 check cookie myserver1

Nginx: Setting up SSL-passthrough

I'm trying to configure SSL-passthrough for multiple webapps using the same nginx server (nginx version: nginx/1.13.6), but when restarting the nginx server, I get an error complaining that
nginx: [emerg] "stream" directive is duplicate
The configuration I have is the following:
2 files for the ssl passthrough that look like this:
server1.conf:
stream {
    upstream workers {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    server {
        listen server1.com:8443;
        proxy_pass workers;
    }
}
and server2.conf:
stream {
    upstream workers {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen server2.com:8443;
        proxy_pass workers;
    }
}
If I remove one of the two files, then nginx starts correctly.
How can this be achieved?
Thanks,
Cristi
Streams work at Layer 5 and cannot read encrypted traffic (which is Layer 6 on the OSI model), so they cannot tell apart requests hitting server1.com and server2.com unless those names point to different IPs.
This can be solved by one of the following:
1. Decrypt the traffic on nginx, then proxy-pass it to the backend processes/workers over HTTP.
2. Bind server1.com to a port different from server2.com's (sketched below).
3. Get an additional IP address and bind server2.com to that.
4. Get an additional load balancer and move server2.com there.
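A minimal sketch of option 2, assuming the two files are merged into a single stream block (the "stream" directive may appear only once in a configuration, which is what the duplicate error is complaining about) and the two upstreams are renamed so they no longer collide; the port numbers are illustrative:
stream {
    upstream server1_workers {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    upstream server2_workers {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen 8443;
        proxy_pass server1_workers;
    }
    server {
        listen 9443;
        proxy_pass server2_workers;
    }
}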

Nginx reverse proxy: dynamic hostname from a header key/value or the URL path

I have an nginx problem and I hope you can help me solve it.
There are several servers:
User PC, on the internet.
Nginx proxy, hostnamed "nginxproxy", located on the internal network; it is the only server with a public IP ("1.1.1.1") and acts as a jump host, listening on port 8090.
Server 1, hostnamed "tomcat1", on the internal network (private IP "70.1.1.1" only).
Server 2, hostnamed "tomcat2", on the internal network (private IP "70.1.1.2" only).
And 5, 6, ... there are more servers hostnamed apache1, apache2, redis1, etc.
Now my client wants to send HTTP requests directly to the servers on the internal network, but that is not possible (they have no public IPs), so the calls have to go through the nginx proxy first.
What I wonder is: if I put the destination server's hostname in the request's header or URL when calling from the user PC, can nginx parse it and route the request to that destination on the internal network?
For example, if I call:
http://nginxproxy:1888/[destination hostname]/[path, files like index.html, some keys and values &k1=v1, etc.]
I hope nginx can convert it and call the destination host like this:
http://[destination hostname]:8888/[path, files like index.html, some keys and values &k1=v1, etc.]
I tried to do this, but there were some errors.
The error log printed:
"localhost could not be resolved (10060: Operation timed out), client: 127.0.0.1, server: localhost, request: "GET /localhost/8080/index"
server {
    listen 1888;
    server_name localhost;
    location ~ ^\/([a-zA-Z0-9]+)\/([0-9]+)\/([a-zA-Z0-9]+) {
        proxy_pass http://$1:$2/$3;
    }
}
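The "could not be resolved" error happens because proxy_pass with variables makes nginx resolve the hostname at request time, which requires a resolver directive. A minimal sketch of one possible fix, using named captures; the resolver address is a placeholder for a DNS server that knows the internal hostnames:
server {
    listen 1888;
    # required because proxy_pass contains variables; placeholder internal DNS
    resolver 10.0.0.2;
    location ~ ^/(?<target_host>[a-zA-Z0-9]+)/(?<target_port>[0-9]+)/(?<target_path>.*)$ {
        proxy_pass http://$target_host:$target_port/$target_path;
    }
}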
And one more thing: in the Java code I set
import org.apache.http.HttpMessage;

HttpMessage request;
request.addHeader("destinationHost", "tomcat2");
request.addHeader("destinationPort", "8888");
and call this URL:
http://nginxproxy:1888/[path, files like index.html, some keys and values &k1=v1, etc.]
Can nginx convert the URL to
http://tomcat2:8888/[path, files like index.html, some keys and values &k1=v1, etc.]
and pass the request there? If so, how should I set up nginx.conf?
Thank you so much, and have a nice day.
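For the header-based variant, nginx exposes each request header as an $http_* variable (name lowercased, dashes mapped to underscores), so "destinationHost" and "destinationPort" arrive as $http_destinationhost and $http_destinationport. A minimal sketch under the same placeholder-resolver assumption as above:
server {
    listen 1888;
    resolver 10.0.0.2;  # placeholder internal DNS
    location / {
        # destinationHost / destinationPort are the headers set by the Java client
        proxy_pass http://$http_destinationhost:$http_destinationport$request_uri;
    }
}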

Nginx memcached with fallback to remote service

I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}
upstream remote {
    server api.example.com:443;
    keepalive 16;
}
server {
    listen 80;
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }
    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I tried setting the memcached upstream incorrectly on purpose, but then I get HTTP 499 instead, plus warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it. Just run
nc -k -l 172.17.0.6 11211
and curl your resource; curl will hang for a while. Press Ctrl+C, and you'll see this message in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll see this constantly in your error logs (I see it every time with error_log ... info).
Since you do see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind), and use the -b option with telnet so that you test the memcached servers' availability from the same source address.
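A minimal sketch of that suggestion; the bind address below is a placeholder for whichever source address your routing actually uses:
location / {
    set $memcached_key "$uri?$args";
    memcached_pass http_memcached;
    # pin the source address nginx uses when connecting to memcached (placeholder IP)
    memcached_bind 172.17.0.1;
}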
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means for each request only one of your memcached servers is queried, chosen in rotation.
You can change this by setting memcached_next_upstream not_found, so a missing key is considered an error and all of your servers are polled. That's probably OK for a farm of 2 servers, but unlikely what you want for 20 servers.
The same is ordinarily the case for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so a given key ends up on only one server in the pool.
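If you want nginx to shard keys across the memcached servers the way those client libraries do, one standard way to sketch it is the upstream hash directive (addresses taken from the question's config):
upstream http_memcached {
    # map each key consistently onto one server, like a client-side hash ring
    hash $memcached_key consistent;
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}
Whatever writes to memcached would then need to use the same sharding scheme, otherwise reads and writes land on different servers.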
5. what to do
I've managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging, I'd get rid of the docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config:
(https and HTTP/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}
upstream remote {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name server.lan;
    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;
    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }
    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py:
this is my dummy server (Python):
from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time although we stored data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
Whole picture (screenshot): the 2 windows on the right are
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv

Tell Nginx to try another upstream on error

I'm trying to determine whether it is possible to tell Nginx to choose another server from a specified upstream when the first server selected returns a specific error. E.g.:
try upstream server 0
if it returns a certain error code (e.g. 503)
    try upstream server 1
else
    return the response to the client
Here's a sample of what you need to do; you can read more details in this answer:
upstream myservers {
    # the first server is the main server
    server xxx.xxx.xxx.xxx weight=999 fail_timeout=5s max_fails=1;
    server xxx.xxx.xxx.xxx;
}
server {
    # ... all other config ...
    location / {
        proxy_pass http://myservers;
    }
}
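Note that by default nginx only moves to the next server on connection errors and timeouts (proxy_next_upstream defaults to "error timeout"), so for the 503 case from the question the directive needs that status listed explicitly. A minimal sketch, with the status list chosen to match the question's example:
location / {
    proxy_pass http://myservers;
    # also fail over to the next upstream server when this one answers 503
    proxy_next_upstream error timeout http_503;
}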
