I am trying to get the host/IP that the requester used, exactly as the request arrives at the HAProxy node.
My haproxy config is as below:
frontend www-http
bind *:9000
http-request redirect location https://%fi:9143
frontend www-https
bind *:9143 ssl crt /root/keys.pem
reqadd X-Forwarded-Proto:\ https
default_backend www-backend
backend www-backend
balance roundrobin
cookie SERVERID insert indirect nocache
server server1 1.1.1.1:9080 cookie server1 weight 1 maxconn 1024 check
So here, if any HTTP request comes in, I need to redirect it to HTTPS.
The request may come with either an IP address or a hostname in fully qualified form. For example, a request with an IP address, like
http://10.10.10.10:9000
this needs to be forwarded to https://10.10.10.10:9143
Alternatively, the request may come with a hostname in fully qualified form, like
http://myhost.domain.com:9000
this needs to be forwarded to https://myhost.domain.com:9143
Basically, 10.10.10.10 and myhost.domain.com are the same system.
With the above HAProxy configuration I cannot get the hostname case to work: %fi is the frontend IP, so the request is always redirected to https://10.10.10.10:9143.
So my question is: how can I get the host/IP of the HAProxy node exactly as the client requested it?
I tried the options below, which did not work:
http-request redirect location https://%f:9143
http-request redirect location https://%[req.hdr(Host)]:9143
from https://www.haproxy.com/doc/aloha/7.0/haproxy/log_format_rules.html
See "How do I set a dynamic variable in HAProxy?" for additional details; using that as a base, here is what should work for you:
frontend www-http
bind *:9000
# Redirect user from http port to https port
http-request set-var(req.hostname) req.hdr(Host),field(1,:),lower
http-request redirect code 301 location https://%[var(req.hostname)]:9143 if !{ ssl_fc }
frontend www-https
bind *:9143 ssl crt /root/keys.pem
reqadd X-Forwarded-Proto:\ https
default_backend www-backend
backend www-backend
balance roundrobin
cookie SERVERID insert indirect nocache
server server1 1.1.1.1:9080 cookie server1 weight 1 maxconn 1024 check
My situation was a little different: I was only looking to redirect a stats UI URL so that I didn't have to update every stats URL in our internal documentation. Here is what worked for my situation (in case it helps someone else):
userlist stats-auth
group admin users adminuser
group readonly users readonlyuser
# Passwords created via mkpasswd -m sha-512 PASSWORD_HERE
user adminuser password NOT_REAL_PASSWORD
user readonlyuser password NOT_REAL_PASSWORD
listen stats
# Used just for the initial connection before we redirect the user to https
bind *:4711
# Combined file containing server, intermediate and root CA certs along
# with the private key for the server cert.
bind *:4712 ssl crt /etc/ssl/private/my-site-name_combined_cert_bundle_with_key.pem
option dontlognull
mode http
option httplog
# Redirect user from http port to https port
http-request set-var(req.hostname) req.hdr(Host),field(1,:),lower
http-request redirect code 301 location https://%[var(req.hostname)]:4712/ if !{ ssl_fc }
acl AUTH http_auth(stats-auth)
acl AUTH_ADMIN http_auth_group(stats-auth) admin
stats enable
# The only "site" for using these ports is the admin UI, so use '/' as
# the base path instead of requiring something like '/haproxy_stats' or
# '/stats' in order to display the UI.
stats uri /
# Force a login if not already authenticated
stats http-request auth unless AUTH
# Allow administrator functionality if user logged in using admin creds
# (there are separate read-only username and password pairs)
stats admin if AUTH_ADMIN
I left out the frontend and backend config as those are much longer/detailed.
You can get the source address through the src sample fetch.
HAProxy holds the requester's IP under this fetch, and it can be used in ACLs and other places.
For logging, use it in the following manner: %[src]
Check out these links: src and fetching-samples (under Layer 4).
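For example, here is a minimal sketch (the frontend name, the backend name, and the 203.0.113.0/24 network are made up for illustration) showing src used both in an ACL and in a custom log format:
frontend www
bind *:80
mode http
# Deny requests coming from a specific client network, matched via the src fetch
acl is_bad_client src 203.0.113.0/24
http-request deny if is_bad_client
# Include the client address in each log line via %[src]
log-format "client=%[src] path=%[path] status=%ST"
default_backend www-backend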
We have an AngularJS application with a Node.js app that creates a certificate and key for the service hostname, but only when the HTTPS port is 443. The generated certificates are then consumed by nginx as shown below:
<% if ENV["HTTPS__ENABLED"] == "true" %>
listen <%= ENV["HTTPS__PORT"] %> ssl;
# These files are generated by the node app
ssl_certificate /cert.csr;
ssl_certificate_key /tls_private_key.csr;
ssl_protocols TLSv1.2;
<% end %>
But when I set port 443 in the route with re-encrypt termination, I get the error below when accessing the application:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
The request is not even reaching the pod. If I create the route with edge termination, it gives this error:
400 Bad Request
The plain HTTP request was sent to HTTPS port
This is because with edge termination there is no encryption from the router to the pod.
I cannot use the passthrough termination policy, as we have a path in our route, which passthrough termination does not support.
Can someone please let me know how to achieve end-to-end encryption in OpenShift 4.3? We do not use a custom domain here.
I was looking at the way to create a re-encrypt route:
oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com
but since we are not using a custom domain, our route should use the default cert and key, right? So there is no need to provide those? I am also not clear on how to create the --dest-ca-cert for this route.
TLS is already enabled in our AngularJS app via a Node.js app that creates the cert and key consumed by nginx. Since the pod inside the cluster uses TLS and its certificate is issued by a CA, that CA cert is what we should put in destinationCACert for the route. The CA cert is how the router determines whether it can trust the upstream pod for the TLS communication.
We used the ca.crt located at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt as the destination certificate while creating the re-encrypt route. We selected the HTTPS port while creating the route.
oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com
Here tls.crt and tls.key are not needed for us, as we were using the default domain of the OpenShift cluster. The only cert we used is --dest-ca-cert, which can also be found in the secret service-serving-cert-signer-sa-token-l42lm of the openshift-service-ca namespace.
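In other words, a sketch of roughly what the command looks like for us (service name reused from the question; the CA file is a local copy of the pod's service-account CA):
# ca.crt copied from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt inside the pod
oc create route reencrypt --service=frontend --dest-ca-cert=ca.crt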
For a re-encrypt route, the pod needs to be configured with a TLS certificate, as it has to respond to TLS requests originating from the OpenShift router. You already have that, as is evident from the error you get when trying to use an edge route.
Now, this TLS certificate must be created with the same hostname that you want to use in the actual route. It is not necessary for this TLS certificate to be CA-signed, but the hostname must match the route. Only then can the route forward traffic to your pod.
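If it helps to illustrate that last point, a hypothetical openssl command (the hostname is made up) for creating a self-signed certificate whose common name matches the intended route hostname might look like this:
# Self-signed cert/key pair with a CN that matches the route hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/CN=frontend-myproject.apps.example.com"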
I need your help.
I have implemented a haproxy configuration which correctly manages both http and websocket backends, except in one specific scenario.
Here is a summary of how this works:
When I connect to :2703/webapp, HAProxy correctly forwards the request to one of the two configured HTTP backends (webapp-lb1 or webapp-lb2).
When I connect to :2703/webapp/events, HAProxy correctly forwards the request to one of the two configured websocket backends (websocket-lb1 or websocket-lb2).
Webapp is a servlet running in apache tomcat.
When I stop one of the two backend tomcats, haproxy correctly switches to the other one (for both the http and the websocket).
On the contrary, when I try to simulate an outage of one of the HTTP backends by stopping the webapp via the Tomcat manager, HAProxy reports an HTTP 404 error but does not switch to the other backend.
Since I explicitly configured the http-check expect status 302 directive, I would expect that, in case of a 404 status, HAProxy would switch to the other backend.
I had a look at the official HAProxy documentation and also tested the http-check disable-on-404 configuration, but this is not what I need, as the HAProxy behavior remains exactly the same as above.
For info, with http-check disable-on-404 enabled, HAProxy detects that the backend I stopped is stopping but takes no further action (which, as far as I understand, is exactly what we should expect from http-check disable-on-404 in case of a 404 status); below is the HAProxy log when this option is enabled:
Jul 23 14:19:23 localhost haproxy[4037]: Server webapp-lb/webapp-lb2 is stopping, reason: Layer7 check conditionally passed, code: 404, info: "Not Found", check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Below is an extract of my HAProxy configuration:
frontend haproxy-webapp
bind *:2703
monitor-uri /haproxy_check
stats enable
stats uri /haproxy_stats
stats realm Strictly Private
stats auth admin:zqxwcevr
acl is_websocket url_beg /webapp/events
use_backend websocket-lb if is_websocket
default_backend webapp-lb
log global
backend webapp-lb
server webapp-lb1 192.168.136.129:8888 maxconn 400 check cookie webapp-lb1
server webapp-lb2 192.168.136.130:8888 maxconn 400 check cookie webapp-lb2
balance roundrobin
cookie JSESSIONID prefix nocache
log global
#http-check disable-on-404
option httpchk GET /webapp
http-check expect status 302
backend websocket-lb
server websocket-lb1 192.168.136.129:8888 maxconn 400 check
server websocket-lb2 192.168.136.130:8888 maxconn 400 check
balance roundrobin
log global
Please give me a hint, as I have spent ages reading documentation and forums with no success.
Thanks!
I am using HAProxy in front of my web server for SSL termination.
I forward the request to port 81 if it is HTTPS and to port 80 if it is plain HTTP:
backend b1_http
mode http
server bkend_server
backend b1_https
mode http
server bkend_server:81
The problem is, when HAProxy sends the request to the back-end, it sends the HTTP_HOST header as request.domain.com:81.
Is it possible in HAProxy to send the HTTPS request to the back-end on a specific port without appending the port to the HTTP_HOST request header?
There are two issues, here.
First, there is no HTTP_HOST header. The header is Host:. It sounds like HTTP_HOST is something being generated internally by your web server or framework.
Second, HAProxy doesn't modify the Host: header just because your back-end is listening on a port other than 80. It doesn't actually modify the Host: header at all, unless explicitly configured to, using a mechanism like reqirep ^Host: ... or http-request set-header host ....
You can confirm this with a packet capture. You should find that whatever HTTP_HOST is, the value is necessarily being generated internally on the back-end system itself, because it's not coming from HAProxy.
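If you ever did want HAProxy to rewrite the Host: header itself, a minimal sketch (the back-end address is made up) would look something like this:
backend b1_https
mode http
# Explicitly rewrite the Host header before the request is sent to the back-end
http-request set-header Host request.domain.com
server bkend_server 192.0.2.10:81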
I'm using HAProxy for load balancing traffic. It works perfectly for HTTP requests, but on HTTPS it shows "this webpage has a redirect loop". How do I solve this looping?
Your httpsclient backend isn't being used in your current config (all traffic is going to the http backend because it's set as the default and you have no rules that would map traffic to a specific backend). Try adding rules like these, maybe:
acl http_80 dst_port 80
acl https_443 dst_port 443
use_backend httpclient if http_80
use_backend httpsclient if https_443
(add these at the bottom of your frontend params, just above your first backend)
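For context, a rough sketch of where those lines would sit (the frontend name and binds are assumptions; keep your existing settings):
frontend main
bind *:80
bind *:443
# ... your existing frontend settings ...
acl http_80 dst_port 80
acl https_443 dst_port 443
use_backend httpclient if http_80
use_backend httpsclient if https_443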
I'm creating an application where the frontend is HAProxy and nginx.
Do you know a way to get the client IP address with HAProxy when the client connects from behind an anonymous proxy?
My current HAProxy configuration uses "option forwardfor", but I get the anonymous proxy's IP instead of the real client IP in the nginx logs (using the $http_x_forwarded_for variable):
frontend general_frontend
bind 111.111.111.111:80
default_backend nginx_farm_backend
backend nginx_farm_backend
balance roundrobin
option abortonclose
option forwardfor
http-check disable-on-404
http-check expect string nginx
option httpchk GET /index.html HTTP/1.0
# - Nodes
server nginx-server-1 222.222.222.222:8080 check on-error mark-down observe layer7 error-limit 1
server nginx-server-2 333.333.333.333:8080 check on-error mark-down observe layer7 error-limit 1
Thank you
Are you using the $remote_addr variable in your nginx log format?
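As a quick check, a hedged nginx sketch (the log format name is made up; it goes in the http block) that logs both variables side by side, so you can compare what nginx sees directly with what HAProxy forwards:
# Log both the direct peer address and the forwarded client address
log_format haproxy_debug '$remote_addr - $http_x_forwarded_for - "$request"';
access_log /var/log/nginx/access.log haproxy_debug;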