I have Kong acting as a reverse proxy in front of an ingress controller that terminates TLS upstream. I want Kong to do TLS passthrough rather than terminate TLS at the reverse proxy. I followed these steps and can confirm that Kong passes the traffic through without terminating TLS.
The problem I am having is that the client sometimes accesses my server not by hostname (e.g. myhostname.com) but by IP address directly. Since TLS passthrough needs an SNI to match the request against, when the client connects by IP there is no matching hostname, so the reverse proxy fails to match the route and blocks the request. Is there a way to let the reverse proxy pass through clients that make requests using an IP?
services:
- host: 192.168.100.1
  protocol: tcp
  port: 443
  name: my_service
  routes:
  - name: my_route
    protocols:
    - tls_passthrough
    snis:
    - myhostname.com
I want to control the outgoing port of my nginx server, something like port forwarding with iptables, but in nginx.
Request:
Client via (ip:port) sends to nginx (ip:80).
Nginx via (nginx ip:client port) sends to server B (ip:80).
Response:
Server B via (ip:80) sends to nginx (ip:client port).
Nginx via (ip:80) sends to client (ip:port).
I have a server running nginx listening on port 80. It receives requests from clients and, according to the location in the request, forwards them to different proxy servers.
Problem:
I need nginx to connect to my proxy server using the same source port that the client used to connect to nginx.
For example:
The client connects to port 80 of my nginx server from port 1000, and I want nginx to connect to the listening port of my other server from port 1000 as well.
The forwarded IP does not matter.
The protocol is TCP/HTTP. A sketch of what this might look like is shown below.
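A minimal sketch of what I have in mind, assuming nginx 1.11.2+ (where proxy_bind accepts an optional port and variables); the upstream address 10.0.0.2 is a placeholder:

http {
    server {
        listen 80;

        location /app/ {
            # $remote_port is the client's source port; reuse it for the
            # outgoing connection (only one concurrent connection per
            # address:port pair is then possible)
            proxy_bind $server_addr:$remote_port;
            proxy_pass http://10.0.0.2:80;
        }
    }
}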
To keep things simple, I think it's better to just check the TCP port for liveness and readiness in Kubernetes, as it doesn't require knowledge of a health-check endpoint (HTTP path), just the port number. Any guidance on the disadvantages of relying only on the TCP port for the service health check is greatly appreciated; please assume that the pods are not proxies for some other service and all the business logic is in the pods themselves.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
In my experience HTTP is chosen over TCP when you have a reverse-proxy sidecar in front of your app in the same pod, e.g. nginx. In this case, nginx will always accept TCP even when the app is not ready yet. Thus you'd want HTTP.
Otherwise, consider whether:
- this is an app server listening directly on a port
- you know it starts listening only when fully loaded
- you don't want any additional logic inside /health (like checking a db connection)
If all of the above is true, just use TCP.
TIP: You don't even need to know the port number for TCP probes; you can use a named port: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#use-a-named-port
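For illustration, a minimal sketch of TCP liveness/readiness probes against a named port (the image and port number are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # assumed image
    ports:
    - name: app-port                      # named port referenced by the probes
      containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: app-port                    # passes once the port accepts connections
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: app-port
      initialDelaySeconds: 15
      periodSeconds: 20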
I have a Kubernetes cluster that exposes PostgreSQL on port 5432 via this information, and it works like a charm. I'm currently testing this on my machine, and it works on db.x.io (x being my domain). But it also works on localhost. That seems fair, as it only creates a binding of port 5432 to my service.
How can I also filter on the subdomain, so it's only accessible via db.x.io?
There is not much that the TCP protocol offers in terms of filtering. This is because TCP uses only the IP:port combination; there are no headers like in HTTP. Your subdomain is resolved by DNS to an IP address before the connection is made.
According to the Nginx documentation you can do the following:
- Restricting Access by IP Address
- Limiting the Number of TCP Connections
- Limiting the Bandwidth
You can try to limit access from localhost by adding deny 127.0.0.1 to the nginx configuration; however, it will most likely break PostgreSQL instead, so it is a risky suggestion.
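Outside Kubernetes, a minimal sketch of that idea in a plain nginx stream proxy might look like this (the upstream address is an assumption):

stream {
    server {
        listen 5432;

        # reject connections coming from the proxy host itself
        deny 127.0.0.1;
        allow all;

        # forward everything else to the PostgreSQL service (assumed address)
        proxy_pass 10.0.0.5:5432;
    }
}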
For a Kubernetes ingress object it would be:
metadata:
  annotations:
    nginx.org/server-snippets: |
      deny 127.0.0.1;
Based on the Nginx documentation.
I have HAProxy configured with HTTPS termination using http mode:
frontend apache-https
    #mode tcp
    bind 192.143.56.150:443 ssl crt /etc/ssl/private/rabbit.pem
    option http-server-close   # needed for forwardfor
    option forwardfor          # forward IP address of client
    reqadd X-Forwarded-Proto:\ https
    default_backend apache-http
    acl fx_static hdr(host) -i static.rabbit.fx-com
    use_backend nginx-cluster if fx_static
Now I want to move the static. domain to HTTP/2. The problem is that I would need to switch to tcp mode in order to do that, and at the same time I would lose the http-mode ACL feature.
How is it possible to configure HAProxy, on the same IP and port in tcp mode, to use two different backends?
I would like to use the following line together with tcp mode, just for the static subdomain:
use_backend nginx-cluster-http2 if { ssl_fc_alpn -i h2 }
The solution below eliminates http mode, and therefore the injection of forwarding headers, in favor of the PROXY protocol via the send-proxy directive. The backend server must be able to accept the PROXY protocol; both Apache and Nginx support it.
The host match is performed using SNI rather than the Host header.
An HTTP/2 request for the static domain will be forwarded to the HTTP/2 backend server, in the example listening on 127.0.0.1:8888, where a clear-text HTTP/2 server must be listening.
All other requests will be forwarded to 127.0.0.1:9999, where a clear-text HTTP/1.1 server must be listening.
frontend fe
    mode tcp
    bind *:443 ssl no-sslv3 crt /etc/ssl/domain.pem
    # TLS is terminated at the bind, so use ssl_fc_sni for the SNI match
    acl static_domain ssl_fc_sni -i static.domain.com
    acl http2 ssl_fc_alpn -i h2
    use_backend be_static if static_domain http2
    default_backend be_non_static

backend be_static
    mode tcp
    server static 127.0.0.1:8888 send-proxy

backend be_non_static
    mode tcp
    server non-static 127.0.0.1:9999 send-proxy
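For reference, a minimal sketch of an nginx backend accepting the PROXY protocol on the clear-text HTTP/2 listener (ports, document root, and the trusted proxy address are assumptions):

http {
    server {
        # clear-text HTTP/2 listener for the static domain, PROXY protocol enabled
        listen 127.0.0.1:8888 http2 proxy_protocol;

        # recover the real client address from the PROXY protocol header
        set_real_ip_from 127.0.0.1;
        real_ip_header proxy_protocol;

        root /var/www/static;   # assumed document root
    }
}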
If you really need the forwarding headers, for example because your application relies on them, you can use the solution below:
frontend fe
    mode tcp
    bind *:443 ssl no-sslv3 crt /etc/ssl/domain.pem
    acl static_domain ssl_fc_sni -i static.domain.com
    acl http2 ssl_fc_alpn -i h2
    use_backend be_static if static_domain http2
    default_backend be_non_static

backend be_static
    mode tcp
    server static 127.0.0.1:8888 send-proxy

backend be_non_static
    mode tcp
    server local-fe 127.0.0.1:7777 send-proxy

frontend fe_non_static
    mode http
    bind 127.0.0.1:7777 accept-proxy
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend be_other

backend be_other
    mode tcp
    server non-static 127.0.0.1:9999
For this second solution the idea is that HTTP/2 requests for the static domain work as before, while all other requests are first directed to a private, "local" frontend listening on port 7777 and working in http mode, where you can inject the forwarding headers.
From that private, "local" frontend you forward to the backend server as before; only this time you don't need the send-proxy directive.
Given the wide support for the PROXY protocol in virtually every server, I would recommend not using forwarding headers unless really necessary.
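If the backend is Apache, a minimal sketch of accepting the PROXY protocol (available since Apache 2.4.31 via mod_remoteip; the listener address is an assumption):

# Load mod_remoteip and accept the PROXY protocol on the backend listener
LoadModule remoteip_module modules/mod_remoteip.so

Listen 127.0.0.1:9999
<VirtualHost 127.0.0.1:9999>
    # Parse the PROXY protocol header sent by HAProxy's send-proxy
    RemoteIPProxyProtocol On
</VirtualHost>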
I'm having a strange issue. I have an Nginx web server running with valid SSL certs on it. On the LAN I can access it at http://192.xxx.x.xxx and https://192.xxx.x.xxx with no issue, but from outside my network I can only access https://example.com; the plain http:// connection times out and can't connect.
Here are my router port forwards:
(name - remote port - lan ip - local port - protocol)
(Webserver80 - 80 - 192.xxx.x.xxx - 80 - TCP)
(Webserver443 - 443 - 192.xxx.x.xxx - 443 - TCP)
I don't get why I can see HTTP on the LAN but not the WAN, while HTTPS works fine.
I did some open-port checks and both are visible from the internet:
Success: I can see your service on xx.xx.xx.xx on port (80)
Your ISP is not blocking port 80
Success: I can see your service on xx.xx.xx.xx on port (443)
Your ISP is not blocking port 443
I think I fixed it.
I deleted the port entry in the firewall port forward, rebooted the router, and re-applied the port 80 forward... Working now. Odd.