I have a Kubernetes cluster that exposes PostgreSQL on port 5432 via my current setup, and this works like a charm. I'm currently testing this on my machine, and it works on db.x.io (x being my domain). But it also works on localhost. This seems fair, as it only creates a binding of port 5432 to my service.
How can I also filter on the subdomain, so it's only accessible via db.x.io?
There is not much that the TCP protocol offers in terms of filtering. TCP works only with IP:port combinations; there are no headers like in HTTP. Your subdomain is resolved by DNS to an IP address before the connection is made.
According to the Nginx documentation, you can do the following:
Restricting Access by IP Address
Limiting the Number of TCP Connections
Limiting the Bandwidth
You can try to limit access from localhost by adding deny 127.0.0.1 to the nginx configuration; however, this will most likely break PostgreSQL instead, so it is a risky suggestion.
For kubernetes ingress object it would be:
metadata:
  annotations:
    nginx.org/server-snippets: |
      deny 127.0.0.1;
Based on Nginx documentation.
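For reference, here is a minimal sketch of what the equivalent raw nginx stream configuration might look like. The upstream name postgres_backend and its address are assumptions, not taken from the question:

stream {
    upstream postgres_backend {
        server 10.0.0.5:5432;   # hypothetical PostgreSQL service address
    }
    server {
        listen 5432;
        deny 127.0.0.1;         # risky: may break PostgreSQL, as noted above
        allow all;
        proxy_pass postgres_backend;
    }
}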
Related
I set up a WireGuard instance in a Docker container and use Nginx Proxy Manager for all the reverse proxy settings. Now I want the website to be accessible only when I am connected to the VPN.
I tried adding localhost as the forward address and setting the "only allow" rule to the local server IP, but it doesn't work and just displays a "can't connect to server" message in my browser.
Add this to a server block (or a location or http block) in your nginx configuration:
allow IP_ADDRESS_OR_NETWORK; # allow only connections from Wireguard VPN network
deny all; # block the rest of the world
The allowed network has to match your specific WireGuard VPN network, and all peer IP addresses that should have access must be part of that network range. Depending on your NAT settings, verify the actual IP address or network by checking the access log: tail -f /var/log/nginx/access.log
Be sure to reload your nginx config to apply changes: service nginx reload
See also http://nginx.org/en/docs/http/ngx_http_access_module.html for usage hints on the HTTP access module.
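As a sketch, a complete server block might look like the following. The server name, the WireGuard subnet 10.8.0.0/24, and the backend address are all assumptions; substitute your actual values:

server {
    listen 80;
    server_name myservice.example.com;     # hypothetical site name

    allow 10.8.0.0/24;   # assumed WireGuard VPN network; adjust to your wg0 subnet
    deny all;            # block the rest of the world

    location / {
        proxy_pass http://127.0.0.1:3000;  # hypothetical backend application
    }
}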
If one has to install docker, docker-compose, and kubectl on an AWS Ubuntu instance, which inbound rules should be added to the instance's security group?
For SSHing into the server, you will need TCP port 22 open for your public/private IP. If you are accessing the server over the Internet and your public IP changes as per your ISP, you can allow 0.0.0.0/0 for TCP port 22 in the ingress rules of the security group.
Further, for installing packages on the server, you need Internet connectivity from the server itself, so you need TCP ports opened for the Internet in the egress rules of the security group; mostly you will need to allow TCP port 443 for HTTPS connections (or TCP port 80 for HTTP, depending on how and from where you are installing the packages). Note that a security group's default egress rule already allows all outbound traffic, so this only matters if egress has been restricted.
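As a sketch, the SSH ingress rule could be added with the AWS CLI like this (the security group ID is hypothetical; narrow the CIDR to your own IP if it is static):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0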
I have services with ClusterIP in Kubernetes and am using nginx (https://github.com/helm/charts/tree/master/stable/nginx-ingress) to expose these services to the Internet. When I try to get the client IP address in the application, I get the cluster node's IP instead. How can I retrieve the actual client IP?
I looked into the "externalTrafficPolicy": "Local" setting on the service, but for that the service type must be LoadBalancer.
I also tried updating the ingress annotations with:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,X-Forwarded-For,csrf-token"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
But it's still not working. Please advise!
This is unfortunately not possible today. Please see https://github.com/kubernetes/kubernetes/issues/67202 and https://github.com/kubernetes/kubernetes/issues/69811 for more discussion around this.
If you want to get the client IP address, you'll need to use NodePort or LoadBalancer types.
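For reference, a minimal sketch of a LoadBalancer service that preserves the client source IP might look like this (the names and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-app                   # hypothetical service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080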
I have a Kubernetes 1.13 cluster (a single node at the moment) set up on bare metal with kubeadm. The node has 2 network interfaces connected to it for testing purposes. Ideally, in the future, one interface will face the intranet and the other the public network. By then the number of nodes will also be larger than one.
For the intranet ingress I'm using HAProxy's helm chart (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress), set up with this configuration:
rbac:
  create: true
serviceAccount:
  create: true
controller:
  ingressClass: "intranet-ingress"
  metrics:
    enabled: true
  stats:
    enabled: true
    service:
      type: LoadBalancer
      externalIPs:
        - 10.X.X.X # IP of one of the network interfaces
  service:
    externalIPs:
      - 10.X.X.X # IP of the same interface
The traffic then reaches haproxy as follows:
1. Client's browser, workstation has an IP from 172.26.X.X range
--local network, no NAT -->
2. Kubernetes server, port 443 of HAProxy's load balancer service
--magic done by kube-proxy, possibly NAT (which shouldn't have been here)-->
3. HAProxy's ingress controller pod
The HAProxy access log shows a source IP of 10.32.0.1. This is an IP from the Kubernetes network layer; the Kubernetes pod CIDR is 10.32.0.0/12. I, however, need the access log to show the actual source IP of the connection.
I've tried manually editing the load balancer service created by HAProxy and setting externalTrafficPolicy: Local. That did not help.
How can I get the source IP of the client in this configuration?
I've fixed the problem; it turns out there were a couple of issues in my original configuration.
First, I didn't mention my network provider: I am using weave-net. It turns out that even though the Kubernetes documentation states that adding externalTrafficPolicy: Local to the load balancer service is enough to preserve the source IP, this doesn't work with weave-net unless you enable it specifically. On the version of weave-net I'm using (2.5.1), you have to add the environment variable NO_MASQ_LOCAL=1 to the weave-net DaemonSet. For more details, refer to their documentation.
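For example, assuming the default weave-net DaemonSet in the kube-system namespace with a container named weave (both are assumptions about your deployment), the variable could be set like this:

kubectl set env daemonset/weave-net -n kube-system -c weave NO_MASQ_LOCAL=1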
Honestly, after that, my memory is a bit fuzzy, but I think what you get at this stage is a cluster where:
NodePort service: does not preserve the source IP. Somehow this works on AWS, but on bare metal it is not supported by Kubernetes itself; weave-net is not at fault.
LoadBalancer service on node X bound to an IP of another node Y: does not preserve the source IP, as traffic has to be routed inside the Kubernetes network.
LoadBalancer service on node X bound to that same node's IP X: I don't remember clearly, but I think this works.
Second, Kubernetes out of the box does not support true LoadBalancer services. If you stick with the "standard" setup without anything additional, you'll have to restrict your pods to run only on the cluster nodes that have the LB IP addresses bound to them. This makes managing a cluster a pain in the ass, as you become very dependent on the specific arrangement of components across the nodes. You also lose redundancy.
To address the second issue, you have to configure a load balancer implementation for bare metal setups. I personally used MetalLB. With it configured, you give the load balancer service a list of IP addresses that are virtual, in the sense that they are not attached to a particular node. Every time Kubernetes launches a pod that accepts traffic from the LB service, MetalLB attaches one of the virtual IP addresses to that same node. So the LB IP address always moves around with the pod, and you never have to route external traffic through the Kubernetes network. As a result, you get 100% source IP preservation.
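For illustration, a minimal MetalLB layer 2 configuration of that era used a ConfigMap like the following sketch (the address range is hypothetical and must come from your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.100-10.0.0.110   # hypothetical pool of virtual IPs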
I installed ingress-nginx in a cluster. I tried exposing the service with the type: NodePort option, but this only allows a port range between 30000-32767 (AFAIK)... I need to expose the service on port 80 for HTTP and 443 for TLS, so that I can point A records for the domains directly at the service. Does anyone know how this can be done?
I tried type: LoadBalancer before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress controller and expose that directly on 80/443.
If you want one IP for port 80 from a service, you could use the externalIPs field in the service config YAML. You can find how to write the YAML here:
Kubernetes External IP
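As a sketch, a service spec using externalIPs might look like this (the names are hypothetical, and the IP must be an address actually routed to one of your nodes):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # hypothetical service name
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  externalIPs:
  - 203.0.113.10                 # hypothetical public IP bound to a node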
But if your use case is really just getting the ingress controller up and running, the service does not need to be exposed externally.
If you are on bare metal, change your ingress-controller service type to NodePort and add a reverse proxy to route traffic to your ingress-controller service on the selected NodePort.
As @Pramod V answered, if you use externalIPs in the ingress-controller service, you lose the real remote address in your endpoints.
A more complete answer can be found here.