I'm looking for some guidance on implementing IP whitelisting. My EKS cluster receives traffic through a WAF (web application firewall), and I'd like to whitelist the WAF IPs and block all traffic that doesn't come from the WAF.
The setup I have is:
Client >> WAF >> AWS LB (network LB) >> EKS >> ingress controller (nginx) >> pods
Right now the WAF can be bypassed by sending requests directly to the network LB.
Any help regarding this would be much appreciated.
Thanks
Since you are using a network ELB, your nodes should already have a security group rule that allows access from client IPs to your cluster (probably 0.0.0.0/0). You need to change that rule and limit it to your WAF IPs.
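As a rough sketch, assuming the node security group is sg-0123456789abcdef0, the listener port is 443, and 198.51.100.0/24 stands in for your WAF provider's egress range (all three values are placeholders you'd replace with your own), the change would look something like this with the AWS CLI:

    # Remove the open-to-the-world rule from the node security group
    aws ec2 revoke-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 0.0.0.0/0

    # Allow only the WAF's egress range instead
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 198.51.100.0/24

This works because an NLB with instance targets preserves the client source IP, so the nodes' security group sees the WAF's address rather than the load balancer's.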
I'm looking for some help in understanding how external IPs are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?
Once an external IP is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:

    Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints.

Which suggests that if I (a) assign an external IP to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
Poking around the nodes after setting up a service with an external IP, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is supposed to operate.
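For concreteness, this is the kind of service I'm testing with (a minimal sketch; 192.0.2.10 is a placeholder for the address I configured on a node interface):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app
      ports:
      - port: 8080        # service port that should be reachable on the external IP
        targetPort: 8080
      externalIPs:
      - 192.0.2.10        # placeholder: address added to a node interface by hand

My (possibly wrong) understanding is that with kube-proxy in iptables mode I'd expect to find a KUBE-SERVICES rule matching 192.0.2.10 on port 8080, which is what I went looking for on the nodes.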
I have a k8s cluster with three nodes (Node A, Node B, Node C) and deployed a simple nginx with replica count 4, exposed through a k8s service.
All my nginx pods are now up, each with its own pod IP, as well as a service IP.
I need to monitor all the ingress and egress traffic of my nginx pods.
I am planning to create another pod with a simple tcpdump utility to log the network traffic, but how can I redirect all the other pods' traffic into the pod where tcpdump is running?
Thanks in advance for suggestions.
I would suggest using a service mesh such as Linkerd or Istio for monitoring network traffic.
A service mesh deploys a proxy as a sidecar alongside your pod. Since all network traffic goes through this proxy, it can capture metrics and store them in Prometheus, and Grafana can then be used as a dashboard.
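For example, with Linkerd (assuming the CLI is installed and the control plane has already been deployed with linkerd install), meshing your existing deployment and watching its traffic is just:

    # Add the Linkerd sidecar proxy to the existing nginx deployment
    kubectl get deploy nginx -o yaml | linkerd inject - | kubectl apply -f -

    # Live request rate / success rate / latency for the meshed pods
    # (on newer releases this is `linkerd viz stat deploy/nginx`)
    linkerd stat deploy/nginx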
We're using Kubernetes to implement a PaaS service, and users can SSH into the containers. Because the containers run inside the Kubernetes network, users can access services like kube-apiserver.
We want to restrict the outbound traffic of the users' pods, but it seems that Kubernetes Network Policy only covers inbound traffic at the moment.
Is that possible to do? Or should we set up iptables rules on the compute nodes?
Outbound (egress) traffic has been supported by Network Policies since v1.8; you should check again and see whether your use case is fully supported.
https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic
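From that page, a default deny-all-egress policy looks like this; applied in the namespace where the users' pods run, it blocks all outbound traffic unless another policy explicitly allows it:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-egress
    spec:
      podSelector: {}      # selects every pod in the namespace
      policyTypes:
      - Egress             # no egress rules listed, so all egress is denied

Note that this only takes effect if your CNI plugin (Calico, Cilium, etc.) actually enforces NetworkPolicy.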
This may be an extremely simple question, but I can't seem to figure out how to make my Kubernetes cluster accessible only from my office IP.
In my firewall rules I see that the rules for the GKE nodes allow 2 internal IPs and my office IP.
I also see a firewall rule for an external IP range that doesn't appear among my external IP addresses. That IP address also doesn't appear in my load balancer IPs...
Finally, I have a load-balancing firewall rule that allows the external IP ranges from the load balancing tab, which correspond to my Kubernetes ingress rules.
Long story short, how do I make my Kubernetes cluster accessible only from my office IP?
This isn't currently possible in Google Container Engine.
You don't see any firewall rules for your cluster control plane because it isn't running inside your cloud project. Therefore the endpoint for your cluster won't show up in your networking views and you cannot add firewall rules to restrict access to it.
This is a shortcoming that the team is aware of and we hope to be able to provide a solution for you in the future.
So basically.
I have one external IP.
I am running a few web servers on my internal network.
All web servers are configured behind NAT on different ports (80, 81, 82, ...).
My domain's DNS points to my external IP, and NAT forwards it to my first web server.
So far, when I open my domain, say example.com, it shows my first web server's page.
When I open example.com:81, it opens the second server, etc.
What I am trying to achieve is a way to reach my other web servers on different sub-domains without specifying a port.
So I would like to have something like:
second.example.com -> example.com:81
third.example.com -> example.com:82
I am using an SRV record for my TeamSpeak3 server: my TS3 runs on port 2222, the SRV record translates ts3.example.com to example.com:2222, and it works like a charm.
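For reference, that record looks roughly like this in zone-file syntax (the TTL, priority and weight values here are illustrative):

    _ts3._udp.example.com. 3600 IN SRV 0 5 2222 example.com.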
Can those sub-domains be configured via SRV records in DNS?
If not, is there any other way?
Thanks
Since you are behind a NAT, all your web servers share the same endpoint. SRV records won't help here: unlike TeamSpeak, browsers don't look up SRV records for HTTP. Instead, you need to set up name-based virtual hosts: a single server receives every incoming request and routes it to the appropriate backend based on the Host header in the HTTP request.
Apache makes this pretty easy through its implementation of name-based virtual hosts.
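As a sketch, assuming Apache runs on the machine that currently receives port 80, mod_proxy and mod_proxy_http are enabled, and 192.168.0.11 / 192.168.0.12 stand in for your internal servers (all of these are placeholders), the virtual hosts could look like:

    # second.example.com -> internal server on port 81
    <VirtualHost *:80>
        ServerName second.example.com
        ProxyPreserveHost On
        ProxyPass        / http://192.168.0.11:81/
        ProxyPassReverse / http://192.168.0.11:81/
    </VirtualHost>

    # third.example.com -> internal server on port 82
    <VirtualHost *:80>
        ServerName third.example.com
        ProxyPreserveHost On
        ProxyPass        / http://192.168.0.12:82/
        ProxyPassReverse / http://192.168.0.12:82/
    </VirtualHost>

On the DNS side you then point second.example.com and third.example.com at the same external IP (plain A or CNAME records, no SRV needed); the proxy tells the backends apart by the Host header.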
IIS has a solution as well, as this Stack Overflow answer points out.
Good luck!