I have a compute engine instance with persistent file storage that I need outside of my GKE cluster.
I would like to open a specific TCP port on the Compute Engine instance so that only nodes within the GKE cluster can access it.
The Compute Engine instance and GKE cluster are in the same GCP project, network, and subnet.
The GKE cluster is not private and I have an ingress exposing the only service I want exposed to the internet.
I've tried creating firewall rules of three different types that do not work:
1. By shared service account on both the Compute Engine instance and the K8s nodes.
2. By network tags (yes, I am using the network tags exactly as specified on the VM instance page).
3. By IP address, where I use the network tag for the target and the private (RFC 1918) ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for the source.
The only thing that works is the last option, but only when using 0.0.0.0/0 as the source IP range.
I've looked at a few related questions such as:
Google App Engine communicate with Compute Engine over internal network
Can I launch Google Container Engine (GKE) in Private GCP network Subnet?
But I'm not looking to make my GKE cluster private and I have tried to create the firewall rules using network tags to no avail.
What am I missing or is this not possible?
Not sure how I missed this; I'm fairly certain I tried something similar a couple of months back but must have had something else misconfigured.
On the GKE cluster details page, there is a pod address range. Setting the firewall rule's source range to the GKE pod address range gave me the desired outcome.
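For reference, the resulting rule looks roughly like this (the rule name, network, target tag, port, and pod range below are placeholders for my actual values):

```
gcloud compute firewall-rules create allow-gke-pods-to-storage \
  --network=my-network \
  --target-tags=storage-vm \
  --source-ranges=10.52.0.0/14 \
  --allow=tcp:2049
```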
I have my web app running in a GKE cluster, and I am trying to create Redis and Mongo deployments for databases on Compute Engine VMs in the same GCP project.
I would like only my GKE cluster to have access to Redis and Mongo via the internal/private network, so that the DBs are shielded from the public internet. What would be a preferred solution? I read one could use VPC peering, or a shared VPC, or deploy GKE and the DBs in the same VPC, but I am not sure what to choose or whether there is a better way. I also read one should be aware of overlapping IP ranges.
Any tips/help would be greatly appreciated, thanks.
You need to create a firewall rule to allow connections from GKE to your Compute Engine VMs.
Use this command to get the source IP range (the cluster CIDR) for your cluster:
ip_range = `gcloud container clusters describe #{cluster_name} --format="get(clusterIpv4Cidr)" --region="us-central1" --project=#{project_id}`
Then use the command below to create the firewall rule.
`gcloud compute firewall-rules create "#{cluster_name}-to-all-vms-on-network" --network=#{network} --source-ranges=#{ip_range} --allow=tcp,udp,icmp,esp,ah,sctp --project=#{project_id}`
I am assuming you are talking about self-hosting Redis and Mongo on Compute Engine VMs. You can create the DB VMs in the same VPC as the GKE cluster but without public IP addresses. This ensures that these VMs are not accessible from the internet. Create firewall rules to allow traffic from the cluster's pod IP ranges to the DB VMs. See this answer for details on the firewall rules.
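For example, a DB VM without an external IP can be created roughly like this (the instance name, zone, and machine type are placeholders; `--no-address` is what keeps it off the public internet):

```
gcloud compute instances create mongo-vm \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --network=#{network} \
  --no-address \
  --project=#{project_id}
```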
I'm looking for some help in understanding how external IPs are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).

It looks like I can assign arbitrary external IPs to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?

Once an external IP is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:

"Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints."

Which suggests that if I (a) assign an external IP to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
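Concretely, the two steps I mean look roughly like this (the service name, address, and interface are just example values):

```
# (a) assign an external IP to an existing service
kubectl patch service my-svc -p '{"spec":{"externalIPs":["192.0.2.10"]}}'

# (b) configure the same address on a node interface
ip addr add 192.0.2.10/32 dev eth0
```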
Poking around the nodes after setting up a service with an external IP, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.

I'm having a hard time finding docs that explain how all this is supposed to operate.
Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not affect the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created by:
- opening the Google Cloud Console
- going to VPC Network -> External IP addresses
- clicking "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work); the gcloud equivalent is sketched below
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either run a custom NAT solution or use Google Cloud NAT with a private cluster.
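As a rough sketch, Cloud NAT for a region is a Cloud Router plus a NAT configuration on it (the names, network, and region below are placeholders):

```
# create a Cloud Router in the cluster's network and region
gcloud compute routers create nat-router \
  --network=my-network \
  --region=us-central1

# add a NAT configuration to the router
gcloud compute routers nats create nat-config \
  --router=nat-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

If you need the egress IP to stay fixed, you can reserve a static address and pass it with `--nat-external-ip-pool` instead of letting Cloud NAT auto-allocate one.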
GKE uses the kubenet network plugin for setting up container interfaces and configures routes in the VPC so that containers can reach each other on different hosts.
Wikipedia defines an overlay as a computer network that is built on top of another network.
Should GKE's network model be considered an overlay network? It is built on top of another network in the sense that it relies on the connectivity between the nodes in the cluster to function properly, but the Pod IPs are natively routable within the VPC, as the routes tell the network which node to go to in order to find a particular Pod.
Both VPC-native and non-VPC-native GKE clusters use GCP virtual networking. It is not strictly an overlay network by definition. An overlay network would be one that is isolated to just the GKE cluster.
VPC-native clusters work like this:
Each node VM is given a primary internal address and two alias IP ranges. One alias IP range is for pods and the other is for services.
The GCP subnet used by the cluster must have at least two secondary IP ranges (one for the pod alias IP range on the node VMs and the other for the services alias IP range on the node VMs).
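You can see this on any node VM, for example (the node name and zone are placeholders):

```
gcloud compute instances describe gke-my-cluster-default-pool-1a2b3c4d-node1 \
  --zone=us-central1-a \
  --format="yaml(networkInterfaces)"
```

The `aliasIpRanges` field under the interface shows the per-node pod range.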
Non-VPC-native clusters:
GCP creates custom static routes whose destinations match the pod IP space and the services IP space. The next hops of these routes are node VMs by name, so there is instance-based routing that happens as a "next step" within each VM.
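Those routes are visible in the VPC; something like the following should list them (the `gke-` name filter is an assumption about how GKE names the routes it creates):

```
gcloud compute routes list \
  --filter="name~gke-" \
  --format="table(name, destRange, nextHopInstance)"
```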
I could see where some might consider this to be an overlay network. I don't believe this is the best definition, because the pod and service IPs are addressable from other VMs in the network, outside of the GKE cluster.
For a deeper dive on GCP’s network infrastructure, GCP’s network virtualization whitepaper can be found here.
In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to get the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance, you can set up a load balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster; basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to connect to the Pods in this way: the services and the containers run in a separate software-based network, and they will not be reachable unless properly exposed, for example with a NodePort.
On the other hand, if you are interested in making Pods running on two different Kubernetes nodes reachable through a single entry point, you have to set up Kubernetes ingress and load balancing to expose your services. These resources are also based on the Google Cloud Platform load balancer components, but when created they also trigger the required changes to the Kubernetes network.
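As a minimal sketch from the Kubernetes side (the deployment name and ports are placeholders), exposing a workload behind one external IP can be as simple as:

```
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
kubectl get service my-app    # the EXTERNAL-IP column shows the single shared IP once provisioned
```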