On-premise to GCloud Compute Engine VPN using on-premise IP range

I have a compute engine with a service on it. This compute engine has an internal IP (10.208.0.X) and an external IP, and I can reach the service through the external IP.
Now I want to create a VPN from on-premise to GCloud, but when I call the service from on-premise I want to use an IP from the on-premise range (172.30.X.X) and have it routed to the compute engine.
I have configured the VPN between on-premise and GCloud following this link:
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
Created a VPC network and subnet using range 172.30.X.X.
Created a Classic VPN (IKEv2, policy-based routing) using that VPC network and subnet.
Attached a network tag to the compute engine and created a firewall rule allowing incoming traffic from on-premise.
The VPN gateway and tunnel are up and running, but I can't reach the compute engine using either the on-premise range (172.30.x.x) or the internal IP (10.208.0.X).
Any help would be appreciated.
Regards
SOLVED:
We finally solved it as follows.
Created a VPC network and subnet using range 172.30.X.X (OUR-NET).
Created a compute engine (CE) with OUR-NET on its primary interface (eth0/nic0) and NO external IP address.
Created a Classic VPN (IKEv2, policy-based routing) using that VPC network and subnet: https://cloud.google.com/vpn/docs/how-to/creating-static-vpns. At this point we were able to connect from our on-premise hosts to the CE using the OUR-NET range.
Because the CE is in OUR-NET and has no external IP address, we needed to enable Identity-Aware Proxy (IAP) for access through "gcloud compute ssh":
https://cloud.google.com/iap/docs/using-tcp-forwarding
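As a rough gcloud sketch of those steps (network name, region, zone, and CIDR below are placeholders based on the description above, not the exact commands we ran):

```
# Hypothetical names/region; 172.30.0.0/16 mirrors the OUR-NET range described above.
gcloud compute networks create our-net --subnet-mode=custom

gcloud compute networks subnets create our-subnet \
  --network=our-net --range=172.30.0.0/16 --region=europe-west1

# VM with its primary interface (nic0) in OUR-NET and no external IP.
gcloud compute instances create our-ce \
  --zone=europe-west1-b --network=our-net --subnet=our-subnet --no-address

# IAP TCP forwarding needs a firewall rule allowing SSH from Google's IAP range.
gcloud compute firewall-rules create allow-iap-ssh \
  --network=our-net --direction=INGRESS --allow=tcp:22 --source-ranges=35.235.240.0/20

# With no external IP, SSH goes through the IAP tunnel.
gcloud compute ssh our-ce --zone=europe-west1-b --tunnel-through-iap
```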
thanks for the help.

Related

GKE cluster private network access to Compute Engine VMs

I have my web app running in a GKE cluster and I am trying to create Redis and Mongo deployments for databases on compute engines/VMs in the same GCP project.
I would like only my GKE cluster to have access to Redis and Mongo via the internal/private network, so that the DBs are shielded from the public internet. What would be the preferred solution? I read one could use VPC peering or a shared VPC, or deploy GKE and the DBs in the same VPC, but I am not sure what to choose or if there is a better way. I read one should also be aware of IP overlapping.
Any tips/help would be greatly appreciated, thanks.
You need to create a firewall rule to allow connections from GKE to your Compute Engine VMs.
Use this command to get the source IP range for your cluster:
ip_range = `gcloud container clusters describe #{cluster_name} --format="get(clusterIpv4Cidr)" --region="us-central1" --project=#{project_id}`
Then use the command below to create the firewall rule:
`gcloud compute firewall-rules create "#{cluster_name}-to-all-vms-on-network" --network=#{network} --source-ranges=#{ip_range} --allow=tcp,udp,icmp,esp,ah,sctp --project=#{project_id}`
I am assuming you are talking about self-hosting Redis and Mongo on Compute Engine VMs. You can create the DB VMs in the same VPC as the GKE cluster but without public IP addresses. This ensures that the VMs are not accessible from the internet. Create firewall rules to allow traffic from the cluster's Pod IP ranges to the DB VMs; see the answer above for details on the firewall rules, and the sketch below for a more restrictive variant.
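A hedged sketch of such a rule, assuming hypothetical network/tag names and a pod range of 10.4.0.0/14, opening only the default Redis and MongoDB ports:

```
# Pod range comes from: gcloud container clusters describe ... --format="get(clusterIpv4Cidr)"
gcloud compute firewall-rules create allow-gke-pods-to-dbs \
  --network=my-vpc \
  --direction=INGRESS \
  --source-ranges=10.4.0.0/14 \
  --target-tags=db-vm \
  --allow=tcp:6379,tcp:27017 \
  --project=my-project
```

Tag the DB VMs with `db-vm` so only they receive the rule, rather than every VM on the network.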

How to connect to Community Edition Databricks Cluster via Outside Public Address / Application

Can someone let me know if it's possible to connect to or PING a Databricks cluster via its public IP address?
For example, I have issued the command ping --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to PING that IP address (10.172.226.115) from my on-premise PC (or connect to the cluster with an application using that IP address).
Can someone let me know if that is possible?
That public IP is not guaranteed to be your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where the public IP is likely an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall
connect to the cluster with an application
I'd suggest using other Databricks support channels (i.e. their community forum) to see if that's even possible, but I thought you're just supposed to upload and run code within their ecosystem, at least on the community plans.
Specifically, they have a REST API for submitting a job remotely from your local system, but if you want to send data back to your local machine, I think you'd have to write it to DBFS or another cloud filesystem and download it from there.
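For illustration only, a hedged sketch of the Jobs "runs submit" call; the host, token, notebook path, and cluster spec are all placeholders, and this API may well not be available on Community Edition:

```
# Placeholders: DATABRICKS_HOST (e.g. https://<workspace>.cloud.databricks.com) and DATABRICKS_TOKEN.
curl -s -X POST "${DATABRICKS_HOST}/api/2.0/jobs/runs/submit" \
  -H "Authorization: Bearer ${DATABRICKS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "run_name": "remote-submit-example",
        "new_cluster": {"spark_version": "13.3.x-scala2.12", "node_type_id": "i3.xlarge", "num_workers": 1},
        "notebook_task": {"notebook_path": "/Users/me@example.com/my-notebook"}
      }'
```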

Azure Network Security Group Vs Route Tables

Networking newbie here. From the documentation it feels like NSGs and route tables (UDRs) do the same thing: define ACLs at multiple levels (VNet, subnet, VM).
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different and when is each used?
thanks.
Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to it. The route table is like a map that directs traffic from one place to another via the next hop. It defines the "path" but does not filter traffic.
A network security group is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound traffic to, or outbound traffic from, several types of Azure resources. If there is no route from a subnet to a destination, you do not even need to configure security rules, because there is no path; NSG rules only matter where a working route exists.
For example, we usually access an Azure VM in a virtual network via SSH or RDP over the internet, but exposing port 22 or 3389 to everyone is not very secure. We can restrict access to the VM by specifying the source IP address in the NSG, so that only traffic from a specific IP address or range of addresses can connect. Read more details here. In this scenario, we still need a route to the internet from the virtual network and vice versa.
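As a concrete sketch (resource group, NSG name, and source address below are hypothetical, not from the original answer), an NSG rule that only allows SSH from one office IP could look like:

```
# Allow SSH (22) only from 203.0.113.10; other inbound traffic on 22 stays blocked by the default rules.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-vm-nsg \
  --name AllowSshFromOffice \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.10 \
  --destination-port-ranges 22
```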

How to connect to on-premise OpenVPN server from OCI (Oracle Cloud Infrastructure) Compute instance?

My company has an on-premise network which is opened by OpenVPN server.
In ordinary scenarios, I can connect to that server very easily.
However, when I tried to connect to that server from an OCI compute instance (which I had reached by SSH from my laptop), there were problems. As soon as I try to connect to the VPN server, my SSH connection is closed.
IMHO, this may occur because the VPN connection changes the network configuration, so my SSH connection gets lost.
I looked around for how to connect to a VPN from OCI, but almost everything was about the IPSec protocol that Oracle provides, and the rest was about building an OpenVPN server on the OCI instance.
I'm a novice when it comes to networking, so please give me some hints to resolve this problem.
Thanks,
Here is what I understand:
You have an Ubuntu 18.04 VM on a public subnet in OCI.
You have an OpenVPN server running on-prem.
You would like to access your on-prem network from the Ubuntu VM on OCI.
If I understood that correctly, the best way is to set up an IPSec VPN. It isn't that hard if you follow the right steps. At a high level, you will do the following (I used IKEv1 in my past attempts).
OCI:
Create a DRG
Attach/Associate it to your VCN
Create a CPE (Customer Premise Equipment) object and assign the public IP address of the OpenVPN server to it.
Create an IPSec connection on the DRG. It will create two tunnels, each with its own security information.
Set up routing on the associated subnet (i.e., the one that hosts the Ubuntu VM) so traffic destined for the on-prem CIDR is routed to the DRG.
On-Prem:
Create the necessary configuration to bring up the tunnels to OCI (using the information from the previous steps, such as the VPN server IP addresses and shared secrets).
Set up routing so that traffic destined for the OCI CIDR ranges is sent to the associated tunnel interface.
This way you can create multiple VMs in the OCI subnet, all of which can connect to your on-prem infrastructure. The OCI documentation has sufficient information on setting up this VPN connection.
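If you prefer the CLI over the console, a hedged sketch of the OCI-side steps (compartment/VCN/resource OCIDs, the CPE IP, and the on-prem CIDR are placeholders):

```
# 1. Create a DRG and attach it to the VCN.
oci network drg create --compartment-id "$COMPARTMENT_OCID"
oci network drg-attachment create --drg-id "$DRG_OCID" --vcn-id "$VCN_OCID"

# 2. Register the on-prem VPN device's public IP as a CPE.
oci network cpe create --compartment-id "$COMPARTMENT_OCID" --ip-address 198.51.100.10

# 3. Create the IPSec connection (two tunnels are generated automatically).
oci network ip-sec-connection create \
  --compartment-id "$COMPARTMENT_OCID" \
  --cpe-id "$CPE_OCID" \
  --drg-id "$DRG_OCID" \
  --static-routes '["172.16.0.0/16"]'
```

The subnet route rule pointing the on-prem CIDR at the DRG can then be added in the console (or via the route-table commands).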
Alternatively, if your only requirement is to establish connectivity from the Ubuntu VM on OCI to the OpenVPN server on-prem, you can use any VPN client software and set it up on the VM. This doesn't need any of the configuration steps mentioned above.
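On the original symptom (the SSH session dropping as soon as the VPN comes up): that usually happens because the client accepts a pushed default route, so return traffic to your laptop goes into the tunnel. One hedged workaround, assuming a standard OpenVPN client config on the VM and with the on-prem CIDR below as a placeholder, is to ignore pushed routes and add only the on-prem network:

```
# Additions to the OpenVPN client config (e.g. /etc/openvpn/client.conf) on the OCI VM.
# route-nopull ignores routes pushed by the server (including redirect-gateway),
# so the VM's default route, and hence your SSH session, stays intact.
route-nopull
route 172.16.0.0 255.255.0.0
```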
Worker nodes in private subnets have private IP addresses only (they do not have public IP addresses) and can only be reached by other resources inside the VCN. Oracle recommends using bastion hosts to control external access (such as SSH) to worker nodes in private subnets. You can learn more about using SSH through a bastion host here: https://docs.cloud.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/bastion-hosts.pdf
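A minimal sketch of SSH via a bastion, assuming the bastion has a public IP and the worker node only a private one (usernames and addresses are placeholders):

```
# -J jumps through the bastion to reach the private node in one hop.
ssh -J user@bastion-public-ip user@10.0.1.25
```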

Please Explain Kubernetes External Addresses vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = the external address is the real public IP address of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = the external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external ip.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external IP and load balancer functions don't exist natively. If you want to expose an "external IP" (quotes because in most cases it would still be a 10.X.X.X address) from your bare-metal cluster, you would need to install something like MetalLB.
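For example, a hedged way for the tool to pull the InternalIP of every node straight from the same objects it already scrapes under /api/v1/nodes:

```
# Print each node name and its InternalIP address.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```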
https://github.com/google/metallb
