kubectl losing connection to minikube when I connect to the VPN - networking

I installed minikube on my PC and created a pod.
Networking is:
minikube -> https://172.26.174.50:8443
my pc -> 192.168.18.129
All of that works fine. Now I need to consume a PVC in the cloud, and for that I must use a VPN. The problem is that when I connect to the VPN, kubectl loses its connection to minikube (I can no longer ping minikube from my PC).
What are the correct network settings for using minikube while connected to a VPN?
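One generic check that usually narrows this down (a diagnostic sketch, not a fix specific to this setup): many VPN clients claim all of 172.16.0.0/12, which includes the minikube subnet 172.26.174.0/24, so API traffic gets pulled into the tunnel. The commands below assume a Linux host and use an illustrative interface name.

    # Which interface currently carries traffic to the minikube API server?
    ip route get 172.26.174.50

    # If the VPN tunnel interface shows up instead of the local minikube bridge/adapter,
    # pin a more specific route for the minikube subnet (replace br-minikube with the
    # actual bridge or vEthernet adapter on your machine):
    sudo ip route add 172.26.174.0/24 dev br-minikube

On Windows or macOS the equivalent is a static route added with route add; the idea is the same: the minikube subnet must stay on the local adapter while the VPN is up.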

Related

Unable to reach pod from outside of cluster when exposing an external IP via MetalLB

I am trying to deploy an nginx Deployment to check that my cluster works properly. It's a basic Kubernetes install on a VPS (kubeadm, Ubuntu 22.04, Kubernetes 1.24, containerd runtime).
I successfully deployed MetalLB via Helm on this VPS and assigned the public IP of the VPS to the address pool
using the CRD apiVersion: metallb.io/v1beta1, kind: IPAddressPool. The resulting service looks like this:
    NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
    nginx   LoadBalancer   10.106.57.195   145.181.xx.xx   80:31463/TCP
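For context, a minimal IPAddressPool plus L2Advertisement pair for this kind of setup looks roughly like the following (the metadata names and the /32 notation are assumptions, not taken from the question):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: public-pool           # placeholder name
      namespace: metallb-system
    spec:
      addresses:
      - 145.181.xx.xx/32          # the VPS public IP as a single-address range
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: public-l2             # placeholder name
      namespace: metallb-system
    spec:
      ipAddressPools:
      - public-pool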
My goal is to send a request to the public IP of the VPS, 145.181.xx.xx, and get the nginx test page.
The problem is that I get a timeout (or connection refused) when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from inside the cluster returns the nginx test page.
There is no firewall issue: I tried setting up plain nginx without Kubernetes (via systemctl) and I was able to reach port 80 on 145.181.xx.xx.
Any suggestions or ideas about what the problem could be, or how I can try to debug it?
I'm facing the same issue.
Kubernetes cluster is deployed with Kubespray over 3 master and 5 worker nodes. MetalLB is deployed with Helm, and IPAddressPool and L2Advertisement are configured. I'm also deploying a simple nginx pod and a service to check whether MetalLB is working.
MetalLB assigns the first IP from the pool to the nginx service, and I'm able to curl the nginx default page from any node in the cluster. However, if I try to access this IP address from outside the cluster, I get timeouts.
But here is the fun part. When I modify the nginx manifest (rename the deployment and service) and deploy it in the cluster (so 2 nginx pods and services are present), MetalLB assigns another IP from the pool to the second nginx service, and I'm able to access this second IP address from outside the cluster.
Unfortunately, I don't have an explanation or a solution to this issue, but I'm investigating it.
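In L2 mode each address is announced by exactly one node, so a plausible (unconfirmed) explanation for the behaviour above is that the two IPs are announced by different nodes, only one of which is reachable from outside. One way to narrow this down is to check, from a machine on the same L2 segment, which MAC answers ARP for the service IP, and compare it with what the cluster reports (the interface name and the metallb-system namespace are assumptions):

    # From an external host on the same L2 segment
    sudo arping -I eth0 <loadbalancer-ip>

    # Inside the cluster: where the speakers run, and the service's events
    kubectl -n metallb-system get pods -o wide
    kubectl describe svc nginx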

Can't resolve hostname when connected to computer via VPN

I have a computer with a self-hosted WireGuard VPN running in a Docker container. When I'm on the local network and not connected through the VPN, it's possible to connect to the machine using its hostname instead of the IP address:
ssh username@computer_name
But when I connect through the VPN from an external network, I have to use the local IP address, like
ssh username@xxx.xxx.x.x
because when I try to use the hostname I get the message:
ssh: Could not resolve hostname computer_name: Unknown host.
The machine with the VPN is the same machine I am trying to connect to via ssh using hostname.
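A likely reason (not confirmed by the question) is that name resolution on the LAN is handled by the router's DNS or by mDNS, neither of which is used over the tunnel by default, so the bare hostname never gets looked up. A sketch of a WireGuard client config that points at a resolver on the LAN; the addresses and keys are placeholders:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32
    DNS = 192.168.1.1              # assumed LAN resolver (often the router)

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 192.168.1.0/24

Alternatively, adding a static entry for computer_name (with its LAN IP) to /etc/hosts on the client sidesteps DNS entirely.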

Tcpdump from a pod in a Kubernetes cluster (Minikube setup)

I am new to Kubernetes. The whole setup is configured in Minikube; I am not sure whether that should differ from any other Kubernetes setup.
I have created a pod, and a Spring Boot application is running inside it on port 8080; a service exposes it to the cluster on port 20080.
I am running another pod inside the cluster where tcpdump is running. My requirement is to dump the HTTP packets hitting the cluster on 20080. Please let me know how I can access the cluster interface from the tcpdump pod.
I tried Google and tried using the Cluster IP directly from the pod, but it didn't work.
The pod that is running tcpdump can only see its own network namespace, unless you run the pod with the hostNetwork: true option.
So what you can do is run the pod with hostNetwork: true, then use tcpdump to monitor the host's physical interface and capture the packets on port 20080. You can also monitor the network interface of the pod that runs the Spring Boot app, if you can find that pod's interface, which depends on the network configuration.
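A minimal sketch of such a hostNetwork pod, assuming an image that ships tcpdump (the image name and capability list are illustrative choices, not requirements):

    apiVersion: v1
    kind: Pod
    metadata:
      name: tcpdump-host
    spec:
      hostNetwork: true                        # share the node's network namespace
      containers:
      - name: tcpdump
        image: nicolaka/netshoot               # any image with tcpdump works
        command: ["tcpdump", "-i", "any", "-nn", "tcp", "port", "20080"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]      # needed for packet capture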

Access docker-machine containers from external network

I have set up a docker-machine with 3 Docker containers on my Mac (the server). I have also set up a third network adapter in bridged mode on my VirtualBox instance. I can now access my Docker instances on the internal network without problems.
I have also set up port forwarding on my router, but I can't reach my Docker instances from there. Any ideas?
This is what the structure looks like:
Internet --> Router (External-IP:80 -> Docker-Host-IP:80) -> Mac -> VirtualBox -> Docker Webserver
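Two hops in that chain usually need checking (the commands are a sketch; the container name and port mapping are assumptions): the container port has to be published on the VM, and the router has to forward to the VM's bridged IP rather than the Mac's.

    # On the docker-machine VM: publish the webserver port on all VM interfaces
    docker run -d -p 80:80 --name web nginx

    # Address the router should forward port 80 to; note that docker-machine ip usually
    # reports the host-only adapter, so use the bridged adapter's address if they differ
    docker-machine ip <machine-name>

    # Quick check from the Mac before testing from outside
    curl http://$(docker-machine ip <machine-name>):80/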

How to send http request from Docker to localhost or Virtual Machine

Being new to Docker and VMs, I have run into a blocker. I have a Node app that needs to send a POST request from a Docker container to a virtual machine or to my local machine.
I have read through the Docker documentation, but still don't understand what I need to do in order to accomplish this.
So how can I send an http request from my node app running in a Docker Container to my Vagrant Box?
By default, Docker creates a virtual interface (docker0) on your host machine with IP 172.17.42.1. Each container launched gets an IP in the 172.17.42.1/16 network, and containers can reach the host machine by connecting to 172.17.42.1.
If you want to connect a Docker container to a service running in a virtual machine from another provider (e.g. VirtualBox, VMware), the easiest way is to forward the ports needed by the service to your host machine and then, from your Docker container, connect to 172.17.42.1. Check your virtual machine provider's documentation for the details. If you are using libvirt/KVM (or any other provider), you can use iptables to set up the port forwarding.
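A quick way to confirm the path from inside the container before touching the Node code (the gateway IP follows the answer above, and port 3000 is a placeholder for whatever port you forwarded from the Vagrant box):

    # Inside the running container: find the gateway, i.e. the docker0 address on the host
    ip route | awk '/default/ {print $3}'

    # Then hit the forwarded service on the host
    curl -X POST http://172.17.42.1:3000/endpoint \
         -H 'Content-Type: application/json' \
         -d '{"ping":"pong"}'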
