Kubernetes (non-cloud): Access cluster from outside (public IP) - networking

I want to build my own cluster, so I plan to buy 3 Raspberry Pis to act as servers: 1 for the master node and 2 for worker nodes. I have one public IP (on my router), and all the Raspberry Pis are on the same LAN behind that router. I will use kubeadm to create the master node, then use the join token on the other 2 Raspberry Pis to add them as workers.

Normally, if I run a web server on port 80 on my laptop (private IP 192.168.1.3) and set up port forwarding from the router to my laptop, I can reach the website through the public IP. But if I run my web server in containers and load-balance it with Kubernetes across the 2 worker nodes, how do I handle that? Where should the router's port forwarding point, and how can I route a client from the public IP to any service in my cluster?

I researched this and found I could use NodePort for access, but I don't think that's a good approach, because it goes directly to a host machine rather than through the cluster. The other option is building my own load balancer, but I don't know how.
So I'd like some advice on how to do this, or anything else that would achieve my goal. I don't care if it's tough or difficult; I just want to succeed at it, learn something, and publish the result. Can someone clarify this for me?

Use an NGINX ingress controller to route calls to the Kubernetes Services in the cluster. That way you don't have to use NodePort-type Service objects.

Please consider using MetalLB (a load-balancer implementation for bare-metal Kubernetes clusters) together with the NGINX Ingress Controller.
You would need to set up port forwarding in your home router on ports 80/443 to one of your worker nodes.
Here is how this setup would look in your case:
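A minimal sketch of such a setup in manifest form. The address pool, namespaces, and labels here are assumptions; adjust the range to a free block on your router's subnet, and note that newer MetalLB releases configure pools via `IPAddressPool` resources instead of this legacy ConfigMap:

```yaml
# MetalLB layer-2 configuration (legacy ConfigMap format).
# MetalLB will hand out LAN IPs from the pool below to any
# Service of type LoadBalancer.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range on the LAN
---
# The NGINX ingress controller's Service then requests one of those IPs.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

With this in place, you would forward ports 80/443 on the router to the IP MetalLB assigned (e.g. 192.168.1.240) rather than to any single node, so the entry point survives a node going down.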

Related

Exposing application deployed on kubernetes cluster in front of Bigip

We have an application deployed to a Kubernetes cluster on a bare-metal system, and I have exposed the service as a NodePort. We need to expose the service to the outside world under the domain name myapp.example.com. We have created the necessary DNS mapping and configured a VIP on our Bigip load balancer. Which ingress solution should we implement: the NGINX/Kubernetes one or the Bigip controller? Does the NGINX/Kubernetes ingress controller support Bigip, and how should we expose ingress-nginx itself: as type LoadBalancer or NodePort?
I haven't used Bigip much, but I found that they have a controller for Kubernetes.
That said, if you already have the Bigip load balancer set up and a Kubernetes cluster running, I think the simplest approach is to create a NodePort Service for the pod you want to expose and note the node port it was assigned (let's assume 30001). That port is now open on every node and can be used to reach the service inside Kubernetes via a node's IP. Then configure the Bigip load-balancer pool to forward all incoming traffic to < Node's IP >:30001.
All of this is theory from what I know about Kubernetes and how it works. Give it a try and let me know if it works.
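In manifest form, the NodePort Service described above might look like the following sketch (the name, labels, and ports are illustrative; the nodePort is pinned so the Bigip pool configuration stays stable):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp            # illustrative name
spec:
  type: NodePort
  selector:
    app: myapp           # must match the pod's labels
  ports:
  - port: 80             # cluster-internal Service port
    targetPort: 8080     # assumed container port
    nodePort: 30001      # pinned within 30000-32767; Bigip targets <NodeIP>:30001
```

Pinning `nodePort` explicitly (instead of letting Kubernetes pick one) means the Bigip pool members never need to change when the Service is recreated.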

Kubernetes service makes outbound connections - how to make it originate from a virtual ip

I currently have a Kubernetes cluster, and we have a service that needs to be accessible from a virtual IP.
That in itself is not a difficult problem; I can use keepalived and NodePorts. However, I also need that service's outbound connections to be bound to that virtual IP (this is due to a legacy system we interact with).
Is there anything in place, or anything I can use, that will help me with this in a generic way?
I essentially want traffic from a specific service to originate from the virtual IP and not the Kubernetes host's IP.
You can set hostNetwork: true in your Deployment; this starts your pods outside of NAT, and they will be able to see all of the host's network interfaces.
Keep in mind that NodePort won't be available when hostNetwork is enabled.
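A sketch of what such a Deployment could look like. The names and image are placeholders, and note that hostNetwork only puts the pod in the node's network namespace; which source address outbound traffic uses still depends on the node's routing and source-address selection, e.g. for a keepalived-managed VIP:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-client          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-client
  template:
    metadata:
      labels:
        app: legacy-client
    spec:
      hostNetwork: true        # pod shares the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working
      containers:
      - name: app
        image: example/legacy-client:latest   # placeholder image
        ports:
        - containerPort: 8080  # assumed application port, bound on the host
```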

Kubernetes LoadBalancer with new IP per service from LAN DHCP

I am trying out Kubernetes on bare metal. As an example, I have Docker containers exposing port 2002 (this is not HTTP).
I do not need to load-balance traffic among my pods, since each new pod does its own job and does not serve the same network clients.
Is there software that will let me access each newly created service on a new IP from my internal DHCP server, so I can preserve my original container port?
I can create a Service with NodePort and access the pod on some randomly generated port that is forwarded to my port 2002.
But I need to preserve that port 2002 while accessing my containers.
Each new service would need to be reachable on a new LAN IP, but on the same port as the container.
Is there some network plugin (a LoadBalancer?) that will forward from a DHCP-assigned IP back to this randomly generated service port, so I can access containers on their original ports?
In other words: start a service in Kubernetes and access it with IP:2002, then start another service from the same container image as the previous one and access it with another_new_IP:2002.
Ah, that happens automatically within the cluster -- each Pod has its own IP address. I know you said bare metal, but this post by Lyft may give you some insight into how you can skip or augment the SDN and surface the Pod's IPs into routable address space, doing exactly what you want.
In more real terms: I haven't ever had the need to attempt such a thing, but CNI is likely flexible enough to interact with a DHCP server and pull a Pod's IP from a predetermined pool, so long as the pool is big enough to accommodate the frequency of Pod creation and termination.
Either way, I would absolutely read a blog post describing your attempt -- successful or not -- to pull this off!
On a separate note, be careful, because the word Service means something specific within Kubernetes, even though it is regrettably often used in a more generic sense (as I suspect you did). Thankfully, a Service is designed to do the exact opposite of what you want to happen, so there was little chance of confusion -- just be aware.
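As a related sketch: if a bare-metal load-balancer implementation such as MetalLB (mentioned in other threads here) were installed, each workload could get its own Service of type LoadBalancer, each receiving its own LAN IP from a configured pool while keeping the original port. The names below are illustrative:

```yaml
# One such Service per workload; each gets a distinct external IP
# from the load balancer's address pool, reachable as <assigned_IP>:2002.
apiVersion: v1
kind: Service
metadata:
  name: job-worker-1     # illustrative; create one per pod/workload
spec:
  type: LoadBalancer
  selector:
    app: job-worker-1    # must match that workload's pod labels
  ports:
  - port: 2002           # external port, preserved from the container
    targetPort: 2002     # container port
```

This keeps port 2002 on every external IP, which is the access pattern the question describes (IP:2002, another_new_IP:2002, and so on).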

How to set up CloudFoundry in my data center

I want to deploy a private CloudFoundry instance in my data center, and I want to expose port 80 traffic for Internet access.
I do not want to expose all the CloudFoundry roles (Cloud Controller, DEA, Health Manager, etc.) on the public network.
Is there a best-practice document on configuring Cloud Foundry?
Do I need to implement an external router that does port-80 port forwarding to the Uhuru NGINX Router?
The network isolation is done at the cloud layer, i.e. vSphere, OpenStack, vCloud, or AWS. Assuming you deploy this using BOSH, you need to configure your networks so that everything is on a private network except for the routers, which need an interface on the Internet-facing side. But in front of the routers you should have your load balancers, so not even the routers need to be connected directly to the Internet.

How to expose a service in Kubernetes running on bare metal

Kubernetes version: 1.10, running on bare metal
No. of masters: 3
We run multiple microservices inside a Kubernetes cluster. Currently, we expose these services outside the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list of the corresponding microservices. Since we run on bare metal, we don't have a feature like LoadBalancer for exposing a microservice.
Problem: since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS name for one master at a time. If I want to access any service from outside the cluster, I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT, and at any given time I can only use one master's address. If that master goes down, I have to switch the microservices' address to another master's address. I don't want to rely on a static IP or DNS name of any single master.
What would be a better way to expose these microservices without NodePort? Is there any LoadBalancer-like feature for bare metal? Can Ingress or NGINX help us?
There is a load balancer for bare metal: it's called MetalLB. The project is available on GitHub; unfortunately, this solution is in an alpha state and is more complex to run.
You can also follow the instructions from NGINX and set up a round-robin method for TCP or UDP.
Ingress supports only HTTP(S), on ports 80 and 443.
You can of course write your own ingress controller, but it will be a lot of extra work.
The downside of NodePort is the limited range of usable ports, 30000 to 32767, and if a machine's IP changes, your services will become inaccessible.
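As a sketch of the Ingress option: a single Ingress resource can fan many microservices out by hostname, so only the ingress controller itself needs to be reachable from outside the cluster. The hostnames and service names below are placeholders, and on a 1.10 cluster like the one described you would need the older `extensions/v1beta1` Ingress API rather than `networking.k8s.io/v1`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - host: users.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: users-svc        # placeholder ClusterIP Service
            port:
              number: 80
  - host: orders.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orders-svc       # placeholder ClusterIP Service
            port:
              number: 80
```

The backing Services can then be plain ClusterIP, removing the need to track one NodePort per microservice; only the controller's own entry point (e.g. a MetalLB-assigned IP, or NodePorts 80/443 behind keepalived) has to be exposed.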
