What I am doing right now:
I own many VPSes that I use to deploy applications with Docker Compose; most of the machines come from different subnets and each has a public static IP address.
For each new application I would pick a random VPS, point the new application's subdomain DNS record at the VPS's IP address, and deploy the application on that VPS behind an Nginx proxy (jwilder's nginx-proxy).
This approach is, in my opinion, very comfortable, since jwilder's nginx-proxy does almost all the work for me and I only have to set the correct DNS record.
What I want to achieve:
For the purpose of learning, I would like to turn these machines into a Kubernetes cluster so I can learn more about this technology. My idea is that I would only have to point each new subdomain's DNS at one single entry point, which also plays the role of a load balancer and passes the traffic to the corresponding pods.
To redirect traffic to a new application I only have to configure the load balancer.
My problem:
I know this question is not very precise, since I don't know a lot about Kubernetes. Moreover, my servers are not from a cloud provider like Google or AWS, so I cannot use their solutions. They are not even from a single provider: most of them belong to my university and some come from a private cloud provider.
Could anybody tell me how I can achieve this?
I think the answer is kubeadm; you can install it on your own PC or VM.
It will create a single control-plane cluster that your other VMs can join to form a Kubernetes cluster.
kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices
kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their application on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tools with a larger scope.
Your cluster's pods will communicate via a CNI plugin.
CNI was created as a minimal specification, built alongside a number of network vendor engineers, to be a simple contract between the container runtime and network plugins.
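For illustration, a minimal sketch of the bootstrapping steps (the Flannel manifest URL is the upstream example manifest; the pod CIDR, token, and hash are placeholders):

```bash
# On the machine you pick as the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a CNI plugin so pods can reach each other, e.g. Flannel:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On every other VM, run the join command that "kubeadm init" printed:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```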
Related
I've built a bare-metal, multi-node, multi-server Kubernetes cluster, and this is my first experience with one.
The cluster is built across many servers; each server contains a set of nodes.
The connection between them is done over public IP addresses on the LAN.
I run deployments on the cluster and it’s working.
But I want to expose a service over the external network.
If I were using Minikube, I would use a LoadBalancer to expose the service externally.
Troubleshooting:
I am thinking about using an Ingress controller or a NodePort Service as a solution to access the pod network.
I tried to expose a NodePort Service, but I didn't get an external IP.
I am asking if someone could help me get a hello-world running while choosing the right architecture for this bare-metal cluster.
Thank you.
I suggest using MetalLB, which is a load-balancer implementation for bare-metal clusters.
You could also combine it with a bare-metal Ingress controller like Nginx.
Regarding Nginx, you can find more details here.
I have successfully used this combination, together with a wildcard domain (e.g. *.mydomain) pointing to one of the cluster IPs. This allows you to define as many hostnames as you like and point them at different services deployed on the cluster (e.g. service1.mydomain, service2.mydomain, etc.).
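For reference, a hedged sketch of a MetalLB layer-2 configuration (recent MetalLB versions use these CRDs; the address range is an assumption and must be IPs that are actually routed to your nodes):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10-203.0.113.15   # replace with addresses you control
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
```

With that in place, Services of type LoadBalancer get an external IP from the pool, and the wildcard DNS record can point at one of those IPs.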
What I would also suggest is installing Helm, as it greatly helps with deployments. You can find charts for most widely used services, and Helm gives you the ability to configure them easily. It is also good practice to create charts for your own future services, for easier maintenance and customization.
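As an example of the Helm workflow, this is roughly how the community ingress-nginx chart is installed (repo URL and chart name as published upstream; the namespace is a common convention):

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace
```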
A NodePort Service also helps here.
Say your bare-metal cluster has host1 and master as members.
If you create a NodePort Service on node port 31000, for example,
you can then use http://host1IP:31000/ for QA.
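A minimal sketch of such a NodePort Service (the name, selector labels, and ports are assumptions for a hello-world deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world          # must match the labels on your hello-world pods
  ports:
    - port: 80                # Service port inside the cluster
      targetPort: 8080        # container port the pods listen on
      nodePort: 31000         # reachable on every node's IP at this port
```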
We have a Google Cloud project with several VM instances and also a Kubernetes cluster.
I am able to easily access the Kubernetes Services with kubefwd, and I can ping and curl them. The problem is that kubefwd only works for Kubernetes, not for the other VM instances.
Is there a way to mount the network locally, so I could ping and curl any instance without it having a public IP, and with the same DNS as inside the cluster?
I would highly recommend rolling out a VPN server like OpenVPN. You can also run it inside the Kubernetes cluster.
I have a make install-ready repo for you to check out at https://github.com/mateothegreat/k8-byexamples-openvpn.
Basically, OpenVPN runs inside a container (inside a pod), and you can set the routes that you want the client(s) to be able to see.
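For example, the routes are pushed to clients with plain OpenVPN server directives; a hedged sketch, assuming the common 10.96.0.0/12 Service range and a 10.244.0.0/16 pod range (illustrative only, not taken from the repo above):

```
push "route 10.96.0.0 255.240.0.0"    # Service CIDR
push "route 10.244.0.0 255.255.0.0"   # pod CIDR
push "dhcp-option DNS 10.96.0.10"     # cluster DNS Service IP
```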
I would not rely on kubefwd, as it isn't production-grade and will give you issues with persistent connections.
Hope this helps you out; if you still have questions or concerns, please reach out.
I am trying to move a currently Docker-based app to Kubernetes.
My app inspects the network traffic that passes through it; because of that, it needs an accessible external IP and it needs to accept traffic on all ports, not just some.
Right now I am using Docker with the macvlan network driver in order to attach containers to multiple interfaces and let them inspect traffic that way.
After some research, I've found that the only way to access pods in Kubernetes is through Services, but Services only allow access through specific ports, because they are mostly intended for "server"-type applications and not the "forwarder"/"sniffer" type that I am looking for.
Is Kubernetes a good fit for this type of application? Does it offer tools to cope with this problem?
Being a good fit is more a matter of opinion. Pods in Kubernetes have their own pod CIDR that is not exposed to the outside world, and a sniffer doesn't quite fit either a Service or a Job definition, which are the typical workloads in Kubernetes.
That said, it can be done if you use a custom CNI plugin that supports macvlan.
You can also use something like Multus that supports the macvlan plugin.
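A hedged sketch of what that could look like with Multus: a NetworkAttachmentDefinition wrapping the macvlan CNI plugin (the host interface name and IPAM range are assumptions):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-sniffer
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }
```

A pod then requests the extra interface with the annotation k8s.v1.cni.cncf.io/networks: macvlan-sniffer, in addition to its normal cluster network.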
Intro:
On AWS, load balancers are expensive ($20/month plus usage), so I'm looking for a way to achieve flexible load balancing between the k8s nodes without paying that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently I've set up a regular standalone Nginx instance (outside of k8s) that load-balances between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes (nodes added, restarted, or removed), I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with Kubernetes' list of nodes, so that my Kubernetes services stay HA when I change my node setup? All I really need is for Træfik to listen to the Kubernetes API and change its backend servers whenever the cluster changes.
Sounds simple, right? ;-)
Looking at the Træfik documentation, it seems to want an Ingress resource to send its traffic to, and an Ingress resource requires an Ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that might be useful in your case: https://github.com/unibet/ext_nginx. However, I'm not sure if the project is still in development, and the configuration is probably hard, since you need to allow the external ingress to access the internal k8s network.
Maybe you can try to do that at the AWS level? You could add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the Nginx configuration if something has changed.
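A hedged sketch of such a cron-driven script (the "k8s" tag key, the NodePort, and the file paths are assumptions):

```bash
#!/usr/bin/env bash
set -euo pipefail

NODEPORT=30080
UPSTREAM_FILE=/etc/nginx/conf.d/k8s-upstream.conf

# Ask AWS for the private IPs of all running instances tagged "k8s"
IPS=$(aws ec2 describe-instances \
  --filters "Name=tag-key,Values=k8s" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PrivateIpAddress' --output text)

# Regenerate the upstream block that the site config references
{
  echo "upstream k8s_nodes {"
  for ip in $IPS; do
    echo "    server ${ip}:${NODEPORT};"
  done
  echo "}"
} > "${UPSTREAM_FILE}.new"

# Only touch Nginx when the node list actually changed
if ! cmp -s "${UPSTREAM_FILE}.new" "${UPSTREAM_FILE}"; then
  mv "${UPSTREAM_FILE}.new" "${UPSTREAM_FILE}"
  nginx -s reload
fi
```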
So Kubernetes has a pretty novel network model, which I believe is based on what it perceives to be shortcomings in default Docker networking. While I'm still struggling to understand (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution, and perhaps that will clue me in a little better.
Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.
I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.
My basic understanding is:
Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
Create a cbr0 bridge somewhere and make sure it's persistent (but on which machine?)
Remove/disable the MASQUERADE rule installed by Docker.
Somehow configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs.
Some other setup is required to make use of load-balanced Services and dynamic DNS.
Provision 5 VMs: 1 master, 4 minions
Install/configure Docker on all 5 VMs
Install/configure kubectl, controller-manager, apiserver, and etcd on the master, and run them as services/daemons
Install/configure kubelet and kube-proxy on each minion and run them as services/daemons
This is the best I could collect from 2 full days of research, and these steps are likely wrong (or misdirected), out of order, and utterly incomplete.
I have unbridled access to create VMs in an on-premises vCenter cluster. If changes need to be made to VLANs/switches/etc., I can get infrastructure involved.
How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?
I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
For a general introduction to Kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.
On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model.
From my experience: what is the problem with the Docker NAT-type approach? Sometimes you need to configure, for example, all the endpoints of all nodes into the software (172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's pods; Docker complicates this with NAT indirection.
See also Setting up the network for Kubernetes for a nice answer.
Comments on your other points:
1. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
The "internal network" of kubernetes normally uses private IP's, see also slides above, which uses 10.x.x.x as example. I guess confusion comes from some kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet Public IP Address Range".
For anyone who is interested in doing the same, here is my current plan.
I found the kube-up.sh script, which installs a production-ish-quality Kubernetes cluster in your AWS account. Essentially, it creates 1 Kubernetes master EC2 instance and 4 minion instances.
On the master it installs etcd, apiserver, controller manager, and the scheduler. On the minions it installs kubelet and kube-proxy. It also creates an auto-scaling group for the minions (nice), and creates a whole slew of security- and networking-centric things on AWS for you. If you run the script and it fails creating the AWS S3 bucket, create a bucket of the same exact name manually and then re-run the script.
When the script is finished you will have Kubernetes up and running and ready for near-production usage (I keep saying "near" and "production-ish" because I'm too new to Kubernetes to know what actually constitutes a real-deal productionized cluster). You will need the AWS CLI installed and configured with a user that has full admin access to your AWS account (the script goes ahead and creates IAM roles, etc.).
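For reference, the script was typically driven like this (the environment-variable names follow the old cluster/ scripts; treat the exact values as assumptions):

```bash
export KUBERNETES_PROVIDER=aws       # tell the scripts to target AWS
export KUBE_AWS_ZONE=us-west-2a      # zone to create the instances in
curl -sS https://get.k8s.io | bash   # downloads a release and runs cluster/kube-up.sh
```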
My game plan will be to:
Get comfortable working with Kubernetes on AWS
Keep hounding the Kubernetes team on Slack to help me understand how Kubernetes works under the hood
Reverse engineer the kube-up.sh script so that I can get Kubernetes running on premise (vCenter)
Blog about this process
Update this answer with a link to said blog.
Give me some time and I'll follow through.