Kubernetes LoadBalancer with new IP per service from LAN DHCP - networking

I am trying out Kubernetes on bare metal. As an example, I have Docker containers exposing port 2002 (this is not HTTP).
I do not need to load-balance traffic among my pods, since each new pod does its own job and does not serve the same network clients.
Is there software that would let me reach each newly created service on a new IP from my internal DHCP server, so I can preserve my original container port?
I can create a service with NodePort and reach the pod on some randomly generated port that is forwarded to my port 2002.
But I need to keep port 2002 when accessing my containers.
Each new service would need to be reachable on a new LAN IP, but on the same port as the containers.
Is there a network plugin (a LoadBalancer implementation?) that can forward from an IP assigned by DHCP to that randomly generated service port, so I can access the containers on their original ports?

To put it concretely: start a service in Kubernetes and access it at IP:2002; then start another service from the same container image and access it at another_new_IP:2002.
Ah, that happens automatically within the cluster -- each Pod has its own IP address. I know you said bare metal, but this post by Lyft may give you some insight into how you can skip or augment the SDN and surface the Pods' IPs into routable address space, doing exactly what you want.
In more real terms: I haven't ever had the need to attempt such a thing, but CNI is likely flexible enough to interact with a DHCP server and pull a Pod's IP from a predetermined pool, so long as the pool is big enough to accommodate the frequency of Pod creation and termination.
Either way, I would absolutely read a blog post describing your attempt -- successful or not -- to pull this off!
On a separate note, be careful: the word Service means something specific within Kubernetes, even though it is regrettably often used in a more generic sense (as I suspect you did). Thankfully, a Service is designed to do the exact opposite of what you want to happen, so there was little chance of confusion -- just be aware.
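To make the CNI idea above concrete, here is a minimal sketch of what a CNI conflist delegating Pod IP assignment to the `dhcp` IPAM plugin could look like, rendered from Python. The interface name (`eth0`), the network name, and the use of macvlan are all assumptions for illustration -- adjust them for your own LAN.

```python
import json

# Hypothetical sketch: a CNI conflist that delegates Pod IP assignment to the
# CNI "dhcp" IPAM plugin, so each Pod leases an address from the LAN DHCP
# server and is reachable at <leased-ip>:2002 directly.
conflist = {
    "cniVersion": "0.4.0",
    "name": "lan-dhcp",
    "plugins": [
        {
            "type": "macvlan",          # put Pods directly on the LAN segment
            "master": "eth0",           # assumed physical uplink interface
            "ipam": {"type": "dhcp"},   # lease Pod IPs from the LAN DHCP server
        }
    ],
}

def render_conflist(cfg: dict) -> str:
    """Serialize the conflist as kubelet reads it from /etc/cni/net.d/."""
    return json.dumps(cfg, indent=2)

if __name__ == "__main__":
    print(render_conflist(conflist))
```

Note that the `dhcp` IPAM plugin also needs its helper daemon running on each node, and the DHCP pool must be large enough for your Pod churn, as the answer above points out.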

Related

How are external ips supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external ips
are supposed to work (specifically on OpenShift 4.4/4.5 baremetal).
It looks like I can assign arbitrary external ips to a service
regardless of the setting of spec.externalIP.policy on the cluster
network. Is that expected?
Once an external ip is assigned to a service, what's supposed to
happen? The openshift docs are silent on this topic. The k8s docs
say:
Traffic that ingresses into the cluster with the external
IP (as destination IP), on the Service port, will be routed to one
of the Service endpoints.
Which suggests that if I (a) assign an externalip to a service and
(b) configure that address on a node interface, I should be able to
reach the service on the service port at that address, but that
doesn't appear to work.
Poking around the nodes after setting up a service with an external ip, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
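For reference when poking around: when a Service has an externalIP, kube-proxy is expected to program NAT rules so traffic destined for that IP on the Service port is redirected to an endpoint. The sketch below builds an illustrative approximation of such an iptables rule (the real kube-proxy chains are more elaborate, and all addresses here are made up) -- it can serve as a template for what to grep for on the nodes.

```python
def external_ip_dnat_rule(external_ip: str, port: int, proto: str,
                          endpoint_ip: str, endpoint_port: int) -> str:
    """Build an iptables rule roughly equivalent to what kube-proxy
    programs for a Service externalIP: traffic arriving for
    external_ip:port is DNATed to one Service endpoint.
    Illustrative approximation only, not kube-proxy's exact chains."""
    return (
        f"-t nat -A PREROUTING -d {external_ip}/32 -p {proto} "
        f"--dport {port} -j DNAT --to-destination {endpoint_ip}:{endpoint_port}"
    )

# Example: a Service with externalIP 192.0.2.10 on port 8080,
# backed by a Pod endpoint 10.244.1.5:8080 (all addresses made up).
print(external_ip_dnat_rule("192.0.2.10", 8080, "tcp", "10.244.1.5", 8080))
```

If nothing resembling this exists in the nat table for your external address, the kube-proxy (or OpenShift SDN) side simply has not been programmed, which matches what you observed.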
I'm having a hard time finding docs that explain how all this is
supposed to operate.

Create a UDP Load Balancer with Failover at Amazon for EC2 Instances

Task:
Create a UDP Load Balancer with Failover at Amazon for EC2 Instances.
Problems:
Based on the explanation below, I have the following problems:
AWS EC2 Doesn't have a Public DNS Name that works for both IPv4 and IPv6 traffic.
Unable to reassign the current IPv6 address to a new instance in another availability zone.
Explanation:
By Failover, I mean that if the instance goes down for whatever reason, spin up a new one and replace it. If the availability zone it is in is down spin up a new instance in another availability zone and replace it.
With Elastic IP Addresses I am able to re-assign the existing Elastic IP Address to the new instance, regardless of its availability zone.
With IPv6 Addresses, I am unable to reassign the existing IPv6 Address if the new instance is created in a different availability zone, because it is not in the same subnet. By availability zone, I am referring to Amazon's Availability Zones, such as us-west-2a, us-west-2b, us-west-2c, etc.
The only way I know how to resolve this is to update the host record at my registrar (GoDaddy, in my case) with the new IPv6 address. GoDaddy has an API, and I believe I can update my host record programmatically. However, GoDaddy has a minimum TTL of 600 seconds, which means my server could be unreachable by IPv6 traffic for 10 minutes or more, depending on propagation.
Amazon has an excellent load balancer system if I am just doing normal TCP traffic; this problem would be nonexistent in that case. Since I need to load-balance UDP traffic, I'm running into this problem. The AWS ELB (Amazon Elastic Load Balancer) provides me with a DNS name that I can point a CNAME at for all of my TCP traffic, so I don't need to worry about separate IPv4 vs IPv6 records.
Amazon Also Provides a Public DNS for EC2, but it is only for IPv4 Traffic. So that would work for my IPv4 Traffic but not my IPv6 Traffic.
The only option I can think of is to setup a Software Based Load Balancer, in my case NGINX on an EC2 Instance. Then point the domain to the NGINX Load Balancer's IPv4 and IPv6 Addresses. Then when a zone crashes, I spin up a new AWS EC2 Instance in another zone. Then use Godaddy's API to update the IPv6 Address to the New Instance's IPv6 Address.
Request
Does anyone know how to assign a CNAME to an EC2 Instance without an AWS ELB? The instance would need to be able to receive both IPv4 and IPv6 traffic at the CNAME.
The only way I can think of doing it, will cause down time due to propagation issues with DNS changes at my Domain Registrar.
I've been looking at the Route 53 options in Amazon and it appears to have the same propagation delays.
I've thought about setting up my own DNS server for the domain. Then if the IP Address changes I could potentially change the DNS entry faster than using Godaddy. But DNS Propagation issues are going to be a problem with any dns change.
[EDIT after thinking about my answer]
One item that I did not mention is that Route 53 supports simple load balancing and failover. Since you will need two systems in my answer below, just spin up two EC2 instances for your service, round-robin load-balance with Route 53, and add a failover record. Create a CloudWatch alarm so that when one of your instances fails, you know to replace it manually. This will give you a "poor man's" load balancer for UDP.
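As a sketch of what that failover record pair looks like, here is the ChangeBatch payload you would pass to Route 53's change_resource_record_sets API (for example via boto3). The zone name, IPs, and health-check ID below are hypothetical; only the primary record carries a HealthCheckId, since that is what tells Route 53 when to fail over to the secondary.

```python
def failover_record(name, ip, set_id, role, health_check_id=None, ttl=30):
    """Build one Route 53 failover resource record set.
    role is "PRIMARY" or "SECONDARY"; a health check is attached to the
    primary so Route 53 knows when to fail over."""
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,
        "TTL": ttl,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

# Hypothetical zone and addresses: this dict is what you would pass as
# ChangeBatch to route53.change_resource_record_sets(...).
change_batch = {
    "Comment": "UDP service failover pair",
    "Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": failover_record(
             "udp.example.com.", "203.0.113.10", "primary", "PRIMARY",
             health_check_id="hc-1234")},
        {"Action": "UPSERT",
         "ResourceRecordSet": failover_record(
             "udp.example.com.", "203.0.113.20", "secondary", "SECONDARY")},
    ],
}
```

An AAAA pair for IPv6 would look the same with "Type": "AAAA" and IPv6 values.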
Configuring DNS Failover
[END of EDIT]
First, I would move from GoDaddy DNS to Route 53. I have no experience with programming GoDaddy DNS entries, but Route 53's API is excellent.
GoDaddy does not support zone apex CNAME records (example.com). You would need to use IPv4 A records and IPv6 AAAA records. This should not be a problem. I would use AWS Elastic IPs so that, when launching the new instance, at least the IPv4 DNS entries would not require any change (and therefore no DNS delay).
I would not setup my own DNS server. I would switch to Route 53 first. When you mention propagation delays, you mean TTL. You can change the TTL to be short. Route 53 supports a 1 second TTL entry, but most DNS clients will ignore short TTL values so you will have little to no control over this. Short TTLs also mean more DNS requests.
AWS does not offer UDP load balancing, but there are third-party products and services that do, which run on AWS. If your service is critical or revenue-producing, use a well-tested solution.
I would not try to reinvent the wheel. However, sometimes this is fun to do so to better understand how real systems work.
STEP 1: You will need to design a strategy to detect that your instance has failed. You will need to duplicate the health check that a load balancer performs and then trigger an action.
STEP 2: You will need to write code that can update Route 53 (GoDaddy) DNS entries.
STEP 3: You will need to write code that can launch an EC2 instance and to terminate the old instance.
STEP 4: You will need to detect the new addresses for the new instance and update Route 53 (GoDaddy).
The above steps will require a dedicated always on computer with a highly reliable Internet connection. I would use EC2 for the monitoring system. T2-micro is probably fine.
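STEP 1 above can be sketched with a plain UDP probe from the monitoring host. The probe/response protocol here is an assumption -- substitute whatever your actual service answers; the point is simply "any reply within the timeout counts as healthy, anything else triggers STEPs 2-4".

```python
import socket

def udp_healthy(host: str, port: int, payload: bytes = b"ping",
                timeout: float = 1.0) -> bool:
    """Send a UDP probe and treat any reply within the timeout as healthy.
    Both a timeout and an ICMP port-unreachable (raised as OSError on
    recvfrom) count as unhealthy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(payload, (host, port))
        sock.recvfrom(1024)
        return True
    except (socket.timeout, OSError):
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    # Hypothetical check against a service instance; on False you would
    # launch the replacement instance and update the DNS records.
    print(udp_healthy("203.0.113.10", 5000))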
However, look at the amount of time that it will take you to develop and test this new system. Go back and rethink your strategy.

Kubernetes - do I need to use https for container communication inside a pod?

Been googling it for a while and can't figure out the answer: suppose I have two containers inside a pod, and one has to send the other some secrets. Should I use https or is it safe to do it over http? If I understand correctly, the traffic inside a pod is firewalled anyway, so you can't eavesdrop on the traffic from outside the pod. So... no need for https?
Containers inside a Pod communicate using the loopback network interface, localhost.
If the destination address is localhost, TCP packets are routed back at the IP layer itself.
It is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a computer program sends to a loopback IP address is simply and immediately passed back up the network software stack as if it had been received from another device.
So traffic between containers inside a Pod over loopback never crosses a physical network and cannot be hijacked or altered in transit.
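A tiny demonstration of the point: two endpoints on the same host (standing in for the two containers in the Pod) exchanging a secret over 127.0.0.1 in plain text. The payload travels only through the kernel's loopback stack and is never handed to a network interface. The port and token here are of course made up.

```python
import socket
import threading

def run_receiver(server: socket.socket, received: list) -> None:
    """Accept one connection and record what arrives (the 'second container')."""
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port, loopback only
server.listen(1)
port = server.getsockname()[1]

received: list = []
t = threading.Thread(target=run_receiver, args=(server, received))
t.start()

# The 'first container' sends the secret with no TLS, over localhost.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"my-secret-token")

t.join()
server.close()
print(received[0])
```

Because the listener is bound to 127.0.0.1, nothing outside the host's network namespace can even connect to it, let alone observe the exchange.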
If you want to understand more, take a look at understanding-kubernetes-networking.
Hope that answers your question.

How to force IP packets over a specific server?

I need to configure hosts to route their network packets in a way that they pass another specific server (in my own network) before they reach their destination.
An example network setup can be seen at the following link (image hosted via Stack Overflow):
I have two hosts (Host-1 and Host-2) which communicate with each other over a Gateway/Router in a star network. The specific server (called NetEM-Host) is needed for NetEM manipulation and also connected to the Gateway.
Now, every packet that the two hosts send needs to be first routed over NetEM-Host and only afterwards reach its destination.
How can I configure the network in this way, without altering the Gateway?
(FYI: The Gateway was configured by OpenStack and I cannot SSH on it to change things.)
I was thinking about altering the routing tables ("route") of Host-1 and Host-2, but NetEM-Host would not be the next hop for the two hosts, so I cannot define it there -- am I correct? Any suggestions are welcome!
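For what it's worth, if NetEM-Host sits on the same subnet as the two hosts (which the star topology around one gateway suggests), it actually is a valid next hop, and a per-destination host route is enough -- no change on the Gateway. The sketch below only generates the commands each host would run as root; the addresses are hypothetical, and NetEM-Host must have IP forwarding enabled.

```python
def host_route_commands(peer_ip: str, netem_ip: str) -> list[str]:
    """Commands one host would run (as root) to steer traffic for its peer
    through NetEM-Host instead of straight to the Gateway. Assumes
    NetEM-Host shares the hosts' subnet, so it is a reachable next hop."""
    return [
        f"ip route add {peer_ip}/32 via {netem_ip}",
        "# on NetEM-Host (once): sysctl -w net.ipv4.ip_forward=1",
    ]

# Hypothetical addresses: Host-1 steering its Host-2 traffic via NetEM-Host.
for cmd in host_route_commands("192.168.1.20", "192.168.1.30"):
    print(cmd)
```

Host-2 would add the mirror-image route for Host-1's address so both directions pass through NetEM-Host.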

TCP connection between two openshift containers

I have two applications (diy container type) which have to be connected via TCP. Let's take as example application clusternode1 and clusternode2.
Each one has TCP listener set up for $OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT.
For some reason clusternode1 fails to connect to any of the following options for clusternode2:
$OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT
$OPENSHIFT_APP_DNS
Can you please help me understand what the URL for an external TCP connection should be?
You might check the logs to see whether the OPENSHIFT_DIY_IP of each app is within the same subnet. If one, say, is...
1.2.3.4
...and the other is...
1.5.6.7
...for example, then you might not expect Amazon's firewalls to just arbitrarily allow TCP traffic from one subnet to another. If this were allowed by default then one person's app might try to hack another's.
I know that when you're dealing directly with Amazon AWS and you spin up multiple virtual servers, you have to create security groups to allow traffic between them. This might be something that's necessary here.
Proxy ports: I don't know if this is useful, but it's possible that a private IP address is bound to your application(s) and a NAT layer translates it to a public IP address.
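Before chasing firewall or NAT theories, it's worth confirming basic reachability with a plain TCP connect from clusternode1. A minimal sketch -- substitute clusternode2's actual $OPENSHIFT_DIY_IP/$OPENSHIFT_DIY_PORT values or its $OPENSHIFT_APP_DNS name (the address below is a placeholder):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a plain TCP connect to host:port. A quick way to tell
    whether the listener is reachable at all before debugging further."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder target; use clusternode2's real address and port.
    print(tcp_reachable("127.0.0.1", 8080))
```

If the DNS name connects but the $OPENSHIFT_DIY_IP does not, you're likely hitting exactly the private-IP/NAT situation described above.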