I can't seem to find information on setting up multiple NLB clusters on a single NIC.
I've already set up my first NLB cluster, which is used to load balance traffic to a web server running on two hosts. I am now looking to set up a second web server on each of these hosts. The second web server will be given a unique IP address, and I'm hoping to create a second NLB cluster instance to support it.
I have bound a second IP address to the network card on each of my hosts. However, when I launch NLB and choose the option to add a new cluster, there are no interfaces available to create the cluster.
Has anyone else attempted this?
I haven't tried setting up something quite like you describe, but we do have multiple websites running out of our single Windows Server 2008 R2 NLB cluster. The NLB interface lets you add additional IP addresses to the cluster itself, so one cluster managing multiple IP addresses should be able to do what you need. You can then assign the different IP addresses to different websites.
I followed this tutorial to set up two EC2 instances: "12. Creation of two EC2 instances and how to establish ping communication" on YouTube.
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from the other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
However, the workaround, I figured, is to add a port rule via the security group. I don't like this option, since it means that port (on the machine that hosts the web server) can be accessed via the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has some service listening on that port).
What is the solution to achieve something like this when working with EC2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule in the security group to allow all traffic (or just ICMP traffic) sourced from the security group itself.
You can read more about it here:
AWS Reference
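For illustration, here is a minimal boto3 sketch of that rule; the security group ID and region are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

SG_ID = "sg-0123456789abcdef0"  # hypothetical ID of the group shared by both instances

# Allow all traffic whose *source* is the security group itself, i.e. the
# other instances in the same group; nothing is opened to the public internet.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "-1",  # -1 means all protocols; use "icmp" to allow ping only
            "UserIdGroupPairs": [{"GroupId": SG_ID}],
        }
    ],
)
```

With this rule in place, curl from one instance to the other's private IP on port 8000 should work, while the same port stays closed to the outside.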
I am trying out Kubernetes on bare metal. As an example, I have Docker containers exposing port 2002 (this is not HTTP).
I do not need to load balance traffic among my pods, since each new pod does its own job and does not serve the same network clients.
Is there software that would let each newly created service be reached on a new IP from an internal DHCP server, so I can preserve my original container port?
I can create a service with NodePort and access the pod via some randomly generated port that is forwarded to my port 2002.
But I need to preserve port 2002 when accessing my containers.
Each new service would need to be accessible on a new LAN IP, but on the same port as the containers.
Is there some network plugin (a LoadBalancer?) that would forward from an IP assigned by DHCP back to this randomly generated service port, so I can access the containers on their original ports?
In short: start a service in Kubernetes and access it at IP:2002, then start another service from the same container image and access it at another_new_IP:2002.
Ah, that happens automatically within the cluster -- each Pod has its own IP address. I know you said bare metal, but this post by Lyft may give you some insight into how you can skip or augment the SDN and surface the Pod's IPs into routable address space, doing exactly what you want.
In more practical terms: I haven't ever had the need to attempt such a thing, but CNI is likely flexible enough to interact with a DHCP server and pull a Pod's IP from a predetermined pool, so long as the pool is big enough to accommodate the frequency of Pod creation and termination.
Either way, I would absolutely read a blog post describing your attempt -- successful or not -- to pull this off!
On a separate note, be careful, because the word Service means something specific within Kubernetes, even though it is regrettably often used in a more generic sense (as I suspect you did). Thankfully, a Service is designed to do the exact opposite of what you want to happen, so there was little chance of confusion; just be aware.
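For what it's worth, a bare-metal LoadBalancer implementation such as MetalLB (an assumption; it is not mentioned in the thread) can hand each Service of type LoadBalancer its own LAN IP from a configured pool, which would preserve port 2002. A minimal sketch with the Kubernetes Python client, with hypothetical names and labels:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig on the machine running this

# Hypothetical Service: with MetalLB (or similar) installed, each Service of
# type LoadBalancer is assigned its own external LAN IP from the configured
# address pool, and port 2002 is preserved end to end (lan_ip:2002 -> pod:2002).
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="worker-1"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "worker-1"},  # assumed label on the target Pods
        ports=[client.V1ServicePort(port=2002, target_port=2002)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```

Each additional workload would get its own Service, and hence its own IP, all listening on 2002.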
In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to get the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance, you can set up a load balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster; basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to connect to the Pods this way: the services and the containers run in a separate software-based network, and they will not be reachable unless properly exposed, for example through a NodePort.
On the other hand, if you are interested in making your Pods, running on two different Kubernetes nodes, reachable through a single entry point, you have to set up Kubernetes Ingress and load balancing to expose your services. These resources are also built on the Google Cloud Platform load balancer components, but when created they also trigger the required changes to the Kubernetes network.
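As a rough illustration of that second option, here is a sketch using the Kubernetes Python client to create an Ingress; on GKE this provisions a single HTTP(S) load balancer with one external IP in front of the whole cluster. The Service name and port are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already configured for the cluster

# Hypothetical Ingress: GKE backs it with one HTTP(S) load balancer, so the
# Pods on both nodes are reached through a single external IP.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1IngressSpec(
        default_backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name="web",  # assumed Service (e.g. type NodePort) in front of the Pods
                port=client.V1ServiceBackendPort(number=80),
            )
        )
    ),
)
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```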
I have an ELB (in EC2-Classic) running, and one of my clients wants to hardcode an IP into his firewall rule to access our site.
I know that an ELB doesn't provide a static IP, but is there a way to set up an instance (just for them) that they could hit and that would act as a gateway to our API?
(I was thinking of using HAProxy on OpsWorks, but it points directly at my instances, and I need something that points to my ELB, because SSL termination happens at that level.)
Any recommendation would be very helpful.
I assume you are running one or more instances behind your ELB.
You should be able to assign an Elastic IP to one of those instances. Note that in EC2-Classic, an EIP is disassociated when the instance stops, so it will need to be reattached when the instance starts again.
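A minimal boto3 sketch of that, assuming EC2-Classic (the region and instance ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Allocate an Elastic IP in EC2-Classic ("standard" domain); in EC2-Classic
# you associate by public IP rather than by allocation ID.
addr = ec2.allocate_address(Domain="standard")
ec2.associate_address(
    InstanceId="i-12345678",  # hypothetical instance behind the ELB
    PublicIp=addr["PublicIp"],
)
```

The client can then whitelist that one IP, while the instance itself proxies requests on to the ELB.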
I am trying to set up a Consul server in an OpenStack cluster. I have the server provisioned and have associated a floating IP with the server that is accessible from Vagrant machines on developer boxes.
I am able to join the server from a local Vagrant machine if I use the -advertise flag on the consul agent -server command and pass the floating IP I set. However, I am provisioning the server with Salt and need the machine to be able to determine that IP automatically.
By default, the server uses its bind address, which is set to its 10.x.x.x local IP. That local IP is the only one I seem to be able to easily determine.
Is there a way to get an instance's floating IP(s)?
Bonus points: is there a way to get an instance's name?
The information you are looking for is available to an instance through the OpenStack metadata service. It is basically a REST API that an instance can hit to get information specific to itself. See more information here:
http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
You should be able to get both the instance name and its floating IP (look for "public-ipv4").
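For example, a small Python sketch run from inside the instance (the metadata address is standard and needs no credentials):

```python
import requests

BASE = "http://169.254.169.254"  # link-local metadata address, same on every instance

# EC2-compatible endpoints, which OpenStack also serves:
floating_ip = requests.get(BASE + "/latest/meta-data/public-ipv4").text
hostname = requests.get(BASE + "/latest/meta-data/hostname").text

# The OpenStack-native endpoint returns one JSON document that also includes
# the instance name.
meta = requests.get(BASE + "/openstack/latest/meta_data.json").json()

print(floating_ip, hostname, meta.get("name"))
```

A Salt state could run something like this at provision time and feed the result into the -advertise flag.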