TCP packet loss after Azure load balancer

Suppose that at first we have one VM behind the (external) Azure load balancer. A TCP connection gets routed to VM1:
--connection1 packet--> Azure Load Balancer (20.20.20.20) --connection1 packet--> VM1
Then we add a new VM behind the LB:
--connection1 packet--> Azure Load Balancer (20.20.20.20) ----> VM1
                                                          \---> VM2
Normally the connection would still be routed to VM1, since there is connection tracking. But this is not guaranteed, because the Azure load balancer is implemented as a distributed software load balancer, as described in this article.
So the packet might be routed to VM2. The expected behaviour is that the packet reaches VM2 and VM2 replies with a TCP RST to end the connection. But it turns out the packet is dropped before it ever gets inside VM2.
I would like to know why this packet is dropped. Is it because of NAT?

I have gone through the article, and I am not sure I am spotting the reason correctly:
The reason may be related to Azure's Traffic Manager routing methods. As described in the article, a weighted random load-balancing policy is used. You can look at this link for a reference on the various traffic-routing methods.
I am not at all sure; this is just a guess at a possible reason.

Related

How do you downstream data from Cloud to Thread?

I am new to OpenThread, and I have some questions about Thread device connectivity to a cloud server.
Cloud server <--IPv4--> Router (firewall) <--IPv4--> OTBR <--IPv6--> Thread network
Our products (Thread networks) will be deployed in clients' networks, which have various Internet routers/firewalls and network configurations.
Using UDP (DTLS) to PUT/GET/POST... (CoAP) data to the cloud server, have you experienced any issues with UDP timeout parameters? Will I need to do hole punching to ensure the cloud server can contact an end device?
As I understand it, from the cloud server's point of view, it can only invoke REST methods on the Border Router's CoAP server, since it doesn't know the end devices' IPv6 addresses, and I don't plan to do port forwarding.
Would allowing the cloud server to contact a specific Thread end device require an IPv6 tunnel?
Finally, if I'm spouting nonsense, please enlighten me about how you build your connection to the cloud server :)!
Thank you for reading this post; I hope I was clear.
Best,
Let me try to sort some things out.
There are two general approaches:
the client in your local network starts the communication and the cloud server answers. The router acts as a NAT. In that scenario there are usually timeouts after which the NAT rules expire, and traffic from the cloud server will no longer be forwarded to a client in the local network.
the cloud server starts the communication. That traffic is sent to your router, and the router forwards the message to a node in the local network. This approach usually requires configuration of the router (there are some protocols to do that from your client devices, but even that requires enabling the function). You configure a port on the router to forward traffic to a specific address+port of one of your clients. This requires either configuring a lot of ports (one port per client) or one CoAP node that acts as a CoAP proxy, plus the configuration for that.
The first approach will end up generating a lot of traffic just to keep the NAT mapping open (a keep-alive sketch follows below).
The second requires either a lot of configuration or a "CoAP proxy", and I'm not sure you can find a proper implementation of one.
(By the way, the router may have only a temporarily fixed IP address, e.g. one that changes per day. So the second approach requires occasional updates of the router's address in your cloud server. And sure, there are some Internet providers that don't make your router reachable at all, because they add an extra NAT.)
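As a rough sketch of what that keep-alive traffic looks like (plain UDP here for brevity; a real deployment would send the ping over the existing DTLS session, and the server address and interval are made-up placeholders):

    import socket
    import time

    # Hypothetical cloud endpoint; replace with your real CoAP/DTLS server.
    SERVER = ("cloud.example.com", 5683)
    # Home/ISP NATs often expire idle UDP mappings after 30-180 s,
    # so an interval around 25 s is a common conservative choice.
    KEEPALIVE_INTERVAL = 25

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(SERVER)  # pins the local port so the NAT mapping stays stable

    msg_id = 0
    while True:
        # A CoAP "ping" is an empty Confirmable message: version 1, type CON,
        # zero-length token, code 0.00, and a 16-bit message ID.
        msg_id = (msg_id + 1) & 0xFFFF
        sock.send(bytes([0x40, 0x00, msg_id >> 8, msg_id & 0xFF]))
        time.sleep(KEEPALIVE_INTERVAL)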

How to find the server IP of a public website?

Is there any command to find out the server IP (instead of the load balancer IP or proxy IP) for any website?
Why can't we connect to some server IPs directly? What configuration or setting blocks us from connecting via IP, and what is the need for disabling this?
Best practice for both security and load balancing is typically:
Expose the Load Balancer to the Internet
Put servers behind a firewall so that they are not directly accessible
Configure the Load Balancer to send traffic to the servers
The benefits are:
Minimum surface area exposed to the Internet (limits potential security problems)
Allows servers to be added/removed without impacting end users since they all connect via the Load Balancer (but the Load Balancer will need to know when servers are being added/removed)
Ensures that requests are balanced between the servers rather than allowing end users to directly access a server
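To make this concrete, here is a small sketch (with made-up addresses: 203.0.113.10 as the public load balancer and 10.0.1.5 as a firewalled backend) of what the setup looks like from a client's point of view:

    import socket

    def tcp_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
        """Return True if a TCP handshake to host:port completes."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical addresses, for illustration only.
    print(tcp_reachable("203.0.113.10"))  # load balancer -> True
    print(tcp_reachable("10.0.1.5"))      # firewalled backend -> False (times out)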

What is pass-through load balancer? How is it different from proxy load balancer?

Google Cloud Network load balancer is a pass-through load balancer and not a proxy load balancer. ( https://cloud.google.com/compute/docs/load-balancing/network/ ).
I cannot find any resources on pass-through LBs in general. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would redirect clients directly to the servers. In what scenarios would it be beneficial?
Are there any other types of load balancers besides pass-through and proxy?
It's hard to find resources on pass-through load balancing because everyone came up with a different name for it: pass-through, direct server return (DSR), direct routing, ...
We'll call it pass-through here.
Let me try to explain the thing:
The IP packets are forwarded unmodified to the VM; there is no address or port translation.
The VM thinks that the load balancer IP is one of its own IPs.
In the specific case of Compute Engine Network Load Balancing https://cloud.google.com/compute/docs/load-balancing/: on Linux this is done by adding a route to this IP in the "local" routing table; on Windows, by adding a secondary IP on the network interface (a sketch of the Linux side follows below).
The routing logic has to make sure that packets for a TCP connection or UDP "connection" are always sent to the same VM.
For GCE network LB see here https://cloud.google.com/compute/docs/load-balancing/network/target-pools#sessionaffinity
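As a minimal sketch of the Linux side (a made-up VIP of 203.0.113.10 on eth0; on GCE the guest environment performs an equivalent step for you automatically):

    import subprocess

    # Hypothetical load balancer IP (VIP) that this VM should accept traffic for.
    VIP = "203.0.113.10"

    # Add a "local" route for the VIP so the kernel accepts packets addressed
    # to it even though no interface actually carries that address.
    # Shell equivalent: ip route add to local 203.0.113.10/32 dev eth0
    subprocess.run(
        ["ip", "route", "add", "to", "local", f"{VIP}/32", "dev", "eth0"],
        check=True,
    )

    # Verify: the VIP should now show up in the local routing table.
    subprocess.run(["ip", "route", "show", "table", "local"], check=True)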
Regarding other load balancer types, there can't be a definitive list; here are a few examples:
NAT. An example with iptables is here https://tipstricks.itmatrix.eu/use-iptables-to-load-balance-web-trafic/.
TCP Proxy. In Google Cloud Platform you can use TCP Proxy Load Balancing https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy (a toy proxy of this kind is sketched after this list).
HTTP Proxy. In Google Cloud Platform you can use HTTP(s) Load Balancing https://cloud.google.com/compute/docs/load-balancing/http/
DNS, called "DNS forwarder". For example: dnsmasq http://www.thekelleys.org.uk/dnsmasq/doc.html, or bind in "forwarding" mode https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04
Database communication protocols. For example the MySQL Protocol with https://github.com/mysql/mysql-proxy
SIP protocol. Big list of implementations here https://www.voip-info.org/wiki/view/Open+Source+VOIP+Software#SIPProxies
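To make the contrast with pass-through concrete, here is a toy TCP proxy that round-robins connections across two made-up backends. Note that each backend sees the proxy's IP as the source address, which is exactly what pass-through avoids:

    import itertools
    import socket
    import threading

    # Hypothetical backends; they see the proxy, not the client, as the peer.
    BACKENDS = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]
    rr = itertools.cycle(BACKENDS)

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until the connection closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            dst.close()

    def handle(client: socket.socket) -> None:
        backend = socket.create_connection(next(rr))  # a *new* TCP connection
        threading.Thread(target=pump, args=(client, backend), daemon=True).start()
        threading.Thread(target=pump, args=(backend, client), daemon=True).start()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        handle(conn)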
As for the advantages of pass-through over other methods:
Some applications won't work, or need to be adapted, if the addresses on the IP packets change, for example the SIP protocol. See Wikipedia for more on applications that don't play along well with NAT: https://en.wikipedia.org/wiki/Network_address_translation#NAT_and_TCP/UDP.
Here the advantage of pass-through is that it does not change the source and destination IPs.
Note that there is a trick for a load balancer working at a higher layer to keep the IPs: the load balancer spoofs the IP of the client when connecting to the backends. As of this writing no load balancing product uses this method in Compute Engine.
If you need more control over the TCP connection with the client, for example to tune the TCP parameters, pass-through or NAT have an advantage over a TCP (or higher-layer) proxy.
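For example (a sketch assuming a Linux backend, where options set on a listening socket are inherited by accepted connections), with pass-through the client's TCP connection terminates directly on the VM, so tuning like this applies to the real client connection rather than to a proxy's:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # These apply to the actual client connections, because no intermediate
    # proxy terminates TCP on our behalf.
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)     # disable Nagle
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # 1 MiB buffer
    srv.bind(("0.0.0.0", 8080))
    srv.listen()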

AWS: Distribute HTTP requests with a Load Balancer

Using AWS, I want to:
Distribute HTTP request sending over several different IPs
Send these requests without using a proxy
Send them using an Elastic Load Balancer and an Auto Scaling group
Send these requests from one instance to several instances in the Auto Scaling group
Have each of those instances assign a different IP to the request it forwards, so the request goes out with that instance's IP
How do I do this? Is there any way to set up the load balancer just to pass HTTP requests through? I want each HTTP request to have a different IP address.
So basically you want to connect to EC2 instances behind an ELB and to know, on the EC2 instances, the original IP address of the connection rather than the ELB's IP address.
If my understanding of your question is correct, then the answer is:
Use a TCP listener on the ELB instead of HTTP listeners.
Enable Proxy Protocol on the ELB.
On your EC2 instances, parse the Proxy Protocol header to recover the original IP address (a sketch follows below).
A full step-by-step guide and demo application are available on AWS's blog.
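As a rough sketch of the last step (assuming Proxy Protocol v1, the human-readable form used by the Classic ELB; the function name is illustrative):

    import socket

    def read_proxy_protocol_v1(conn: socket.socket):
        """Read the Proxy Protocol v1 header and return (client_ip, client_port).

        The ELB prepends a line such as
            PROXY TCP4 198.51.100.22 10.0.0.5 53211 80\r\n
        before the application data.
        """
        line = b""
        while not line.endswith(b"\r\n"):
            byte = conn.recv(1)  # byte-by-byte so we don't consume app data
            if not byte:
                raise ConnectionError("connection closed before header ended")
            line += byte
            if len(line) > 107:  # v1 headers are at most 107 bytes
                raise ValueError("malformed Proxy Protocol header")
        parts = line.decode("ascii").split()
        if not parts or parts[0] != "PROXY":
            raise ValueError("not a Proxy Protocol connection")
        _, _, src_ip, _, src_port, _ = parts
        return src_ip, int(src_port)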

Load on load balancer

In our deployment of TCP servers, we have a load balancer to which all clients initially connect. The load balancer then gives each of them the IP address of the actual server they are supposed to connect to. The client then disconnects from the load balancer and opens a TCP connection to the server IP address it was given. Thus, load is distributed among the servers.
This arrangement works perfectly well for thousands of connections. But we are worried whether it will work for millions of connections. Our nightmare is that the load balancer itself would not be able to hand out server IP addresses to all those clients in a timely manner. What are the alternatives here?
It really depends on the load balancer you are using whether it can cope or not. Some load balancers can handle millions of L4 connections. Also, I don't think that having connections go directly to the server is a good idea, because what happens to those connections if the server becomes unavailable? I would keep all traffic going through the load balancer. You could also consider Direct Server Return (DSR), where requests from clients go through the load balancer and responses go directly to the client, bypassing the load balancer.
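For reference, the hand-off scheme described in the question boils down to something like the following sketch (made-up addresses). Each client costs the balancer only one accept and one tiny write, which is why it scales further than it might seem, but the failover caveat above still applies:

    import itertools
    import socket

    # Hypothetical pool of real server addresses, handed out round-robin.
    SERVERS = itertools.cycle([b"10.0.0.1:9000", b"10.0.0.2:9000"])

    lb = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lb.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lb.bind(("0.0.0.0", 7000))
    lb.listen()

    while True:
        conn, _ = lb.accept()
        conn.sendall(next(SERVERS))  # tell the client which server to use
        conn.close()                 # client reconnects to that server directly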
