How does AMQP overcome the difficulties of using TCP directly?

How does AMQP overcome the difficulties of using TCP directly when sending messages? Or more specifically in a pub/sub scenario?

In AMQP there is a broker; the broker receives the messages and then does the hard part of routing them to exchanges and queues. You can also set up durable queues, which save messages for clients even while they are disconnected.
You could certainly do all this yourself, but it's a tremendous amount of work to do correctly. RabbitMQ in particular has been battle tested in many deployments.
You are still using TCP underneath AMQP; AMQP just provides a higher-level abstraction.
You would also have to choose a wire protocol for all your clients to use, whereas AMQP already defines that wire protocol.

It overcomes the difficulties by using one and the same TCP connection for all of your threads, for performance. AMQP does this with channels. A channel is a virtual connection inside the "real" TCP connection, and it's over a channel that you issue AMQP commands.
As each thread spins up, it creates a channel on the existing connection and gets its own private communication path to the broker without any additional load on your operating system's TCP stack.
As a result, you can create a channel hundreds or thousands of times a second without your operating system seeing so much as a blip. There’s no limit to how many AMQP channels you can have on one TCP connection. Think of it like a bundle of fiber optic cable.
Source: RabbitMQ in Action.
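
As a quick sketch of this with the RabbitMQ Java client (the host, queue name, and worker count are invented for the example, not from the book), one TCP connection serves the whole process while each thread opens its own lightweight channel:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class OneConnectionManyChannels {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        // One TCP connection for the whole process...
        try (Connection connection = factory.newConnection()) {
            Thread[] workers = new Thread[4];
            for (int i = 0; i < workers.length; i++) {
                final int id = i;
                // ...and one cheap channel per thread: a virtual connection
                // multiplexed over the single TCP socket.
                workers[i] = new Thread(() -> {
                    try (Channel channel = connection.createChannel()) {
                        channel.queueDeclare("stats", true, false, false, null);
                        channel.basicPublish("", "stats", null,
                                ("hello from worker " + id).getBytes());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                workers[i].start();
            }
            // Let all channels finish before the connection closes.
            for (Thread w : workers) w.join();
        }
    }
}
```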


gRPC call, channel, connection and HTTP/2 lifecycle

I read the gRPC Core concepts, architecture and lifecycle, but it doesn't go into the depth I'd like to see. There is the RPC call, gRPC channel, gRPC connection (not described in the article) and HTTP/2 connection (not described in the article).
I'm interested in knowing how these come together. For example, what happens to the channel when a RPC throws an exception? What happens to the gRPC connection when the channel is closed? When is the channel closed? When is the gRPC connection closed? Heart beats? What if the deadline is exceeded?
Can anyone answer these questions, or point me to resources that can?
The connection is not a gRPC concept. It is not part of the normal API and is an implementation detail. This should be seen as fairly normal, like HTTP libraries providing details about HTTP exchanges but not exposing connections.
It is best to view RPCs and connections as two mostly-separate systems.
The only real guarantee is that "connections are managed by channels," for varying definitions of "managed." You must shut down channels when no longer used if you want connections and other resources to be freed. Other details are either an implementation detail or an advanced API detail.
There is no "gRPC connection." A "gRPC connection" would just be a standard "HTTP/2 connection." Except that is even an implementation detail of the transport in many gRPC implementations. That allows having alternative "connection" types like "inprocess" or QUIC (via Cronet, where there is not a classic "connection" at all).
It is the channel's job to hold all the connections and reconnect as necessary. It delegates part of that responsibility to load balancers and the load balancing APIs do have a concept of connections (subchannels). By not exposing connections to the application, load balancers have a lot of freedom to operate.
I'll note that gRPC C-core based implementations share connections across channels.
What happens to the channel when an RPC throws an exception?
The channel and connection are not impacted by a failed RPC. Note that connection-level failures typically cause RPCs to fail, but features like retries could allow the RPC to be re-sent on a new connection.
What happens to the gRPC connection when the channel is closed?
The connections are closed, eventually. Channel shutdown isn't instantaneous, because existing RPCs can continue, and connection shutdown isn't instantaneous either. But once all RPCs complete, the connections are closed. (C-core, though, won't shut down a connection until no channels are using it.)
When is the channel closed?
Only when the user closes it.
When is the gRPC connection closed?
Lots of times. The client may close a connection when it is no longer needed. For example, let's say the server IP address changes and the client needs to connect to 1.1.1.2 instead of 1.1.1.1. A new connection will be created, and new RPCs will go to the new IP address. The client may also close connections it thinks are dead (e.g., via keepalive timeouts).
Servers have a lot of say in when to close connections. They may close them simply because they are old, or because they have been idle, or because the server is overloaded. But those are just common cases; the server can shut down a connection at will.
What if the deadline is exceeded?
Deadlines only apply to RPCs and don't impact the channel or a connection.
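To make the lifecycle concrete, here is a minimal grpc-java sketch (the host and port are placeholders, not from the original answer): the application owns one long-lived channel, and the connections beneath it are entirely the channel's business.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public class ChannelLifecycle {
    public static void main(String[] args) throws InterruptedException {
        // One long-lived channel per target; it manages connections internally.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("greeter.example.com", 443)
                .build();

        // ... create stubs from the channel and issue RPCs; a failed RPC
        // does not invalidate the channel ...

        // Shut down only when the application is done with the channel.
        // In-flight RPCs may complete first; then connections are freed.
        channel.shutdown();
        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```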
I was actually waiting for Eric to answer this as he is the expert in this!
I also have been playing with gRPC for a while now, and I would like to add a few things here for beginners. Anyone more experienced, please feel free to edit!
A channel is an abstraction over a long-lived connection! The client application will create a channel on start-up. The channel can be reused/shared among multiple threads; it is thread safe. One channel is enough (for most use cases) for multiple threads and for multiplexing concurrent requests. It is the channel's responsibility to close, reconnect, keep the connection alive, etc.; as users, we generally do not have to worry about this. The client application can close the channel any time it wants. Channel creation seems to be an expensive process, so we would not open/close one for every RPC.
When you use a gRPC load balancer/name resolver for a domain name, and the name resolver resolves the domain to multiple IP addresses, a channel creates multiple subchannels, where each subchannel is an abstraction over a connection to one server. So a channel can also represent multiple connections!
Adding some points to note from Eric's comment.
"adding the default load balancer still only creates (approximately) one connection if the name resolver returns multiple addresses, as the default is pick_first. But if you change the load balancer to round_robin or virtually any other policy, then yes, there will be multiple connections in a channel. Even if a name resolver returns one address, the load balancer is free to create multiple connections (e.g., for higher throughput), but that's not common today"
An underlying connection can be closed at any time for any reason. For example: the remote server is shutting down gracefully for scheduled maintenance, or a connection has been idle for a long time. In those cases the server could send a GOAWAY signal to the client, and the client might disconnect and reconnect to some other server. Or the server might crash due to an OOM error; in that case the channel will detect the connection failure and will retry, establishing a new connection to some other server, etc.
A channel can keep sending PING frames to the server to keep the connection alive. These are all configurable via the channel builder.
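For instance, in grpc-java these keepalive PINGs are configured on the channel builder; a minimal sketch with arbitrary values (host and port invented):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public class KeepaliveChannel {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("greeter.example.com", 443)
                .keepAliveTime(30, TimeUnit.SECONDS)    // PING after 30s of transport inactivity
                .keepAliveTimeout(10, TimeUnit.SECONDS) // declare the connection dead if no ack
                .keepAliveWithoutCalls(true)            // ping even when no RPCs are active
                .build();
    }
}
```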
With this information in mind, let's look at your questions:
What happens to the channel when an RPC throws an exception?
Nothing happens to the channel. An unhandled exception on the server might fail the RPC on the client side, but the channel is still usable for any RPC calls.
What happens to the gRPC connection when the channel is closed?
A channel is an abstraction over the connection, so the connection will be closed too. (Again, there is no "gRPC connection" as such, as Eric mentioned; it would be an HTTP/2 connection.)
When is the channel closed?
Any time you want. But normally when the application shuts down.
When is the gRPC connection closed?
It is not our problem. Channel takes care of this.
Heart beats?
The channel sends PING frames periodically to keep the connection alive.
What if the deadline is exceeded?
It is something like a timeout on the client side. When the deadline is exceeded, the client might cancel the request. Once again, nothing happens to the channel. (It might trigger an exception on the server side, which I had noticed a few times: "Received DATA frame for an unknown stream", https://github.com/grpc/grpc-java/issues/3548. It seems to have been fixed now.)
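As an illustration, here is a hedged grpc-java sketch of a per-RPC deadline. It assumes the GreeterGrpc/HelloRequest/HelloReply classes generated from the standard hello-world proto, plus an existing channel; none of this is from the original answer.

```java
import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import java.util.concurrent.TimeUnit;

// Deadline is set per stub/RPC, not on the channel.
GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel)
        .withDeadlineAfter(2, TimeUnit.SECONDS);
try {
    HelloReply reply = stub.sayHello(
            HelloRequest.newBuilder().setName("test").build());
} catch (StatusRuntimeException e) {
    if (e.getStatus().getCode() == Status.Code.DEADLINE_EXCEEDED) {
        // This one RPC failed; the channel remains usable for new calls.
    }
}
```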

Should my IoT device poll with HTTP or listen with TCP?

I'm creating an IoT Device + Server system using .NET Micro Framework and ASP.NET WebAPI (Probably in Azure).
The IoT device needs to be able to frequently update the server with stats whilst also being able to receive occasional incoming commands from the server that would change its behaviour. In this sense, the device needs to act as both client and server itself.
My concern is getting the best balance between the security of the device and the load on the server. Furthermore, there must be a relatively low amount of latency between the server needing to issue a command and the device carrying it out; of the order of a few seconds.
As I see it my options are:
1. Upon connection to the internet, the device establishes a persistent TCP connection to the server, which is then used for both polling and receiving commands.
2. The device listens on a port (e.g. with HttpListener) for incoming commands whilst updating the server via frequent HTTP requests.
3. The device only ever polls the server with HTTP requests. The server uses the response to give the device commands.
The 2nd option seems to be the least secure as the device would have open incoming ports. The 1st option looks the most difficult to reliably implement as it would require low level socket programming. The 3rd option seems easy and secure but due to the latency requirements the device would need to poll every few seconds. This impacts the scalability of the system.
So at what frequency does HTTP polling create more overhead than just constantly keeping a TCP connection open? 5s? 3s? 1s? Or am I overstating the overhead of keeping a TCP connection open in ASP.NET? Or is there a completely different way that this can be implemented?
Thanks.
So at what frequency does HTTP polling create more overhead than just constantly keeping a TCP connection open? 5s? 3s? 1s?
There is nothing you need to do to keep a TCP connection open. The only thing you might need is TCP keep-alive (which has nothing to do with HTTP keep-alive!) in case you want to keep the connection idle (i.e. no data to send) for a long time.
With HTTP, your overhead already starts with the first request, since your data needs to be encapsulated in an HTTP message. This overhead can be comparatively small if the message is large, or it can easily be much larger than the message itself for small messages. Also, HTTP servers close the TCP connection after some idle time, so you might need to re-establish the TCP connection for the next data exchange, which again costs overhead and latency.
HTTP has the advantage of passing through most firewalls and proxies, while plain TCP does not. You also get encryption more or less for free with HTTPS, i.e. there are established standards for direct encrypted connections and for tunneling through a proxy.
WebSockets are something in between: you do an HTTP request and then upgrade from HTTP to WebSocket. The initial overhead is thus as large as for HTTP, but for subsequent messages the overhead is not much higher than for TCP. And you can also do WebSockets over TLS (i.e. wss:// instead of ws://). It passes through most simple firewalls and proxies, but deeper-inspection firewalls might still have trouble with it.
Setting up a TCP listener will be a problem if your IoT device sits behind a NAT router, i.e. the usual setup inside private or SoHo networks. To reach the device, one would need to open a tunnel at the router from outside into the network, either by configuring the router by hand or with UPnP (which is often switched off for security reasons). So you would introduce too many problems for the average user.
This means that the option with the fewest problems for the customer is probably HTTP polling, but it is also the one with the highest overhead. WebSockets are still mostly compatible and have less overhead; even less overhead can be achieved with a plain TCP connection to the server. A TCP listener on the device, by contrast, would cause too much trouble.
As for resources on the server side: each HTTP polling request might use a new TCP connection, but you can also reuse an existing one. You can thus trade more overhead and latency on the client side (a new TCP connection for each request), which needs few resources on the server side, against less overhead and latency on the client side (multiple HTTP requests per TCP connection), which needs more resources on the server side. With WebSockets and plain TCP connections you always need more server-side resources, unless your client only automatically re-establishes the connection on loss of connectivity.
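To make that trade-off concrete, here is a hypothetical polling loop using Java's built-in HttpClient (URL, query, and interval are all invented); a single shared client reuses the underlying TCP connection between polls, avoiding a fresh TCP (and TLS) handshake per request:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        // One shared client: its connection pool keeps the TCP connection
        // alive between polls (HTTP keep-alive).
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://server.example.com/commands?device=42"))
                .build();
        while (true) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (!response.body().isEmpty()) {
                // carry out the command returned in the response body
            }
            Thread.sleep(3000); // poll every few seconds, per the latency requirement
        }
    }
}
```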
These days you should use an IoT-specific communication protocol over TLS for secure, lightweight connections. For example, AWS uses MQTT (http://mqtt.org/) and Azure uses AMQP (https://www.amqp.org/).
The idea is that you get a broker you can connect to securely, then you use a messaging protocol with a topic to route messages to the proper devices. IBM has been using MQTT for a long time, and routers now typically come with port 8883 open, which is MQTT over TLS.
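A minimal sketch of that brokered pattern with the Eclipse Paho Java client (the broker host, credentials, and topic names are invented for the example): the device publishes stats and receives commands over the same outbound TLS connection, with no inbound port open.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DeviceClient {
    public static void main(String[] args) throws Exception {
        // 8883 is the conventional MQTT-over-TLS port.
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "device-42");
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("device-42");
        opts.setPassword("secret".toCharArray());
        client.connect(opts);

        // Commands arrive over the same brokered connection; no listener needed.
        client.subscribe("devices/42/commands",
                (topic, msg) -> System.out.println("command: " + new String(msg.getPayload())));

        // Frequent stat updates go out as small publishes.
        client.publish("devices/42/stats", new MqttMessage("temp=21.5".getBytes()));
    }
}
```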
Good Luck!
Simply use SignalR to connect client and server. It gives you minimal latency without polling, and the API is very simple to use.
Physically, this runs over WebSockets, which scale to a large number of concurrent connections. If you don't need more than 100k connections per Windows server, this would not be a concern.

HTTP push over 100,000 connections

I want to use a client-server protocol to push data to clients which will always remain connected, 24/7.
HTTP is a good general-purpose client-server protocol. I don't think the semantics could possibly be very different for any other protocol, and many good HTTP servers exist.
The critical factor is the number of connections: the application will gradually scale up to a very large number of clients, say 100,000. They cannot be servers because they have dynamic IP addresses and may be behind firewalls. So, a socket link must be established and preserved, which leads us to HTTP push. Only rarely will data actually be pushed to a given client, so we want to minimize the connection overhead too.
The server should handle this by accepting the connection, inserting the remote IP and port into a table, and leaving it idle. We don't want 100,000 threads running, just so many table entries and file descriptors.
Is there any way to achieve this using an off-the-shelf HTTP server, without writing at the socket layer?
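For a sense of what that accept-and-park pattern looks like without off-the-shelf help, here is a minimal Java NIO sketch (port and bookkeeping invented): each idle client costs a selector registration and a table entry, not a thread.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.HashMap;
import java.util.Map;

public class IdleConnectionServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One table entry per client; no per-connection thread.
        Map<String, SocketChannel> clients = new HashMap<>();

        while (true) {
            selector.select(); // blocks until something happens on any socket
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, 0); // park idle: no interest ops until we push
                    clients.put(client.getRemoteAddress().toString(), client);
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```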
Use Push Framework: http://www.pushframework.com. It was designed for exactly this goal: managing a large number of long-lived, asynchronous, full-duplex connections.
LightStreamer (http://www.lightstreamer.com/) is a tool made specifically for push operations over HTTP; it should solve this problem.
You could also look at Jetty + Continuations.

How to implement a bidirectional "mailbox service" over TCP?

The idea is to allow two peer processes to exchange messages (packets) over TCP as asynchronously as possible.
The way I'd like it to work is for each process to have an outbox and an inbox. The send operation is just a push onto the outbox. The receive operation is just a pop from the inbox. The underlying protocol would take care of the communication details.
Is there a way to implement such mechanism using a single TCP connection?
How would that be implemented using BSD sockets and modern OO socket APIs (like the Java or C# socket APIs)?
Yes, it can be done with a single TCP connection. For one obvious example (though a bit more elaborate than you really need), you could take a look at the NNTP protocol (RFC 3977). What you seem to want would be similar to retrieving and posting articles.
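Here is a minimal sketch of the outbox/inbox idea over a single socket in Java, assuming a simple length-prefixed framing (the class name and framing are invented for illustration, not a standard protocol): a writer thread drains the outbox onto the wire, and a reader thread frames incoming bytes back into the inbox.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Mailbox {
    private final BlockingQueue<byte[]> outbox = new LinkedBlockingQueue<>();
    private final BlockingQueue<byte[]> inbox = new LinkedBlockingQueue<>();

    public Mailbox(Socket socket) throws Exception {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());

        // Writer thread: pop from the outbox, write length-prefixed frames.
        new Thread(() -> {
            try {
                while (true) {
                    byte[] msg = outbox.take();
                    out.writeInt(msg.length);
                    out.write(msg);
                    out.flush();
                }
            } catch (Exception e) { /* connection closed */ }
        }).start();

        // Reader thread: reassemble frames, push complete messages to the inbox.
        new Thread(() -> {
            try {
                while (true) {
                    byte[] msg = new byte[in.readInt()];
                    in.readFully(msg);
                    inbox.put(msg);
                }
            } catch (Exception e) { /* connection closed */ }
        }).start();
    }

    public void send(byte[] msg) throws InterruptedException { outbox.put(msg); }
    public byte[] receive() throws InterruptedException { return inbox.take(); }
}
```

Both directions share the one TCP connection; the queues decouple application threads from the socket, which is the asynchrony the question asks for.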

Unix TCP servers and UDP Servers

Why are TCP servers mostly designed so that a new process is spawned to handle each accepted connection, while UDP servers mostly use a single process to handle all client requests?
The main difference between TCP and UDP is, as stated before, that UDP is connectionless.
A program using UDP has only one socket where it receives messages. So there's no problem if you just block and wait for a message.
If using TCP you get one socket for every client which connects. Then you can't just block and wait for ONE socket to receive something, because there are other sockets which must be processed at the same time.
So you have two options: either use non-blocking methods or use threads. Code is usually much simpler when you don't have one while loop that has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
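For contrast, here is a minimal single-threaded UDP echo server in Java (port invented for the example): one socket serves every client, so simply blocking on it is fine.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpEchoServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9000);
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until any client sends
            // The packet carries the sender's address, so we can reply directly.
            socket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                    packet.getAddress(), packet.getPort()));
        }
    }
}
```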
When you talk with a client over a TCP connection, you maintain a TCP session. So when a new connection is established, you need a separate process (or thread; it doesn't matter how it's implemented or which OS is used) to maintain the conversation. But when you use UDP, you may receive a datagram (and you will be informed of the sender's IP and port), and there is no session to maintain.
First of all, the classic Unix server paradigm is filter based. For example, various network services can be configured in /etc/services, and a program like inetd listens on all of the TCP and UDP sockets for incoming connections and datagrams. When a connection or datagram arrives, it forks, redirects stdin, stdout, and stderr to the socket using the dup2 system call, and then execs the server process. You can take any program which reads from stdin and writes to stdout, such as grep, and turn it into a network service.
According to Stevens in "Unix Network Programming" (p. 154), there are five kinds of server I/O models:
blocking
non-blocking
multiplexing (select and poll)
signal-driven
asynchronous (POSIX aio_ functions)
In addition, servers can be either iterative or concurrent.
You ask why TCP servers are typically concurrent, while UDP servers are typically iterative.
The UDP side is easier to answer. Typically, UDP apps follow a simple request-response model, where a client sends a short request followed by a reply, with each pair constituting a stand-alone transaction. UDP servers are also the only ones which use signal-driven I/O, and even then only rarely.
TCP is a bit more complicated. Iterative servers can use any of the I/O models above, except #4. The fastest servers on a single processor are actually iterative servers using non-blocking I/O. However, these are considered relatively complex to implement, and that, plus the Unix filter idiom, were traditionally the primary reasons for using the concurrent model with blocking I/O, whether multiprocess or multithreaded. Now, with the advent of common multicore systems, the concurrent model also has a performance advantage.
Your generalization is too general. This is a pattern you might see with a Unix-based server, where process creation is inexpensive. A .NET-based service will use a new thread from the thread pool instead of creating a new process.
Programs that can continue to do useful work while they are waiting for I/O will often be multithreaded. Programs that do lots of computation which can be neatly divided into separate sections can benefit from multithreading, if there are multiple processors. Programs that service lots of network requests can sometimes benefit by having a pool of available threads to service requests. GUI programs that also need to perform computation can benefit from multithreading, because it allows the main thread to continue to service GUI events.
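As a small illustration of the concurrent model using a thread pool rather than a process per connection (port, pool size, and payload invented for the example):

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TcpPoolServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(64);
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();   // one socket per client
                pool.submit(() -> handle(client)); // concurrent model: pool, not fork
            }
        }
    }

    static void handle(Socket client) {
        try (Socket c = client) {
            c.getOutputStream().write("hello\n".getBytes());
        } catch (Exception e) {
            // a failure on one connection doesn't affect the others
        }
    }
}
```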
That's why we use TCP as an internet protocol.
