TcpListener accepting a single TcpClient at a time

I am implementing a TcpListener server. How can I accept only one TcpClient at a time and block other TcpClients (queue their connection attempts)? When the active client closes, the listener should accept the next one from the queue. I need some thread-safe source code.

Related

Some questions about start a new stream in libp2p host

First question: when a host tries to start a new stream to a remote peer with some stream handler in libp2p, it seems the remote peer automatically starts a goroutine to handle this stream, while the local peer needs to start this handler manually. What is the purpose of such a design?
Second question: if some remote peer starts a new stream or connects to my local host, can I monitor or get the inbound stream, something like net.Listener.Accept()? If so, which method should I use?
Thank you.

TCP sessions with gRPC

Sorry if this question is naive (gRPC novice here), but I would like to understand this.
Let's say I have a gRPC service definition like this:
service ABC {
  // Update one or more entities.
  rpc Write(WriteRequest) returns (WriteResponse) {}

  // Read one or more entities.
  rpc Read(ReadRequest) returns (stream ReadResponse) {}

  // Represents the bidirectional stream.
  rpc StreamChannel(stream StreamMessageRequest) returns (stream StreamMessageResponse) {}
}
Our potential use case would be the server built using C++ and the client using Java. (Not sure if that matters.)
I would like to understand how the TCP sessions are managed. The StreamChannel would be used for constant telemetry data streaming between the client and the server (constant data transfer, with the bulk flowing from the server to the client).
Does the StreamChannel get a separate TCP session, while for every Write and Read a new session is established and terminated after the call is done?
Or is there a single TCP session over which all the communication happens?
Again, please excuse me if this is very naive.
Thanks for your time.
Since gRPC uses HTTP/2, it can multiplex multiple RPCs on the same TCP connection. The Channel abstraction in gRPC lets gRPC make connection decisions without the application needing to be strongly aware of them.
By default, gRPC uses the "pick first" load balancing policy, which will use a single connection to the backend. All new RPCs would go over that connection.
Connections may die (due to I/O failures) or need to be shut down (various reasons), so gRPC handles reconnecting automatically. Because it can take a very long time to shut down a connection (as gRPC waits for RPCs on that connection to complete), it's still possible that gRPC would have 2 or more connections to the same backend.
So for your case, all the RPCs would initially exist on the same connection. As time goes on new RPCs may use a newer connection, and an old, long-lived StreamChannel RPC may keep the initial TCP connection alive. If that long-lived StreamChannel is closed and re-created by the application, then it could share the newer connection again.
I also posted the same question on grpc.io, and the response I got was in line with the marked answer.
Summary:
If there is no load-balancing, all the RPCs use the same session. The session remains connected across requests. The session establishment happens the first time a call is attempted on the channel.
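To make that concrete, here is a minimal sketch in Java, assuming stubs generated from the ABC service definition above (ABCGrpc, WriteRequest, ReadRequest and friends are the hypothetical generated names, and the host/port is made up). Every stub is created from the same ManagedChannel, and gRPC decides underneath whether the RPCs share one HTTP/2 connection.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.Iterator;
import java.util.concurrent.TimeUnit;

public class AbcClient {
    public static void main(String[] args) throws Exception {
        // One channel for the whole client; the channel owns the underlying
        // HTTP/2 connection(s) and reconnects as needed.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("abc.example.com", 50051) // hypothetical endpoint
                .usePlaintext()
                .build();

        // Blocking stub for the unary Write and server-streaming Read;
        // async stub for the long-lived bidirectional StreamChannel.
        // Both ride on the same channel.
        ABCGrpc.ABCBlockingStub blockingStub = ABCGrpc.newBlockingStub(channel);
        ABCGrpc.ABCStub asyncStub = ABCGrpc.newStub(channel);

        WriteResponse writeResponse =
                blockingStub.write(WriteRequest.newBuilder().build());
        Iterator<ReadResponse> reads =
                blockingStub.read(ReadRequest.newBuilder().build());

        // asyncStub.streamChannel(responseObserver) would open the telemetry
        // stream here; it is a single long-lived RPC, not a connection of its own.

        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }
}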

Async Netty HttpServer and HttpClient

I have been exploring Netty for the past days, as I am writing a quick and tight HTTP server that should receive lots of requests, and Netty's HTTP server implementation is quite simple and does the job.
My next step, as part of the request handling, is to launch an HTTP request to an external web server. My intuition is to implement an asynchronous client that can send a lot of requests simultaneously, but I am a little confused as to what the right approach is. My understanding is that a Netty server uses a worker thread for each incoming message, so that worker thread would not be freed to accept new messages until my handler finishes its work.
Here is the catch: even if I have an asynchronous HTTP client in hand, it won't matter if I need to wait for each response and process it back with my server handler - the same worker thread would remain blocked all this time. The alternative is to use the async nature of the client, returning a future object quickly to release the thread and attach a listener (meaning I have to return a 200 or 202 status to the client first), then check my future object to know when the response is received so I can push it to the client.
Does this make sense? Am I way off with my assumptions? What is a good practice to implement such kind of Netty acceptor server + external client with high concurrency?
Thanks,
Assuming you're asking about Netty 4.
Netty configured with a ServerBootstrap will have a fixed number of worker threads that it uses to accept requests and execute the channel handlers, like so:
// Two threads accepting / processing requests
bootstrap.group(new NioEventLoopGroup(2));
// One thread accepting requests, two threads processing.
bootstrap.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2));
In your case, you have a channel pipeline that includes a bunch of HTTP codec decoding/encoding stuff and your own handler, which itself makes an outgoing HTTP request. You're right that you don't want to block the server from accepting incoming requests, or from decoding the incoming HTTP message, and there are two things you can do to mitigate that; you've hit on the first already.
Firstly, you want to use an async Netty client to make the outgoing requests, and have a listener write the response to the original request's channel when the outgoing request returns. This means you don't block and wait, meaning you can handle many more concurrent outgoing requests than the number of threads available to process those requests.
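As an illustration of that first point, here is a minimal sketch, assuming Netty 4.1 with HttpServerCodec and HttpObjectAggregator already in the pipeline. The answer suggests an async Netty client for the outgoing call; for brevity this sketch uses the JDK 11 async HttpClient and a hypothetical backend URL, but the shape is the same: fire the outgoing request, return immediately so the event loop thread is free, and write the reply to the original channel from the callback (ctx.writeAndFlush() is safe to call from another thread; Netty schedules the write on the channel's event loop).

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ForwardingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    // One shared async client; it manages its own connection pool and threads.
    private static final HttpClient client = HttpClient.newHttpClient();

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        // Hypothetical external endpoint; in practice derive it from req.
        HttpRequest outgoing = HttpRequest.newBuilder()
                .uri(URI.create("http://backend.example.com/data"))
                .build();

        // Non-blocking: the worker thread is released as soon as sendAsync returns.
        client.sendAsync(outgoing, HttpResponse.BodyHandlers.ofString())
              .whenComplete((resp, err) -> {
                  String body = (err == null) ? resp.body() : "upstream error";
                  HttpResponseStatus status = (err == null)
                          ? HttpResponseStatus.OK : HttpResponseStatus.BAD_GATEWAY;
                  FullHttpResponse reply = new DefaultFullHttpResponse(
                          HttpVersion.HTTP_1_1, status,
                          Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
                  reply.headers().set(HttpHeaderNames.CONTENT_LENGTH,
                                      reply.content().readableBytes());
                  // Safe from the client's callback thread: Netty hands the
                  // write over to the channel's event loop.
                  ctx.writeAndFlush(reply).addListener(ChannelFutureListener.CLOSE);
              });
    }
}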
Secondly, you can have your custom handler run in its own EventExecutorGroup, which means it runs in a separate threadpool from the acceptor / http codec channel handlers, like so:
// Two separate threads to execute your outgoing requests.
EventExecutorGroup separateExecutorGroup = new DefaultEventExecutorGroup(2);

bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // .... http codec stuff ....
        pipeline.addLast(separateExecutorGroup, customHandler);
    }
});
This means your outgoing requests don't hog the threads that would be used for accepting / processing incoming ones.

Query on RMI working

I don't get one thing in RMI. It's a bit confusing actually.
On the client side, we have the business interface (Hello.class), the client code (HelloClient.class) and the remote stub (probably Hello_stub.class), and on the server side we have the server code (HelloImpl.class), the business interface (Hello.class) and the skeleton.
From Java 5 onwards, we don't create stubs, but they are still in the picture, I believe.
So, how does the communication happen?
The client calls a method on Hello.class, which then calls Hello_stub.class for all network operations. Hello_stub.class calls the skeleton, which then calls Hello.class and in turn calls methods on HelloImpl.class?
I am a bit confused after reading Head First EJB :). I would be glad if someone clarified it.
When the stub's method is called:
It gets a TCP connection to its target out of the client connection pool, or creates one if there isn't a pooled connection.
Bundles up the call and the arguments into a serializable object.
Writes the object to the connection along with some other stuff like a JRMP protocol header and a remote objectID.
Reads the reply object from the connection.
Returns the connection to the pool, where it gets closed after a certain idle time.
If the reply object is an exception, throws it.
Otherwise returns the reply object as the method result.
At the server, a thread sits on the listening socket, accepting connections, creating threads, and dispatching incoming remote calls to the correct remote object via the specified object ID.
This is done via reflection. RMI skeletons haven't been used since 1998, except when you deliberately generate them with rmic -v1.1, but the principle is the same either way.
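To make the moving parts concrete, here is a minimal sketch using the names from the question (Hello, HelloImpl); the registry name and port are arbitrary. Since Java 5 the stub is a dynamic proxy created when the object is exported, so there is no Hello_stub.class to generate or ship, but a call through it goes through exactly the steps listed above.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Shared business interface: present on both client and server.
interface Hello extends Remote {
    String sayHello() throws RemoteException;
}

// Server-side implementation.
class HelloImpl implements Hello {
    public String sayHello() { return "Hello from the server"; }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: exporting creates the dynamic stub (no rmic needed since
        // Java 5) and starts a listener that accepts connections and dispatches
        // incoming calls to this object via its object ID.
        Hello stub = (Hello) UnicastRemoteObject.exportObject(new HelloImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Hello", stub);

        // Client side: the object looked up from the registry IS the stub;
        // calling sayHello() on it performs the connection/serialization steps
        // described above.
        Hello remote = (Hello) LocateRegistry.getRegistry("localhost", 1099).lookup("Hello");
        System.out.println(remote.sayHello());
    }
}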

Erlang accept incoming tcp connections dynamically

What I am trying to solve: have an Erlang TCP server that listens on a specific port (the code should reside in some kind of external-facing interface/API), and each incoming connection should be handled by a gen_server (that is, even the gen_tcp:accept should be coded inside the gen_server), but I don't actually want to initially spawn a predefined number of processes that accept incoming connections. Is that somehow possible?
Basic Procedure
You should have one static process (implemented as a gen_server or a custom process) that performs the following procedure:
Listens for incoming connections using gen_tcp:accept/1
Every time it returns a connection, tell a supervisor to spawn off a worker process (e.g. another gen_server process)
Get the pid for this process
Call gen_tcp:controlling_process/2 with the newly returned socket and that pid
Send the socket to that process
Note: You must do it in that order, otherwise the new process might use the socket before ownership has been handed over. If this is not done, the old process might get messages related to the socket when the new process has already taken over, resulting in dropped or mishandled packets.
The listening process should only have one responsibility, and that is spawning workers for new connections. This process will block when calling gen_tcp:accept/1, which is fine because the started workers will handle ongoing connections concurrently. Blocking on accept ensures the quickest response time when new connections are initiated. If the process needs to do other things in between, gen_tcp:accept/2 could be used with other actions interleaved between timeouts.
Scaling
You can have multiple processes waiting with gen_tcp:accept/1 on a single listening socket, further increasing concurrency and minimizing accept latency.
Another optimization would be to pre-start some socket workers to further minimize latency after accepting the new socket.
A third and final optimization would be to make your processes more lightweight by implementing the OTP design principles in your own custom processes using proc_lib. However, you should only do this if you benchmark and conclude that it is the gen_server behavior that slows you down.
The issue with gen_tcp:accept is that it blocks, so if you call it within a gen_server, you block the server from receiving other messages. You can try to avoid this by passing a timeout, but that ultimately amounts to a form of polling, which is best avoided. Instead, you might try Kevin Smith's gen_nb_server; it uses an internal undocumented function, prim_inet:async_accept, and other prim_inet functions to avoid blocking.
You might want to check out http://github.com/oscarh/gen_tcpd and use the handle_connection function to convert the process you get to a gen_server.
You should use prim_inet:async_accept(Listen_socket, -1), as Steve said. The incoming connection will then be accepted by your handle_info callback (assuming your interface is also a gen_server), since you have used an asynchronous accept call.
On accepting the connection you can spawn another gen_server (I would recommend gen_fsm) and make that the controlling process by calling gen_tcp:controlling_process(CliSocket, Pid), where Pid is the pid of the spawned process. After this, all the data from the socket will be received by that process rather than by your interface code. In the same way, a new controlling process is spawned for each new connection.
