JDK 11 HttpClient: BindException: Cannot assign requested address - http

I am using the new HttpClient shipped with JDK 11 to make many requests (to Github's API, but I think that's irrelevant), especially GETs.
For each request, I build and use an HttpClient, like this:
final ExecutorService executor = Executors.newSingleThreadExecutor();
final HttpClient client = HttpClient
        .newBuilder()
        .followRedirects(HttpClient.Redirect.NORMAL)
        .connectTimeout(Duration.ofSeconds(10))
        .executor(executor)
        .build();
try {
    // send request and return parsed response;
} finally {
    // manually shut down the specified executor because HttpClient doesn't implement Closeable,
    // so I'm not sure when it will release resources
    executor.shutdownNow();
}
This seems to work fine, except that every now and then I get the below exception, and requests will not work anymore until I restart the app:
Caused by: java.net.ConnectException: Cannot assign requested address
...
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.connect0(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.connect(Net.java:476) ~[na:na]
at java.base/sun.nio.ch.Net.connect(Net.java:468) ~[na:na]
Note that this is NOT the JVM_Bind case.
I am not calling localhost or listening on a localhost port. I am making GET requests to an external API. However, I've also checked the /etc/hosts file and it seems fine: 127.0.0.1 is mapped to localhost.
Does anyone know why this happens and how could I fix it? Any help would be greatly appreciated.

You can try using one shared HttpClient for all requests, since it manages a connection pool internally and may keep connections alive to the same host (if supported). Performing a lot of requests on different HttpClients is inefficient, because you'll have n thread pools and n connection pools, where n is the number of clients. And they won't share the underlying connections to the host.
Usually, an application creates a single instance of HttpClient in some kind of main() and provides it as a dependency to users.
E.g.:
public static void main(String... args) {
    final HttpClient client = HttpClient
            .newBuilder()
            .followRedirects(HttpClient.Redirect.NORMAL)
            .connectTimeout(Duration.ofSeconds(10))
            .build();
    new GithubWorker(client).start();
}
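For completeness, a minimal sketch of building a GET request against such a shared client (the URL and helper name are illustrative; the actual send is commented out since it needs network access):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class SharedClientSketch {
    // One client for the whole application; it owns the connection pool.
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .followRedirects(HttpClient.Redirect.NORMAL)
            .connectTimeout(Duration.ofSeconds(10))
            .build();

    // Build a GET request; the same CLIENT is reused for every call.
    static HttpRequest buildGet(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildGet("https://api.github.com/repos/octocat/Hello-World");
        System.out.println(req.method() + " " + req.uri());
        // To actually send it (needs network access):
        // HttpResponse<String> resp = CLIENT.send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```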
Update: how to stop the current client
According to the comments on the JDK-internal HttpClientImpl.stop method:
// Called from the SelectorManager thread, just before exiting.
// Clears the HTTP/1.1 and HTTP/2 cache, ensuring that the connections
// that may be still lingering there are properly closed (and their
// possibly still opened SocketChannel released).
private void stop() {
    // Clears HTTP/1.1 cache and close its connections
    connections.stop();
    // Clears HTTP/2 cache and close its connections.
    client2.stop();
    // shutdown the executor if needed
    if (isDefaultExecutor) delegatingExecutor.shutdown();
}
This method is called from SelectorManager.shutdown (SelectorManager is created in HttpClient's constructor), where the shutdown() method is called in a finally block around the busy loop in SelectorManager.run() (yes, it extends Thread). This busy loop is while (!Thread.currentThread().isInterrupted()). So to enter this finally block you need to either break out of the loop with an exception or interrupt the running thread.

Related

Apache Camel Netty Socket

I want to use an Apache Camel Netty connection in client mode, and this client should not be in synchronized mode. I provided the following configuration to achieve this, but Camel created two connections to the server: one for receiving messages and one for replying to them. How can I use the Netty connector in this mode?
from("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
        }
    })
    .to("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true");
and in the Hercules utility I see two connections for this request processing
11:00:51 AM: 127.0.0.1 Client connected
11:00:51 AM: 127.0.0.1 Client connected
So this is what you want, right?
"after receiving request from server. i want to push that in a MQ and wait on other MQ for processed response. so when packet is processed and available in MQ i want to use same connection to transmit response to socket".
So the first thing is probably to agree on some requirements. If you need to send a response back, i.e. a client is waiting to hear back regarding the request it sent, then it should be synchronous communication, not asynchronous.
So you can then simply write:
from("netty4:tcp://localhost:7000?sync=true&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
        }
    })
    .to("ACTIVE_MQ");
Of course, in the ActiveMQ part you need to set the reply-to destination and a timeout, so that if you don't get a response in time the exchange fails and you can notify the client with a useful error message.
What will happen is that the message is received and sent to an ActiveMQ queue with the appropriate reply-to properties. When the reply arrives, the response is sent back over the same connection to the client.
I would advise you to read up on JMS request/reply in Camel, as it will help you to set up the ActiveMQ part.
http://camel.apache.org/jms.html

Asynchronous web socket application server with two event loops

I'm trying to make a distributed RPC-type web application that uses websockets for its main interface. I want to use a queuing system (like RabbitMQ) in order to distribute the expensive jobs that are requested through the websocket connections.
Basically, the flow would go like this:
A client sends a job via websocket connection to the server
The server would send this message to a RabbitMQ exchange to be processed by a worker
The worker would execute the job and add the result of the job to a response queue
The server would check the response queue and send the result of the job back to the client via websocket connection.
As far as I can tell, on the server I need two event loops that share memory. The websocket server needs to be listening for incoming jobs, and a RabbitMQ consumer needs to be listening for job results to send back to the clients.
What are the appropriate technologies for me to use here? I've considered the following:
multithreading the application and starting one event loop on each thread
using two processes with shm (shared memory)
using two processes that communicate via socket (either a unix socket or maybe even set up the workers as special websocket clients)
hooking into the websocket server's event loop to check the result queue
I'm new to both websockets and distributed computing, so I really have no idea which of these (or maybe something I didn't think of) would work best for me.
As far as I can tell, on the server I need two event loops that share memory. The websocket server needs to be listening for incoming jobs, and a RabbitMQ consumer needs to be listening for job results to send back to the clients.
Since you can have multiple clients sending jobs concurrently, you will need a multithreaded server, unless your application processes clients one at a time. Now there are multiple approaches to implementing a multithreaded server, each with its own advantages/disadvantages. Take a look at multithreading through:
A thread per request (+ : throughput potentially maximized, - : threads are expensive to create, must manage concurrency)
A thread per client (+ : less thread management overhead, - : doesn't scale to many many connections, still manage concurrency)
A thread pool (+ : Avoids overhead of thread creation, scalable up to N concurrent connections (N = size of thread pool), - : Manage concurrency between N threads)
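The thread-pool option can be sketched as follows; clients and their jobs are simulated (no sockets) so the example stays self-contained:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolServerSketch {
    public static void main(String[] args) throws InterruptedException {
        int n = 4; // N = size of the thread pool: at most N clients are handled concurrently
        ExecutorService pool = Executors.newFixedThreadPool(n);
        CountDownLatch done = new CountDownLatch(10);

        // In a real server each task would come from serverSocket.accept();
        // here we just simulate 10 connected clients.
        for (int client = 0; client < 10; client++) {
            final int id = client;
            pool.submit(() -> {
                // handle the client's jobs...
                System.out.println("handled client " + id);
                done.countDown();
            });
        }

        done.await(5, TimeUnit.SECONDS); // all simulated clients handled
        pool.shutdown();
    }
}
```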
It's up to you to choose one of the above approaches (I would opt for a thread per client, as it is relatively easy to implement and the chance that you'll have tens of thousands of clients is relatively small).
Notice that this is a multithreaded approach and not an event-driven approach! But since you are not limited to one thread (in which case it would have to be event-driven in order to process multiple clients "concurrently"), I wouldn't go for that option, as it is more difficult to implement. (Programmers sometimes speak of a "callback hell" in event-driven approaches.)
This is how I would implement it (one thread per client, Java):
Basically, the flow would go like this:
A client sends a job via websocket connection to the server
Server part :
public class Server {
    private static ServerSocket server_skt;
    private static ... channel; // channel to communicate with the rabbitMQ distributed priority queue

    // Constructor
    Server(int port) throws IOException {
        server_skt = new ServerSocket(port);
        /*
         * Set up connection with the distributed queue
         * channel = ...;
         */
    }

    public static void main(String argv[]) throws IOException {
        Server server = new Server(5555); // Make server instance
        while(true) {
            // Always waiting for new clients to connect
            try {
                System.out.println("Waiting for a client to connect...");
                // Spawn a new thread for communication with the client (hence the one-thread-per-client approach)
                new CommunicationThread(server_skt.accept(), server.channel).start(); // Will listen for new jobs and send back results
            } catch(IOException e) {
                System.out.println("Exception occurred: " + e.getStackTrace());
            }
        }
    }
}
The server would send this message to a RabbitMQ exchange to be processed by a worker
...
The server would check the response queue and send the result of the job back to the client via websocket connection.
public class CommunicationThread extends Thread {
    private Socket client_socket;
    private InputStream client_in;
    private OutputStream client_out;
    private volatile boolean active = true;
    private ... channel; // Channel to communicate with rabbitMQ
    private ... resultQueue;

    public CommunicationThread(Socket socket, ... channel) { // replace ... with the type of the rabbitMQ channel
        try {
            this.client_socket = socket;
            this.client_in = client_socket.getInputStream();
            this.client_out = client_socket.getOutputStream();
            this.channel = channel;
            this.resultQueue = ...;
            System.out.println("Client connected: " + client_socket.getInetAddress().toString());
        } catch(IOException e) {
            System.out.println("Could not initialize communication properly. -- CommunicationThread.");
        }
    }

    public yourJobType readJob() {
        // Read input from the client (e.g. read a String from "client_in")
        // Make a job from it (e.g. map the String to a job)
        // return the job
    }

    @Override
    public void run() {
        try {
            while(active) {
                /*
                 * Always listen for incoming jobs (sent by the client) and for results (to be sent back to the client)
                 */
                // Read client input (only if available, else it would block!)
                if(client_in.available() > 0) {
                    yourJobType job = readJob();
                    channel.basicPublish(...); // Send the job to rabbitMQ
                }
                /* Check the result queue (THIS is why reading client input MUST be NON-BLOCKING! Else the while loop could block on reading input
                 * and the result queue wouldn't be checked until the next job arrives)
                 */
                ResultType next_result = resultQueue.poll(); // Could be "null" if the queue is empty
                if(next_result != null) {
                    // There is a result
                    client_out.write(next_result.toByteArray());
                    client_out.flush();
                }
            }
            client_in.close();
            client_out.close();
        } catch(IOException e) {
            System.out.println("I/O error in client communication: " + e.getMessage());
        }
    }
}
Note that when reading from the result queue it is important that you only read results of jobs sent by that client.
If you have one result queue containing the results of jobs (of all clients) and you retrieve a result like in the code above, then that result could be the result of another client's job, hence sending the result back to the wrong client.
To fix this you could poll() the result queue with a filter and a wildcard (*) or have a result queue for each client, hence knowing that a result retrieved from our queue will be sent to the corresponding client.
(*) : You could assign an ID to every client. When receiving a job from a client, pair the job with the client ID (e.g. in a tuple < clientID, job >) and put it in the queue. And do the same for the results (pair the result with the client ID and put it in the result queue). Then in the run() method of CommunicationThread you would have to poll the result queue only for results of the form < clientID, ? >.
Important: You'll also have to assign an ID to every job! Because sending job A and then job B doesn't guarantee that the result of job A will come before the result of job B. (Job B could be less time-consuming than job A, and thus its result could be sent back to the client before job A's result.)
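A minimal sketch of the per-client-queue idea; the Result type and the IDs are illustrative:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ResultRouter {
    // Hypothetical result type: which client asked, which job it answers, and the payload.
    static final class Result {
        final String clientId, jobId, payload;
        Result(String clientId, String jobId, String payload) {
            this.clientId = clientId; this.jobId = jobId; this.payload = payload;
        }
    }

    // One result queue per client, so poll() can never hand back another client's result.
    private final Map<String, Queue<Result>> queues = new ConcurrentHashMap<>();

    // Workers call this when a job finishes.
    void publish(Result r) {
        queues.computeIfAbsent(r.clientId, id -> new ConcurrentLinkedQueue<>()).add(r);
    }

    // Each CommunicationThread polls only its own client's queue.
    Result pollFor(String clientId) {
        Queue<Result> q = queues.get(clientId);
        return (q == null) ? null : q.poll();
    }

    public static void main(String[] args) {
        ResultRouter router = new ResultRouter();
        router.publish(new Result("client-A", "job-1", "42"));
        router.publish(new Result("client-B", "job-2", "ok"));
        System.out.println(router.pollFor("client-A").jobId); // client-A only ever sees its own results
    }
}
```

Matching results to jobs (the job-ID part) would then just mean comparing the jobId field on the client side.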
(PS: It's up to you to see how to implement the workers. Executed by the server with one thread per worker? Or by other processes?)
The above is one possible multithreaded solution. I only discussed the server part; the client part should send jobs and wait for results (how to implement this depends on your goals: do clients first send all jobs and then receive the results, or can this be mixed?).
There are other ways it could be implemented, but for a beginner in distributed computing I think this is the easiest solution (using thread pools, ... would make it trickier).

SignalR long polling reconnection behavior

Using a SignalR persistent connection with a JS long polling client we see inconsistent reconnection behavior in different scenarios. When the client machine's network cable is unplugged the JS connection does not enter the reconnecting state and it never (at least not after 5 minutes) reaches the disconnected state. For other scenarios such as a restart of the IIS web application a long polling JS connection does enter the reconnecting state and successfully reconnects. I understand that the reasoning behind this is that keep-alive is not supported for the long polling transport.
I can see that a suggestion has been made on github to better support reconnections for the long polling transport (https://github.com/SignalR/SignalR/issues/1781), but it seems that there is no commitment to change it.
First, is there a proper workaround for detecting disconnections on the client in the case of long polling?
Second, does anyone know if there are plans to support reconnection in the case described?
Cheers
We've debated different alternatives to support a keep-alive-like feature for long polling; however, due to how long polling works under the covers, it's not easy to implement without affecting the vast majority of users. As we continue to debate the "correct" solution, I'll leave you with one workaround for detecting network failure in the long polling client (if it's absolutely needed).
Create a server method, let's call it Ping:
public class MyHub : Hub
{
    public void Ping()
    {
    }
}
Now on the client create an interval in which you will "ping" the server:
var proxy = $.connection.myHub,
    intervalHandle;
...
$.connection.hub.disconnected(function() {
    clearInterval(intervalHandle);
});
...
$.connection.hub.start().done(function() {
    // Only when long polling
    if($.connection.hub.transport.name === "longPolling") {
        // Ping every 10s
        intervalHandle = setInterval(function() {
            // Ensure we're connected (don't want to be pinging in any other state).
            if($.connection.hub.state === $.signalR.connectionState.connected) {
                proxy.server.ping().fail(function() {
                    // Failed to ping the server; we could either try once more to ensure we can't reach the server,
                    // or we could fail right here.
                    TryAndRestartConnection(); // Your method
                });
            }
        }, 10000);
    }
});
Hope this helps!

How to send as many http get to the same host from the same client in Java

I must apologise for asking a very general question on sending HTTP GETs using Java.
(The throughput could be affected by many things.)
I've been asked to investigate how many HTTP GET requests can be sent from a single client to a single remote host across the internet in Java.
Let's assume the remote host can handle as many HTTP GET requests as the client sends to it.
Basically, my approach is to run the following piece of Java code in as many threads as possible.
private void send(int i) throws IOException {
    final String urlStr = String.format(urlTemplate, host, i);
    URL urlObj = new URL(urlStr);
    HttpURLConnection con = (HttpURLConnection) urlObj.openConnection();
    con.setRequestMethod("GET");
    con.setRequestProperty("User-Agent", USER_AGENT);
    con.getResponseCode();
    con.disconnect();
}
If I have ten or more threads I get
java.net.NoRouteToHostException: Cannot assign requested address
After some googling, I found that setting /proc/sys/net/ipv4/tcp_tw_reuse
to 1 helps to get rid of the above NoRouteToHostException with 20 threads.
With 20 threads, I can now send about 2000 HTTP GET requests from a single client to a remote host.
Are there any other changes on the client side that would increase the number of HTTP GET requests I can send from a single client?
Thanks in advance for any assistance !
Shing

How can I keep my JVM from exiting while a Netty client connection is open?

I have an API which uses Netty to open a client connection to a TCP server. The server may send data to the client at any time. I'm facing the following scenario:
Client connects to server
Sends data to server
Disconnects, and the JVM exits (not sure which happens first)
This is what I expect:
Client connects to server
Sends data to server
Client simply keeps the connection open, waiting to receive data or for the user of the client API to send data.
This is an outline of my connection method (obviously there is a much larger API around it):
```
public FIXClient connect(String host, int port) throws Throwable {
    ...
    ChannelPipeline pipe = org.jboss.netty.channel.Channels.pipeline(...);
    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());
    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipeline(pipe);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
    // forcing the connect call to block
    // don't want clients to deal with async connect calls
    future.awaitUninterruptibly();
    if(future.isSuccess()) {
        this.channel = future.getChannel();
        //channel.getCloseFuture(); //TODO notifies whenever the channel closes
    }
    else {
        throw future.getCause(); // wrap this in a more specific exception
    }
    return this;
}
```
That has nothing to do with Netty... You need to make sure your "main" method does not exit if you call it from there. Otherwise it's the job of the container.
There's a couple of ways you can do this, but one thing I have observed, is that with this code:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
... if you make a successful connection, your JVM will not shut down of its own accord for some time until you force it (like a kill) or you call releaseExternalResources() on your channel factory. This is because:
The threads created by Executors.newCachedThreadPool() are non-daemon threads.
At least 1 thread will be created once you submit your connection request.
The cached thread pool threads have a keep-alive time of 60 seconds, meaning they don't go away until they've been idle for 60 seconds, so that would be 60 seconds after your connect and send (assuming that both completed).
So I'm not sure if you're diagnosing the issue correctly. Having said that, I recommend you handle the task this way:
Once you boot in your main method (in the main thread),
launch all your actual useful work in new threads.
Once the useful threads have been launched, call Thread.currentThread().join() in the main thread. Since main is always non-daemon, you have made sure the JVM will not shut down until you're good and ready.
At some point, unless you want to kill -9 the JVM as a shutdown strategy, you will want a controlled shutdown, so you can add a shutdown hook to shut down Netty and then interrupt the main thread.
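The steps above can be sketched like this; the worker body stands in for the real Netty bootstrap, and here it simulates the shutdown signal itself so the sketch terminates on its own:

```java
public class KeepAlive {
    public static void main(String[] args) {
        final Thread mainThread = Thread.currentThread();

        // Launch the useful work (the real code would bootstrap the Netty client here).
        Thread worker = new Thread(() -> {
            // connect(), send data, keep the channel open...
            // In this sketch the worker just simulates a shutdown request:
            mainThread.interrupt();
        });
        worker.start();

        // In a real app, a shutdown hook would release Netty's resources
        // (channelFactory.releaseExternalResources()) and then interrupt main.

        // Block the non-daemon main thread so the JVM stays up until interrupted.
        try {
            mainThread.join(); // joining yourself blocks forever...
        } catch (InterruptedException e) {
            System.out.println("interrupted, shutting down"); // ...until interrupted
        }
    }
}
```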
I hope that's helpful.
