H2 Database Server to serve TCP clients from a connection pool

I have an H2 server that I start from the console. A client on a different machine accesses the server and calls a function alias (registered at the database). The problem is that this function is called more than a million times, so the connection times out. I worked around that by adding AUTORECONNECT=TRUE to the client connection string, which solves the problem but adds a reconnection delay that I want to avoid.
Is there any flag/command we can use to tell the server to dedicate X connections?
I also looked into starting the server from within the application, like this:
JdbcConnectionPool cp = JdbcConnectionPool.create(
        "jdbc:h2:tcp://IPADDRESS:9092/~/test", "sa", "");
cp.setMaxConnections(MAX_CONN_IN_POOL);

// start the TCP server
Server server = Server.createTcpServer().start();

Connection conn = cp.getConnection();
Statement stat = conn.createStatement();
stat.execute("SELECT myFunctionAlias(arg)");

cp.dispose();
server.stop();
The above sample code does start the server, but it runs only once. I want the server to stay up and keep listening for clients, serving them from the connection pool. Any pointers?

You should start the server in AUTO_SERVER mode. This way a leader election picks one process, which reads the database file and opens a TCP server for all other clients.
The elected process reads the file at local speed, while the other clients are only as fast as the network.
This is transparent to the user as long as every process uses the same connection string.
Connection connection;
String dataBaseString = "jdbc:h2:/path/to/db/db;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE";
try
{
    Class.forName("org.h2.Driver");
    log.info("getConnection(), driver found");
}
catch (java.lang.ClassNotFoundException e)
{
    log.error("getConnection(), ClassNotFoundException: " + e.getMessage(), e);
    Main.quit();
}
try
{
    connection = DriverManager.getConnection(dataBaseString);
}
catch (java.sql.SQLException e)
{
    log.error("getConnection(), SQLException: " + e.getMessage(), e);
    Main.quit();
}
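If you do want to keep a standalone TCP server running from your own application, as in the question's snippet, the main point is simply not to call server.stop() right after a single statement. Here is a minimal sketch, assuming H2's org.h2.tools.Server API; the port and flags are illustrative:

import org.h2.tools.Server;

public class H2TcpServerMain {
    public static void main(String[] args) throws Exception {
        // -tcpAllowOthers permits connections from other machines
        Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();
        System.out.println("H2 TCP server listening on " + server.getURL());

        // Stop the server on JVM shutdown (e.g. Ctrl+C) instead of immediately
        Runtime.getRuntime().addShutdownHook(new Thread(server::stop));

        // Block the main thread so the process keeps serving clients
        Thread.currentThread().join();
    }
}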

Related

Does the HTTP protocol support detecting, from the server side, when the connection is lost?

Let's say I have an ASP.NET application and I hold a connection for 10 seconds. In that time the client loses network access.
Can I detect that before returning the response?
You can't detect a lost connection in HTTP itself, because it is an application-layer protocol and too abstract for that.
But you can detect that your client has closed the connection at the network level. I'm not familiar with ASP.NET, but you could start from here: Instantly detect client disconnection from server socket.
You can check the IsClientConnected property. For example:
void HeavyProcessing()
{
    while (longLoop)
    {
        // Stop as soon as the client has disconnected
        if (!HttpContext.Current.Response.IsClientConnected) Response.End();
        // Do heavy processing
    }
}

Asynchronous web socket application server with two event loops

I'm trying to make a distributed RPC-type web application that uses websockets for its main interface. I want to use a queuing system (like RabbitMQ) in order to distribute the expensive jobs that are requested through the websocket connections.
Basically, the flow would go like this:
A client sends a job via websocket connection to the server
The server would send this message to a RabbitMQ exchange to be processed by a worker
The worker would execute the job and add the result of the job to a response queue
The server would check the response queue and send the result of the job back to the client via websocket connection.
As far as I can tell, on the server I need two event loops that share memory. The websocket server needs to be listening for incoming jobs, and a RabbitMQ consumer needs to be listening for job results to send back to the clients.
What are the appropriate technologies for me to use here? I've considered the following:
multithreading the application and starting one event loop on each thread
using two processes with shm (shared memory)
using two processes that communicate via socket (either a unix socket or maybe even set up the workers as special websocket clients)
hooking into the websocket server's event loop to check the result queue
I'm new to both websockets and distributed computing, so I really have no idea which of these (or maybe something I didn't think of) would work best for me.
As far as I can tell, on the server I need two event loops that share memory. The websocket server needs to be listening for incoming jobs, and a RabbitMQ consumer needs to be listening for job results to send back to the clients.
Since you can have multiple clients sending jobs concurrently, you will need a multithreaded server (unless your application processes one client at a time). There are multiple approaches to implementing a multithreaded server, each with its own advantages and disadvantages. Take a look at:
A thread per request (+: throughput potentially maximized; -: threads are expensive to create, and concurrency must be managed)
A thread per client (+: less thread-management overhead; -: doesn't scale to very large numbers of connections, and concurrency must still be managed)
A thread pool (+: avoids the overhead of thread creation, scalable up to N concurrent connections where N is the pool size; -: concurrency between the N threads must be managed)
It's up to you to choose one of the above approaches (I would opt for a thread per client, as it is relatively easy to implement and the chance that you'll have tens of thousands of clients is relatively small).
Notice that this is a multithreaded approach and not an event-driven approach! Since you are not limited to one thread (in which case the design would have to be event-driven in order to process multiple clients "concurrently"), I wouldn't go for the event-driven option, as it is more difficult to implement (programmers sometimes speak of "callback hell" in event-driven code).
This is how I would implement it (one thread per client, in Java):
Basically, the flow would go like this:
A client sends a job via websocket connection to the server
Server part:
public class Server {
    private static ServerSocket server_skt;
    private static ... channel; // channel to communicate with the RabbitMQ distributed priority queue

    // Constructor
    Server(int port) throws IOException {
        server_skt = new ServerSocket(port);
        /*
         * Set up the connection with the distributed queue
         * channel = ...;
         */
    }

    public static void main(String argv[]) throws IOException {
        Server server = new Server(5555); // Make the server instance
        while (true) {
            // Always waiting for new clients to connect
            try {
                System.out.println("Waiting for a client to connect...");
                // Spawn a new thread for communication with the client (hence the one-thread-per-client approach)
                new CommunicationThread(server_skt.accept(), channel).start(); // Will listen for new jobs and send results back
            } catch (IOException e) {
                System.out.println("Exception occurred: " + e.getMessage());
            }
        }
    }
}
The server would send this message to a RabbitMQ exchange to be processed by a worker
...
The server would check the response queue and send the result of the job back to the client via websocket connection.
public class CommunicationThread extends Thread {
    private Socket client_socket;
    private InputStream client_in;
    private OutputStream client_out;
    private ... channel;     // channel to communicate with RabbitMQ
    private ... resultQueue;
    private volatile boolean active = true;

    public CommunicationThread(Socket socket, ... channel) { // replace ... with the type of the RabbitMQ channel
        try {
            this.client_socket = socket;
            this.client_in = client_socket.getInputStream();
            this.client_out = client_socket.getOutputStream();
            this.channel = channel;
            this.resultQueue = ...;
            System.out.println("Client connected: " + client_socket.getInetAddress().toString());
        } catch (IOException e) {
            System.out.println("Could not initialize communication properly. -- CommunicationThread");
        }
    }

    public yourJobType readJob() {
        // Read input from the client (e.g. read a String from "client_in")
        // Make a job from it (e.g. map the String to a job)
        // Return the job
    }

    @Override
    public void run() {
        try {
            while (active) {
                /*
                 * Always listen for incoming jobs (sent by the client) and for results (to be sent back to the client)
                 */
                // Read client input (only if available, else it would block!)
                if (client_in.available() > 0) {
                    yourJobType job = readJob();
                    channel.basicPublish(...); // Send the job to RabbitMQ
                }
                /*
                 * Check the result queue (THIS is why reading client input MUST be non-blocking! Otherwise the while
                 * loop could block on reading input and the result queue wouldn't be checked until the next job arrives.)
                 */
                ResultType next_result = resultQueue.poll(); // Can be "null" if the queue is empty
                if (next_result != null) {
                    // There is a result: send it back to the client
                    client_out.write(next_result.toByteArray());
                    client_out.flush();
                }
            }
            client_in.close();
            client_out.close();
        } catch (IOException e) {
            System.out.println("Communication with the client failed: " + e.getMessage());
        }
    }
}
Note that when reading from the result queue it is important that you only read results of jobs sent by that client.
If you have one result queue containing the results of all clients' jobs and you retrieve a result as in the code above, that result could belong to another client's job, and it would be sent back to the wrong client.
To fix this you could poll() the result queue with a filter and a wildcard (*), or have a result queue for each client, so that a result retrieved from a client's own queue is known to belong to that client.
(*): You could assign an ID to every client. When receiving a job from a client, pair the job with the client ID (e.g. in a tuple <clientID, job>) and put it in the queue, and do the same for the results (pair the result with the client ID and put it in the result queue). Then in the run() method of CommunicationThread you would poll the result queue only for results of the form <clientID, ?>.
Important: you'll also have to assign an ID to every job! Sending job A and then job B doesn't guarantee that the result of job A will come back before the result of job B (job B could be less time-consuming than job A, so its result could be sent back to the client before job A's result).
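To illustrate the per-client result queue idea, here is a hedged sketch; ResultRouter and JobResult are hypothetical names for this example, not part of the answer's code:

import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class ResultRouter {
    // One result queue per client, keyed by client ID
    private final Map<String, BlockingQueue<JobResult>> queues = new ConcurrentHashMap<>();

    // Called by the RabbitMQ consumer thread when a worker publishes a result
    void deliver(String clientId, JobResult result) {
        queues.computeIfAbsent(clientId, id -> new LinkedBlockingQueue<>()).offer(result);
    }

    // Called from each CommunicationThread's loop; returns null if nothing is ready
    JobResult pollFor(String clientId) {
        BlockingQueue<JobResult> q = queues.get(clientId);
        return (q == null) ? null : q.poll();
    }
}

class JobResult {
    final String jobId;   // lets the client match a result to the job it sent
    final byte[] payload;

    JobResult(String jobId, byte[] payload) {
        this.jobId = jobId;
        this.payload = payload;
    }
}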
(PS: It's up to you to decide how to implement the workers: executed by the server with one thread per worker, or executed by other processes?)
The above is one possible, multithreaded solution. I only discussed the server part; the client part should send jobs and wait for results (how to implement this depends on your goals: do clients first send all jobs and then receive the results of each job, or can this be mixed?).
There are other ways to implement it, but for a beginner in distributed computing I think this is the easiest solution (using thread pools, etc. would make it trickier).

How can I keep my JVM from exiting while a Netty client connection is open?

I have an API which uses Netty to open a client connection to a TCP server. The server may send data to the client at any time. I'm facing the following scenario:
Client connects to server
Sends data to server
Disconnects, and the JVM exits (not sure which happens first)
This is what I expect:
Client connects to server
Sends data to server
Client simply keeps the connection open, waiting to receive data or for the user of the client API to send data
This is an outline of my connection method (obviously there is a much larger API around it):
```
public FIXClient connect(String host, int port) throws Throwable {
    ...
    ChannelPipeline pipe = org.jboss.netty.channel.Channels.pipeline(...);
    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipeline(pipe);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));

    // Forcing the connect call to block;
    // don't want clients to deal with async connect calls
    future.awaitUninterruptibly();

    if (future.isSuccess()) {
        this.channel = future.getChannel();
        //channel.getCloseFuture(); // TODO: notifies whenever the channel closes
    } else {
        throw future.getCause(); // wrap this in a more specific exception
    }
    return this;
}
```
That has nothing to do with Netty... You need to make sure your "main" method does not exit if you call it from there. Otherwise it's the job of the container.
There are a couple of ways you can do this, but one thing I have observed is that with this code:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
... if you make a successful connection, your JVM will not shut down of its own accord for some time until you force it (like a kill) or you call releaseExternalResources() on your channel factory. This is because:
The threads created by Executors.newCachedThreadPool() are non-daemon threads.
At least one thread is created once you submit your connection request.
Cached thread-pool threads have a keep-alive time of 60 seconds, meaning they don't go away until they have been idle for 60 seconds, so that would be 60 seconds after your connect and send (assuming both completed).
So I'm not sure you're diagnosing the issue correctly. Having said that, I recommend you handle the task this way:
Boot in your main method (in the main thread).
Launch all your actual useful work in new threads.
Once the useful threads have been launched, call Thread.currentThread().join() in the main thread. Since main is always non-daemon, you have made sure the JVM will not shut down until you're good and ready.
At some point, unless you want kill -9 as a shutdown strategy, you will want a controlled shutdown: add a shutdown hook to shut down Netty and then interrupt the main thread (see the sketch below).
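A minimal sketch of that pattern, assuming a hypothetical FIXClient with a no-arg constructor and a shutdown() method that closes the channel and releases the factory's resources:

public final class ClientMain {
    public static void main(String[] args) throws Throwable {
        final Thread mainThread = Thread.currentThread();
        final FIXClient client = new FIXClient().connect("localhost", 5555);

        // Controlled shutdown: close Netty resources, then unblock main
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                client.shutdown();      // hypothetical: channel.close() + factory.releaseExternalResources()
                mainThread.interrupt(); // wake main out of join()
            }
        }));

        try {
            // main is non-daemon, so the JVM stays up until we are interrupted
            mainThread.join();
        } catch (InterruptedException expected) {
            // interrupted by the shutdown hook; fall through and exit
        }
    }
}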
I hope that's helpful.

Heartbeat Direction and Acknowledgement

We have a Java client-server application with a custom protocol over TCP/IP. We found it necessary to add a heartbeat to the protocol because of dead socket connection issues.
From the beginning, the heartbeat has gone from client to server, with the server responding with an acknowledgement.
We recently had timeout issues with the clients, and after analysing the code we came up with a couple of questions I am unsure about.
1 - Which direction is best for a heartbeat? I think we chose client-to-server because it takes load off the server.
I was thinking of changing it to server-to-client; however, we control both the client and the server code, so we don't need to worry so much about time-wasting clients.
2 - Is it necessary to acknowledge heartbeats to prove the connection is alive in both directions?
Many thanks
Any traffic in either direction should be enough to keep the connection alive, but it doesn't hurt to respond to a "ping" with a "pong". Traditionally the client sends the heartbeat and the server is responsible for shutting down unresponsive clients, so what you have sounds right.
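For illustration, a hedged sketch of a client-side ping with a reply deadline; the line-based "PING"/"PONG" protocol and the class name are assumptions for this example, not from the question:

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class HeartbeatSender {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile long lastPongMillis = System.currentTimeMillis();

    void start(final Socket socket, long intervalMillis, final long timeoutMillis) throws IOException {
        final PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                if (System.currentTimeMillis() - lastPongMillis > timeoutMillis) {
                    closeQuietly(socket); // no PONG within the deadline: treat the link as dead
                    scheduler.shutdown();
                } else {
                    out.println("PING");  // the server is expected to answer with "PONG"
                }
            }
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }

    // Call this from the read loop whenever a "PONG" line arrives
    void onPong() {
        lastPongMillis = System.currentTimeMillis();
    }

    private static void closeQuietly(Socket s) {
        try { s.close(); } catch (IOException ignored) { }
    }
}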
Have you tried setting the timeout to zero? Could network devices be interfering with your socket connection's timeout?
try {
    ServerSocket server = new ServerSocket(2048);
    server.setSoTimeout(0); // never time out
    try {
        Socket s = server.accept();
        // handle the connection
        // ...
    } catch (InterruptedIOException e) {
        // with a timeout of 0, accept() blocks indefinitely, so this should not fire
        System.err.println("Accept interrupted: " + e);
    } finally {
        server.close();
    }
} catch (IOException e) {
    System.err.println("Unexpected IOException: " + e);
}

How do I make a TCP connection between 2 servers if both can start the connection?

I have a defined number of servers that process data locally, each in its own way. After some time I want to synchronize some state that is common to all servers. My idea was to establish a TCP connection from each server to every other server, like a mesh network.
My problem is in what order to make the connections, since there is no "master" server here; each server is responsible for creating its own connections to the other servers.
My idea was to have each server connect, and if the server being connected to already has a connection to the connecting server, it simply drops the new connection.
But how do I handle two servers trying to connect at the same time? In that case I get two TCP connections instead of one.
Any ideas?
You will need a handshake protocol when connecting to a server so you can verify whether it's OK to start sending/receiving data; otherwise one endpoint might connect and start sending data immediately, only to have the other end drop the connection.
To ensure only one connection is up to a server, you just need something like this pseudocode:
remote_server = accept_connection()
lock mutex
if (already_connected(remote_server)) {
    drop_connection(remote_server)
}
unlock mutex
If your server isn't multithreaded you don't need any locks to guard the already-connected check, as there won't be any "at the same time" issues.
You will also need a retry mechanism, based on a small random grace period, for reconnecting to a server in case the remote server closed the connection you just set up.
If the connection got closed, wait a little while, check whether you're already connected (the other end may have set up a connection to you in the meantime) and try to connect again. This avoids the situation where both ends set up a connection at the same time and each closes the other's because of the logic above.
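A sketch of that retry loop under stated assumptions: PeerRegistry is a hypothetical interface standing in for however you track and create connections.

import java.util.concurrent.ThreadLocalRandom;

class ReconnectPolicy {
    private final PeerRegistry registry; // hypothetical: tracks live connections

    ReconnectPolicy(PeerRegistry registry) {
        this.registry = registry;
    }

    void reconnect(String peerId) throws InterruptedException {
        while (!registry.isConnected(peerId)) {
            // Small random grace period so both ends don't collide again in lock-step
            Thread.sleep(100 + ThreadLocalRandom.current().nextInt(400));
            if (!registry.isConnected(peerId)) { // the peer may have connected to us meanwhile
                registry.connect(peerId);
            }
        }
    }
}

interface PeerRegistry { // hypothetical interface for this sketch
    boolean isConnected(String peerId);
    void connect(String peerId);
}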
Just as an idea: each server accepts a connection, then finds that it has two TCP connections to the same peer, and one connection is chosen to be closed. You just need to implement the rule for choosing which connection to close. For example, both servers compare their names (or their IP addresses, or UIDs), and the connection initiated by the server whose name (or UID) is smaller is the one that gets closed.
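A deterministic tie-break that both sides can compute independently might look like this; the class and IDs are illustrative:

class DuplicateLinkResolver {
    // Both servers evaluate this for each duplicate link and reach the same
    // verdict: only the link initiated by the server with the greater ID survives.
    static boolean keepLink(String initiatorId, String acceptorId) {
        return initiatorId.compareTo(acceptorId) > 0;
    }

    public static void main(String[] args) {
        // Servers "A" and "B" connected to each other simultaneously:
        System.out.println(keepLink("B", "A")); // true  -> keep the B->A link
        System.out.println(keepLink("A", "B")); // false -> close the A->B link
    }
}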
While a better solution would be a separate "load balancer" to which all your servers connect, here is a small suggestion to make sure connections are not created simultaneously.
Your servers can start connections at different times by using
bool CreateConnection = (time() % i == 0)
if (CreateConnection) { ... }
where i is the ID of the particular server, and time() could be in seconds or fractions of a second, depending on your requirements.
This makes it very unlikely that two servers connect to each other at the same time (two IDs can still divide the same timestamp, so treat it as a mitigation rather than a hard guarantee). If you do not have IDs for the servers, you can use a random value.
