I don't get one thing in RMI. It's a bit confusing actually.
On the client side, we have the business interface (Hello.class), the client code (HelloClient.class) and the remote stub (probably Hello_stub.class), and on the server side we have the server code (HelloImpl.class), the business interface (Hello.class) and the skeleton.
From Java 5 onwards we don't generate stubs, but I believe they are still in the picture.
So, how does the communication happen?
The client calls a method on Hello.class, which then calls Hello_stub.class for all network operations. Hello_stub.class calls the skeleton, which then calls Hello.class and then calls methods on HelloImpl.class?
I am a bit confused after reading Head First EJB :). I would be glad if someone clarified it.
When the stub's method is called:
It gets a TCP connection to its target out of the client-side connection pool, or creates one if there isn't a pooled connection.
Bundles up the call and the arguments into a serializable object.
Writes the object to the connection along with some other stuff like a JRMP protocol header and a remote objectID.
Reads the reply object from the connection.
Returns the connection to the pool, where it gets closed after a certain idle time.
If the reply object is an exception, throws it.
Otherwise returns the reply object as the method result.
At the server, a thread sits on the listening socket, accepting connections, creating threads, and dispatching incoming remote calls to the correct remote object via the specified object ID.
This is done via reflection. RMI skeletons haven't been used since 1998, except in the case of stubs you deliberately generate with rmic -v1.1, but the principle is the same either way.
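For anyone who wants to see the pieces being described, here is a minimal sketch using the standard java.rmi API (the Hello/HelloImpl names follow the question; the registry port and the greeting text are just illustrative). The object returned by the lookup is the stub, and calling a method on it is what triggers the connection-pool/serialization steps listed above; the exported server object is what the dispatcher invokes via reflection.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Business interface shared by client and server.
    interface Hello extends Remote {
        String sayHello(String name) throws RemoteException;
    }

    // Server-side implementation; the RMI dispatcher invokes this via reflection.
    class HelloImpl implements Hello {
        public String sayHello(String name) throws RemoteException {
            return "Hello, " + name;
        }
    }

    public class HelloDemo {
        public static void main(String[] args) throws Exception {
            // Server side: export the remote object (this generates a dynamic stub)
            // and bind the stub in an RMI registry.
            Hello server = new HelloImpl();
            Hello stub = (Hello) UnicastRemoteObject.exportObject(server, 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Hello", stub);

            // Client side: the lookup returns the stub; calling sayHello on it
            // performs the connection/serialization steps described above.
            Hello client = (Hello) LocateRegistry.getRegistry("localhost", 1099).lookup("Hello");
            System.out.println(client.sayHello("RMI"));
        }
    }

Since Java 5 the stub is a dynamic proxy created at export time, which is why no Hello_stub.class has to be generated by hand.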
If HTTP is connection-less, how does the ASP.NET response property HttpResponse.IsClientConnected detect whether the client is connected or not?
HTTP is not "connection-less" - you still need a connection to receive data from the server; more correctly, HTTP is stateless. Applications running on top of HTTP will most likely actually be stateful, but HTTP itself is not.
"Connectionless" can also refer to a system using UDP as the transport instead of TCP. HTTP primarily runs over TCP and pretty much every real webserver expects, and returns, TCP messages instead of UDP. You might see HTTP-like traffic in UDP-based protocols like UPnP, but because you want your webpage to be delivered reliably, TCP will always be used instead of UDP.
As for IsClientConnected: when you access that property, it calls into the current HttpWorkerRequest, which is an abstract class implemented by the current host environment.
IIS7+ implements it such that if a TCP disconnect message was previously received (which sets a field), the method returns false.
The ISAPI implementation (IIS 6) instead calls into a function within IIS that tells the caller whether the TCP client on the current request/response context is still connected. Presumably it works on the same basis: when the web server receives a TCP timeout, disconnect, or connection-reset message, it sets a flag and lets execution continue instead of terminating the response-generating thread.
Here's the relevant source code:
HttpResponse.IsClientConnected: http://referencesource.microsoft.com/#System.Web/HttpResponse.cs,80335a4fb70ac25f
IIS7WorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/IIS7WorkerRequest.cs,1aed87249b1e3ac9
ISAPIWorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/ISAPIWorkerRequest.cs,f3e25666672e90e8
It all starts with an HTTP request. Inside it you can, for example, spawn worker threads that can outlive the request itself. This is where IsClientConnected comes in handy, so that the worker thread can tell whether the client has already received the response and disconnected.
As far as I understand, RPC is a client-server model in which the client sends requests to the server side and gets some results back. Then, is a Java servlet also a kind of RPC that uses the HTTP protocol? Am I right?
Here is the very first sentence of the Wikipedia article on RPC:
In computer science, a remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote.
So, Servlets would be an RPC mechanism if you could invoke a servlet from a client using
SomeResult r = someObject.doSomething();
That's not the case at all. To invoke a servlet, you need to explicitly send an HTTP request and encode the parameters in the way the servlet expects them, then read and parse the response.
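To make the contrast concrete, here is a rough sketch of what "explicitly send an HTTP request" means for a Java client, using java.net.http.HttpClient (Java 11+). The URL, servlet path and parameter name are made up for illustration; a real servlet defines its own.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class CallServlet {
        public static void main(String[] args) throws Exception {
            // No "SomeResult r = someObject.doSomething();" here: the caller has to
            // build the request, encode the parameters, and parse the response itself.
            String action = URLEncoder.encode("doSomething", StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/app/someServlet?action=" + action))
                    .GET()
                    .build();

            HttpClient client = HttpClient.newHttpClient();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // The "result" is just text (or JSON/XML) that the client must parse itself.
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }

An RPC mechanism (RMI, JAX-WS, gRPC and so on) hides exactly this boilerplate behind a generated or dynamic stub, which is the difference the answer above is pointing at.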
I'm having issues with the NetworkAccessManager.get method. When I make two HTTP connections, the second connection fails with the error "99: The bound address is already in use".
I start the second connection in the finished slot of the first connection. Maybe multiple async HTTP connections are not supported on BB-10?
Has anyone got the same error?
In essence, you should use only a single instance of the NetworkAccessManager and pass multiple requests through it. The documentation (http://developer.blackberry.com/cascades/reference/qnetworkaccessmanager.html) specifies the following:
One QNetworkAccessManager should be enough for the whole Qt application.
...
QNetworkAccessManager has an asynchronous API. When the replyFinished slot above is called, the parameter it takes is the QNetworkReply object containing the downloaded data as well as meta-data (headers, etc.).
...
Note: QNetworkAccessManager queues the requests it receives. The number of requests executed in parallel is dependent on the protocol. Currently, for the HTTP protocol on desktop platforms, 6 requests are executed in parallel for one host/port combination.
So basically what you should be doing is sending multiple requests through the same NetworkAccessManager and then handling the response based on the meta-data. The NetworkAccessManager will handle the async processing for you.
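This thread is about Qt/Cascades, so no Qt code is added here, but the same "one shared client, many queued asynchronous requests" idea can be sketched in Java with java.net.http.HttpClient, purely as a cross-language analogue (the URLs are placeholders, not BlackBerry APIs):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class SharedClientDemo {
        public static void main(String[] args) {
            // One client instance for the whole application, analogous to the
            // single QNetworkAccessManager recommended above.
            HttpClient client = HttpClient.newHttpClient();

            List<CompletableFuture<Void>> pending = new ArrayList<>();
            for (String url : new String[] {"https://example.com/a", "https://example.com/b"}) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                // sendAsync queues the request; the reply is handled in a callback,
                // much like connecting a slot to the manager's finished signal.
                pending.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .thenAccept(resp -> System.out.println(resp.uri() + " -> " + resp.statusCode())));
            }

            // Block until both replies have been handled before the demo exits.
            pending.forEach(CompletableFuture::join);
        }
    }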
I'm building an ASP.NET service (a simple aspx) that requires a REQ call to a ZeroMQ REP node.
So I have to use the REQ/REP pattern, but I can't figure out the proper way to initialize the ZeroMQ context in the ASP.NET pipeline.
Moreover, can I share a single connection among the different ASP.NET threads and if so how?
Edit: After some study, it looks to me that an inproc router in a dedicated thread should be the way to go, since it would handle synchronization.
But more questions arise:
Should the other end of such an inproc node be a DEALER? If so, should it connect to the REP node? Or should it bind to a TCP port, with the REP server node coded to connect to it (the latter would be a bit cumbersome, since I could have different servers exposing the service)?
As an alternative, is it correct to build an inproc node bound to a ROUTER socket at one end, and connecting with REQ on the other? If so, should I code the node so that it manually envelopes each message, just to be able to send responses back to the correct requesting thread?
Is Application_Start the correct pipeline point to initialize the thread handling such a router?
At the moment a ROUTER/DEALER inproc node that connects to the REP server looks like the best option, but I'm not sure that it's possible to connect from a DEALER socket. But this is still just speculation and could be entirely wrong.
The zmq_socket manual states:
ØMQ sockets are not thread safe. Applications MUST NOT use a socket from multiple threads except after migrating a socket from one thread to another with a "full fence" memory barrier.
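To make the ROUTER/DEALER idea concrete, here is a rough sketch of the topology in Java using the JeroMQ bindings (org.zeromq, assuming a 0.5.x-style API), since this thread spans languages: a single background thread owns the TCP connection to the REP service, each request thread opens its own short-lived inproc REQ socket, and ZMQ.proxy shuttles the routing envelopes back and forth so no manual enveloping is needed. The endpoints are placeholders and the ASP.NET wiring (Application_Start, per-request threads) is not shown.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class ZmqFrontEnd {
        public static void main(String[] args) throws Exception {
            try (ZContext ctx = new ZContext()) {
                // ROUTER faces the request threads over inproc; DEALER connects out
                // to the remote REP service (placeholder endpoint).
                ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
                router.bind("inproc://req-frontend");
                ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
                dealer.connect("tcp://rep-server.example:5555");

                // Hand both sockets to a dedicated proxy thread. Thread.start() is a
                // full fence, which satisfies the migration rule quoted above.
                Thread proxy = new Thread(() -> ZMQ.proxy(router, dealer, null));
                proxy.setDaemon(true);
                proxy.start();

                // A request thread (e.g. one per incoming HTTP request): sockets are
                // not thread safe, so each thread opens its own cheap inproc REQ
                // socket instead of sharing the TCP connection.
                Thread request = new Thread(() -> {
                    ZMQ.Socket req = ctx.createSocket(SocketType.REQ);
                    req.connect("inproc://req-frontend");
                    req.send("ping");
                    // Blocks until the REP service at the placeholder endpoint answers.
                    System.out.println("reply: " + req.recvStr());
                    req.close();
                });
                request.start();
                request.join();
            }
        }
    }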
What I am trying to solve: have an Erlang TCP server that listens on a specific port (the code should reside in some kind of external-facing interface/API), where each incoming connection is handled by a gen_server (that is, even the gen_tcp:accept should be coded inside the gen_server), but I don't actually want to initially spawn a predefined number of processes that accept incoming connections. Is that somehow possible?
Basic Procedure
You should have one static process (implemented as a gen_server or a custom process) that performs the following procedure:
Listens for incoming connections using gen_tcp:accept/1
Every time it returns a connection, tell a supervisor to spawn off a worker process (e.g. another gen_server process)
Get the pid for this process
Call gen_tcp:controlling_process/2 with the newly returned socket and that pid
Send the socket to that process
Note: You must do it in that order, otherwise the new process might use the socket before ownership has been handed over. If this is not done, the old process might get messages related to the socket when the new process has already taken over, resulting in dropped or mishandled packets.
The listening process should only have one responsibility, and that is spawning workers for new connections. This process will block when calling gen_tcp:accept/1, which is fine because the started workers will handle ongoing connections concurrently. Blocking on accept ensures the quickest response time when new connections are initiated. If the process needs to do other things in between, gen_tcp:accept/2 could be used with other actions interleaved between timeouts.
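The shape of this design (a single blocking acceptor whose only job is to hand each accepted socket to a freshly started worker) is not Erlang-specific. As a rough cross-language analogue only, here is the same structure with a plain Java ServerSocket and a thread pool; it has no equivalent of gen_tcp:controlling_process/2 or OTP supervision, and the port and echo behaviour are purely illustrative.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AcceptorDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newCachedThreadPool();
            try (ServerSocket listener = new ServerSocket(5555)) { // illustrative port
                while (true) {
                    // The acceptor blocks here; its only responsibility is to accept
                    // and hand the socket to a worker, mirroring the procedure above.
                    Socket socket = listener.accept();
                    workers.submit(() -> handle(socket));
                }
            }
        }

        // Worker: owns the connection for its whole lifetime (simple line echo).
        private static void handle(Socket socket) {
            try (socket;
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line);
                }
            } catch (Exception e) {
                // Connection-level failures affect only this worker, not the acceptor.
            }
        }
    }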
Scaling
You can have multiple processes waiting with gen_tcp:accept/1 on a single listening socket, further increasing concurrency and minimizing accept latency.
Another optimization would be to pre-start some socket workers to further minimize latency after accepting the new socket.
A third and final optimization would be to make your processes more lightweight by implementing the OTP design principles in your own custom processes using proc_lib. However, you should only do this if you benchmark and conclude that it is the gen_server behavior that slows you down.
The issue with gen_tcp:accept is that it blocks, so if you call it within a gen_server, you block the server from receiving other messages. You can try to avoid this by passing a timeout, but that ultimately amounts to a form of polling, which is best avoided. Instead, you might try Kevin Smith's gen_nb_server; it uses an internal undocumented function, prim_inet:async_accept, and other prim_inet functions to avoid blocking.
You might want to check out http://github.com/oscarh/gen_tcpd and use the handle_connection function to convert the process you get to a gen_server.
You should use prim_inet:async_accept(Listen_socket, -1), as Steve said.
The incoming connection will then be accepted in your handle_info callback (assuming your interface is also a gen_server), since you have used an asynchronous accept call.
On accepting the connection you can spawn another gen_server (I would recommend gen_fsm) and make it the controlling process by calling gen_tcp:controlling_process(CliSocket, Pid), where Pid is the pid of the spawned process.
After this, all the data from the socket will be received by that process rather than by your interface code. In the same way, a new controlling process is spawned for each new connection.