I'm having issues with the NetworkAccessManager.get method. When I make two HTTP connections, the second connection fails with the error "99: The bound address is already in use".
I start the second connection in the finished slot of the first connection. Maybe multiple async HTTP connections are not supported on BB-10?
Has anyone else seen this error?
In essence, you should use only a single instance of QNetworkAccessManager and pass multiple requests through it. The documentation (http://developer.blackberry.com/cascades/reference/qnetworkaccessmanager.html) specifies the following:
One QNetworkAccessManager should be enough for the whole Qt
application.
...
QNetworkAccessManager has an asynchronous API. When the replyFinished slot above is called, the parameter it takes is the QNetworkReply object containing the downloaded data as well as meta-data (headers, etc.).
...
Note: QNetworkAccessManager queues the requests it receives. The number of requests executed in parallel is dependent on the protocol. Currently, for the HTTP protocol on desktop platforms, 6 requests are executed in parallel for one host/port combination.
So what you should be doing is sending multiple requests through the same QNetworkAccessManager and then handling each response based on its meta-data. The manager will handle the async processing for you.
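To make the "one shared manager, many queued requests" idea concrete without requiring Qt, here is a minimal Python sketch of the same pattern. `SharedHttpManager`, its injected `fetch` function, and the callback signature are all made up for illustration; they only mirror the behavior the docs describe (one application-wide instance, requests queued through it, a bounded number running in parallel), not the real Qt API.

```python
from concurrent.futures import ThreadPoolExecutor

class SharedHttpManager:
    """Illustrative stand-in for one application-wide QNetworkAccessManager:
    every request goes through the same instance, which queues them and runs
    a bounded number in parallel (mirroring Qt's per host/port HTTP limit)."""

    def __init__(self, fetch, max_parallel=6):
        self._fetch = fetch  # injected so the sketch needs no real network
        self._pool = ThreadPoolExecutor(max_workers=max_parallel)

    def get(self, url, on_finished):
        # Analogue of QNetworkAccessManager::get() plus the finished()
        # signal: queue the request, then invoke the callback with the reply.
        future = self._pool.submit(self._fetch, url)
        future.add_done_callback(lambda f: on_finished(url, f.result()))
        return future

    def close(self):
        self._pool.shutdown(wait=True)

# Usage: one manager, many requests -- never one manager per request.
manager = SharedHttpManager(fetch=lambda url: "reply for " + url)
results = {}
futures = [manager.get(u, lambda u, r: results.__setitem__(u, r))
           for u in ("http://host/a", "http://host/b")]
manager.close()
```

The point is that both requests share the one manager, which is exactly how the second connection in the question should be issued.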
Related
I would like to make POST requests from a DoFn in an Apache Beam pipeline running on Dataflow.
For that, I have created a client which instantiates a CloseableHttpClient configured on a PoolingHttpClientConnectionManager.
However, I instantiate a client for each element that I process.
How can I set up a persistent client shared by all my elements?
And is there another class I should use for parallel, high-speed HTTP requests?
You can put the client into a member variable, use the @Setup method to open it, and @Teardown to close it. Almost all IO implementations in Beam use this pattern; see JdbcIO for example.
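The lifecycle described above can be sketched as follows. This is a plain-Python illustration, not the real `apache_beam` API (the class and the fake client are invented for the example), but the shape matches Beam's DoFn lifecycle: the client lives in a member variable, is opened once in setup, reused for every element, and closed once in teardown.

```python
class HttpPostFn:
    """Sketch of a DoFn-like lifecycle: one client per DoFn instance,
    not one client per element."""

    def __init__(self, make_client):
        self._make_client = make_client  # factory injected for the sketch
        self.client = None

    def setup(self):
        # Corresponds to Beam's @Setup: runs once per DoFn instance.
        self.client = self._make_client()

    def process(self, element):
        # Runs once per element and reuses the shared client.
        return self.client.post(element)

    def teardown(self):
        # Corresponds to Beam's @Teardown: runs once at shutdown.
        self.client.close()
        self.client = None

# A tiny fake client to show the lifecycle without real HTTP.
class FakeClient:
    opened = 0
    def __init__(self):
        FakeClient.opened += 1
    def post(self, element):
        return ("posted", element)
    def close(self):
        pass

fn = HttpPostFn(FakeClient)
fn.setup()
out = [fn.process(e) for e in range(3)]
fn.teardown()
```

However many elements pass through `process`, only one client is ever constructed.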
If HTTP is connectionless, how does the ASP.NET response property HttpResponse.IsClientConnected detect whether the client is still connected?
HTTP is not "connectionless" - you still need a connection to receive data from the server; more correctly, HTTP is stateless. Applications running on top of HTTP will most likely be stateful, but HTTP itself is not.
"Connectionless" can also refer to a system using UDP as the transport instead of TCP. HTTP primarily runs over TCP and pretty much every real webserver expects, and returns, TCP messages instead of UDP. You might see HTTP-like traffic in UDP-based protocols like UPnP, but because you want your webpage to be delivered reliably, TCP will always be used instead of UDP.
As for IsClientConnected: when you access that property, it calls into the current HttpWorkerRequest, an abstract class implemented by the current host environment.
IIS 7+ implements it such that the property returns false if a TCP disconnect message was previously received (which sets a field).
The ISAPI implementation (IIS 6) instead calls into a function within IIS that tells the caller whether the TCP client on the current request/response context is still connected. Presumably it works on the same basis: when the webserver receives a TCP timeout, disconnect, or connection-reset message, it sets a flag and lets execution continue instead of terminating the response-generator thread.
Here's the relevant source code:
HttpResponse.IsClientConnected: http://referencesource.microsoft.com/#System.Web/HttpResponse.cs,80335a4fb70ac25f
IIS7WorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/IIS7WorkerRequest.cs,1aed87249b1e3ac9
ISAPIWorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/ISAPIWorkerRequest.cs,f3e25666672e90e8
It all starts with an HTTP request. Inside it you can, for example, spawn worker threads that outlive the request itself. This is where IsClientConnected comes in handy: the worker thread can tell whether the client has already received the response and disconnected.
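The flag-based mechanism described above can be modeled in a few lines. This is a deliberately minimal sketch (the class and method names are invented): the server's I/O layer sets a flag when it observes a TCP disconnect, and the property simply reads that flag rather than probing the socket.

```python
import threading

class WorkerRequest:
    """Toy model of how the host implements IsClientConnected: a
    disconnect event sets a flag; the property just reads the flag."""

    def __init__(self):
        self._disconnected = threading.Event()

    def on_tcp_disconnect(self):
        # Called by the server's I/O layer when it sees a TCP
        # disconnect/reset; execution of the response continues.
        self._disconnected.set()

    @property
    def is_client_connected(self):
        return not self._disconnected.is_set()

req = WorkerRequest()
seen = []
seen.append(req.is_client_connected)  # connection still up
req.on_tcp_disconnect()               # server observes a TCP reset
seen.append(req.is_client_connected)  # worker thread now sees False
```

A long-running worker thread would poll `is_client_connected` periodically and abandon its work once it turns false.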
As far as I understand, RPC is a client-server model in which the client sends requests to the server side and gets results back. Is a Java servlet, then, also a kind of RPC that uses the HTTP protocol? Am I right?
Here is the very first sentence of the wikipedia article on RPC:
In computer science, a remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote.
So, Servlets would be an RPC mechanism if you could invoke a servlet from a client using
SomeResult r = someObject.doSomething();
That's not the case at all. To invoke a servlet, you need to explicitly send an HTTP request and encode the parameters in the way the servlet expects them, then read and parse the response.
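Spelled out, the work that `SomeResult r = someObject.doSomething()` hides looks roughly like this. The endpoint path, parameter names, and JSON response format are made up for illustration; the point is only that the client must do the encoding, sending, and parsing itself.

```python
from urllib.parse import urlencode
import json

def call_do_something(http_post, a, b):
    # 1. Encode the arguments the way the servlet expects them.
    body = urlencode({"a": a, "b": b})
    # 2. Explicitly send the HTTP request. (http_post is injected here
    #    in place of a real socket so the sketch is self-contained.)
    raw = http_post("/app/doSomething", body)
    # 3. Read and parse the response yourself.
    return json.loads(raw)["result"]

# Fake server standing in for the servlet container:
def fake_post(path, body):
    params = dict(p.split("=") for p in body.split("&"))
    return json.dumps({"result": int(params["a"]) + int(params["b"])})

r = call_do_something(fake_post, 2, 3)
```

None of this plumbing exists in a true RPC system, where the stub generates it for you and the call site looks like an ordinary method call.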
I am implementing a hub/servers MPI application. Each of the servers can get tied up waiting for some data, then they do an MPI Send to the hub. It is relatively simple for me to have the hub waiting around doing a Recv from ANY_SOURCE. The hub can get busy working with the data. What I'm worried about is skipping data from one of the servers. How likely is this scenario:
servers 1 and 2 do Sends
hub does Recv and ends up getting data from server 1
while hub busy, server 1 gets more data, does another Send
when the hub does its next Recv, it gets the more recent server 1 data rather than the older server 2 data
I don't need a guarantee that the order the Sends occur is the order ANY_SOURCE processes them (though it would be nice), but if I knew that in practice it will be close to the order they are sent, I may go with the above. However, if it is likely I could skip over data from one of the servers, I need to implement something more complicated, which I think would be this pattern:
servers each do Send's
hub does an Irecv for each server
hub does a Waitany on all server requests
upon completion of one server request, hub does a Test on all the others
of all the Irecvs that have completed, the hub selects the oldest server data (there is a timing tag in the server data)
the hub communicates with the server it just chose and has it start a new Send; the hub posts a new Irecv
This requires more complex code, and my first effort crashed inside the Waitany call in a way that I'm finding difficult to debug. I am using the Python bindings mpi4py - so I have less control over buffers being used.
It is guaranteed by the MPI standard that the messages are received in the order they are sent (non-overtaking messages). See also this answer to a similar question.
However, there is no guarantee of fairness when receiving from ANY_SOURCE and when there are distinct senders. So yes, it is the responsibility of the programmers to design their own fairness system if the application requires it.
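For the more complicated pattern in the question, the selection step after Waitany can be sketched with the MPI calls mocked out. Real code would use mpi4py's `MPI.Request.Waitany` and `Request.Test`; here we assume those have already run and we just have the list of completed receives, each carrying the timing tag the question describes (the `ts` field name is invented for the example).

```python
def pick_oldest(completed):
    """completed: list of (server_rank, payload) for every Irecv that
    Test reported complete; payload carries a 'ts' timing tag set by
    the sending server. Return the message with the oldest tag."""
    return min(completed, key=lambda item: item[1]["ts"])

# Suppose Waitany finished server 1's request, and Test then shows that
# server 2's Irecv has also completed, with an older timestamp:
completed = [
    (1, {"ts": 10.5, "data": "newer server-1 message"}),
    (2, {"ts":  9.8, "data": "older server-2 message"}),
]
rank, msg = pick_oldest(completed)
```

Processing by oldest tag is what protects against the starvation scenario, since ANY_SOURCE alone gives no fairness guarantee across senders.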
I don't get one thing in RMI. It's a bit confusing actually.
On the client side, we have the business interface (Hello.class), the client code (HelloClient.class) and the remote stub (probably Hello_stub.class); on the server side, we have the server code (HelloImpl.class), the business interface (Hello.class) and the skeleton.
From Java 5 onwards, we don't create stubs, but I believe they are still in the picture.
So, how does the communication happen?
The client calls a method on Hello.class, which then calls Hello_stub.class for all network operations. Hello_stub.class calls the skeleton, which then calls Hello.class and then calls methods on HelloImpl.class?
I am a bit confused after reading Head First EJB :). I would be glad if someone clarified it.
When the stub's method is called:
Gets a TCP connection to its target out of the client connection pool, or creates one if there isn't a pooled connection.
Bundles up the call and the arguments into a serializable object.
Writes the object to the connection along with some other stuff like a JRMP protocol header and a remote objectID.
Reads the reply object from the connection.
Returns the connection to the pool, where it gets closed after a certain idle time.
If the reply object is an exception, throws it.
Otherwise returns the reply object as the method result.
At the server, a thread sits on the listening socket, accepting connections, creating threads, and dispatching incoming remote calls to the correct remote object via the specified object ID.
This is done via reflection. RMI skeletons haven't been used since 1998, except in the case of stubs you deliberately generate with rmic -v1.1, but the principle is the same either way.
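The stub and dispatch steps listed above can be modeled end to end in a toy sketch. Nothing here is real RMI: the "wire" is just bytes in memory, `pickle` stands in for Java serialization, and the object-ID table and function names are invented. It shows the essential flow, though: marshal the call with an object ID, dispatch on the server by reflection, send back a reply object, and rethrow it on the client if it is an exception.

```python
import pickle

exported = {}  # server-side table: objectID -> remote object

def server_dispatch(wire_bytes):
    # Unmarshal the call, look up the target by object ID, and invoke
    # the method reflectively (Python's getattr, like Java reflection).
    object_id, method, args = pickle.loads(wire_bytes)
    target = exported[object_id]
    try:
        reply = getattr(target, method)(*args)
    except Exception as e:
        reply = e  # exceptions travel back as the reply object
    return pickle.dumps(reply)

def stub_call(object_id, method, *args):
    # Bundle up the call and arguments, "write" them to the connection
    # along with the object ID, then read the reply object back.
    wire = pickle.dumps((object_id, method, args))
    reply = pickle.loads(server_dispatch(wire))
    if isinstance(reply, Exception):
        raise reply           # remote exception rethrown at the call site
    return reply              # otherwise: the method result

class HelloImpl:
    def say_hello(self, name):
        return "Hello, " + name

exported["obj-1"] = HelloImpl()
greeting = stub_call("obj-1", "say_hello", "world")
```

The caller of `stub_call` sees an ordinary return value or exception, which is exactly the illusion the RMI stub provides over the TCP connection, JRMP header, and object ID described above.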