I would like to queue requests made by a mobile application that uses an API to send some data to the server.
The scenario for now is like this:
Mobile app sends a request with some data
I need to get the data, validate it (a few DB queries) and save it to a few tables in the DB.
I need to return an OK response to the mobile app, or a bad request with a list of errors in case validation fails.
Now if I get 1,000 requests like this in 3 seconds, my server will collapse.
I would like to use RabbitMQ to queue those requests. But what should I do with the response? I cannot send OK after RabbitMQ has received the message, because I don't know yet whether validation will pass. So should the mobile app wait until the RabbitMQ message has been properly consumed?
This could be a solution to your problem:
The client sends a request
The server queues the request, generates a unique identifier that belongs to the queued request, and then sends a response containing the generated identifier with a 202 (Accepted) status code, which means the request has been queued on the server but there is no result yet.
The client subscribes to the generated identifier on a message broker
After the queued request has finished on the server, the server publishes a response to the message broker under the generated identifier for that request
The client receives the published response on the identifier it subscribed to (sketched below)
Tips:
I use EMQTT as the message broker. Another option would be the RabbitMQ MQTT plugin.
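A minimal sketch of the flow above, assuming Python with Flask for the HTTP endpoint and pika for RabbitMQ; the queue name incoming_data, the topic responses/<id> and the host names are placeholders, not something from the original question:

    # Server side: accept the request, queue it, and reply 202 with an identifier.
    import json
    import uuid

    import pika
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/data", methods=["POST"])
    def enqueue_request():
        request_id = str(uuid.uuid4())  # identifier the client will subscribe to
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="incoming_data", durable=True)
        channel.basic_publish(
            exchange="",
            routing_key="incoming_data",
            body=json.dumps({"id": request_id, "payload": request.get_json()}),
        )
        connection.close()
        # 202 Accepted: queued on the server, validation has not happened yet
        return jsonify({"id": request_id}), 202

On the client, the subscription could look like this with paho-mqtt (1.x-style constructor; version 2.x additionally takes a callback API version argument):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print("validation result:", msg.payload.decode())  # OK or the error list

    request_id = "..."  # taken from the 202 response body
    mqtt_client = mqtt.Client()
    mqtt_client.on_message = on_message
    mqtt_client.connect("broker.example.com", 1883)
    mqtt_client.subscribe(f"responses/{request_id}")
    mqtt_client.loop_forever()

The worker that consumes incoming_data would run the validation and DB writes, then publish the outcome (OK or the error list) to responses/<id> on the MQTT broker, which is what the client receives in the last step.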
Related
I designed a RabbitMQ system using the RPC pattern, with one client and one server (calculating Fibonacci numbers).
My problem is this:
when I send two or more requests to the server, each request is processed only after the previous one is done.
Question: is this expected? I mean, why can't all the requests be processed asynchronously?
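For reference, here is a hedged sketch of what such an RPC server typically looks like with pika (the queue name rpc_queue and the Fibonacci function are placeholders, not the asker's code). A single consumer like this handles one message at a time, which is why requests appear serialized; running several of these workers on the same queue, together with prefetch_count, is the usual way to let RabbitMQ dispatch requests in parallel:

    import pika

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def on_request(channel, method, props, body):
        result = fib(int(body))
        channel.basic_publish(
            exchange="",
            routing_key=props.reply_to,  # reply queue named by the client
            properties=pika.BasicProperties(correlation_id=props.correlation_id),
            body=str(result),
        )
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="rpc_queue")
    channel.basic_qos(prefetch_count=1)  # each worker holds one unacknowledged message
    channel.basic_consume(queue="rpc_queue", on_message_callback=on_request)
    channel.start_consuming()  # start several workers for parallel processing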
When speaking from a conceptual point of view, is it standard practice to mix WebSockets and HTTP requests when making a chat application (or any application that requires real-time communication between devices)?
Imagine a scenario with a client and a server in a chat app. What would be the best approach for connecting and sending data between the client and the server? Would it be using WebSockets for both sending and receiving, or HTTP requests for sending (so the client gets a response and knows the message was received) and a WebSocket only for receiving new messages?
No, this is not standard practice.
If you need real-time communication between client and server, you normally just use a WebSocket connection and keep that one open. The client can send messages to the server and receive messages through the same connection.
Using HTTP requests for sending messages to the server and receiving new messages via WebSocket seems odd and just adds unnecessary complexity.
Now if your server has some endpoints for subscribing to real-time data, e.g. a chat room, and endpoints for getting information you don't necessarily want to subscribe to, e.g. information about a certain user, then you can use the appropriate protocol for each endpoint.
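As a minimal sketch of the single-connection approach, here is a chat-style relay using Python's third-party websockets package (host, port and the broadcast behaviour are assumptions for illustration; the exact handler signature depends on the library version):

    import asyncio
    import websockets

    connected = set()

    async def chat_handler(websocket):
        connected.add(websocket)
        try:
            async for message in websocket:   # receive from this client...
                for peer in connected:        # ...and send to everyone over the same sockets
                    await peer.send(message)
        finally:
            connected.remove(websocket)

    async def main():
        async with websockets.serve(chat_handler, "localhost", 8765):
            await asyncio.Future()            # run forever

    asyncio.run(main())

Each client sends and receives over the one connection it opened; there is no separate HTTP endpoint for posting messages.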
I have a question regarding HTTP requests and responses.
I know that I can send a request to a server from my device (I can build and send a GET request to http://google.com for example). But what if I am Google and I want to send a request from the server to the user's device? How do I do that?
I understand that when the server receives a request it can answer it, but in this case I want the server to send a request to the user's device, just like WhatsApp does when you receive a new message.
Thanks for the help!
There are several options for sending information from the server to a client:
Push notifications - these depend on the platform you are using
Opening a WebSocket connection, which allows bi-directional communication
I'm sure there are more options, but those are the two that come to mind right away.
It really depends on your application's use case. For example, a chat application would want to keep a socket open between it and the server so it can be updated frequently with new messages and so on. On the other hand, a simple calendar application might want to use push notifications to send reminders on certain dates and times.
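To make the WebSocket option concrete, here is a small sketch in which the server pushes to connected devices without any client request triggering it (again using Python's websockets package; the host, port and 10-second interval are made up for illustration):

    import asyncio
    import websockets

    clients = set()

    async def register(websocket):
        clients.add(websocket)
        try:
            await websocket.wait_closed()     # keep the connection open
        finally:
            clients.remove(websocket)

    async def push_updates():
        while True:
            await asyncio.sleep(10)
            for ws in set(clients):           # server-initiated: no request from the device
                await ws.send("you have a new message")

    async def main():
        async with websockets.serve(register, "0.0.0.0", 8765):
            await push_updates()

    asyncio.run(main())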
Can the HTTP client send a request while receiving the HTTP response?
For example, a client sends HTTP request A to the server, and the server starts to send the HTTP response. Before the client has finished receiving response A, it sends an additional request B. Is this possible, and does it comply with the HTTP RFC?
I think the scenario above is different from pipelining. As I understand pipelining, the client sends multiple requests A, B, C and the server then responds to A, B, C consecutively. In the scenario above, however, request B is issued while response A is still being processed.
Thank you
With the same connection object you must read the whole response before you can send a new request to the server, because the response provides access to the headers, the return type and the entity body. If you send a new request before fully reading the response, the client may get confused by mismatched responses.
Again, it totally depends on the client library you are using. The library could allow asynchronous requests.
There are concepts like AsyncTask in Android, promises in AngularJS, etc. that allow asynchronous requests.
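For example, here is a hedged sketch of issuing request B without waiting for response A, by letting the client library run the two requests concurrently on separate connections (Python with aiohttp; the URLs are placeholders):

    import asyncio
    import aiohttp

    async def fetch(session, url):
        async with session.get(url) as response:
            return await response.text()

    async def main():
        async with aiohttp.ClientSession() as session:
            # A and B run concurrently; each uses its own connection from the pool,
            # so neither has to finish reading before the other is sent.
            a, b = await asyncio.gather(
                fetch(session, "https://example.com/a"),
                fetch(session, "https://example.com/b"),
            )
            print(len(a), len(b))

    asyncio.run(main())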
In one of our solutions (pure messaging, no orchestrations), we receive a message on a request response receive location, using the MLLP adaptor and HL7v2x pipelines. The receive pipeline generates the response message and publishes it, which due to promoted properties gets routed through the send pipeline of the request/response receive port, back to the client.
We then have a two-way send port that subscribes to the received message, uses a map to translate the message and sends it to a request-response WCF receive port that is also on the same BizTalk machine. It sounds odd, but we have clients that send their data via web services and some that send via HL7v2 MLLP, so that's why we're doing it this way.
The WCF request response receive port also generates a response by publishing a message and promoting the EpmRRCorrelationToken, CorrelationToken, ReqRespTransmitPipelineID, RouteDirectToTP, IsRequestResponse and ReceivePipelineResponseConfig properties, which causes the response to get routed through the send pipeline as a SOAP response.
We subscribe to the messages received on the WCF request response port, and drop them to a file.
Technically it works. The client using MLLP gets an acknowledgement response. The client using WCF SOAP web services get a SOAP response. The file system contains all messages that were sent.
So if I think about the messages published, I imagine there is the HL7v2 message received over MLLP (A) and the HL7v2 response (B). Then the translated message received by the WCF receive port (C) and its response (D). Plus the WCF response that was received by the WCF send port (E), as shown in the attachment.
The problem is that we're getting suspended messages in BizTalk such as "A response message for two-way receive port "xxx" is being suspended as the messaging engine could not correlate the response to an existing request message. This usually happens when the host process has been recycled." In this case the service is the MLLP receive port, and there are 3 suspended messages: the HL7v2 ACK (B), and two copies of the response message for the WCF service, which I suspect are (D) and (E).
We're also seeing errors "The instance completed without consuming all of its messages. The instance and its unconsumed messages have been suspended."
With the MLLP adapter it's possible that the MLLP receive location is timing out. There are a few things you can look at:
The Persistent Connection property should be set to True, with the Receive Timeout set to 0.
Since you are calling a WCF service for the published HL7 messages that come in via MLLP, I am wondering whether you are handling the WCF service response. If not, that will cause these error messages. You need to subscribe to the service response or use a one-way send port.
Relook at your design and the need to call a local WCF service, when a message received via MLLP is ultimately being sent to a FILE location. You can do this directly, skipping the whole WCF route in between.