Idea: Generate multiple HTTP responses from a single HTTP request

I would like to know your views/comments on this concept: is an alternative already available, and would this be feasible/beneficial?
As I understand it, for every HTTP request the server performs some operation and sends back a single HTTP response.
Now consider any scenario where we want more control over the process running on the server.
Situation 1: HTTP request sent -> server starts processing (long task in progress) -> user closes the browser.
Here the process keeps executing, consuming server resources, and the HTTP response will be ignored at the client.
Resources are wasted.
Situation 2: HTTP request sent -> server starts processing (long task in progress).
Here the client is unaware of the status of the process running on the server.
The client has to wait until it gets back the HTTP response.
My idea: between the initial HTTP request and the final HTTP response, add a way to send multiple intermediate HTTP responses that carry information about the process running at the server end.
Solution to Situation 1: HTTP request sent -> server starts processing (long task in progress) -> [return the process id as an intermediate HTTP response] -> user closes the browser -> [send an HTTP request that closes the server process using that process id].
Solution to Situation 2: HTTP request sent -> server starts processing (long task in progress) -> [return HTTP responses with details of the server-side process at intervals] -> [perform any operation if required]. (A rough sketch of both flows follows below.)
Kindly comment :) and correct me if I'm missing anything.
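To make the intended flows concrete, here is a rough sketch of how I imagine Solutions 1 and 2 could be approximated with ordinary one-response-per-request HTTP today. Flask, the /start, /status/<id> and /cancel/<id> endpoints, and the in-memory job table are purely illustrative assumptions on my part, not part of any standard.

    # Purely illustrative sketch: Flask, the endpoint names and the in-memory
    # job table are assumptions, not part of HTTP itself.
    import threading, uuid
    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # job id -> {"progress": int, "cancel": threading.Event}

    def long_task(job_id):
        cancel = jobs[job_id]["cancel"]
        for step in range(100):
            if cancel.is_set():                 # Solution 1: client asked us to stop
                return
            # ... do one slice of the real long-running work here ...
            jobs[job_id]["progress"] = step + 1

    @app.route("/start", methods=["POST"])
    def start():
        job_id = str(uuid.uuid4())
        jobs[job_id] = {"progress": 0, "cancel": threading.Event()}
        threading.Thread(target=long_task, args=(job_id,)).start()
        # The "intermediate information" (the process/job id) is simply the
        # response to this first request.
        return jsonify(job_id=job_id), 202

    @app.route("/status/<job_id>")              # Solution 2: poll for progress
    def status(job_id):
        return jsonify(progress=jobs[job_id]["progress"])

    @app.route("/cancel/<job_id>", methods=["POST"])  # Solution 1: free the server
    def cancel(job_id):
        jobs[job_id]["cancel"].set()
        return jsonify(cancelled=True)

    if __name__ == "__main__":
        app.run()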

For "Situation 2", you should have a look at informational responses; see https://greenbytes.de/tech/webdav/rfc7231.html#status.1xx.

Related

I need to send requests to RabbitMQ asynchronously

I designed a RabbitMQ system using the RPC pattern, with one client and one server (which calculates Fibonacci numbers).
My problem is this:
When I send two or more requests to the server, each request is processed only after the previous request is done.
Question: is this expected? I mean, why can't all the requests be processed asynchronously?
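For reference, here is a minimal sketch of the server/consumer side in the usual pika layout (the queue name rpc_queue and the fib helper are assumptions). A single blocking consumer like this handles exactly one delivery at a time, which is why the requests appear serialized; overlap comes from running several such workers, or from dispatching the work to threads.

    # Minimal sketch of an RPC consumer with pika; queue name and fib() are
    # illustrative. One blocking consumer processes one message at a time.
    import pika

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def on_request(ch, method, props, body):
        result = fib(int(body))
        ch.basic_publish(
            exchange="",
            routing_key=props.reply_to,
            properties=pika.BasicProperties(correlation_id=props.correlation_id),
            body=str(result),
        )
        ch.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="rpc_queue")
    channel.basic_qos(prefetch_count=1)      # hand this worker one message at a time
    channel.basic_consume(queue="rpc_queue", on_message_callback=on_request)
    channel.start_consuming()                # run more copies of this worker to
                                             # process requests concurrently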

HTTP server timeout: when should it be sent?

I'm writing a small HTTP server and trying to understand timeout issues.
RFC 7230 doesn't answer the question of what conditions force a server to send a timeout (408 Request Timeout). Should it be sent when the client takes too long to send its request? Or when nothing has been sent on an open connection for some time? What should the logic be? Is there any standard or behavioral model?
The whole process would be:
server waits for a request -> reads the request headers -> reads the request body -> prepares the response headers -> prepares the response body
So if receiving the request takes too long (e.g. 30 seconds), the server returns a response with status code 408 Request Timeout.
The next case is when the server can read the whole request headers and body and tries to process the request, but cannot complete it within some amount of time; then it returns 504 Gateway Timeout or 503 Service Unavailable.
It depends on each situation, but the rule is: always use 4xx for request (client) errors and 5xx for server errors.
A short explanation of those HTTP codes is listed here: HTTP response status codes
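As an illustration of the first rule, here is a bare-bones sketch with plain sockets (the 30-second limit and the minimal parsing are assumptions, and a real server would do far more): if the client has not delivered a complete request within the timeout, answer 408 and close the connection.

    # Bare-bones sketch: reply 408 if the request headers do not arrive in time.
    import socket

    READ_TIMEOUT = 30          # seconds the server is prepared to wait

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(5)

    while True:
        conn, _addr = srv.accept()
        conn.settimeout(READ_TIMEOUT)
        try:
            request = b""
            while b"\r\n\r\n" not in request:      # read until end of headers
                chunk = conn.recv(4096)
                if not chunk:                      # client closed the connection
                    break
                request += chunk
        except socket.timeout:
            # Client was too slow sending its request -> 408 Request Timeout
            conn.sendall(b"HTTP/1.1 408 Request Timeout\r\nConnection: close\r\n\r\n")
            conn.close()
            continue
        # ... parse the request, do the work, and send the real response here ...
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
        conn.close()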

How to queue API requests with RabbitMQ

I would like to queue requests made by a mobile application that uses an API to send some data to the server.
The scenario for now is like this:
The mobile app sends a request with some data.
I need to get the data, validate it (a few DB queries), and save it to a few tables in the DB.
I need to return an OK response to the mobile app, or a Bad Request with a list of errors in case validation has failed.
Now, if I get 1,000 requests like this in 3 seconds, my server will collapse.
I would like to use RabbitMQ to queue those requests. But what should I do with the response? I cannot send OK right after RabbitMQ has received the message, because I don't know whether validation will pass. So should the mobile app wait until the RabbitMQ message has been properly consumed?
This could be a solution to your problem:
The client sends a request.
The server queues the request and generates a unique identifier for it, then sends a response containing the generated identifier with a 202 (Accepted) status code, which means the request has been queued or submitted on the server but there is no result yet.
The client subscribes to the generated identifier on a message broker.
After a queued request finishes on the server, the server publishes a response to the message broker under the request's generated identifier.
The client receives the published response on the identifier it subscribed to.
Tip: I use EMQTT for the message broker. Another option would be the RabbitMQ MQTT plugin.
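Here is a rough sketch of steps 4 and 5, using plain RabbitMQ (pika) with a topic exchange in place of an MQTT topic; the exchange name api.responses and the identifier format are assumptions. The worker publishes the finished result under the request's identifier, and the client binds a temporary queue to that identifier and waits.

    # Rough sketch of steps 4-5 with pika; exchange name and id format are
    # assumptions. With the RabbitMQ MQTT plugin the same idea maps to topics.
    import pika

    def publish_result(request_id, payload):
        # Worker side: called once the queued request has been processed.
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.exchange_declare(exchange="api.responses", exchange_type="topic")
        ch.basic_publish(exchange="api.responses", routing_key=request_id, body=payload)
        conn.close()

    def wait_for_result(request_id):
        # Client side: subscribe using the id returned with the 202 response.
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.exchange_declare(exchange="api.responses", exchange_type="topic")
        q = ch.queue_declare(queue="", exclusive=True).method.queue
        ch.queue_bind(exchange="api.responses", queue=q, routing_key=request_id)
        for _method, _props, body in ch.consume(queue=q, auto_ack=True):
            conn.close()
            return body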

Can the client send an HTTP request while it is receiving a response?

Can an HTTP client send a request while receiving an HTTP response?
For example, a client sends HTTP request A to the server. Then the server starts to send HTTP response A. Before the client has finished receiving response A, it sends an additional request B. Is this possible, and does it conform to the HTTP RFC?
I think the above scenario is different from pipelining. What I know about pipelining is the scenario where the client sends multiple requests A, B, C and the server then responds to A, B, C consecutively. In the above scenario, however, request B is issued while response A is still being received.
Thank you
With the same connection object, you must read the whole response before you can send a new request to the server, because the response provides access to the headers, return type, and entity body. If you send a new request before fully reading the response, the client may get confused by mismatched responses.
It also depends on which client library you are using; a library may allow asynchronous requests. There are concepts like AsyncTask in Android, promises in AngularJS, etc. that allow asynchronous requests.
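To make this concrete, here is a small sketch with Python's http.client (my choice of library; the answer above is library-agnostic): on one connection the client has to finish with response A before issuing request B, and overlapping the two requires a second connection or an asynchronous client.

    # Sketch with http.client: one connection, strictly request -> response -> next.
    import http.client

    conn = http.client.HTTPConnection("example.com")

    conn.request("GET", "/a")
    # conn.request("GET", "/b")   # would raise http.client.CannotSendRequest here

    resp_a = conn.getresponse()
    resp_a.read()                 # consume response A completely
    conn.request("GET", "/b")     # now the same connection can carry request B
    resp_b = conn.getresponse()
    resp_b.read()

    # To issue B while A is still arriving, open a separate connection, e.g.
    # from another thread or with an asynchronous client library.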

What is the difference between HTTP 408 and 504 errors?

These are both timeout errors, but who is timing out in a 408 vs. a 504?
From w3, 408 is defined as:
The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time.
...And 504 is:
The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g. HTTP, FTP, LDAP) or some other auxiliary server (e.g. DNS) it needed to access in attempting to complete the request.
So who is the 'client' in the 408 if not an intermediary server? If it's an actual end user, how does a server know to wait for their request before they have made it?
The client is the browser or client application. The server knows to wait for a request because it has accepted a connection, or already read part of the request, say a header or two.
The Amazon documentation says (http://docs.aws.amazon.com/en_en/elasticloadbalancing/latest/classic/ts-elb-error-message.html#ts-elb-errorcodes-http408):
Indicates that the client cancelled the request or failed to send a full request.
The Mozilla documentation says (https://developer.mozilla.org/en/docs/Web/HTTP/Status/408):
The HTTP 408 Request Timeout response status code means that the server would like to shut down this unused connection. It is sent on an idle connection by some servers, even without any previous request by the client.
