How can a client send a request and receive multiple responses? - http

I am trying to send a request from a client machine to server A and receive responses from servers A, B, and C. Once A receives the request from the client, it forwards it to servers B and C. After all the servers have learned about the client's request, each server processes it and responds to the client, so the client receives many responses. My question is: how can I send one request to one server and receive many responses for that single request? I know that plain HTTP does not work in this scenario. Are there any suggestions?
Note: I am using Golang.
Thanks.
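There is no answer recorded for this question, but one way to approximate "one request, many responses" over plain HTTP is to have server A relay the answers: A streams its own result plus B's and C's back in a single chunked response as they arrive, instead of B and C replying to the client directly (which HTTP cannot express). Below is a minimal Go sketch of server A; the peer URLs, the /ask path, and the q parameter are placeholders, not anything from the question.

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    // Placeholder addresses for servers B and C.
    var peers = []string{"http://server-b:8080/handle", "http://server-c:8080/handle"}

    func handler(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        results := make(chan string)
        // Forward the client's query to every peer concurrently.
        for _, p := range peers {
            go func(p string) {
                resp, err := http.Get(p + "?q=" + r.URL.Query().Get("q"))
                if err != nil {
                    results <- "error from " + p
                    return
                }
                defer resp.Body.Close()
                body, _ := io.ReadAll(resp.Body)
                results <- string(body)
            }(p)
        }
        // Stream A's own answer, then each peer's answer as it arrives.
        fmt.Fprintln(w, "A: processed locally")
        flusher.Flush()
        for range peers {
            fmt.Fprintln(w, <-results)
            flusher.Flush()
        }
    }

    func main() {
        http.HandleFunc("/ask", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The client reads the response body incrementally, treating each flushed line as one server's answer. If the servers really must reply to the client independently, you need something other than plain HTTP, e.g. WebSockets, server-sent events, or gRPC server streaming (see the gRPC thread below).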

Related

I need to send requests to RabbitMQ asynchronously

I designed a RabbitMQ system using the RPC pattern, with one client and one server (which calculates Fibonacci numbers).
My problem is this:
when I send two or more requests to the server, each request is processed only after the previous request is done.
Question: is this expected? I mean, why can't the requests be processed asynchronously?
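No answer is recorded for this thread, but a common cause is that the server consumes deliveries in a single loop, so each Fibonacci computation blocks the next. Here is a minimal Go sketch of one fix, assuming the github.com/rabbitmq/amqp091-go client and a queue named rpc_queue (both assumptions, not taken from the question): raise the prefetch count and handle each delivery in its own goroutine.

    package main

    import (
        "log"
        "strconv"

        amqp "github.com/rabbitmq/amqp091-go"
    )

    // Naive Fibonacci, standing in for whatever work the RPC server does.
    func fib(n int) int {
        if n < 2 {
            return n
        }
        return fib(n-1) + fib(n-2)
    }

    func main() {
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }
        // Prefetch > 1 lets the broker hand us several unacked messages at
        // once; with a single-threaded loop they would still run one by one.
        ch.Qos(10, 0, false)

        msgs, err := ch.Consume("rpc_queue", "", false, false, false, false, nil)
        if err != nil {
            log.Fatal(err)
        }
        for d := range msgs {
            d := d // copy for the goroutine (needed before Go 1.22)
            go func() {
                // Each request runs in its own goroutine, so a slow fib(40)
                // no longer blocks the requests queued behind it.
                n, _ := strconv.Atoi(string(d.Body))
                reply := strconv.Itoa(fib(n))
                ch.Publish("", d.ReplyTo, false, false, amqp.Publishing{
                    CorrelationId: d.CorrelationId,
                    Body:          []byte(reply),
                })
                d.Ack(false)
            }()
        }
    }

So the serial behavior is not RabbitMQ refusing to work asynchronously; it is the consumer processing one delivery at a time.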

What is the mechanism of gRPC server-side pushing?

While writing a service with gRPC, I'm trying to compare HTTP/2 with WebSocket in terms of their server-side push mechanisms.
I know that for WebSocket, the client sends a request with the Upgrade: WebSocket and Connection: Upgrade headers to the server and establishes a long-lived connection, after which the server can send data freely.
But for gRPC, since it runs on top of HTTP/2, the wiki page https://en.wikipedia.org/wiki/HTTP/2_Server_Push says the server would need to predict the potential requests the client would send, and send a PUSH_PROMISE frame as early as possible.
Here are my two questions:
Does it mean that the server would also need to receive a corresponding request from the client in response to this PUSH_PROMISE frame, to decide whether the client wants to accept or decline that push?
In gRPC, if I have server-side streaming, say the server sends a message every second, does that mean the server needs to send a PUSH_PROMISE to the client every second, or at least before every data frame it pushes to the client?
gRPC does not currently support/use PUSH_PROMISE.
Streaming RPCs in gRPC use HTTP/2 streams; the entire RPC is contained in a request/response in HTTP. The main difference is that HTTP/2 implementations generally allow such streams to be streaming and bidirectional (the client can send more in the request after reading part of the response), while in HTTP/1 that was hit-or-miss.
In gRPC the client will always initiate the RPC. But for server-streaming the server can then reply with multiple messages over time via the stream. This would be similar to the scenario you described with websockets.
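To make the server-streaming case concrete, here is a minimal Go sketch of such an RPC. The Ticker service, its Subscribe method, and the pb.* types are hypothetical stand-ins for whatever your .proto file generates; the point is that every message rides the one HTTP/2 stream the client opened, with no PUSH_PROMISE anywhere.

    package server

    import (
        "time"

        pb "example.com/ticker/proto" // hypothetical generated package
    )

    type tickerServer struct {
        pb.UnimplementedTickerServer
    }

    // Subscribe is a server-streaming RPC: the client initiates it once, and
    // the server then sends a message every second on the same open stream.
    func (s *tickerServer) Subscribe(req *pb.TickRequest, stream pb.Ticker_SubscribeServer) error {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for seq := int64(0); ; seq++ {
            select {
            case <-stream.Context().Done():
                return stream.Context().Err() // client cancelled or disconnected
            case <-ticker.C:
                // Each Send is just more DATA frames on the existing stream.
                if err := stream.Send(&pb.Tick{Seq: seq}); err != nil {
                    return err
                }
            }
        }
    }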

How does the HTTP protocol behind client/server communication work?

HTTP is client-server communication in which the client always initiates the connection and the server responds.
In client-server communication with HTTP 1.1, the following steps take place:
1. Client sends the request to the server.
2. Server sends the response to the client with the response message and the status code.
My question is: how is the data transfer handled by the protocol? I know HTTP is stateless and that it is an all-or-nothing mechanism, but how do you verify this? How does the handshake between server and client work?
For example: when the server sends the response back to the client, what happens if 50% of the data is sent and then the connection is lost? Will the client wait for the remaining 50% of the message, or will a new transfer start in which the server tries to send 100% of the message again? (In synchronous communication.)
HTTP relies on a TCP connection, so in your example, if 50% of the data is sent correctly but other packets (yes, you should think in terms of packets) are lost, the missing data will be sent again following the rules defined in the TCP protocol.
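To add to this: TCP retransmission only helps while the connection is alive. If the connection itself dies halfway through the response, HTTP does not resume the transfer; the client sees a read error on the body and has to issue a new request (optionally with a Range header, if the server advertises Accept-Ranges). A minimal Go sketch of how that failure surfaces, with the URL as a placeholder:

    package main

    import (
        "errors"
        "io"
        "log"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com/big-file")
        if err != nil {
            log.Fatal(err) // the connection failed before any response arrived
        }
        defer resp.Body.Close()

        buf, err := io.ReadAll(resp.Body)
        if errors.Is(err, io.ErrUnexpectedEOF) {
            // The connection died mid-body: we received some bytes, not all.
            // HTTP will not resume on its own; re-request, or send a Range
            // request for the missing tail if the server supports ranges.
            log.Printf("got %d bytes before the connection dropped", len(buf))
            return
        }
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("complete response: %d bytes", len(buf))
    }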

Can the client send an HTTP request while it is receiving the response?

Can the HTTP client send a request while receiving the HTTP response?
For example, a client sends HTTP request A to a server, and the server starts to send the HTTP response. Before the client finishes receiving response A, it sends an additional request B. Is this possible, and does it comply with the HTTP RFC?
I think the above scenario is different from pipelining. As I understand pipelining, the client sends multiple requests A, B, C and the server then responds to A, B, C consecutively. In the above scenario, however, request B is issued while response A is still being received.
Thank you
With the same connection object, you must read the whole response before you can send a new request to the server, because the response provides access to the request headers, return type, and entity body. If you send a new request before fully reading the response, the client may get confused by mismatched responses.
Beyond that, it depends on the client library you are using; the library may allow asynchronous requests. Concepts like AsyncTask in Android or promises in AngularJS allow asynchronous requests.
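For example, Go's net/http client does not pipeline requests on a single connection at all; the practical way to have request B in flight while response A is still arriving is to issue the requests concurrently and let the transport use separate connections (or multiplexed HTTP/2 streams). A minimal sketch, with placeholder URLs:

    package main

    import (
        "io"
        "log"
        "net/http"
        "sync"
    )

    func main() {
        urls := []string{ // placeholders
            "https://example.com/a",
            "https://example.com/b",
        }
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            // Each request runs concurrently; the transport picks a free
            // connection, or a separate stream when talking HTTP/2.
            go func(u string) {
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    log.Println(u, err)
                    return
                }
                defer resp.Body.Close()
                n, _ := io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
                log.Printf("%s: %d bytes", u, n)
            }(u)
        }
        wg.Wait()
    }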

Reconstruct HTTP browsing from pcap

I'm currently trying to automatically reconstruct an HTTP browsing session from nothing but a pcap (basically, this means matching an HTTP reply to the HTTP requests it triggers). Most of the time it works fine, but sometimes a certain URL, u, is present in the data of multiple HTTP replies.
For example, if the replies for u1 and u2 both contain u in their data, and the request for u happens after the request for u2, how can I decide whether the request for u was caused by u1 or by u2? Note that no request for u was made between u1 and u2.
Are there any fields in any network layer that I can use to make this match?
Thanks!
HTTP runs on top of TCP, which is connection-oriented, and you have access to the IP/TCP headers of the connection used for the HTTP request (client IP/port -> server IP/port).
HTTP is a request/response protocol: there is one response for each request.
So, simply look for an HTTP response immediately following the HTTP request on the same TCP connection (server IP/port -> client IP/port).
HTTP is stateless, and the connection may be closed between requests without affecting the overall browsing model (closing connections is the required behavior in HTTP 0.9, the default behavior in HTTP 1.0, and not the default behavior in HTTP 1.1+). So it is possible for an HTTP response to trigger subsequent requests on new connections, and you need to be ready to handle that.
The Connection header in the HTTP request will tell you whether the client is asking for the connection to remain open or not. The Connection header in the HTTP response will tell you whether the server actually closes the connection after sending the response. But even if the server leaves the connection open, there is no guarantee that the client will actually reuse the same connection for later requests to the same server (though it likely will, unless a timeout elapses between requests).
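A minimal Go sketch of the connection-matching part, using the github.com/google/gopacket library (the pcap filename and the crude request/response test are placeholders). Flow.FastHash is guaranteed to be equal for a flow and its reverse, so both directions of a TCP connection map to the same key:

    package main

    import (
        "bytes"
        "fmt"
        "log"

        "github.com/google/gopacket"
        "github.com/google/gopacket/pcap"
    )

    // Direction-insensitive key for one TCP connection.
    type connKey struct{ net, transport uint64 }

    func main() {
        handle, err := pcap.OpenOffline("capture.pcap")
        if err != nil {
            log.Fatal(err)
        }
        defer handle.Close()

        byConn := map[connKey][]gopacket.Packet{}
        src := gopacket.NewPacketSource(handle, handle.LinkType())
        for pkt := range src.Packets() {
            if pkt.NetworkLayer() == nil || pkt.TransportLayer() == nil {
                continue
            }
            k := connKey{
                net:       pkt.NetworkLayer().NetworkFlow().FastHash(),
                transport: pkt.TransportLayer().TransportFlow().FastHash(),
            }
            byConn[k] = append(byConn[k], pkt)
        }

        // Within one connection, packets are already in capture order, so a
        // response payload answers the nearest preceding request payload.
        for k, pkts := range byConn {
            for _, pkt := range pkts {
                app := pkt.ApplicationLayer()
                if app == nil {
                    continue // SYN/ACK/etc., no HTTP data
                }
                if bytes.HasPrefix(app.Payload(), []byte("HTTP/1.")) {
                    fmt.Printf("conn %v: response at %v\n", k, pkt.Metadata().Timestamp)
                } else {
                    // Crude: treat any other payload on the connection as
                    // (part of) a request.
                    fmt.Printf("conn %v: request at %v\n", k, pkt.Metadata().Timestamp)
                }
            }
        }
    }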
