I'm building a client that "talks" to an HTTP server. Now my client needs to download files simultaneously. Right now my client just opens a socket (actually an AsyncSocket) for every connection, but I was wondering whether I could do that with just one socket?
Thanks
Alex
You can have multiple requests on the same socket, but they must be handled sequentially. In HTTP this is called a persistent connection, and you can accomplish it using the keep-alive header (in HTTP/1.1, connections are persistent by default).
If you want to download two files at the same time, you'd need two separate connections.
Take a look at RFC 2616 section 8 "Connections".
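For illustration, here is a minimal sketch of reusing one persistent connection for two sequential downloads, using Python's http.client (host and paths are just examples):

import http.client

# One TCP connection; HTTP/1.1 connections are persistent by default.
conn = http.client.HTTPConnection("example.com")

conn.request("GET", "/file1")
body1 = conn.getresponse().read()  # must read the full response before the next request

conn.request("GET", "/file2")      # reuses the same socket
body2 = conn.getresponse().read()

conn.close()

Note that the second request cannot go out until the first response has been fully consumed; that is the sequential handling described above.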
I’m researching options to upgrade a legacy TCP socket protocol where either end can initiate messages/transactions and am intrigued by gRPC as an option.
My criteria are:
supports authentication
layer 7 (vs layer 4 for current implementation)
supports TLS
From what I’ve read so far, gRPC has all this. However, it’s not clear to me that it has peer-to-peer capabilities. The behavior I’m interested in is:
client can request info and send commands to server (supported)
server can send updates to client on its own initiative (supported)
#1 can happen while #2 is happening
It seems to me that defining a server stream for case #2 would work. It’d basically be a
// use case 2
rpc SubscribeToEvents(EventsSubscriptionRequest) returns (stream EventDescriptor);
But would I also be able to use case #1 while #2 was active?
// use case 1
rpc GetValue(ValueRequest) returns (ValueResponse);
Thanks in advance for your help and advice.
Assuming you use a multi-threaded implementation, it's possible to run both #1 and #2 concurrently.
However, you describe a client-server scenario, not peer-to-peer; the client must initiate both the #1 (unary) and #2 (streaming) RPCs.
True peer-to-peer would have each endpoint implement both client and server, such that either peer could initiate (unary and/or streaming) RPCs against the other.
You could have both sides be both client and server, as the other answer suggests. But if you are okay with having the client be the one to initiate the connection and the stream, you could use a single bidirectional stream between the two sides. Once the client initiates the stream, the client and server can each send messages on the stream whenever they want to.
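A minimal sketch of that single-stream approach, assuming a bidirectional RPC such as rpc Exchange(stream PeerMessage) returns (stream PeerMessage); and the usual protoc-generated Python modules (peer_pb2, peer_pb2_grpc, and the message/stub names here are hypothetical):

import queue
import threading

import grpc
import peer_pb2       # hypothetical generated code
import peer_pb2_grpc  # hypothetical generated code

channel = grpc.insecure_channel("localhost:50051")
stub = peer_pb2_grpc.PeerStub(channel)

outgoing = queue.Queue()  # any thread can enqueue a message to send

def request_iterator():
    while True:
        msg = outgoing.get()
        if msg is None:  # sentinel ends the client side of the stream
            return
        yield msg

# Opening the stream is client-initiated; after that, both sides may
# send whenever they want: the server pushes responses, the client enqueues.
responses = stub.Exchange(request_iterator())

def receive():
    for resp in responses:  # blocks until the server sends something
        print("server sent:", resp)

threading.Thread(target=receive, daemon=True).start()
outgoing.put(peer_pb2.PeerMessage(text="hello"))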
I am trying to write my own HTTP proxy server and I have a question about the protocol.
First, I would like to mention that I am using this page as a reference. I think it's accurate but it's also from 1998. If anyone has a better reference for me I would be grateful to them.
So basically I understand that the connection starts with a handshake. I receive a CONNECT request, proxy-authorization, etc. Then I connect to the host and port specified in the request's resource URI. Then I send a status line, ideally HTTP/1.1 200 Connection established, followed by some headers and a CRLF like normal.
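For concreteness, a rough sketch of that handshake step in Python (no proxy-authorization or error handling; the function name is made up):

import socket

def handle_connect(client_sock):
    # Read the request; the request line looks like "CONNECT host:port HTTP/1.1".
    request = client_sock.recv(4096).decode("ascii")
    host, port = request.split()[1].rsplit(":", 1)

    # Connect to the requested origin, then confirm the tunnel to the client.
    upstream = socket.create_connection((host, int(port)))
    client_sock.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
    return upstream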
Once this handshake is complete, my client and the host my client asked for are connected through my proxy server. I am supposed to tunnel data in both directions, which makes sense since I could be supporting any TCP-based protocol, including HTTPS or even WebSocket, over this HTTP-based proxy connection.
What doesn't make sense to me is how I know when to stop. If this proxy can really support any TCP-based protocol, then I don't understand how to know when the interaction is over. An HTTP message would be a simple two-step read-write, an HTTPS interaction would involve several such exchanges, and a WebSocket interaction would involve indefinitely many exchanges.
I'm not asking for a perfect solution. I would be happy with something pragmatic like a timeout, but I would like to know what standard best practices are in order to do this project as well as I can.
Thanks to everyone for any help.
Just copy data in both directions simultaneously until you read an end of stream. Then:
Shut down the opposite socket for writing and stop copying in that direction. That propagates the EOS to the peer.
If the socket you read EOS from was already shut down for writing (which you will have to remember), close both sockets.
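That logic might look like this in Python: one thread per direction, each copying until EOF and then shutting down the opposite side for writing; when both directions have finished, both sockets get closed:

import socket
import threading

def pipe(src, dst):
    # Copy bytes src -> dst until src signals end of stream.
    while True:
        data = src.recv(4096)
        if not data:
            dst.shutdown(socket.SHUT_WR)  # propagate the EOS to the peer
            return
        dst.sendall(data)

def tunnel(client_sock, upstream_sock):
    a = threading.Thread(target=pipe, args=(client_sock, upstream_sock))
    b = threading.Thread(target=pipe, args=(upstream_sock, client_sock))
    a.start(); b.start()
    a.join(); b.join()  # both directions have seen EOS
    client_sock.close()
    upstream_sock.close()

Joining both threads stands in for the "remember which side was already shut down" bookkeeping: once both copy loops have ended, it is safe to close both sockets.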
I am trying to create a web server of my own and I have several questions about how the web servers we use today work. My questions are:
After receiving an HTTP request from a client through port 80, does the server respond using the same port 80?
If yes, then while sending a large file, say a picture several MB in size, will the web server be unable to receive requests from other clients?
Is a computer port duplex or simplex? (Can it send and receive at the same time?)
If another port on the server side is used to send the response to the client, then (if TCP is used, which it generally is) another 3-way handshake will be done, which adds overhead...
Here is a good guide on what's going on with web servers: http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html. It's in C, but the concepts are all there. It will explain the whole client-server relationship as well as some implementation details.
I'll just give a high level on what's going on:
Usually what happens is that when your server gets a new request, it forks a child process to handle it; that way you are not bogged down by each request. When the request comes in, the child process is handed a new file descriptor to write to (again, this is all implementation detail).
So really you have one server waiting for requests, and for each request it receives it spawns a child process to deal with that request. I'm sure there are much easier languages to implement this stuff in than C (I had to write both a C and a Java server in my past), but C really gets you to understand the things that are going on, and I'm betting that is what you are looking for here.
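A minimal fork-per-connection sketch of that model in Python (Unix only; the port and response body are placeholders):

import os
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 8080))  # 8080 here; binding port 80 requires privileges
server.listen(5)

while True:
    conn, addr = server.accept()
    if os.fork() == 0:
        # Child: handle this one request, then exit.
        server.close()
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()
        os._exit(0)
    # Parent: close its copy of the connection and go back to accepting.
    conn.close()

A real server would also reap its children (e.g. by handling SIGCHLD), which is omitted here for brevity.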
Now there are a couple of things to think about:
How you want the web server to work; the guide explains the parent-child process model.
Whether you want to use TCP or UDP; there are differences in the way the payload gets delivered.
You don't have to listen on port 80; that's just the default for the web.
Hopefully the guide will help you.
Yes. The server sends the response using the TCP connection established by the client, so it also responds using the same port. The server can handle connections from multiple clients using the same port because TCP connections are identified by (local-ip, local-port, remote-ip, remote-port), so the server can even handle multiple connections from the same client provided that the source ports are different.
There are different techniques you can use to be able to serve multiple clients at the same time. These include
using multiple processes or threads: when one is busy serving a client the others can serve other clients.
using events: the server listens for events from the OS: when it can write a block of data to a connection it writes it, when a new client connects it accepts the connection, and so on (see the sketch below).
Frequently both approaches are combined.
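A sketch of the event-driven style using Python's selectors module: one thread watches all sockets and reacts as each becomes readable (the port and the canned response are placeholders):

import selectors
import socket

sel = selectors.DefaultSelector()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():  # wait until some socket is ready
        if key.fileobj is server:
            conn, _addr = server.accept()  # event: new client connected
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            conn = key.fileobj
            data = conn.recv(4096)  # event: client sent data
            if data:
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
            else:
                sel.unregister(conn)
                conn.close()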
A TCP connection is duplex: you can send and receive at the same time. The HTTP protocol is based on a simple request-response model though: at any given time only one party is "talking."
Is an HTTP endpoint supposed to respond to requests from a particular client in the order they are received?
What about when it doesn't make sense to, as in the case of requests handled by a cluster behind a proxy, or requests handled with NIO where one request finishes faster than another?
Is there a standard way of associating a unique id with each HTTP request to associate with the response? How is this handled in clients like HttpComponents HttpClient or curl?
The question comes down to the following case:
Suppose, I am downloading a file from a server and the request is not finished. Is a client capable of completing other requests on the same keep-alive connection?
Whenever a TCP connection is opened, the connection is recognized by the source and destination ports and IP addresses. So if I connect to www.google.com on destination port 80 (default for HTTP), I need a free source port which the OS will generate.
The reply of the web server is then sent to the source port (and IP). This is also how NAT works, remembering which source port belongs to which internal IP address (and vice versa for incoming connections).
As for your edit: no, a single HTTP connection can execute only one command (GET/POST/etc.) at a time. If you send another command while you are retrieving data from a previously issued command, the results may vary per client and server implementation. I guess that Apache, for example, will transmit the result of the second request after the data of the first request is sent.
I won't re-write CodeCaster's answer because it is very well worded.
In response to your edit: no, it is not. A single persistent HTTP connection can only be used for one request at a time, or things would get very confusing. Because HTTP does not define any form of request/response tracking mechanism, it simply would not be possible.
It should be noted that there are other protocols which use a similar message format (conforming to RFC 822) and which do allow for this (using mechanisms such as SIP's CSeq header), and it would be possible to implement this in a custom HTTP app, but HTTP does not define any standard mechanism for doing so, and therefore nothing can be done that could be assumed to work everywhere. It would also present a problem with the response for the second message: do you wait for the first response to finish before sending the second response, or try to pause the first response while you send the second? How would you communicate this in a way that guarantees messages won't become corrupted?
Note also that SIP (usually) operates over UDP, which does not guarantee packet ordering, making the CSeq system more of a necessity.
If you want to send a request to a server while another transaction is still in progress, you will need to create a new connection to the server, and hence a new TCP stream.
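For example, in Python, two connections driven by two threads let the transfers overlap (host and paths are just examples):

import threading
import http.client

def fetch(path, results, i):
    # Each thread gets its own connection, so the downloads can overlap.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", path)
    results[i] = conn.getresponse().read()
    conn.close()

results = [None, None]
threads = [threading.Thread(target=fetch, args=(p, results, i))
           for i, p in enumerate(["/big-file", "/small-file"])]
for t in threads:
    t.start()
for t in threads:
    t.join()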
Facebook did some research into this while they were building their CDN, and they concluded that you can efficiently have 2 or 3 open HTTP streams at any one time, but any more than that reduces overall transfer time because of the extra packet overhead cost. I would link to the blog entry if I could find the link...
I need a way to detect a missing response to a long running HTTP POST request. This problem arises when the network infrastructure (firewalls, proxies, unplugged cables, etc.) drops the response packets. The server may detect this failure, but the client cannot send additional bytes after the POST to probe the state of the TCP connection. The failure may be limited to a single TCP connection. For example I may be able to subsequently open a new TCP connection to the server.
I'm looking for a solution that still uses HTTP POST and does not change the duration of the server side processing.
Some solutions that I can think of are:
Provide a side channel interface to retrieve request & response history. If the history lists the response as having been sent (presumably resulting in a TCP error) but I have not yet received it within a reasonable time, I can generate a local error.
Use an X- header to request that the server deliver "spurious" 100 Continue provisional responses at regular intervals. If I fail to see an expected 100 Continue or a non-provisional response, I can generate a local error.
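As a baseline, the pragmatic timeout mentioned above can be done entirely on the client by bounding how long the read for the response may block; a sketch with Python's http.client (URL and limit are examples):

import socket
import http.client

conn = http.client.HTTPConnection("example.com", timeout=300)  # seconds
conn.request("POST", "/long-job", body=b"payload")
try:
    response = conn.getresponse()  # blocks until the status line arrives
except socket.timeout:
    # Nothing arrived within the window: treat the response as lost.
    conn.close()
    raise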
Is there a state of the art solution for this problem?
It sounds to me like you are using SOAP for something that would be much better done using a stateful connection or a server-side push technology.