Use timeouts in an HTTP server?

Should I use timeouts in an HTTP server implementation?
E.g., if I get a request and create an HTTP connection to listen for requests on a separate thread, should that thread use timeouts?
Currently I use timeouts only in production code, not in debug code, so that lockups in the server are easier to find.

As long as you adhere to the HTTP specification, I don't foresee problems.
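For reference, many HTTP server implementations expose exactly these knobs, so a slow or stalled client can't hold a connection (and its thread) open forever. As a minimal sketch of the idea, Go's net/http server has per-phase timeouts; the durations below are arbitrary placeholders, not recommendations:

    package main

    import (
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            io.WriteString(w, "hello\n")
        })

        srv := &http.Server{
            Addr:              ":8080",
            Handler:           mux,
            ReadHeaderTimeout: 5 * time.Second,  // limit time to read the request headers
            ReadTimeout:       10 * time.Second, // limit time to read the whole request
            WriteTimeout:      10 * time.Second, // limit time to write the response
            IdleTimeout:       60 * time.Second, // limit how long a keep-alive connection may sit idle
        }
        log.Fatal(srv.ListenAndServe())
    }

Separating the phases also accommodates the debug/production split from the question: in Go, setting a field to zero disables that timeout, so a debug build can simply leave them unset.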

Related

improbable-eng/grpc-web Response closed without headers

I have a server in Go using gRPC, and on the React client I'm using grpc-web with grpcwebproxy. I've been trying to connect my client to the server, but I constantly get error code 2 with the message: Response closed without headers. Has anybody else encountered this issue? I'm currently using the improbable-eng implementation of grpc-web.
You probably need to configure grpcwebproxy for CORS; see the docs.
grpcwebproxy may also have read/write timeouts that close your long-polling connection when they expire. This is relevant for all client-server streaming calls:
server_http_max_read_timeout – HTTP server config, max read duration (default 10s).
server_http_max_write_timeout – HTTP server config, max write duration (default 10s).
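If those defaults are too short for a long-lived streaming call, the timeouts can be raised when launching the proxy. A hedged sketch (assuming Go-style duration syntax; --backend_addr and the chosen durations are placeholders for your setup):

    grpcwebproxy \
      --backend_addr=localhost:9090 \
      --run_tls_server=false \
      --server_http_max_read_timeout=1h \
      --server_http_max_write_timeout=1h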

What is the best way to redirect network requests?

I've written my own HTTP server, but given certain criteria, I want to redirect some requests made to my server to another server running on the same machine. For example, I may want all requests to "/foo/*" to be handled by an Apache server I also have running. What is the best way to do this?
The only way I can think of is running Apache on a different port, making a completely new network request from my server to localhost:1234 (assuming Apache is running on port 1234) with the exact same request headers and body, and then having my server send the response back to the client.
That seems like a hacky, roundabout way of accomplishing this, though, and I'm sure this is a problem tackled by every major website. Is there a technology or protocol for doing this that I just haven't heard of?
Thanks a lot!
Edit: just to be clear, the client should make only one network request for all of this, rather than having my server return a 3xx response.
HTTP runs over TCP. The Apache server can't just send the required response to a client that hasn't asked for it: the client asked YOUR HTTP server for the data, so your server must be the one to send the response. The client is also probably behind a firewall, so the Apache server couldn't even establish a TCP connection to it (incoming connections are usually blocked).
If your server takes the client's request, forwards it to the Apache server, gets the response from the Apache server, and forwards it back to the client, it's acting as a proxy server (a middleman). That isn't redirection.
The only sensible way to do a true redirect would be to have the client make two network requests.
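If a proxy rather than a true redirect is acceptable, the middleman pattern described above is a standard one (it is what reverse proxies such as nginx or Apache's mod_proxy do). As a minimal sketch, assuming your own server were written in Go and Apache listens on localhost:1234 as in the question:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The Apache backend from the question.
        apache, err := url.Parse("http://localhost:1234")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(apache)

        mux := http.NewServeMux()
        // Requests under /foo/ are forwarded to Apache;
        // everything else is handled by this server directly.
        mux.Handle("/foo/", proxy)
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("handled locally\n"))
        })

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

The client still makes a single request; the forwarding hop is invisible to it.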

IBrowse and persistent connection per client process

I need to talk to a SOAP service from Erlang. The SOAP implementation is not the subject here; my problem is with HTTP requests on the client side.
I use IBrowse as the HTTP client. This SOAP service uses a specific authorization mechanism that ties an open session to a particular client connection (socket). So the client must use a single persistent connection (socket) to the server; if it tries to send a request via another socket (e.g., a connection from the pool), authorization fails.
I use IBrowse in this way:
Spawn connection process to server (ibrowse:spawn_worker_process/1)
Send request to server via spawned process with {max_sessions, 1} and {max_pipeline_size, 0}.
If I understand the docs right, this should use one socket for the server connection with pipelining disabled. I also send a Connection: Keep-Alive header and explicitly set the HTTP version to 1.0. But my connection is always closed after the response is received.
How can I use IBrowse (or another http-client) the way I described above?
I think you could do that with hackney by reusing a connection.
gun is also quite a nice HTTP client: easy to use and keeps connections open, though with a little less connection control.
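ibrowse specifics aside, the requirement itself (exactly one persistent socket, reused for every request, no pipelining) can be pinned down in most HTTP clients. A cross-language sketch in Go, with a placeholder URL, using standard http.Transport limits:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // Cap the client at one socket per host; keep it alive and reuse it.
        transport := &http.Transport{
            MaxConnsPerHost:     1, // never open a second socket to the same host
            MaxIdleConnsPerHost: 1, // keep that one socket alive between requests
        }
        client := &http.Client{Transport: transport}

        for i := 0; i < 3; i++ {
            resp, err := client.Get("http://localhost:8080/") // placeholder URL
            if err != nil {
                log.Fatal(err)
            }
            io.Copy(io.Discard, resp.Body) // drain the body so the socket can be reused
            resp.Body.Close()
            fmt.Println(resp.Status)
        }
    }

All three requests travel over the same TCP connection, which is what a session-per-socket authorization scheme needs.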

Anyone aware of a simple example of a Netty HTTP server which supports persistent HTTP connections?

Can anyone provide an example of a simple HTTP server implemented using Netty that supports persistent HTTP connections?
In other words, one that won't close the connection until the client closes it, and that can receive additional HTTP requests over the same connection.
This is exactly one of the things Netty's sample HTTP code demonstrates.

CGI to Handle Multiple Requests on a Persistent HTTP Connection

CGI programs typically get a single HTTP request.
HTTP 1.1 supports persistent HTTP connections, whereby multiple HTTP requests/responses are made without closing the connection.
Is there a way for a CGI program (or similar mechanism) to handle multiple HTTP requests/responses on the same connection?
I am using Apache httpd.
Keep-alives are one of the higher-level HTTP features that are dealt with wholly by the web server; they are out of scope for CGI applications themselves.
Accessing CGI scripts through Apache mod_cgi works with keep-alive for me. The browser re-uses the same TCP connection to fetch the page and then resources referred to by it, without the scripts in question having to do anything special.
If you mean you would like to have the same CGI process handle one request and then the next (instead of the process ending and a new one being spawned), then I'm afraid that's not possible. The web server will intercept keep-alives and make them look like single requests before your scripts can do anything about it. (If you want to do that to improve performance, consider a different gateway interface, such as FastCGI or language-specific options like WSGI.)
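To illustrate the FastCGI route: Go's standard library, for instance, ships net/http/fcgi, where a single long-lived process keeps handling requests that the web server hands it over one persistent channel. A minimal sketch (the handler is a placeholder):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "served %s by one long-lived process\n", r.URL.Path)
        })
        // With a nil listener, fcgi.Serve accepts FastCGI connections on
        // stdin -- the classic setup when spawned by a web server.
        if err := fcgi.Serve(nil, handler); err != nil {
            log.Fatal(err)
        }
    }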
SCGI sounds exactly like what you want. It is similar to FastCGI but simpler to implement (the S stands for Simple :)).
