How a web server sends a file to a client - HTTP

I am trying to create a web server that has some clients. The web server has some (unregistered) users who request files, and the web server should send the requested file back to them. My question is: how should the web server send back the file? I do not want to make it like an FTP server, so should I create a socket and send the file? What do other web servers do to send files?

The server will have to listen on some interface. Clients start the process by opening a socket, connecting to the server, and requesting some content. On the same connection, the server responds with the requested content or an error.
Clients (usually browsers) communicate with web servers using HTTP. At http://www.ietf.org/rfc/rfc2616.txt you can find the description of the protocol (RFC 2616; it has since been superseded by newer HTTP RFCs, but it covers the basics). For basic stuff it is quite simple.
Not much changes whether the client asks for an HTML file (a web page) or some other file. In the header of the server's response (the first part sent), the client will find some information about the type of content, so that it knows how to display it. The header is followed by the actual data (the file, or program-generated data).
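As a sketch of the response layout described above, here is a minimal Python helper that builds such a response for an arbitrary file (the function name and the fixed HTTP/1.1 framing are my own illustration, not part of any particular server):

```python
import mimetypes


def build_http_response(path):
    """Build a minimal HTTP/1.1 response carrying the file at `path`.

    The response is: a status line, a few headers, an empty line
    (CRLF CRLF), and then the raw file bytes -- exactly what a browser
    expects on the same connection it sent its request on.
    """
    with open(path, "rb") as f:
        body = f.read()
    # Guess the Content-Type from the file extension; fall back to a
    # generic binary type when the extension is unknown.
    ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
    headers = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Type: {ctype}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"  # empty line: end of headers, start of file data
    )
    return headers.encode("ascii") + body
```

A real server would simply write these bytes back on the accepted socket, e.g. `conn.sendall(build_http_response(path))` inside its accept loop.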
Hope this helps

Related

How to act as a middleman server to add HTTP headers between client and remote server?

I have a server which acts as a middle man between an HTTP client that I don't control and a remote file hosting server I don't control. I want to expose a URL through which the client can download a chunk (specified by HTTP range headers my server provides) of a file on the remote server.
There are two important constraints here: I'd like to facilitate this partial download without having the response flow back through my server (response goes straight to client) and without writing a custom client. How can I accomplish this?
One option I tried was having my endpoint send a redirect response with the Range headers set on the response; unfortunately, those do not get forwarded on to the client's subsequent request, and as a result the entire file is downloaded. Are there any other hacky tricks / network wizardry I can employ to achieve this, given the constraints?
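For concreteness, the redirect attempt described above might look like the following sketch (the remote URL and the byte range are hypothetical); the final comment marks exactly where it breaks down:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REMOTE_URL = "https://files.example.com/big.bin"  # hypothetical remote host


class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The attempt: redirect the client to the remote file and set a
        # Range header on the redirect response itself.
        self.send_response(302)
        self.send_header("Location", REMOTE_URL)
        self.send_header("Range", "bytes=0-1048575")  # first 1 MiB
        self.end_headers()
        # Browsers do NOT copy response headers onto the follow-up
        # request, so the subsequent GET to REMOTE_URL carries no Range
        # header and the whole file is downloaded.
```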
I have also been thinking about this for five days. The remote server only gives you the file when you send the required header, and denies the request without it. If your middleman makes the GET request with the required header, the file becomes accessible to the client through your middleman; but you are trying to have the client get the file from the remote server directly, not from your custom server, which is what would be passing the headers along on the client's behalf.

How is file content sent to browser after HTTP response returns?

When initiating a file download from the browser, I send the server a request for the file. I know that the server then returns a response with a Content-Disposition of attachment, or a Content-Type of application/octet-stream. This lets the browser know that it should initiate a file download.
What I want to know is: how is the file data sent from the server to the client once the response has already been returned? Does it use a protocol other than HTTP? Is it always streamed from the server? Or is the full file content sent in the response, after which the browser just saves it onto the client machine without maintaining a connection to the remote server?
Is there a way to know when this process has finished from either the server or the client?
The content of the file being downloaded is included in the response itself, over the same HTTP connection. The response headers are followed by an empty line, and then comes the actual file data.
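A small self-contained sketch of that layout (the header values and the 10-byte body are made up for illustration):

```python
# A raw HTTP response as it travels over the wire: status line,
# headers, an empty line, then the file bytes.
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: application/octet-stream\r\n"
    b'Content-Disposition: attachment; filename="report.pdf"\r\n'
    b"Content-Length: 10\r\n"
    b"\r\n"
    b"%PDF-data!"  # 10 bytes of (fake) file content
)

# The empty line (CRLF CRLF) is what separates headers from file data.
header_block, _, body = raw.partition(b"\r\n\r\n")
headers = dict(
    line.split(b": ", 1)
    for line in header_block.split(b"\r\n")[1:]  # skip the status line
)
# The browser reads exactly Content-Length bytes, then the download is done.
assert len(body) == int(headers[b"Content-Length"])
```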

How can I configure nginx to send multiple POST requests over one connection?

I am developing an upload application.
I use Google Chrome to upload a big file (several GB) and use nginx to pass the file to my backend application.
Using Wireshark, I found that Chrome sends the file over one connection with multiple POST requests.
But nginx splits the POST requests and sends each one over a different connection to the backend application.
How can I configure nginx to send all the POST requests over one connection, rather than one connection per POST request?
Oh my god, it turned out to be embarrassingly simple!
The solution is just to enable nginx upstream keepalive.
To enable upstream keepalive:
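A minimal sketch of the relevant configuration, assuming a hypothetical backend on 127.0.0.1:8080 (the `keepalive`, `proxy_http_version`, and `proxy_set_header Connection` directives are the pieces that matter):

```nginx
upstream backend {
    server 127.0.0.1:8080;   # hypothetical backend address
    keepalive 16;            # keep up to 16 idle connections per worker
}

server {
    listen 80;

    location /upload {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "Connection: close"
    }
}
```

Without the last two directives, nginx speaks HTTP/1.0 with `Connection: close` to the upstream, so each proxied request gets its own connection even when `keepalive` is set.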

HTTPS key negotiation and tunneling over HTTP using Javascript

HTTPS is widely used for security online. It offers confidentiality and integrity, but not, by itself, authentication. To ensure the client is not talking to a man-in-the-middle, we have digital certificates and the PKI. It all works very well, except in situations where the following criteria apply:
The server and client do not share a common, trusted root CA, so they cannot validate each other's certificates
Circumstances (e.g. firewall, permissions) do not permit the use of the regular HTTPS protocol
The question is: can we still send secure, authenticated messages between the client and server, perhaps using Javascript?
Something along the lines of:
Client sends regular HTTP request to server
Server responds with page containing Javascript code
Client's Javascript asynchronously sends the server the data needed to negotiate the tunnel
Server runs some sort of script (e.g. PHP) to establish the tunnel
Client and server communicate over the encrypted tunnel
I can see it being possible to send messages with security and integrity in this manner, but is it possible to authenticate without making use of the PKI, perhaps by exploiting the fact that the server can dynamically rewrite the Javascript sent to the client?
There is an issue in your step 2 (Server responds with page containing Javascript code):
how do you know someone sitting on the wire is not modifying this Javascript, since it is being transferred in plaintext? Basically, when X wants to authenticate Y, X must already know something about Y: it could be public information, such as a public key or certificate, or a shared secret that it can verify.

Can I whitelist a domain for unencrypted traffic from a page served over HTTPS?

I've got an internal web application that's designed to work in concert with a server running locally on the client machine. (For the curious: the local server is used to decrypt data retrieved from the server using the client machine's GPG key.)
The internal web app is served over HTTPS while the local app is accessible via localhost. It used to be that I could make unencrypted AJAX requests from the page to localhost without any issues; but it seems that recently Chrome was updated to disallow HTTP requests to any destination from pages served over HTTPS.
I understand that in the vast majority of cases, HTTP requests from a page served via HTTPS constitute a security hole. However, since I have complete control over the endpoint in this case (i.e., localhost), it seems to me that it should still be perfectly safe to make HTTP requests to that one destination even when the host page has been served via HTTPS.
Is this possible? To whitelist localhost somehow?
Since you are in control of both the client and the server, this sounds like a good candidate for Cross-Origin Resource Sharing (CORS). The local server will have to set a few response headers to grant access to the page. You can learn more here: http://www.html5rocks.com/en/tutorials/cors/
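As a sketch of that suggestion, the localhost service might grant the page access like this (the allowed origin is hypothetical, and Python's standard http.server stands in for whatever the local decryption service actually runs):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical origin of the internal HTTPS page that may call us.
ALLOWED_ORIGIN = "https://internal.example.com"


class CORSHandler(BaseHTTPRequestHandler):
    """Localhost service that opts in to cross-origin requests via CORS."""

    def _send_cors_headers(self):
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # Preflight request sent by the browser before non-simple requests.
        self.send_response(204)
        self._send_cors_headers()
        self.end_headers()

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self._send_cors_headers()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Note that CORS governs which origins may read the response; whether the browser permits the HTTP-from-HTTPS (mixed content) request at all is a separate browser policy.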