Do the big guys (Google, Microsoft, etc.) remember all HTTP clients and, more importantly, the User-Agents that connected to them?
If so, should you implement this as a startup? (make your server remember the clients)
I'm not asking for advice, only whether it's practical, or whether there's some protocol somewhere that requires it. What's the standard, not your opinion?
The standard is: if there is data you need then you collect and store it. If you don't need the data then don't bother.
That information is in the request header sent by the browser. Anything the browser sends to the server can be collected, processed, stored, and so on.
There is no protocol that requires it and you do not have to store this information.
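If you do decide you want it, the User-Agent is trivial to capture, since it arrives as an ordinary request header. A minimal PHP sketch (the log file path is just a placeholder):

```php
<?php
// Minimal sketch, assuming a plain PHP front controller; requests.log is a
// placeholder. Every header the browser sends shows up in $_SERVER, so
// storing the User-Agent is essentially one line of logging.
$userAgent = $_SERVER['HTTP_USER_AGENT'] ?? 'unknown';
$line = sprintf("%s %s %s\n", date('c'), $_SERVER['REMOTE_ADDR'] ?? '-', $userAgent);
file_put_contents(__DIR__ . '/requests.log', $line, FILE_APPEND | LOCK_EX);
```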
High Level Description
Let's say I have a client program (an iOS app in my specific case) that should communicate with a server program running on a remote host. The system should work as follows:
The server has a set of indexed audio files and exposes them to the client using the indexes as identifiers
The client can query the server for an item with a given identifier and the server should stream its contents so the client can play it in real time
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents and the user should not be able to access the data.
From my perspective, this is a simple implementation of what Spotify does.
Technical Questions
How should the audio data be streamed between server and client? What protocols should be used? I'm aware that using something on top of TLS will protect the information from someone sniffing the traffic, however it won't protect it from the user himself if he has access to the encryption keys.
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents…
HTTPS is the best way for this.
…and the user should not be able to access the data.
That's not possible. Even if you had some sort of magic to prevent capture of decrypted data (which isn't possible), someone can always record the audio output, even digitally.
From my perspective, this is a simple implementation of what Spotify does.
Spotify doesn't do this. Nobody does, and nobody can. It's impossible. If the client must decode data, then you can't stop someone from modifying how that data gets decoded.
What you can do
Use HTTPS
Sign your URLs so that the raw media is only accessible for short periods of time. Everyone effectively gets their own URL to the media. (Check out how AWS S3 handles this, for an excellent example.) A sketch follows this list.
If you're really concerned, you can watermark your files on-the-fly, encoding an ID within them so that should someone leak the media, you can go after them based on their account data. This is expensive, so make sure you really have a business case for doing so.
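For the URL-signing point above, here is a rough PHP sketch of the idea: sign the track ID plus an expiry timestamp with an HMAC and reject anything that is expired or tampered with. The secret, the /stream route, and the parameter names are all assumptions, and real services such as S3 pre-signed URLs do considerably more than this:

```php
<?php
// Sketch of short-lived signed URLs; SECRET_KEY, the /stream route and the
// query parameter names are made up for illustration.
const SECRET_KEY = 'replace-with-a-long-random-secret';

function signedUrl(string $trackId, int $ttlSeconds = 300): string
{
    $expires   = time() + $ttlSeconds;
    $signature = hash_hmac('sha256', $trackId . '|' . $expires, SECRET_KEY);
    return "/stream/{$trackId}?expires={$expires}&sig={$signature}";
}

function verifySignedUrl(string $trackId, int $expires, string $sig): bool
{
    if ($expires < time()) {
        return false; // the link has expired
    }
    $expected = hash_hmac('sha256', $trackId . '|' . $expires, SECRET_KEY);
    return hash_equals($expected, $sig); // constant-time comparison
}
```

Because the signature covers the expiry time, a leaked URL stops working after a few minutes, and per-user data can be folded into the signed payload as well.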
It may seem to be a trivial question, but I'm still confused about it.
At almost every site I have read that HTTP persistent (keep-alive) connections are better than non-persistent ones.
Ques: So why do non-persistent connections even exist?
Some say that persistent connections have a disadvantage if the server is serving many clients, as other users are deprived of connections.
Ques: All the popular websites serve millions of clients; does that mean they don't use persistent mode?
As per my understanding, I would think search engines may not be using persistent connections.
Can someone please enlighten me on this topic?
Another doubt I have is regarding HTTP requests. I have read that if a page contains links to several objects, then the web browser makes that many requests to fetch all of them (this is why persistent connections are used). My doubt is: why aren't all the objects embedded in the page and sent as one object? If the argument is that it makes the page heavy and isn't bandwidth-friendly, then the browser opens parallel connections to fetch multiple objects anyway, which puts the same load on the network.
OK, I understand that this cannot be done for something like image search, but if a page contains only a few objects, can we embed them in the page and send them?
These may seem like foolish questions, but I can't help it. I have doubts and I need to clear them, and you can help.
Thanks
The original HTTP specification always uses non-persistent connections; HTTP/1.1 added persistence because it is more efficient for web pages that embed a lot of external objects (which were rare when HTTP/1.0 was written).
However, even though HTTP/1.1 makes persistent connections the default, there are implementations that don't support them, or that still only speak HTTP/1.0. For this reason, either side can send Connection: close to turn persistence off, while HTTP/1.0 clients and servers that do support it have to opt in explicitly by sending Connection: keep-alive.
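To see persistence from the client side, here is a small PHP sketch of my own (the URLs are placeholders): reusing a single cURL handle keeps the underlying TCP connection open across requests.

```php
<?php
// Sketch: reusing one cURL handle keeps the underlying TCP connection open
// (HTTP keep-alive) across several requests; example.com is a placeholder.
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

foreach (['/index.html', '/style.css', '/logo.png'] as $path) {
    curl_setopt($ch, CURLOPT_URL, 'https://example.com' . $path);
    curl_exec($ch);
    // CURLINFO_NUM_CONNECTS reports how many NEW connections the last
    // transfer needed: 1 for the first request, 0 once the socket is reused.
    echo $path, ': new connections = ', curl_getinfo($ch, CURLINFO_NUM_CONNECTS), PHP_EOL;
}
curl_close($ch);
```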
It is possible to include media directly in the HTML by base64 encoding the data and including it in a data: URL. This is not usually done because it slows down your web browser. With a standard HTML page, the browser can start rendering the structure of the page without waiting for the (rather large) inline data: links to download.
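As a quick PHP sketch of what such inlining looks like (the file name is a placeholder):

```php
<?php
// Sketch: inlining a small image straight into the HTML via a data: URL,
// instead of letting the browser fetch it with a separate request.
// logo.png is a placeholder file name.
$bytes   = file_get_contents(__DIR__ . '/logo.png');
$dataUri = 'data:image/png;base64,' . base64_encode($bytes);
echo '<img src="' . $dataUri . '" alt="logo">';
```

The base64 encoding also inflates the payload by roughly a third, which is another reason it is only worth doing for very small assets.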
As you say, most web pages hosted on the internet do not handle only a small amount of data, and nobody can estimate that in advance. An HTTP server has to be generic, and it needs a mechanism to avoid opening multiple connections just to fetch a page's dependencies. You say that the non-persistent method avoids a single client blocking a port for a long time while the server has to serve many other clients, and that persistence would therefore put a lot of stress on the server; that is not true. Persistent connections actually reduce the load on a server by limiting the number of connections it has to set up and tear down.
Hope this HTTP persistent connection article helps you understand.
I am working on a C# mobile application that requires major interaction with a PHP web server. However, the application also needs to support an "offline mode" as connection will be over a cellular network. This network may drop requests at random times. The problem that I have experienced with previous "Offline Mode" applications is that when a request results in a Timeout, the server may or may not have already processed that request. In cases where sending the request more than once would create a duplicate, this is a problem. I was walking through this and came up with the following idea.
Mobile sets a header value such as UniqueRequestID: 1 to be sent with the request.
Upon receiving the request, the PHP server adds the UniqueRequestID to the current user session: $_SESSION['RequestID'][] = $headers['UniqueRequestID'];
The server implements a GetRequestByID that returns true if the ID exists for the current session or false if not. Alternatively, this could return the cached result of the request.
This seems to be a somewhat reliable way of seeing whether a request successfully contacted the server. On mobile, upon reconnecting to the server, we check whether the request was received. If so, we skip that pending offline message and go to the next one.
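Here is roughly what I have in mind on the PHP side (only a sketch; the JSON responses and error handling are illustrative):

```php
<?php
// Rough sketch of the three steps above; the response shape is illustrative.
session_start();

// getallheaders() is available under Apache; adapt for other SAPIs.
$headers   = getallheaders();
$requestId = $headers['UniqueRequestID'] ?? null;

if ($requestId === null) {
    http_response_code(400);
    exit('Missing UniqueRequestID header');
}

$_SESSION['RequestID'] = $_SESSION['RequestID'] ?? [];

if (in_array($requestId, $_SESSION['RequestID'], true)) {
    // Seen before: report that, so the client can drop the pending retry.
    echo json_encode(['alreadyProcessed' => true]);
    exit;
}

// ... actually process the request here ...

$_SESSION['RequestID'][] = $requestId;
echo json_encode(['alreadyProcessed' => false]);
```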
Question
Have I reinvented the wheel here? Is this method prone to failure (or am I going down a rabbit hole)? Is there a better way / alternative?
- I was pitching this to other developers here, and we thought it seemed simple enough that this "system" likely already exists somewhere.
- Apologies if my Google skills are failing me today.
As you correctly stated, this problem is not new. There have been multiple attempts to solve it at different levels.
Transport level
The HTTP transport protocol itself does not provide any mechanism for reliable data transfer. One of the reasons is that HTTP is stateless and doesn't care much about previous requests and responses. There was an attempt by IBM to make a reliable transport protocol called HTTPR, which was based on HTTP, but it never got popular. You can read more about it here.
Messaging level
Most web services out there still use HTTP as a transport protocol with the SOAP messaging protocol on top of it. SOAP over HTTP is not sufficient when an application-level messaging protocol must also guarantee some level of reliability and security. This is why the WS-Reliability and WS-ReliableMessaging protocols were introduced. Those protocols allow SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures. At the same time they provide additional security. You can read more about those protocols here and here.
Your solution
I guess there is nothing wrong with your approach if you need a simple way to ensure that a message has not already been processed. I would recommend using a database instead of the session to store the processing result for each request. If you use $_SESSION['RequestID'][], you will run into trouble if the session is lost (the user is offline for some time, the server is restarted or has crashed, etc.). Also, if you use a database instead of the session, you can scale up more easily later on just by adding an extra web server.
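As a sketch of that database-backed variant (the table, columns, and connection details are assumptions, not a prescription):

```php
<?php
// Assumed schema, roughly:
//   CREATE TABLE processed_requests (
//       request_id VARCHAR(64) NOT NULL,
//       user_id    INT         NOT NULL,
//       result     TEXT,
//       created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
//       PRIMARY KEY (user_id, request_id)
//   );
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

// Returns the cached result if this request was already handled, or null.
function findProcessed(PDO $pdo, int $userId, string $requestId): ?string
{
    $stmt = $pdo->prepare(
        'SELECT result FROM processed_requests WHERE user_id = ? AND request_id = ?'
    );
    $stmt->execute([$userId, $requestId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row ? $row['result'] : null;
}

// Records the result; the primary key makes a duplicate insert fail loudly
// instead of letting the same request be processed twice.
function markProcessed(PDO $pdo, int $userId, string $requestId, string $result): void
{
    $stmt = $pdo->prepare(
        'INSERT INTO processed_requests (user_id, request_id, result) VALUES (?, ?, ?)'
    );
    $stmt->execute([$userId, $requestId, $result]);
}
```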
In what way is HTTP inappropriate for E-mail? How (for example) does the statefulness of IMAP benefit client development?
What actually are the arguments for keeping them separate, other than historical and backwards-compatibility reasons?
SMTP, IMAP, and HTTP are specialized application-level protocols. If there was a generic application-level protocol which all of these could inherit from, you could usefully refactor things, but since that is not the case, wedging the other protocols into one of the existing protocols is hardly worth the effort, and would hardly simplify things.
As things are now, the history and backwards compatibility is not just a cultural heritage, it is also a long and complex process of defining application-specific features for each protocol. SMTP is store-and-forward, which introduces the need for audit headers (Received: et al.). IMAP was designed for concurrent access to a data store, which is what made it necessary to introduce state (who are you, where are you authorized to connect, which folder are you connected to, what have you already seen, read, or deleted). HTTP is fundamentally a pull protocol (pull down a web page) and the POST facility carries with it a lot of functionality specific to the CGI protocol and the overall content model of HTTP.
SMTP is a protocol that identifies the sender and the recipients to send individual mail messages, each mail server accepts (or not) mail to forward, eventually reaching the destination. HTTP is meant for anybody to connect to the server and look at (mostly the same) contents. They are quite fundamentally different, and so it makes a lot of sense to use different protocols.
I was watching many presentations about HTML5 WebSockets, where the server can push data to the client over an open connection without the client having to request it.
We don't need polling, etc.
And I am curious: why was HTTP designed as a "pull" rather than a full-duplex protocol in the first place? What were the reasons behind that kind of decision?
Because when HTTP was first designed, it was meant to be used to retrieve documents from a server, and the easiest way to do that is for the client to ask the server for a document and get it delivered as the response (or an error in case it does not exist). With a push protocol the server would need to keep client connections around for potentially a long time, creating more resource-management problems - remember, we are talking about the early 1990s here.
HTTP was designed for simply retrieving hypertext documents from a server. There was no reason to push anything to the client when the pages were just pure, static HTML without scripting capabilities.
Since there was no need at the time for pushing things back to the client, the protocol was kept simple.
HTTP is mainly a pull protocol: someone loads information on a Web server and users use HTTP to pull the information from the server at their convenience. In particular, the TCP connection is initiated by the machine that wants to receive the file.