I've read something that looks weird to me!
I was reading an article that said HTTP uses FTP to transfer files!
I want to know: is it true? If yes, how does it transfer files?
I mean, how can it tell whether something is a file that's transferable over FTP? For example, I can read a file with PHP and send it to the user, or just create a direct link to the file. In both cases the headers can be the same, but in the first case it's impossible to transfer it over FTP!
Edit: I'd really appreciate it if you could provide a good resource on this issue!
HTTP doesn't use FTP to transfer files. HTTP is a protocol in its own right (HyperText Transfer Protocol), as is FTP (File Transfer Protocol); both use TCP as their transport layer.
The protocol hierarchy is:
{http,ftp,xxx} -> {tcp,udp} -> ip
HTTP and FTP are on the same layer (the application layer).
Have a look at Internet_protocol_suite.
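To make that concrete, here is a minimal sketch in Python (example.com is just a placeholder host) showing that an HTTP request is nothing but lines of text written to a plain TCP socket; no FTP is involved anywhere:

    # HTTP rides directly on a TCP socket: write a request, read the reply.
    import socket

    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    # The first line of the reply is the HTTP status line, e.g. "HTTP/1.1 200 OK".
    print(response.split(b"\r\n")[0].decode())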
Yeah, HTTP and FTP both run on top of TCP and do not piggyback on one another.
No, HTTP doesn't use FTP for file transfer, but some HTTP client libraries like curl can handle both HTTP and FTP, and of course a web page can contain ftp://some.org/some/ftp.link links.
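As a rough illustration (both URLs are placeholders), Python's urllib behaves like curl here: it picks the protocol from the URL scheme, speaking HTTP for one URL and FTP for the other:

    # One client API, two protocols, selected by the URL scheme.
    from urllib.request import urlopen

    page = urlopen("http://example.com/").read()            # plain HTTP GET
    listing = urlopen("ftp://ftp.example.com/pub/").read()  # anonymous FTP

    print(len(page), len(listing))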
FTP is perhaps slightly faster, but it is more complex and uses two connections (one for data, one for control).
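That two-connection design is visible even through a high-level client; a minimal sketch with Python's ftplib (host and login are hypothetical):

    # The FTP object holds the control connection (commands and replies);
    # each transfer, such as retrlines(), opens a separate data connection.
    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:   # control connection, port 21
        ftp.login()                       # anonymous login
        ftp.retrlines("LIST")             # listing arrives over a second,
                                          # per-transfer data connection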
There are many resources (and even books) on HTTP and FTP. I found Shiflett's HTTP Developer's Handbook good, but there are many, many others. Go to a library to find them.
Related
I understand that HTTP is a protocol that allows information to be transferred between a client and a server. At the moment, this protocol is used everywhere: when we open a web page, download music, videos, applications...
MDN
HTTP is a protocol for fetching resources such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance, text, layout description, images, videos, scripts, and more.
But it's not entirely clear to me what exactly HTTP does during this information transfer. If, as I read, a protocol is essentially a set of rules, does that mean HTTP just sets up rules for passing information between server and client? If so, what are these rules and what are they for?
Hypertext Transfer Protocol is a communications protocol. It is used to send and receive webpages and files on the internet. Its development was coordinated by the W3C and the IETF. HTTP version 1.1 is the most commonly used version.
HTTP works by using a user agent to connect to a server. The user agent could be a web browser or a spider. The server must be located using a URL or URI, which always starts with http://. The connection normally goes to port 80 on the server.
A more secure version of HTTP is called HTTPS (Hypertext Transfer Protocol Secure). This contains https:// at the beginning of the URL. It encrypts all the information that is sent and received. This can stop malicious users such as hackers from stealing the information and is often used on payment websites. HTTPS uses port 443 for communication instead of port 80.
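One way to see those "rules" in action is to make a request by hand and look at the pieces the protocol defines: a request line, headers, a blank line, then the body. A minimal sketch with Python's standard library (example.com is just a test host):

    # The protocol's "rules" show up as the structure of the exchange.
    from http.client import HTTPConnection

    conn = HTTPConnection("example.com", 80)   # TCP connection to port 80
    conn.request("GET", "/")                   # request line + headers
    resp = conn.getresponse()

    print(resp.status, resp.reason)            # status line, e.g. "200 OK"
    print(resp.getheader("Content-Type"))      # one of the response headers
    body = resp.read()                         # the body those headers describe
    conn.close()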
I recently bought an IP cam for a project. The project is just to create a button on a webpage that shows the camera's video feed when clicked. If I have to stream the camera's RTSP link via a browser, I need to use ffmpeg to convert it into HLS. But when I use the camera's HTTP video link, it's easy and convenient. So my question is: what advantage does RTSP have over HTTP, and which method should I choose in an industrial project? At the moment I have successfully implemented the button with the HTTP video link and it works. I was just curious to know what advantage I would gain by using RTSP. Thanks a lot for your precious time.
It depends on the network environment you are dealing with. Using DASH/HLS will certainly result in higher latency, but on the other hand, streaming over TCP makes it easier to get through firewalls.
Apple's reasoning for introducing RTSP over HTTP:
Using standard RTSP/RTP it is possible to stream a presentation to a user via a single TCP connection. (See RFC 2326, "Real Time Streaming Protocol (RTSP)", section 10.12.) Unfortunately, that is not sufficient to reach a significant population of Internet users. These users are typically on private IP networks where the client machines have indirect access to the public Internet via email and HTTP proxies.

The QuickTime HTTP transport exploits the capability of HTTP GET and POST methods to carry an indefinite amount of data in their reply and message body, respectively. In the simplest case, the client makes an HTTP GET request to the streaming server to open the server-to-client channel. Then the client makes a POST request to the server to open the client-to-server channel.
Link
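Roughly, in Python (host, path, and payload are hypothetical; the header names follow the QuickTime tunnelling convention the quote describes, with a session cookie tying the two connections together):

    # Two plain HTTP connections emulate one RTSP session through a proxy.
    import base64
    import socket
    import uuid

    HOST = ("camera.example.com", 80)     # hypothetical streaming server
    cookie = uuid.uuid4().hex.encode()    # correlates the two connections

    # Connection 1: GET opens the server-to-client channel; the "reply" is
    # an indefinitely long stream of tunnelled RTSP/RTP data.
    rx = socket.create_connection(HOST)
    rx.sendall(b"GET /stream HTTP/1.0\r\n"
               b"x-sessioncookie: " + cookie + b"\r\n"
               b"Accept: application/x-rtsp-tunnelled\r\n\r\n")

    # Connection 2: POST opens the client-to-server channel; RTSP commands
    # travel base64-encoded in its open-ended message body.
    tx = socket.create_connection(HOST)
    tx.sendall(b"POST /stream HTTP/1.0\r\n"
               b"x-sessioncookie: " + cookie + b"\r\n"
               b"Content-Type: application/x-rtsp-tunnelled\r\n"
               b"Content-Length: 32767\r\n\r\n")
    tx.sendall(base64.b64encode(
        b"OPTIONS rtsp://camera.example.com/stream RTSP/1.0\r\nCSeq: 1\r\n\r\n"))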
Is there any way to send HTTP headers on an ftp:// URL? How would I go about it?
What I want is for HTTP-based crawlers to see an HTML response (in the headers) while human users see the pure FTP content.
What would be the smartest way to solve this problem? I thought about user-agent-specific redirection, but this seems to be against most search engines' guidelines.
What I want is for bots to index an HTTP version of the content, while normal users get access to the FTP version, all via a single ftp:// URL.
Is this doable?
It's NOT doable.
You cannot redirect an ftp:// URL to an http:// URL. The FTP protocol has no redirects; it does not even know what a URL is. Nor does a web browser (acting as an FTP client) send a "user agent" or anything similar to the FTP server.
There are also no headers in the FTP protocol (but that's just a technicality compared to the fact above).
The FTP protocol is completely different from HTTP.
You are obviously confused by web browsers (all of them) presenting an FTP resource the same way as an HTTP resource. But that's just a "game" the browsers play. The two protocols are nowhere near similar.
Note that FTP was invented eons before the Internet, HTTP, the web, and URLs.
Though note that you may be able to do it the other way around: you should be able to redirect HTTP to FTP (sketched below).
But I still don't think it's a good idea. If the clients need FTP, they probably want to use a real FTP client, not a web browser. And a real FTP client won't understand HTTP redirects.
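For completeness, the reverse direction is easy in any HTTP server, because HTTP does have redirects and nothing stops the Location header from pointing at an ftp:// URL. A minimal sketch in Python (host and port are hypothetical):

    # An HTTP server answering every GET with a redirect to an FTP URL.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToFtp(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(302)        # "Found" (temporary redirect)
            self.send_header("Location", "ftp://some.org" + self.path)
            self.end_headers()

    HTTPServer(("", 8080), RedirectToFtp).serve_forever()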
But this question is kind of meaningless now that all major web browsers are gradually removing support for FTP anyway.
I need to ask a question about the HTTP protocol. I am trying to develop a sandbox (web browser) where anyone can surf websites with different identities. A different identity means that each request for a page will come from a different IP address.
Now, I don't know how scripts on web servers check the IP address of whoever generated the request. I am aware that this is possible. But I need to know whether it's an HTTP request header that carries the IP address, or something else.
Simply speaking, I want to fool the websites. :)
Umair
Uh, the IP address is provided EVERY time you connect to ANYTHING. It has nothing to do with HTTP headers.
See IPv4 -> packet structure -> header
You need to read up on the layers that build up a network, from the wires[1] to the application. I think you'll find that the IP address is known long before HTTP gets involved.
See http://en.wikipedia.org/wiki/OSI_model
[1] or photons, or radio waves, or smoke signals...
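To see where a script actually gets the address, here is a minimal CGI sketch in Python: the web server fills REMOTE_ADDR from the peer address of the TCP connection itself, not from anything in the HTTP request:

    #!/usr/bin/env python3
    # The client's IP comes from the TCP connection, not from an HTTP
    # header, so it cannot be changed by editing the request.
    # (X-Forwarded-For, added by proxies, is the one header-based
    # exception, and it is trusted only from known proxies precisely
    # because anyone can fake it.)
    import os

    print("Content-Type: text/plain")
    print()                               # blank line ends the CGI headers
    print("Your IP:", os.environ.get("REMOTE_ADDR", "unknown"))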
CGI programs typically get a single HTTP request.
HTTP 1.1 supports persistent HTTP connections, whereby multiple HTTP requests/responses are made without closing the connection.
Is there a way for a CGI program (or similar mechanism) to handle multiple HTTP requests/responses on the same connection?
I am using Apache httpd.
Keep-alives are one of the higher-level HTTP features that are wholly dealt with by the web server. They are out of scope for CGI applications themselves.
Accessing CGI scripts through Apache mod_cgi works with keep-alive for me. The browser re-uses the same TCP connection to fetch the page and then resources referred to by it, without the scripts in question having to do anything special.
If you mean you would like the same CGI process to handle one request and then the next (instead of the process ending and a new one being spawned), then I'm afraid that's not possible. The web server will intercept keep-alives and make them look like single requests before your scripts can do anything about them. (If you want to do that to improve performance, consider a different gateway interface, such as FastCGI, or language-specific options like WSGI; see the sketch below.)
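For instance, a minimal WSGI sketch using Python's built-in reference server (the counter is purely illustrative): one long-lived process serves many requests, which a plain CGI script can never rely on:

    # Unlike CGI, the process stays alive across requests, so state such
    # as this counter survives from one request to the next.
    import os
    from wsgiref.simple_server import make_server

    hits = 0

    def app(environ, start_response):
        global hits
        hits += 1
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"request #{hits}, served by PID {os.getpid()}\n".encode()]

    make_server("", 8000, app).serve_forever()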
SCGI sounds exactly like what you want. It is similar to FastCGI but a simpler solution to implement (the S stands for Simple :)).