What is the difference between HTTP and FTP?

I'm just learning about this and I have these ideas in mind, but I'm not quite sure about them. Are any of my findings wrong? Please explain it in simple terms.
HTTP:
used to view websites served by web servers
used to transfer web server files to another web server
used to transfer small files (in text form)
used to have access to the WWW
FTP:
used to access and transfer files between a local machine and a web server
used to transfer local files to a web server
used to transfer large files (in any form)
used to have access to a remote server

HTTP and FTP (note that there is a section on the Wikipedia page that illustrates the differences between HTTP and FTP) are both application layer protocols.
HTTP:
used for request/response communication between a server and a client
this communication can be used to both upload and download text and binary information
stateless
faster when transferring many small files
used for web pages, with or without authentication required
FTP:
also performs uploading and downloading of information
stateful control connection
faster for single large file transfers
authentication needed for file transfers
limited support for pipelining
The big difference is that HTTP fixes many of the issues incurred by FTP. For example, FTP has very little overhead but also no metadata, while HTTP provides metadata and supports the sending of multiple files. Also, HTTP is stateless.
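To make the contrast concrete, here is a minimal C# sketch (not from the original question or answers) that downloads the same hypothetical file once over HTTP and once over FTP. The URLs and credentials are placeholders; the point is that WebClient treats the HTTP transfer as a single stateless request/response, while the FTP transfer runs over a stateful, normally authenticated control connection.

using System;
using System.Net;

class ProtocolComparison
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // HTTP: one stateless request/response, no login unless the server demands it.
            client.DownloadFile("http://example.com/files/report.txt", "report-http.txt");

            // FTP: the same transfer goes through a stateful control connection
            // and normally requires authentication (anonymous or a real account).
            client.Credentials = new NetworkCredential("anonymous", "guest@example.com");
            client.DownloadFile("ftp://example.com/files/report.txt", "report-ftp.txt");
        }
        Console.WriteLine("Both downloads finished.");
    }
}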
Some extra sources I would advise you to read for more information on this are:
1. http://www.differencebetween.net/technology/difference-between-ftp-and-http/
2. https://daniel.haxx.se/docs/ftp-vs-http.html
Also for more information on the different types of FTP, I would advise looking at this Stack Overflow post.

Related

Accessing session folder resources in a multi-instance Opencpu

I have an OpenCPU server with my package on it. The package contains a function which generates a plot image. When I send a POST request to this function via OpenCPU, I get a session id in response, which in turn corresponds to a folder on the OpenCPU server containing the session resources, the image being one of them. I pass this image URL (served by the OpenCPU server) on to my application, which uses it to create a PDF report.
Now I have scaled this whole setup by creating multiple instances of OpenCPU containing my package, with the instances behind a load balancer. When I do the same thing, I get the image URL, but when my application uses it the image may not be found, because the request may have gone to another instance of OpenCPU.
How can I approach a solution to this problem? One thing I have done for now is to upload the image to a public instance and return the corresponding path to the application, but that is too tightly coupled.
Thanks.
Load balancing is always a bit complicated, so if possible, it is easier to just move to a larger server. Most (cloud) providers offer (virtual) instances with many cores and 100GB+ RAM, which will allow you to support many users.
If you really need load balancing, there are a few methods.
One approach is to map the ocpu-store directory on the ocpu servers to a shared NFS server. By default OpenCPU stores all sessions in the /tmp/ocpu-store directory on the server. You can set a different location by setting the tmpdir option in your /etc/opencpu/server.conf. There is an example configuration file that sets tmpdir in /etc/opencpu/server.conf.d/ec2.conf.disabled on your server (rename it to activate it).
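For example, assuming the JSON format of the stock server.conf, pointing tmpdir at a shared NFS mount could look like the sketch below (the mount path is just a placeholder):

{
  "tmpdir": "/mnt/nfs/ocpu-store"
}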
If you don't want to set up an NFS server, a simpler approach is to configure your load balancer to always send particular clients to a particular backend. For example, if you use nginx you can set the load balancing method to ip_hash.
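A minimal nginx sketch of that idea (the upstream name and backend hostnames are placeholders, not taken from the question):

upstream ocpu_backends {
    ip_hash;
    server ocpu1.example.com;
    server ocpu2.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://ocpu_backends;
    }
}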
Obviously this method requires that clients do not change IP address during the session, and it will only be effective if you have clients connecting from a variety of IP addresses.

How does an NGINX reverse proxy handle image/video file upload and download?

First off, I'll explain my situation. I'm building a server for storing and retrieving data for my phone application, and I'm new to NGINX. I understand that the point of load balancing/reverse proxying is to improve performance and reliability by distributing the workload across multiple servers, but I don't understand how that works with image/video files. Let's say my NGINX config file is as follows:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}
I have a few questions here.
First, when I upload an image/video file, do I need to upload it to all of those backend servers, or is there another way?
Second, if I just save it to a separate server which only stores images, and when an image or video file is requested I proxy the download to that specific server, then what is the point of load balancing image/video files, given that the reverse proxy is supposed to improve performance and reliability by distributing the workload across multiple servers?
Third, is Amazon S3 really better for storing files? Is it cheaper?
I'm looking for a solution I can run on my own servers, rather than using third parties.
Thanks for any help!
You can either use shared storage (e.g. NFS), upload to both servers, or use a strategy that distributes files between servers, storing each file on a single server.
The first two options are logically the same and provide failover, hence improving reliability.
The third option, as you note, does not improve reliability (well, maybe somewhat: if one server fails, the second may still serve some files). It can improve performance, though, if you have many concurrent requests for different files and distribute them evenly between the servers. This is not achieved through nginx load balancing, but rather by redirecting to different servers based on the request (e.g. file name or key).
For the shared storage solution you can use, for example, NFS. There are many resources going into deeper detail, for example https://unix.stackexchange.com/questions/114699/nfs-automatic-fail-over-or-load-balanced-or-clustering
For the duplicate upload solution, you can either send the file twice from the client or duplicate it server side with some code. The server-side solution has the benefit of a single file transfer from the client, with the copy to the second server travelling only over the fast internal network. In a simple case this can be achieved, for example, by receiving the file in a servlet, storing the incoming data to disk and simultaneously uploading it to another servlet on the second server over HTTP or another protocol.
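The answer above describes this with servlets; purely as an illustration, here is a rough sketch of the same idea as a C# ASP.NET generic handler (the hostnames, paths and field names are made up for this sketch):

// Hypothetical sketch: accept an upload, save it locally, and mirror it
// to a second server during the same request.
using System.IO;
using System.Net;
using System.Web;

public class MirroredUploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        HttpPostedFile upload = context.Request.Files["file"];
        if (upload == null)
        {
            context.Response.StatusCode = 400;
            return;
        }

        // 1. Store the incoming file on this server's disk (placeholder path).
        string localPath = Path.Combine(@"D:\uploads", Path.GetFileName(upload.FileName));
        upload.SaveAs(localPath);

        // 2. Forward the same file to the second server over the internal network.
        using (var client = new WebClient())
        {
            client.UploadFile("http://backend2.internal/upload.ashx", localPath);
        }

        context.Response.Write("stored and mirrored");
    }

    public bool IsReusable { get { return false; } }
}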
Note that setting up any of these options correctly can involve quite significant effort, testing and maintenance.
This is where S3 comes in: ready-to-use distributed/shared storage, with a simple API, integrations, clients and a reasonable price. For a simple solution it is usually not cheaper in terms of dollars per unit of storage, but it is much cheaper in terms of R&D. It also has the option to serve files and content over HTTP (balanced, reliable and distributed), so you can either have clients download files directly from the S3 hosts or issue permanent or temporary redirects to S3 from your own HTTP servers.

Google hosted scripts: use HTTP or HTTPS?

I am using Google-hosted JavaScript libraries (jQuery, jQuery UI and other Google JSAPI libraries), and I noticed that these scripts can be accessed via both the http and https schemes. I want to know what the effects of using the http or https scheme to access these Google-hosted scripts are. My project is just an ordinary website using http as the default scheme, so which should I use, http or https? Is there any performance difference between the two?
HTTPS does affect performance negatively, as encryption and security negotiation aren't trivial tasks. In the vast majority of cases, though, this performance cost is not significant enough to outweigh its benefits.
Remember that SSL also secures the identity of the web server and not just the channel.
If a "man-in-the-middle" spoofed the address of your script's location (for instance), https would prevent you from unknowingly executing unintended scripts. http would not.
Check this out: HTTP vs HTTPS performance
The performance issue is rather small considering today's hardware and internet bandwidth. Personally, I try to use the same protocol for all data used by one page (or iframe/frame), meaning scripts, CSS, images etc.
Data transferred over SSL will not be cached by the visitor's browser; instead it will be downloaded each time a page is loaded.
Using SSL/HTTPS is recommended if a page contains sensitive or personal data, or offers interactions like contact forms. Buying and installing an SSL certificate is justified in those cases.
Google Analytics, for example, first checks which protocol your page uses, then uses the same protocol for downloading its scripts.
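One common way to keep the protocols matched, as a sketch rather than something from the answers above, is to reference the Google-hosted library with a protocol-relative URL, so the browser reuses whatever scheme the page itself was loaded with:

<!-- Loads over http:// on an http page and over https:// on an https page.
     The jQuery version number here is only an example. -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>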

Which protocol is used when downloading files from Rapidshare, HTTP or FTP?

When I download something from a file sharing web site (Rapidshare, FileSonic etc.), the browser first connects to it over HTTP, but when the actual file download happens, does the underlying protocol change to FTP?
The download protocol is still HTTP (at least with Rapidshare).

Sending files from server to client with ASP.NET

I am developing a C# ASP.NET 4.0 application that will reside on a Windows Server 2003 machine. By accessing this application from a networked computer, any user would be able to upload files to the Windows server. But also, once these files are stored on the server, he/she should be able to copy them from the Windows server to another networked computer.
I have found a way to upload files to a specified location on the server disk, but now I need to send those files from the server disk to the client computers.
My question is: is there any way to send or copy files from the server to other client computers (not the one that is accessing the web service) without needing a program receiving those files on the client computers? FTP, WCF, cmd commands, sockets?
Any ideas?
If you want users of your web app to download files, I'd look into an "ashx generic handler". It will allow you to send files back down to clients over HTTP(S).
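A minimal sketch of such a handler; the answer only names the technique, so the ~/App_Data/uploads folder and the "name" query parameter used here are illustrative assumptions:

<%@ WebHandler Language="C#" Class="FileDownloadHandler" %>

using System.IO;
using System.Web;

// Streams a previously uploaded file back to the client over HTTP.
public class FileDownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Strip any path components so the client cannot escape the uploads folder.
        string fileName = Path.GetFileName(context.Request.QueryString["name"] ?? "");
        string fullPath = context.Server.MapPath("~/App_Data/uploads/" + fileName);

        if (fileName.Length == 0 || !File.Exists(fullPath))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.TransmitFile(fullPath);
    }

    public bool IsReusable { get { return false; } }
}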
If you are looking to have remote users tell your web server to copy files to other servers ON THE SAME LAN AS THE SERVER, you would write that using normal System.IO operations.
Over a LAN, if you have the correct permissions and so on, you can write to a disk on a different machine using File.Copy; there's nothing special about that.
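For instance, assuming the account running the application has write permission on a hypothetical \\CLIENTPC\Shared share, the copy really is a one-liner:

using System.IO;

class LanCopyExample
{
    static void Main()
    {
        // Copy a stored upload to a UNC share on another machine on the LAN.
        // Both paths are placeholders; the share must grant the account
        // running this code write access.
        File.Copy(@"D:\uploads\report.pdf", @"\\CLIENTPC\Shared\report.pdf", true);
    }
}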
If we're talking about remote machines over the internet, that's a different story. Something has to be listening, whether it's FTP, WCF, Dropbox, etc.
If the problem is that it can be painful to get something like WCF to work from a client due to problems like firewall issues under Windows 7, you could take a different route and have the client periodically ping the server looking for new content. To give the server a point of reference, the ping could contain the name or creation date of the most recent file received. The server could reply with a list of new files, and then the client could make several WCF calls, one by one, to pull the content down. This pattern keeps all the client traffic outbound.
You can, if you can run the program under an account that has access to that computer. However, having this sort of access on your network, which would let the outside world put an unfiltered file on your internal network, is just asking to be hacked.
Finally, I decided to install a FileZilla FTP server on each client computer, and my page is working very well. Another option is to create a workgroup on the Windows server and put every client computer in that workgroup, so that the Windows server has access to the computers in the same workgroup.
Here are some links that may help with creating the workgroups:
http://helpdeskgeek.com/networking/cannot-see-other-computers-on-network-in-my-network-places/
http://www.computing.net/answers/windows-2003/server-2003-workgroup-setup-/1004.html
