I would like to create an HttpModule that can serve a file that is on a different server.
Server A is publicly accessible and receives a request for file.txt. This file is on server B and Server A will serve it to the user. Server B is not publicly available.
What would be the highest performance method of accomplishing this with an HttpModule?
I could let Server A download the file from Server B and stream it back to the user, but that would consume a lot of resources on Server A.
Another option might be for Server A to serve the file directly from an open file share on Server B. This would probably require less memory on Server A, but the file would still need to be streamed from Server B to Server A.
I hope there is some way that the request can be redirected to Server B and the file returned directly from Server B to the client, possibly facilitated by Server A.
I cannot simply redirect to Server B as it's not directly available to the end user.
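Something like this rough sketch is what I have in mind for the first option; the internal address http://serverb.internal and the .txt filter are placeholders, not a definitive implementation:

using System;
using System.Net;
using System.Web;

// Sketch of an HttpModule on Server A that proxies file requests to Server B.
public class FileProxyModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext context = app.Context;
            if (!context.Request.Path.EndsWith(".txt", StringComparison.OrdinalIgnoreCase))
                return; // only proxy the file types we care about

            // Fetch from Server B and copy to the client in chunks, so the
            // whole file is never buffered in memory on Server A.
            var request = WebRequest.Create("http://serverb.internal" + context.Request.Path);
            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            {
                context.Response.ContentType = response.ContentType;
                stream.CopyTo(context.Response.OutputStream);
            }
            context.Response.End();
        };
    }

    public void Dispose() { }
}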
I would probably not invest in writing an HttpModule at all; instead, I would use the ARR (Application Request Routing) module to do the "proxying" for you in a way that is highly performant. You can also use its caching functionality; that way, if some files are "hot", they can be served directly by the "front-end" server without hitting the back-end server. Using its cache will be the fastest option, and since it can be pretty smart about caching, it will also be one of the most painless approaches.
It is optimized to handle thousands of requests per second, and it does that in an async way that lets it scale to huge numbers. It is used in many content-delivery-network-type situations, so you can count on it being really fast.
I'm just learning about this and I have these things in mind, but I'm not quite sure about them. Are any of my findings wrong? Please explain in simple terms.
HTTP:
used to view websites on web servers
transfers web server files to another web server
used to transfer small files (in text form)
used to access the WWW
FTP:
used to access and transfer files between a local machine and a web server
transfers local files to a web server
used to transfer large files (in any form)
used to access a remote server
HTTP and FTP (note that there is a section on the Wikipedia page that illustrates the differences between HTTP and FTP) are both application layer protocols.
See also here.
HTTP:
used to make request response communications between a server and a client
this communication can be used to both upload and download text and binary information
stateless
faster when transferring many small files
used for web pages with or without authentication required
FTP:
also performs uploading and downloading of information
stateful control connection
faster for single large file transfers
authentication is needed for file transfers
limited support for pipelining
The big difference is that HTTP fixes many of the issues found in FTP. For example, FTP has very little overhead but no metadata, while HTTP provides metadata, and so HTTP supports the sending of multiple files. Also, HTTP is stateless.
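To make the contrast concrete, here is a hedged C# sketch that fetches the same file once over HTTP and once over FTP; the host names, paths and credentials are placeholders:

using System;
using System.IO;
using System.Net;

class ProtocolComparison
{
    static void Main()
    {
        // HTTP: stateless request/response; one connection, no login step required.
        using (var http = new WebClient())
        {
            http.DownloadFile("http://www.example.com/files/readme.txt", "readme-http.txt");
        }

        // FTP: a stateful control connection plus a data connection, and
        // credentials are typically required before any transfer happens.
        var ftp = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/files/readme.txt");
        ftp.Method = WebRequestMethods.Ftp.DownloadFile;
        ftp.Credentials = new NetworkCredential("user", "password");
        using (var response = (FtpWebResponse)ftp.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var file = File.Create("readme-ftp.txt"))
        {
            stream.CopyTo(file);
        }
    }
}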
Some extra sources I would advise reading for more information on this are:
1. http://www.differencebetween.net/technology/difference-between-ftp-and-http/
2. https://daniel.haxx.se/docs/ftp-vs-http.html
Also for more information on the different types of FTP, I would advise looking at this Stack Overflow post.
First off, I'll explain my situation. I'm building a server for storing and retrieving data for my phone application, and I'm new to NGINX. As I understand it, the point of load balancing / reverse proxying is to improve performance and reliability by distributing the workload across multiple servers, but I don't understand how that works with image/video files. Let's say below is my NGINX config file:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
    }
}
I have a few questions here.
First, when I upload an image/video file, do I need to upload it to all of those backend servers, or is there another way?
Second, if I just save it to a separate server that only stores images, and when an image or video file is requested I proxy to that specific server, then what is the point of load balancing for image/video files, given that a reverse proxy is supposed to improve performance and reliability by distributing the workload across multiple servers?
Third, is Amazon S3 really better for storing files? Is it cheaper?
I'm looking for a solution that I can run on my own servers rather than using third parties.
Thanks for any help!
You can either use shared storage, e.g. NFS, or upload to both servers, or incorporate a strategy to distribute files between servers, storing each file on a single server.
The first two options are logically the same and provide failover, hence improving reliability.
The third option, as you note, does not improve reliability (well, perhaps somewhat: if one server fails, the second may still serve some files). It can improve performance, though, if you have many concurrent requests for different files and distribute them evenly between servers. This is not achieved through nginx load balancing, but rather by routing to different servers based on the request (e.g. by file name or key), as in the sketch below.
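A minimal C# sketch of that kind of key-based routing, with placeholder host names and a deliberately simple deterministic hash:

using System;

// Route each file to one server by hashing its name, so the same file
// always lives on (and is fetched from) the same host.
class FileRouter
{
    static readonly string[] Servers =
    {
        "http://files1.example.com",
        "http://files2.example.com",
        "http://files3.example.com",
    };

    static string ServerFor(string fileName)
    {
        // FNV-1a: a simple deterministic hash, stable across processes
        // (string.GetHashCode is not guaranteed to be).
        uint hash = 2166136261;
        foreach (char c in fileName)
            hash = (hash ^ c) * 16777619;
        return Servers[hash % (uint)Servers.Length];
    }

    static void Main()
    {
        Console.WriteLine(ServerFor("cat-video.mp4")); // always the same host
    }
}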
For the shared-storage solution, you can use, for example, NFS. There are many resources that go into deeper detail, for example https://unix.stackexchange.com/questions/114699/nfs-automatic-fail-over-or-load-balanced-or-clustering
For the duplicate-upload solution, you can either send the file twice from the client or do it server side with some code. The server-side solution has the benefit of a single file transfer from the client, with the copy to the second server happening only on the fast internal network. In a simple case this can be achieved, for example, by receiving the file in a servlet, storing the incoming data to disk, and simultaneously uploading it to another servlet on the second server over HTTP or another protocol.
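In rough C# terms (the same pattern the paragraph describes with servlets), a hedged sketch of such a receiving handler might look like this; the mirror URL http://backend2.internal/upload is a placeholder:

using System.IO;
using System.Net;
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // 1. Receive the upload and store it on this server's disk.
        HttpPostedFile upload = context.Request.Files["file"];
        string localPath = context.Server.MapPath(
            "~/App_Data/" + Path.GetFileName(upload.FileName));
        upload.SaveAs(localPath);

        // 2. Push the same file to the second server over the fast internal
        //    network, so the client only pays for one upload.
        using (var client = new WebClient())
        {
            client.UploadFile("http://backend2.internal/upload", localPath);
        }
    }

    public bool IsReusable { get { return false; } }
}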
Note that setting up any of these options correctly can involve quite significant effort, testing and maintenance.
Here comes S3: ready-to-use distributed/shared storage, with a simple API, integrations, clients and a reasonable price. For a simple solution it is usually not cheaper in terms of $ per storage, but it is much cheaper in terms of R&D. It also has the option to serve files and content over HTTP (balanced, reliable and distributed), so you can either download files in the client directly from S3 hosts or issue permanent or temporary redirects there from your HTTP servers.
I am developing a C# ASP.NET 4.0 application that will reside on a Windows Server 2003 machine. By accessing this application from a networked computer, any user would be able to upload files to the Windows server. But also, once these files are stored on the server, he/she should be able to copy them from the Windows server to another networked computer.
I have found a way to upload files to a specified location on the server disk, but now I need to send these files from the server disk to the client computers.
My question is: is there any way to send or copy files from the server to other client computers (not the one that is accessing the web service) without needing a program receiving those files on the client computers? FTP, WCF, cmd commands, sockets?
Any idea?
If you want users of your webapp to download files, I'd look into an "ashx generic handler." It will allow you to send files back down to clients over HTTP(S).
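A minimal .ashx sketch of that idea; the uploads folder and query-string parameter are assumptions:

<%@ WebHandler Language="C#" Class="FileDownloadHandler" %>

using System.IO;
using System.Web;

public class FileDownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Resolve the requested file inside a fixed uploads folder;
        // GetFileName strips any path traversal from the query value.
        string fileName = Path.GetFileName(context.Request.QueryString["name"]);
        string fullPath = context.Server.MapPath("~/App_Data/Uploads/" + fileName);

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.TransmitFile(fullPath); // lets IIS stream the file efficiently
    }

    public bool IsReusable { get { return true; } }
}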
If you are looking to have remote users tell your web server to copy files to other servers ON THE SAME LAN AS THE SERVER, you would write that using normal System.IO operations.
Over a LAN, if you have the correct permissions and so on, you can write to a disk on a different machine using File.Copy -- there's nothing special about that.
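For example, a hedged one-liner with a placeholder UNC path:

using System.IO;

class LanCopy
{
    static void Main()
    {
        // Assumes the account running this has write permission on the share.
        File.Copy(@"C:\Uploads\report.pdf",
                  @"\\CLIENT-PC\SharedDocs\report.pdf",
                  overwrite: true);
    }
}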
If we're talking about remote machines over the internet, that's a different story. Something has to be listening whether it's FTP, WCF, DropBox, etc.
If the problem is that it can be painful to get something like WCF to work from a client due to problems like firewall issues under Windows 7, you could take a different route and have the client periodically ping the server looking for new content. To give the server a point of reference, the ping could contain the name or creation date of the most recent file received. The server could reply with a list of new files, and then the client could make several WCF calls, one by one, to pull the content down. This pattern keeps all the client traffic outbound.
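A hedged sketch of that polling pattern, using plain HTTP rather than WCF to keep it short; the URLs and the newline-separated response format are assumptions:

using System;
using System.Net;
using System.Threading;

class PollingClient
{
    static void Main()
    {
        DateTime lastSeen = DateTime.MinValue;

        while (true)
        {
            using (var client = new WebClient())
            {
                // Give the server its point of reference: the newest content we already have.
                string url = "http://server.example.com/newfiles?since=" +
                             Uri.EscapeDataString(lastSeen.ToString("o"));
                string list = client.DownloadString(url);

                // Pull each new file down, one call at a time; all traffic stays outbound.
                foreach (string line in list.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
                {
                    string name = line.Trim();
                    client.DownloadFile("http://server.example.com/files/" + name, name);
                }
            }

            lastSeen = DateTime.UtcNow;
            Thread.Sleep(TimeSpan.FromMinutes(5)); // poll interval
        }
    }
}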
You can, if you can run the program under an account that has access to that computer. However, having this sort of access on your network, letting the outside world put an unfiltered file on your internal network, is just asking to be hacked.
Finally, I decided to install a FileZilla FTP server on each client computer, and my page is working very well. Another option is to create a workgroup on the Windows server and put every client computer in this workgroup, so that the Windows server has access to the computers in the same workgroup.
Here are some links that may help with creating the workgroups:
http://helpdeskgeek.com/networking/cannot-see-other-computers-on-network-in-my-network-places/
http://www.computing.net/answers/windows-2003/server-2003-workgroup-setup-/1004.html
I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users.
The following shows a simple example:
1) The user's browser requests http://www.example.com/files/file1.zip
2) Request goes to server A, based on the DNS A record for example.com.
3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
4) Server A forwards the request to server B.
5) Server B returns file1.zip directly to the user without going through server A.
Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain.
From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy.
For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required.
For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address.
My problem is how to actually achieve step 4?
I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution.
Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.
Forgive my ignorance, but why not set up Server A to mount the files that are located on the other servers, either via NFS or SMB, depending on whether you're using a Unix variant or Windows?
It seems like what you're trying to do is overcomplicate something that could be very simple. In addition, using network-mounted files will allow you to mount those files on additional machines in the future when you need them. At that point, you could put a load balancer in front of server A (and servers x, y, and z, which also all mount files from server B).
Granted, this would not solve the problem of bypassing server A on the return; technically server A would be returning the file instead of server B. But if a load balancer were put in front of A, then A would become B anyway, so technically B would still be returning the file, because the load balancer would use direct server return (it's been a standard feature for a long time now).
If I did miss something, please do elaborate.
Edit: Yes I realize this was posted nearly 3 years ago. Oh well.
Why not send an HTTP response of status code 307 Temporary Redirect?
At that point the client will re-issue the request to the correct server.
I know you want a single domain, but you could have both individual subdomains plus a single common domain.
For example:
example.com has IP1, IP2, IP3.
example1.example.com has IP1
example2.example.com has IP2
example3.example.com has IP3
If the request comes to a server that can't handle it itself, it will forward the user to make another request to the correct specific server. An HTTP browser will follow this redirect transparently, by the way.
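A hedged ASP.NET sketch of that redirecting front server; the lookup function is hypothetical and stands in for whatever routing rule maps files to subdomains:

using System.Web;

public class RedirectHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical lookup: which subdomain actually stores this file?
        string host = LookUpHostForFile(context.Request.Path);

        // 307 keeps the request method and tells the browser to retry at the new host.
        context.Response.StatusCode = 307;
        context.Response.AddHeader("Location", "http://" + host + context.Request.RawUrl);
    }

    private static string LookUpHostForFile(string path)
    {
        // Placeholder routing rule; a real one might hash the path or consult a database.
        return "example2.example.com";
    }

    public bool IsReusable { get { return true; } }
}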
I'm new to web.config and ASP.NET. I want to make my client point to whichever of several servers is free, dynamically.
Is it possible?
Is there a way to have multiple entries in the web.config file from which we can choose at run time?
Let me be more specific. I have multiple clients that contact a server for a resource, but due to excess load on that server I want to have multiple servers, and each client should contact whichever server is free.
Thanks for the help in advance.
You have to have a separate load balancer in front of your servers.
Also, if you need application sessions, then you will need to move session state out of process - to SQL Server or to the ASP.NET State Service - so that the different servers share the session state.
You can read about your options about sharing session between servers here: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-1049585.html
Redirect might work for you:
http://www.w3schools.com/asp/met_redirect.asp
Also you could try using a proxy server:
http://www.iisproxy.net/
or load balancing server:
http://www.c-sharpcorner.com/UploadFile/gopenath/Page107182007032219AM/Page1.aspx