Getting client values of IIS Server Variables in Load Balanced Environment - asp.net

I have an intranet ASP.NET web application in which I need to get the IP of the client's machine. I do this via the following code:
HttpContext.Current.Request.ServerVariables.Item("REMOTE_HOST")
It used to work when my ASP.NET site was hosted on a single server. However, once we got the load balancer installed and migrated our apps to a web farm, the code above returns the IP of the load balancer device rather than the client's.
I am working with the networking folks to determine what can be configured differently with the load balancer, but in the meantime I was wondering if there was another way I could get the client's IP other than using that IIS Server Variable? Or any other suggestions?
Thank you!

Which load balancer are you using? It sounds as if your load balancer is acting as a proxy for the web traffic, which is why the source appears to come from the LB. Most hardware load balancers are built on Linux platforms, and there is provision for transparency if the kernel supports it:
http://www.mjmwired.net/kernel/Documentation/networking/tproxy.txt
However, this would probably require root access to the unit and some downtime. But it is something that may be worth mentioning to the vendor's support team if they don't have any ideas.
Another (hopefully much easier) option: you may be able to configure the load balancer's proxy to write the client's source IP into the X-Forwarded-For HTTP header:
http://en.wikipedia.org/wiki/X-Forwarded-For
You can then read this header in ASP.NET in much the same way (note that request headers appear in ServerVariables with an HTTP_ prefix):
Request.ServerVariables("HTTP_X_FORWARDED_FOR")
This may already work if the proxy is configured to add the header.
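For example, a minimal sketch that prefers the forwarded header and falls back to the direct peer address (this assumes the load balancer populates X-Forwarded-For in the usual comma-separated form; verify what yours actually sends):
' Sketch only: prefer the proxy-supplied header, fall back to the peer address seen by IIS
Dim clientIp As String = HttpContext.Current.Request.ServerVariables("HTTP_X_FORWARDED_FOR")
If String.IsNullOrEmpty(clientIp) Then
    clientIp = HttpContext.Current.Request.ServerVariables("REMOTE_ADDR")
Else
    ' X-Forwarded-For can hold a comma-separated proxy chain; the first entry is the original client
    clientIp = clientIp.Split(","c)(0).Trim()
End If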
Really your options depend on what your load balancer is capable of, and what is configurable. Note the list of common hardware vendors at the bottom of the wiki page above.

Related

No need for HTTPS at all, but there seems to be no other way from the web browser

Given a web application running over an HTTPS connection. It also has to communicate with a Java application on the local area network.
This server is literally in the same room as the PC on which the web app runs; a simple HTTP connection would be completely fine between the two, but since the web app is served over HTTPS, the browser forces HTTPS for this connection as well.
It's already overkill that I must run an HTTPS server in the Java application just because of that, and it still doesn't work, because now the browser complains that the certificate is self-signed.
I mean, do I really need to purchase an SSL certificate just so two of my computers in the same room can communicate? Even if I wanted to, I couldn't; there isn't even a fixed domain name.
I'm confused, is there a way around?
UPDATE:
The web application is served from the Internet, which is why it uses an HTTPS connection, while it has to receive data from a Java application running locally. Hundreds of megabytes every couple of minutes (confidential medical images), so sending all that through a proxy is not really an option.
I also wanted to avoid the need for any manual configuration on the user's side to make the communication work (like importing a certificate into the web browser and similar), but maybe I have no other option.

understanding load balancing in asp.net

I'm writing a website that is going to start using a load balancer and I'm trying to wrap my head around it.
Does IIS just do all the balancing for you?
Do you have a separate web layer that sits on the distributed server that does some work before sending to the sub server, like auth or other work?
It seems like a lot of the articles I keep reading don't really give me a straight answer, or I'm just not understanding them correctly. I'd like to get my head around how true load balancing works from a technical side, and if anyone has any code to share that would also be nice.
I understand caching is going to be a problem, but that's a different topic; session as well.
IIS does not have a load balancer by default, but you can use at least two Microsoft technologies:
Application Request Routing (ARR), which integrates with IIS; there you should ideally have a separate web layer to do the routing work,
Network Load Balancing (NLB), which is integrated into Microsoft Windows Server; there you can join existing servers into an NLB cluster.
Neither of these technologies requires any code per se; it is a matter of infrastructure. But you must of course keep the load-balanced environment in mind during development. For example, to make web sites truly balanced, they should be stateless. Otherwise you will have to provide so-called stickiness between client and server, so that the same client always connects to the same server.
To make a service stateless, do not persist any state (Session, for example, in the case of an ASP.NET website) on the server itself, but on an external server shared by all servers in the farm. It is common, for example, to use an external ASP.NET session store (StateServer or SQLServer mode) for all sites in the cluster.
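As a minimal sketch, a StateServer configuration in web.config might look like this (the host name is a placeholder; 42424 is the ASP.NET State Service's default port):
<system.web>
  <!-- Placeholder host: every server in the farm points at the same ASP.NET State Service -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver.example.local:42424"
                timeout="20" />
</system.web>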
EDIT:
Just to clarify a few things, a few words about both mentioned technologies:
NLB works at the network level (as a networking driver, in fact), without any knowledge of the applications involved. You create so-called clusters consisting of a few machines/servers and expose them under a single IP address. Another machine can then use this IP like any other IP, but connections will be routed automatically to one of the cluster's machines. The cluster is configured on each server; there is no external, additional routing machine. Depending on the cluster's settings, as already mentioned, stickiness can be enabled or disabled (called Single or None affinity here). There is also a load-weight parameter, so you can set weighted load distribution, sending more connections to the fastest machine, for example. But this parameter is static; it cannot adjust dynamically based on network, CPU or any other usage. In fact NLB does not care whether the target application is even running; it just routes network traffic to the selected machine. It does notice when a server goes offline, though, so no traffic will be routed there. The advantage of NLB is that it is quite lightweight and requires no additional machines.
ARR is much more sophisticated. It is built as a module on top of IIS and is designed to make routing decisions at the application level. Network load balancing is only one of its features; it is a more complete routing solution. It offers "rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching", as Microsoft states. You create Server Farms there with many options such as the load-balancing algorithm, load distribution and client stickiness. You can define health tests and routing rules to forward requests to other servers. The disadvantage of all this is that a dedicated machine is needed on which ARR is installed, so it takes more resources (and cost).
NLB & ARR - since a single ARR machine can itself be a single point of failure, Microsoft states that it is worth considering an NLB cluster of ARR machines.
Does IIS just do all the balancing for you?
Yes, if you configure Application Request Routing.
Do you have a separate web layer that sits on the distributed server
Yes.
that does some work before sending to the sub server, like auth or other work?
No, ARR is pretty 'dumb':
IIS ARR doesn't provide any pre-authentication. If pre-auth is a requirement then you can look at Web Application Proxy (WAP) which is available in Windows Server 2012 R2.
It just acts as a transparent proxy that accepts and forwards requests, while adding some caching when configured.
For authentication you can look at Windows Server 2012's Web Application Proxy.
Some tips, and perhaps items to get yourself fully acquainted with:
ARR, as all the answers above state, is a "proxy" that handles the traffic from your users to your servers.
You can handle State as Konrad points out, or you can have ARR do "sticky" sessions (ensure that a client always goes to "this server" - presumably the server that maintains state for that specific client). See the discussion/comments on that answer - it's great.
I haven't worn an IT/server hat for a long time and frankly haven't touched clustering hands-on (it was always "handled for me automagically" by some provider), so I asked our host: "what/how is replication among our cluster/farm done?" The question covers things like:
If I'm only working on/setting things on one server, does that get replicated across the X VMs in our cluster/farm? How long does that take?
What about dynamically generated code and/or user-generated files (file system)? If a file is on VM1's file system, I have 10 load-balanced VMs, and the client can hit any one of them at any time, then...?
What about encryption? E.g. if you use DPAPI to encrypt web.config sections (e.g. DB connection strings), what is the impact of that? It's based on the machine key, and the obvious issue is that you now have multiple machines/VMs. Rewrite using the RSA provider...? (see the sketch below)
SSL: ARR can handle this for you as well, and that's great! But with all that power comes a "con": if you check/validate in your code, e.g. HttpRequest.IsSecureConnection, it will always be false. Your servers/VMs don't have the cert; ARR does. The connection between the client and ARR is encrypted; the one from ARR to your servers/VMs isn't. As the link explains, if you prefer it the other way around (no offloading), you can... but that means all your servers/VMs should then have a cert (and how that pertains to "replication" above starts popping into your head).
Not meant to be comprehensive, just listing things out from memory...Hth
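On the DPAPI/RSA point above, a rough sketch of the usual farm approach with aspnet_regiis; the container, provider and application names here are placeholders, and the provider has to be declared in web.config as an RsaProtectedConfigurationProvider that uses that key container:
rem Create an exportable RSA key container once, export it with its private key, and import it on every farm member
aspnet_regiis -pc "SharedFarmKeys" -exp
aspnet_regiis -px "SharedFarmKeys" keys.xml -pri
aspnet_regiis -pi "SharedFarmKeys" keys.xml
rem Encrypt the section with a provider configured to use that container
aspnet_regiis -pe "connectionStrings" -app "/MyApp" -prov "SharedFarmProvider"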

Can Nginx be used instead of Gunicorn to manage multiple local OpenERP worker servers?

I'm currently using Nginx as a web server for Openerp. It's used to handle SSL and cache static data.
I'm considering extending its use to also handle failover and load balancing with a second server, using the upstream module.
In the process, it occurred to me that Nginx could also do this for multiple OpenERP servers on the same machine, so I can take advantage of multiple cores. But Gunicorn seems to be the preferred tool for this.
The question is: can Nginx do a good job handling traffic to multiple local OpenERP servers, bypassing completely the need for Gunicorn?
Let's first talk about what they both basically are.
Nginx is a pure web server that's intended for serving up static content and/or redirecting the request to another socket to handle the request.
Gunicorn is based on the pre-fork worker model. This means that there is a central master process that manages a set of worker processes. The master never knows anything about individual clients. All requests and responses are handled completely by worker processes.
If you look closely, Gunicorn is designed after Unicorn; follow the link for more detail on the differences, which shows that the same Nginx-plus-Unicorn model also works with Gunicorn.
nginx is not a "pure web server" :) It's rather a web accelerator capable of doing load balancing, caching, SSL termination, request routing AND static content. A "pure web server" would be something like Apache - historically a web server for static content, CGIs and later for mod_something.
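To the original question: the upstream module does cover this. A rough sketch, assuming three local OpenERP processes listening on ports 8069-8071 (8069 is OpenERP's default HTTP port; the other ports and the upstream name are placeholders):
# Inside the http {} block of nginx.conf
upstream openerp_backend {
    server 127.0.0.1:8069;
    server 127.0.0.1:8070;
    server 127.0.0.1:8071;
}
server {
    listen 80;
    location / {
        proxy_pass http://openerp_backend;
        # keep the original client address visible to the backend
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}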

How to set up SSL in a load balanced environment?

Here is our current infrastructure:
2 web servers behind a shared load balancer
dns is pointing to the load balancer
web app is done in asp.net, with wcf services
My question is how to set up the SSL certificate to support https connection.
Here are 2 ideas that I have:
1. The SSL certificate terminates at the load balancer; secure and insecure traffic behind the load balancer is forwarded to 2 different ports.
pro: only need 1 certificate as I scale horizontally
cons: I have to check whether a request is secure by checking which port it came in on, which doesn't quite feel right to me;
also, WCF by design will not work when IIS is bound to 2 different ports (according to this)
2. The SSL certificate terminates on each of the servers.
cons: need to add more certificates to scale horizontally
thanks
Definitely terminate SSL at the load balancer!!! Anything behind that should NOT be visible outside. Why wouldn't two ports for secure/insecure work just fine?
You don't actually need more certificates at all. Because the externally visible FQDN is the same, you can use the same certificate on each machine.
This means that WCF (if you're using it) will work. WCF with the SSL terminating on the external load balancer is painful if you're signing/encrypting at a message level rather than a transport level.
You don't need two ports, most likely. Just have the SSL virtual server on the load balancer add an HTTP header to the request and check for that. It's what we do with our Zeus ZXTM 5.1.
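A minimal sketch of that check in ASP.NET; "X-Forwarded-Proto" is only an example name, so match whatever header your load balancer's SSL virtual server actually injects:
' Header name is an assumption; use the one your load balancer adds
Dim forwardedProto As String = Request.Headers("X-Forwarded-Proto")
Dim isSecure As Boolean = Request.IsSecureConnection OrElse String.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase)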
You don't have to get a cert for every site; there are such things as wildcard certs, but they would have to be installed on every server. (That's assuming you are using subdomains; if not, you can reuse the same cert across machines.)
But I would probably put the cert on the load balancer if not just for the sake of easy configuration.

Website currently being viewed

I have 50 machines on a LAN, and each of them has internet access. Can a program be developed using VC++ that will tell which websites are being opened by the users on each machine?
You can easily accomplish this by writing an application which captures packets outbound on port 80 (and the associated DNS information). The problem is that this application must run on every client computer which you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy which can record the same information.
In the latter case, there are many enterprise tools suited to exactly this task.
Route your internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something else. If proxying is not possible, use Fiddler to capture data at a known location on each machine and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.
