How Does Running Multiple Server Instances Work with Regard to the User IP Address - asp.net

If I'm running multiple web server instances, can a client application (like a user using a web browser) end up using different instances, or would it be routed to the same instance every time? Let's say they duplicate a tab or open a new tab: are those tabs still using the same instance too?
This would be in Azure with IIS/ASP.NET.

When you are using load balancing in any environment, you almost always have the option to set session affinity. It basically means that a client who is directed to server 1 on their first request will always be routed to that same server. Azure provides this flexibility too, without question. Here is the documentation with some details on how to do that configuration.
There are a couple of ways to configure session affinity. One prominent way is by source IP, so using a different tab or a different browser instance will not make any difference: requests from a client machine will always carry the same source IP address and hence will go to the same server.
Here is the PowerShell sample to set source-IP-based affinity:
Set-AzureLoadBalancedEndpoint -ServiceName MyService -LBSetName LBSet1 -Protocol TCP -LocalPort 80 -ProbeProtocolTCP -ProbePort 8080 -LoadBalancerDistribution sourceIP
Here is some detail on a more specific scenario that happens when users access a load-balanced site from behind a company's firewall.

Related

Routing - Web browser logs out of first instance when second instance is logged in, can I stop this behaviour?

This is easier to explain with a diagram.
Instance One -> Port 5000 -> Forward to router 5000
Instance Two -> Port 5000 -> Forward to router 5001
Connecting to instance One using WANIP:5000 works fine until I connect to instance Two (WANIP:5001).
Web browser logs me out of instance one when I log in to instance two.
How can I stop the web browser from logging me out of Instance one when I connect to instance Two?
I was expecting Instance 1 and Instance 2 to be usable simultaneously.
What have I tried?
Checked that instance One and instance Two are not on the same IP address. They use the same port (currently 5000); the forwarded port is different for each instance.
I changed the port running on the instance to a different one and forwarded that port.
I switched UDP off on the router.
I unticked NAT on the router.
These actions did not resolve my issue.
I can connect to both instances if I use a separate web browser to connect to each instance. For example, Firefox (instance One), Edge (instance Two).
This does not happen when the instances are on a local LAN; the behaviour only manifests when the instances are forwarded through the router. If it helps, the instances are running .NET Core MVC.
Answering my own question in case anyone else has issues with this.
This seems to be a well-known problem involving cookies and the way the web browser shares them. Cookies are not (the way we use them, anyway) independent for each instance, and a new cookie is not generated when only the port number changes.
You can use an alias for the IP you are connecting to, which then allows the web browser to keep a separate cookie for each instance.
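Another way to avoid the clash, rather than aliasing the IP, is to give each instance its own cookie name. A minimal sketch, assuming ASP.NET Core cookie authentication; "InstanceName" is a hypothetical setting you would give a different value in each instance's configuration:

using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

// Hypothetical per-deployment setting, e.g. "One" for the first instance, "Two" for the second.
var instanceName = builder.Configuration["InstanceName"] ?? "One";

builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        // A distinct cookie name per instance stops the browser from overwriting
        // Instance One's login cookie when you sign in to Instance Two on another port.
        options.Cookie.Name = $".MyApp.Auth.{instanceName}";
    });

var app = builder.Build();
app.UseAuthentication();
app.MapDefaultControllerRoute();
app.Run();

Whether you rename the cookie or alias the host, the underlying point is the same: the browser keys cookies by host name only, not by port, so each instance needs something unique on its side.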

Building Proxy Site with Nginx and Rotating Proxy Service

I'm looking to build a similar application to https://www.proxysite.com/ but am not sure of the best architecture.
I'm looking to have a data flow like this:
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure Nginx with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office, so it sounds like the users who will use the proxy are workers.
Luminati (now BrightData) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be BrightData's proxy). It has a ton of different parameters that you can include for each proxy (including IP rotation), and each port can be configured to have a unique setup.
Then you simply go to your user's PC, open the browser proxy settings, type in the IP address of the server the proxy manager is running on and the specific port you configured, and voila: you have central control of the proxies, and your user's browser is proxied.
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj

Is NLB a good way to keep a website available while deploying new code?

I want to be able to deploy a new version of my asp.net/mvc website without losing client session state or causing any downtime. The way I'm thinking of accomplishing this is by creating a Windows Network Load Balancing server so that clients can reach it via a single URL such as https://mysite.org/. It would then redirect traffic to one of two other sites (A.mysite.org or B.mysite.org). I'll set the NLB's affinity to Single, and disable site B so that all sessions are directed to site A. When I need to deploy a new version of the website, I'll deploy to site B, enable site B, and disable site A. So, everybody that was on site A can stay there (using version 1) until they log off. All new sessions will connect to site B and run version 2. The next time I deploy, I'll do the reverse.
I've never used NLB. Is this appropriate? Is there a simpler, easier way?
How does NLB know when a request from client X already has a session on A or B? I.e. when they log off the website and try to log in again, will the NLB send them to the same site they were on before?
There are quite a few considerations here.
Firstly, rather than juggling the affinity on your NLB, you will probably be better off storing your ASP.NET sessions in StateServer or SQL-based session management, to allow web clients (or web service clients) to access your site without 'sticky' affinity. Once you've set up the StateServer or created the SQL session DB, it should be a simple change to your app's web config.
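For reference, a rough sketch of what that web.config change could look like; the server names and connection string below are placeholders, not values from your setup:

<system.web>
  <!-- Option 1: ASP.NET State Service running on a separate machine -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver01:42424"
                timeout="20" />

  <!-- Option 2: SQL Server-backed session state (prepare the database with
       aspnet_regsql.exe -ssadd first), used instead of the element above:
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=sqlserver01;Integrated Security=SSPI"
                timeout="20" />
  -->
</system.web>

If you use forms authentication, both nodes will also need the same <machineKey> element so that tickets issued by one server validate on the other.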
NLB itself works great for keeping your site up while you upgrade it. You will typically drainstop a server in the cluster before reinstalling your app on it, test it, and then bring it back into the NLB cluster, before repeating the process with the next server, etc.
AFAIK, NLB Single affinity works at the TCP/IP level and does not interrogate ASP.NET sessions. Basically, any connection from the same client IP to the same server IP:port combination will be directed to the same server. Also AFAIK, both servers will be sharing the NLB IP (in addition to any existing IPs they have).
Since your site uses SSL, it seems that unless you have affinity, the SSL session keys will need to be renegotiated on each request, which could have performance implications.

Redirecting http traffic to another server temporarily

Assume you have one box (dedicated server) that's on 24/7 and several other boxes that are user machines with unused bandwidth. Assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address, and not an IP. I.e. I don't want to redirect to another web page; I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work somewhat along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do similarly)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells web browser to repeat request to user machine
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can close down at almost any point in time, but it is assumed that the user machine will wait for ongoing transactions to finish; no closing the server program in the middle of a GET or something.
What you want is called a Proxy server or load balancer that would sit in front of your web server.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a p2p web server setup.
If that's the case, using DNS and HTTP 30x redirects would probably be what you need. It would probably look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server, and the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
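A rough sketch of what that 302 redirect might look like on the master server, assuming ASP.NET Core MVC; the EntryController name and the OnlineMachines registry are hypothetical stand-ins for whatever the user machines update when they register:

using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

public class EntryController : Controller
{
    // Filled in as user machines register themselves, e.g. "user123.myapp.com".
    private static readonly ConcurrentQueue<string> OnlineMachines = new();

    [HttpGet("{*path}")]
    public IActionResult Any(string? path)
    {
        var machines = OnlineMachines.ToArray();
        if (machines.Length == 0)
            return StatusCode(503);   // no user machine online right now

        // Pick one of the registered machines and issue a 302 so the browser
        // repeats the same request against that host directly.
        var host = machines[Random.Shared.Next(machines.Length)];
        return Redirect($"http://{host}/{path}");
    }
}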
Some problems I see with this approach:
First, once a user gets redirected to a user machine, if that machine went offline it would seem like the app was dead. You could avoid this by having all the links on every page specifically point to "www.myapp.com" instead of using relative links, but then every single request has to be routed through the master server, which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session state server of some sort accessible by all the user machines, although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
and more interesting stuff in here
Apart from load balancing, you will need to set up a more or less similar environment on the users' machines.
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a hostname for each machine that is going to serve users:
www IN A xxx.xxx.xxx.xxx # IP address of machine 1
www IN A xxx.xxx.xxx.xxx # IP address of machine 2
www IN A xxx.xxx.xxx.xxx # IP address of machine 3
Then as other machines come online, you could add them to the DNS entries:
www IN A xxx.xxx.xxx.xxx # IP address of machine 4
The only problem is you'll have to lower the time to live (TTL) for each record (I think the default is 86400 seconds, i.e. 1 day).
If a machine goes down, you'll have to remove its DNS entry, though I do think this is the least intensive way of adding capacity to any website. Jeff Atwood has more info here: is round robin dns good enough?

How to get visitor IP on load balancing machine using asp.net

We have two load-balanced servers. On them we have hosted an ASP.NET 3.5 application. Right now we are using Request.UserHostAddress to get the visitor IP, but it is giving the load balancer IP instead of the real IP. Does anyone have code for this?
I think you must check not only HTTP_X_FORWARDED_FOR, but all of the following, depending on what your load balancer is using:
Context.Request.ServerVariables[CheckAllBelowList]
"HTTP_X_COMING_FROM",
"HTTP_X_FORWARDED_FOR",
"HTTP_X_FORWARDED",
"HTTP_VIA",
"HTTP_COMING_FROM",
"HTTP_FORWARDED_FOR",
"HTTP_FORWARDED",
"HTTP_FROM",
"HTTP_PROXY_CONNECTION",
"HTTP_CLIENT_IP",
"CLIENT_IP",
"FORWARDED",
The return of one of these is the actual IP of your client, except if there is another proxy in between. I would also like to learn how someone can get past this, to find the user that is behind 2-3 proxies...
If you know any other, please tell me so.
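If it helps, here is a small helper along those lines, assuming classic ASP.NET (System.Web); it just walks the server variables listed above in order and falls back to UserHostAddress:

using System.Web;

public static class ClientIpHelper
{
    // Same server variables as listed above, checked in order.
    private static readonly string[] ForwardingHeaders =
    {
        "HTTP_X_COMING_FROM", "HTTP_X_FORWARDED_FOR", "HTTP_X_FORWARDED",
        "HTTP_VIA", "HTTP_COMING_FROM", "HTTP_FORWARDED_FOR", "HTTP_FORWARDED",
        "HTTP_FROM", "HTTP_PROXY_CONNECTION", "HTTP_CLIENT_IP", "CLIENT_IP", "FORWARDED"
    };

    public static string GetClientIp(HttpContext context)
    {
        foreach (var header in ForwardingHeaders)
        {
            var value = context.Request.ServerVariables[header];
            if (!string.IsNullOrEmpty(value))
                return value;   // may still be an intermediate proxy, as noted above
        }
        // Nothing forwarded: fall back to the direct (load balancer) address.
        return context.Request.UserHostAddress;
    }
}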
The problem is more to do with the fact that the "load balancer" is acting as a proxy. What type of load balancer are you using? I know with Microsoft ISA Server there is a setting to pass the original user's IP address through to the web server.
Otherwise you will have to write a page to dump out the server variables and see if there is an extra server variable being added that gives you the real client IP address.
Depending on the load balancing server, the client's IP could/should be written to:
Context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
But beware: if the user is also behind a proxy, the value may be the proxy's original client instead of the load balancer's client (which would be the proxy IP in this case). I'm not certain what behaviour is "normal".
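One extra wrinkle worth noting: when several proxies are chained, HTTP_X_FORWARDED_FOR can hold a comma-separated list ("client, proxy1, proxy2"), in which case the left-most entry is usually the original client. A small sketch of handling that:

// Take the first entry of a comma-separated X-Forwarded-For chain,
// falling back to UserHostAddress when no proxy header is present.
var forwardedFor = Context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
var clientIp = string.IsNullOrEmpty(forwardedFor)
    ? Context.Request.UserHostAddress
    : forwardedFor.Split(',')[0].Trim();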
Hope that helps,
Alex
