We have two AD FS servers using WID (adfs1 and adfs2) behind a load balancer, and two AD FS proxy servers (proxy1 and proxy2), also load balanced. An error was logged on proxy1: "the federation proxy server could not renew its trust with the Federation Service" (event ID 394).
The fix seems to be to make sure proxy1 talks to the primary AD FS server, adfs1 (instead of the VIP adfs.domain.com, which load-balances adfs1 and adfs2), and to re-register it. I did this by pointing the FQDN adfs.domain.com at adfs1 in the hosts file on proxy1. Since the proxy will keep wanting to renew the trust, I expect I have to leave it that way. This seems to break the full-mesh redundancy of the 2x2 setup, since proxy1 will only ever talk to adfs1. Is there a better way to handle this issue in this configuration?
I understand moving to SQL Server may be an option, but that is another single point of failure I would like to avoid, since this is not a huge deployment. Any other ideas?
Thank you for your help!
Mike
Related:
https://social.msdn.microsoft.com/Forums/en-US/f25e9170-b0ad-4894-8622-c2a0493df5eb/adfs-30-wap-connection-to-primary-adfs-servers-maintaining-the-wap-trust?forum=ADFS
https://answers.microsoft.com/en-us/msoffice/forum/msoffice_o365admin-mso_dirservices/adfs-30-proxy-loses-trust-with-internal-adfs/55aaf56f-f093-4620-ae87-9ad777c3a71d
You don't need to point a WAP at a specific AD FS server (such as the primary, as you are doing now). You should use the load-balanced address so the WAP can reach either of the two AD FS servers.
The difference is that when establishing a trust with a WID-based (no SQL) AD FS farm, the trust setup will either complete near-instantly or within about 6 minutes, depending on whether the load balancer picked the primary or not. This is by design: any setup done via a secondary is redirected to the primary and then has to synchronize back to the secondary, which happens every 5 minutes by default.
Keep your deployment as simple as possible and don't make it more complex than it needs to be. https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/design/federation-server-farm-using-sql-server explains the WID limits, which should tell you whether you need SQL.
You should troubleshoot WAP trust issues using the guide at https://adfshelp.microsoft.com/TroubleshootingGuides/Workflow/da33a6cd-166b-4fca-863a-73aec904c3fd . If you are still stuck, contact Microsoft support.
Related
Since we moved to Azure, we have had numerous lost-session issues, but only in production.
We use InProc session state, cookie-based sticky sessions, and a large timeout; there is no high traffic and no high memory or process usage.
We use HAProxy as the load balancer.
I have done some basic research, and none of the following seems to be the cause:
session timeout
application pool settings/recycling
memory size and usage thresholds
swallowed exceptions
file-system changes that would cause an app-domain restart
I'm particularly suspicious about how the load balancer, SSL, and the application work together, and whether the HTTP headers are intact, but I don't know of any tools to really monitor that.
I'm assigned to find a solution, yet at the same time I have no privileges to access the machines.
Logs (log4net) are all stored in a database, but they don't give a clear picture of what is going on in the system, and I cannot follow a user session with them.
I'm allowed to find the problem by adding the required logging to the code, developing some kind of monitoring module, or using profiling/debugging tools.
There is only one production deployment a month, so I'm trying to make the best possible use of each opportunity.
Question:
Is there any useful monitoring/profiling tool that can give me a clear view of what is happening in the system by aggregating the information I need? For example, following a user/session across requests from login until the session drops, plus information about headers and other application parameters.
If there is no such tool out there, can you give me ideas for writing one?
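Since you are allowed to add a monitoring module, here is a minimal sketch of one (the module name, logger name, and cookie name are assumptions; adjust them to your setup). It writes one log4net line per request, keyed by the ASP.NET session cookie, so you can group one user's requests in the log database and see exactly where the cookie changes or the session drops:

    using System;
    using System.Web;
    using log4net;

    // Hypothetical diagnostic module (not an existing tool): logs one line per
    // request, keyed by the ASP.NET session cookie, so a single user's requests
    // can be correlated across servers. Register it in web.config under
    // <system.webServer>/<modules>.
    public class SessionTraceModule : IHttpModule
    {
        private static readonly ILog Log = LogManager.GetLogger("SessionTrace");

        public void Init(HttpApplication app)
        {
            app.BeginRequest += delegate(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                HttpCookie sid = ctx.Request.Cookies["ASP.NET_SessionId"];
                Log.InfoFormat("{0:o} sid={1} url={2} ua={3} xff={4}",
                    DateTime.UtcNow,
                    sid != null ? sid.Value : "(none)",
                    ctx.Request.RawUrl,
                    ctx.Request.UserAgent,
                    ctx.Request.Headers["X-Forwarded-For"]);
            };
        }

        public void Dispose() { }
    }

If the sid of a logged-in user changes while their authentication cookie stays constant, the problem is on the cookie/load-balancer side; if requests carrying the same sid alternate between servers, the sticky sessions in HAProxy are not actually sticking, and InProc state is then lost, as the answer below explains.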
This is a common issue in load-balanced environments. As mentioned in an answer to a similar question:
InProc mode stores session state in memory on the web server. That means session data is kept inside the web server on a given VM and is not shared outside that VM. So when you have multiple servers behind a load balancer, the session state isn't shared between them. To solve this, you must store your session state externally to the web server.
Use Redis, a SQL database, or something similar.
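In classic ASP.NET the usual fix is configuration: switch the sessionState mode in web.config from InProc to StateServer, SQLServer, or a Redis-backed provider. As a minimal sketch of the underlying principle (the host name and keys below are made up), state written to an external store is visible from every node, so it no longer matters which server the balancer picks:

    using System;
    using StackExchange.Redis; // NuGet package: StackExchange.Redis

    class ExternalStateDemo
    {
        static void Main()
        {
            // Every web server in the farm connects to the same Redis instance,
            // so a value written by one node is readable by all the others.
            var redis = ConnectionMultiplexer.Connect("redis-host:6379"); // hypothetical host
            IDatabase db = redis.GetDatabase();

            // Key the data by session id instead of keeping it in process memory.
            db.StringSet("session:abc123:cart", "3 items", TimeSpan.FromMinutes(20));
            Console.WriteLine(db.StringGet("session:abc123:cart"));
        }
    }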
Apologies if there is an answer out here already, but I've looked at over two dozen threads and can't find a specific answer.
For our ASP.NET (2.0) application, our infrastructure team set up a load balancer in front of two IIS 7.5 servers.
We have a network file server where the single copy of the application files resides. I know very little about the inner workings of load balancing, or even of IIS in general.
My question is about sessions. I'm wondering whether the 'balancing' part is based on sessions or on individual page requests.
For example, when a user first logs in to the site, he's authenticated (forms authentication), but then while he navigates around from page to page--does IIS 7.5 automatically "lock him in" to the particular server that first logged him in and authenticated him, or could his page requests alternate from one server to the next?
If the requests do indeed alternate, what problems might I face? I've read a bit about duplicating the machineKey, but we have done nothing in web.config regarding machineKey--it does not exist there at all.
I will add that we are not experiencing any issues (that we know of, anyway) regarding authentication, session objects, etc. The site is working very well; the question is more academic. I just want to make sure I'm not missing something that may bite me down the road.
Thanks,
Jim
while he navigates around from page to page--does IIS 7.5 automatically "lock him in" to the particular server that first logged him in and authenticated him
That depends on the configuration of the load balancer and is beyond the scope of a single IIS server. Since you haven't provided any information on which balancer you actually use, I can only give general information. Regardless of the balancer type (hardware or software), it can be configured for so-called "sticky sessions". In this mode, you are guaranteed that once a browser establishes a connection to your cluster, it will always hit the same server. There are two example techniques: in the first, the balancer simply creates a virtual mapping from source IP addresses to cluster node numbers (which means that multiple requests from the same IP hit the same server); in the second, the balancer attaches an additional HTTP cookie/header that allows it to recognize the same client and direct it to the same node.
Note that the term "session" here has nothing to do with the server-side "session" where you have a per-user container. Session here means a "client-side session": a single browser on a single operating system, and a series of request-replies from it to your server.
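To make the first technique concrete, here is a toy sketch of source-IP affinity (the node count and address are made up, and real balancers use more robust hashing): the balancer derives a stable node index from the client address, so the same IP always lands on the same server.

    using System;
    using System.Net;

    class StickyDemo
    {
        // Derive a stable node index from the client address.
        static int NodeFor(IPAddress client, int nodeCount)
        {
            int hash = 17;
            foreach (byte b in client.GetAddressBytes())
                hash = hash * 31 + b;
            return (hash & 0x7fffffff) % nodeCount;
        }

        static void Main()
        {
            IPAddress ip = IPAddress.Parse("203.0.113.42");
            Console.WriteLine("Requests from {0} always go to node {1}",
                              ip, NodeFor(ip, 3));
        }
    }

The cookie-based variant works the same way, except the chosen node is remembered in a cookie or header instead of being recomputed from the address.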
If the requests do indeed alternate, what problems might I face
Multiple issues. First, encryption that relies on the machine key will not work. This means that even forms-authentication cookies would be rejected by cluster nodes other than the one that issued the cookie. The solution is to have the same machine key on all nodes.
Another common issue is the InProc session provider: any data stored in the memory of one application server will not "magically" appear on other cluster nodes, which makes the session data unavailable there. The solution is to configure the session to be stored out of process, for example in a SQL Server database.
I will add that we are not experiencing any issues (that we know of anyway) regarding authentication, session objects
Sounds like either a lucky coincidence, or the infrastructure team has already configured sticky sessions. The latter is quite possible; the configuration is usually obvious and easy.
What is the difference between single-instance and multiple-instance sites in ASP.NET when using Azure cloud services?
OK - there are a few concepts here that you need to grok to answer your question.
// Arrange.
First, I'll make some assumptions about your question, mainly based on the link to the documentation, so my answer has less ambiguity.
You're dealing with Azure WebSites and not a Cloud Web Role or a custom Windows Virtual Machine with IIS.
You're trying to remember stuff with the Session object (i.e. state data).
You're not sure what an instance, or multiple instances, means with respect to Azure WebSites.
NOTE: My answer applies to WebSites, Web Roles and Windows VMs running IIS .. but I just wanted to be uber clear on the Q.
// Act.
When you create a website (whether in a WebSite, a Web Role, or a custom Windows Server with IIS), the website gets a defined memory boundary/space/garden/wall/magic bubble, which is called the App Pool. It means that your website is 100% isolated from the other websites on that single server. If you do something bad, it doesn't mess with anyone else's sites.
So that website, installed on that single server, is called an instance.
Next, we decide that we need to handle lots of people hitting our website, so we need to scale out. This means making copies/clones of this website, which has the effect of splitting up the load. If you scale out to 3 copies, then each web server should (for simplicity) take a third of the work, so each handles about 33% of the load***.
Now you have 1 website on 3 servers, and this is called multiple instances.
So an instance is simply the term used to describe how many servers the website is installed on.
OK - so why is this important, and what does it have to do with state (as suggested by the article you were reading/referring to)?
Remember how I said that an instance is a single server, and that if you have multiple instances you have more than one server? Well, now that the website exists on different servers, they can't share their state data between them unless you do some special stuff. That special stuff is what that document is chatting about, with lots of funky terms like InProc, OutProc, distributed caching, etc.
// Assert.
So the TL;DR is: when you scale out and have multiple copies of your website on separate hardware, that's called multiple instances, and when you do that (have more than one copy) you need to consider some special code to handle sharing of state across those multiple servers -- if you need to share state at all.
Now - have a pic of a beautiful Mola Mola for reading all of this :)
*** Yes yes yes .. there are a number of algorithms for load balancing scaled-out sites, like round robin, etc. Let's just keep it really simple for the purpose of this question. K? thxgoodbai.
"Multiple instances" means what it says: more than a single instance of your website, hosted on two or more IIS instances. If you use in-process state tracking, you will not be able to balance traffic between the two web servers (typically, sticky sessions are used to get around this, but that is not an option with the load-balancing capabilities of Azure).
Here is the scenario:
We have 3 web servers A, B, C.
We want to release a new version of the application without taking the application down
(i.e. without using the "Down for maintenance" page).
Server A goes live with the latest code.
Server B is taken offline. Users on Server B get routed to A and C.
Page1.aspx was updated with a new control. Anyone who came from Server B to Server A while
on this page will get a ViewState error when they perform an action on the page. This is what we want to prevent.
How do some of you resolve this issue?
Here are some thoughts we had (whether they're possible with our load balancer, I don't know... I am not familiar with load balancer configuration [it's an F5]):
The more naive approach:
Take down servers A and B and update them. C retains the old code. All traffic will be directed to C, and that's OK since it's running the old code. When A and B go live with the update, if possible, tell the load balancer to keep people with active sessions on C and to initiate all new sessions on A and B. The problem with this approach is that, in theory, sessions can stick around for a long time if the user keeps using the application.
The less naive approach:
Similar to the naive approach, except that (if possible) we tell the load balancer about "safe" pages, i.e. pages that were not changed. When the user eventually lands on a "safe" page, he or she gets routed to server A or B. In theory the user may never land on one of these pages, but this approach is a little less risky (though it requires more work).
I assume that your load balancer directs individual users back to the same server in the web farm during normal operation, which is why you do not normally experience this issue, but only when you start redirecting users between servers.
If that assumption is correct, then the issue is most likely an inconsistent machineKey across the server farm.
ViewState is hashed with the machine key of the server to prevent tampering by the user on the client side. The machine key is generated automatically by IIS, will change every time the server restarts or is reset, and is unique to each server.
To ensure that you don't hit ViewState validation issues when users move between servers, there are two possible courses of action.
Disable the anti-tampering protection, either on the individual page or globally via the pages element of the web.config file, by setting the enableViewStateMac attribute to false. I mention this purely for the sake of completeness: you should never do this on a production website.
Manually generate a machine key and share that same value across each application on each of your servers (you could use the same key for all of your applications, but it is sensible to use one key per application to maximise security). To do this you need to generate keys (do not use any you may see in demos on the internet; that defeats the purpose of a unique machine key). This can be done programmatically or in IIS Manager (see http://www.codeproject.com/Articles/221889/How-to-Generate-Machine-Key-in-IIS7). Use the same machine key when deploying the website to all of your servers.
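For the programmatic route, here is a small sketch that generates random keys and prints them as hex, ready for the validationKey and decryptionKey attributes of the machineKey element (64 and 32 bytes are common sizes; match them to the algorithms you actually configure):

    using System;
    using System.Security.Cryptography;

    class MachineKeyGenerator
    {
        // Returns byteCount cryptographically random bytes as an uppercase hex string.
        static string RandomHex(int byteCount)
        {
            var bytes = new byte[byteCount];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(bytes);
            return BitConverter.ToString(bytes).Replace("-", "");
        }

        static void Main()
        {
            Console.WriteLine("validationKey=\"{0}\"", RandomHex(64)); // e.g. for HMACSHA256
            Console.WriteLine("decryptionKey=\"{0}\"", RandomHex(32)); // e.g. for AES-256
        }
    }

Paste the same two values into the machineKey element in web.config on every server in the farm.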
I can't speak to the best practice for upgrading applications that require 100% uptime.
Hi, we are developing a multi-tenant application in ASP.NET with a separate database for each tenant. One of the requirements is to monitor the bandwidth usage of each tenant.
I have tried searching but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, where each tenant can have its own top-level domain, a subdomain, or a combination of both.
So what are the available options? The ones I can think of are:
IIS log monitoring, i.e. a separate application that calculates the bandwidth for each tenant.
Logging each request and response for a tenant from within the application, then calculating the total bandwidth usage from those logs.
Using third-party components, if any are available.
So which do you think is the best approach? Also, is there any other way to do this?
OK, here is an idea (which I have not tested; I leave that to you).
In Global.asax,
use one of these handlers (find the one that sees a valid final size):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size you have sent with
Response.Filter.Length
Needless to say, you get the file name of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can capture your size there and do the rest.
I must note that you need to test this idea first to see whether it works, and maybe improve it. Also keep in mind that if you compress pages on the server, the length will not be correct, and you may need to do the compression in Global.asax to get the actual length.
Hope this helps.
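One caveat: Response.Filter is a stream and is not guaranteed to be seekable, so reading Response.Filter.Length can throw. A variant of the same Global.asax idea that avoids this (a sketch, untested, per the answer's own warning; deriving the tenant from the host name is an assumption) is to wrap the filter in a small counting stream at the start of each request and read the count at the end:

    using System;
    using System.IO;
    using System.Web;

    // Pass-through stream that counts the bytes ASP.NET writes to the response.
    public class CountingFilter : Stream
    {
        private readonly Stream inner;
        public long BytesWritten { get; private set; }

        public CountingFilter(Stream inner) { this.inner = inner; }

        public override void Write(byte[] buffer, int offset, int count)
        {
            BytesWritten += count;
            inner.Write(buffer, offset, count);
        }

        // Everything else just delegates to the wrapped stream.
        public override bool CanRead { get { return inner.CanRead; } }
        public override bool CanSeek { get { return inner.CanSeek; } }
        public override bool CanWrite { get { return inner.CanWrite; } }
        public override long Length { get { return inner.Length; } }
        public override long Position
        {
            get { return inner.Position; }
            set { inner.Position = value; }
        }
        public override void Flush() { inner.Flush(); }
        public override int Read(byte[] buffer, int offset, int count)
        { return inner.Read(buffer, offset, count); }
        public override long Seek(long offset, SeekOrigin origin)
        { return inner.Seek(offset, origin); }
        public override void SetLength(long value) { inner.SetLength(value); }
    }

    // Global.asax.cs wiring; mapping the host name to a tenant is an assumption.
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            var counter = new CountingFilter(Response.Filter);
            Response.Filter = counter;
            Context.Items["byteCounter"] = counter;
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var counter = Context.Items["byteCounter"] as CountingFilter;
            if (counter != null)
            {
                // Persist this per tenant instead of just tracing it.
                System.Diagnostics.Trace.WriteLine(string.Format(
                    "{0} sent {1} bytes", Request.Url.Host, counter.BytesWritten));
            }
        }
    }

This counts the response body only (request sizes could be read from Request.ContentLength), and the compression caveat above still applies, since what the counter sees depends on where it sits in the filter chain.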
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool that parses them and calculates the totals per day/week/month/whatever.
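A sketch of such a tool, assuming the W3C log format with the cs-bytes, sc-bytes, and cs-host fields enabled (they are off by default, and field positions vary, so the #Fields header has to be parsed):

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Sums request (cs-bytes) and response (sc-bytes) sizes per host name from a
    // W3C-format IIS log. Field positions are taken from the #Fields header line.
    class IisLogBandwidth
    {
        static void Main(string[] args)
        {
            var totals = new Dictionary<string, long>();
            int hostIdx = -1, csIdx = -1, scIdx = -1;

            foreach (string line in File.ReadLines(args[0]))
            {
                if (line.StartsWith("#Fields:"))
                {
                    string[] fields = line.Substring(8).Trim().Split(' ');
                    hostIdx = Array.IndexOf(fields, "cs-host");
                    csIdx = Array.IndexOf(fields, "cs-bytes");
                    scIdx = Array.IndexOf(fields, "sc-bytes");
                    continue;
                }
                if (line.StartsWith("#") || hostIdx < 0 || csIdx < 0 || scIdx < 0)
                    continue;

                string[] cols = line.Split(' ');
                if (cols.Length <= hostIdx || cols.Length <= csIdx || cols.Length <= scIdx)
                    continue;

                long cs, sc;
                long.TryParse(cols[csIdx], out cs);
                long.TryParse(cols[scIdx], out sc);

                long sum;
                totals.TryGetValue(cols[hostIdx], out sum);
                totals[cols[hostIdx]] = sum + cs + sc;
            }

            foreach (KeyValuePair<string, long> kv in totals)
                Console.WriteLine("{0}: {1} bytes", kv.Key, kv.Value);
        }
    }

Point it at a log file (e.g. IisLogBandwidth u_ex250101.log) and it prints per-host byte totals; grouping hosts into tenants and bucketing by day/week/month is a small extension.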
Trying to segment traffic based on host is difficult, in my experience. Instead, if you give each tenant its own IP address(es) for the application, you should be able to find programs that monitor bandwidth per IP.
ADDITION: Is your IIS structure one website to rule them all, where the system forks to the proper database at login? If so, this may create versioning problems, in that all tenants' sites have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has its own website, like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
Here appvirtualdir points to the same physical path for all tenants' sites. When all clients are on the same application version, they are literally running the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL: SSL binds only to IP and port (at least without SNI), unlike non-SSL, which binds to IP, port, and host. In that case, monitoring traffic per IP will still be simpler and more accurate, as it can be done at the router or with a network monitor.