Please help!
We have two domain controllers. For some reason, they stopped replicating in July 2016. Now, when we try to manually initiate replication, we get the following error:
"The directory service cannot replicate with this server because the time since the last replication with this server has exceeded the tombstone lifetime."
Of course this is producing "Trust relationship has been lost with domain controller" issues all over our network as computers and servers can't connect with each other.
One of the suggestions to resolve this has been to demote the domain controllers and bring them back up...which is apparently very complicated.
Is there anything else that can be done to get these two domain controllers to replicate again since it has been so long?
Thanks!
We demoted one of the domain controllers, then re-promoted it, and now everything works fine. Replication is successful.
Related
We have two AD FS servers using WID (adfs1 and adfs2) behind a load balancer, and two AD FS proxy servers (proxy1 and proxy2), also load balanced. An error was logged on proxy1 saying that "the federation proxy server could not renew its trust with the Federation Service" (event ID 394).
The fix seems to be to make sure proxy1 talks to the primary AD FS server, adfs1 (instead of the VIP that load-balances adfs1 and adfs2 as adfs.domain.com), and to re-register it. I did this by pointing the FQDN adfs.domain.com at adfs1 in the hosts file on proxy1. I expect it will keep wanting to renew the trust, so I should leave it that way. This seems to break the full-mesh redundancy of having 2x2, since proxy1 will only ever talk to adfs1. Is there a better way to deal with this issue in this configuration?
I understand that moving to SQL Server may be an option, but that introduces another single point of failure I would like to avoid, since this is not a huge deployment. Any other ideas?
Thank you for your help!
Mike
Related:
https://social.msdn.microsoft.com/Forums/en-US/f25e9170-b0ad-4894-8622-c2a0493df5eb/adfs-30-wap-connection-to-primary-adfs-servers-maintaining-the-wap-trust?forum=ADFS
https://answers.microsoft.com/en-us/msoffice/forum/msoffice_o365admin-mso_dirservices/adfs-30-proxy-loses-trust-with-internal-adfs/55aaf56f-f093-4620-ae87-9ad777c3a71d
You don't need to point a WAP at a specific AD FS server (such as the primary, as you are doing now). You should use the load-balanced address so the WAP can reach either of the two AD FS servers.
The difference is that when establishing a trust with a WID-based (no SQL in use) AD FS farm, the trust setup will either complete near-instantly or within about 6 minutes, depending on whether the load balancer picked the primary or not. This is by design: any setup done via the secondary is redirected to the primary and then has to synchronize back to the secondary, which happens every 5 minutes by default.
Keep your deployment as simple as possible and don't make it more complex than it needs to be. https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/design/federation-server-farm-using-sql-server explains the WID limits, which should influence whether you need SQL.
You should troubleshoot WAP trust issues using the guide at https://adfshelp.microsoft.com/TroubleshootingGuides/Workflow/da33a6cd-166b-4fca-863a-73aec904c3fd . If you are still stuck, contact Microsoft Support.
We have a secured and authenticated WCF service for which we cannot use service references. Instead, we provide the interface for the contracts and open a client channel manually.
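Roughly, the client code looks like this (a simplified sketch; the contract, endpoint address, and credentials below are placeholders, not our real ones):

using System.ServiceModel;

// Illustrative contract; the real interface has more operations.
[ServiceContract]
public interface IOurService
{
    [OperationContract]
    bool Login();
}

public static class ServiceCaller
{
    public static void Run()
    {
        var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
        var address = new EndpointAddress("https://example.com/OurService.svc");

        var factory = new ChannelFactory<IOurService>(binding, address);
        factory.Credentials.UserName.UserName = "user";
        factory.Credentials.UserName.Password = "secret";

        IOurService channel = factory.CreateChannel();
        channel.Login();                        // required first call
        // ... call other service methods here ...
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}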
We have found that as long as we open the channel only once, everything works fine; we can call several methods several times. However, if the channel is closed, or simply replaced with a new instance, the Login() call (which is required as the first step before using the service) times out.
To make matters even more mysterious, this only happens against our production server. If I run the same project locally, I can log in as many times as I want. Consuming the methods from a web browser (even from a code-behind ASPX page) does not have this problem, even against the production server. Only a .NET client opening a client channel against the production server hits this problem.
We are not even sure where to start looking. Any advice would be greatly appreciated.
UPDATE:
As per @Rene's suggestion, we turned on logging on both sides. In the client's log there is a record of the error, which is basically the same timeout we already got via the exception; nothing meaningful. In the server's logs there are records of service methods being invoked successfully even after the second Login(), so from the server's point of view the requests are being served.
Additionally, I discovered that I could not reproduce this issue on my own machine using the same test project; it only reproduces on my developer's machine. I verified that we are on the same versions of the .NET Framework and Visual Studio. It is surely a client-side problem, but what could it be?
In case anyone else is looking for the answer, we finally found it -- the client needs to set System.Net.ServicePointManager.DefaultConnectionLimit to a higher value. The default is 2, but in practice that allows only one proxy to be created and remain usable; setting it to 3 allows two proxies, and so on.
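Concretely, it is a one-line change made at client start-up (the value 10 here is just an example; pick whatever headroom you need):

// At client start-up (e.g. at the top of Main or Application_Start),
// before the first channel or connection is created:
System.Net.ServicePointManager.DefaultConnectionLimit = 10;   // default is 2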
I have set up LDAP Active Directory authentication for a Spring MVC application that I am configuring. I have been able to log in, and the majority of the time authentication succeeds. However, every so often I get a Connection Refused error. The timing seems sporadic, and the problem resolves itself within ten to fifteen minutes each time. I have done some research and found that others have had this problem too, but I have not been able to find a solution or a hint as to what may be causing it. If anyone could point me in the right direction, it would be greatly appreciated.
I was able to get an answer to this question from a coworker. He had me switch to another server, and the problem seems to be resolved. His explanation was that the server I had been hitting was overloaded.
I have a web application which fetches information from a web service. It works fine in our development environment.
Now I'm trying to get it to work in a customer's environment instead, where the web service is from a third party. The problem is that the first time the application tries to fetch information it cannot connect to the web service. When it tries again just seconds later it works fine. If I wait a couple of hours and try again, the problem occurs again.
I'm having a hard time believing this is a programming error, as our customer and the maker of the web service think. I suspect it has to do with IIS configuration or some security setting in the network, but I don't have much to go on and can't reproduce the error in our development environment.
Is it failing with a TimeoutException when you try to connect the first time? If so, this could be the result of the start-up time of the service.
I have a rule: "Always assume it's your fault until you can demonstrate otherwise." After over 20 years, I still stick to it.
There are therefore two cases:
The code is broken
There is a specific issue with the live environment
Since you want to demonstrate that the problem is (2), you need to test calls to the service, from the live environment, using something other than your application. Exactly what will depend on the nature of the web service, but we've found SoapUI to be helpful.
The other thing that's not clear is whether you are making calls to the live service from your development environment - if, in testing, you're not communicating with the same instance of the service then that's an additional variable that will need to be considered (and I appreciate that you're not always given the option).
Lastly, @Krishna is right - there may be a spin-up issue with the remote service (hence my question about whether you're talking to the same service from your dev environment) and - horrible as it is - the solution in the first instance may simply be to find a way to allow for this!
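If it does turn out to be spin-up, one crude way to "allow for this" on the client side is a retry wrapper along these lines (the method names, attempt count, and delay are purely illustrative):

using System;
using System.Threading;

public static class Resilient
{
    // Retry a call a few times, pausing between attempts, so a cold-start
    // delay on the remote service does not surface as a hard failure.
    public static T CallWithRetry<T>(Func<T> call, int attempts = 3, int delayMs = 5000)
    {
        for (int i = 1; ; i++)
        {
            try
            {
                return call();
            }
            catch (TimeoutException) when (i < attempts)
            {
                Thread.Sleep(delayMs);   // give the service time to warm up
            }
        }
    }
}

// Usage (FetchInformation is a placeholder for the real client call):
// var info = Resilient.CallWithRetry(() => client.FetchInformation(id));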
The problem was the third-party web service. The test stub we were given to develop against was written in C# and returned only dummy answers, but the web service in the customer environment actually connects to a COM object, and the first call to that COM object after a longer idle period took almost a minute.
Good thing the third-party developers left the source code on the customer servers...
I am designing an error logging feature so our servers (each doing different things) can have a central data store for logging errors.
Would it be a good idea to have the various applications writing to the error log file using a WCF service, or is that a bad idea?
Alternatively, they could write to the database directly with plain ADO.NET, which I think is the simpler route.
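For illustration, a minimal ADO.NET version could look something like this (the ErrorLog table and its columns are assumptions, not an existing schema):

using System.Data.SqlClient;

public static class ErrorLogger
{
    // Write one error row straight to the central database.
    public static void LogError(string connectionString, string server, string message)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ErrorLog (Server, Message, LoggedAt) VALUES (@server, @message, GETUTCDATE())",
            conn))
        {
            cmd.Parameters.AddWithValue("@server", server);
            cmd.Parameters.AddWithValue("@message", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}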
How about having a look at syslog? It was made for exactly that purpose.
I'd say just log to your local data store. The advantages are:

Speed - it's pretty rapid to just dump your chosen error report to an existing data connection.
Traceability - What happens if you have an error in your error service? You lose all ability to chase down errors on all servers.
Simplicity - If you change the endpoint for your error service, you have to update every other application that uses it.
Reporting - Do you really want to trawl through error reports from tens / hundreds of applications in one place when you could easily find them in the data store local to the app?

Of course, any of these points could be viewed from the other side; these are just my opinions.
We're looking at a similar approach, but for audit logging as well as error handling.
We're looking at using WCF over netTcp, and also at using the event log, but that seems to require high trust settings and might have performance issues.
Not convinced by ZombieSheep's objections:
It's pretty rapid to dump your chosen error report over an existing WCF connection. Seriously. Plus, you can do it async/queued. Not a key factor for me.
You log to both the central service and the local store. When the error service comes back online, you poll your machines for events since the last timestamp (rough sketch below). Problem solved.
Use a DNS alias and don't change the path - that's the way you should do internal addressing anyway, IMO.
What if you have multiple apps on a single machine? What if you want to see the timing of errors across multiple apps?
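To make the second point above concrete, here is a rough sketch of the dual-write idea (ErrorEntry, ICentralErrorService, and LocalErrorStore are illustrative names, not an existing API):

using System;

public class ErrorEntry
{
    public string Message;
    public DateTime LoggedAt;
}

public interface ICentralErrorService
{
    void Report(ErrorEntry entry);
}

public class LocalErrorStore
{
    public void Write(ErrorEntry entry) { /* e.g. plain ADO.NET insert */ }
}

public class DualErrorLogger
{
    private readonly LocalErrorStore _local;
    private readonly ICentralErrorService _central;

    public DualErrorLogger(LocalErrorStore local, ICentralErrorService central)
    {
        _local = local;
        _central = central;
    }

    public void Log(string message)
    {
        var entry = new ErrorEntry { Message = message, LoggedAt = DateTime.UtcNow };
        _local.Write(entry);            // the local copy is never lost
        try
        {
            _central.Report(entry);     // best effort; the central service may be down
        }
        catch (Exception)
        {
            // Central service unavailable: when it comes back online it can
            // poll each machine for entries newer than its last-seen timestamp.
        }
    }
}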