How do we implement OpenLDAP failover on RHEL? We have a couple of LDAP servers and need to know how to handle failover of one server and redirect clients to the other, and vice versa.
If you're on OpenLDAP 2.4 or later, N-Way Multi-Master/MirrorMode replication with a wide IP in front is an option. I found this to be a helpful resource after exhausting the OpenLDAP documentation. Of course, this assumes that you want the data replicated and both servers available without caring which one you're actually connecting to.
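For reference, a minimal sketch of what the MirrorMode piece can look like in slapd.conf on each provider; the server IDs, hostnames, suffix, and credentials below are placeholders, not values from the question:

    # slapd.conf on server 1 (use serverID 2 and point provider= at server 1 on the peer)
    serverID    1
    overlay     syncprov
    syncrepl    rid=001
                provider=ldap://ldap2.example.com
                type=refreshAndPersist
                retry="5 5 300 +"
                searchbase="dc=example,dc=com"
                bindmethod=simple
                binddn="cn=replicator,dc=example,dc=com"
                credentials=secret
    mirrormode  on

On the RHEL client side, basic failover without any load balancer can be as simple as listing both servers in /etc/openldap/ldap.conf (the client library tries the URIs in order), again with placeholder hostnames:

    URI     ldap://ldap1.example.com ldap://ldap2.example.com
    BASE    dc=example,dc=com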
What happens when a load balancer (HAProxy/Nginx) goes down? This single point of failure can make the whole system unavailable. What is the best strategy to recover in this case, and how can we prevent the service from becoming unavailable?
Do we also need replication for the load balancer to prevent data loss?
The common solution is to run two or more load-balancer servers with one or more VIPs (virtual IP addresses), where keepalived handles the VIP failover and HAProxy handles the load balancing.
This is one of many examples of how to create such a setup: Setting Up A High-Availability Load Balancer (With Failover And Session Support) With HAProxy/Keepalived On Debian Lenny.
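As a rough sketch of the two pieces (interface names, addresses, and backends below are placeholders): keepalived floats the VIP between the two load-balancer boxes, and HAProxy on each box spreads the load across the backends.

    # /etc/keepalived/keepalived.conf on the primary (use state BACKUP and a lower priority on the peer)
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            192.0.2.10
        }
    }

    # /etc/haproxy/haproxy.cfg (identical on both boxes; binding the VIP on the standby
    # typically needs net.ipv4.ip_nonlocal_bind=1)
    listen app
        bind 192.0.2.10:80
        mode http
        balance roundrobin
        option httpchk
        server app1 10.0.0.11:80 check
        server app2 10.0.0.12:80 check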
About the "replication" should you answer to you these questions.
what do you want to replicate?
how many replications do you want?
In HAProxy you can use the peers feature to replicate several kinds of state (such as stick-table entries) between load balancers.
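For example, a hedged sketch of a peers section that replicates a stick table between two HAProxy nodes (node names and addresses are placeholders; each peer name must match the local hostname or the name passed with haproxy -L):

    peers lb_peers
        peer lb1 10.0.0.1:10000
        peer lb2 10.0.0.2:10000

    backend app
        mode http
        balance roundrobin
        stick-table type ip size 200k expire 30m peers lb_peers
        stick on src
        server app1 10.0.0.11:80 check
        server app2 10.0.0.12:80 check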
In a SQL Server Managed Instance I have two databases (for security reasons the two databases have different logins). I need to allow one database to look into the other one. On a local SQL Server I was able to create a linked server for this, but this doesn't seem to work with the Managed Instance.
Can someone give me some hints on how to achieve this?
Managed Instance supports linked servers (unless they use MSDTC for distributed writes). Make sure that you add logins for the remote server:
    EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'PEER', @useself = N'False', @locallogin = NULL,
        @rmtuser = N'$(linkedServerUsername)', @rmtpassword = '$(linkedServerPassword)';
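For completeness, a hedged sketch of creating the linked server itself before mapping the login; the name, provider, and data source below are placeholders rather than values from the question:

    EXEC master.dbo.sp_addlinkedserver
        @server = N'PEER',                -- name referenced by sp_addlinkedsrvlogin above
        @srvproduct = N'',
        @provider = N'SQLNCLI',
        @datasrc = N'peer-instance.example.database.windows.net';

After that, a four-part name such as PEER.SomeDb.dbo.SomeTable (again placeholder names) should work in queries.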
If it still doesn't work, post the exact error message. The cause might be a Network Security Group blocking the port, VNets that are not peered, etc.
In Azure, I have a web role exposed to the public and 2 worker roles accessible only within the private network. Now I want to load balance the worker roles internally, so I have set up an internal endpoint for the worker roles. But what address should I use to communicate with the workers? It can't be the internal IP address, because that is specific to a particular instance and wouldn't go through the load balancer, right?
Thx a lot!
There is no internal load balancer in Windows Azure (but see the ILB update further down). The only load balancer is the one that has the public IP addresses.
If you want to load balance only internal addresses (workers) you have to maintain it yourself. That means installing some kind of load balancer on an Azure VM that is part of the same VNet; the load balancer may be of your choice (Windows or Linux). You also have to implement a watchdog service for when the topology changes, i.e. workers being recycled, hardware failures, scaling events. I would not recommend this approach unless it is absolutely necessary.
The last option is to keep a (cached) pool of the IP endpoints of all the workers and randomly choose one when you need it.
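As a hedged sketch of that last option, assuming the worker role is named "Worker" and its internal endpoint "Internal" (both placeholders), the web role can cache the endpoints via the ServiceRuntime API, refresh the cache when the topology changes, and pick one at random per call:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class WorkerEndpointPool
    {
        private static readonly Random Rng = new Random();
        private static List<IPEndPoint> _cache = Load();

        static WorkerEndpointPool()
        {
            // Refresh the cached pool when instances are added, removed, or recycled.
            RoleEnvironment.Changed += (sender, args) => _cache = Load();
        }

        private static List<IPEndPoint> Load()
        {
            // "Worker" / "Internal" are the role and endpoint names from the service definition (placeholders).
            return RoleEnvironment.Roles["Worker"].Instances
                .Select(i => i.InstanceEndpoints["Internal"].IPEndpoint)
                .ToList();
        }

        public static IPEndPoint PickRandom()
        {
            var pool = _cache;
            return pool[Rng.Next(pool.Count)];
        }
    }

The web role would then call PickRandom() whenever it needs to talk to a worker instance.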
Azure-based Internal Load Balancers (ILB) have been available since May 20, 2014.
http://azure.microsoft.com/blog/2014/05/20/internal-load-balancing/
ILB can be used for SQL AlwaysOn deployments and for publishing an internal endpoint accessible from your VNet only (i.e. not publicly routable).
Note: I was searching for ILB help and spotted this thread. I thought it was worth updating; if not, let me know and I will delete this.
You can configure IIS in your WebRole to act as a reverse proxy using Application Request Routing (ARR), then configure its rules to load-balance requests using your chosen algorithm.
The easiest way is to modify your WebRole.cs to obtain the list of (internal) endpoints of your WorkerRole and then add them programmatically (see the example here; a rough sketch also follows below).
Alternatively, you can use a startup script that invokes appcmd to achieve the same result.
Lastly, you'll have to change your client settings to point requests to the (proxied) IIS endpoint instead of the regular WorkerRole endpoints.
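A very rough sketch of the endpoint-enumeration idea, under the assumption that ARR and URL Rewrite are installed on the web role, the worker role is named "Worker", and its internal endpoint is "HttpIn" (all placeholders); it registers the worker instances as an ARR web farm through Microsoft.Web.Administration and would typically run (elevated) from OnStart in WebRole.cs:

    using Microsoft.Web.Administration;            // Microsoft.Web.Administration.dll (IIS)
    using Microsoft.WindowsAzure.ServiceRuntime;   // Azure SDK

    public static void RegisterWorkerFarm()
    {
        using (var serverManager = new ServerManager())
        {
            var config = serverManager.GetApplicationHostConfiguration();
            var webFarms = config.GetSection("webFarms").GetCollection();

            var farm = webFarms.CreateElement("webFarm");
            farm["name"] = "WorkerFarm";
            var servers = farm.GetCollection();

            // One <server> entry per worker instance's internal endpoint.
            foreach (var instance in RoleEnvironment.Roles["Worker"].Instances)
            {
                var endpoint = instance.InstanceEndpoints["HttpIn"].IPEndpoint;
                var server = servers.CreateElement("server");
                server["address"] = endpoint.Address.ToString();
                server.GetChildElement("applicationRequestRouting")["httpPort"] = endpoint.Port;
                servers.Add(server);
            }

            webFarms.Add(farm);
            serverManager.CommitChanges();
        }
        // A URL Rewrite rule that routes incoming requests to "WorkerFarm" is still needed (not shown).
    }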
Please note that Azure now supports Internal Load Balancing (ILB).
http://azure.microsoft.com/blog/2014/05/20/internal-load-balancing
We are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an active/passive cluster). All servers are running Windows Server 2008 R2.
As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have requirements for high availability and load balancing.
Let's say I create two different BizTalk hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one on each of my two physical servers).
After reviewing the BizTalk documentation, this is what I know so far:
For FILE (receive), high availability and load balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group.
MSMQ (receive) requires BizTalk host clustering to ensure high availability (host clustering, however, requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here.
SOAP (receive) requires NLB to achieve load balancing and high availability (if one server goes down, NLB will direct traffic to the other).
This is where I'm completely puzzled and I desperately need your help:
Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers?
If yes, then please tell me how.
If no, then please explain to me how anyone is achieving high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive!
Your help is greatly appreciated,
M
Microsoft doesn't support NLB and MSCS running on the same servers:
"These two components work well together in a two or three tier application model running on separate computers. Be aware that running these two components on the same computer is unsupported and is not recommended by Microsoft due to potential hardware sharing conflicts between Cluster service and Network Load Balancing."
http://support.microsoft.com/kb/235305
If you want to provide HA for SOAP requests received in BizTalk, you should configure your BizTalk servers in an active/active configuration (no MSCS) in the same BizTalk group. Once you do this, you install and configure NLB between the two. Your clients will be able to query the web services through the NLB cluster, and NLB will route each request to a specific server within the cluster (your .asmx files should be installed and configured on both servers).
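As a hedged illustration only (host names, interface name, and cluster IP are placeholders), the NLB side can be scripted on Windows Server 2008 R2 with the NetworkLoadBalancingClusters module; the same can of course be done through the NLB Manager UI:

    Import-Module NetworkLoadBalancingClusters

    # On the first BizTalk server: create the cluster that will front the SOAP receive (IIS) sites.
    New-NlbCluster -InterfaceName "LAN" -ClusterName "BizTalkSoapNLB" `
        -ClusterPrimaryIP 192.168.1.200 -OperationMode Multicast

    # Join the second BizTalk server to the same cluster.
    Get-NlbCluster -HostName "BTSAPP01" | Add-NlbClusterNode -NewNodeName "BTSAPP02" -NewNodeInterface "LAN"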
Regarding MSMQ, the information you have obtained so far is right: the only way to ensure HA for this adapter is to cluster the BizTalk servers. If you want to implement this too, then you must have separate infrastructure for the SOAP receive hosts and the MSMQ ones.
The main reason for this is that a BizTalk Isolated Host is not cluster-aware, so the BizTalk in-process host could be completely hung and the Isolated Host would never know about it and would continue to receive requests.
I'm currently designing a very similar architecture, so if you would like to share more comments or questions you can reach me at ignacioquijas#hotmail.com.
Ignacio Quijas
Microsoft Biztalk Server Specialist
I have 50 machines on a LAN, and each of them has internet access. Can a program be developed using VC++ that will tell me which websites are being opened by users on each machine?
You can accomplish this by writing an application that captures outbound packets on port 80 (and the associated DNS information); a rough sketch of this follows below. The problem is that this application must run on every client computer you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy, which can record the same information.
There are many enterprise tools suited to exactly this task in the latter case.
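The question mentions VC++; the sketch below uses C# for brevity, but the same SIO_RCVALL raw-socket technique applies from Winsock in C++. It needs administrator rights, and the NIC address shown is a placeholder:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class OutboundHttpLogger
    {
        static void Main()
        {
            // Raw IP socket + SIO_RCVALL makes Windows hand us every IP packet seen on this NIC.
            var socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
            socket.Bind(new IPEndPoint(IPAddress.Parse("192.168.1.10"), 0)); // this machine's NIC address (placeholder)
            socket.IOControl(IOControlCode.ReceiveAll, BitConverter.GetBytes(1), null);

            var buffer = new byte[65535];
            while (true)
            {
                int read = socket.Receive(buffer);
                int ipHeaderLength = (buffer[0] & 0x0F) * 4;      // IPv4 header length in bytes
                if (buffer[9] != 6 || read < ipHeaderLength + 4)  // protocol 6 = TCP
                    continue;

                int destinationPort = (buffer[ipHeaderLength + 2] << 8) | buffer[ipHeaderLength + 3];
                if (destinationPort == 80)
                {
                    var destinationIp = new IPAddress(new[] { buffer[16], buffer[17], buffer[18], buffer[19] });
                    Console.WriteLine("HTTP traffic to {0}", destinationIp);
                }
            }
        }
    }

Turning the destination IPs into site names would come from the captured DNS traffic (or reverse lookups), which is the extra per-machine work the central-proxy approach avoids.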
Route your internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something else. If proxying is not possible, use Fiddler to generate data at a known location and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.