openldap synchronous vs. asynchronous replication

I've installed two OpenLDAP servers and configured them to do replication via 'syncrepl'. This replication is, however, asynchronous: changes are not immediately written to the other LDAP server. What I would like is synchronous replication, where new or updated entries are committed to the primary LDAP server only after they have been written to the second LDAP server.
Does anybody know if this is possible with openLDAP?

Related

Java Web application getting Cannot create PoolableConnectionFactory DB2 SQL Error: SQLCODE=-1776 for HADR configured on DB2

I have developed a web application in which I've configured two <Resource> entries, with the proper parameters, in the server.xml of the Apache Tomcat server, using a JNDI connection pool. Of the two resource tags in server.xml, the first holds the details of the primary server and the other holds the details of the standby server. My idea is that if I don't get a connection from the primary within a certain time, I'll switch the datasource to the standby and run the query from the servlet. But when I ran the code, it gave me the error: Cannot create PoolableConnectionFactory (DB2 SQL Error: SQLCODE=-1776, SQLSTATE= , SQLERRMC=1, DRIVER=3.57.82)
I googled a lot but couldn't find any concrete answer, though one thing was common to everything I found: the HADR (High Availability Disaster Recovery) configuration of the DB2 server.
Please help me out.
Generally speaking, you cannot connect to the standby database unless it assumes the primary role after the take-over.
The correct way of setting up a DB2 HADR cluster is to configure, in your cluster management software, a virtual IP address that gets assigned to the new primary database after the take-over, so that the change remains completely transparent to client applications.
You'll need to talk to your DBAs to learn how to configure the application.
In an HADR configuration, each time the databases flip roles (the primary becomes the standby and the standby becomes the primary), the server sends a ClientReroute exception to every client connected to the DB2 server. So I caught it programmatically and retried the transaction, and it succeeded.
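For illustration, the catch-and-retry pattern looks roughly like this. The original application is Java/JDBC, where the reroute typically surfaces as a SQLException (error code -4498 from the JCC driver, "a connection failed but has been re-established"); the sketch below shows the same shape in C# against the generic ADO.NET types, with the reroute check left as a placeholder.

    using System;
    using System.Data.Common;

    // Minimal sketch of "catch the reroute, then retry the transaction".
    static class RerouteRetry
    {
        public static void Execute(DbConnection conn, Action<DbConnection> work)
        {
            const int maxAttempts = 2;
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    work(conn); // run the transaction
                    return;     // success
                }
                catch (DbException ex) when (IsRerouteError(ex) && attempt < maxAttempts)
                {
                    // The driver re-established the connection to the new
                    // primary and rolled back the in-flight work, so loop
                    // around and run the transaction once more.
                }
            }
        }

        // Placeholder predicate, not a real driver API: map this to whatever
        // your DB2 client actually reports after a client reroute.
        static bool IsRerouteError(DbException ex) =>
            ex.Message.Contains("re-established");
    }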

Azure Cloud Services vs VMs for Existing Asp.Net website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing asp.net website that links to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted in an Azure VM, since Cloud Services don't support said types.
We initially wanted to use a vm for the database and cloud service for the front-end, but then some issues arose:
We use StateServer for storing State, but Azure doesn't support that. We would need to configure either Table storage, SQL Databases, or a Worker role dedicated to State management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other 2 options are preferable but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from our vm instance sql server to Azure's SQL Databases. This is an inconvenience as we want to keep all our tables in the same database, and splitting up the 2 may require making some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid re-factoring our code just to accommodate hosting part of the app in Azure Cloud services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for sql server) and a Cloud Service instance (for the front-end)?
It seems to me every added "background process" to a Cloud Service will require a new worker role. For example, if we wanted to enable smtp for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to be cheap, you can have your web and worker roles share the same code on a single machine by adding a RoleEntryPoint to the web role. Here is a post that actually shows how to do what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
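A minimal sketch of that idea: give the web role a RoleEntryPoint whose Run loop does the background work in-process, so no extra worker role instance is needed. The email-processing call is hypothetical and elided; only the RoleEntryPoint scaffolding comes from the classic Azure SDK.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Runs inside the web role instance itself -- no separate worker role,
    // and therefore no additional instance cost.
    public class WebRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                // Hypothetical background chore: drain a queue of pending
                // emails and send each one via your SMTP provider.
                // ProcessPendingEmails();
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }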
Session management is painfully slow against SQL Azure DB; I would use the Azure Cache if you can, as it is fast.
SQL Server on a VM is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So first you need to put them on a virtual network (that is an extra cost), and then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect, enterprise governance team, or security governance team would allow this scenario in a production environment.

Dynamically switching receive locations between database servers

Using BizTalk, I need to read data from one of two databases that are hosted on Unix, using ODBC.
The data is replicated between the databases and if one of the databases does not respond I need to switch to the other. There is no load balancer or anything so I need to be able to do the switch on the BizTalk server.
I was thinking of creating two receive locations, one for each database server, with only one of them enabled, and then having a Windows service that periodically tries to connect to one of the database servers; if there is an exception, it calls a PowerShell script that disables the receive location for the server that does not respond and enables the other receive location.
Is there a better solution for this?
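As an aside, the enable/disable toggle described in the question doesn't have to shell out to PowerShell; the monitoring service could flip the receive locations directly through BizTalk's ExplorerOM API. A rough sketch (the port and location names are made up; verify the API details against your BizTalk version):

    using Microsoft.BizTalk.ExplorerOM;

    // Point ExplorerOM at the BizTalk management database.
    var catalog = new BtsCatalogExplorer
    {
        ConnectionString = "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI"
    };

    // Enable the receive location for the healthy server; disable the other.
    ReceivePort port = catalog.ReceivePorts["OdbcReceivePort"];
    foreach (ReceiveLocation location in port.ReceiveLocations)
    {
        location.Enable = (location.Name == "ReceiveFromServer2");
    }
    catalog.SaveChanges(); // commit the change to the management database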
I would solve this as follows:
In BizTalk, create a single HTTP receive location.
Create a Windows service.
In the Windows service, poll the first database; if it does not respond, poll the second database instead.
Have the Windows service post the data to the HTTP receive location (see the sketch after this list).
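A rough sketch of the Windows-service side, assuming ODBC connectivity and the standard BizTalk HTTP receive adapter endpoint (the connection strings, query, and URL are placeholders):

    using System;
    using System.Data.Odbc;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class DbPoller
    {
        // Primary first, then the replica it fails over to.
        static readonly string[] ConnectionStrings =
        {
            "DSN=PrimaryUnixDb;UID=biztalk;PWD=secret",
            "DSN=SecondaryUnixDb;UID=biztalk;PWD=secret"
        };

        static readonly HttpClient Http = new HttpClient();

        public static async Task PollOnceAsync()
        {
            foreach (string cs in ConnectionStrings)
            {
                try
                {
                    using var conn = new OdbcConnection(cs);
                    conn.Open();
                    using var cmd = new OdbcCommand("SELECT payload FROM outbox", conn);
                    string payload = (string)cmd.ExecuteScalar();

                    // Hand the data to BizTalk via the single HTTP receive location.
                    await Http.PostAsync(
                        "http://biztalkserver/BtsHttpReceive/BtsHttpReceive.dll",
                        new StringContent(payload, Encoding.UTF8, "text/xml"));
                    return; // stop after the first database that responds
                }
                catch (OdbcException)
                {
                    // This database did not respond; fall through to the next one.
                }
            }
        }
    }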
You need to consider what happens if you read the same data twice, once from the main database and once from the backup.

asp.net form submitting data to a mysql on Apache server

Basically, to start with: is it even possible to have a form that runs in asp.net and have it write data to a MySQL database on a remote Apache web server? If so, any pointers? I'm not even sure where to start researching it!
Yes and no...
form that runs in asp.net
Forms do not run "in" ASP.NET.
You can generate an HTML form using ASP.NET. You can submit form data to a webserver that uses ASP.NET to process the data.
write data to a mysql database
ASP.NET has database bindings that can talk to MySQL.
mysql database on a remote
MySQL can listen on network interfaces so clients can connect over the network instead of using local sockets.
mysql database on a remote Apache web server
MySQL can't run on an Apache server. Apache is not an operating system.
MySQL can run on a server that is also running Apache, but with the above set up, Apache would be irrelevant.
Apache could be used to run a web service (e.g. written in Perl, Python or PHP) that connects to and queries a local MySQL server. ASP.NET could make HTTP requests to that webservice.
ASP.NET could be used to generate a form with an action that points to a URL that is handled by an Apache server.
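To make those last two options concrete, here is a minimal sketch of ASP.NET-side code relaying submitted form data to a web service on the Apache box, which would then write to its local MySQL database (the endpoint URL and field names are hypothetical):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class FormRelay
    {
        static readonly HttpClient Http = new HttpClient();

        public static async Task<string> RelayAsync(string name, string email)
        {
            var fields = new Dictionary<string, string>
            {
                ["name"] = name,
                ["email"] = email
            };

            // POST the form fields to the PHP (or Perl/Python) service that
            // owns the MySQL database.
            HttpResponseMessage response = await Http.PostAsync(
                "http://apache.example.com/save.php",
                new FormUrlEncodedContent(fields));
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }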
Yes, you can write to a database anywhere in the world so long as you can:
Connect
Authenticate
Communicate (in a similar language)
You just need a connection to the database with valid credentials. You can talk to MySQL from .NET using the MySQL Connector/NET client library.
In "theory" it's no different from a web server talking to a separate database server in the same building so long as the above three points are facilitated.

What's Enterprise SSO for in BizTalk Server?

Microsoft's Enterprise SSO server is bundled with BizTalk Server - I'm fairly familiar with how to configure it, make sure it's working, etc. My question is: what exactly does it do, and how does it do it?
My best understanding is that it is used to securely store configuration for things like ports and adapters, because configuration items often include things like credentials, passwords, connection strings, etc. In terms of "how it works", my best guess is that the configuration values are stored encrypted in an SSO database, and the "master secret" is simply the encryption key that only privileged credentials (like the one running the BizTalk hosts) have access to, so they can use it to access the encrypted configuration.
Can someone shine some light on this and point out where this is right/wrong?
You're pretty close overall. EntSSO is used by BizTalk internally to store any sensitive data, in particular the adapter-specific part of any send port or receive location configuration.
But that's not all EntSSO does; it can also provide credential-mapping services between Windows and non-Windows systems, by storing sets of encrypted credentials for other applications and mapping Windows identities to them. Basically, this can be used to provide single sign-on services when building BizTalk solutions, so that BizTalk can "act as" a specific user when doing work on their behalf.
For example, you could have BizTalk receive a message over an HTTP/SOAP receive location set up with Windows Integrated authentication, and then let BizTalk flow that authentication information over to an FTP send port where the Windows user credential is mapped to a specific username/password combination associated to it so that BizTalk can authenticate as said user to the FTP server. With this, different Windows Users sending messages to BizTalk would result in separate FTP connections created with different credentials on the other end (this is different from the default BizTalk behavior of using a single credential for all operations on a send port).
Obviously EntSSO offers a bunch of other options beyond this, but that's kinda the big deal.
BTW, the BizTalk docs actually contain a fairly extensive section on EntSSO that is pretty useful.
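To make the configuration-store half concrete, here is a rough sketch of reading settings back out of SSO from .NET. The shape follows the widely circulated SSO-as-configuration-store sample; treat the exact type names and signatures as assumptions and verify them against the BizTalk docs.

    using System.Collections.Specialized;
    using Microsoft.BizTalk.Interop.SSOClient;

    // The SSO client interop defines its own IPropertyBag; the usual sample
    // pattern backs it with a simple dictionary.
    public class ConfigurationPropertyBag : IPropertyBag
    {
        private readonly HybridDictionary properties = new HybridDictionary();

        public void Read(string propName, out object ptrVar, int errorLog)
        {
            ptrVar = properties[propName];
        }

        public void Write(string propName, ref object ptrVar)
        {
            properties[propName] = ptrVar;
        }
    }

    public static class SsoConfig
    {
        // Application name and identifier are hypothetical.
        public static object Read(string propertyName)
        {
            var store = new SSOConfigStore();
            var bag = new ConfigurationPropertyBag();
            ((ISSOConfigStore)store).GetConfigInfo(
                "MyApplication", "ConfigProperties",
                SSOFlag.SSO_FLAG_RUNTIME, bag);
            bag.Read(propertyName, out object value, 0);
            return value;
        }
    }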
