Can anyone help me with this ...
I have a three-node SQL Server cluster, let's say N1, N2 and N3. The name of the cluster is SQLCLUS.
The application connects to the database using the name SQLCLUS in its connection strings.
The application uses SQL Server session management. So I remote desktopped to N1 (which is active, while N2 and N3 are passive) and from the location
C:\Windows\Microsoft.NET\Framework64\v2.0.50727
I executed the following command
aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p
The command executed successfully. I could then log in to SQLCLUS and see the ASPState database created with 2 tables.
I then tested the application which uses SQL Server session state, and it works fine.
Now my question is ...
If there is a failover to node N2 or N3, will my application still work? I did not execute the above command (aspnet_regsql.exe) from N2.
Should I execute the command, aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p, on N2 and N3 too?
What changes happen in SQL Server after executing the above command? I mean, is there any kind of service or settings change that can be seen?
Greatly appreciate any inputs regarding this...
Thanks in advance...
SQL Server failover clustering can be conceptually explained as a smoke-and-mirrors DNS hack. Thinking of clustering in familiar terms makes you realize how simple a technology it really is.
Simplified description of SQL Server failover clustering
Imagine you have two computers: SrvA and SrvB
You plug an external HD (F:) into SrvA, install SQL Server, and configure it to store its database files on F:\ (the executables are under C:\Program Files).
Unplug the HD, plug it into SrvB, install SQL Server, and configure it to store its database files on F:\ in the exact same location.
Now, you create a DNS alias "MyDbServer" that points to SrvA, plug the external HD back into SrvA, and start SQL Server.
All is good until one day when the power supply fails on SrvA and the machine goes down.
To recover from this disaster you do the following:
Plug the external drive into SrvB
Start SQL Server on SrvB
Tweak the DNS entry for "MyDbServer" to point to SrvB.
You're now up and going on SrvB, and your client applications are blissfully unaware that SrvA failed because they only ever connected using the name "MyDbServer".
Failover Clustering in Reality
SrvA and SrvB are the cluster nodes.
The External HD is Shared SAN Storage.
The three step recovery process is what happens during a cluster failover and is managed automatically by the Windows Failover Clustering service.
What kinds of tasks need to be run on each SQL node?
99.99% of the tasks that you perform in SQL Server are stored in the database files on shared storage and therefore move between nodes during a failover. This includes everything from creating logins, creating databases, INSERTs/UPDATEs/DELETEs on tables, SQL Agent jobs, and just about everything else you can think of. It also includes all of the tasks that the aspnet_regsql command performs (it does nothing special from a database perspective).
The remaining 0.01% of things that have to be done on each individual node (because they aren't stored on shared storage) are things like applying service packs (remember that the executables are on C:), certain registry settings (some SQL Server registry settings are "checkpointed" and fail over, some aren't), registering third-party COM DLLs (no one really does this anymore), and changing the service account that SQL Server runs under.
Try it for yourself
If you want to verify that aspnet_regsql doesn't need to be run on each node, try failing over and verify that your app still works. If you do run aspnet_regsql on each node while referencing the clustered name (SQLCLUS), you will effectively be overwriting the database: if it doesn't error out, it will just wipe out your existing data.
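One quick way to see what the command actually changed, and to verify after a failover that everything came along with the database, is to connect to SQLCLUS from any node and run a couple of plain T-SQL checks (the object names below are the defaults that -ssadd -sstype p creates):

-- Run against SQLCLUS after failing over to N2 or N3
SELECT name FROM sys.databases WHERE name = N'ASPState';
-- Should list the two session-state tables you saw after running aspnet_regsql
SELECT name FROM ASPState.sys.tables;

If those come back after the failover, the session-state objects live on the shared storage and no per-node registration is needed.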
I have a challenge deploying my code to a remote server, because I don't know how to keep a specific order of actions across the local and remote machines.
Set up server (remote)
Prepare data (local)
Copy files (local to remote)
Process files (remote)
Copy files (remote to local)
Start services (remote)
I start my deployment with
$ pyinfra inventory.py deploy.py
This is my inventory.py
my_hosts = ["cattleserver"]
my_local = ['#local']
In deploy.py I try to map the order above with corresponding code.
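A minimal sketch of what I mean, using pyinfra's files.put, files.get and server.shell operations (the script names, file paths and service name are placeholders):

# deploy.py (sketch; script names, paths and the service name are placeholders)
from pyinfra import host
from pyinfra.operations import files, server

if host.name == "cattleserver":
    # Steps meant for the remote machine
    server.shell(name="Set up server", commands=["/opt/app/setup_server.sh"])
    files.put(name="Copy files to remote", src="data/input.csv", dest="/opt/app/input.csv")
    server.shell(name="Process files", commands=["/opt/app/process.sh"])
    files.get(name="Copy results back", src="/opt/app/output.csv", dest="data/output.csv")
    server.shell(name="Start services", commands=["systemctl start myapp"])
else:
    # Steps meant for the local machine ('#local' in the inventory)
    server.shell(name="Prepare data", commands=["./prepare_data.sh"])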
But this does not seem to be possible, because pyinfra has no concept of order (correct me if I am wrong). It just guarantees an end state.
Is there a way to map my order above?
Regards,
Alexander
My colleagues and I are currently upgrading our BizTalk environment from BizTalk 2013 R2 to BizTalk 2020, and as part of this we intend to set up two BizTalk servers so that we can have host instances running across both of them. We do not, however, need more than one MessageBox database based on the load we see, and after looking online, there doesn't seem to be much information on this.
Is it possible to set up BizTalk with two servers running off a single MessageBox, and is it complicated to configure?
Having a multi-server group connected to the same MessageBox is a basic feature of BizTalk. On your second computer, when you configure BizTalk using the BizTalk Configuration Wizard, you choose the option to join an existing group and select your existing databases to join.
Microsoft Docs Install BizTalk Server in a Multi-Computer Environment
I am new to Oracle Coherence.
Basically we have some data, and we want a Java/BPEL web service to get that data from a Coherence cache instead of the database. [We are planning to load all of that data into the cache server.]
So we have the questions below before we start on this solution.
The web service we are planning to build is going to be plain Java; that would be fine.
And all operations are read-only.
Question
1. Does Coherence need to be a stand-alone server? (Download it from Oracle, install it separately, and run the default cache server?)
2. If so, we are planning to preload the data from the database into the cache server using code. I hope that is possible? Any pointers would be helpful.
3. How does the web service connect to the Coherence server if the web service is running on a different machine than the Coherence server?
(OR)
Is it mandatory that the web service and Coherence run on the same machine?
If the web service can run on a different machine, how does the web service code connect to the Coherence server (any code sample or URL would be helpful)?
Also, what is the Coherence that comes with WebLogic? I assume it is not a fit for our application's design? Then what type of solution do we go for with WebLogic plus Coherence?
FYI: Our goal is simple. We want to store the data in the cache server and have our new web service retrieve the data from the cache server instead of the database (because we are planning to avoid the database trip).
Well, your questions are very open and probably have more than one correct answer. I'll try to answer all of them.
First, please take into consideration that Coherence is not a free tool and you have to pay for a license.
Now, to the answers:
Basically, Coherence has two parts: proxy and server. The first is responsible for routing your requests and the second for hosting the data. You can run both together in the same service, but this has pros and cons. One con is that your services will be heavily loaded and the memory will be shared between the two kinds of processes. A pro is that it is very simple to run.
You can preload all the data from the DB. For that you have to write your own code against Coherence: define your own CacheStore (look for that keyword in the Coherence docs) and override the loadAll method.
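For example, here is a minimal read-only CacheStore sketch backed by JDBC (the class name, JDBC URL, table and column names are all placeholders):

// Minimal read-only CacheStore sketch; class, JDBC URL, table and column names are placeholders.
import com.tangosol.net.cache.CacheStore;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class DbCacheStore implements CacheStore {
    private static final String JDBC_URL = "jdbc:oracle:thin:@dbhost:1521/ORCL"; // placeholder

    public Object load(Object key) {
        try (Connection con = DriverManager.getConnection(JDBC_URL);
             PreparedStatement ps = con.prepareStatement("SELECT value FROM my_table WHERE id = ?")) {
            ps.setObject(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public Map loadAll(Collection keys) {
        Map results = new HashMap();
        for (Object key : keys) {
            Object value = load(key);
            if (value != null) {
                results.put(key, value);
            }
        }
        return results;
    }

    // The cache is read-only, so the write-through operations are not supported.
    public void store(Object key, Object value) { throw new UnsupportedOperationException(); }
    public void storeAll(Map entries) { throw new UnsupportedOperationException(); }
    public void erase(Object key) { throw new UnsupportedOperationException(); }
    public void eraseAll(Collection keys) { throw new UnsupportedOperationException(); }
}

You then reference the class from the cachestore-scheme element of your cache configuration so the cluster loads entries through it.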
As far as I remember, Coherence comes together with WebLogic. That is to say, the license for one is the license for the other and they come in the same product. I'm not familiar with WebLogic, but I suppose it is a service within the package. In any case, for connecting to Coherence you can refer to Configuring and Managing Coherence Clusters.
The Coherence services can run on different machines, on different networks, and even in different places in the world if you want. Each of the proxy, server, consumer and DB could be on a different network. Everything can be configured. You tell your WebLogic server where the Coherence proxy will be, you set the addresses of the proxies and servers in the Coherence configuration, and you configure your Coherence server to find its database. It is a bit complicated to explain everything here.
I think I answered this above.
Just take into consideration that Coherence is a very powerful tool but very complicated to operate and to troubleshoot. Consider the pros/cons of accessing your DB directly and think about whether you really need it.
If you have specific questions, please don't hesitate. It is a bit complicated to explain everything here since you're trying to set up one of the more complicated systems I have seen. But it is excellent and I really recommend it.
EDIT:
Basically Coherence is composed of two main parts: proxy and server. The names are a bit confusing since both are servers, but the proxy serves the clients trying to perform cache operations (CRUD), while the "servers" serve the proxies. The proxy is responsible for receiving all the requests, processing them and routing them, according to their keys, to the respective server that holds the data (or that would be responsible for holding it if the operation requires a load). So the answer to your question is: YES, you need at least one active proxy in your cluster, otherwise you'll be unable to operate correctly. It can run on one of your machines or on a third one. It is recommended to have more than one proxy for HA purposes, and proxies can act as servers as well (by setting the localstorage flag to true). I know, it is a bit complicated, and I recommend following the Oracle docs.
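On the client side (your web service running on another machine), once the client's cache configuration points at the proxy via Coherence*Extend, reading is just a map lookup. A minimal sketch (the cache name, key and config file name are placeholders):

// Remote Java client sketch; cache name, key and config file name are placeholders.
// Run with e.g. -Dtangosol.coherence.cacheconfig=client-cache-config.xml,
// where that file defines a remote-cache-scheme pointing at the proxy's host/port.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheReader {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("orders");
        Object value = cache.get("some-key"); // read from the cache, no database trip
        System.out.println(value);
        CacheFactory.shutdown();
    }
}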
Essentially, there are two types of Coherence installation:
1) Stand-alone installation (without a WebLogic Server in the mix)
2) Managed installation (with WebLogic Server in the mix)
Here are a few characteristics for each of the above
Stand-alone installation (without a WebLogic Server in the mix)
Download the Coherence installation package and install (without any dependency on existing WebLogic or FMW installations)
Setup and Configure the Coherence Servers from the command-line
Administer and Maintain the Coherence Servers from the command-line
Managed installation (with WebLogic Server in the mix)
Utilize the existing installation of Coherence that was installed when WebLogic or FMW was installed
Setup and Configure the Managed Coherence Servers to work with WebLogic Server
Administer and Maintain the Managed Coherence Servers via the WebLogic console
Note the key difference in terminology, Coherence Servers (no WL dependency) vs. Managed Coherence Servers (with WL dependency)
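For the stand-alone flavour, "setup and configure from the command-line" essentially means launching the cache server JVMs yourself, roughly like this (the jar path and config file name are placeholders):

java -cp /opt/coherence/lib/coherence.jar -Dtangosol.coherence.cacheconfig=example-cache-config.xml com.tangosol.net.DefaultCacheServer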
I am using a C program to write/delete 1-2 MB of data to an SQLite3 database periodically (every 10 minutes). The program also acts as a read-only database server for my Node.js web server, which exposes RESTful APIs. (I cannot use Node.js modules because the Node.js web server is on a different machine.)
The documentation mentions that a client/server RDBMS might be a better choice for client/server architectures, but that point is not made strongly.
I am using a C program to act as a server to answer the web server's requests, as well as requests from other processes on different machines. The system requires small amounts of data (~2-5 MB) frequently (every 5 minutes).
If it is not good to use SQLite as a client/server database, how can I convince my manager?
If it is okay, then why do they not have a standard server plugin?
When the SQLite documentation speaks about a client/server architecture, it says:
If you have many client programs accessing a common database over a network
This applies to whatever program(s) access the database directly. (In the case of SQLite, this would imply that you have a networked file server and that multiple clients access the database file directly over the network.)
If you have a single process that accesses the database file, then you do not have a client/server architecture as far as the database is concerned.
If there are other clients that access your server program, this has no effect on your database.
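Concretely, the single-process pattern described above looks roughly like the sketch below: one C program opens the file, does its own periodic writes, and serves reads back to the network clients itself (the database path, table and queries are placeholders):

/* Sketch: one process owns the SQLite file, writes periodically, and answers clients itself.
   Database path, table and queries are placeholders. Link with -lsqlite3. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("data.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Periodic write performed by this process. */
    sqlite3_exec(db, "INSERT INTO readings(ts, value) VALUES (strftime('%s','now'), 42);",
                 NULL, NULL, NULL);

    /* Read performed on behalf of a remote client's request. */
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT value FROM readings ORDER BY ts DESC LIMIT 1;",
                           -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            printf("latest value: %d\n", sqlite3_column_int(stmt, 0));
        }
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);
    return 0;
}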
I am setting up a virtual environment as a proof of concept with the following architecture:
2 node web farm
2 node SQL active/passive fail-over cluster
2 node BizTalk active/active cluster
The first two are straightforward; now I'm wondering about the BizTalk cluster.
If I followed the same model as setting up SQL (using the Failover Cluster Manager in Windows to create a cluster), I think I would end up with an active/passive cluster.
What makes a BizTalk cluster Active/Active?
Do I need to create a windows cluster first, or do I just install BizTalk on both machines and configure BizTalk appropriately?
Yes, my understanding is that you do need to cluster the OS first.
That said, you can usually avoid the need for clustering unless you need to cluster one of the 'pull' receive handlers like FTP, MSMQ, SAP etc. For everything else IMO it usually makes sense just to add multiple BizTalk servers in a group, and then use NLB for e.g. WCF Receive adapters.
The rationale is that by running multiple host instances of each 'type' (e.g. 2+ Receive, 2+ Process, 2+ Send, etc.), you also have the ability to stop and start host instances without any downtime, e.g. for maintenance (patches), application deployment, etc.
The one caveat with the group approach is that the SSO master doesn't fail over automatically, although this isn't usually a problem as the other servers will still be able to work from the cache.
You can configure a BizTalk group in a multi-computer environment. You can refer to the doc available at the MSDN download center for more details; it specifically has a section titled "Considerations for clustering BizTalk Server in a Multiple Server environment".
You can also configure your BizTalk host as a clustered resource. Refer to the documentation available on MSDN for more details.