How can I configure more than one MQ queue manager and connection factory in WebLogic 11 in a clustered environment - EJB

So here is the question. I have an application running in WebLogic 8 that is heavily dependent on JMS messages. The messages flow from a clustered MQ server, and the application, which is EJB 2, used to listen to MQ directly via configuration in weblogic-ejb-jar.xml. The MQ server has two queue managers and a different connection factory name for each manager:
<weblogic-enterprise-bean>
  <ejb-name>MDB_QM1</ejb-name>
  <message-driven-descriptor>
    <destination-jndi-name>QM1</destination-jndi-name>
    <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
    <connection-factory-jndi-name>CF1</connection-factory-jndi-name>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
<weblogic-enterprise-bean>
  <ejb-name>MDB_QM2</ejb-name>
  <message-driven-descriptor>
    <destination-jndi-name>QM2</destination-jndi-name>
    <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
    <connection-factory-jndi-name>CF2</connection-factory-jndi-name>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
Now the application is to be migrated to WebLogic 10.3, and the configuration has already been done in the test environment. The EJB version has changed to 3, and the MDBs now use annotations for the queue configuration and the like. But the real problem is this: the application used to listen to MQ directly, and now a messaging bridge has been configured that transfers messages from MQ to an internal queue, and the MDBs listen to that queue. The source MQ queue configuration in the WebLogic bridge allows only one connection factory, and I am not sure whether it is possible to configure multiple queue managers, connection factories, etc. for a single queue. But in production there will be multiple queue managers.
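For what it's worth, the migrated MDBs look roughly like this sketch (jms/InternalQueue1 is just a placeholder name for the internal queue the bridge feeds, and destinationJndiName is the WebLogic-specific activation property for pointing an MDB at a destination; other containers use different property names):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    // WebLogic-specific property; placeholder JNDI name for the internal queue
    @ActivationConfigProperty(propertyName = "destinationJndiName",
                              propertyValue = "jms/InternalQueue1")
})
public class InternalQueueMDB implements MessageListener {
    public void onMessage(Message message) {
        // process the message that the bridge forwarded from MQ
    }
}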
I think it would be possible if I configured foreign servers for those queues; then clustering would be possible. But that would mean a huge change to the application and to the WebLogic configuration as well. So the ideal solution I am searching for is a way to connect to those multiple MQ queue managers with the existing bridge configuration. If that is not possible, please suggest the next best thing. I am open to all ideas :)
Thanks

A WebSphere MQ cluster is all about how QMgrs talk to each other and does not hide from the application the fact that there are multiple physical instances of a queue. A separate connection is required to each queue manager instance. The app will need to either...
Use an instance of the bridge for each queue instance.
Configure the bridge to make multiple simultaneous connections to the different QMgrs.
Whoever made the decision to configure a bridge with a single connection factory did not consider the architecture of the underlying transport. You cannot overcome that poor decision with configuration, no matter how hard you try.
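To make "a separate connection per QMgr" concrete, here is a minimal JMS sketch. It assumes the CF1/CF2 JNDI names from the question resolve wherever this runs (inside the server, or in a client with a configured InitialContext):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

public class TwoQueueManagerConnections {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // CF1 and CF2 are the connection factory JNDI names from the question
        ConnectionFactory cf1 = (ConnectionFactory) ctx.lookup("CF1");
        ConnectionFactory cf2 = (ConnectionFactory) ctx.lookup("CF2");
        Connection toQmgr1 = cf1.createConnection(); // physical connection to QMgr 1
        Connection toQmgr2 = cf2.createConnection(); // physical connection to QMgr 2
        toQmgr1.start();
        toQmgr2.start();
        // ... consume from each queue instance on its own connection ...
        toQmgr1.close();
        toQmgr2.close();
    }
}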

Related

Service Fabric and TCP connections

We have developed a TeamViewer-like service where clients connect via SSL to our centralized servers. Other clients can connect to the server as well and we can setup a tunnel through our service to allow peer-to-peer connectivity without NAT or firewall issues.
This works fine with Azure Cloud Services, but we would like to move away from them. Service Fabric seems to be the way to go, because it supports ARM, allows much finer-grained services, and makes updating parts of the system much easier.
I know that microservices in Service Fabric can be stateful, but all the examples use persistent data as state. In my situation the TCP connection is also part of the state. Is it possible to use TCP with Service Fabric?
The TCP endpoint should be kept alive on the same instance (for several days), which makes the entire Service Fabric model much more difficult.
Sure, you can have users connect to your services over any protocol you want. Your service sounds very stateful to me in the same way that user session state is stateful - you want users to return to the same place where their data is. In your case, that "data" is a TCP connection. But there's no guarantee a TCP endpoint will be kept alive for days in any system - machines fail, software crashes, OSes get patched, etc. You need to be prepared for the connection to break so you can quickly re-establish it. Service Fabric stateful services are great for this. Failover of a stateful service to another machine is extremely fast (milliseconds). Of course, you can't actually replicate a live connection, but you sure can replicate all the metadata you need to re-establish a connection if it breaks.
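Here is a framework-agnostic sketch of that idea. The plain maps below are hypothetical stand-ins for the replicated store a stateful service would provide (in Service Fabric, a reliable collection); only the reconnection metadata is replicated, never the socket itself:

import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TunnelSessionManager {
    // Replicated metadata: client id -> last known peer endpoint ("host:port").
    // A plain map stands in for the replicated store (assumption, not a real
    // Service Fabric API).
    private final Map<String, String> replicatedEndpoints = new ConcurrentHashMap<>();
    // Live sockets exist only on the current primary and are never replicated.
    private final Map<String, Socket> liveSockets = new ConcurrentHashMap<>();

    public void onClientConnected(String clientId, Socket socket) {
        liveSockets.put(clientId, socket);
        replicatedEndpoints.put(clientId,
                socket.getInetAddress().getHostAddress() + ":" + socket.getPort());
    }

    // After failover the new primary has the metadata but no sockets; it can
    // dial out again (or wait for the client to reconnect) using the metadata.
    public Socket reestablish(String clientId) throws IOException {
        String endpoint = replicatedEndpoints.get(clientId);
        if (endpoint == null) {
            return null; // unknown client, nothing to restore
        }
        String[] hostPort = endpoint.split(":");
        Socket socket = new Socket(hostPort[0], Integer.parseInt(hostPort[1]));
        liveSockets.put(clientId, socket);
        return socket;
    }
}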

How to configure a Message Driven Bean to subscribe to a remote JMS Topic

I am trying to get a MessageDriven (EJB 3) bean to subscribe to a JMS Topic on another glassfish instance on another host. Is this possible?
In the Glassfish console you can modify the JMS server and point it to another Glassfish instance or a standalone OpenMQ broker. Although you can configure several JMS hosts, to my knowledge Glassfish will always use the one called default_JMS_host, so that's the one you want to edit.
Just one thing: in such a setup the two server instances will share queues and topics, which may not be what you want if, for example, the two servers run the same application but shouldn't share a particular queue. This can easily be solved via the Destination Resources configuration, by specifying different physical names for that queue.
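Once the JMS host points at the remote broker, the MDB itself stays ordinary. A minimal sketch, where jms/RemoteTopic stands in for the topic resource you created; the clientId/subscriptionName properties are only needed for a durable subscription, and support for them as activation properties varies by container version:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// jms/RemoteTopic, remoteSubscriber1 and remoteTopicSub are placeholder names
@MessageDriven(mappedName = "jms/RemoteTopic", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "subscriptionDurability",
                              propertyValue = "Durable"),
    @ActivationConfigProperty(propertyName = "clientId",
                              propertyValue = "remoteSubscriber1"),
    @ActivationConfigProperty(propertyName = "subscriptionName",
                              propertyValue = "remoteTopicSub")
})
public class RemoteTopicMDB implements MessageListener {
    public void onMessage(Message message) {
        // handle messages published on the remote broker's topic
    }
}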

Security groups and UDP on Heroku

Has anyone experienced running multiple collaborating applications on Heroku? For example, an admin application to manage another application; or a stats server observing another application?
On Amazon's EC2 platform you can use security groups to restrict access to servers, creating a virtual network between your application or server instances. Is there any such way to do this on Heroku? If so, can you open UDP as well as TCP connections?
Thanks
Robbie
The comment from @elithrar is correct. To talk between applications you either need to define an API, or use shared resources. For example, you can have two applications connect to the same database by manually copying and pasting the DATABASE_URL from one app to the other. This has the downside that, should we need to roll credentials (very rare), your manually copied configuration will break.
The same pattern can be used with any add-ons, such as https://addons.heroku.com/redistogo or https://addons.heroku.com/iron_mq to share a message bus or queue between two applications.
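As a sketch of that shared-resource pattern: both apps read the same config var and talk to the same backing service. REDISTOGO_URL here is the config var the Redis To Go add-on sets; on the second app the value is the one you copied across:

public class SharedQueueConfig {
    // Both applications read the same connection URL; on the second app this
    // value was copied manually from the app that owns the add-on.
    public static String redisUrl() {
        String url = System.getenv("REDISTOGO_URL");
        if (url == null) {
            throw new IllegalStateException(
                "REDISTOGO_URL is not set; copy it from the owning app's config");
        }
        return url;
    }
}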

Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

We are planning a Biztalk 2009 set up in which we have 2 Biztalk Application Servers and 2 DB Servers (DB servers being in an Active/Passive Cluster). All servers are running Windows Server 2008 R2.
As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for High-availability and Load-balancing.
Let's say I create two different Biztalk Hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one for each of my two physical servers).
After reviewing the Biztalk Documentation, this is what I know so far:
For FILE (Receive), high-availability and load-balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group.
MSMQ (Receive) requires BizTalk Host Clustering to ensure high-availability (Host Clustering, however, requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here.
SOAP (Receive) requires NLB to achieve Load-balancing and High-availability (if one server goes down, NLB will direct traffic to the other).
This is where I'm completely puzzled and I desperately need your help:
Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers?
If yes, then please tell me how.
If no, then please explain to me how anyone is achieving high-availability and load-balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive!
Your help is greatly appreciated,
M
Microsoft doesn't support NLB and MSCS running on the same servers:
"These two components work well together in a two or three tier application model running on separate computers. Be aware that running these two components on the same computer is unsupported and is not recommended by Microsoft due to potential hardware sharing conflicts between Cluster service and Network Load Balancing."
http://support.microsoft.com/kb/235305
If you want to provide HA for SOAP requests received in BizTalk, you should configure your BizTalk servers in an Active/Active configuration (no MSCS) in the same BizTalk Group. Once you do this, you install and configure NLB between the two. Your clients will be able to query the web services through the NLB cluster, and the NLB service will route each request to a specific server within the cluster (your asmx files should be installed and configured on both servers).
Regarding MSMQ, the information you have obtained so far is right: the only way to ensure HA for this adapter is clustering the BizTalk servers. If you want to implement this too, then you must have separate infrastructure for the SOAP receive hosts and the MSMQ ones.
The main reason is that a BizTalk Isolated Host is not cluster-aware, so the BizTalk In-Process Host could be completely hung and the Isolated Host would never know it and would continue to receive requests.
I'm currently designing a very similar architecture, so if you would like to share more comments or questions you can reach me at ignacioquijas@hotmail.com
Ignacio Quijas
Microsoft Biztalk Server Specialist

Network Load Balancing Biztalk Instances

What are some good articles/resources for understanding how load balancing is configured with BizTalk, both in terms of the inherent abilities of the product and in terms of employing NLB (Network Load Balancing with Windows 2003 or later editions)?
EDIT: I am specifically interested in the impact of the application protocol on load balancing. For example, how do two instances of BizTalk Server handle TCP/IP connections when the other party (to which BizTalk makes a connection request) doesn't allow more than one connection?
The obvious resource is MSDN - there is a section entitled Planning for High Availability that covers most of the concepts and will give you the right terminology to then go looking for other resources on the web. As with a lot of Microsoft server products, MSDN also has lots of white papers covering specific BizTalk scenarios.
Most good BizTalk books also include a section on load balancing concepts (Professional BizTalk Server 2006 has an example).
Beyond that, there are several key concepts that you may find helpful, particularly around the use of terminology (some of BizTalk's usage can be misleading).
Load Balancing
BizTalk Server is, by the nature of its architecture, load balancing. What that means is that if you have more than one BizTalk Host connecting to a MessageBox database, the messages within the database will be spread evenly across the hosts participating in the BizTalk group (with caveats around just what BizTalk processes have been configured to run in each Host).
There is also the concept of Network Load Balancing which is Microsoft Network Load Balancing Services or any equivalent service. In BizTalk this applies at the web level, for receive adapters using the HTTP protocol (e.g. the HTTP adapter, the SOAP adapter and WCF HTTP adapters). This load balancing is not actually a BizTalk service but is instead a load balancing layer provided on top of the BizTalk isolated host adapters to ensure high availability of the web resources. It is configured the same as any other NLB service.
Clustering
When clustering is mentioned in BizTalk it is used to refer to one of two things - clustering at the SQL layer to provide high availability and failover, and BizTalk Host Clustering.
SQL Clustering - this is simply (simpler to say than to do) a matter of providing a SQL Server cluster that runs the BizTalk server databases, allowing for database failover. This is not a BizTalk-specific technology.
BizTalk Host clustering - in this case a BizTalk Server Host is marked as clustered when you create it inside BizTalk. This is a BizTalk-specific setting that essentially states that one and only one instance of the host will be running at a time, and that by extension all the resources within this host will also only have a single instance. It is primarily intended for adapters like the FTP and MSMQ adapters, which behave incorrectly when more than one instance is allowed to run at the same time.
This edit is in response to the OP's comment asking for further details. Hopefully this makes things clearer. If you have more questions about specifics I can possibly answer them, but this pretty much exhausts my theoretical knowledge about high-availability environment configuration. I'm primarily a BizTalk dev and solution designer; when it comes to network intricacies, there are people where I work who fill in the nitty-gritty detail and implementation of these designs.
Network Load Balancing for HTTP Based Adapters
The key point I was trying to express here was that Network Load Balancing in the context of BizTalk is no different for any other Network Load Balancing scenario.
BizTalk has two types of hosts, In-Process and Isolated. In-Process hosts are individual BizTalk services running on servers (with one host instance per server). Isolated hosts are actually delegates to a web server (IIS) that handles all HTTP-based adapters (the HTTP adapter and the SOAP adapter, plus certain configurations of the WCF adapter).
When you introduce Network Load Balancing to a BizTalk environment, what you are doing is introducing it at the web server layer, for the adapters hosted in Isolated hosts.
Here is the MSDN page for the introduction to NLB. One of the key points about NLB is expressed in the page in the following quote:
Network Load Balancing allows all of the computers in the cluster to be addressed by the same set of cluster IP addresses (but also maintains their existing unique, dedicated IP addresses).
By setting up NLB you allow multiple isolated host servers to handle internet traffic directed at a single dedicated IP address. The NLB configuration farms out the work.
Clustering BizTalk Adapter Handlers
In my answer above I stated that certain BizTalk adapters behave incorrectly when allowed to run within multiple BizTalk Host Instances. This is very adapter specific in terms of the why, so the best expansion on that answer I can give is the following quote from the MSDN documentation, dealing with the FTP adapter specifically.
For most of the BizTalk integrated adapters, high availability can be achieved by creating multiple adapter handlers to run on BizTalk host instances on different BizTalk servers within a BizTalk group. FTP adapter receive handlers should not, however, be configured to run in multiple BizTalk host instances simultaneously. This recommendation is made because the FTP receive adapter uses the FTP protocol to retrieve files from the target system and the FTP protocol does not lock files to ensure that multiple copies of the same file are not retrieved simultaneously when running multiple instances of the FTP receive adapter.
As they say, the FTP adapter utilises the FTP protocol, which does not lock files. Because BizTalk is natively a highly parallel system, if you allowed multiple BizTalk hosts to host an instance of the FTP adapter you would end up with multiple copies of the same FTP message received into your BizTalk system. What BizTalk clustering does is ensure that any clustered BizTalk host will run one and only one host instance. By placing your FTP receive handler inside a clustered host, you ensure that:
you will always have an FTP adapter running so long as a BizTalk host is running
you will never have more than one FTP adapter running.
Additionally, you can use a BizTalk clustered host to reduce load on a system. For example, a BizTalk SQL adapter receive location that has been configured to poll will poll on all host instances. While this would not necessarily cause multiple message instances, it could cause undue load on the SQL server you poll, or even create deadlock scenarios depending on the design of the called stored procedure, so clustering the SQL adapter receive handler can be a good idea.