I'm developing an EJB client. The EJB (2.1) server is deployed as a WebSphere 6.0 cluster.
I'm doing the JNDI lookup after acquiring the InitialContext object for a specific IP address, using the following code:
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "iiop://111.111.111.111:222"); // this is the IP address of one of the cluster servers
Then we create the InitialContext object. Now the question:
How do I perform the lookup so that it returns a cluster-aware remote interface?
By cluster-aware I mean that the call will not go directly to one of the EJB servers, but through a cluster mechanism that is aware of the clustered servers. This should be a basic thing,
yet I cannot find any clear documentation on it on the web. Has anyone worked with a WebSphere 6.0 clustered EJB environment?
Thanks.
Did you try this as a provider URL:
corbaloc::cluster_host1:RMI_PORT_NO,:cluster_host2:RMI_PORT_NO
Replace cluster_host1 and cluster_host2 with the host names of your cluster members, and RMI_PORT_NO with the bootstrap (RMI) port number, such as 9811 or 2809.
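As a sketch, the full lookup with a multi-member bootstrap URL might look like this. The host names, the port, and the JNDI name are placeholders for your environment, and WsnInitialContextFactory requires the WebSphere client jars on the classpath at lookup time:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class ClusterLookup {

    /** Builds the JNDI environment with a corbaloc URL that lists more than
     *  one bootstrap address, so the client can fail over between members. */
    public static Hashtable<String, String> clusterEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        // cluster_host1/cluster_host2 and 2809 are placeholders.
        env.put(Context.PROVIDER_URL,
                "corbaloc::cluster_host1:2809,:cluster_host2:2809");
        return env;
    }

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = clusterEnv();
        // The actual lookup (needs the WebSphere runtime on the classpath):
        // Context ctx = new javax.naming.InitialContext(env);
        // Object home = ctx.lookup("ejb/MyBeanHome"); // hypothetical JNDI name
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```

Listing several bootstrap addresses only affects where the initial naming lookup goes; once a cluster-aware stub is returned, the actual EJB calls are workload-managed by the WebSphere ORB, consistent with the point below that the bean can end up on any member.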
Even though you are pointing to one member of the cluster for the lookup, the EJB that is created can be anywhere in the cluster. In fact, I once had a problem with that, because I needed to get a bean on the same member and I couldn't figure out a way to guarantee it would be local. This might provide some more insight:
https://www.ibm.com/support/knowledgecenter/beta/en/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/rnam_example_prop3.html
The problem is that I have 3 virtual machines built from the same source image in three different zones in a region. I can't put them in a MIG because each of them has to attach to a specific persistent disk, and from what I researched, I have no control over which VM in the MIG will be attached to which persistent disk (please correct me if I'm wrong). I explored the unmanaged instance group option too, but it only has zonal scope. Is there any way to create a load balancer that works with my VMs, or do I have to create another solution (e.g. NGINX)?
You have two options.
Create an Unmanaged Instance Group. This allows you to set up your VM instances as you want. Since unmanaged instance groups are zonal, you can create one per zone and use them all as backends for the same load balancer.
Creating groups of unmanaged instances
Use a Network Endpoint Group. This supports specifying backends based upon the Compute Engine VM instance's internal IP address. You can specify other methods such as an Internet address.
Network endpoint groups overview
The solution in my case was using a dedicated VM, installing NGINX as a load balancer, and creating static IPs for each VM in the group. I couldn't get managed or unmanaged instance groups and the managed load balancer to work.
That worked fine, but another solution I found in Qwiklabs was adding all instances to an "instance pool"; maybe in the future I will implement that solution.
I have an EJB3 bean that needs to GET or POST from multiple HTTP servers. I have read the documentation on writing JCA adapters, as well as the documentation on Apache HTTPComponents, specifically the managed connections, managed connection factories, etc. that HTTPClient offers.
I note that the documentation for BasicHttpClientConnectionManager says to use it, rather than PoolingHttpClientConnectionManager, "inside of an EJB container." It is unclear whether "inside of an EJB container" refers to user code in an EJB that runs in a container, or to the container's own implementation code (e.g. something you might put in a JCA adapter).
I'm still a little unclear, architecturally, on how to handle this task to take maximum advantage of the services offered by the container. Thus far, my choices seem to be:
From within the EJB, create a new BasicHttpClientConnectionManager, and then create a client, like so:
BasicHttpClientConnectionManager cxMgr = new BasicHttpClientConnectionManager();
CloseableHttpClient client = HttpClients.custom().setConnectionManager(cxMgr).build();
This would, I believe, result in no connection pooling, but rather EJB instance pooling, which probably wouldn't perform well, since the EJB container has no way of knowing which bean instance holds an active connection to which remote HTTP server.
Write a (fairly minimal) JCA adapter that wraps the PoolingHttpClientConnectionManager, and then, within the EJB, grab that adapter with a @Resource annotation and use it to build the HttpClient.
Write a JCA adapter that manages a pool of HTTPClients and hands those out when needed.
I am unclear on which approach I should take, or on whether I'm overlooking some HTTP connection management service that's already built into the container (in this case, TomEE Plus). How should I do this?
It's not that simple: none of those options match. The particular answer depends on whether remote side of your HTTP connection is a single resource or not.
If it's a single logical resource, then JCA is the best match. Still, you should hide the HTTP nature of the connection inside your adapter: design a proper logical interface for your remote system and then implement a resource adapter on top of that interface. Your clients request the data they need and receive it not as HTML strings, but as ready-to-use Java beans. You will be able to take advantage of much of what the EJB container provides without violating its requirements.
If you need an entry point to access an unspecified number of external services (HTML page search indexing might be this kind of job), then things get tricky. JCA is not intended for that kind of thing, but it suits it well enough: I maintain a TCP SocketConnector project which does almost the same and everything works like a charm. EJB won't fit here: stateless bean pooling is hard to control, singletons would require you to write a non-blocking pooling implementation yourself, and stateful beans are, well, essentially single-threaded.
There's also an extra option: while JCA will be easier to integrate into your existing Java EE environment, it might be hard to implement a resource adapter from scratch if you're doing it for the first time. An external HTTP-querying tool connected through a JMS queue might come in handy here. JMS makes it easier to balance the workload, but it removes any interactivity and may slow the whole chain down.
P.S. Regarding the third option in your original question: there's no need to implement pooling in JCA, as the container already maintains a connection pool for all resource adapter instances by default. Just keep a single HTTP client connection per JCA ManagedConnection and everything will work out of the box.
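The container-side pooling that this relies on can be sketched, very roughly, as a checkout/check-in pool of connection wrappers. This toy class is purely illustrative (a real JCA container implements this for you via ManagedConnectionFactory); each pooled object plays the role of a ManagedConnection wrapping a single HTTP client:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

/** Toy stand-in for the container's connection pool. */
public class ToyConnectionPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    public ToyConnectionPool(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Reuses an idle connection if one exists; otherwise creates a new one. */
    public synchronized T checkout() {
        if (idle.isEmpty()) {
            created++;
            return factory.get();
        }
        return idle.pop();
    }

    /** Returns a connection to the idle set for the next caller. */
    public synchronized void checkin(T conn) {
        idle.push(conn);
    }

    public synchronized int createdCount() {
        return created;
    }
}
```

The point of the sketch: because the container checks connections in and out for you, each ManagedConnection only ever needs to hold one underlying client, and no second pooling layer is needed inside the adapter.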
I am new to Oracle Coherence.
Basically, we have some data, and we want a Java/BPEL web service to get that data from the Coherence cache instead of the database. [We are planning to load all of that data into the cache server.]
So we have the questions below before we start on this solution.
The web service we are planning to build is going to be plain Java.
And all operations are read-only.
Questions
1. Does Coherence need to be a stand-alone server (i.e. downloaded from Oracle, installed separately, and run as the default cache server)?
2. If so, we are planning to preload the data from the database to the cache server using code. I hope that's possible? Any pointers would be helpful.
3. How does the web service connect to the Coherence server if the web service runs on a different machine than the Coherence server?
(OR)
Is it mandatory that the web service and Coherence run on the same machine?
If the web service can run on a different machine, how does the web service code connect to the Coherence server (any code sample or URL would be helpful)?
4. Also, what is the Coherence that comes with WebLogic? I assume it is not a fit for our application's design. If so, what type of solution do we go for with WebLogic and Coherence?
FYI: Our goal is simple. We want to store the data in a cache server and have our new web service retrieve the data from the cache server instead of the database (because we are planning to avoid the database trip).
Well, your questions are very open and probably have more than one correct answer. I'll try to answer all of them.
First, please take into consideration that Coherence is not a free tool; you have to pay for a license.
Now, to the answers:
Basically, Coherence has two parts: the proxy and the server. The first is responsible for routing your requests and the second for hosting the data. You can run both together in the same service, but this has pros and cons. One con is that your services will be heavily loaded and memory will be shared between the two kinds of processes. A pro is that it is very simple to run.
You can preload all the data from the DB. For that you have to write your own code: define your own cache store (look for the CacheStore keyword in the Coherence docs) and override the loadAll method.
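Whichever API you drive it through, the preload itself usually boils down to batched bulk puts. Here is a minimal sketch, with a plain java.util.Map standing in for the Coherence NamedCache (the real code would obtain the cache via CacheFactory.getCache with the Coherence jars on the classpath; the batching pattern is the same):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CachePreloader {

    /** Copies rows into the cache in fixed-size batches. Using putAll per
     *  batch keeps network round-trips low compared with per-entry puts.
     *  Returns the number of bulk writes performed. */
    public static int preload(Map<String, String> cache,
                              Map<String, String> dbRows,
                              int batchSize) {
        Map<String, String> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<String, String> row : dbRows.entrySet()) {
            batch.put(row.getKey(), row.getValue());
            if (batch.size() == batchSize) {
                cache.putAll(batch); // one bulk write per full batch
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) {      // flush the final partial batch
            cache.putAll(batch);
            batches++;
        }
        return batches;
    }
}
```

The batch size is a tuning knob: larger batches mean fewer round-trips but bigger individual network payloads.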
As far as I remember, Coherence comes together with WebLogic; that is, the license for one is the license for the other, and they come in the same product. I'm not familiar with WebLogic, but I suppose it is a service of the package. In any case, for connecting to Coherence you can refer to Configuring and Managing Coherence Clusters.
The Coherence services can run on different machines, on different networks, and even in different parts of the world if you want. The proxy, the server, the consumer, and the DB could each be on a different network. Everything can be configured: you tell your WebLogic server where the Coherence proxy is, you set the addresses of the proxies and servers in their configuration, and you configure your Coherence server to find its database. It is a bit complicated to explain everything here.
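For connecting from a different machine, a remote client typically goes through Coherence*Extend. A minimal client-side coherence-cache-config.xml sketch might look like the following; the host name, port, and service name are placeholders for your environment, and the exact XML namespace varies by Coherence version:

```xml
<!-- client-side coherence-cache-config.xml (Coherence*Extend) -->
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- map every cache name to the remote scheme -->
      <cache-name>*</cache-name>
      <scheme-name>extend-remote</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-remote</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <!-- placeholders: the host/port where a Coherence proxy listens -->
              <address>proxy-host</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
```

On the cluster side, a matching proxy-scheme must be running with an acceptor on that address; the client code then simply calls CacheFactory.getCache("...") and its operations are routed through the proxy.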
I think I answered this above.
Just take into consideration that Coherence is a very powerful tool but complicated to operate and troubleshoot. Consider the pros and cons of accessing your DB directly, and think about whether you really need it.
If you have specific questions, please don't hesitate to ask. It is a bit complicated to explain everything here, since you're trying to set up one of the most complicated systems I have ever seen. But it is excellent and I really recommend it.
EDIT:
Basically, Coherence is composed of two main parts: the proxy and the server. The names are a bit confusing since both are servers, but the proxy serves the clients trying to perform cache operations (CRUD), while the "servers" serve the proxies. The proxy is responsible for receiving all the requests, processing them, and routing them, according to their keys, to the respective server that holds the data (or that would be responsible for holding it if the operation requires a load). So the answer to your question is: YES, you need at least one proxy active in your cluster, otherwise you'll be unable to operate correctly. It can run on one of your machines or on a third one. It is recommended to have more than one proxy for HA purposes, and proxies can act as servers as well (by setting the localstorage flag to true). I know, it is a bit complicated, and I recommend following the Oracle docs.
Essentially, there are 2 types of Coherence installation.
1) Stand-alone installation (without a WebLogic Server in the mix)
2) Managed installation (with Weblogic Server in the mix)
Here are a few characteristics for each of the above
Stand-alone installation (without a WebLogic Server in the mix)
Download the Coherence installation package and install (without any dependency on existing WebLogic or FMW installations)
Setup and Configure the Coherence Servers from the command-line
Administer and Maintain the Coherence Servers from the command-line
Managed installation (with Weblogic Server in the mix)
Utilize the existing installation of Coherence that was installed when WebLogic or FMW was installed
Setup and Configure the Managed Coherence Servers to work with WebLogic Server
Administer and Maintain the Managed Coherence Servers via the WebLogic console
Note the key difference in terminology, Coherence Servers (no WL dependency) vs. Managed Coherence Servers (with WL dependency)
I plan to get the Oracle SOA Suite Implementation Specialist certification (1Z0-478).
While practicing with example questions, I encountered one that confused me:
Each JCA adapter has a single deployment listed in the WLS Console. Identify two accurate descriptions of managing multiple instances of each adapter in the runtime.
A. Instance configuration in the SOA Suite deployment plan
B. JCA files for each adapter instance
C. Adapter connection factories specified in the WLS Console
D. One entry per adapter instance in the adapters_config.xml file
Answers are: A, D
But I can't find any file named adapters_config.xml.
And what is an "entry" in this context?
In Oracle SOA 11g, most of the configuration files and common XSDs are stored in MDS.
All the configuration files available in MDS are read-only; the configurable values can be changed through the EM console, WLST scripts, etc.
In MDS, you can see adapters_config.xml under the soa/configuration/default folder.
The following is a sample:
I am setting up a virtual environment as a proof of concept with the following architecture:
2 node web farm
2 node SQL active/passive fail-over cluster
2 node BizTalk active/active cluster
The first two are straight forward, now I'm wondering about the BizTalk cluster.
If I followed the same model as setting up SQL (using the Failover Cluster Manager in Windows to create a cluster), I think I would end up with an active/passive cluster.
What makes a BizTalk cluster Active/Active?
Do I need to create a windows cluster first, or do I just install BizTalk on both machines and configure BizTalk appropriately?
Yes, my understanding is that you do need to cluster the OS first.
That said, you can usually avoid the need for clustering unless you need to cluster one of the 'pull' receive handlers such as FTP, MSMQ, SAP, etc. For everything else, IMO it usually makes sense just to add multiple BizTalk servers to a group, and then use NLB for, e.g., WCF receive adapters.
The rationale for running multiple host instances of each 'type' (e.g. 2+ receive, 2+ process, 2+ send) is that you also gain the ability to stop and start host instances without any downtime, e.g. for maintenance (patches), application deployment, etc.
The one caveat with the group approach is that the SSO master doesn't fail over automatically, although this isn't usually a problem, as the other servers will still be able to work from cache.
You can configure a BizTalk Group in multi-computer environment. You can refer to the doc available at MSDN download center for more details. The document specifically has a section titled "Considerations for clustering BizTalk Server in a Multiple Server environment"
You can also additionally configure your BizTalk host as a clustered resource. You can refer to the documentation available at MSDN for more details.