Should I use a standalone Coherence server for a Java web service to read cached data? - oracle-coherence

I am new to Oracle Coherence.
Basically, we have some data and we want a Java/BPEL web service to get that data from a Coherence cache instead of the database. [We are planning to load all of that data into the cache server.]
So we have the questions below before we start on this solution.
The web service we are planning to build will be plain Java, and all operations are read-only.
Questions
1. Does Coherence need to be a stand-alone server (downloaded from Oracle, installed separately, and run as the default cache server)?
2. If so, we are planning to pre-load the data from the database into the cache server using code. I hope that is possible? Any pointers would be helpful.
3. How does the web service connect to the Coherence server if the web service runs on a different machine than the Coherence server?
(OR)
Is it mandatory that the web service and Coherence run on the same machine?
If the web service can run on a different machine, how does the web service code connect to the Coherence server (any code sample or URL would be helpful)?
Also, what is the Coherence that comes with WebLogic? I assume it is not a fit for our application's design? If it is, what kind of solution would we use for WebLogic with Coherence?
FYI: Our goal is simple. We want to store the data in a cache server and have our new web service retrieve the data from the cache server instead of the database (because we are planning to avoid the database trip).

Well, your questions are very open and probably have more than one correct answer. I'll try to answer all of them.
First, please take into consideration that Coherence is not a free tool; you have to pay for a license.
Now, to the answers:
Basically, Coherence has two parts: the proxy and the server. The first is responsible for routing your requests and the second for hosting the data. You can run both together in the same process, but this has pros and cons. One con is that your processes will be heavily loaded and the memory will be shared between the two kinds of work. A pro is that it is very simple to run.
You can preload all the data from the DB. For that you have to write your own code: define your own cache store (look for the "cachestore" keyword in the Coherence docs) and override the loadAll method. A minimal sketch is shown below.
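For illustration, here is a minimal read-only cache store sketch backed by plain JDBC; it is an assumption-laden example, not the poster's code. The class name, JDBC URL, table, column, and cache names are all made up, and a real implementation would be wired into the cache configuration's cachestore-scheme so the cache loads through it.

```java
import com.tangosol.net.cache.CacheStore;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Hypothetical read-only CacheStore that loads rows from a database table.
// Class, connection, table and column names are illustrative assumptions.
public class ProductCacheStore implements CacheStore {

    private static final String JDBC_URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // assumption

    @Override
    public Object load(Object key) {
        try (Connection con = DriverManager.getConnection(JDBC_URL, "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT value FROM product WHERE id = ?")) {
            ps.setObject(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (Exception e) {
            throw new RuntimeException("load failed for key " + key, e);
        }
    }

    @Override
    public Map loadAll(Collection keys) {
        // Naive implementation that reuses load(); a real store would batch the query.
        Map result = new HashMap();
        for (Object key : keys) {
            Object value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    // The cache is read-only, so write operations are not supported.
    @Override
    public void store(Object key, Object value) { throw new UnsupportedOperationException(); }

    @Override
    public void storeAll(Map entries) { throw new UnsupportedOperationException(); }

    @Override
    public void erase(Object key) { throw new UnsupportedOperationException(); }

    @Override
    public void eraseAll(Collection keys) { throw new UnsupportedOperationException(); }
}
```

With such a read-through store configured, warming the cache can be as simple as calling NamedCache.getAll(...) with the full set of keys, which drives loadAll on the storage-enabled members.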
As far as I remember, Coherence comes together with WebLogic; the license for one is the license for the other, and they ship in the same product. I'm not very familiar with WebLogic, but I suppose it runs as a service of the package. In any case, for connecting to Coherence you can refer to Configuring and Managing Coherence Clusters.
The Coherence services can run on different machines, on different networks, and even in different parts of the world if you want. Each piece (proxy, server, consumer, and DB) can be on a different network. Everything can be configured: you tell your WebLogic server where the Coherence proxy will be, you set the server addresses in the proxy/server configuration, and you configure your Coherence server to find its database. It is a bit complicated to explain everything here; a minimal client sketch is shown below.
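As a minimal sketch of the remote case (assumptions, not the poster's setup): a Java web service on another machine can reach the cluster over Coherence*Extend. It presumes a hypothetical client-cache-config.xml on the client classpath containing a remote-cache-scheme with the proxy's host and port, plus an assumed cache name.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Hypothetical Extend client: this JVM is NOT a cluster member; it talks TCP
// to a Coherence proxy running on another machine. The config file name and
// cache name below are assumptions for illustration.
public class RemoteCacheClient {

    public static void main(String[] args) {
        // Point Coherence at a client-side cache configuration that contains a
        // <remote-cache-scheme> with the proxy's address and port.
        System.setProperty("tangosol.coherence.cacheconfig", "client-cache-config.xml");

        NamedCache products = CacheFactory.getCache("products"); // assumed cache name
        Object value = products.get("someKey");                  // read-only access
        System.out.println("value = " + value);

        CacheFactory.shutdown();
    }
}
```

The key point is that this JVM never joins the cluster; it only opens a TCP connection to the proxy, so it can live on any machine that can reach the proxy's address.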
I think I answered this above.
Just take into consideration that Coherence is a very powerful tool but complicated to operate and troubleshoot. Consider the pros and cons versus accessing your DB directly and think about whether you really need it.
If you have specific questions, please don't hesitate to ask. It is a bit complicated to explain everything here, since you're trying to set up one of the most complicated systems I have ever seen. But it is excellent and I really recommend it.
EDIT:
Basically, Coherence is composed of two main parts: the proxy and the server. The names are a bit confusing since both are servers, but the proxy serves the clients trying to perform cache operations (CRUD), while the "servers" serve the proxies. The proxy is responsible for receiving all the requests, processing them, and routing them, according to their keys, to the respective server that holds the data (or that would be responsible for holding it if the operation requires a load). So the answer to your question is: YES, you need at least one proxy active in your cluster, otherwise you'll be unable to operate correctly. It can run on one of your machines or on a third one. It is recommended to run more than one proxy for HA purposes, and proxies can act as servers as well (by setting the localstorage flag to true). I know, it is a bit complicated, and I recommend following the Oracle docs. A sketch of toggling the localstorage flag follows.
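For illustration only: the localstorage flag mentioned above is commonly toggled per JVM through a system property, so that a given process joins the cluster without hosting any data. The property below is the classic tangosol-prefixed name (newer releases also accept a coherence-prefixed form), and the cache name is an assumption.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Illustrative only: join the cluster as a storage-disabled member, i.e. this
// JVM routes requests and serves clients but does not host any cache data.
public class StorageDisabledMember {

    public static void main(String[] args) {
        System.setProperty("tangosol.coherence.distributed.localstorage", "false");

        NamedCache cache = CacheFactory.getCache("products"); // assumed cache name
        System.out.println("cluster size: "
                + CacheFactory.ensureCluster().getMemberSet().size());

        CacheFactory.shutdown();
    }
}
```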

Essentially, there are 2 types of Coherence installation.
1) Stand-alone installation (without a WebLogic Server in the mix)
2) Managed installation (with WebLogic Server in the mix)
Here are a few characteristics for each of the above
Stand-alone installation (without a WebLogic Server in the mix)
Download the Coherence installation package and install (without any dependency on existing WebLogic or FMW installations)
Set up and configure the Coherence Servers from the command line
Administer and maintain the Coherence Servers from the command line
Managed installation (with Weblogic Server in the mix)
Utilize the existing installation of Coherence that was installed when WebLogic or FMW was installed
Set up and configure the Managed Coherence Servers to work with WebLogic Server
Administer and maintain the Managed Coherence Servers via the WebLogic console
Note the key difference in terminology: Coherence Servers (no WL dependency) vs. Managed Coherence Servers (with WL dependency). A minimal sketch of starting a stand-alone cache server follows.
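As an illustrative sketch of the stand-alone path: the default cache server that ships in coherence.jar is normally started from the command line, and the same thing can be done programmatically. The wrapper class name here is made up for the example.

```java
import com.tangosol.net.DefaultCacheServer;

// Illustrative: start a stand-alone, storage-enabled cache server from Java.
// Roughly equivalent to running
//   java -cp coherence.jar com.tangosol.net.DefaultCacheServer
// from the command line of a stand-alone installation.
public class StandaloneCacheServer {

    public static void main(String[] args) {
        DefaultCacheServer.main(new String[0]); // blocks while the server runs
    }
}
```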

Related

WSO2 clustering in a distributed deployment

I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are 2 or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration mentioned in this diagram.
[Image: the configuration I am trying to achieve]
Is this configuration achievable or not?
Can we cluster 2 Publisher nodes and 2 Store nodes or not?
And how do we cluster the Key Manager? Do we use the same settings as the Identity Manager?
Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports specified by the port offset?
Should we create a separate external database for each CarbonDB datasource entry in the master_datasource.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know whether I am correct in doing this. [Image: the wso2 databases I created]
I made several copies of the wso2 binary files, as shown in the image, and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on one server?
For load balancing, which components should we load balance and which ports should be used for load balancing?
That configuration is achievable, but the Analytics servers are best run on separate servers as they utilize a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID to verify which ports are in use.
Every server needs a local database, and the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing it. Another way is letting a single server act as multiple components. For example, you can run the Publisher and Store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every component. For the Traffic Manager, you can use fail-over. Here are the ports you need to load balance:
Servlet port - 9443(https)/9763 (For admin console and admin services)
NIO port - 8243(https)/8280 (For API calls at gateway)

R web server that handles sessions

I am not sure whether this is the right place to ask this question; please point me to the right place if it is not.
I must build a multi-user, stateful (sessions; object persistence) web application that will use .NET in the backend and must connect to R in order to perform calculations on data that lives in a SQL Server 2016 DB. Basically, I need to connect a Microsoft-based backend with R.
Everything is clear except for one problem: I need to find an R server that handles sessions. I know Shiny, but I can't use it (long story).
rApache and OpenCPU do not handle sessions.
Rserve for Windows is very limited: parallel connections are not supported, subsequent connections share the same namespace, and sessions are not supported (a consequence of the lack of parallel connections).
Finally, I have seen Rook (i.e. Run R/Rook as a web server on startup), but I can't find anywhere, even in the docs, whether it is able to deal with sessions. My question is: is there a non-stateless R web server, or does anyone know whether Rook is stateless?
EDIT:
Apparently, this question has been around for a while: http://jeffreyhorner.tumblr.com/about#comment-789093732

How does the Realm Mobile Platform scale?

You could say I am a fan of the Realm Mobile Platform. I'm using it and it seems to be working well.
However, I am confused about how to operate it in production. It seems to be deployed to only one server, and even the Professional and Enterprise editions are running on my single server.
Assuming Realm has thought of this (as the Enterprise edition supports 'enterprise scaling'), how does this work if all clients point to my own server URL?
Another question is how to monitor the load on that server.
Thanks!
The Professional Edition and the Enterprise Edition emit statsd-compatible metrics that allow you to track the usage and load on each node in a Realm Object Server cluster. These metrics are also used internally in the cluster to display statistics about its health.
We are obviously still adding metrics as we understand more about our customers' use cases, and fine-tuning the ones that we have.
With regards to the way the clustering works, we are currently implementing this according to an iterative process, where we add more and more features, and more and more resilience to the system with every passing day.
Basically, we have a logical load balancer process, which receives the incoming client connections and then dispatches them to a node inside the cluster. This logical load balancer can itself be HA'd and LB'd as well, just like any regular WS connection handler. Handling many connections these days is easy; it's the quadratic merge algorithms that are expensive on the Realm Object Server, which is why clustering is required for deployments at scale.

Can Oracle Coherence run embedded in application server process like Hazelcast?

We're considering using Coherence to replace Hazelcast. We currently run Hazelcast in embedded mode, inside our application server process. I wonder if Coherence can also run like this? I couldn't find a document confirming it.
There are 3 popular ways to deploy Coherence with an application server:
1) Client/server - using the Coherence*Extend protocol, or using the HTTP/REST protocol. This allows the application server to be operated independently of the Coherence cluster, and is simpler and safer as a result, but can have slightly higher latency.
2) In cluster, but using separate dedicated cache servers - this is called "storage disabled" in which the application server doesn't use any memory for managing the Coherence data, and instead separate processes are running in the cluster just to manage that data.
3) In process (i.e. embedded into the application or into the server) - this is the original Coherence deployment model, but it has grown less popular than the other models. A minimal embedded sketch follows this list.
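As a minimal sketch of option 3 (assumed names, not an official example): the application JVM itself joins the cluster and, by default, also stores data, which is the closest equivalent to Hazelcast's embedded mode.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Illustrative embedded usage: the application JVM joins the Coherence cluster
// directly (storage enabled by default), much like Hazelcast's embedded mode.
public class EmbeddedCoherenceExample {

    public static void main(String[] args) {
        // Join (or form) the cluster in this process.
        CacheFactory.ensureCluster();

        NamedCache cache = CacheFactory.getCache("example"); // assumed cache name
        cache.put("greeting", "hello from the embedded member");
        System.out.println(cache.get("greeting"));

        CacheFactory.shutdown();
    }
}
```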
I have been using Oracle Coherence for 5+ years.
To answer your question: YES, Coherence can run within an application process. This is called in-process deployment; Coherence supports both in-process and out-of-process deployment approaches.
I wrote a blog post a few years back on session management using Coherence; I hope it helps:
http://ankurkumar78.blogspot.in/2011/08/oracle-coherence-best-practices-in.html

How to set up a BizTalk active/active cluster

I am setting up a virtual environment as a proof of concept with the following architecture:
2 node web farm
2 node SQL active/passive fail-over cluster
2 node BizTalk active/active cluster
The first two are straightforward; now I'm wondering about the BizTalk cluster.
If I followed the same model as setting up SQL (by using the Failover Cluster Manager in Windows to create a cluster), I think I would end up with an active/passive cluster.
What makes a BizTalk cluster Active/Active?
Do I need to create a windows cluster first, or do I just install BizTalk on both machines and configure BizTalk appropriately?
Yes, my understanding is that you do need to cluster the OS first.
That said, you can usually avoid the need for clustering unless you need to cluster one of the 'pull' receive handlers like FTP, MSMQ, SAP, etc. For everything else, IMO it usually makes sense just to add multiple BizTalk servers to a group and then use NLB for, e.g., WCF receive adapters.
The rationale is that by running multiple host instances of each 'type' (e.g. 2+ Receive, 2+ Process, 2+ Send), you also have the ability to stop and start host instances without any downtime, e.g. for maintenance (patches), application deployment, etc.
The one caveat with the group approach is that the SSO master doesn't fail over automatically, although this isn't usually a problem, as the other servers will still be able to work from their cache.
You can configure a BizTalk Group in a multi-computer environment. You can refer to the document available at the MSDN download center for more details; it has a section titled "Considerations for clustering BizTalk Server in a Multiple Server environment".
You can additionally configure your BizTalk host as a clustered resource; refer to the documentation available at MSDN for more details.
