Can Oracle Coherence run embedded in an application server process like Hazelcast?

We're considering using Coherence to replace Hazelcast. Currently we run Hazelcast in embedded mode, inside our application server process. Can Coherence also run like this? I couldn't find documentation confirming it.

There are 3 popular ways to deploy Coherence with an application server:
1) Client/server - using the Coherence*Extend protocol, or the HTTP/REST API. This allows the application server to be operated independently of the Coherence cluster, which makes it simpler and safer, but it can have slightly higher latency.
2) In cluster, but using separate dedicated cache servers - this is called "storage disabled" operation, in which the application server doesn't use any of its memory to manage the Coherence data; instead, separate processes run in the cluster just to manage that data.
3) In process (i.e. embedded into the application or into the server) - this is the original Coherence deployment model, but it has become less popular than the other two; a minimal sketch of this model follows below.
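To make model 3 concrete, here is a minimal, hedged sketch of a JVM that joins the cluster and stores cache data in-process. It assumes coherence.jar on the classpath and a 12.x-style property name (older releases prefix properties with "tangosol."):

```java
// Minimal sketch of in-process (embedded) Coherence - model 3 above.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class EmbeddedCacheDemo {
    public static void main(String[] args) {
        // true  -> this JVM also stores cache data (model 3, embedded)
        // false -> storage-disabled cluster member (model 2)
        System.setProperty("coherence.distributed.localstorage", "true");

        NamedCache cache = CacheFactory.getCache("example");
        cache.put("greeting", "hello from the embedded member");
        System.out.println(cache.get("greeting"));

        CacheFactory.shutdown();
    }
}
```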

I have been using Oracle Coherence for 5+ years.
To answer your question: YES - Coherence can run within an application process. This is called in-process deployment; Coherence supports both in-process and out-of-process deployment approaches.
I wrote a blog post a few years back on session management using Coherence; I hope it helps:
http://ankurkumar78.blogspot.in/2011/08/oracle-coherence-best-practices-in.html

Related

How does the Realm Mobile Platform scale?

You could say I am a fan of the Realm Mobile Platform. I'm using it and it seems to be working well.
However, I am confused about how to operate it in production. It seems to be deployed to only one server, and even the Professional and Enterprise editions run on my single server.
Assuming Realm have thought of this (as the Enterprise edition supports 'enterprise scaling'), how does this work if all clients point to my own server URL?
Another question is how to monitor the load on that server.
Thanks!
The Professional Edition and the Enterprise Edition emit statsd-compatible metrics which allow you to track the usage and load on each node in a Realm Object Server cluster. These metrics are also used internally inside the cluster to display statistics about the health of the cluster.
We are, of course, still adding metrics as we learn more about our customers' use cases, and fine-tuning the ones we have.
With regards to the way the clustering works, we are implementing it iteratively, adding more features and more resilience to the system with every release.
Basically, we have a logical load balancer process which receives the incoming client connections and dispatches them to a node inside the cluster. This logical load balancer can itself be made highly available and load-balanced, just like any regular WebSocket connection handler. Handling many connections these days is easy; it's the quadratic merge algorithms that are expensive on the Realm Object Server, which is why the clustering is required for deployments at scale.
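On the monitoring question: statsd is a plain UDP text protocol, so any statsd-compatible collector can consume the metrics. As a generic illustration (this is not Realm-specific API; the port is the conventional statsd default and an assumption here), a trivial listener looks like this:

```java
// Generic sketch of a statsd-compatible UDP listener; statsd metrics are
// plain text datagrams such as "connections.active:42|g".
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class StatsdListener {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(8125)) { // assumed port
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String metric = new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8);
                System.out.println(metric); // e.g. forward to your dashboard
            }
        }
    }
}
```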

Should I use a standalone Coherence server for a Java web service to read cached data?

I am new to Oracle Coherence.
Basically we have some data and we want a Java/BPEL web service to get that data from the Coherence cache instead of the database. [We are planning to load all of that data into the cache server.]
So we have the questions below before we start on this solution.
The web service we are planning to build will be plain Java.
And all operations are read-only.
Questions
1. Does Coherence need to be a standalone server (downloaded from Oracle, installed separately, and run as the default cache server)?
2. If so, we are planning to preload the data from the database into the cache server using code. I hope that's possible? Any pointers would be helpful.
3. How does the web service connect to the Coherence server if the web service runs on a different machine than the Coherence server?
(OR)
Is it mandatory that the web service and Coherence run on the same machine?
If the web service can run on a different machine, how does the web service code connect to the Coherence server (any code sample or URL would be helpful)?
Also, what about the Coherence that comes with WebLogic? I assume it is not a fit for our application's design?! If it is, what type of WebLogic-with-Coherence solution should we go for?
FYI: Our goal is simple - we want to store the data in a cache server and have our new web service retrieve the data from the cache server instead of the database (because we are planning to avoid the database trip).
Well, your questions are quite open and probably have more than one correct answer. I'll try to answer all of them.
First, please take into consideration that Coherence is not a free tool; you have to pay for a license.
Now, to the answers:
Basically, Coherence has 2 parts: proxy and server. The first is responsible for routing your requests and the second for hosting the data. You can run both together in the same process, but this has pros and cons. One con is that the process will be very loaded and the memory will be shared between the two kinds of work. A pro is that it is very simple to run.
You can preload all the data from the DB. To do that, you write your own code: define your own cache store (look for the CacheStore keyword in the Coherence docs) and override the loadAll method; a sketch follows below.
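As a hedged sketch of that approach (the JDBC URL, credentials, and the "products" table are hypothetical), a read-through loader based on Coherence's CacheLoader interface might look like this:

```java
// Sketch of a Coherence CacheLoader for read-through loading from a DB.
import com.tangosol.net.cache.CacheLoader;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class ProductCacheLoader implements CacheLoader {
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/APP"; // assumption

    @Override
    public Object load(Object key) {
        try (Connection con = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM products WHERE id = ?")) {
            ps.setObject(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (SQLException e) {
            throw new RuntimeException("load failed for key " + key, e);
        }
    }

    @Override
    public Map loadAll(Collection keys) {
        // Naive per-key loading; a production loader would batch the query.
        Map<Object, Object> result = new HashMap<>();
        for (Object key : keys) {
            Object value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }
}
```

The loader is wired to a cache through the cache configuration, and a one-off preload can then simply iterate the DB keys and call getAll on the cache to pull everything in.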
As far as I remember, Coherence comes together with WebLogic - the license for one covers the other, and they ship in the same product. I'm not familiar with WebLogic, but I suppose it is a service of the package. In any case, for connecting to Coherence you can refer to Configuring and Managing Coherence Clusters.
The Coherence services can run on different machines, on different networks, and even in different parts of the world if you want. Each component - proxy, server, consumer, and DB - could be on a different network. Everything is configurable: you tell your WebLogic server where the Coherence proxy is, you set the addresses in the Coherence proxy/server configuration, and you configure your Coherence server to find its database. It is a bit complicated to explain everything here.
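To answer the "different machine" part with code: with Coherence*Extend, the client JVM does not join the cluster at all; it connects over TCP to the proxy. A minimal, hedged client sketch (the config file name and cache name are assumptions; the file must map caches to a remote-cache-scheme that lists the proxy's host and port):

```java
// Hedged sketch of a Coherence*Extend client running on a different
// machine than the cluster. Assumes a client-side cache configuration
// file (extend-client-config.xml) whose <remote-cache-scheme> points at
// the proxy's address and port via its <tcp-initiator> section.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteWebServiceClient {
    public static void main(String[] args) {
        // Point Coherence at the Extend client configuration
        // (older releases use the "tangosol.coherence.cacheconfig" name).
        System.setProperty("coherence.cacheconfig", "extend-client-config.xml");

        NamedCache products = CacheFactory.getCache("products");
        Object value = products.get("42"); // read-only lookup over TCP
        System.out.println(value);
    }
}
```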
I think I answered before.
Just take into consideration that Coherence is a very powerful tool but complicated to operate and troubleshoot. Consider the pros and cons versus accessing your DB directly, and think about whether you really need it.
If you have specific questions, please don't hesitate. It is a bit complicated to explain everything here, since you're trying to set up one of the most complicated systems I have ever seen. But it is excellent and I really recommend it.
EDIT:
Basically, Coherence is composed of 2 main parts: proxy and server. The names are a bit confusing since both are servers, but the proxy serves clients trying to perform cache operations (CRUD), while the "servers" serve the proxies. The proxy is responsible for receiving all requests, processing them, and routing them, according to their keys, to the server that holds the data (or that would be responsible for holding it if the operation requires a load). So the answer to your question is: YES, you need at least one active proxy in your cluster, otherwise you'll be unable to operate correctly. It can run on one of your machines or on a third one. It is recommended to run more than one proxy for HA purposes, and proxies can act as servers as well (by setting the localstorage flag to true). I know, it is a bit complicated, and I recommend following the Oracle docs.
Essentially, there are 2 types of Coherence installation.
1) Stand-alone installation (without a WebLogic Server in the mix)
2) Managed installation (with WebLogic Server in the mix)
Here are a few characteristics for each of the above
Stand-alone installation (without a WebLogic Server in the mix)
Download the Coherence installation package and install (without any dependency on existing WebLogic or FMW installations)
Setup and Configure the Coherence Servers from the command-line
Administer and Maintain the Coherence Servers from the command-line
Managed installation (with WebLogic Server in the mix)
Utilize the existing installation of Coherence that was installed when WebLogic or FMW was installed
Setup and Configure the Managed Coherence Servers to work with WebLogic Server
Administer and Maintain the Managed Coherence Servers via the WebLogic console
Note the key difference in terminology, Coherence Servers (no WL dependency) vs. Managed Coherence Servers (with WL dependency)
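For the stand-alone flavor, the dedicated cache server process is normally just a JVM running com.tangosol.net.DefaultCacheServer. A hedged sketch of launching it programmatically (the property name assumes a 12.x-style release; older releases use the "tangosol." prefix):

```java
// Sketch of a dedicated, storage-enabled cache server process -
// equivalent to launching com.tangosol.net.DefaultCacheServer from the
// command line, as a stand-alone installation would.
import com.tangosol.net.DefaultCacheServer;

public class CacheServerLauncher {
    public static void main(String[] args) {
        // Dedicated cache servers hold the data, so storage stays enabled.
        System.setProperty("coherence.distributed.localstorage", "true");
        DefaultCacheServer.main(args); // blocks and serves cache data
    }
}
```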

Highly configurable and efficient ESB / SOA / integration framework

My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, with the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB is close to these requirements; however, it requires redeployment of a service to change the routing (assuming the connections are HTTP BC based). I'm unfamiliar in this regard with Mule ESB, WSO2, JBoss ESB, and other open source ESBs... Is there any good solution for this (e.g. configurable in-memory and/or remote routing)? I don't really care about clustering, as I plan to use the servers separately, and the designated JMS solution (if required) would be HornetQ, if that matters.
You mention several different concepts, but a combination of an ESB pattern, the Apache load balancer, and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern and the choice of product will be easy - it either does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you SOA for executives
Cheers
After a long discussion about the pros and cons, we are going to have a HornetQ-based (JMS MQ) solution, where we create message routing rules and, where needed, processing code that handles the different kinds of routing. HornetQ can handle the in-JVM requirement too, but that part will be covered under the hood. A sketch of the routing idea follows below.
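To illustrate (the queue names, the JNDI entry, and the lookupRoute helper are all hypothetical), a sketch of a JMS-based router whose targets can change at runtime without any redeployment:

```java
// Sketch of on-the-fly routing over JMS (works against HornetQ or any
// JMS provider). The target queue is resolved per message, so routing
// rules can change without restarting or redeploying anything.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class ConfigurableRouter {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes jndi.properties
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer in = session.createConsumer(session.createQueue("service.in"));
            con.start();
            while (true) {
                Message msg = in.receive();
                // Resolve the target per message from a hot-reloadable table.
                String target = lookupRoute(msg.getStringProperty("serviceName"));
                // A real router would cache producers per destination.
                session.createProducer(session.createQueue(target)).send(msg);
            }
        } finally {
            con.close();
        }
    }

    // Hypothetical: in a real system this would consult external config.
    private static String lookupRoute(String serviceName) {
        return serviceName + ".out";
    }
}
```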

How to automate integration testing that requires multiple computers?

How do you automate integration testing that requires 2 or more PCs (distributed app)? What's your strategy for performing integration testing (or performance testing) on the cases where multiple machines are involved?
Example
We need to integration-test our client/server app. To mimic the live-system, we need to deploy the client on one machine, and the server on another. Then we measure the TCP transfer speed.
There are ways to do this, but none of them are built into any frameworks that I am aware of.
Below are the three ways I have addressed it in the past:
use VMware Server/ESX - What we have done most recently is to build VM images for the server and client machines, each with a mountable second drive (data drive). We build and unit test our software; before the performance test we spin up the VMs, then deploy the code to the data drives. After that we deploy a set of test scripts to the machines and kick them off (via PowerShell). This works pretty well, has good replayability, and allows us to give the test servers to other teams/customers for their evaluation. The downside is that it is very resource intensive.
Dedicated Server & Client Test Sets - We had two different Source Repositories, one for the server and one for the client. We then went through the build as above, but one at a time, deploying the server (and testing it against the old client), deploying the client (and testing it against the old server), and then deploying both and testing the combination. This worked fairly well, but required some manual testing for certain scenarios and could get cumbersome if we needed to test multiple server changes or client changes at the same time.
Test against production only - We only ever updated the client OR the server and then we updated that part and tested it against the current production setup. The downside of this of course is that we had to deploy much slower and make incremental changes in one system or the other, deploy, test and release, then make changes in the other component. Rinse and repeat.
If you have the resources, I highly recommend #1. It's harder to set up initially, but it pays for itself very quickly, and once it's set up it's repeatable for other products as well (as long as they follow a relatively similar deployment pattern).
It depends on your setup. For example, I needed to test a group of web services that my team created/modified. During the test we deployed the app to one machine as the producer and used SoapUI to generate a few thousand transactions via many threads (from 1 to 100 threads, as I remember). That way we verified the response and the SLA (service level agreement).
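As a generic sketch of that kind of multi-threaded load generation (the endpoint URL, thread count, and request count are assumptions; SoapUI does this with far more features), plain JDK code is enough to drive a few thousand requests:

```java
// Generic load-generation sketch using only the JDK.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        final int threads = 100;            // assumption
        final int requestsPerThread = 50;   // assumption
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    try {
                        URL url = new URL("http://test-host/service"); // assumption
                        HttpURLConnection c = (HttpURLConnection) url.openConnection();
                        long start = System.nanoTime();
                        int status = c.getResponseCode(); // fires the request
                        long micros = (System.nanoTime() - start) / 1_000;
                        System.out.println(status + " in " + micros + " us");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES); // wait for all requests
    }
}
```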

High performance ASP.NET setup

I would like to ask what the best setup is for the following application:
ASP.NET 3.5 Web site - used as a presentation layer, a lot of AJAX and JS. Will not hit the server a lot.
ASP.NET WCF - a service providing all data to the application. It's responsible for validation, data modeling/preparation, and communication with the DB server.
Database - SQL Server 2005 Std, some logic is coded on the server side as stored procedures. Some of the logic can be a bit time consuming. In my opinion it's the most resource consuming part of the app.
The website can have up to 1000 users per minute. We can have up to 4 servers, each with the following configuration: dual quad-core Intel Xeon (8 cores at 2.00+ GHz), 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB contains no logic, especially no long-running logic. In many applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your 4 machines with 16 GB each (64 GB total) can hold roughly 64,000 simultaneous user sessions, which may or may not be sufficient. Both problems can be mitigated by application-level caching, but that brings its own set of problems, because now you are faced with stale data. To scale session-based sites you will need a smart load-balancing solution that supports sticky sessions; for your load you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want to have more DB machines and fewer front-end web servers.
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
