We have a GlassFish cluster with two instances.
One of my EJB applications is deployed and running on this cluster.
Now I have another EJB timer application which I want to deploy on the GlassFish server (domain), not on the cluster, and from there access the cluster's EJBs. As per my understanding, an EJB timer cannot be deployed on a cluster because it would run on both instances of the cluster.
What are the possible ways to access it?
Thanks
You can actually deploy EJB timers on a cluster. They will only execute on one instance. At startup, each @Schedule bean is assigned to an instance in a round-robin manner. If an instance fails, its timers will fail over to the next healthy instance.
Remember to follow the setup procedure for EJB timers as described here: http://docs.oracle.com/cd/E18930_01/html/821-2418/beahw.html. In short, you need to specify an XA datasource for the timers database instead of using the default embedded one.
We are running several @Schedule beans in a GlassFish clustered production environment.
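For reference, a timer bean of the kind we run looks roughly like this (a minimal sketch; the bean name and schedule are made up). With the timer service backed by the XA datasource described above, the timer is persisted in the shared database, fires on only one instance, and fails over if that instance goes down.

import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class NightlyCleanupTimer {

    // Persistent by default; fires every day at 02:00 on exactly one cluster instance.
    @Schedule(hour = "2", minute = "0", persistent = true)
    public void runCleanup() {
        // business logic goes here
    }
}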
I'm currently in the process of migrating 3 applications from Elastic Kubernetes Service (EKS) to ECS Fargate. Each application is built with Node.js. The current setup seems to be only 1 load balancer in front of one application, and the other two applications are accessed through that one load balancer. This is currently how all three applications are accessed:
first_app.example.com
first_app.example.com/second_app
first_app.example.com/third_app
The front end of each application is powered by an nginx proxy in EKS. I'm not entirely sure if I need nginx in ECS Fargate, because the application load balancer I'm planning to use will have an SSL cert integrated with it for redirects from HTTP to HTTPS. I'm a little unclear on how to approach moving these applications to Fargate. Additionally, the third app has 3 additional functions:
Apollo GraphQL (abstraction layer between the front end & back end)
CSV
File Manager
This functionality also needs to be implemented on the Fargate side.
Currently I have set up one ECS Fargate cluster, one ECS service, and one task definition. The task definition currently has the following 7 ECR images:
app_one_front_end
app_two_front_end
app_three_front_end
app_three_csv_job
app_three_file_manager_job
app_three_graphql
nginx ??
All of these images are stored in ECR. However I don't believe I need nginx in this Fargate cluster.
I'm a little unsure how to approach the architecture for this set of applications. It seems I can only have one task definition running in a service, which is why all containers were implemented in one task definition. The service can then be associated with an application load balancer where I set up path-based routing to access each application.
Any advice on how to approach this migration would be appreciated.
Thanks!
Each Kubernetes Replica Set should be converted to an ECS Service. Each Kubernetes Pod would be converted to an ECS Task.
Kubernetes Replica Set == ECS Service
Kubernetes Pod == ECS Task
If you had multiple Replica Sets in Kubernetes so that your pods could scale independently, then to get the same scalability in ECS you would configure them as separate services with independent scaling configurations.
You are correct in that you probably don't need the Nginx container in ECS.
It seems I can only have one task definition running in a service, which is why all containers were implemented in one task definition.
Services can communicate with each other. You would enable ECS Service Discovery to facilitate that. However it is fine to have them all in the same Task/Service if they don't need to be scaled out independently.
Also, multiple services can be associated with a single Application Load Balancer by creating different listener rules in the load balancer that map to different Target Groups, if that is something you need. You might need to have multiple Target Groups even if you only have a single ECS Service, because you will need to map different load balancer listeners to different containers in your task. That basically allows the Application Load Balancer to perform the job that Nginx was doing in Kubernetes.
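To illustrate the listener-rule idea (a sketch only; the ARNs, priority, and path pattern are placeholders), creating a rule that forwards one of your paths to its own target group with the AWS SDK for Java v2 would look roughly like this:

import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Action;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.ActionTypeEnum;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateRuleRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RuleCondition;

public class AlbPathRoutingSketch {
    public static void main(String[] args) {
        // Placeholder ARNs -- substitute the listener on your ALB and the target group
        // that maps to the second app's container/port in the ECS service.
        String listenerArn = "arn:aws:elasticloadbalancing:region:account:listener/app/example/xxx/yyy";
        String secondAppTargetGroupArn = "arn:aws:elasticloadbalancing:region:account:targetgroup/second-app/zzz";

        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {
            // Requests to first_app.example.com/second_app/* are forwarded to the
            // second app's target group; the default listener action still serves the first app.
            elb.createRule(CreateRuleRequest.builder()
                    .listenerArn(listenerArn)
                    .priority(10)
                    .conditions(RuleCondition.builder()
                            .field("path-pattern")
                            .values("/second_app/*")
                            .build())
                    .actions(Action.builder()
                            .type(ActionTypeEnum.FORWARD)
                            .targetGroupArn(secondAppTargetGroupArn)
                            .build())
                    .build());
        }
    }
}

The same can of course be done through the console or CloudFormation; the point is that each path pattern maps to its own target group, which replaces the routing Nginx was doing.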
How can I deploy and run the Corda nodes of the Spring-webserver-based "Yo!CorDapp" example (https://github.com/joeldudleyr3/spring-observable-stream) on separate machines?
What configuration changes do I need to make for this?
As long as you are running each server on the same machine as the node it talks to, there shouldn't be any configuration required.
Simply start the nodes on their separate machines, then start the webserver on each machine, with the application properties modified or overridden to point to that node's RPC port.
Since the nodes are on separate machines, it's even possible to use the same RPC port for all nodes, since the IP address will differ.
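For reference, the per-machine change boils down to the RPC address the webserver connects to. The connection it makes looks roughly like this (a rough Java sketch; the exact property names and wiring in the spring-observable-stream repository may differ):

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class NodeRpcConnectionSketch {
    public static void main(String[] args) {
        // On each machine, point this at the local node's RPC address as configured
        // in that node's node.conf. Credentials are placeholders.
        NetworkHostAndPort rpcAddress = NetworkHostAndPort.parse("localhost:10006");

        CordaRPCClient client = new CordaRPCClient(rpcAddress);
        CordaRPCConnection connection = client.start("user1", "password");
        CordaRPCOps proxy = connection.getProxy();

        // Simple check that the webserver is talking to the node it sits next to.
        System.out.println("Connected to: " + proxy.nodeInfo());

        connection.close();
    }
}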
I have a Java EE client-server application running on WildFly 10. I'm using stateful EJBs so the clients can communicate with the server, with load balancing and failover.
In one production environment we will have dozens of servers in a cluster. So I'm worried about clustering, because I don't want EVERY stateful EJB to be replicated to EVERY server in the cluster. I think that overhead would be terrible.
Is there any way to configure my stateful EJBs or WildFly so my EJBs won't be replicated but are still load balanced across the cluster?
So here is the question. I have an application running on WebLogic 8 which depends heavily on JMS messages. The messages flow from a clustered MQ server, and the application, which uses EJB 2, used to listen to MQ directly using configuration in weblogic-ejb-jar.xml. The MQ server has two queue managers and two different connection factory names for those managers.
<weblogic-enterprise-bean>
  <ejb-name>MDB_QM1</ejb-name>
  <message-driven-descriptor>
    <destination-jndi-name>QM1</destination-jndi-name>
    <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
    <connection-factory-jndi-name>CF1</connection-factory-jndi-name>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
<weblogic-enterprise-bean>
  <ejb-name>MDB_QM2</ejb-name>
  <message-driven-descriptor>
    <destination-jndi-name>QM2</destination-jndi-name>
    <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
    <connection-factory-jndi-name>CF2</connection-factory-jndi-name>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
Now the application is to be migrated to WebLogic 10.3, and the configuration has already been done in test for that. The EJB version has changed to 3, and they have used annotations in the MDBs for the queue configuration and such. But the real problem is this: the application used to listen to MQ directly, and now they have configured a bridge which transfers the messages from MQ to an internal queue, and the MDB listens to that queue. The source MQ queue in the WebLogic bridge configuration has only one connection factory, and I am not sure if it is possible to configure multiple queue managers, connection factories, etc. for a single queue. But in production there will be multiple queue managers.
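For context, the annotation-based MDB configuration mentioned above looks roughly like this (a sketch only; the internal queue's JNDI name is a placeholder):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// EJB 3 style: the MDB now listens on the internal WebLogic queue that the
// messaging bridge feeds, instead of listening to MQ directly.
@MessageDriven(
    name = "MDB_QM1",
    mappedName = "jms/internalBridgeQueue",   // placeholder JNDI name of the internal queue
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
    }
)
public class BridgeQueueMdb implements MessageListener {
    public void onMessage(Message message) {
        // process the bridged message here
    }
}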
I think that if I configure foreign servers for those queues, then the clustering will be possible. But that would mean a huge change in the application and in the WebLogic configuration as well. So the ideal solution I am searching for is a way to connect to those multiple MQ queue managers with the existing bridge configuration. If that's not possible, please suggest the next best thing. I am open to all ideas :)
Thanks
A WebSphere MQ cluster is all about how QMgrs talk to each other and does not hide from the application the fact that there are multiple physical instances of a queue. A separate connection is required to each queue manager instance. The app will need to either...
Use an instance of the bridge for each queue instance.
Configure the bridge to make multiple simultaneous connections to the different QMgrs.
Whoever made the decision to configure a bridge with a single connection factory did not consider the architecture of the underlying transport. You cannot overcome that poor decision with configuration, no matter how hard you try.
I am trying to get a message-driven (EJB 3) bean to subscribe to a JMS topic on another GlassFish instance on another host. Is this possible?
In the GlassFish console you can modify the JMS server and point it to another GlassFish instance or a standalone OpenMQ broker. Although you can configure several JMS hosts, to my knowledge GlassFish will always use the one called default_JMS_host, so that's the one you want to edit.
Just one thing: in such a setup the two server instances will share queues and topics, which may not be what you want if, for example, the two servers are running the same application but shouldn't share a particular queue. This can easily be solved via the Destination Resources configuration, by specifying different physical names for that queue.