Non-clustered stateful EJB in WildFly 10

I have a J2EE client-server application using WildFly 10. I'm using stateful EJBs so that clients can communicate with the server and get load balancing and failover.
In one production environment we will have dozens of servers in a cluster, so I'm worried about clustering: I don't want EVERY stateful EJB to be replicated to EVERY server in the cluster. I think that overhead would be terrible.
Is there any way to configure my stateful EJBs or WildFly so that my EJBs won't be replicated but are still load-balanced across the cluster?
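For reference, one commonly suggested approach (worth verifying against the WildFly 10 documentation for your profile) is to mark the bean as not passivation-capable, which makes WildFly use its simple, non-replicating SFSB cache; new sessions are still distributed across the cluster, but an existing session lives only on the node that created it and will not fail over. A minimal sketch, using a hypothetical ShoppingCart bean:

    import java.util.ArrayList;
    import java.util.List;
    import javax.ejb.Remote;
    import javax.ejb.Stateful;

    // Hypothetical remote business interface, for illustration only.
    @Remote
    interface ShoppingCart {
        void addItem(String item);
    }

    // passivationCapable = false tells WildFly to use its simple
    // (non-passivating, non-replicating) SFSB cache instead of the
    // distributed one, so the session state stays on the node that
    // created it. New sessions are still spread across the cluster,
    // but there is no failover for existing ones.
    @Stateful(passivationCapable = false)
    public class ShoppingCartBean implements ShoppingCart {

        private final List<String> items = new ArrayList<>();

        @Override
        public void addItem(String item) {
            items.add(item);
        }
    }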

Related

Migrating to ECS Fargate from EKS

I'm currently in the process of migrating 3 applications from Elastic Kubernetes Service (EKS) to ECS Fargate. Each application is built with Node.js. The current setup has only one load balancer in front of the first application, and the other two applications are accessed through that same load balancer. This is currently how all three applications are accessed:
first_app.example.com
first_app.example.com/second_app
first_app.example.com/third_app
The front end of each application is served by an nginx proxy in EKS. I'm not entirely sure I need nginx in ECS Fargate, because the application load balancer I'm planning to use will have an SSL cert integrated with it for redirecting HTTP to HTTPS. I'm a little unclear on how to approach moving these applications to Fargate. Additionally, the third app has 3 additional functions:
Apollo GraphQL (abstraction layer between the front end & back end)
CSV
File Manager
This functionality also needs to be implemented on the Fargate side.
Currently I have set up one ECS Fargate cluster, one ECS service, and one task definition. The task definition currently has the following 7 ECR images:
app_one_front_end
app_two_front_end
app_three_front_end
app_three_csv_job
app_three_file_manager_job
app_three_graphql
nginx ??
All of these images are stored in ECR; however, I don't believe I need nginx in this Fargate cluster.
I'm a little unsure how to approach the architecture for this set of applications. It seems I can only have one task definition running on a service, which is why all the containers were put into one task definition. The service can then be associated with an application load balancer where I set up path-based routing to access each application.
Any advice on how to approach this migration would be appreciated.
Thanks!
Each Kubernetes Replica Set should be converted to an ECS Service. Each Kubernetes Pod would be converted to an ECS Task.
Kubernetes Replica Set == ECS Service
Kubernetes Pod == ECS Task
If you had multiple Replica Sets in Kubernetes so that your pods could be scaled independently, then to get the same scalability in ECS you would configure them as separate services with independent scaling configurations.
You are correct in that you probably don't need the Nginx container in ECS.
It seems I can only have one task definition running on a service, which is why all the containers were put into one task definition.
Services can communicate with each other; you would enable ECS Service Discovery to facilitate that. However, it is fine to have them all in the same Task/Service if they don't need to be scaled out independently.
Also, multiple services can be associated with a single Application Load Balancer by creating different listener rules in the load balancer that map to different Target Groups, if that is something you need. You might need to have multiple Target Groups even if you only have a single ECS Service, because you will need to map different load balancer listeners to different containers in your task. That basically allows the Application Load Balancer to perform the job that Nginx was doing in Kubernetes.
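As a rough sketch of the listener-rule idea (assuming the AWS SDK for Java v2; the ARNs, priority, and path pattern below are placeholders), a path-based rule forwarding /second_app/* to its own target group could be created like this:

    import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.Action;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.ActionTypeEnum;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.RuleCondition;

    public class PathRoutingExample {
        public static void main(String[] args) {
            // Placeholder ARNs -- substitute your real listener and target group ARNs.
            String listenerArn = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/...";
            String secondAppTargetGroupArn = "arn:aws:elasticloadbalancing:...:targetgroup/second-app/...";

            try (ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create()) {
                // Forward requests matching /second_app/* to the second app's target
                // group; the listener's default action keeps serving the first app.
                elbv2.createRule(rule -> rule
                        .listenerArn(listenerArn)
                        .priority(10)
                        .conditions(RuleCondition.builder()
                                .field("path-pattern")
                                .values("/second_app/*")
                                .build())
                        .actions(Action.builder()
                                .type(ActionTypeEnum.FORWARD)
                                .targetGroupArn(secondAppTargetGroupArn)
                                .build()));
            }
        }
    }

The same console, CLI, or infrastructure-as-code tooling achieves the identical result; the point is simply one rule and one target group per application behind a single load balancer.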

Is it a good practice to have embedded Jetty and a gRPC server running in the same JVM?

Our organization is looking into implementing new internal APIs using gRPC.
Currently, we have a microservice that serves internal/external requests using embedded Jetty. We want internal communication between services to be done over gRPC.
So we'll have two servers running on the same VM: Jetty and gRPC. Is that a good practice? Are there any red flags with that approach?
To save costs, we do not want to split the microservice in two; we should be able to run the app on the same number of VMs.
There's nothing inherently special or wrong about having Jetty and gRPC in the same JVM. The main point of potential trouble is just that you will have two ports exposed instead of one; that might matter for service discovery or firewalls.
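For illustration, a minimal sketch of the co-located setup (ports and handler are placeholders; Jetty and grpc-java dependencies are assumed to be on the classpath) is simply two servers started from the same main method:

    import io.grpc.Server;
    import io.grpc.ServerBuilder;
    import org.eclipse.jetty.server.handler.DefaultHandler;

    public class DualServerMain {
        public static void main(String[] args) throws Exception {
            // Jetty keeps serving external HTTP traffic, here on port 8080.
            org.eclipse.jetty.server.Server jetty = new org.eclipse.jetty.server.Server(8080);
            jetty.setHandler(new DefaultHandler());
            jetty.start();

            // gRPC serves internal service-to-service calls on a second port.
            Server grpc = ServerBuilder.forPort(9090)
                    // .addService(new MyGrpcServiceImpl()) // register your generated services here
                    .build()
                    .start();

            // Keep the JVM alive until both servers shut down.
            grpc.awaitTermination();
            jetty.join();
        }
    }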

Load stress testing web applications deployed in OpenStack instances under an autoscaling group

I am testing the auto-scaling feature of OpenStack. In my test setup, Java servlet applications are deployed in Tomcat web servers behind an HAProxy load balancer. I aim to stress test the application with JMeter to see how it scales and what the response times are. However, I observe that HAProxy (or something else) terminates the connection as soon as the onComplete signal is sent by one of the member instances. Consequently, the subsequent responses from the remaining servers are reported as failures (timeouts). I have configured HAProxy to use a round-robin algorithm with sticky sessions. See the attached JMeter results tree; I am not sure of the next step to take. The web applications are asynchronous, so my expectation was that the client (HAProxy in this case) would wait until the last thread has finished before the response is sent.
Are there issues with my approach, or some flaws in my setup?
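For context, the asynchronous servlet pattern being described looks roughly like the sketch below (class names and timings are illustrative). Note that the response is committed as soon as AsyncContext.complete() is called, which is the point at which HAProxy considers the exchange finished, regardless of any other worker threads still running on other instances.

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal async servlet sketch: the HTTP connection stays open until
    // complete() is called; once it is, the load balancer treats the
    // response as finished even if other background work is still running.
    @WebServlet(urlPatterns = "/work", asyncSupported = true)
    public class AsyncWorkServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            AsyncContext ctx = req.startAsync();
            ctx.start(() -> {
                try {
                    Thread.sleep(2000); // simulate slow back-end work
                    ctx.getResponse().getWriter().println("done");
                } catch (Exception e) {
                    // log the failure and fall through to complete()
                } finally {
                    ctx.complete(); // the response is committed here
                }
            });
        }
    }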

GlassFish clustering + EJB lookup

We have a GlassFish cluster which has two instances.
One of my EJB applications is deployed and running on this cluster.
Now I have another EJB timer application which I want to deploy on the GlassFish server (domain), not on the cluster, and from it I want to access the cluster's EJB. As per my understanding, an EJB timer cannot be deployed on a cluster because it could run on both instances of the cluster.
What are the possible ways to access it?
Thanks
You can actually deploy EJB timers on a cluster. They will only execute on one instance. At startup, each @Schedule bean is assigned in a round-robin manner to an instance. If an instance fails, the timers will fail over to the next healthy instance.
Remember to follow the setup procedure for EJB timers as described here: http://docs.oracle.com/cd/E18930_01/html/821-2418/beahw.html. In short, you need to specify an XA datasource for the timers database instead of using the default embedded one.
We are running several @Schedule beans in a clustered GlassFish production environment.
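In sketch form (the schedule and class name are illustrative), such a timer is just a singleton with an @Schedule method; with the cluster-wide timer setup above, it fires on only one instance at a time:

    import javax.ejb.Schedule;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;

    // Illustrative persistent EJB timer. With the cluster-wide timer table
    // configured (the XA datasource mentioned above), GlassFish assigns the
    // timer to one instance and fails it over if that instance goes down.
    @Singleton
    @Startup
    public class NightlyCleanupTimer {

        // persistent = true stores the timer in the timer database, so it
        // survives restarts and can migrate between cluster instances.
        @Schedule(hour = "2", minute = "0", persistent = true)
        public void runNightlyCleanup() {
            // business logic goes here
        }
    }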

BizTalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

We are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an Active/Passive cluster). All servers are running Windows Server 2008 R2.
As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for high availability and load balancing.
Let's say I create two different Biztalk Hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one for each of my two physical servers).
After reviewing the BizTalk documentation, this is what I know so far:
For FILE (Receive), high availability and load balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group.
MSMQ (Receive) requires BizTalk host clustering to ensure high availability (host clustering, however, requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here.
SOAP (Receive) requires NLB to achieve Load-balancing and High-availability (if one server goes down, NLB will direct traffic to the other).
This is where I'm completely puzzled and I desperately need your help:
Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers?
If yes, then please tell me how.
If no, then please explain to me how anyone is achieving high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive!
Your help is greatly appreciated,
M
Microsoft doesn't support NLB and MSCS running on the same servers:
"These two components work well together in a two or three tier application model running on separate computers. Be aware that running these two components on the same computer is unsupported and is not recommended by Microsoft due to potential hardware sharing conflicts between Cluster service and Network Load Balancing."
http://support.microsoft.com/kb/235305
If you want to provide HA for SOAP requests received in BizTalk, you should configure your BizTalk servers in an Active/Active configuration (no MSCS) in the same BizTalk group. Once you do this, you install and configure NLB between the two. Your clients will be able to query the web services through the NLB cluster, and the NLB service will route each request to a specific server within the cluster (your .asmx files should be installed and configured on both servers).
Regarding MSMQ, the information you have obtained so far is right: the only way to assure HA for this adapter is clustering the BizTalk servers. If you want to implement this too, then you must have separate infrastructure for the SOAP receive hosts and the MSMQ ones.
The main reason for this is that a BizTalk Isolated Host is not cluster-aware, so a BizTalk In-Process Host could be completely hung and the Isolated Host would never know it and would continue to receive requests.
I'm currently designing an architecture very similar so if you would like to share more comments or questions you can reach me at ignacioquijas#hotmail.com
Ignacio Quijas
Microsoft BizTalk Server Specialist
