How to set the value of max-beans-in-cache in weblogic-ejb-jar.xml - ejb

I need to understand which factors to consider in order to set the max-beans-in-cache parameter for my EJB. Can someone help me, please?
My scenario:
OSB has a single cluster of 4 servers, each server running 3 services.
My app server is a group of 4 clusters, where each cluster has 2 managed servers. My EJB is deployed on the cluster.
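For reference, max-beans-in-cache is configured per bean in weblogic-ejb-jar.xml. A minimal sketch for an entity bean (the bean name AccountBean and the numbers are placeholders, not recommendations; stateful session beans use the analogous stateful-session-descriptor/stateful-session-cache element instead):

```xml
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>AccountBean</ejb-name>
    <entity-descriptor>
      <entity-cache>
        <!-- upper bound on bean instances held in this server's cache -->
        <max-beans-in-cache>1000</max-beans-in-cache>
        <!-- seconds an idle instance may stay cached before removal -->
        <idle-timeout-seconds>600</idle-timeout-seconds>
      </entity-cache>
    </entity-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

A common starting point is to size the cache around the peak number of concurrent requests per managed server that touch the bean, then tune it against heap usage and passivation/activation rates observed under load; since the bean is deployed on a cluster, each managed server maintains its own cache of this size.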

Related

How to set up an Airflow > 2.0 high availability cluster on CentOS 7 or above

I want to set up HA for Airflow (2.3.1) on CentOS 7. Message queue: RabbitMQ; metadata DB: Postgres. Does anybody know how to set it up?
Your question is very broad, because high availability has multiple levels and definitions:
Airflow availability: multiple schedulers, multiple workers, auto-scaling to avoid pressure, high storage volume, ...
The databases: an HA cluster for RabbitMQ and an HA cluster for Postgres
Even if you have the first two levels, how many nodes do you want to use? You cannot put everything on the same node; you need to run one service replica per node.
Suppose you did that, and you now have 3 different nodes running in the same data center: what if there is a fire in the data center? So you need to use multiple nodes in different regions.
After doing all of the above, is there still a risk of network problems? Of course there is.
If you just want to run Airflow in HA mode, you have multiple options to do that on any OS:
docker compose: usually used for development, but you can use it in production too; you can create multiple scheduler instances with multiple workers, which helps improve the availability of your service
docker swarm: similar to docker compose with additional features (scaling, multiple nodes, ...); you will not find many resources on installing it, but you can reuse the compose files with just a few changes
kubernetes: the best solution; K8s can help you ensure the availability of your services, and installation is easy with Helm
or just running the different services directly on your host: not recommended, because of the manual tasks involved, and applying HA is complicated
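To make the docker compose option concrete, here is a trimmed sketch with two schedulers and two Celery workers against RabbitMQ and Postgres (image tag matches the version in the question; credentials, host names, and service names are assumptions, and volumes, DB init, and healthchecks are omitted for brevity):

```yaml
# Running two schedulers requires Airflow 2.x with a Postgres metadata DB.
x-airflow-common: &airflow-common
  image: apache/airflow:2.3.1
  environment:
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: amqp://guest:guest@rabbitmq:5672/
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
  rabbitmq:
    image: rabbitmq:3-management
  scheduler-1:
    <<: *airflow-common
    command: scheduler
  scheduler-2:          # second scheduler: survives loss of scheduler-1
    <<: *airflow-common
    command: scheduler
  worker-1:
    <<: *airflow-common
    command: celery worker
  worker-2:
    <<: *airflow-common
    command: celery worker
  webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
```

Note that with everything on one Docker host this only protects against process failure, not node failure; for node-level HA you would spread these services across machines (swarm or Kubernetes), per the trade-offs above.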

Load balancing on same server

I was researching Kubernetes and saw that it does load balancing on a single node. If I'm not wrong, one node means one server machine, so what good is load balancing on the same server machine, since the same CPU and RAM will handle the requests? I had thought load balancing was done across separate machines to share CPU and RAM resources. So I want to know the point of doing load balancing on the same server.
Just because you can do it on one node doesn't mean you should, especially in a production environment.
A production cluster will have at least 3 or 5 nodes.
Kubernetes will spread the replicas across the cluster nodes to balance node workload, so pods end up on different nodes.
You can also configure which nodes your pods land on:
use advanced scheduling with pod affinity and anti-affinity;
you can even plug in your own scheduler that refuses to place replica pods of the same app on the same node.
Then you define a Service to load-balance across the pods on different nodes;
kube-proxy will do the rest.
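The anti-affinity point can be sketched with a standard Deployment spec (app name and image are placeholders); the required rule below tells the scheduler never to co-locate two replicas on the same node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: no two pods with label app=my-app on one node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: nginx:1.25
```

With the `required...` form, a fourth replica would stay Pending on a 3-node cluster; use `preferredDuringSchedulingIgnoredDuringExecution` if you want spreading as a soft preference instead.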
here is a useful read:
https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7
So you generally need to choose a level of availability you are
comfortable with. For example, if you are running three nodes in three
separate availability zones, you may choose to be resilient to a
single node failure. Losing two nodes might bring your application
down but the odds of losing two data centres in separate availability
zones are low.
The bottom line is that there is no universal approach; only you can
know what works for your business and the level of risk you deem
acceptable.
I guess you mean how Services do automatic load-balancing. Imagine you have a Deployment with 2 replicas on your one node, and a Service. Traffic to the Pods goes through the Service, so if that were not load-balanced, everything would go to just one Pod and the other Pod would get nothing. By spreading the load evenly you can handle more traffic, and you can still be confident that requests will be served if one Pod dies.
You can also load-balance traffic coming into the cluster from outside so that the entrypoint to the cluster isn't always the same node. But that is a different level of load-balancing. Even with one node you can still want load-balancing for the Services within the cluster. See Clarify Ingress load balancer on load-balancing of external entrypoint.
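The in-cluster level of load-balancing needs nothing more than a plain Service; a minimal sketch, assuming a Deployment whose Pods carry the label app: my-app (both names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # every Pod matching this label becomes an endpoint
  ports:
    - port: 80         # port clients inside the cluster connect to
      targetPort: 8080 # port the container actually listens on
```

kube-proxy distributes connections to the Service's ClusterIP across all matching Pod endpoints, whether those Pods sit on one node or many; exposing the cluster externally would add a Service of type LoadBalancer or an Ingress on top.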

How does the microservice API gateway pattern work with auto Horizontal scaling?

If I want a highly available solution, I would have two API gateways in different data centers.
Each API gateway is connected to three microservices (billing, users, and account services), and each one has three replicas.
So is it true that there are 6 copies of one microservice, and if not, how does it work?
In this scenario you'd want to deploy a single Kong cluster across multiple data centers - have a look at https://getkong.org/docs/0.10.x/clustering/
Kong supports two datastores (Postgres and Cassandra) you'd probably want to pick Cassandra, but you could make Postgres work - have a look at https://getkong.org/docs/0.10.x/configuration/#datastore-section
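For the Cassandra route, the relevant part of kong.conf (0.10.x) is short; a sketch assuming a Cassandra ring with contact points in both data centers (host names are placeholders):

```ini
# Datastore section of kong.conf: all Kong nodes in both data centers
# point at the same Cassandra cluster, forming one Kong cluster.
database = cassandra
cassandra_contact_points = cass-dc1-a,cass-dc1-b,cass-dc2-a
cassandra_keyspace = kong
cassandra_consistency = LOCAL_QUORUM
```

LOCAL_QUORUM keeps reads and writes within the local data center, which is the usual choice for a multi-DC deployment so one DC can keep serving if the other is unreachable.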
The API gateway model is a scalable solution for microservice-based architectures.
You have the gateway distributed over 2 data-centres, which helps provide high availability for the gateway; you could even consider spreading it over 3 for full multi-region coverage in the future.
If each of your microservices has 3 replicas and they are distributed into the 2 data-centres, then yes, you have 6 instances of that microservice running; however, unless the two data-centres share resources, it is effectively 3 of each microservice per data-centre.

OpenStack single node vs multi node

I'm totally new to OpenStack, and after seeing tutorials running it on both a single node and multiple nodes (at least 3, consisting of 1 controller, 1 compute, and 1 network node), I was wondering what the difference is and whether multi-node setups have any advantages over single-node ones.
An OpenStack system consists of a lot of services. If you run all of these on a single node, there will be resource-scarcity issues unless you have a machine with very high CPU, RAM, etc. Another advantage of a multi-node configuration is failover: in a 3-node configuration, if one node goes down you can continue with 2 nodes (provided you have service replication). Better to go with at least a 3-node configuration, which is what OpenStack recommends.
With a multi-node configuration, you can achieve a scale-out storage solution, adding more storage as your needs grow. Several compute nodes can also be used to increase computation capacity.

How to set up a BizTalk active/active cluster

I am setting up a virtual environment as a proof of concept with the following architecture:
2 node web farm
2 node SQL active/passive fail-over cluster
2 node BizTalk active/active cluster
The first two are straightforward; now I'm wondering about the BizTalk cluster.
If I followed the same model as setting up SQL (by using the Failover Cluster Manager in Windows to create a cluster), I think I would end up with an active/passive cluster.
What makes a BizTalk cluster Active/Active?
Do I need to create a windows cluster first, or do I just install BizTalk on both machines and configure BizTalk appropriately?
Yes, my understanding is that you do need to cluster the OS first.
That said, you can usually avoid the need for clustering unless you need to cluster one of the 'pull' receive handlers like FTP, MSMQ, SAP, etc. For everything else, IMO it usually makes sense just to add multiple BizTalk servers in a group, and then use NLB for e.g. WCF receive adapters.
The rationale is that by running multiple host instances of each 'type' (e.g. 2+ receive, 2+ process, 2+ send, etc.), you also gain the ability to stop and start host instances without any downtime, e.g. for maintenance (patches), application deployment, and so on.
The one caveat with the group approach is that the SSO master doesn't fail over automatically, although this isn't usually a problem, as the other servers will still be able to work from cache.
You can configure a BizTalk Group in a multi-computer environment. You can refer to the doc available at the MSDN download center for more details; the document specifically has a section titled "Considerations for clustering BizTalk Server in a Multiple Server environment".
You can also additionally configure your BizTalk host as a clustered resource. You can refer to the documentation available at MSDN for more details.
