Performance of EJB Container Managed Transactions vs. Bean Managed Transactions - ejb

This is a question about the comparison between EJB Container Managed Transactions (CMT) and Bean Managed Transactions (BMT).
Which one has better performance?
What are the best practices on when to use each of them?
Thanks

Whether to choose BMT or CMT is a business decision rather than a performance one.
In my opinion there are no absolute best practices; however, here are some examples of when to prefer one or the other.
BMT
You have a stateful bean, and the global commit depends on the results of other methods. By using BMT with a stateful bean, you can in fact leave the transaction open and close/commit it once your business decision has been made (see the sketch after this list).
You want, for whatever reason, full control over your transaction boundaries.
CMT
You have a stateful bean and you want to implement the SessionSynchronization interface to be notified when the transaction begins, ends, etc. In this case your bean has to use CMT.
You have a chain of EJBs and you want all of them to be part of a single transaction; in this case you need to use CMT (although the first EJB can initiate and share its BMT transaction).
There are also some caveats to consider when using BMT.
If you use BMT with a message-driven bean, the message delivery is not part of the transaction, so the message is acknowledged by the container regardless of the transaction outcome.
If an EJB method uses BMT, that method cannot join an existing transaction.
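As a minimal sketch of the stateful BMT case above (OrderBean and its method names are hypothetical), the transaction is begun in one call and committed or rolled back in a later one, once the business decision is known:

import javax.annotation.Resource;
import javax.ejb.Stateful;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateful
@TransactionManagement(TransactionManagementType.BEAN)
public class OrderBean {

    @Resource
    private UserTransaction utx;

    public void startOrder() throws Exception {
        utx.begin();          // the transaction stays open across method calls
        // ... do some work ...
    }

    public void addItem(String item) throws Exception {
        // ... more work inside the same, still-open transaction ...
    }

    public void confirm(boolean accepted) throws Exception {
        if (accepted) {
            utx.commit();     // commit only once the business decision is made
        } else {
            utx.rollback();
        }
    }
}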

Bean Managed Transactions should be a little faster than Container Managed Transactions.
CMT requires the container to do some extra work that is not needed with BMT (illustrated in the sketch below):
The container must check whether a transaction is already started.
The container must read the @TransactionAttribute annotation of the method.
The container must start a new transaction before running the method (for TransactionAttributeType.REQUIRED with no active transaction, or REQUIRES_NEW) or throw an exception (for TransactionAttributeType.NEVER when a transaction is active, or MANDATORY when none is).
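For illustration (the bean and method names are hypothetical), these are the kinds of attributes the container inspects before each call when CMT is used:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class PaymentBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)   // join the caller's transaction or start a new one
    public void charge(long accountId, long amount) { /* ... */ }

    @TransactionAttribute(TransactionAttributeType.MANDATORY)  // throw if no transaction is active
    public void audit(String entry) { /* ... */ }

    @TransactionAttribute(TransactionAttributeType.NEVER)      // throw if a transaction is active
    public void ping() { /* ... */ }
}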

Related

Transient database contexts from separate dependencies fail for parallel queries

Background (TL;DR: I need parallel queries)
I am building a REST service that needs to answer queries very fast.
As such, I'm pre-loading a large part of the database into memory and answering from that data instead of making complex database queries for each request. This works great, and the average response time of the API is well below the requirements and a lot faster than direct database queries.
But I have a problem. The service takes about 5 minutes to start and pre-load all of its information. During this time it cannot answer queries.
Problem
I want to change this so that, during the pre-load phase, the service answers requests with direct database queries until the in-memory cache is loaded.
This leads me to a problem. I need to have multiple active queries against my database. Anyone who has tried this in EF Core has probably seen this message:
System.InvalidOperationException: A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
The first sentence on the linked page is:
Entity Framework Core does not support multiple parallel operations being run on the same DbContext instance.
I thought this would be easily solved by wrapping my cache loading into its own class and the direct queries into another, and then having each of these require its own instance of the database context. My service can then get these injected and use both of these dependencies in parallel.
This is what I have:
I have also set up my database context registration so that it uses a transient lifetime throughout:
services.AddDbContext<IDataContext, DataContext>(
    options => options.UseSqlServer(connectionString),
    ServiceLifetime.Transient,
    ServiceLifetime.Transient);
I have also enabled MultipleActiveResultSets=True in the connection string.
All of this however results in the exact same error as listed above.
Again, everything is transient except the HandlerService, which is a singleton, as I want it to keep a copy of the cache in memory and not have to load it for every request.
What have I failed to understand about the EF Core database context, or about DI in general?
I figured out what the problem was. In my case there is, as described above, one singleton handler. This handler has one (indirect) context (through DI) for fulfilling requests until the cache is loaded. When multiple parallel queries are sent to the API before the cache is loaded, this error occurs because each of these requests uses the same context. In my tests I was always hitting the parallel requests as part of startup, so the singleton service was trying to use the same DbContext for multiple requests.
My solution is, in this one place, to step outside the "normal" dependency injection and use the IServiceScopeFactory to get a new instance of the dependency used to resolve requests before the cache is loaded. Bohdan's answer led me to this conclusion and the ultimate solution.
I'm not sure whether this qualifies as a full answer, but it's too broad for a comment.
When writing .NET Core background services, which are obviously singletons too, I use IServiceScopeFactory to create services with a limited lifetime.
Here's how I create a context:
// _scopeFactory is an injected IServiceScopeFactory
using (var scope = _scopeFactory.CreateScope())
{
    // resolve a DbContext that lives only as long as this scope
    var context = scope.ServiceProvider.GetRequiredService<DbContext>();
    // ... use the context; it is disposed together with the scope ...
}
My guess is that you could inject it into your handler and use it like this too. It would also allow you to leave the context as scoped instead of transient, which is the default setting, by the way.
Hope that helps.

Should AbstractListenerContainerFactory really close the consumer it uses to check topics

In org.springframework.kafka.listener.AbstractMessageListenerContainer, when starting the container, the checkTopics method checks whether the subscribed topics exist on the broker, using a Kafka consumer that is created in a try-with-resources block.
When the consumer is closed, the closure cascades down to many Closeable associated objects, notably the key and value deserializers (see org.apache.kafka.clients.consumer.KafkaConsumer). In a Spring application, deserializers are often declared as beans, so there is only one instance of each type in the factory, and while most deserializers implement close() as a no-op, I have come across cases where closing a deserializer renders it unusable from that point on.
It seems to me that while closing a consumer sounds reasonable, given that multiple instances are spun up by Spring and the one created here is just a throwaway, the cascade down to Deserializer beans is an undesirable consequence that maybe wasn't noticed when the AbstractMessageListenerContainer was written.
There is a workaround: when creating the KafkaListenerContainerFactory, just call
factory.getContainerProperties().setMissingTopicsFatal(false);
but this removes the safety check for topic existence and seems like a bit of a hack. Is closing the consumer really the right thing to do in AbstractMessageListenerContainer?
We can and will use an AdminClient instead of a Consumer to check that the topic(s) exist.
However, this is not a panacea, since stop()ping a container also closes the consumer(s), so the same problem would exist when the container(s) are restarted.
In cases like this, it's better to let Kafka manage the lifecycle of the deserializer instead of declaring it as a bean.
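A minimal sketch of that approach, assuming a typical Spring Kafka setup (MyValue and MyValueDeserializer are hypothetical names): configure the deserializers by class in the consumer properties, so each consumer creates and closes its own instance instead of sharing a Spring-managed bean.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, MyValue> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        // the consumer instantiates (and closes) these itself, so no shared Spring bean ever gets closed
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MyValueDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}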

stateful session bean pool size

I am going through enterprise session bean material. I have doubts regarding the points below:
1) Let's say we set the pool size to 50 for some stateful session bean, and 50 different clients use them, so all 50 beans maintain some state. At what point will this state be removed, so that if a 51st client asks for a bean it does not get any previous, stale state?
2) Let's say we set the pool size to 50 for some stateless session bean, and all instances are in use at some point in time. If a 51st client comes and asks for a bean, will it wait until some bean becomes free, or will a new bean instance be created?
Stateful session beans are not normally pooled. It's possible, but their state makes them less ideal to pool as the client expects a fresh bean when obtaining a reference.
For stateless beans, yes, the 51st client will have to wait. This is normally a good thing, since it automatically moderates the resource consumption of your system. Depending on the resources you have, your workload, and the amount of work being done in a single call to a stateless session bean, you might want to tune the size of your pool.
As bkail states, the pooling semantics of @Stateless beans are vendor-specific. That said, in EJB 3.1 we added the @AccessTimeout annotation, which can be used on the bean class or methods of a @Stateless, @Stateful, or @Singleton bean.
@AccessTimeout
In a general sense, this annotation portably specifies how long a caller will wait if a wait condition occurs with concurrent access. Specific to each bean type, wait conditions occur when:
@Singleton - an @Lock(WRITE) method is being invoked and container-managed concurrency is being used. All methods are @Lock(WRITE) by default.
@Stateful - any method of the instance is being invoked and a second invocation occurs, OR the @Stateful bean is in a transaction and the caller is invoking it from outside that transaction.
@Stateless - no instances are available in the pool. As noted, however, pooling semantics, if any, are not covered by the spec. If the vendor's pooling semantics do involve a wait condition, the @AccessTimeout should apply.
Usage
The @AccessTimeout annotation is simply a convenience wrapper around the long and TimeUnit tuples commonly used in the java.util.concurrent API:
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import java.util.concurrent.TimeUnit;

@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface AccessTimeout {
    long value();
    TimeUnit unit() default TimeUnit.MILLISECONDS;
}
When explicitly set on a bean class or method, it has three possible meanings (a usage sketch follows this list):
@AccessTimeout(-1) - never time out; wait as long as it takes, potentially forever.
@AccessTimeout(0) - never wait; immediately throw ConcurrentAccessException if a wait condition would have occurred.
@AccessTimeout(value = 30, unit = TimeUnit.SECONDS) - wait up to 30 seconds if a wait condition occurs; after that, throw ConcurrentAccessTimeoutException.
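For illustration, a minimal sketch of applying the annotation to a stateless bean as described above (the bean and method names are made up; whether a pool wait actually occurs is vendor-specific):

import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.Stateless;

@Stateless
@AccessTimeout(value = 30, unit = TimeUnit.SECONDS) // wait at most 30 seconds if the vendor's pooling involves a wait
public class ReportBean {

    // overrides the class-level setting: fail immediately if a wait condition would occur
    @AccessTimeout(0)
    public void quickCheck() {
        // ...
    }

    // inherits the class-level 30-second timeout
    public void generateReport() {
        // ...
    }
}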
No standard default
Note that the value attribute has no default. This was intentional and intended to communicate that if @AccessTimeout is not used explicitly, the behavior you get is vendor-specific.
Some vendors will wait for a preconfigured time and throw javax.ejb.ConcurrentAccessException, some vendors will throw it immediately. When we were defining this annotation it became clear that all of us vendors were doing things a bit differently and enforcing a default would cause problems for existing apps.
On a similar note, prior to EJB 3.0 there was no default transaction attribute and it was different for every vendor. Thank goodness EJB 3.0 was different enough that we could finally say, "For EJB 3.0 beans the default is REQUIRED."

Why use the facade pattern in EJB?

I've read through this article trying to understand why you would want a session bean between the client and the entity bean. Is it because by letting the client access the entity bean directly, you would let the client know everything about the database?
So by having a middleman (the session bean), you only expose the part of the database that the business logic chooses to expose. Only the part of the database that is relevant to the client is visible, which possibly also increases security.
Is the above statement true?
Avoiding tight coupling between the client and the business objects, increasing manageability.
Reducing fine-grained method invocations, which minimizes calls over the network, by providing coarse-grained access to clients.
Having centralized security and transaction constraints.
Greater flexibility and ability to cope with changes.
Exposing only what is required and providing a simpler interface to the clients, hiding the underlying complexity, inner details, and interdependencies between business components.
The article you cite is COMPLETELY out of date. Check the date, it's from 2002.
There is no such thing anymore as an entity bean in EJB (they are currently retained for backwards compatibility, but are on the verge of being purged completely). Entity beans were awkward things: a model object (e.g. Person) that lives completely in the container, where access to every property (e.g. getName, getAge) requires a remote container call.
In this day and age, we have JPA entities, which are POJOs and contain only data. Don't confuse a JPA entity with the ancient EJB entity bean. They sound similar but are completely different things. JPA entities can be safely sent to a (remote) client. If you are really concerned that the names used in your entities reveal your DB structure, you could use XML mapping files instead of annotations and use completely different names.
That said, session beans can still perfectly well be used to implement the Facade pattern if that's needed. This pattern is indeed used to give clients a simplified and often restricted view of your system. It's just that the idea of using session beans as a facade for entity beans is completely outdated.
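As a rough sketch of that idea (OrderFacade, the Order entity, and the query are made-up examples), a stateless session bean can act as a facade over JPA entities, exposing one coarse-grained, transactional call while hiding the persistence details:

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderFacade {

    @PersistenceContext
    private EntityManager em;

    // one coarse-grained, transactional call; the client never touches the
    // persistence layer or the query directly
    public List<Order> findOpenOrders(long customerId) {
        return em.createQuery(
                "select o from Order o where o.customer.id = :id and o.status = 'OPEN'",
                Order.class)
            .setParameter("id", customerId)
            .getResultList();
    }
}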
It is to simplify the work of the client. The Facade presents a simple interface and hides the complexity of the model from the client. It also makes it possible for the model to change without affecting the client, as long as the facade does not change its interface.
It decouples the application logic from the business logic.
So the actual data structures and implementation can change without breaking existing code that uses the APIs.
Of course, it also hides the data structures from "unknown" applications if you expose your beans to external networks.

Communication between EJB3 Instances (Java EE inter-bean communication) possible?

I'm designing a part of a Java EE 6 application consisting of EJB3 beans. Part of the requirements is multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may be changed over time.
Currently I'm thinking of the following (a rough sketch of the listener follows the list):
SearchController (stateless session bean) formulates a set of search parameters and sends it off to a SearchListener via JMS
SearchListener (message-driven bean) receives the search parameters and instantiates a SearchWorker with those parameters
SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end time' has been reached, it ends
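A rough sketch of the listener piece of this design, assuming the parameters travel as a JMS ObjectMessage (SearchParameters, SearchWorker.hunt, and the queue name are made-up names):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/SearchRequests") // hypothetical queue
})
public class SearchListener implements MessageListener {

    @EJB
    private SearchWorker worker; // hypothetical SLSB that runs the hunt

    @Override
    public void onMessage(Message message) {
        try {
            // unpack the search parameters sent by the SearchController
            SearchParameters params = (SearchParameters) ((ObjectMessage) message).getObject();
            worker.hunt(params);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}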
What I'm wondering now:
Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...)
How do I know how many and which EJB instances of SearchWorker are currently running?
Is it possible to communicate with them individually (similar to sending a System V signal to a Unix process), e.g. to send new parameters, to end an instance, etc.?
If you're holding a huge ResultSet open for an extended period of time, you're likely to encounter either transaction timeouts or database locking issues.
There is no built-in mechanism for determining which bean instances are currently running a method, so you would need to add your own. Your product might have some kind of performance monitoring that tells you how many instances of each bean type are currently running a method.
As for cross-thread communication, you would need to implement your own synchronization and check it periodically in the bean method. You'll be outside the scope of standard EJB here, since each parallel call to a business method will allocate a new SLSB instance from the pool.
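A rough sketch of the kind of hand-rolled coordination described above (all names, including SearchParameters, are made up): a singleton holds per-search control state, and each SearchWorker invocation checks it between database batches instead of relying on any container mechanism.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.ejb.Singleton;

@Singleton
public class SearchControlRegistry {

    public static final class Control {
        public volatile boolean cancelled;
        public volatile SearchParameters parameters; // hypothetical parameter holder, re-read by the worker
    }

    private final ConcurrentMap<String, Control> controls = new ConcurrentHashMap<>();

    // called when a search is started; the worker keeps a reference to the returned Control
    public Control register(String searchId) {
        return controls.computeIfAbsent(searchId, id -> new Control());
    }

    // called from outside (e.g. by the SearchController) to stop a running search
    public void cancel(String searchId) {
        Control control = controls.get(searchId);
        if (control != null) {
            control.cancelled = true;
        }
    }
}

The worker loop would then check control.cancelled and re-read control.parameters between batches, ending cleanly when asked to.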
