Spring Data JPA - Java 8 Stream Support & Transactional Best Practices - spring-mvc

I have a pretty standard MVC setup with Spring Data JPA Repositories for my DAO layer, a Service layer that handles Transactional concerns and implements business logic, and a view layer that has some lovely REST-based JSON endpoints.
My question is around wholesale adoption of Java 8 Streams into this lovely architecture: If all of my DAOs return Streams, my Services return those same Streams (but do the Transactional work), and my Views act on and process those Streams, then by the time my Views begin working on the Model objects inside my Streams, the transaction created by the Service layer will have been closed. If the underlying data store hasn't yet materialized all of my model objects (it is a Stream after all, as lazy as possible) then my Views will get errors trying to access new results outside of a transaction. Previously this wasn't a problem because I would fully materialize results into a List - but now we're in the brave new world of Streams.
So, what is the best way to handle this? Fully materialize the results inside of the Service layer as a List and hand them back? Have the View layer hand the Service layer a completion block so further processing can be done inside of a transaction?
Thanks for the help!

In thinking through this, I decided to try the completion block solution I mentioned in my question. All of my service methods now have as their final parameter a results transformer that takes the Stream of Model objects and transforms it into whatever resulting type is needed/requested by the View layer. I'm pleased to report it works like a charm and has some nice side-effects.
Here's my Service base class:
public class ReadOnlyServiceImpl<MODEL extends AbstractSyncableEntity, DAO extends AbstractSyncableDAO<MODEL>> implements ReadOnlyService<MODEL> {

    @Autowired
    protected DAO entityDAO;

    protected <S> S resultsTransformer(Supplier<Stream<MODEL>> resultsSupplier, Function<Stream<MODEL>, S> resultsTransform) {
        try (Stream<MODEL> results = resultsSupplier.get()) {
            return resultsTransform.apply(results);
        }
    }

    @Override
    @Transactional(readOnly = true)
    public <S> S getAll(Function<Stream<MODEL>, S> resultsTransform) {
        return resultsTransformer(entityDAO::findAll, resultsTransform);
    }
}
The resultsTransformer method here is a gentle reminder for subclasses to not forget about the try-with-resources pattern.
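The base class only exposes getAll, but the controller further down also calls getAllUpdatedSince. A concrete service could add it along the same lines; this is just a sketch, and the Widget types and the Stream-returning findByUpdateDateAfter finder are assumptions rather than anything from the original code:
public class WidgetServiceImpl extends ReadOnlyServiceImpl<Widget, WidgetDAO> implements WidgetService {

    @Override
    @Transactional(readOnly = true)
    public <S> S getAllUpdatedSince(Date since, Function<Stream<Widget>, S> resultsTransform) {
        // resultsTransformer keeps the try-with-resources handling in one place
        return resultsTransformer(() -> entityDAO.findByUpdateDateAfter(since), resultsTransform);
    }
}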
And here is an example Controller calling in to the service base class:
public abstract class AbstractReadOnlyController<MODEL extends AbstractSyncableEntity,
        DTO extends AbstractSyncableDTOV2,
        SERVICE extends ReadOnlyService<MODEL>> {

    @Autowired
    protected SERVICE entityService;

    protected Function<MODEL, DTO> modelToDTO;

    protected AbstractReadOnlyController(Function<MODEL, DTO> modelToDTO) {
        this.modelToDTO = modelToDTO;
    }

    protected List<DTO> modelStreamToDTOList(Stream<MODEL> s) {
        return s.map(modelToDTO).collect(Collectors.toList());
    }

    // Read All
    protected List<DTO> getAll(Optional<String> lastUpdate) {
        if (!lastUpdate.isPresent()) {
            return entityService.getAll(this::modelStreamToDTOList);
        } else {
            Date since = new TimeUtility(lastUpdate.get()).getTime();
            return entityService.getAllUpdatedSince(since, this::modelStreamToDTOList);
        }
    }
}
I think it's a pretty neat use of generics to have the Controllers dictate the return type of the Services via the Java 8 lambdas. While it's strange for me to see the Controller directly returning the result of a Service call, I do appreciate how tight and expressive this code is.
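To make the "dictate the return type" point concrete: inside the controller, the same getAll method can hand back something other than a List simply by passing a different transform. A hedged sketch (the paging cutoff of 20 is arbitrary and not from the original code):
// Count matching entities without building any DTOs
long total = entityService.getAll(Stream::count);

// Return only the first 20 DTOs, still inside the service's read-only transaction
List<DTO> firstPage = entityService.getAll(s ->
        s.limit(20).map(modelToDTO).collect(Collectors.toList()));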
I'd say this is a net positive for attempting a wholesale switch to Java 8 Streams. Hopefully this helps someone with a similar question down the road.

Related

Using Twitter4J's UserStreamListener with EJB

Looking around StackOverflow, I see this answer to a similar problem - according to the Twitter4J documentation, TwitterStream#addListener takes a callback function. I have naively written my class as follows:
@Stateless
@LocalBean
public class TwitterListenerThread implements Runnable {

    private TwitterStream twitterStream;

    public TwitterListenerThread() {}

    @EJB
    private TwitterDispatcher dispatcher;

    @Override
    public void run() {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(true)
            .setJSONStoreEnabled(true)
            .setOAuthConsumerKey(Properties.getProperty("twitter_OAuthConsumerKey"))
            .setOAuthConsumerSecret(Properties.getProperty("twitter_OAuthConsumerSecret"))
            .setOAuthAccessToken(Properties.getProperty("twitter_OAuthAccessToken"))
            .setOAuthAccessTokenSecret(Properties.getProperty("twitter_OAuthAccessTokenSecret"));
        twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
        UserStreamListener listener = new UserStreamListener() {
            @Override
            public void onStatus(Status status) {
                dispatcher.dispatch(status);
            }
            // Standard code
        };
        twitterStream.addListener(listener);
        // Listen for all user activity
        String user = Properties.getProperty("twitter-userid");
        String[] users = {user};
        twitterStream.user(users);
    }
}
Now, on my colleague's PC this soon fails with an "attempt to invoke when container is undeployed" error on the dispatcher.dispatch(status); line. I understand the reason as being that the Twitter4J threading model does not play well with the Java EE EJB model, but I cannot work out what to do based on the linked answer - how would I use a Message-Driven Bean to listen in to the Twitter stream?
After a little thinking, I worked out that the solution offered was to write a separate application, using plain non-annotated Java SE code, that feeds a JMS queue with tweets, and then to have my main application use a Message-Driven Bean to listen to the queue.
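For reference, the consuming side of that work-around would be an ordinary MDB. This is only a rough sketch: the queue name jms/tweetQueue and the idea of the standalone feeder putting the raw status JSON on the queue as a TextMessage are assumptions, not anything from the original answer:
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/tweetQueue"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class TweetQueueListener implements MessageListener {

    @EJB
    private TwitterDispatcher dispatcher;

    @Override
    public void onMessage(Message message) {
        try {
            // The Java SE feeder is assumed to enqueue the raw status JSON as text
            String rawJson = ((TextMessage) message).getText();
            Status status = TwitterObjectFactory.createStatus(rawJson);
            dispatcher.dispatch(status);
        } catch (JMSException | TwitterException e) {
            throw new EJBException(e);
        }
    }
}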
However, I was not satisfied with that work-around, so I searched a little more, and found Issue TFJ-285, Allow for alternative implementations of Dispatcher classes:
Now it is possible to introduce your own dispatcher implementation.
It can be Quartz based, it can be MDB based, and it can be EJB-timer based.
By default, Twitter4J still uses traditional and transient thread based dispatcher.
Implement a class implementing the twitter4j.internal.async.Dispatcher interface
put the class in the classpath
set -Dtwitter4j.async.dispatcherImpl to locate your dispatcher implementation
This is the default implementation on GitHub, so one could replace the:
private final ExecutorService executorService;
with a:
private final ManagedExecutorService executorService;
And, in theory, Bob's your uncle. If I ever get this working, I shall post the code here.
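In that spirit, a container-friendly dispatcher might look roughly like the sketch below. This is untested and hedged: it assumes the Dispatcher interface exposes invokeLater and shutdown (as the default thread-based implementation does), that the Java EE 7 default managed executor is available under java:comp/DefaultManagedExecutorService, and it glosses over the fact that Twitter4J may construct dispatchers with a Configuration argument rather than a no-arg constructor:
public class ManagedDispatcher implements Dispatcher {

    private final ManagedExecutorService executorService;

    public ManagedDispatcher() {
        try {
            // Standard JNDI name for the default managed executor (Java EE 7+)
            executorService = InitialContext.doLookup("java:comp/DefaultManagedExecutorService");
        } catch (NamingException e) {
            throw new IllegalStateException("No managed executor available", e);
        }
    }

    @Override
    public void invokeLater(Runnable task) {
        executorService.submit(task);
    }

    @Override
    public void shutdown() {
        // The container owns the managed executor's lifecycle, so nothing to shut down here
    }
}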

Failure of singleton EJB injection in Vaadin application

I'm playing around with the Charts and CDI add-ons for Vaadin at the moment and am trying to inject a mock data source into a Chart class. The data source is a singleton bean that has already had a reference injected into the View that will be displaying the chart, but I was under the impression that this shouldn't matter as singletons are application scoped.
The EJB is injected correctly into the view, but when the chart class is instantiated, the injection of the data source fails and returns a null reference. I've been using the no-interface facility up until now, but even if I do use an interface for the data source, it makes no difference. I'm guessing that there is either a scoping issue or I'm fundamentally misusing/misunderstanding CDI. The other possibility is that I've run into a limitation of the Vaadin CDI add-on, as this approach worked without problems in JSF 2.2.
If anyone has any ideas or pointers I'd be really grateful as it's pretty frustrating. Granted this is a quick and dirty implementation but it is a prototype; refactoring to separate concerns (data provision vs building UI components) may well sort the issue but I'd like to understand what's happening here first.
EJB:
@Startup
@Singleton
public class MockDataProvider implements Serializable {

    private static final long serialVersionUID = -4789949304830373309L;
    private Random rand = new Random();

    private Collection<Person> people = new ArrayList<Person>();
    private Collection<Address> addresses = new ArrayList<Address>();
    private Collection<Evnt> evnts = new ArrayList<Evnt>();
    private Collection<TicketType> tickets = new ArrayList<TicketType>();

    /**
     * Initialize the data for the application
     */
    public MockDataProvider() {
    }

    @PostConstruct
    private void init() {
        loadAddressData();
        loadTicketData();
        loadEventData();
        loadPersonData();
    }
View implementation (injection successful here):
@CDIView(DashboardView.VIEW_ID)
public class DashboardView extends AbstractMVPView implements IDashboardView {

    public final static String VIEW_ID = "dashboard";

    @Inject
    @CDILogger
    private Logger logger;

    @EJB
    MockDataProvider dataProvider;

    @Inject
    EventsPerMonthChart eventsPerMonthChart;

    private Table eventsTable;
    private Table peopleTable;

    public DashboardView() {
    }
Chart class (injected into DashboardView above) - the EJB injection fails here, so dataProvider.getEvntCollection() throws a null pointer exception:
@Dependent
public class EventsPerMonthChart extends Chart {

    @EJB
    MockDataProvider dataProvider;

    public EventsPerMonthChart() {
        super(ChartType.PIE);
        setCaption("Events per month");
        getConfiguration().setTitle("");
        getConfiguration().getChart().setType(ChartType.PIE);
        setWidth("100%");
        setHeight("90%");
        DataSeries series = new DataSeries();
        ArrayList<Evnt> events = (ArrayList) dataProvider.getEvntCollection();
OK - it looks like the problem was down to ignorance on my part as I did not understand the contexts where EJB injection is permitted.
The EJB (MockDataProvider) is instantiated by the container and injected into the DashboardView class, which, as it is annotated with @CDIView, is also managed by the container. Hence, everything works fine. However, the Chart object was not container managed (despite my misguided addition of @Dependent to try and get the container to "notice" it) - injection into POJOs is not permitted, but it appears to fail silently, which only added to my confusion.
Granted, the code structure is pretty appalling (close coupling, high dependency and no separation of concerns), and this shoddy approach to prototyping has been responsible for creating the issue. Passing the Chart object either the data directly or a reference to the EJB via a constructor call works without problems.
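A minimal sketch of the constructor-passing fix, keeping the chart a plain object and doing the EJB lookup in the container-managed view (only an illustration of the idea described above):
// Chart is now a plain object: no injection, the data arrives through the constructor
public class EventsPerMonthChart extends Chart {

    public EventsPerMonthChart(Collection<Evnt> events) {
        super(ChartType.PIE);
        setCaption("Events per month");
        DataSeries series = new DataSeries();
        // ... build the series from the events passed in ...
    }
}

// Inside DashboardView, where @EJB injection does work
EventsPerMonthChart chart = new EventsPerMonthChart(dataProvider.getEvntCollection());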
It's a good job you learn from your mistakes. At the rate I'm making them, I'm going to be a genius!

Scheduled database maintenance with Java EE 6 (connection lifetime)

I'm new to Java EE 6 so I apologize if the answer to this question is obvious. I have a task that must run hourly to rebuild a Solr index from a database. I also want the rebuild to occur when the app is deployed. My gut instinct is that this should work:
@Singleton
@Startup
public class Rebuilder {

    @Inject private ProposalDao proposalDao;
    @Inject private SolrServer solrServer;

    @Schedule(hour="*", minute="0", second="0")
    public void rebuildIndex() {
        // do the rebuild here
    }
}
Since I'm using myBatis, I have written this producer:
public class ProposalSessionProvider {

    private static final String CONFIGURATION_FILE = "...";

    private static final SqlSessionFactory sessFactory;

    static {
        try {
            sessFactory = new SqlSessionFactoryBuilder().build(
                    Resources.getResourceAsReader(CONFIGURATION_FILE));
        }
        catch (IOException ex) {
            throw new RuntimeException("Error configuring MyBatis: " + ex.getMessage(), ex);
        }
    }

    @Produces
    public ProposalsDao openSession() {
        log.info("Connecting to the database");
        SqlSession session = sessFactory.openSession();
        return session.getMapper(ProposalsDao.class);
    }
}
So I have three concerns:
What's the appropriate way to trigger a rebuild at deployment time? A @PostConstruct method?
Who is responsible for closing the database connection, and how should that happen? I'm using myBatis which is, I believe, pretty ignorant of the Java EE lifecycle. It seems like if I use @Singleton the connections will never be released, but is it even meaningful to put @Startup on a @Stateless bean?
Should the Rebuilder be a singleton or not? It seems like if it is not, I can't use @PostConstruct to handle the initial rebuild, or I'll get double rebuilds every hour.
I'm not really sure how to proceed here. Thanks for your time.
I don't know myBatis, but I can tell you that a @Schedule job is transactional. I'm not sure, though, that a JTA-managed transaction will apply here, given the way you retrieve the session. Isn't there a way to retrieve a persistence context in myBatis? For the trigger part, IMHO @Startup will do the job properly, and that requires a singleton bean. I can't tell you which of the two methods you propose is the better one.
For the scheduling part, you are correct; I'd write the index-building logic in a separate class, and have both a (singleton?) @Startup bean and a @Schedule-annotated method in a separate class call it.
JMS could be used by those beans to trigger the index rebuilding, if you don't want a dependency between the index-building code and the triggering code.
I don't know myBatis well enough, but if your connection is managed by a data source @Resource, then I believe it could indeed benefit from CMT.
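A rough sketch of that split (names like IndexBuilder and RebuildScheduler are assumptions; here both triggers sit in one singleton, which also sidesteps the double-rebuild concern from the question):
// Plain bean holding the actual index-building logic
public class IndexBuilder {

    @Inject private ProposalDao proposalDao;
    @Inject private SolrServer solrServer;

    public void rebuild() {
        // read proposals via myBatis and push them into Solr
    }
}

// Triggers: once at deployment, then every hour on the hour
@Singleton
@Startup
public class RebuildScheduler {

    @Inject private IndexBuilder indexBuilder;

    @PostConstruct
    public void onStartup() {
        indexBuilder.rebuild();
    }

    @Schedule(hour = "*", minute = "0", second = "0", persistent = false)
    public void onSchedule() {
        indexBuilder.rebuild();
    }
}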

Is interception worth the overhead it creates?

I'm in the middle of a significant effort to introduce NHibernate into our code base. I figured I would have to use some kind of a DI container, so I can inject dependencies into the entities I load from the database. I chose Unity as that container.
I'm considering using Unity's interception mechanism to add a transaction aspect to my code, so I can do e.g. the following:
class SomeService
{
    [Transaction]
    public void DoSomething(CustomerId id)
    {
        Customer c = CustomerRepository.LoadCustomer(id);
        c.DoSomething();
    }
}
and the [Transaction] handler will take care of creating a session and a transaction, committing the transaction (or rolling back on exception), etc.
I'm concerned that using this kind of interception will bind me to using Unity pretty much everywhere in the code. If I introduce aspects in this manner, then I must never, ever call new SomeService(), or I will get a service that doesn't have transactions. While this is acceptable in production code, it seems too much overhead in tests. For example, I would have to convert this:
void TestMethod()
{
    MockDependency dependency = new MockDependency();
    dependency.SetupForTest();
    var service = new SomeService(dependency);
    service.DoSomething();
}
into this:
void TestMethod()
{
    unityContainer.RegisterType<MockDependency>();
    unityContainer.RegisterType<IDependency, MockDependency>();
    MockDependency dependency = unityContainer.Resolve<MockDependency>();
    dependency.SetupForTest();
    var service = unityContainer.Resolve<SomeService>();
    service.DoSomething();
}
This adds 2 lines for each mock object that I'm using, which leads to quite a bit of code (our tests use a lot of stateful mocks, so it is not uncommon for a test class to have 5-8 mock objects, and sometimes more.)
I don't think standalone injection would help here: I have to set up injection for every class that I use in the tests, because it's possible for aspects to be added to a class after the test is written.
Now, if I drop the use of interception I'll end up with:
class SomeService
{
    public void DoSomething(CustomerId id)
    {
        Transaction.Run(
            () => {
                Customer c = CustomerRepository.LoadCustomer(id);
                c.DoSomething();
            });
    }
}
which is admittedly not as nice, but doesn't seem that bad either.
I can even set up my own poor man's interception:
class SomeService
{
    [Transaction]
    public void DoSomething(CustomerId id)
    {
        Interceptor.Intercept(
            MethodInfo.GetCurrentMethod(),
            () => {
                Customer c = CustomerRepository.LoadCustomer(id);
                c.DoSomething();
            });
    }
}
and then my interceptor can process the attributes for the class, but I can still instantiate the class using new and not worry about losing functionality.
Is there a better way of using Unity interception, that doesn't force me to always use it for instantiating my objects?
If you want to use AOP but are concerned about Unity, then I would recommend you check out PostSharp. It implements AOP as a post-compile step, so it changes nothing about how you use the code at runtime.
http://www.sharpcrafters.com/
They have a free community edition that has a good feature set, as well as professional and enterprise versions that have significantly enhanced feature sets.

Security and roles authorization with model view presenter design pattern

Where does security and roles authorization best fit into the model view presenter design pattern?
Would it be for all pages that implement security to implement a specific interface, say IAuthorizedView that's along the lines of
public interface IAuthorizedView : IView
{
    IUser User { get; set; }
    void AuthorizationInitialized();
    void AuthorizationInvoked();
}
This would then be handled at the presenter level:
public abstract class Presenter<TView> where TView : IView
{
    public TView View { get; set; }

    public virtual void OnViewInitialized()
    {
    }

    public virtual void OnViewLoaded()
    {
    }
}

public abstract class AuthorizationSecuredPresenter<TView>
    : Presenter<TView> where TView : IAuthorizedView
{
    public override void OnViewInitialized()
    {
        View.AuthorizationInitialized();
        base.OnViewInitialized();
    }

    public override void OnViewLoaded()
    {
        View.AuthorizationInvoked();
        base.OnViewLoaded();
    }
}
This would be my first idea on it. The only question it leaves me with: if we moved beyond solely web-based and added any type of API that required authorization at the service level, would we end up with a lot of duplicated access checking, or is it perfectly acceptable to verify twice, and something that should be designed for up front?
Here is something that you might want to consider.
I would use the decorator pattern to authorize each call to your object separately.
Let's say you have the following class:
public class MyService
{
    public virtual void DoSomething()
    {
        //do something on the server
    }
}
You would then proceed to create a base decorator to implement the default constructor like this:
public class MyServiceDecoratorBase : MyService
{
    public MyServiceDecoratorBase(MyService service)
    {
    }
}
Once this is set up, you can actually start to decorate by creating an authorization decorator like this:
public class MyServiceAuthorizationDecorator : MyServiceDecoratorBase
{
    private readonly MyService _service;

    public MyServiceAuthorizationDecorator(MyService service) : base(service)
    {
        _service = service;
    }

    public override void DoSomething()
    {
        //TODO: Authorize the user here.
        _service.DoSomething();
    }
}
So now that the main classes are done... how are you going to call all this? Easy!
MyService service = new MyServiceAuthorizationDecorator(new MyService());
service.DoSomething();
Now... the advantage of all that is that your authorization logic is completely decoupled from your main service (or object) logic. Why is this important? Testability. You can test your main service independently of your authorization logic. This corresponds to the Open/Closed Principle.
Now, let's say you want to measure performance on those pesky methods... add a decorator! Logging? Another decorator! They can all be chained that way. Of course, the more you add, the heavier it gets, but I think it's worth it for the advantages it gives.
Comments?
Your design looks fine; as for your concluding question ...
if we moved beyond solely web-based and added any type of API that required authorization at the service level, would we end up with a lot of duplicated access checking, or is it perfectly acceptable to verify twice, and something that should be designed for up front?
The answer is emphatically yes - you may even want to verify permissions more often than that, even when these checks are semi-redundant. I can think of at least three times I'd check security in a typical web application (with role-based security requirements):
First, inside your business layer - to ensure security is applied no matter what the execution context.
Second, when creating the view itself (or its presenter), it's important to make sure users only see features for which they have permission - both for security reasons and so they don't waste their time.
Third, when constructing menus to make sure that users don't see functionality that they don't have permission to use. Again, this is for both security and usability reasons. You don't want to distract users with features they can't use, if you can help it.
The View should handle just the UI. It should set up the dialog/form/controls however you need them. When the user tries to authorize, hand the data off to the presenter.
The presenter should then take that data and validate it using the API exposed by the model.
In my CAD/CAM application the actual API resides in the lowest layer of my application, the utility assembly. I wrap an interface around it so that if I change my security API, the upper levels do not see anything different. The utility layer tells me whether the entered information is valid and what level of security to grant the person.
Anything more specific depends on the exact security API you are using.
