Scheduled database maintenance with Java EE 6 (connection lifetime) - ejb

I'm new to Java EE 6 so I apologize if the answer to this question is obvious. I have a task that must run hourly to rebuild a Solr index from a database. I also want the rebuild to occur when the app is deployed. My gut instinct is that this should work:
@Singleton
@Startup
public class Rebuilder {
    @Inject private ProposalDao proposalDao;
    @Inject private SolrServer solrServer;

    @Schedule(hour="*", minute="0", second="0")
    public void rebuildIndex() {
        // do the rebuild here
    }
}
Since I'm using myBatis, I have written this producer:
public class ProposalSessionProvider {

    private static final String CONFIGURATION_FILE = "...";
    private static final SqlSessionFactory sessFactory;
    private static final Logger log = Logger.getLogger(ProposalSessionProvider.class.getName());

    static {
        try {
            sessFactory = new SqlSessionFactoryBuilder().build(
                    Resources.getResourceAsReader(CONFIGURATION_FILE));
        }
        catch (IOException ex) {
            throw new RuntimeException("Error configuring MyBatis: " + ex.getMessage(), ex);
        }
    }

    @Produces
    public ProposalsDao openSession() {
        log.info("Connecting to the database");
        SqlSession session = sessFactory.openSession(); // note: never closed (see concern 2 below)
        return session.getMapper(ProposalsDao.class);
    }
}
So I have three concerns:
1. What's the appropriate way to trigger a rebuild at deployment time? A @PostConstruct method?
2. Who is responsible for closing the database connection, and how should that happen? I'm using myBatis, which is, I believe, pretty ignorant of the Java EE lifecycle. It seems like if I use @Singleton the connections will never be released, but is it even meaningful to put @Startup on a @Stateless bean?
3. Should the Rebuilder be a singleton or not? It seems that if it is not, I can't use @PostConstruct to handle the initial rebuild without getting double rebuilds every hour.
I'm not really sure how to proceed here. Thanks for your time.

I don't know myBatis, but I can tell you that a @Schedule job is transactional. That said, I'm not sure a JTA-managed transaction will apply here, given the way you retrieve the session. Isn't there a way to retrieve a persistence context in MyBatis? For the trigger part, IMHO @Startup will do the job properly, and it will therefore require a singleton bean. I can't tell you which of the two methods you propose is the better one, though.
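If MyBatis can be pointed at a container-managed DataSource, JTA can own the transactions. A minimal sketch of that wiring (my assumption, not something from the question; the JNDI name jdbc/proposals is made up):
// Build the session factory against the container's DataSource so MyBatis
// defers transaction control to JTA via ManagedTransactionFactory.
DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/proposals");
TransactionFactory txFactory = new ManagedTransactionFactory();
Environment environment = new Environment("jee", txFactory, ds);
Configuration configuration = new Configuration(environment);
configuration.addMapper(ProposalsDao.class);
SqlSessionFactory sessFactory = new SqlSessionFactoryBuilder().build(configuration);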

For the scheduling part, you are correct; I'd put the index-building logic in a separate class, and have both a (singleton?) @Startup bean and a @Schedule-annotated method call it.
JMS could be used by those beans to trigger the index rebuild, if you don't want a dependency between the index-building code and the triggering code.
I don't know myBatis well enough, but if your connection is managed by a data source @Resource, then I believe it could indeed benefit from CMT.
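To make the deploy-time trigger concrete, here is a minimal sketch along those lines (untested; the names follow the question's code):
@Singleton
@Startup
public class Rebuilder {

    @Inject private ProposalsDao proposalsDao;
    @Inject private SolrServer solrServer;

    @PostConstruct
    public void onDeploy() {
        rebuildIndex(); // covers the rebuild at deployment time
    }

    @Schedule(hour = "*", minute = "0", second = "0")
    public void rebuildIndex() {
        // read proposals via proposalsDao and push documents to solrServer
    }
}
For the connection concern, one option (again an assumption, not something the answers spell out) is to produce the SqlSession itself rather than only the mapper, so a CDI disposer can close it:
public void closeSession(@Disposes SqlSession session) {
    session.close(); // runs when the owning bean's context ends
}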

Related

Spring Data JPA - Java 8 Stream Support & Transactional Best Practices

I have a pretty standard MVC setup with Spring Data JPA Repositories for my DAO layer, a Service layer that handles Transactional concerns and implements business logic, and a view layer that has some lovely REST-based JSON endpoints.
My question is around wholesale adoption of Java 8 Streams into this lovely architecture: If all of my DAOs return Streams, my Services return those same Streams (but do the Transactional work), and my Views act on and process those Streams, then by the time my Views begin working on the Model objects inside my Streams, the transaction created by the Service layer will have been closed. If the underlying data store hasn't yet materialized all of my model objects (it is a Stream after all, as lazy as possible) then my Views will get errors trying to access new results outside of a transaction. Previously this wasn't a problem because I would fully materialize results into a List - but now we're in the brave new world of Streams.
So, what is the best way to handle this? Fully materialize the results inside of the Service layer as a List and hand them back? Have the View layer hand the Service layer a completion block so further processing can be done inside of a transaction?
Thanks for the help!
In thinking through this, I decided to try the completion block solution I mentioned in my question. All of my service methods now have as their final parameter a results transformer that takes the Stream of Model objects and transforms it into whatever resulting type is needed/requested by the View layer. I'm pleased to report it works like a charm and has some nice side-effects.
Here's my Service base class:
public class ReadOnlyServiceImpl<MODEL extends AbstractSyncableEntity, DAO extends AbstractSyncableDAO<MODEL>> implements ReadOnlyService<MODEL> {

    @Autowired
    protected DAO entityDAO;

    protected <S> S resultsTransformer(Supplier<Stream<MODEL>> resultsSupplier, Function<Stream<MODEL>, S> resultsTransform) {
        try (Stream<MODEL> results = resultsSupplier.get()) {
            return resultsTransform.apply(results);
        }
    }

    @Override
    @Transactional(readOnly = true)
    public <S> S getAll(Function<Stream<MODEL>, S> resultsTransform) {
        return resultsTransformer(entityDAO::findAll, resultsTransform);
    }
}
The resultsTransformer method here is a gentle reminder for subclasses to not forget about the try-with-resources pattern.
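A subclass method would follow the same shape; getAllUpdatedSince (used by the controller below) might look like this, with findByUpdateDateAfter being a made-up finder name for illustration:
@Override
@Transactional(readOnly = true)
public <S> S getAllUpdatedSince(Date since, Function<Stream<MODEL>, S> resultsTransform) {
    return resultsTransformer(() -> entityDAO.findByUpdateDateAfter(since), resultsTransform);
}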
And here is an example Controller calling in to the service base class:
public abstract class AbstractReadOnlyController<MODEL extends AbstractSyncableEntity,
                                                 DTO extends AbstractSyncableDTOV2,
                                                 SERVICE extends ReadOnlyService<MODEL>> {

    @Autowired
    protected SERVICE entityService;

    protected Function<MODEL, DTO> modelToDTO;

    protected AbstractReadOnlyController(Function<MODEL, DTO> modelToDTO) {
        this.modelToDTO = modelToDTO;
    }

    protected List<DTO> modelStreamToDTOList(Stream<MODEL> s) {
        return s.map(modelToDTO).collect(Collectors.toList());
    }

    // Read All
    protected List<DTO> getAll(Optional<String> lastUpdate) {
        if (!lastUpdate.isPresent()) {
            return entityService.getAll(this::modelStreamToDTOList);
        } else {
            Date since = new TimeUtility(lastUpdate.get()).getTime();
            return entityService.getAllUpdatedSince(since, this::modelStreamToDTOList);
        }
    }
}
I think it's a pretty neat use of generics to have the Controllers dictate the return type of the Services via Java 8 lambdas. While it's strange for me to see the Controller directly returning the result of a Service call, I do appreciate how tight and expressive this code is.
I'd say this is a net positive for attempting a wholesale switch to Java 8 Streams. Hopefully this helps someone with a similar question down the road.
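As a usage illustration (hypothetical Person entity and personService, not from the post above), the transformer runs while the service's read-only transaction is still open, so lazy loading inside it is safe:
// The stream is consumed and closed inside the transaction; only the
// fully materialized List crosses the service boundary.
List<String> names = personService.getAll(stream ->
        stream.map(Person::getName)
              .collect(Collectors.toList()));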

How to dynamically register a Feed Inbound Adapter in Spring Integration?

I'm trying to implement an RSS/Atom feed aggregator in spring-integration and I am primarily using the Java DSL to write my IntegrationFlow. A requirement of this aggregator is that feeds can be added / removed during runtime. That is to say, the feeds are not known at design time.
I found it simple to use the basic Feed.inboundAdapter() with a test url, extract the links out of the feed with a transformer, and then pass them on to an outbound-file-adapter to save the links to a file. However, I have gotten very stuck when trying to read the (thousands of) feed urls from an inbound-file-adapter, run the file through a FileSplitter, and then pass each resulting Message<String> containing a feed url on to register a new Feed.inboundAdapter(). Is this not possible with the Java DSL?
Ideally I would love it if I could do the following:
@Bean
public IntegrationFlow getFeedsFromFile() throws MalformedURLException {
    return IntegrationFlows.from(inboundFileChannel(), e -> e.poller(Pollers.fixedDelay(10000)))
            .handle(new FileSplitter())
            // register new Feed.inboundAdapter(payload.toString()) for each
            // Message<String> containing a feed url coming from the FileSplitter
            .transform(extractLinkFromFeedEntry())
            .handle(appendLinkToFile())
            .get();
}
Though after reading through the Spring Integration Java DSL code multiple times (and learning a tonne of stuff along the way), I just can't see that it's possible to do it this way. So... A) is it? B) should it be? C) Suggestions?
It almost feels like I should be able to take the output of .handle(new FileSplitter()) and pass that into .handleWithAdapter(Feed.inboundAdapter(/*stuff here*/)) but the DSL only references outbound-adapters there. Inbound adapters are really just a subclass of AbstractMessageSource and it seems the only place you can specify one of those is as an argument to the IntegrationFlows.from(/*stuff here*/) method.
I would have thought it would be possible to take the input from a file, split it line by line, use that output to register inbound feed adapters, poll those feeds, extract the new links from feeds as they appear and append them to a file. It appears as though it's not.
Is there some clever subclassing I can do to make this work??
Failing that... and I suspect this is going to be the answer, I found the Spring Integration Dynamic FTP Channel Resolver example and this answer on how to adapt it to dynamically register things for the inbound case...
So is this the way to go? Any help/guidance appreciated. After poring over the DSL code and reading documentation for days, I think I'll have a go at implementing the dynamic FTP example and adapting it to work with FeedEntryMessageSource... in which case my question is: that dynamic FTP example works with XML configuration, but is it possible to do it with either Java config or the Java DSL?
Update
I've implemented the solution as follows:
@SpringBootApplication
public class MonsterFeedApplication {

    public static void main(String[] args) throws IOException {
        ConfigurableApplicationContext parent = SpringApplication.run(MonsterFeedApplication.class, args);
        parent.setId("parent");
        String[] feedUrls = {
                "https://1nichi.wordpress.com/feed/",
                "http://jcmuofficialblog.com/feed/"};
        List<ConfigurableApplicationContext> children = new ArrayList<>();
        int n = 0;
        for (String feedUrl : feedUrls) {
            AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
            child.setId("child" + ++n);
            children.add(child);
            child.setParent(parent);
            child.register(DynamicFeedAdapter.class);
            StandardEnvironment env = new StandardEnvironment();
            Properties props = new Properties();
            props.setProperty("feed.url", feedUrl);
            PropertiesPropertySource pps = new PropertiesPropertySource("feed", props);
            env.getPropertySources().addLast(pps);
            child.setEnvironment(env);
            child.refresh();
        }
        System.out.println("Press any key to exit...");
        System.in.read();
        for (ConfigurableApplicationContext child : children) {
            child.close();
        }
        parent.close();
    }

    @Bean
    public IntegrationFlow aggregateFeeds() {
        return IntegrationFlows.from("feedChannel")
                .transform(extractLinkFromFeed())
                .handle(System.out::println)
                .get();
    }

    @Bean
    public MessageChannel feedChannel() {
        return new DirectChannel();
    }

    @Bean
    public AbstractPayloadTransformer<SyndEntry, String> extractLinkFromFeed() {
        return new AbstractPayloadTransformer<SyndEntry, String>() {
            @Override
            protected String transformPayload(SyndEntry payload) throws Exception {
                return payload.getLink();
            }
        };
    }
}
DynamicFeedAdapter.java
@Configuration
@EnableIntegration
public class DynamicFeedAdapter {

    @Value("${feed.url}")
    public String feedUrl;

    @Bean
    public static PropertySourcesPlaceholderConfigurer pspc() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public IntegrationFlow feedAdapter() throws MalformedURLException {
        URL url = new URL(feedUrl);
        return IntegrationFlows
                .from(s -> s.feed(url, "feedTest"),
                      e -> e.poller(p -> p.fixedDelay(10000)))
                .channel("feedChannel")
                .get();
    }
}
And this works IF and only IF I have one of the urls defined in application.properties as feed.url=[insert url here]. Otherwise it fails, telling me 'unable to resolve property {feed.url}'. I suspect what is happening is that the @Beans defined in DynamicFeedAdapter.java all get eagerly initialized as singletons, so aside from the beans being manually created in our for loop in the main method (which work fine, because they have the feed.url property injected), we have a stray singleton that has been eagerly initialized, and if there is no feed.url defined in application.properties then it can't resolve the property and everything goes bang. Now from what I know of Spring, it should be possible to @Lazy-initialize the beans in DynamicFeedAdapter.java so we don't wind up with this one unwanted stray singleton problem-child. The problem is that if I just mark feedAdapter() @Lazy, then the beans never get initialized. How do I initialize them myself?
Update - problem solved
Without having tested it, I think the problem is that Boot is finding the DynamicFeedAdapter during its component scan. A simple solution is to move it to a sibling package. If MonsterFeedApplication is in com.acme.foo, then put the adapter config class in com.acme.bar. That way, Boot won't consider it "part" of the application.
This was indeed the problem. After implementing Gary's suggestion, everything works perfect.
See the answer to this question and its follow up for a similar question about inbound mail adapters.
In essence, each feed adapter is created in a child context that is parameterized.
In that case the child contexts are created in a main() method but there's no reason it couldn't be done in a service invoked by .handle().
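For illustration, a sketch of that idea (assumed names; FeedRegistrar is not from the answer): a service that builds a parameterized child context per feed url, which a flow could invoke via .handle():
@Service
public class FeedRegistrar {

    @Autowired
    private ConfigurableApplicationContext parent;

    public void register(String feedUrl) {
        AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
        child.setParent(parent);
        child.register(DynamicFeedAdapter.class);
        // Parameterize the child with the feed url, exactly as in the main() loop above.
        StandardEnvironment env = new StandardEnvironment();
        Properties props = new Properties();
        props.setProperty("feed.url", feedUrl);
        env.getPropertySources().addLast(new PropertiesPropertySource("feed", props));
        child.setEnvironment(env);
        child.refresh();
    }
}
Wired into the file-reading flow, each line emitted by the FileSplitter would then register its own adapter, e.g. .handle(m -> registrar.register((String) m.getPayload())).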

Using Twitter4J's UserStreamListener with EJB

Looking around StackOverflow, I see this answer to a similar problem - according to the Twitter4J documentation, TwitterStream#addListener takes a callback function. I have naively written my class as follows:
@Stateless
@LocalBean
public class TwitterListenerThread implements Runnable {

    private TwitterStream twitterStream;

    public TwitterListenerThread() {}

    @EJB private TwitterDispatcher dispatcher;

    @Override
    public void run() {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(true)
          .setJSONStoreEnabled(true)
          .setOAuthConsumerKey(Properties.getProperty("twitter_OAuthConsumerKey"))
          .setOAuthConsumerSecret(Properties.getProperty("twitter_OAuthConsumerSecret"))
          .setOAuthAccessToken(Properties.getProperty("twitter_OAuthAccessToken"))
          .setOAuthAccessTokenSecret(Properties.getProperty("twitter_OAuthAccessTokenSecret"));
        twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
        UserStreamListener listener = new UserStreamListener() {
            @Override
            public void onStatus(Status status) {
                dispatcher.dispatch(status);
            }
            // Standard code
        };
        twitterStream.addListener(listener);
        // Listen for all user activity
        String user = Properties.getProperty("twitter-userid");
        String[] users = {user};
        twitterStream.user(users);
    }
}
Now, on my colleague's PC this soon fails with an 'attempt to invoke when container is undeployed' error on the dispatcher.dispatch(status); line. I understand the reason as being that the Twitter4J threading model does not play well with the Java EE EJB model, but I cannot work out what to do based on the linked answer - how would I use a Message-Driven Bean to listen in to the Twitter stream?
After a little thinking, I worked out that the solution offered was to write a separate application that uses plain, non-annotated Java SE code to feed a JMS queue with tweets, and then have a Message-Driven Bean in my main application listen to the queue.
However, I was not satisfied with that work-around, so I searched a little more and found Issue TFJ-285, Allow for alternative implementations of Dispatcher classes:
Now it is possible to introduce your own dispatcher implementation.
It can be Quartz based, it can be MDB based, and it can be EJB-timer based.
By default, Twitter4J still uses the traditional, transient thread-based dispatcher.
1. Implement a class implementing the twitter4j.internal.async.Dispatcher interface
2. Put the class in the classpath
3. Set -Dtwitter4j.async.dispatcherImpl to locate your dispatcher implementation
This is the default implementation on GitHub, so one could replace the:
private final ExecutorService executorService;
with a:
private final ManagedExecutorService executorService;
And, in theory, Bob's your uncle. If I ever get this working, I shall post the code here.
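In the meantime, a hedged sketch of what such a dispatcher could look like (untested; it assumes the twitter4j.internal.async.Dispatcher interface exposes invokeLater(Runnable) and shutdown(), as the thread-based default does, and that Twitter4J reflectively invokes a Configuration-taking constructor):
public class ManagedDispatcher implements Dispatcher {

    private final ManagedExecutorService executorService;

    public ManagedDispatcher(Configuration conf) {
        try {
            // Standard JNDI name for the default managed executor (JSR 236 / Java EE 7).
            executorService = InitialContext.doLookup("java:comp/DefaultManagedExecutorService");
        } catch (NamingException e) {
            throw new IllegalStateException("No managed executor available", e);
        }
    }

    @Override
    public void invokeLater(Runnable task) {
        executorService.submit(task);
    }

    @Override
    public void shutdown() {
        // The container owns the executor's lifecycle; nothing to shut down here.
    }
}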

Is interception worth the overhead it creates?

I'm in the middle of a significant effort to introduce NHibernate into our code base. I figured I would have to use some kind of a DI container, so I can inject dependencies into the entities I load from the database. I chose Unity as that container.
I'm considering using Unity's interception mechanism to add a transaction aspect to my code, so I can do e.g. the following:
class SomeService
{
    [Transaction]
    public void DoSomething(CustomerId id)
    {
        Customer c = CustomerRepository.LoadCustomer(id);
        c.DoSomething();
    }
}
and the [Transaction] handler will take care of creating a session and a transaction, committing the transaction (or rolling back on exception), etc.
I'm concerned that using this kind of interception will bind me to using Unity pretty much everywhere in the code. If I introduce aspects in this manner, then I must never, ever call new SomeService(), or I will get a service that doesn't have transactions. While this is acceptable in production code, it seems too much overhead in tests. For example, I would have to convert this:
void TestMethod()
{
    MockDependency dependency = new MockDependency();
    dependency.SetupForTest();
    var service = new SomeService(dependency);
    service.DoSomething();
}
into this:
void TestMethod()
{
    unityContainer.RegisterType<MockDependency>();
    unityContainer.RegisterType<IDependency, MockDependency>();
    MockDependency dependency = unityContainer.Resolve<MockDependency>();
    dependency.SetupForTest();
    var service = unityContainer.Resolve<SomeService>();
    service.DoSomething();
}
This adds 2 lines for each mock object that I'm using, which leads to quite a bit of code (our tests use a lot of stateful mocks, so it is not uncommon for a test class to have 5-8 mock objects, and sometimes more.)
I don't think standalone injection would help here: I have to set up injection for every class that I use in the tests, because it's possible for aspects to be added to a class after the test is written.
Now, if I drop the use of interception I'll end up with:
class SomeService
{
    public void DoSomething(CustomerId id)
    {
        Transaction.Run(
            () => {
                Customer c = CustomerRepository.LoadCustomer(id);
                c.DoSomething();
            });
    }
}
which is admittedly not as nice, but doesn't seem that bad either.
I can even set up my own poor man's interception:
class SomeService
{
    [Transaction]
    public void DoSomething(CustomerId id)
    {
        Interceptor.Intercept(
            MethodInfo.GetCurrentMethod(),
            () => {
                Customer c = CustomerRepository.LoadCustomer(id);
                c.DoSomething();
            });
    }
}
and then my interceptor can process the attributes for the class, but I can still instantiate the class using new and not worry about losing functionality.
Is there a better way of using Unity interception, that doesn't force me to always use it for instantiating my objects?
If you want to use AOP but are concerned about Unity, then I would recommend you check out PostSharp. It implements AOP as a post-compile step, and it makes no changes to how you use the code at runtime.
http://www.sharpcrafters.com/
They have a free community edition that has a good feature set, as well as professional and enterprise versions that have significantly enhanced feature sets.

Performing logging operations in MVC .NET

I'm trying to work out the best method to perform logging in the application I'm currently developing.
Right now, I have a Log table that stores the username, timestamp, action, controller, and a message. When a controller is instantiated, it gets the IoC info through Castle Windsor.
For example, my "Sites" controller is created as follows:
private ISitesRepository siteRepository;
private ILogWriter logWriter;

public SiteController(ISitesRepository siteRepository, ILogWriter logWriter)
{
    this.siteRepository = siteRepository;
    this.logWriter = logWriter;
}
And the log writer has a function that creates and inserts a log entry (WriteToLog). Within the Sites controller's Edit and Create actions, it calls the WriteToLog function.
This is working and doing its job, but my question is: do I really need to set up each controller this way, passing in the ILogWriter interface/repository? It struck me that I could possibly set up a LogController and just have that do the "heavy lifting" of writing to my logs.
That way, I wouldn't have to mess with the IoC stuff in every other controller. Is it possible to execute an action on another controller (for example, a LogController->WriteLog)? I'm not sure how that would be done without doing a redirect...
Could you use an abstract class, with a static property referencing your log writer?
Something like this:
public abstract class BaseController
{
    public static ILogWriter Logwriter { get; set; }

    static BaseController()
    {
        Logwriter = YourFactory.GetLogwriter();
    }
}

public class YourController : BaseController
{
    public YourController(ISitesRepository siteRepository)
    {
    }
}
OK, after much head scratching, I think I found an acceptable solution.
I implemented my logging action as a custom action filter, like so:
public class LogAction : ActionFilterAttribute, IActionFilter
{
    public LogLevel loglevel;
    public string message;

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ILogWriter logWriter = AppServiceFactory.Instance.Create<ILogWriter>();
        logWriter.WriteToLog(
            filterContext.ActionDescriptor.ControllerDescriptor.ControllerName,
            filterContext.ActionDescriptor.ActionName,
            loglevel,
            filterContext.HttpContext.Timestamp,
            filterContext.HttpContext.User.Identity.Name.ToString(),
            message + "(id=" + filterContext.RouteData.Values["id"] + ")");
    }
}
But I ran into a wall trying to get the IoC to work in a custom attribute filter. Scouring stackoverflow and Google searches, I found that it's rather difficult to do, with talk of using various wrappers, action invokers, etc., all of which seemed more complicated than I was really willing to deal with.
Trying to learn more about IoC (I'm still very new at this), I found this article, which really helped point me in the right direction. I added its sealed AppServiceFactory class alongside my WindsorControllerFactory, and it worked like a charm.
As I said, I'm very new to MVC and this IoC stuff, so I'm not sure this is an ideal way of handling things - but it seems simple and it works so far. I'd welcome any comments or criticisms on handling it through this method.
UPDATE
Figured out a different way of doing this - I created a function in my WebUI project like so:
public static class Loggers
{
    public static void WriteLog(ControllerContext controllerContext, LogLevel logLevel, string message)
    {
        ILogWriter logWriter = AppServiceFactory.Instance.Create<ILogWriter>();
        logWriter.WriteToLog(
            controllerContext.RouteData.Values["controller"].ToString(),
            controllerContext.RouteData.Values["action"].ToString(),
            logLevel,
            controllerContext.HttpContext.Timestamp,
            controllerContext.HttpContext.User.Identity.Name.ToString(),
            message);
    }
}
Now, wherever I want to log something, I can call
Loggers.WriteLog(
    this.ControllerContext,
    LogLevel.Membership,
    "Removed role '" + role + "' from user " + _userService.Get(id).UserName);
to write a record to the log. This gives me a lot more flexibility with my "message" content, and it solves the problem of including logging in the global.asax file, which would've been difficult if not impossible using the attribute filters. I'll leave the rest, as it may be of use to someone else, but I think this is the way I'll go on this.
As usual, things turn out to be simpler in MVC than I originally think they will be :)
