I'm trying to implement an RSS/Atom feed aggregator in spring-integration and I am primarily using the Java DSL to write my IntegrationFlow. A requirement of this aggregator is that feeds can be added and removed at runtime. That is to say, the feeds are not known at design time.
I found it simple to use the basic Feed.inboundAdapter() with a test URL, extract the links out of the feed with a transformer, and then pass them on to an outbound-file-adapter to save the links to a file. However, I have gotten very stuck when trying to read the thousands of feed URLs from an inbound-file-adapter, run the file through a FileSplitter, and then pass each resulting Message<String> containing a feed URL on to register a new Feed.inboundAdapter(). Is this not possible with the Java DSL?
Ideally I would love it if I could do the following:
@Bean
public IntegrationFlow getFeedsFromFile() throws MalformedURLException {
    return IntegrationFlows.from(inboundFileChannel(), e -> e.poller(Pollers.fixedDelay(10000)))
            .handle(new FileSplitter())
            // register a new Feed.inboundAdapter(payload.toString()) for each
            // Message<String> containing a feed url coming from the FileSplitter
            .transform(extractLinkFromFeedEntry())
            .handle(appendLinkToFile())
            .get();
}
Though after reading through the Spring Integration Java DSL code multiple times (and learning a tonne of stuff along the way), I just can't see that it's possible to do it this way. So... A) is it? B) should it be? C) Suggestions?
It almost feels like I should be able to take the output of .handle(new FileSplitter()) and pass that into .handleWithAdapter(Feed.inboundAdapter(/*stuff here*/)), but the DSL only offers outbound adapters there. Inbound adapters are really just a subclass of AbstractMessageSource, and it seems the only place you can specify one of those is as an argument to the IntegrationFlows.from(/*stuff here*/) method.
I would have thought it would be possible to take the input from a file, split it line by line, use that output to register inbound feed adapters, poll those feeds, extract the new links from feeds as they appear and append them to a file. It appears as though it's not.
Is there some clever subclassing I can do to make this work??
Failing that... and I suspect this is going to be the answer, I found the Spring Integration Dynamic FTP Channel Resolver example and this answer on how to adapt it to dynamically register stuff for the inbound case...
So is this the way to go? Any help/guidance appreciated. After poring over the DSL code and reading documentation for days, I think I'll have a go at implementing the dynamic FTP example and adapting it to work with FeedEntryMessageSource... in which case my question is: that dynamic FTP example works with XML configuration, but is it possible to do it with either Java config or the Java DSL?
Update
I've implemented the solution as follows:
@SpringBootApplication
class MonsterFeedApplication {

    public static void main(String[] args) throws IOException {
        ConfigurableApplicationContext parent = SpringApplication.run(MonsterFeedApplication.class, args);
        parent.setId("parent");
        String[] feedUrls = {
                "https://1nichi.wordpress.com/feed/",
                "http://jcmuofficialblog.com/feed/"};
        List<ConfigurableApplicationContext> children = new ArrayList<>();
        int n = 0;
        for (String feedUrl : feedUrls) {
            AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
            child.setId("child" + ++n);
            children.add(child);
            child.setParent(parent);
            child.register(DynamicFeedAdapter.class);
            StandardEnvironment env = new StandardEnvironment();
            Properties props = new Properties();
            props.setProperty("feed.url", feedUrl);
            PropertiesPropertySource pps = new PropertiesPropertySource("feed", props);
            env.getPropertySources().addLast(pps);
            child.setEnvironment(env);
            child.refresh();
        }
        System.out.println("Press any key to exit...");
        System.in.read();
        for (ConfigurableApplicationContext child : children) {
            child.close();
        }
        parent.close();
    }

    @Bean
    public IntegrationFlow aggregateFeeds() {
        return IntegrationFlows.from("feedChannel")
                .transform(extractLinkFromFeed())
                .handle(System.out::println)
                .get();
    }

    @Bean
    public MessageChannel feedChannel() {
        return new DirectChannel();
    }

    @Bean
    public AbstractPayloadTransformer<SyndEntry, String> extractLinkFromFeed() {
        return new AbstractPayloadTransformer<SyndEntry, String>() {
            @Override
            protected String transformPayload(SyndEntry payload) throws Exception {
                return payload.getLink();
            }
        };
    }
}
DynamicFeedAdapter.java
@Configuration
@EnableIntegration
public class DynamicFeedAdapter {

    @Value("${feed.url}")
    public String feedUrl;

    @Bean
    public static PropertySourcesPlaceholderConfigurer pspc() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public IntegrationFlow feedAdapter() throws MalformedURLException {
        URL url = new URL(feedUrl);
        return IntegrationFlows
                .from(s -> s.feed(url, "feedTest"),
                        e -> e.poller(p -> p.fixedDelay(10000)))
                .channel("feedChannel")
                .get();
    }
}
And this works if and only if I have one of the URLs defined in application.properties as feed.url=[insert url here]. Otherwise it fails, telling me 'unable to resolve property {feed.url}'. I suspect what is happening is that the @Beans defined in DynamicFeedAdapter.java all get eagerly initialized as singletons, so aside from the beans manually created in the for loop in the main method (which work fine, because they have the feed.url property injected), we have a stray singleton that has been eagerly initialized, and if there is no feed.url defined in application.properties it can't resolve the property and everything goes bang. Now, from what I know of Spring, it should be possible to @Lazy-initialize the beans in DynamicFeedAdapter.java so we don't wind up with this one unwanted stray singleton problem child. The problem is that if I just mark feedAdapter() @Lazy, then the beans never get initialized. How do I initialize them myself?
Update - problem solved
Without having tested it, I think the problem is that boot is finding the DynamicFeedAdapter during its component scan. A simple solution is to move it to a sibling package. If MonsterFeedApplication is in com.acme.foo, then put the adapter config class in com.acme.bar. That way, boot won't consider it "part" of the application.
This was indeed the problem. After implementing Gary's suggestion, everything works perfectly.
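For what it's worth, if you would rather keep everything in one package, an alternative that I believe achieves the same effect is excluding the adapter config from the scan explicitly. A sketch, using the decomposed form of @SpringBootApplication (untested here):

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ComponentScan.Filter;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;

// Keeps DynamicFeedAdapter out of Boot's component scan without moving it
// to a sibling package.
@Configuration
@EnableAutoConfiguration
@ComponentScan(excludeFilters = @Filter(type = FilterType.ASSIGNABLE_TYPE,
        classes = DynamicFeedAdapter.class))
class MonsterFeedApplication {
    // ... body as above ...
}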
See the answer to this question and its follow up for a similar question about inbound mail adapters.
In essence, each feed adapter is created in a child context that is parameterized.
In that case the child contexts are created in a main() method but there's no reason it couldn't be done in a service invoked by .handle().
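To make that concrete, here is a hedged sketch of such a service. FeedRegistrar is an invented name; it just reuses the parameterized child-context technique from the update above:

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.StandardEnvironment;
import org.springframework.stereotype.Service;

// Registers one feed adapter per URL it receives by spinning up a
// parameterized child context; closing the child removes the adapter.
@Service
public class FeedRegistrar {

    @Autowired
    private ApplicationContext parent;

    private final Map<String, ConfigurableApplicationContext> children =
            new ConcurrentHashMap<>();

    public void register(String feedUrl) {
        AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
        child.setParent(parent);
        child.register(DynamicFeedAdapter.class);
        StandardEnvironment env = new StandardEnvironment();
        Properties props = new Properties();
        props.setProperty("feed.url", feedUrl);
        env.getPropertySources().addLast(new PropertiesPropertySource("feed", props));
        child.setEnvironment(env);
        child.refresh();
        children.put(feedUrl, child);
    }

    public void unregister(String feedUrl) {
        ConfigurableApplicationContext child = children.remove(feedUrl);
        if (child != null) {
            child.close();
        }
    }
}

The FileSplitter output could then be wired to it in the flow with .handle("feedRegistrar", "register").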
Related
Looking around StackOverflow, I see this answer to a similar problem - according to the Twitter4J documentation, TwitterStream#addListener takes a callback function. I have naively written my class as follows:
@Stateless
@LocalBean
public class TwitterListenerThread implements Runnable {

    private TwitterStream twitterStream;

    public TwitterListenerThread() {}

    @EJB
    private TwitterDispatcher dispatcher;

    @Override
    public void run() {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(true)
                .setJSONStoreEnabled(true)
                .setOAuthConsumerKey(Properties.getProperty("twitter_OAuthConsumerKey"))
                .setOAuthConsumerSecret(Properties.getProperty("twitter_OAuthConsumerSecret"))
                .setOAuthAccessToken(Properties.getProperty("twitter_OAuthAccessToken"))
                .setOAuthAccessTokenSecret(Properties.getProperty("twitter_OAuthAccessTokenSecret"));
        twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
        UserStreamListener listener = new UserStreamListener() {
            @Override
            public void onStatus(Status status) {
                dispatcher.dispatch(status);
            }
            // Standard code
        };
        twitterStream.addListener(listener);
        // Listen for all user activity
        String user = Properties.getProperty("twitter-userid");
        String[] users = {user};
        twitterStream.user(users);
    }
}
Now, on my colleague's PC this soon fails with an "attempt to invoke when container is undeployed" error on the dispatcher.dispatch(status); line. I understand the reason as being that the Twitter4J threading model does not play well with the Java EE EJB model, but I cannot work out what to do based on the linked answer - how would I use a Message-Driven Bean to listen in to the Twitter stream?
After a little thinking, I worked out that the solution offered was to write a separate application that uses plain, non-annotated Java SE code to feed a JMS queue with tweets, and then have my main application use a Message-Driven Bean to listen to the queue.
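For illustration, the consuming side might look roughly like the sketch below. The jms/tweets destination and the TweetQueueListener name are made up; destinationLookup is the JMS 2.0 activation property (older containers use destination); and TwitterObjectFactory is the Twitter4J 4.x name (DataObjectFactory in 3.x). The Java SE producer would need setJSONStoreEnabled(true) and would ship TwitterObjectFactory.getRawJSON(status) as a TextMessage.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.EJBException;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import twitter4j.Status;
import twitter4j.TwitterException;
import twitter4j.TwitterObjectFactory;

// Hypothetical MDB consuming raw tweet JSON from a queue fed by the
// standalone Java SE process. Inside the container, the EJB reference
// to TwitterDispatcher is safe to use, unlike in the Twitter4J thread.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                propertyValue = "jms/tweets")
})
public class TweetQueueListener implements MessageListener {

    @EJB
    private TwitterDispatcher dispatcher;

    @Override
    public void onMessage(Message message) {
        try {
            String rawJson = ((TextMessage) message).getText();
            // Rebuild the Status object from its raw JSON form.
            Status status = TwitterObjectFactory.createStatus(rawJson);
            dispatcher.dispatch(status);
        } catch (JMSException | TwitterException e) {
            throw new EJBException(e);
        }
    }
}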
However, I was not satisfied with that work-around, so I searched a little more, and found Issue TFJ-285, Allow for alternative implementations of Dispatcher classes:
Now it is possible to introduce your own dispatcher implementation.
It can be Quartz based, it can be MDB based, and it can be EJB-timer based.
By default, Twitter4J still uses traditional and transient thread based dispatcher.
Implement a class implementing the twitter4j.internal.async.Dispatcher interface
put the class in the classpath
set -Dtwitter4j.async.dispatcherImpl to locate your dispatcher implementation
This is the default implementation on GitHub, so one could replace the:
private final ExecutorService executorService;
with a:
private final ManagedExecutorService executorService;
And, in theory, Bob's your uncle. If I ever get this working, I shall post the code here.
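In the meantime, here is a rough, untested sketch of what such a dispatcher might look like. The invokeLater/shutdown methods mirror what the Twitter4J 3.x Dispatcher interface appears to expose, and I haven't verified how Twitter4J instantiates the class (it may expect a Configuration-taking constructor), so treat all of this as an assumption:

import javax.enterprise.concurrent.ManagedExecutorService;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import twitter4j.internal.async.Dispatcher;

// Hypothetical dispatcher backed by the container's managed executor,
// so Twitter4J callbacks run on container-managed threads instead of
// its own transient ones.
public class ContainerManagedDispatcher implements Dispatcher {

    private final ManagedExecutorService executorService;

    public ContainerManagedDispatcher() {
        try {
            // Java EE 7 default managed executor.
            executorService = (ManagedExecutorService) new InitialContext()
                    .lookup("java:comp/DefaultManagedExecutorService");
        } catch (NamingException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public void invokeLater(Runnable task) {
        executorService.submit(task);
    }

    @Override
    public void shutdown() {
        // The container owns the executor's lifecycle; nothing to release.
    }
}

You would then start the JVM with -Dtwitter4j.async.dispatcherImpl=com.example.ContainerManagedDispatcher (package name invented).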
I'm new to Java EE 6 so I apologize if the answer to this question is obvious. I have a task that must run hourly to rebuild a Solr index from a database. I also want the rebuild to occur when the app is deployed. My gut instinct is that this should work:
@Singleton
@Startup
public class Rebuilder {

    @Inject private ProposalDao proposalDao;
    @Inject private SolrServer solrServer;

    @Schedule(hour = "*", minute = "0", second = "0")
    public void rebuildIndex() {
        // do the rebuild here
    }
}
Since I'm using myBatis, I have written this producer:
public class ProposalSessionProvider {

    private static final String CONFIGURATION_FILE = "...";

    // The original snippet omitted these declarations; a factory field and
    // a logger (framework assumed) are needed for it to compile.
    private static final SqlSessionFactory sessFactory;
    private static final Logger log = Logger.getLogger(ProposalSessionProvider.class.getName());

    static {
        try {
            sessFactory = new SqlSessionFactoryBuilder().build(
                    Resources.getResourceAsReader(CONFIGURATION_FILE));
        }
        catch (IOException ex) {
            throw new RuntimeException("Error configuring MyBatis: " + ex.getMessage(), ex);
        }
    }

    @Produces
    public ProposalsDao openSession() {
        log.info("Connecting to the database");
        SqlSession session = sessFactory.openSession();
        return session.getMapper(ProposalsDao.class);
    }
}
So I have three concerns:
What's the appropriate way to trigger a rebuild at deployment time? A @PostConstruct method?
Who is responsible for closing the database connection, and how should that happen? I'm using myBatis which is, I believe, pretty ignorant of the Java EE lifecycle. It seems like if I use @Singleton the connections will never be released, but is it even meaningful to put @Startup on a @Stateless bean?
Should the Rebuilder be a singleton or not? It seems like if it is not, I couldn't use @PostConstruct to handle the initial rebuild, or I'll get double rebuilds every hour.
I'm not really sure how to proceed here. Thanks for your time.
I don't know MyBatis, but I can tell you that a @Schedule job is transactional. That said, I'm not sure a JTA-managed transaction will apply here, given the way you retrieve the session. Isn't there a way to retrieve a persistence context in MyBatis? For the trigger part, IMHO @Startup will do the job properly, and it will need a singleton bean. I'm not able to tell you which of the two methods you propose is the best one, though.
For the scheduling part, you are correct; I'd write the index-building logic in a separate class, and have both a (singleton?) @Startup bean and a @Schedule-annotated method in a separate class call it.
JMS could be used by said beans to trigger the index rebuilding, if you don't want to have a dependency between the index-building code, and the triggering code in said classes.
I don't know MyBatis well enough, but if your connection is managed by a data source @Resource, then I believe it could indeed benefit from CMT.
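To make the suggested split concrete, here is a minimal sketch under those assumptions. IndexBuilder and RebuildTrigger are invented names, the two classes would live in separate files, and persistent = false is optional:

import javax.annotation.PostConstruct;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;

// Plain bean holding the actual index-building logic, kept separate from
// the code that decides when to run it.
public class IndexBuilder {

    @Inject private ProposalDao proposalDao;
    @Inject private SolrServer solrServer;

    public void rebuild() {
        // fetch proposals via proposalDao and push them into solrServer
    }
}

// One startup singleton owns both triggers: deployment time and hourly.
@Singleton
@Startup
public class RebuildTrigger {

    @Inject private IndexBuilder indexBuilder;

    @PostConstruct // fires once when the application is deployed
    public void onDeploy() {
        indexBuilder.rebuild();
    }

    @Schedule(hour = "*", minute = "0", second = "0", persistent = false)
    public void hourly() {
        indexBuilder.rebuild();
    }
}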
My goal is to have one data context (MainDbContext) per HTTP request in ASP.NET MVC and dispose the data context when the request ends.
I'm using the following StructureMap configuration:
public static class ContainerConfigurer
{
    public static void Configure()
    {
        ObjectFactory.Initialize(x =>
        {
            x.For<MainDbContext>().HttpContextScoped();
        });
    }
}
Whenever I need a MainDbContext, I'm using this code:
var dbContext = ObjectFactory.GetInstance<MainDbContext>();
This is working as expected: only one data context is being created per HTTP request. The problem is, MainDbContext is not being disposed at the end of the request.
How can I configure my ObjectFactory to dispose the data context when the HTTP request finishes? Or is this just something I need to do manually using Application_EndRequest() in Global.asax?
Update
I just tried adding the following code to Global.asax:
protected virtual void Application_EndRequest()
{
ObjectFactory.GetInstance<MainDbContext>().Dispose();
}
As expected, this solves the problem. I'm still wondering if there's any way to do this automatically with StructureMap, however.
Instead of:
x.For<MainDbContext>().HttpContextScoped();
Try:
x.For<MainDbContext>().HttpContextScoped().Use(() => new MainDbContext());
Also, normally it's repository classes that need a DB context. So instead of calling ObjectFactory.GetInstance<MainDbContext>(), have your repositories take some DB context interface and configure StructureMap to inject the MainDbContext into them. Then make StructureMap inject repositories into controllers, and so on.
In Application_EndRequest:
protected void Application_EndRequest()
{
ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
}
Using a nested container is the only way to get StructureMap to automatically dispose objects. If you're not using that technique, the only way is to dispose the objects yourself, either the way the OP described (pulling the object from the container and disposing it; see this NHibernate example for one way to do it) or by scoping the object to HttpRequest and calling ReleaseAndDisposeAllHttpScopedObjects as Darin described.
I'm trying to follow the examples provided in this post, to create a dynamic list constraint in Alfresco 3.3.
So, I've created my own class extending ListOfValuesConstraint:
public class MyConstraint extends ListOfValuesConstraint {

    private static ServiceRegistry registry;

    // Assumption: 'tipo' (the category description to match) was declared
    // elsewhere in the real class; the original snippet omitted it.
    private String tipo;

    @Override
    public void initialize() {
        loadData();
    }

    @Override
    public List getAllowedValues() {
        //loadData();
        return super.getAllowedValues();
    }

    @Override
    public void setAllowedValues(List allowedValues) {
    }

    protected void loadData() {
        List<String> values = new LinkedList<String>();
        String query = "+TYPE:\"cm:category\" +@cm\\:description:\"" + tipo + "\"";
        StoreRef storeRef = new StoreRef("workspace://SpacesStore");
        ResultSet resultSet = registry.getSearchService().query(storeRef, SearchService.LANGUAGE_LUCENE, query);
        // ... values.add(data obtained using searchService and nodeService) ...
        if (values.isEmpty()) {
            values.add("-");
        }
        super.setAllowedValues(values);
    }
}
ServiceRegistry reference is injected by Spring, and it's working fine. If I only call loadData() from initialize(), it executes the Lucene query, gets the data, and the dropdown displays it correctly. Only that it's not dynamic: data doesn't get refreshed unless I restart the Alfresco server.
getAllowedValues() is called each time the UI has to display a property having this constraint. The idea on the referred post is to call loadData() from getAllowedValues() too, so the values will be actually dynamic. But when I do this, I don't get any data. The Lucene query is the same, but returns 0 results, so my dropdown only displays -.
BTW, the query I'm doing is: +TYPE:"cm:category" +@cm\:description:"something here", and it's the same in each case. It works from initialize, but doesn't from getAllowedValues.
Any ideas on why is this happening, or how can I solve it?
Thanks
Edit: we upgraded to Alfresco 3.3.0g Community yesterday, but we're still having the same issues.
This dynamic list-of-values constraint is a bad idea, and I'll tell you why:
The Alfresco repository should be in a valid state all the time. Your (dynamic) list of constraints will change (that's why you want it to be dynamic). Adding items would not be a problem, but editing and removing items are. If you remove an item from your option list, the nodes in the repository with this property value will be invalid.
You will not be able to fix this easily. The standard UI will fail on nodes in an invalid state. Simply editing this value and setting it to something valid will not work. You have been warned.
Just because the default UI widget for a ListConstraint is a dropdown, it doesn't follow that every dropdown should be a ListConstraint. ListConstraints are designed for something like a Status property: { Draft, Waiting Approval, Approved }. Not for a list of customer names.
I have seen this same topic come up again and again over the last few years. What you actually want is to let the user choose a value from a dynamic list of options (a combo box). This is a UI problem, not a dictionary-model issue. You should set up something like this with the web-config-context.xml (Alfresco web UI) or in Alfresco Share. The latter is more flexible, and I would recommend taking that path.
I'm trying to work out the best method to perform logging in the application I'm currently developing.
Right now, I have a Log table that stores the username, timestamp, action, controller, and a message. When a controller is instantiated, it gets the IoC info through Castle Windsor.
For example, my "Sites" controller is created as follows:
private ISitesRepository siteRepository;
private ILogWriter logWriter;

public SiteController(ISitesRepository siteRepository, ILogWriter logWriter)
{
    this.siteRepository = siteRepository;
    this.logWriter = logWriter;
}
The log writer has a function that creates and inserts a log entry (WriteToLog). Within the Sites controller's Edit and Create actions, it calls the WriteToLog function.
This is working and doing its job, but my question is: do I really need to set up each controller this way, passing through the ILogWriter interface/repository? It struck me that I could possibly set up a LogController and just have that do the "heavy lifting" of writing to my logs.
That way, I wouldn't have to mess with the IoC stuff in every other controller. Is it possible to execute an action on another controller (for example, a LogController->WriteLog)? I'm not sure how that would be done without doing a redirect...
Could you use an abstract class, with a static property referencing your log writer?
Something like this:
public abstract class BaseController
{
    public static ILogWriter Logwriter { get; set; }

    static BaseController()
    {
        Logwriter = YourFactory.GetLogwriter();
    }
}

public class YourController : BaseController
{
    public YourController(ISitesRepository siteRepository)
    {
    }
}
OK, after much head scratching, I think I found an acceptable solution.
I implemented my logging action as a custom action filter, like so:
public class LogAction : ActionFilterAttribute, IActionFilter
{
    public LogLevel loglevel;
    public string message;

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ILogWriter logWriter = AppServiceFactory.Instance.Create<ILogWriter>();
        logWriter.WriteToLog(
            filterContext.ActionDescriptor.ControllerDescriptor.ControllerName,
            filterContext.ActionDescriptor.ActionName,
            loglevel,
            filterContext.HttpContext.Timestamp,
            filterContext.HttpContext.User.Identity.Name.ToString(),
            message + "(id=" + filterContext.RouteData.Values["id"] + ")");
    }
}
But I ran into a wall trying to get the IoC to work in a custom attribute filter. Scouring StackOverflow and Google, I found that it's sort of difficult to do, with talk about using different wrappers, action invokers, etc., which all seemed more complicated than I was really willing to deal with.
Trying to learn more about IoC (I'm still very new at this), I found this article, which really helped point me in the right direction. I added his sealed AppServiceFactory class with my WindsorControllerFactory, and it worked like a charm.
As I said, I'm very new to MVC and this IoC stuff, so I'm not sure this is an ideal way of handling things, but it seems simple and it works so far. I'd welcome any comments or criticisms on handling it through this method.
UPDATE
Figured out a different way of doing this: I created a function in my WebUI project as follows:
public static class Loggers
{
    public static void WriteLog(ControllerContext controllerContext, LogLevel logLevel, string message)
    {
        ILogWriter logWriter = AppServiceFactory.Instance.Create<ILogWriter>();
        logWriter.WriteToLog(
            controllerContext.RouteData.Values["controller"].ToString(),
            controllerContext.RouteData.Values["action"].ToString(),
            logLevel,
            controllerContext.HttpContext.Timestamp,
            controllerContext.HttpContext.User.Identity.Name.ToString(),
            message);
    }
}
Now, wherever I want to log something, I can call
Loggers.WriteLog(
this.ControllerContext,
LogLevel.Membership,
"Removed role '" + role + "'" + " from user " + _userService.Get(id).UserName );
to write a record to the log. This gives me a lot more flexibility with my "message" content, and it solves the problem of including logging in the Global.asax file, which would've been difficult if not impossible using the attribute filters. I'll leave the rest, as it may be of use to someone else, but I think this is the way I'll go on this.
As usual, things are simpler in MVC than I originally think they will be :)