Servlet Context, Collections and Serialization

I often work on Java EE applications, and today I'm facing an issue: serializing Collections in the servlet context. In my case, my app contains a Servlet Context Listener and many servlets.
The context listener loads a ConcurrentHashMap containing several lists of products at initialization, plus a task scheduler to refresh those lists.
The servlets are supposed to access the right list, based on user-provided parameters.
Here is the code of my contextInitialized listener:
public void contextInitialized(ServletContextEvent event) {
    app = event.getServletContext();
    myMap = new ConcurrentHashMap<String, Catalog>();
    myMap.put("FR", new Catalog());
    myMap.put("UK", new Catalog());
    app.setAttribute("catalogue", myMap);
    scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(new AutomateRefresh(), 0, 60, TimeUnit.MINUTES);
}
To demonstrate the problem, I created a servlet that displays every context attribute that is a Boolean or a ConcurrentHashMap.
I'm not surprised to find this kind of result:
javax.servlet.context.tempdir is equal to...
Working is equal to... true
org.apache.catalina.resources is equal to...
org.apache.tomcat.InstanceManager is equal to...
org.apache.catalina.jsp_classpath is equal to...
javax.websocket.server.ServerContainer is equal to...
org.apache.jasper.compiler.TldCache is equal to...
catalogue is equal to...
org.apache.tomcat.JarScanner is equal to...
As you can see, my two custom keys (the Boolean Working and the ConcurrentHashMap catalogue) exist. But catalogue is empty whenever it is accessed outside the Listener.
I found that:
The serialization form of java.util.HashMap doesn't serialize the buckets themselves, and the hash code is not part of the persisted state.
Source: Serializing and deserializing a map with key as string
For many projects a serializable and thread-safe collection is useful, and I'm probably not the only one looking for one (see the number of topics about the servlet context).
ConcurrentHashMap is thread-safe, but I am unable to retrieve my data in the other servlets (in the same app). Is there a Collection implementation that is both thread-safe and serializable (required by WebLogic server policy)? Or am I using it the wrong way?
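As an aside, ConcurrentHashMap itself already satisfies both requirements: it is thread-safe and implements Serializable, as long as the keys and values stored in it are serializable too (so Catalog would need to implement Serializable). A minimal standalone sketch of a serialization round-trip, not taken from the app above:
import java.io.*;
import java.util.concurrent.ConcurrentHashMap;

public class SerializationCheck {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("FR", "catalog-fr");

        // write the map out, then read it back in
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(map);
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        ConcurrentHashMap<String, String> copy =
                (ConcurrentHashMap<String, String>) in.readObject();

        System.out.println(copy.get("FR")); // prints catalog-fr
    }
}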
EDIT: Code of "Display context servlet"
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    System.out.println("List of all values in the context:");
    Enumeration<?> e = getServletContext().getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        System.out.print("\n" + name + " is equal to... ");
        // Get the value of the attribute
        Object value = getServletContext().getAttribute(name);
        if (value instanceof ConcurrentHashMap) {
            ConcurrentHashMap<String, Catalog> map = (ConcurrentHashMap<String, Catalog>) value;
            for (Entry<String, Catalog> entry : map.entrySet()) {
                System.out.print("\t" + entry.getKey() + "=" + entry.getValue());
            }
        } else if (value instanceof Boolean) {
            System.out.print((Boolean) value);
        }
    }
}
EDIT2: As BalusC suggested, the HashMap may be null (a rookie mistake?).
Here is the task code. The task lives in the Listener; the Listener initializes the HashMap with new empty objects, and the task refreshes those objects when the webapp starts and then every hour.
public class AutomateRefresh implements Runnable {
    public void run() {
        System.out.println("Scheduler trigger");
        if (app.getAttribute("catalogue") instanceof ConcurrentHashMap) {
            myMap = (ConcurrentHashMap<String, Catalog>) app.getAttribute("catalogue");
            // Autorefresh
            Iterator<Entry<String, Catalog>> it = myMap.entrySet().iterator();
            while (it.hasNext()) {
                Entry<String, Catalog> entry = it.next();
                ((Catalog) entry.getValue()).setValid(false); // no longer valid for further requests
                try {
                    ((Catalog) entry.getValue()).refreshdb((String) entry.getKey()); // TODO rework to use REST API
                } catch (SQLException e) {
                    e.printStackTrace();
                }
                it.remove(); // avoids a ConcurrentModificationException
                app.setAttribute("catalogue", myMap);
                app.setAttribute("Working", true);
                System.out.println((String) entry.getKey() + " = " + (Catalog) entry.getValue());
            }
        } else {
            System.out.println("Catalogue is not an instance of ConcurrentHashMap as expected.");
            app.setAttribute("Working", false);
        }
    }
}
When the task is triggered, it updates the data stored by each Catalog in the Context. It also displays the data in the console.
Results:
Refresh Catalog for UK with DB
UK = Catalog [list size is : 0 valid=true, lastToken=notoken]
Refresh Catalog for FR with DB
FR = Catalog [list size is : 30 valid=true, lastToken=notoken]
Catalog is a class with an ArrayList, a boolean and a String. Everything seems correct: UK is supposed to be empty (but not null) and FR is supposed to contain 30 products.
I still cannot access this data in the other servlets.

I found the origin of the problem, a rookie mistake as expected:
I tried to update this way, assuming it would update the object directly in the ConcurrentHashMap:
((Catalog)entry.getValue()).refreshdb((String) entry.getKey());
I replaced it with:
Catalog myCatalog = (Catalog)entry.getValue();
myCatalog.refreshdb((String) entry.getKey());
myMap.put((String)entry.getKey(), myCatalog);
And it works now.
I still don't know why my objects were accessible from the listener; they are not supposed to work that way. Maybe a strange behavior of my server? Anyway, this issue is fixed.
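As an aside, map values are shared by reference: mutating the instance returned by entry.getValue() is visible to every holder of the map even without a put, so the it.remove() in the loop (which deletes each entry after refreshing it) is a plausible reason other readers saw an empty map until the put re-added the entries. A minimal standalone sketch of that reference behavior:
import java.util.concurrent.ConcurrentHashMap;

public class ReferenceDemo {

    static class Box {
        int n;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Box> map = new ConcurrentHashMap<>();
        map.put("FR", new Box());
        map.get("FR").n = 30;                // mutate in place, no put() needed
        System.out.println(map.get("FR").n); // prints 30: the map sees the change
    }
}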
Thanks to BalusC for his help.

Related

Vaadin - grid.getDataProvider().refreshAll() does not work after refreshing the browser

I have an issue with push after refreshing the tab that contains my grid. I am using Vaadin 8.0.4 and an up-to-date Google Chrome, and my example is based on https://github.com/vaadin/archetype-application-example
My application consists of data stored in MongoDB. When I make a change directly in the db, it is reflected in the grid every 30 seconds or so via push, and this always works on a single tab. The problem appears when I refresh the tab or create a new one: the push seems to be disconnected and my grid is no longer updated. The strange thing is that I added @PreserveOnRefresh, and in the first tab that accesses the application the push keeps working even after a refresh, which is very strange.
This instance checks for changes in my db:
private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
and I update the grid with:
grid.getDataProvider().refreshAll()
I even tried broadcasting to the other tabs using the pattern described in the Book of Vaadin, section 12.16.4, "Broadcasting to Other Users": the notifications do keep arriving in all tabs, but the grid only refreshes in the original one.
Update:
In this example application the problem is actually the login: when I remove it, everything works perfectly with push. But only when I remove the login.
@Override
public void receiveBroadcast() {
    access(() -> {
        //Notification.show(message);
        //grid.getDataProvider().refreshAll();
        getNavigator().removeView(MonitorCrudView.VIEW_NAME);
        getNavigator().addView(MonitorCrudView.VIEW_NAME, new MonitorCrudView(this));
        getNavigator().navigateTo(MonitorCrudView.VIEW_NAME);
        Notification.show("Grid updated", Notification.Type.TRAY_NOTIFICATION);
    });
}
The detail is that when I have the AccessControl enabled and I log in as admin, executing the above method gives me an exception of type "No request bound to current thread", coming from the CurrentUser class:
https://github.com/vaadin/archetype-application-example/blob/master/mockapp-ui/src/main/java/org/vaadin/mockapp/samples/authentication/CurrentUser.java
But here it changes a little in Vaadin 8.0.4:
public final class CurrentUser {

    /**
     * The attribute key used to store the username in the session.
     */
    public static final String CURRENT_USER_SESSION_ATTRIBUTE_KEY = CurrentUser.class
            .getCanonicalName();

    private CurrentUser() {
    }

    /**
     * Returns the name of the current user stored in the current session, or an
     * empty string if no user name is stored.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static String get() {
        String currentUser = (String) getCurrentRequest().getWrappedSession()
                .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        if (currentUser == null) {
            return "";
        } else {
            return currentUser;
        }
    }

    /**
     * Sets the name of the current user and stores it in the current session.
     * Using a {@code null} username will remove the username from the session.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static void set(String currentUser) {
        if (currentUser == null) {
            getCurrentRequest().getWrappedSession().removeAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        } else {
            getCurrentRequest().getWrappedSession().setAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
        }
    }

    private static VaadinRequest getCurrentRequest() {
        VaadinRequest request = VaadinService.getCurrentRequest();
        if (request == null) {
            throw new IllegalStateException(
                    "No request bound to current thread");
        }
        return request;
    }
}
UPDATE:
https://github.com/rucko24/testView/blob/master/MyApp-ui/src/main/java/example/samples/crud/SampleCrudView.java
In this class I added the button that broadcasts to all UIs.
Log in as admin.
Click on "Update grid".
It should throw an exception of this type:
java.util.concurrent.ExecutionException:
java.lang.IllegalStateException: No request bound to current thread
It does not keep throwing the exception after the UI is refreshed, but when opening the app in an incognito tab it always throws the exception once before updating.
With the base project plus MongoDB and the
private static ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
I always get the same exception as above, and the view never changes via push.
Disclaimer: This is better suited as a comment but it does not fit the allocated space.
The VaadinService.getCurrentRequest() API doc states that:
The current response can not be used in e.g. background threads because of the way server implementations reuse response instances.
At the same time, the UI.access() javadoc is somewhat ambiguous, stating that:
Please note that the runnable might be invoked on a different thread or later on the current thread, which means that custom thread locals might not have the expected values when the command is executed
The above statements kind of explain why VaadinService.getCurrentRequest() is null in your getCurrentRequest() method.
Nonetheless, it seems that UI.getCurrent() returns an instance when running in that background thread, also suggested by this vaadin forum post and vaadin book:
Your code is not thread safe as it does not lock the VaadinSession before accessing the UI. The preferred pattern is using the UI.access and VaadinSession.access methods as described in Book of Vaadin section 11.16.3. Inside an access block Vaadin automatically sets the relevant threadlocals in addition to properly handling session locking.
In conclusion, I'd suggest replacing all the calls to getCurrentRequest().getWrappedSession() with UI.getCurrent().getSession(), e.g.:
UI.getCurrent().getSession().getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
or
UI.getCurrent().getSession().setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
I tested this with your sample and it worked fine.
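For illustration, a minimal sketch of CurrentUser.get() and set() rewritten that way; it assumes Vaadin 8's VaadinSession API (reached via com.vaadin.ui.UI) and keeps the original attribute key:
public static String get() {
    // VaadinSession is available inside UI.access() blocks, unlike the request
    String currentUser = (String) UI.getCurrent().getSession()
            .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
    return currentUser == null ? "" : currentUser;
}

public static void set(String currentUser) {
    // passing null as the value clears the attribute from the session
    UI.getCurrent().getSession()
            .setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
}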

Using FirebaseRemoteConfig, I am confused about whether the setDefaults method overwrites the cached config values from the last fetch every time we run it.

I am using a singleton instance of the FirebaseRemoteConfig class, which is generated using the following provider method.
@Provides
@Singleton
FirebaseRemoteConfig provideFirebaseRemoteConfig() {
    final FirebaseRemoteConfig mFirebaseRemoteConfig = FirebaseRemoteConfig.getInstance();
    FirebaseRemoteConfigSettings configSettings = new FirebaseRemoteConfigSettings.Builder()
            .setDeveloperModeEnabled(BuildConfig.DEBUG)
            .build();
    mFirebaseRemoteConfig.setConfigSettings(configSettings);
    mFirebaseRemoteConfig.setDefaults(R.xml.remote_config_defaults);
    long cacheExpiration = 3600 * 3; // 3 hours in seconds
    if (mFirebaseRemoteConfig.getInfo().getConfigSettings().isDeveloperModeEnabled()) {
        cacheExpiration = 0;
    }
    mFirebaseRemoteConfig.fetch(cacheExpiration)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(@NonNull Task<Void> task) {
                    if (task.isSuccessful()) {
                        // Once the config is successfully fetched it must be activated
                        // before newly fetched values are returned.
                        mFirebaseRemoteConfig.activateFetched();
                    } else {
                        FirebaseCrash.log("RemoteConfig fetch failed at " + System.currentTimeMillis());
                    }
                }
            });
    return mFirebaseRemoteConfig;
}
Now the issue is this: since I call setDefaults every time I generate the singleton instance, and since the last fetched config values have an expiration time, doesn't that mean the config values will revert to the hardcoded initial defaults instead of picking up the last fetched config? That is, in case the app is unable to fetch from the server after the last fetched config values expire.
I looked at the docs, but there was no specific detail on how the caching works beyond a simple overview. People with RemoteConfig experience can probably answer this easily, but I am using it for the first time, so any help is appreciated.
Nope. setDefaults does not overwrite any previously fetched values you might have received from Remote Config.
From Remote Config's perspective, the "expiration time" doesn't mean that the previously fetched values are considered invalid. It just means that it's time to go out onto the network and see if any new values have appeared. If they haven't (or if the network can't be reached), Remote Config keeps whatever values it downloaded last time.
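In other words, the lookup order at read time works roughly like this (a sketch; "welcome_message" is a hypothetical key):
String msg = mFirebaseRemoteConfig.getString("welcome_message");
// Resolution order, roughly:
// 1. the last fetched AND activated value, if one exists
// 2. otherwise the in-app default registered via setDefaults()
// 3. otherwise the static default for the type ("" for strings)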

How to dynamically register a Feed Inbound Adapter in Spring Integration?

I'm trying to implement an RSS/Atom feed aggregator in spring-integration, and I am primarily using the Java DSL to write my IntegrationFlow. A requirement of this aggregator is that feeds can be added/removed at runtime. That is to say, the feeds are not known at design time.
I found it simple to use the basic Feed.inboundAdapter() with a test url, extract the links out of the feed with a transformer, and then pass them on to an outbound-file-adapter to save the links to a file. However, I have gotten very stuck trying to read the (thousands of) feed urls from an inbound-file-adapter, run the file through a FileSplitter, and then pass each resulting Message<String> containing a feed url on to register a new Feed.inboundAdapter(). Is this not possible with the Java DSL?
Ideally I would love it if I could do the following:
@Bean
public IntegrationFlow getFeedsFromFile() throws MalformedURLException {
    return IntegrationFlows.from(inboundFileChannel(), e -> e.poller(Pollers.fixedDelay(10000)))
            .handle(new FileSplitter())
            // register a new Feed.inboundAdapter(payload.toString()) for each
            // Message<String> containing a feed url coming from the FileSplitter
            .transform(extractLinkFromFeedEntry())
            .handle(appendLinkToFile())
            .get();
}
Though after reading through the Spring Integration Java DSL code multiple times (and learning a tonne of stuff along the way), I just can't see that it's possible to do it this way. So... A) is it? B) should it be? C) Suggestions?
It almost feels like I should be able to take the output of .handle(new FileSplitter()) and pass it into .handleWithAdapter(Feed.inboundAdapter(/*stuff here*/)), but the DSL only offers outbound adapters there. Inbound adapters are really just a subclass of AbstractMessageSource, and it seems the only place you can specify one of those is as an argument to the IntegrationFlows.from(/*stuff here*/) method.
I would have thought it would be possible to take the input from a file, split it line by line, use that output to register inbound feed adapters, poll those feeds, extract the new links from the feeds as they appear, and append them to a file. It appears that it's not.
Is there some clever subclassing I can do to make this work??
Failing that... and I suspect this is going to be the answer, I found the Spring Integration Dynamic FTP Channel Resolver example and this answer on how to adapt it to dynamically register stuff for the inbound case...
So is this the way to go? Any help/guidance appreciated. After poring over the DSL code and reading documentation for days, I think I'll have a go at implementing the dynamic FTP example and adapting it to work with FeedEntryMessageSource... in which case my question is: that dynamic FTP example works with XML configuration, but is it possible to do it with either Java config or the Java DSL?
Update
I've implemented the solution as follows:
@SpringBootApplication
class MonsterFeedApplication {

    public static void main(String[] args) throws IOException {
        ConfigurableApplicationContext parent = SpringApplication.run(MonsterFeedApplication.class, args);
        parent.setId("parent");
        String[] feedUrls = {
                "https://1nichi.wordpress.com/feed/",
                "http://jcmuofficialblog.com/feed/"};
        List<ConfigurableApplicationContext> children = new ArrayList<>();
        int n = 0;
        for (String feedUrl : feedUrls) {
            AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
            child.setId("child" + ++n);
            children.add(child);
            child.setParent(parent);
            child.register(DynamicFeedAdapter.class);
            StandardEnvironment env = new StandardEnvironment();
            Properties props = new Properties();
            props.setProperty("feed.url", feedUrl);
            PropertiesPropertySource pps = new PropertiesPropertySource("feed", props);
            env.getPropertySources().addLast(pps);
            child.setEnvironment(env);
            child.refresh();
        }
        System.out.println("Press any key to exit...");
        System.in.read();
        for (ConfigurableApplicationContext child : children) {
            child.close();
        }
        parent.close();
    }

    @Bean
    public IntegrationFlow aggregateFeeds() {
        return IntegrationFlows.from("feedChannel")
                .transform(extractLinkFromFeed())
                .handle(System.out::println)
                .get();
    }

    @Bean
    public MessageChannel feedChannel() {
        return new DirectChannel();
    }

    @Bean
    public AbstractPayloadTransformer<SyndEntry, String> extractLinkFromFeed() {
        return new AbstractPayloadTransformer<SyndEntry, String>() {
            @Override
            protected String transformPayload(SyndEntry payload) throws Exception {
                return payload.getLink();
            }
        };
    }
}
DynamicFeedAdapter.java
@Configuration
@EnableIntegration
public class DynamicFeedAdapter {

    @Value("${feed.url}")
    public String feedUrl;

    @Bean
    public static PropertySourcesPlaceholderConfigurer pspc() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public IntegrationFlow feedAdapter() throws MalformedURLException {
        URL url = new URL(feedUrl);
        return IntegrationFlows
                .from(s -> s.feed(url, "feedTest"),
                        e -> e.poller(p -> p.fixedDelay(10000)))
                .channel("feedChannel")
                .get();
    }
}
And this works IF and only IF I have one of the urls defined in application.properties as feed.url=[insert url here]. Otherwise it fails, telling me 'unable to resolve property {feed.url}'. I suspect what is happening is that the @Bean definitions in DynamicFeedAdapter.java all get eagerly initialized as singletons, so aside from the beans manually created in the for loop in the main method (which work fine, because they have the feed.url property injected), we have a stray singleton that has been eagerly initialized; if there is no feed.url defined in application.properties, it can't resolve the property and everything goes bang. Now, from what I know of Spring, it should be possible to @Lazy-initialize the beans in DynamicFeedAdapter.java so we don't wind up with this one unwanted stray singleton problem child. The problem is that if I just mark feedAdapter() @Lazy, the beans never get initialized. How do I initialize them myself?
Update - problem solved
Without having tested it, I think the problem is that boot is finding
the DynamicFeedAdapter during its component scan. A simple solution is
to move it to a sibling package. If MonsterFeedApplication is in
com.acme.foo, then put the adapter config class in com.acme.bar. That
way, boot won't consider it "part" of the application
This was indeed the problem. After implementing Gary's suggestion, everything works perfect.
See the answer to this question and its follow up for a similar question about inbound mail adapters.
In essence, each feed adapter is created in a child context that is parameterized.
In that case the child contexts are created in a main() method but there's no reason it couldn't be done in a service invoked by .handle().
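For illustration, a sketch of that last idea: the child-context bootstrap from main() moved into a bean that a flow could invoke via .handle("feedRegistrar", "register"). The FeedRegistrar name and wiring are illustrative, not from the original answer; it reuses the DynamicFeedAdapter config class shown above.
import java.util.Properties;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.StandardEnvironment;
import org.springframework.stereotype.Service;

@Service
public class FeedRegistrar {

    @Autowired
    private ConfigurableApplicationContext parent;

    // Called with each feed url emitted by the FileSplitter
    public void register(String feedUrl) {
        AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
        child.setParent(parent);
        child.register(DynamicFeedAdapter.class);

        // expose the url to the child's @Value("${feed.url}") placeholder
        StandardEnvironment env = new StandardEnvironment();
        Properties props = new Properties();
        props.setProperty("feed.url", feedUrl);
        env.getPropertySources().addLast(new PropertiesPropertySource("feed", props));
        child.setEnvironment(env);

        child.refresh(); // the child's feed adapter starts polling into "feedChannel"
    }
}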

Concurrency issues with adding/removing from HashMap with Firebase

I'm having an issue removing from and adding to a HashMap that I'm storing as a field in a Firebase object.
I start with adding a key: {key1=true}
Then I delete a key while adding another one: {key1=true, key2=true} -> {key2=true}.
So I'm expecting the end result to be just key2, but what I'm getting is empty {}. I understand how it is happening but fail to understand how to fix it. The issue is that I'm doing the remove in a callback, which runs after the add.
Do I just have to refactor my code so that the remove doesn't happen in a callback? This sounds like a common issue, or are people just designing their code architecture better than I am?
For posterity's sake - I really do love the convenience of a HashMap, but there is a concurrency issue with it: setting the whole HashMap overwrites any other values that you may be setting concurrently.
I had mistakenly been setting the whole HashMap when calling updateChildren(), so I wrote a simple wrapper to handle adding/removing entries in the HashMap:
protected void updateFirebase(String FIELD, HashMap<String, Boolean> hashMap, String hashKey) {
    if (hashMap.containsKey(hashKey)) { // item being added
        HashMap<String, Object> addItem = new HashMap<>();
        addItem.put(hashKey, hashMap.get(hashKey));
        new Firebase(getObjectUrl()).child(this.key).child(FIELD).updateChildren(addItem);
    } else { // item being removed
        new Firebase(getObjectUrl()).child(this.key).child(FIELD).child(hashKey).removeValue();
    }
}
So this updates the single entry instead of rewriting the whole subtree, which removes the issue of concurrent updates clobbering each other with stale HashMap state.
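For example (the "members" field name is hypothetical), you mutate the local map first and then delegate the remote write to the wrapper:
// the local map already reflects the change before the wrapper is called
membersMap.put("key2", true);
updateFirebase("members", membersMap, "key2"); // key present locally -> updateChildren()

membersMap.remove("key1");
updateFirebase("members", membersMap, "key1"); // key absent locally -> removeValue()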

Dynamic list constraint on Alfresco

I'm trying to follow the examples provided in this post to create a dynamic list constraint in Alfresco 3.3.
So, I've created my own class extending ListOfValuesConstraint:
public class MyConstraint extends ListOfValuesConstraint {

    private static ServiceRegistry registry;

    @Override
    public void initialize() {
        loadData();
    }

    @Override
    public List getAllowedValues() {
        //loadData();
        return super.getAllowedValues();
    }

    @Override
    public void setAllowedValues(List allowedValues) {
    }

    protected void loadData() {
        List<String> values = new LinkedList<String>();
        String query = "+TYPE:\"cm:category\" +@cm\\:description:\"" + tipo + "\"";
        StoreRef storeRef = new StoreRef("workspace://SpacesStore");
        ResultSet resultSet = registry.getSearchService().query(storeRef, SearchService.LANGUAGE_LUCENE, query);
        // ... values.add(data obtained using searchService and nodeService) ...
        if (values.isEmpty()) {
            values.add("-");
        }
        super.setAllowedValues(values);
    }
}
The ServiceRegistry reference is injected by Spring, and it's working fine. If I only call loadData() from initialize(), it executes the Lucene query, gets the data, and the dropdown displays it correctly. Only it's not dynamic: the data doesn't get refreshed unless I restart the Alfresco server.
getAllowedValues() is called each time the UI has to display a property having this constraint. The idea on the referred post is to call loadData() from getAllowedValues() too, so the values will be actually dynamic. But when I do this, I don't get any data. The Lucene query is the same, but returns 0 results, so my dropdown only displays -.
BTW, the query I'm doing is +TYPE:"cm:category" +@cm\:description:"something here", and it's the same in each case. It works from initialize, but doesn't from getAllowedValues.
Any ideas on why is this happening, or how can I solve it?
Thanks
Edit: we upgraded to Alfresco 3.3.0g Community yesterday, but we're still having the same issues.
This dynamic-list-of-values constraint is a bad idea, and I'll tell you why:
The Alfresco repository should be in a valid state all the time. Your (dynamic) list of constraint values will change (that's why you want it to be dynamic). Adding items is not a problem, but editing and removing items are. If you remove an item from your option list, the nodes in the repository with that property value become invalid.
You will not be able to fix this easily. The standard UI will fail on nodes in an invalid state. Simply editing the value and setting it to something valid will not work. You have been warned.
The default UI widget for a ListConstraint happens to be a dropdown, but that doesn't mean every dropdown should be a ListConstraint. ListConstraints are designed for something like a Status property: { Draft, Waiting Approval, Approved }. Not for a list of customer names.
I have seen this same topic come up again and again over the last few years. What you actually want is to let the user choose a value from a dynamic list of options (a combo box). This is a UI problem, not a dictionary-model issue. You should set this up with the web-config-context.xml (Alfresco web UI) or in Alfresco Share. The latter is more flexible, and I would recommend taking that path.
