Hi, I have an issue with push after refreshing the tab that contains my grid. I am using Vaadin 8.0.4 and an up-to-date Google Chrome, and my example is based on https://github.com/vaadin/archetype-application-example
My application displays data stored in MongoDB. When I make a change directly in the DB, it is reflected in the grid after a short interval (30 seconds) via push. This always works while a single tab is open; the problem appears when I refresh the tab or open a new one: push seems to disconnect and my grid is never updated again. The strange thing is that after I added @PreserveOnRefresh, push keeps working in the first tab that accessed the application even after a refresh, which is very odd.
This executor instance checks for changes in my DB:
private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
and I update the grid with:
grid.getDataProvider().refreshAll()
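For context, the two pieces are wired together roughly like this (a simplified sketch: the actual MongoDB change detection is omitted, and the ui/grid references come from the enclosing view):

// Simplified sketch: poll for DB changes and push the refresh to the client.
// The real change-detection logic is omitted.
scheduler.scheduleAtFixedRate(() -> {
    UI ui = grid.getUI();
    if (ui != null) {
        // UI changes from a background thread must go through access()
        ui.access(() -> grid.getDataProvider().refreshAll());
    }
}, 0, 30, TimeUnit.SECONDS);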
I even tried broadcasting to all the tabs using the pattern described in the Book of Vaadin, section 12.16.4 "Broadcasting to Other Users", because the notifications do keep arriving in all the tabs, but the grid only updates in the original tab.
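For reference, the broadcaster from that section boils down to something like this (condensed from the book; the single-threaded executor decouples the broadcast from the caller's thread):

public class Broadcaster {

    private static final ExecutorService executor = Executors.newSingleThreadExecutor();
    private static final List<BroadcastListener> listeners = new LinkedList<>();

    public interface BroadcastListener {
        void receiveBroadcast();
    }

    public static synchronized void register(BroadcastListener listener) {
        listeners.add(listener);
    }

    public static synchronized void unregister(BroadcastListener listener) {
        listeners.remove(listener);
    }

    public static synchronized void broadcast() {
        // Each listener is a UI that must wrap its own changes in access()
        for (BroadcastListener listener : listeners) {
            executor.execute(listener::receiveBroadcast);
        }
    }
}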
Update:
In this example application the problem is actually the login: when I remove it, everything works perfectly with push, as it should. But only when I remove the login.
@Override
public void receiveBroadcast() {
    access(() -> {
        //Notification.show(message);
        //grid.getDataProvider().refreshAll();
        getNavigator().removeView(MonitorCrudView.VIEW_NAME);
        getNavigator().addView(MonitorCrudView.VIEW_NAME, new MonitorCrudView(this));
        getNavigator().navigateTo(MonitorCrudView.VIEW_NAME);
        Notification.show("Grid updated", Notification.Type.TRAY_NOTIFICATION);
    });
}
The detail is that when I have AccessControl enabled and I am logged in as admin, executing the above method throws an exception of type "No request bound to current thread", coming from the CurrentUser class:
https://github.com/vaadin/archetype-application-example/blob/master/mockapp-ui/src/main/java/org/vaadin/mockapp/samples/authentication/CurrentUser.java
but it changes a little here in Vaadin 8.0.4:
public final class CurrentUser {

    /**
     * The attribute key used to store the username in the session.
     */
    public static final String CURRENT_USER_SESSION_ATTRIBUTE_KEY = CurrentUser.class
            .getCanonicalName();

    private CurrentUser() {
    }

    /**
     * Returns the name of the current user stored in the current session, or an
     * empty string if no user name is stored.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static String get() {
        String currentUser = (String) getCurrentRequest().getWrappedSession()
                .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        if (currentUser == null) {
            return "";
        } else {
            return currentUser;
        }
    }

    /**
     * Sets the name of the current user and stores it in the current session.
     * Using a {@code null} username will remove the username from the session.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static void set(String currentUser) {
        if (currentUser == null) {
            getCurrentRequest().getWrappedSession().removeAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        } else {
            getCurrentRequest().getWrappedSession().setAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
        }
    }

    private static VaadinRequest getCurrentRequest() {
        VaadinRequest request = VaadinService.getCurrentRequest();
        if (request == null) {
            throw new IllegalStateException(
                    "No request bound to current thread");
        }
        return request;
    }
}
UPDATE:
https://github.com/rucko24/testView/blob/master/MyApp-ui/src/main/java/example/samples/crud/SampleCrudView.java
In this class I added the button that broadcasts to all UIs.
Log in as admin.
Click on "Update grid".
It should throw an exception of this type:
java.util.concurrent.ExecutionException:
java.lang.IllegalStateException: No request bound to current thread
It does not keep throwing the exception after refreshing the UI, but when the app is opened in an incognito tab it always throws the exception once before updating.
With the base project plus MongoDB and the
private static ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(5);
I always get the same exception as above, and the view is never updated via push.
Disclaimer: This is better suited as a comment but it does not fit the allocated space.
The VaadinService.getCurrentRequest() API doc states that:
The current response can not be used in e.g. background threads because of the way server implementations reuse response instances.
At the same time, the UI.access() javadoc is somewhat ambiguous, stating that:
Please note that the runnable might be invoked on a different thread or later on the current thread, which means that custom thread locals might not have the expected values when the command is executed
The above statements kind of explain why VaadinService.getCurrentRequest() is null in your getCurrentRequest() method.
Nonetheless, it seems that UI.getCurrent() does return an instance when running in that background thread, as also suggested by this Vaadin forum post and the Vaadin book:
Your code is not thread safe as it does not lock the VaadinSession before accessing the UI. The preferred pattern is using the UI.access and VaadinSession.access methods as described in Book of Vaadin section 11.16.3. Inside an access block Vaadin automatically sets the relevant threadlocals in addition to properly handling session locking.
In conclusion, I'd suggest replacing all the calls to getCurrentRequest().getWrappedSession() with UI.getCurrent().getSession(), e.g.:
UI.getCurrent().getSession().getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
or
UI.getCurrent().getSession().setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
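Putting it together, the adjusted CurrentUser accessors would look roughly like this (a sketch of the suggestion, not the archetype's official code; as far as I know, VaadinSession.setAttribute with a null value removes the attribute, which preserves the original set(null) behavior):

public static String get() {
    // UI.getCurrent() is populated inside access() blocks, unlike
    // VaadinService.getCurrentRequest()
    String currentUser = (String) UI.getCurrent().getSession()
            .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
    return currentUser == null ? "" : currentUser;
}

public static void set(String currentUser) {
    UI.getCurrent().getSession()
            .setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
}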
I tested this with your sample and it worked fine.
We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another saga (TaskSaga, out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands on each replay.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store runs on Axon Server.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the same DB as the projection tables on each application node?
I thought that the event store could be replayed into a new application node's DB at any time without changing the event history, so I could run as many nodes as I wish; or that I could remove the projection DB at any time and run the application, causing the events to be projected into the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid the duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder()
                .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                .build();
    }
}
CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets the properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
OrderStateChangedEvent is handled by a saga that schedules a couple of tasks for the order in that particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
This happens because the TrackingToken used by the TrackingEventProcessor that supplies the events to your saga instances is initialized at the beginning of the event stream. Because of this, the TrackingEventProcessor starts from the beginning of time, thus dispatching all your commands a second time.
There are a couple of things you could do to resolve this.
You could, instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
You could configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but it requires some discipline from the operations perspective. Option 2 leaves it in the hands of a developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;
TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your saga, calling the andInitialTrackingToken() function to ensure a head token is created when no token is present.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only for sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip executions on replay), there is a way. First you have to find out how your tracking event processor is named; I found it in the Axon Dashboard (port 8024 on a running Axon Server). It is usually the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events in @EventHandler
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}
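As a side note (worth verifying against your Axon version, so treat it as an assumption): Axon also provides an @DisallowReplay annotation, which lets an individual handler opt out of replays without touching the token configuration:

// Hypothetical handler: @DisallowReplay skips it while the processor replays.
@DisallowReplay
@EventHandler
public void on(OrderStateChangedEvent event) {
    // side effects that must not run twice go here
}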
I am using a singleton instance of the FirebaseRemoteConfig class, which is generated using the following provider method:
@Provides
@Singleton
FirebaseRemoteConfig provideFirebaseRemoteConfig() {
    final FirebaseRemoteConfig mFirebaseRemoteConfig = FirebaseRemoteConfig.getInstance();
    FirebaseRemoteConfigSettings configSettings = new FirebaseRemoteConfigSettings.Builder()
            .setDeveloperModeEnabled(BuildConfig.DEBUG)
            .build();
    mFirebaseRemoteConfig.setConfigSettings(configSettings);
    mFirebaseRemoteConfig.setDefaults(R.xml.remote_config_defaults);
    long cacheExpiration = 3600 * 3; // 3 hours in seconds.
    if (mFirebaseRemoteConfig.getInfo().getConfigSettings().isDeveloperModeEnabled()) {
        cacheExpiration = 0;
    }
    mFirebaseRemoteConfig.fetch(cacheExpiration)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(@NonNull Task<Void> task) {
                    if (task.isSuccessful()) {
                        // Once the config is successfully fetched it must be activated
                        // before newly fetched values are returned.
                        mFirebaseRemoteConfig.activateFetched();
                    } else {
                        FirebaseCrash.log("RemoteConfig fetch failed at " + System.currentTimeMillis());
                    }
                }
            });
    return mFirebaseRemoteConfig;
}
Now the issue: I call setDefaults every time the singleton instance is generated, and the last fetched config values have an expiration time. Doesn't that mean the config values will revert to the hardcoded defaults instead of keeping the last fetched config, in the case where the app cannot fetch from the server after the last fetched values expire?
I tried looking at the docs, but there was no specific detail on how the caching works beyond a simple overview. People with RemoteConfig experience can probably answer this easily, but I am using it for the first time, so any help is appreciated.
Nope. setDefaults does not overwrite any previously fetched values you might have received from Remote Config.
From Remote Config's perspective, the "expiration time" doesn't mean that the previously fetched values are considered invalid. It just means that it's time to go out onto the network and see if any new values have appeared. If they haven't (or if the network can't be reached), Remote Config keeps whatever values it last downloaded.
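In other words, the lookup order for a key is roughly: the last activated fetched value first, then the setDefaults value as the fallback. A tiny illustration (the parameter name is made up):

// "welcome_message" is a hypothetical key. If a fetched value was ever
// activated for it, that value is returned even after the cache expires;
// the setDefaults value is used only when nothing has ever been fetched.
String welcome = mFirebaseRemoteConfig.getString("welcome_message");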
I often work on Java EE applications. Today I'm facing an issue: sharing collections through the servlet context. My app contains a servlet context listener and many servlets.
The context listener loads a ConcurrentHashMap containing several lists of products at initialization, plus a task scheduler to refresh those lists.
The servlets are supposed to access the right list based on user-provided parameters.
Here is the code of my contextInitialized listener:
public void contextInitialized(ServletContextEvent event) {
    app = event.getServletContext();
    myMap = new ConcurrentHashMap<String, Catalog>();
    myMap.put("FR", new Catalog());
    myMap.put("UK", new Catalog());
    app.setAttribute("catalogue", myMap);
    scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(new AutomateRefresh(), 0, 60, TimeUnit.MINUTES);
}
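One thing worth noting in passing (not part of the original problem): the executor started here should also be stopped when the webapp is undeployed, otherwise its thread outlives the context. A minimal sketch:

@Override
public void contextDestroyed(ServletContextEvent event) {
    // Stop the refresh task when the webapp shuts down
    if (scheduler != null) {
        scheduler.shutdownNow();
    }
}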
To demonstrate my problem, I created a servlet that displays everything in the context that is a Boolean or a ConcurrentHashMap.
I'm not surprised to find this kind of result:
javax.servlet.context.tempdir is equal to...
Working is equal to... true
org.apache.catalina.resources is equal to...
org.apache.tomcat.InstanceManager is equal to...
org.apache.catalina.jsp_classpath is equal to...
javax.websocket.server.ServerContainer is equal to...
org.apache.jasper.compiler.TldCache is equal to...
catalogue is equal to...
org.apache.tomcat.JarScanner is equal to...
As you can see, my two custom keys (the Boolean Working and the ConcurrentHashMap catalogue) exist, but catalogue is empty when it is accessed from anywhere other than the listener.
I found that:
The serialization form of java.util.HashMap doesn't serialize the buckets themselves, and the hash code is not part of the persisted state.
Source: Serializing and deserializing a map with key as string
A serializable, thread-safe collection is useful for many projects, and I am probably not the only one looking for one (see the number of topics about the servlet context).
ConcurrentHashMap is thread-safe, but I am unable to retrieve my data in other servlets (in the same app). Is there a Collection implementation that is both thread-safe and serializable (required by WebLogic server policy)? Or am I using it the wrong way?
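As a side note on the serialization angle (easy to verify locally): ConcurrentHashMap does implement java.io.Serializable, and within a single webapp the attributes read from the ServletContext are the same object reference, not a deserialized copy, so serialization is unlikely to be the culprit here.

// Quick check: ConcurrentHashMap is itself serializable.
ConcurrentHashMap<String, String> probe = new ConcurrentHashMap<>();
System.out.println(probe instanceof java.io.Serializable); // prints true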
EDIT: code of the "display context" servlet:
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    System.out.println("List of all values in the context:");
    Enumeration<?> e = getServletContext().getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        System.out.print("\n" + name + " is equal to... ");
        // Get the value of the attribute
        Object value = this.getServletContext().getAttribute(name);
        if (value instanceof ConcurrentHashMap) {
            ConcurrentHashMap<String, Catalog> map = (ConcurrentHashMap<String, Catalog>) value;
            Iterator<Entry<String, Catalog>> it = map.entrySet().iterator();
            while (it.hasNext()) {
                ConcurrentHashMap.Entry<String, Catalog> entry = (ConcurrentHashMap.Entry<String, Catalog>) it.next();
                System.out.print("\t" + entry.getKey() + "=" + entry.getValue());
            }
        } else if (value instanceof Boolean) {
            System.out.print((Boolean) value);
        }
    }
}
EDIT 2: As BalusC suggested, the HashMap may be null (a rookie mistake?).
Here is the task code. The task lives in the listener: the listener initializes the HashMap with new empty objects, and the task refreshes those objects when the webapp starts and then every hour.
public class AutomateRefresh implements Runnable {
    public void run() {
        System.out.println("Scheduler trigger");
        if (app.getAttribute("catalogue") instanceof ConcurrentHashMap) {
            myMap = (ConcurrentHashMap<String, Catalog>) app.getAttribute("catalogue");
            // Autorefresh
            Iterator<Entry<String, Catalog>> it = myMap.entrySet().iterator();
            while (it.hasNext()) {
                ConcurrentHashMap.Entry<String, Catalog> entry = (ConcurrentHashMap.Entry<String, Catalog>) it.next();
                ((Catalog) entry.getValue()).setValid(false); // Not valid anymore for further requests
                try {
                    ((Catalog) entry.getValue()).refreshdb((String) entry.getKey()); // TODO rework to use REST API
                } catch (SQLException e) {
                    e.printStackTrace();
                }
                it.remove(); // avoids a ConcurrentModificationException
                app.setAttribute("catalogue", myMap);
                app.setAttribute("Working", true);
                System.out.println((String) entry.getKey() + " = " + (Catalog) entry.getValue());
            }
        } else {
            System.out.println("Catalogue is not an instance of ConcurrentHashMap as expected.");
            app.setAttribute("Working", false);
        }
    }
}
When the task is triggered, it updates the data stored by each Catalog in the context and prints the data to the console.
Results:
Refresh Catalog for UK with DB
UK = Catalog [list size is : 0 valid=true, lastToken=notoken]
Refresh Catalog for FR with DB
FR = Catalog [list size is : 30 valid=true, lastToken=notoken]
Catalog is a class with an ArrayList, a boolean and a String. Everything seems correct: UK is supposed to be empty (but not null) and FR is supposed to contain 30 products.
I still cannot access this data in other servlets.
I found the origin of the problem, a rookie mistake as expected.
I had tried to update the data this way, assuming it would update the object directly in the ConcurrentHashMap:
((Catalog) entry.getValue()).refreshdb((String) entry.getKey());
I replaced it with:
Catalog myCatalog = (Catalog) entry.getValue();
myCatalog.refreshdb((String) entry.getKey());
myMap.put((String) entry.getKey(), myCatalog);
And it works now.
I still don't know why my objects stayed accessible from the listener; they are not supposed to work that way. Maybe it's strange behavior from my server? Anyway, this issue is fixed.
Thanks to BalusC for his help.
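A closing note on why the put is what fixed it (my reading of the task code above, so treat it as a hypothesis): the it.remove() call in the refresh loop removes each entry from the shared map after refreshing it, so without the put the catalogue ended up empty for every other servlet; re-putting the entry restores it. A refresh loop that neither removes nor re-puts should work just as well, assuming refreshdb mutates the Catalog in place:

// Sketch: refresh the shared catalogues without removing entries.
// The servlets and the task share the same ConcurrentHashMap reference,
// so mutating the Catalog values in place is enough.
for (Map.Entry<String, Catalog> entry : myMap.entrySet()) {
    Catalog catalog = entry.getValue();
    catalog.setValid(false);
    try {
        catalog.refreshdb(entry.getKey());
    } catch (SQLException e) {
        e.printStackTrace();
    }
}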
Help me solve the following problem.
I have an ASP.NET MVC 2 application running on IIS 7.5. On one page, the user clicks a button whose handler sends a request to the server (jquery.ajax). On the server, the controller action starts a new thread (which performs a long-running import):
var thread = new Thread(RefreshCitiesInDatabase);
thread.Start();
The state of the import is available in a static variable; the new thread changes the variable's value at the beginning of its work.
The user can also check the state of the import through this variable, which is used in the view, so the user sees the import's state.
For the first few minutes after I start this function, everything is okay: on the page I see the right state of the import, the count of imported records changes, and I see changes in the logs. But after a few minutes the trouble begins.
When I refresh the page with the import state, sometimes the import looks okay, but sometimes I see a page with default values (as if the application had just started), and after that I can again see the page with the normal import state.
I tried attaching Visual Studio to the IIS process and debugging the application: when a request reaches the controller, sometimes the static variables have the right values and sometimes they have default values (a static int is 0, a static string is "", etc.).
Tell me what I am doing wrong. Maybe I should start the additional thread in another way?
Thanks in advance,
Dmitry
Here are parts of the code:
Controller:
public class ImportCitiesController : Controller
{
    [Dependency]
    public SaveCities SaveCities { get; set; }

    // Start import
    public JsonResult StartCitiesImport()
    {
        // Method in the core DLL which performs the import
        SaveCities.StartCitiesSaving();
        return Json("ok");
    }

    // Get information about the import
    public ActionResult GetImportState()
    {
        var model = new ImportCityStatusModel
            { NowImportProcessing = SaveCities.CitiesSaving };
        return View(model);
    }
}
Class in Core:
public class SaveCities
{
// Property equals true, when program are saving to database
public static bool CitiesSaving = false;
public void StartCitiesSaving()
{
var thread = new Thread(RefreshCitiesInDatabase);
thread.Start();
}
private static void RefreshCitiesInDatabase()
{
CitiesSaving = true;
//Processing......
CitiesSaving = false;
}
}
UPDATE
I think I found the problem, but I still don't know how to solve it. My IIS application pool has "Maximum Worker Processes" = 10, so all requests in the application are handled by several worker processes. My requests for the import state are handled by different processes each time, and each process has its own static variables. I guess this is the right track for a solution,
but I don't know how to merge all the static values in one place.
Without looking at the code, here is the obvious question: are you sure your access is thread-safe (that is, do you properly use lock to update or even access the value => C# thread safety with get/set)?
A code sample would be nice.
Thanks for the code. It seems that CitiesSaving is not locked properly before reads and writes; you should hide the static field behind a property that handles all the locking. Marking the field as volatile could also help (see http://msdn.microsoft.com/en-us/library/aa645755(v=vs.71).aspx ). Note that locking only helps within a single worker process: with "Maximum Worker Processes" = 10 each process has its own copy of the statics, so state shared across requests has to live outside the processes (for example in a database).
I'm trying to follow the examples provided in this post to create a dynamic list constraint in Alfresco 3.3.
So, I've created my own class extending ListOfValuesConstraint:
public class MyConstraint extends ListOfValuesConstraint {

    private static ServiceRegistry registry;

    @Override
    public void initialize() {
        loadData();
    }

    @Override
    public List getAllowedValues() {
        //loadData();
        return super.getAllowedValues();
    }

    @Override
    public void setAllowedValues(List allowedValues) {
    }

    protected void loadData() {
        List<String> values = new LinkedList<String>();
        String query = "+TYPE:\"cm:category\" +@cm\\:description:\"" + tipo + "\"";
        StoreRef storeRef = new StoreRef("workspace://SpacesStore");
        ResultSet resultSet = registry.getSearchService().query(storeRef, SearchService.LANGUAGE_LUCENE, query);
        // ... values.add(data obtained using searchService and nodeService) ...
        if (values.isEmpty()) {
            values.add("-");
        }
        super.setAllowedValues(values);
    }
}
The ServiceRegistry reference is injected by Spring, and it's working fine. If I only call loadData() from initialize(), it executes the Lucene query, gets the data, and the dropdown displays it correctly. Only it's not dynamic: the data doesn't get refreshed unless I restart the Alfresco server.
getAllowedValues() is called each time the UI has to display a property with this constraint. The idea in the referenced post is to call loadData() from getAllowedValues() too, so the values become truly dynamic. But when I do this, I don't get any data: the Lucene query is the same but returns 0 results, so my dropdown only displays "-".
BTW, the query I'm running is +TYPE:"cm:category" +@cm\:description:"something here", and it's the same in both cases. It works from initialize but not from getAllowedValues.
Any ideas on why this is happening, or how I can solve it?
Thanks
Edit: we upgraded to Alfresco 3.3.0g Community yesterday, but we're still having the same issues.
This dynamic-list-of-values constraint is a bad idea, and I'll tell you why:
The Alfresco repository should be in a valid state at all times. Your (dynamic) list of constraint values will change (that's why you want it to be dynamic). Adding items would not be a problem, but editing and removing items is: if you remove an item from your option list, the nodes in the repository that carry this property value become invalid.
You will not be able to fix this easily. The standard UI will fail on nodes in an invalid state; simply editing the value and setting it to something valid will not work. You have been warned.
Although the default UI widget for a ListConstraint is a dropdown, not every dropdown should be a ListConstraint. ListConstraints are designed for something like a Status property: { Draft, Waiting Approval, Approved }, not for a list of customer names.
I have seen this same topic come up again and again over the last few years. What you actually want is to let the user choose a value from a dynamic list of options (a combo box). This is a UI problem, not a dictionary-model issue. You should set this up with web-config-context.xml (Alfresco web UI) or in Alfresco Share. The latter is more flexible, and I would recommend taking that path.