How to define multiple flow executors for different flows and disable continuation snapshots for some flows? - spring-webflow

I'm working on a huge project and we would like to manage continuations differently for some flows.
We want to use continuation snapshots (the ones that make the back button work) for most of our flows, but we also want to be able to completely disable continuation snapshots for some flows that use huge amounts of memory and that we don't want to serialize.
Is this possible? And how?
Thank you very much.

Big caveat that I haven't tried to do any of this. But, here's a potential approach.
First of all, you need your own implementation of FlowExecutionSnapshotFactory. This will allow you to manage the creation and restoration of snapshots. You'll probably want to wrap SerializedFlowExecutionSnapshotFactory, but only allow the snapshot to be created in certain circumstances. Even better, you might want to allow the snapshot to be created, but to omit some of the data from it.
Now the problem is getting Webflow to use your new snapshot factory. The factory is created in FlowExecutorFactoryBean.createFlowExecutionSnapshotFactory(), so you need to get this created. You can specify your own FlowExecutorFactoryBean in your application-context.xml file. There are instructions on how to do that at http://forum.springsource.org/showthread.php?54714-SWF-2-0-Backtracking-and-exception-catching - scroll down to angrysoul's post at the bottom.
Now you just need to make sure you provide your own instance of FlowExecutorImpl that contains your own snapshot factory.
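To make that concrete, here's a rough, untested sketch of such a routing factory. The method signatures follow Webflow 2.x and may differ in your version; Webflow's SimpleFlowExecutionSnapshotFactory (which rebuilds executions rather than serializing them) could serve as the non-serializing delegate, but verify that against your version's API. The class name and flow-id set are hypothetical.

import java.util.Set;

import org.springframework.webflow.core.collection.MutableAttributeMap;
import org.springframework.webflow.execution.FlowExecution;
import org.springframework.webflow.execution.FlowExecutionKey;
import org.springframework.webflow.execution.FlowExecutionKeyFactory;
import org.springframework.webflow.execution.repository.snapshot.FlowExecutionSnapshot;
import org.springframework.webflow.execution.repository.snapshot.FlowExecutionSnapshotFactory;

// Routes snapshot creation per flow id: serializing snapshots for most flows,
// a non-serializing factory for the memory-heavy ones. Treat as a sketch.
public class SelectiveSnapshotFactory implements FlowExecutionSnapshotFactory {

    private final FlowExecutionSnapshotFactory serializingDelegate;    // e.g. SerializedFlowExecutionSnapshotFactory
    private final FlowExecutionSnapshotFactory nonSerializingDelegate; // e.g. SimpleFlowExecutionSnapshotFactory
    private final Set<String> heavyFlowIds; // flows whose state must not be serialized

    public SelectiveSnapshotFactory(FlowExecutionSnapshotFactory serializingDelegate,
            FlowExecutionSnapshotFactory nonSerializingDelegate, Set<String> heavyFlowIds) {
        this.serializingDelegate = serializingDelegate;
        this.nonSerializingDelegate = nonSerializingDelegate;
        this.heavyFlowIds = heavyFlowIds;
    }

    private FlowExecutionSnapshotFactory factoryFor(String flowId) {
        return heavyFlowIds.contains(flowId) ? nonSerializingDelegate : serializingDelegate;
    }

    public FlowExecutionSnapshot createSnapshot(FlowExecution flowExecution) {
        // Decide per flow definition whether this execution gets serialized.
        return factoryFor(flowExecution.getDefinition().getId()).createSnapshot(flowExecution);
    }

    public FlowExecution restoreExecution(FlowExecutionSnapshot snapshot, String flowId,
            FlowExecutionKey key, MutableAttributeMap conversationScope,
            FlowExecutionKeyFactory keyFactory) {
        // Restoration is routed the same way; heavy flows simply won't
        // support back-button restoration of serialized state.
        return factoryFor(flowId).restoreExecution(snapshot, flowId, key, conversationScope, keyFactory);
    }
}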

Related

Issue with objectify save

I have a use case wherein I save an entity for the first time, and a second after saving it I fetch it, update it a bit, and save it in a batch along with two other entities (of different 'kinds'). In a few cases (10 out of 50K), the update to the datastore is ignored.
I mean, the change is there in the Objectify cache, but it didn't happen in the datastore.
The way I can tell is that after the save I fetch the entity again a second later and I am able to see the change.
PS: I also use .now() while saving. This shouldn't happen when now() is used, right?
Sounds like you are seeing eventual consistency in the datastore. There is quite a bit of Google documentation available, but this looks to be the most comprehensive:
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/
There are a lot of ways to deal with eventual consistency, either by avoiding it (using get-by-key operations), changing the structure of your data (using @Parent relationships), or masking it with UI behavior (say, add the new thing to the list in UI code instead of just refreshing the whole list).
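As a hedged illustration of the first two options (the Thing entity and its fields are made up; Objectify 5-style API):

import static com.googlecode.objectify.ObjectifyService.ofy;

import java.util.List;

import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Index;

@Entity
class Thing {                 // hypothetical entity
    @Id Long id;
    @Index String owner;
}

public class ThingRepository {

    // Strongly consistent: a get-by-key always reflects the latest committed write.
    public Thing loadByKey(long id) {
        return ofy().load().key(Key.create(Thing.class, id)).now();
    }

    // Eventually consistent: a non-ancestor query may return stale results for a
    // short window after a save, which is the behavior described in the question.
    public List<Thing> queryByOwner(String owner) {
        return ofy().load().type(Thing.class).filter("owner", owner).list();
    }

    // Ancestor queries are strongly consistent, at the cost of structuring the
    // data with @Parent relationships (entity groups).
    public List<Thing> queryByGroup(Key<?> groupKey) {
        return ofy().load().type(Thing.class).ancestor(groupKey).list();
    }
}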

How to set up async processes with Vuex in any specific order

First let's understand the app:
The sample app mocks grabbing data from 2 sources: an array of available objects, and an array of objects being used.
The app also displays new objects (available ones not being used).
Finally, it also allows you to use (register) one of the new objects.
Requirements:
The app needs to display the list of New objects first
The app needs to keep the number of API calls to the bare minimum and only make calls when strictly necessary
Calls that produce changes in API data (register) should be reactive and display the changes in the UI immediately
The code I have implemented meets these 3 requirements. However, I'm really unhappy with this implementation, and I'm sure that's not the way the Vuex store is supposed to be used.
For starters, my implementation only works for the specific order in which the components are displayed on the screen:
<new name="New" :selected="true"></new>
<available name="Available"></available>
<using name="Using"></using>
If I, for example, want to move <available> to the last tab, the code will break.
This happens because I haven't been able to simply call dispatch('getNews') once and have everything else fall into place without duplicating one or two API calls, and thus failing the requirements...
I tried using dispatch('...').then().then() but I haven't been able to make it work and meet the requirements.
I would greatly appreciate anyone with experience in similar situations with Vuex telling me how they'd do this.
Bonus if you can do it without adding extra mutations.

Subscribe, publish dynamic collection made in run time

I am trying to make a chat application in MeteorJS, and I was thinking of creating a separate collection for each group dynamically when they initiate a chat. I want to publish and subscribe to that collection for transmitting chat messages, and all the group's users can subscribe to it, but I am not able to create a collection dynamically.
I tried making a function which gets called when the user subscribes to the collection.
this.createDb = (name) ->
  # the '#' that commented out this line is removed; keeping the reference
  # on `this` lets other code reach the collection instead of losing it
  @ChatDb = new Mongo.Collection(name)
  return true
Everything is fine, but when I subscribe to this collection from the client side, ChatDb is unknown. Can anyone help me with this? That would be great. :)
PS: I am writing code in the angular-meteor framework.
Andrew Mao's answer to a similar question:
In most instances, you probably don't want to create multiple collections, but instead use one collection and send views of it to clients depending on their subscription.
You may want to check out the https://github.com/mizzao/meteor-partitioner package I've built which is designed especially for this purpose, and includes an example for how to do this for multiple chat rooms. You can also see https://github.com/mizzao/CrowdMapper for an implemented example.
I haven't done Meteor for a while now, so I can't give you a solid answer. But I remember quite clearly that creating collections dynamically is not the recommended way to achieve what you want.

How to realize persistence of a complex graph with an Object Database?

I have several graphs. The breadth and depth of each graph can vary and will undergo changes and alterations during runtime. See the example graph.
There is a root node to get hold of the whole graph (i.e. tree). A node can have several children, and each child serves a special purpose. Furthermore, a node can access all of its direct children in order to retrieve certain information. On the other hand, a child node may not be aware of its own parent node, nor of its siblings. Nothing spectacular so far.
Storing each graph and updating it with an object database (in this case DB4O) looks pretty straightforward. I could have used a relational database to accomplish data persistence (including database triggers, etc.) but I wanted to realize it with an object database instead.
There is one peculiar thing with my graphs. See another example graph.
To properly perform calculations, some nodes require information from other nodes. These other nodes may be siblings, children/grandchildren, or related in some other way. In this case a specific node knows the other relevant nodes as well (and thus can get the required information directly from them). For the sake of simplicity the first image didn't show all potential connections.
If one node has a change of state (e.g. triggered by an internal timer or by some other node), it will inform other nodes (interested observers, see also the observer pattern) about the change. Each informed node will then take appropriate actions to update its own state (and in turn inform other observers as needed). A root node will not know about every change that occurs, since only the involved nodes will know that something has changed. If such a chain of events is triggered by the root node then, of course, it's not much of an issue.
The aim is to ensure data persistence with an object database. Data in memory should be in sync with data stored in the database. What adds to the complexity is the fact that the graphs don't consist of simple (and dumb) data nodes; lots of functionality is integrated into each node (i.e. events that trigger state changes throughout a graph).
I have several rough ideas on how to cope with the presented issue (e.g. (1) stronger separation of data and functionality, (2) stronger integration of the database, or (3) setting an arbitrary time interval to update data and accepting that data may be out of sync for a period of time). I'm looking for more input and options concerning this key issue (which will definitely leave significant footprints on a concrete implementation).
Edit: There is another aspect I forgot to mention. A graph should not reside in memory all the time. Graphs that are not needed should only be present in the database and thus put in a state of suspension. This is another issue which needs consideration: while a graph is in suspension, its update mechanisms will probably be put to sleep as well, and this is not intended.
In the case of db4o check out "transparent activation" to automatically load objects on demand as you traverse the graph (this way the graph doesn't have to be all in memory) and check out "transparent persistence" to allow each node to persist itself after a state change.
http://www.gamlor.info/wordpress/2009/12/db4o-transparent-persistence/
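For reference, enabling transparent persistence in db4o looks roughly like this (8.x-style API; the class and file name are illustrative):

import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.config.EmbeddedConfiguration;
import com.db4o.ta.TransparentPersistenceSupport;

public class GraphStore {
    public static ObjectContainer open(String fileName) {
        EmbeddedConfiguration config = Db4oEmbedded.newConfiguration();
        // Transparent persistence implies transparent activation: nodes are
        // activated (loaded) lazily as the graph is traversed, and modified
        // activatable objects are flushed automatically on commit.
        config.common().add(new TransparentPersistenceSupport());
        return Db4oEmbedded.openFile(config, fileName);
    }
}

Node classes would then implement com.db4o.ta.Activatable (or let db4o's bytecode instrumentation add it), so a suspended graph stays in the database and nodes are only pulled into memory when touched.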
Moreover you can use db4o "callbacks" to trigger custom behavior during db4o operations.
HTH
German
What's the exact question? Here are a few comments:
As @German already mentioned: for complex object graphs you probably want to use transparent persistence.
Also as @German mentioned: callbacks can help you do additional work when objects are read/written etc. on the database.
On the observer pattern: are you on .NET or Java? Usually you don't want to store the observers in the database, since the observers are usually part of your business logic, GUI, etc. On .NET, events are automatically not stored. On Java, make sure that you mark the field holding the observer references as transient.
In case you actually want to store observers, for example because they are just other elements in your object graph: on .NET, you cannot store delegates/closures, so you need to introduce an interface for calling the observer. On Java, anonymous inner classes are often used as listeners; while db4o can store those, I would NOT recommend it, because an anonymous inner class gets a generated name which can change, and then db4o will not find that class later if you've changed your code.
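A minimal Java sketch of the transient-observer approach (all names made up): db4o skips transient fields by default, so the listener list is simply rebuilt after loading.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class GraphNode {
    private String name;
    private double value;

    // Observers are business logic / GUI wiring, not data: 'transient'
    // tells db4o (and Java serialization) to skip this field.
    private transient List<NodeListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(NodeListener l) {
        if (listeners == null) {           // field is null after loading from db4o
            listeners = new CopyOnWriteArrayList<>();
        }
        listeners.add(l);
    }

    protected void fireChanged() {
        if (listeners != null) {
            for (NodeListener l : listeners) {
                l.nodeChanged(this);
            }
        }
    }
}

// A named interface instead of an anonymous inner class, so db4o never
// has to persist a compiler-generated class name.
interface NodeListener {
    void nodeChanged(GraphNode source);
}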
That's it. Ask more detailed questions if you want to know more.

Bulk Collection Manipulation through a REST (RESTful) API

I'd like some advice on designing a REST API which will allow clients to add/remove large numbers of objects to a collection efficiently.
Via the API, clients need to be able to add items to the collection and remove items from it, as well as manipulating existing items. In many cases the client will want to make bulk updates to the collection, e.g. adding 1000 items and deleting 500 different items. It feels like the client should be able to do this in a single transaction with the server, rather than requiring 1000 separate POST requests and 500 DELETEs.
Does anyone have any info on the best practices or conventions for achieving this?
My current thinking is that one should be able to PUT an object representing the change to the collection URI, but this seems at odds with the HTTP 1.1 RFC, which seems to suggest that the data sent in a PUT request should be interpreted independently of the data already present at the URI. This implies that the client would have to send a complete description of the new state of the collection in one go, which may well be very much larger than the change, or may even include more than the client knows when it makes the request.
Obviously, I'd be happy to deviate from the RFC if necessary but would prefer to do this in a conventional way if such a convention exists.
You might want to think of the change task as a resource in itself. So you're really PUT-ing a single object, which is a Bulk Data Update object. Maybe it's got a name, an owner, and a big blob of CSV, XML, etc. that needs to be parsed and executed. In the case of CSV you might want to also identify what type of objects are represented in the CSV data.
List jobs, add a job, view the status of a job, update a job (probably in order to start/stop it), delete a job (stopping it if it's running) etc. Those operations map easily onto a REST API design.
Once you have this in place, you can easily add different data types that your bulk data updater can handle, maybe even mixed together in the same task. There's no need to have this same API duplicated all over your app for each type of thing you want to import, in other words.
This also lends itself very easily to a background-task implementation. In that case you probably want to add fields to the individual task objects that allow the API client to specify how they want to be notified (a URL they want you to GET when it's done, or send them an e-mail, etc.).
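A rough sketch of what such a job resource could look like (JAX-RS; the paths, fields, and in-memory store are all illustrative, not a definitive design):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/import-jobs")
@Produces(MediaType.APPLICATION_JSON)
public class ImportJobResource {

    private static final Map<String, ImportJob> jobs = new ConcurrentHashMap<>();

    // POST a bulk-update job; the body carries the change set (CSV, XML, ...).
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(ImportJob job) {
        job.id = UUID.randomUUID().toString();
        job.status = "QUEUED";
        jobs.put(job.id, job);
        // A background worker would pick the job up and execute it here.
        return Response.status(Response.Status.ACCEPTED).entity(job).build();
    }

    // Poll the job status (or have the server call back / e-mail when done).
    @GET
    @Path("{id}")
    public ImportJob status(@PathParam("id") String id) {
        ImportJob job = jobs.get(id);
        if (job == null) {
            throw new NotFoundException();
        }
        return job;
    }

    // Cancel a job, stopping it if it is running.
    @DELETE
    @Path("{id}")
    public void cancel(@PathParam("id") String id) {
        jobs.remove(id);
    }

    public static class ImportJob {
        public String id;
        public String status;
        public String dataType;    // what the payload rows represent
        public String payload;     // e.g. a blob of CSV
        public String callbackUrl; // optional: GET this URL when finished
    }
}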
Yes, PUT creates/overwrites, but does not partially update.
If you need partial update semantics, use PATCH. See http://greenbytes.de/tech/webdav/draft-dusseault-http-patch-14.html.
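For illustration, a PATCH request carries a delta document describing the change rather than the whole collection. This sketch uses Java 11's built-in HttpClient with the JSON Patch media type as one possible patch format; the URL and paths are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PatchExample {
    public static void main(String[] args) throws Exception {
        // A small JSON Patch document: append one item, remove another.
        String patch =
              "[{\"op\":\"add\",\"path\":\"/items/-\",\"value\":{\"name\":\"new item\"}},"
            + "{\"op\":\"remove\",\"path\":\"/items/17\"}]";

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/collection"))
                .header("Content-Type", "application/json-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(patch))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}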
You should use AtomPub. It is specifically designed for managing collections via HTTP. There might even be an implementation for your language of choice.
For the POSTs, at least, it seems like you should be able to POST to a list URL and have the body of the request contain a list of new resources instead of a single new resource.
As far as I understand it, REST means REpresentational State Transfer, so you should transfer the state from client to server.
If that means too much data going back and forth, perhaps you need to change your representation. A collectionChange structure would work, with a series of deletions (by id) and additions (with embedded full XML representations), POSTed to a handling interface URL. The interface implementation can choose its own method for deletions and additions server-side.
The purest version would probably be to define the items by URL, and have the collection contain a series of URLs. The new collection can be PUT after changes by the client, followed by a series of PUTs of the items being added, and perhaps a series of deletions if you want to actually remove the items from the server rather than just remove them from that list.
You could introduce meta-representation of existing collection elements that don't need their entire state transfered, so in some abstract code your update could look like this:
{existing elements 1-100}
{new element foo with values "bar", "baz"}
{existing element 105}
{new element foobar with values "bar", "foo"}
{existing elements 110-200}
Adding (and modifying) elements is done by defining their values, deleting elements is done by not mentioning them in the new collection, and reordering elements is done by specifying the new order (if order is stored at all).
This way you can easily represent the entire new collection without having to re-transmit the entire content. Using an If-Unmodified-Since header makes sure that your idea of the content indeed matches the server's idea (so that you don't accidentally remove elements that you simply didn't know about when the request was submitted).
The best way is:
1. Pass only an array of the IDs of the deletable objects from the front-end application to the Web API.
2. Then you have two options:
2.1. Web API way: find all collections/entities using the ID array and delete them in the API, but you need to take care of dependent entities, like foreign-key related table data, too.
2.2. Database way: pass the IDs to the database side, find all records in the foreign-key tables and primary-key tables, and delete them in that order, i.e. F-key table records first, then P-key table records.
