Reason to integrate Spring Web Flow with Spring MVC

Why (in what scenarios) do we need to integrate Spring Web Flow with Spring MVC? Both frameworks are used to create web applications, and I do not see why we would integrate them. I would appreciate it if someone could clarify this for me.

This is really late, but I don't see a satisfactory answer to this question and would like to share an approach I tried in a recent project, which I feel is better than the Spring Web Flow approach: Web Flow is strictly tied to Spring views and unnecessarily adds to the already existing XML load. I created a SPA (Single Page Application) using AngularJS with Spring MVC. In AngularJS I did not use routers or state; instead I created a div inside the controller whose contents were driven by a page variable in the controller's scope.
On the server side, to capture all possible transitions from one frame (a particular screen in the SPA) to another, I created a tree of rules using MVEL. In the database I had a structure that stored a tree of rules for every frame. The data used in the MVEL expressions was set by the various services each action invoked. On any action, the following steps were followed:
1) Validate the action.
2) Invoke various services.
3) Capture the data from these services and merge it with the existing data of the user.
4) Feed the captured data into the frame's collection of rules, along with the details of the current frame.
5) Run the rule tree for the current frame and fetch its output.
6) If there is only one transition, that is the final transition. If there are two transitions and one is the default, ignore the default and use the other transition.
7) Return the transition's template name to the AngularJS controller and set the value of the page variable in the controller's scope.
Using this approach, all my services had to do was store data in different fields for a particular action. None of the complex if-else conditions or process definitions (like the ones defined in Spring Web Flow) were required. The MVEL rule engine managed all of that, and since the rules lived in the database they could be changed without a server restart.
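To make the transition selection in steps 5 and 6 concrete, here is a minimal sketch of the control flow; the original evaluated MVEL expressions on the Java side, so this TypeScript version with a pluggable evaluate function is only an illustration, and every name in it is hypothetical:

// One rule per possible transition out of a frame
interface TransitionRule {
  expression: string;  // condition over the merged user/service data (an MVEL expression in the original)
  template: string;    // template name returned to the AngularJS controller when the rule matches
  isDefault: boolean;  // marks the frame's fallback transition
}

// The evaluator is passed in; in the original this role was played by the MVEL engine.
type Evaluator = (expression: string, data: Record<string, unknown>) => boolean;

// Steps 5-6: run the frame's rules against the merged data and pick a transition.
function pickTransition(
  rules: TransitionRule[],
  data: Record<string, unknown>,
  evaluate: Evaluator
): string | undefined {
  const matched = rules.filter(rule => rule.isDefault || evaluate(rule.expression, data));
  // A single match is the final transition; with two matches, ignore the default and use the other one.
  if (matched.length === 1) return matched[0].template;
  const nonDefault = matched.filter(rule => !rule.isDefault);
  return (nonDefault[0] ?? matched.find(rule => rule.isDefault))?.template;
}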
I believe this generic MVEL-based approach is flexible and comprehensively handles the problem of a convoluted flow without making the application code a mess or adding unnecessary XML files.

We combine both: Web Flow for the multi-step activities, where it doesn't make sense to jump into the middle of a process, and plain MVC controllers for the single-step activities, i.e. things you might bookmark individually.
For example, in an appointment-scheduling application, "find my appointment" might be a single controller that accepts identifying information, while "make a new appointment" is a flow with multiple steps: selecting a location, date, and time, confirming the appointment, and so on.

If your application has complex flow pages and events that need to be defined as a finite state machine, then use Web Flow. It is justified for sites where you buy insurance or flight tickets, for example. Web Flow fits when conditions like these hold:
There is a clear start and an end point.
The user must go through a set of screens in a specific order.
The changes are not finalized until the last step.
Once complete, it shouldn't be possible to repeat a transaction accidentally.

Related

How to handle a complex object in ngrx?

Hello,
I'm working on a project in which the main part of the data has a complex, nested structure: projects contain technologies (such as technologyAPIs), and each technology contains its own list of data objects.
The real object is much more complex than that, but this serves the purpose.
In the database they are linked together through table relationships, so the first time the website is launched, after login, a list of projects comes back together with some small details of the technologies and dataObjects.
I created separate action and effect files, but everything is handled by a single reducer. What I mean is that at the start the list of projects is saved in one state, and then every other action (create a project, technology, or data object; edit; delete) has to operate on that same "projects-state".
For example, besides technologyAPIs there will be another 3-4 technologies, and inside each technology object there will be another list of objects.
The issue is that the reducer file keeps getting bigger as it handles all the kinds of actions that operate on specific data in the state. It is important that the chain of objects stays together.
My question is: is this a bad approach? Can it be handled in a different way? I know I can create a reducer for each entity (project, technology, data object), but won't I lose the relationship between them, where one belongs to the other?
Thank you so much for your future response.
I've only been doing reactive/NgRx for a few months now, but from my understanding, that's definitely a bad approach. It should still work, but it may be a hassle to debug and maintain.
NgRx seems to promote 'normalising' data, much like the usual relational database concept.
You could break down your project-state into smaller states, with relational keys in them.
Example:
Projects state (not to be confused with project-state):
  id: number
  projectDescription: string
  createdBy: string
  techAPIs: number[]  // where the content here is the id of the TechApi
TechApi state:
  id: number
  otherInfo: any
And then when you need to access the TechApi state for a project, you retrieve it and filter/map it in a selector.
This is a somewhat general example, in case my explanation is not clear (a selector sketch follows below).
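For what it's worth, a minimal sketch of such a selector, assuming the normalized shapes above; the state slice and selector names (AppState, selectTechApisForProject, etc.) are hypothetical:

import { createSelector } from '@ngrx/store';

// Hypothetical normalized shapes, matching the example above
interface Project { id: number; projectDescription: string; createdBy: string; techAPIs: number[]; }
interface TechApi { id: number; otherInfo: any; }
interface AppState { projects: Project[]; techApis: TechApi[]; }

const selectProjects = (state: AppState) => state.projects;
const selectTechApis = (state: AppState) => state.techApis;

// Joins one project with its TechApi entries by id, keeping the relationship
// without nesting the TechApi objects inside the projects state.
const selectTechApisForProject = (projectId: number) =>
  createSelector(selectProjects, selectTechApis, (projects, techApis) => {
    const project = projects.find(p => p.id === projectId);
    return project ? techApis.filter(t => project.techAPIs.includes(t.id)) : [];
  });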

Ngrx complex state reducer

I struggle to find the right way to update my state in an NgRx application, as the state is rather complex and depends on many factors. This question is not about getting one piece of code correct, but about how to design such software in general: what are the dos and don'ts when you find yourself reaching for hacky solutions and workarounds?
The app 'evolved' over time, and I want to share this process in an abstracted way to make my point clear:
Stage 1
The state contains entities. They represent nodes in a tree and are linked by ids. Modifying or adding an entity requires a check on the type of nodes the new/modified ones should be connected to. It might also be that, upon modifying a node, other nodes have to be updated.
The solution was to create functions that do the job and call them right in the reducer, so everything is always up to date and synchronous when used (there are services that might modify state).
Stage 2
A configuration is added to the state that affects the way the automatically modified nodes are modified/created. This configuration is saved in its own state slice right under the root state.
The solution:
1) Modify the actions to also take the required data from the configuration.
2) Modify the places where the actions are created/dispatched (add some ugly
this.state.select(fromRoot.getX)
  .first()
  .subscribe(element => this.state.dispatch(new Action({ ...old_payload, newPayload: element })));
wrapper around the dispatch-calls)
3) Modify the functions doing the node modification, and
4) add the argument passing to the function calls inside the reducer.
Stage 3
Now I am asked to add yet another configuration to the process, also retrieved from the backend and also saved in another state slice right under the root state.
State now looks like:
root
|__nodes
|__config_1
|__config_2
I was just about to repeat the steps from stage 2, but the actions get really big with all the data passed in, and the functions have to carry around a lot of data. This seems wrong, since I am dispatching the action on the store that already contains all the needed info.
How can I handle this correctly?
Some ideas I already had:
Use effects: they can get everything they need from the state and can create everything, so I only need to dispatch an action with the action's own payload; the effect can then grab everything else from the state (a sketch of this option follows below). I don't like this idea because it triggers asynchronous tasks to modify the state and adds actions that don't change state themselves.
Use a service: with a service holding state it would be much like effects, but without using actions just to trigger asynchronous calls that then dispatch the actions that really change state.
Do all the work in the component: at the moment the components are kept pretty simple when it comes to changing state, as I prefer the idea that actions carry as little data as possible, since reducers can access the state to get their data - but this is where the problem occurs: this time I can't get my hands on the data I need.
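For the first idea, here is a minimal sketch of how an effect could enrich a lightweight action with the two config slices, written against current NgRx APIs; the action creators and selectors (updateNodeRequested, updateNode, selectConfig1, selectConfig2) are hypothetical stand-ins:

import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { Store, createAction, props } from '@ngrx/store';
import { map, withLatestFrom } from 'rxjs/operators';

// Hypothetical actions: the component dispatches the lightweight one,
// the reducer only listens to the fully populated one.
const updateNodeRequested = createAction('[Nodes] Update Requested', props<{ node: any }>());
const updateNode = createAction('[Nodes] Update', props<{ node: any; config1: any; config2: any }>());

// Hypothetical selectors for the two config slices under the root state
const selectConfig1 = (state: any) => state.config_1;
const selectConfig2 = (state: any) => state.config_2;

@Injectable()
export class NodeEffects {
  // Enriches the lightweight action with both config slices and re-dispatches it,
  // so neither the component nor the action payload has to carry the configs around.
  enrichUpdate$ = createEffect(() =>
    this.actions$.pipe(
      ofType(updateNodeRequested),
      withLatestFrom(this.store.select(selectConfig1), this.store.select(selectConfig2)),
      map(([action, config1, config2]) => updateNode({ node: action.node, config1, config2 }))
    )
  );

  constructor(private actions$: Actions, private store: Store) {}
}

This keeps the dispatched action small, at the cost of exactly the indirection the first bullet point complains about.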

Is the Factory pattern the best method to access different databases based on the type of data?

I am working on a .NET application that needs to present data from 3 different platforms to the user, and any actions taken are also saved to the respective databases. What is the best design pattern for data access? I am thinking Factory would be best, but I need some advice as I am kind of new to this approach.
Let's say we have 5 different websites that are completely independent of each other, but similar. Products added to these different sites need to be reviewed in a single application, and each product is either approved or rejected by the user. We don't need to combine the data, and the UI is the same; based on what data the user is looking at, we just need to save the actions to that particular database.
Yes, and it is the most common one to use.
Look at this site: http://www.primaryobjects.com/CMS/Article81.aspx
It contains a simple step-by-step guide to creating a database factory in C# for web pages.
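A rough sketch of the idea (shown in TypeScript for brevity rather than C#; all names such as ProductRepository and RepositoryFactory are hypothetical): each site's database sits behind a common interface, and the factory hands the review application the right implementation.

// Each site's database gets its own repository behind a common interface.
interface ProductRepository {
  approve(productId: string): Promise<void>;
  reject(productId: string): Promise<void>;
}

class SiteADatabase implements ProductRepository {
  async approve(productId: string) { /* write the approval to site A's database */ }
  async reject(productId: string) { /* write the rejection to site A's database */ }
}

class SiteBDatabase implements ProductRepository {
  async approve(productId: string) { /* write the approval to site B's database */ }
  async reject(productId: string) { /* write the rejection to site B's database */ }
}

// The factory picks the right repository based on which site the product came from,
// so the review UI never needs to know which database it is talking to.
class RepositoryFactory {
  static forSite(site: 'siteA' | 'siteB'): ProductRepository {
    switch (site) {
      case 'siteA': return new SiteADatabase();
      case 'siteB': return new SiteBDatabase();
    }
  }
}

// Usage: approve a product that came from site B
// await RepositoryFactory.forSite('siteB').approve('12345');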
I would say an Observer pattern:
(for receiving messages of new products if you are not polling manually)
coupled with a Command pattern:
(for agnostically rejecting/approving a product and sending it off to be processed by its DB)
You could then interface the brokering class with a Factory pattern. I'm just curious how you would get notified of a new product (polling, pushing, sockets, etc.), since that would open up more options for your solution.

How can I utilize multiple databases in an entity framework solution simultaneously?

I have two unrelated databases and I need to pass data back and forth between them. Right now I have created two separate entity models, one for each database, but this is causing issues in my code because I have to wrap each context in a Using nameofcontext / End Using block, and when I try to use some of the results from the first block inside a second Using nameofcontext / End Using block it doesn't like it, because I've closed the connection to the first database!
Since this is a website, you could create one instance of each context in Global.asax's BeginRequest event and dispose of that instance in EndRequest. Doing that means that during the rest of the request lifecycle you have contexts that remain open and can do what you need, but you still know they're being properly disposed.
That's how I've gotten around issues like this.
Note: don't store the context in a global shared variable, because that will share it between multiple requests and havoc will ensue. HttpContext.Current.Items lets you store something that is easy to retrieve in your code but is specific to the current request, so that's a safe place to store them.

How to store the model of an application?

I am working on a storyboarding application. It is a slide-based application in which authors can put several components, like images, sound, and captions, on each slide, and a collection of slides makes a storyboard. The application will be deployed on a web server (SharePoint + IIS, and PHP + Apache), and several users can collaborate with each other to author or review a storyboard. I also want to support auto-save, which will keep storing the state of the storyboard; the user can also save at any point by clicking the save button.
I am confused about how to store the state of the storyboard.
1) Presently I am doing this by passing all the storyboard data to a .NET web service, which then stores the images, captions, etc. in their respective tables in a database.
2) Another possible approach is to store the model of the application as a serialized object in the database, which would be more convenient since separating the components of the model (images, captions, etc.) would not be required, and restoring the state of the objects in the application would be easy.
I have two doubts about approach 2:
i) I want the saved storyboard to load quickly, so I would like to support partial loading: lighter objects like captions can be loaded quickly, while heavier objects like images and video can be loaded on demand. Using approach 2, do I have to send all the data in one go, or is there a way to support partial loading?
ii) How do I implement the auto-save feature when using approach 2? For every auto-save, do I have to send the whole serialized object back to the database, or is there a way to send only the changed part of the model?
Please suggest which approach would be better for this application, and also address the aforementioned doubts about approach 2.
If I were working on such an application, I would use a DIVIDE AND RULE approach: design separate logic to save and retrieve each component (images, sound, etc.), because that easily handles any modification and enhancement to the application.
Another thing is that it will take longer to save, update, and load data from the backend if you use approach 2.
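As a rough illustration of how the divided model supports the partial loading asked about in doubt (i), here is a hedged sketch of hypothetical data shapes (TypeScript used only as notation, not the application's actual schema): light fields like captions travel with the slide record, while heavy assets are stored separately and fetched by id on demand.

// Light data that loads with the storyboard immediately
interface SlideSummary {
  id: string;
  caption: string;
  // Heavy components are referenced by id, not embedded
  imageIds: string[];
  soundIds: string[];
}

interface Storyboard {
  id: string;
  title: string;
  slides: SlideSummary[];
}

// Heavy assets live in their own tables/endpoints and are fetched on demand
interface MediaAsset {
  id: string;
  mimeType: string;
  data: Blob;
}

// Hypothetical loader: the storyboard (captions included) arrives in one quick call,
// individual images/sounds are pulled only when a slide is actually displayed.
async function loadSlideMedia(slide: SlideSummary, fetchAsset: (id: string) => Promise<MediaAsset>) {
  return Promise.all([...slide.imageIds, ...slide.soundIds].map(fetchAsset));
}

In the same spirit, auto-save can write back only the component that changed rather than re-serializing the whole storyboard.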
I am with approach 1.
Hope that helps.
