something in ngrx (redux pattern) that I still don't get for large applications - redux

I've been building data-driven applications for about 18 years, and for the past two I've been successfully using Angular for my large forms/CRUD based apps. You know, the classic SQL Server DB with hundreds of tables and millions of records. So far, so good.
Now I'm porting/re-engineering a desktop app with about 50 forms, all complex, all fully functional, "smart". My approach for the last couple of years has been to simply work tightly with the backend REST API to retrieve, insert or update data as needed, and everything works fine.
Then I stumbled across ngrx and I understand exactly how it works, what it does and why it is good for a "reactive" app.
My problem is the following: in the usual lifecycle of the kind of systems I mentioned, you always have to deal with fresh data and always have to tell the server everything. Almost no data in such apps can be safely "stored" locally, since transactional systems rely on centralized data interactions. There's no such thing as "hey, let's keep this employee's sales here for later use".
So why would it be so important to manage a local 'store' when most of my data is volatile? I understand why it would be useful for global app data like the user profile or general UI-related state, but for the core data itself? I don't get it. You query for data, plug that data into the form, it gets processed by the user and sent back to the server. That data is no longer needed, and if you do need it, you ask for it again, as it could have changed its state since the last time you interacted with it.
I do not understand the great lengths I have to go to to maintain a local store, and all the boilerplate, if that state is so volatile.
They say change detection does not scale, but I've built some really large web apps with a simple "http service" pattern and it works just fine, because most of the component tree is destroyed anyway as you go somewhere else in the app, and any previous subscriptions become useless. Even with large, bulky, kinky forms, the inner workings of a form are never that big of a problem as to require external "aid" from a store. The way I see it, the "state" of a form is a concern of that form in that moment alone. Is it to keep the component tree in sync? I've never had problems with that before... even for complicated trees with lots of shared data, master-detail is kind of a flat pattern in the end if all the data is there.
For other components, such as grids, charts, reports, etc., the same thing applies. They get the data they need and then "poof", gone.
So now you see my mindset. I AM trying to change it to something better. What am I missing about the redux pattern?

I have a bit of experience here! It's all subjective, so what I've done may not suit you. My system is complex and sounds like it's on a similar scale to yours. I battled at first with the same issues of "why build complex logic on the front end and the back end" and "why bother keeping stuff in state".
A redux/NGRX approach works for me because there are multiple ways data can be changed - perhaps it's a single user using the front end, perhaps it's another user making a change and I want to respond to that change straight away to avoid concurrency issues down the track. Perhaps there are multiple parts within my front end that can manipulate the same data.
At the back end, I use a CQRS pattern instead of a traditional REST API. Typically, one might suggest re-implementing the commands/queries to "reduce" changes to the state; however, I opted for a different approach. I don't just want to send a big object graph back to the server and have it blindly insert it, and I don't want to re-implement logic on the client and server.
My basic "use case" life cycle looks a bit like:
Load a list of data (limited size, not all attributes).
User selects item from list
Client requests "full" object/view/dto from server
Client stores response in object entity state
User starts modifying data
These changes are stored as "in progress" changes in a different part of state. The system is now responding to the data in the "in progress" part
If another change comes in from the server, it doesn't overwrite the "in progress" data, but it does replace what is in the object entity state.
If required, UI shows that the underlying data has changed / is different to what user has entered / whatever.
User clicks on the "perform action" button, or otherwise triggers a command to be sent to server
Server performs the command. Any errors are returned, or success.
Server notifies the client that the change was successful; the client clears the "in progress" information.
Server notifies the client that Entity X has been updated; the client re-requests entity X and puts it into the object entity state. This notification is sent to all connected clients, so they can all behave appropriately.
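To make the split between the last known server data and the "in progress" edits concrete, here is a minimal NgRx-flavoured sketch of how that slice of state might be shaped and reduced. It is only an illustration: the entity and action names (EmployeeState, entityReceived, editFieldChanged, commandSucceeded) are invented for this example rather than taken from my actual system.
import { createAction, createReducer, on, props } from '@ngrx/store';
interface Employee { id: string; name: string; salary: number; }
interface EmployeeState {
  entity: Employee | null;        // last known server truth
  inProgress: Partial<Employee>;  // unsaved edits the UI is currently bound to
}
const initialState: EmployeeState = { entity: null, inProgress: {} };
// Hypothetical actions for this sketch.
export const entityReceived = createAction('[Employee] Received', props<{ employee: Employee }>());
export const editFieldChanged = createAction('[Employee] Edit Field', props<{ changes: Partial<Employee> }>());
export const commandSucceeded = createAction('[Employee] Command Succeeded');
export const employeeReducer = createReducer(
  initialState,
  // A server push replaces the underlying entity but leaves in-progress edits alone,
  // so the UI can flag that the underlying data changed while the user was editing.
  on(entityReceived, (state, { employee }) => ({ ...state, entity: employee })),
  on(editFieldChanged, (state, { changes }) => ({
    ...state,
    inProgress: { ...state.inProgress, ...changes },
  })),
  // When the server confirms the command, clear the draft; the refreshed entity
  // arrives later via a separate entityReceived notification.
  on(commandSucceeded, (state) => ({ ...state, inProgress: {} }))
);
Selectors would then prefer the inProgress values over the entity values when rendering, which is what lets the UI show "the underlying data has changed" side by side with the user's unsaved edits.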

Related

How To Use Flux Stores

Most examples of Flux use a todo or chat example. In all those examples, the data set you are storing is somewhat small and can be kept locally, so I'm not exactly sure if my planned use of stores falls in line with the flux "way".
The way I intend to use stores is somewhat like ORM repositories: a way to access data in multiple ways and persist data to the data service, whatever that might be.
Let's say I am building a project management system. I would probably have methods like these for data retrieval:
getIssueById
getIssuesByProject
getIssuesByAssignedUser
getIssueComments
getIssueCommentById
etc...
I would also have methods like this for persisting data to the data service:
addIssue
updateIssue
removeIssue
addIssueComment
etc...
The one main thing I would not do is locally store any issue data (and, for that matter, most store data that relates to a backing data store). Most of the data is important to have fresh, because maybe the issue status has updated since I last retrieved that issue. All my data retrieval methods would probably always make an API request to get the latest data.
Is this against the flux "way"? Are there any issues with going about flux in this way?
I wouldn't get too hung up on the term "store". You need to create application state in some way if you want your components to render something. If you need to clear that state every time a different request is made, no problem. Here's how things would flow with getIssueById(), as an example:
component calls store.getIssueById(id)
returns empty object since issue isn't in store's cache
the store calls action.fetchIssue(id)
component renders empty state
server responds with issue data and calls action.receiveIssue(data)
store caches that data and dispatches a change event
component responds to event by calling store.getIssueById(id)
the issue data is returned
component renders data
Persisting changes would be similar, with only the most recent server response being held in the store.
user interaction in component triggers action.updateIssue(modifiedIssue)
store handles action, sending changes to server
server responds with updated issue and calls action.receiveIssue(data)
...and so on with the last 4 steps from above.
As you can see, it's not really about modeling your data, just controlling how it comes and goes.
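As a rough sketch of that flow, assuming the classic Flux Dispatcher/EventEmitter wiring (the store name, action creators, and the /api/issues endpoint below are invented for illustration):
import { Dispatcher } from 'flux';
import { EventEmitter } from 'events';
interface Issue { id: string; title: string; status: string; }
type IssueAction = { type: 'RECEIVE_ISSUE'; issue: Issue };
const dispatcher = new Dispatcher<IssueAction>();
class IssueStore extends EventEmitter {
  private cache = new Map<string, Issue>();
  constructor() {
    super();
    dispatcher.register((action) => {
      if (action.type === 'RECEIVE_ISSUE') {
        this.cache.set(action.issue.id, action.issue);
        this.emit('change');   // components re-read the store on this event
      }
    });
  }
  // Synchronous read: returns whatever is cached right now, possibly nothing.
  getIssueById(id: string): Issue | undefined {
    return this.cache.get(id);
  }
}
export const issueStore = new IssueStore();
// Action creators own the async work; the store only reacts to dispatched data.
export async function fetchIssue(id: string): Promise<void> {
  const response = await fetch(`/api/issues/${id}`);   // hypothetical endpoint
  const issue: Issue = await response.json();
  dispatcher.dispatch({ type: 'RECEIVE_ISSUE', issue });
}
export async function updateIssue(modified: Issue): Promise<void> {
  const response = await fetch(`/api/issues/${modified.id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(modified),
  });
  const issue: Issue = await response.json();
  dispatcher.dispatch({ type: 'RECEIVE_ISSUE', issue });   // only the latest server response is kept
}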

Best practice for saving application-level data to database (ASP.NET)

We have a very large HTML form (> 100 fields) that updates a SQL Server database with user-entered data. It will take the user a long time to fill out the form, but every piece of information they submit is very valuable to the business process. Even if the user gives up on the form, we want to retain everything they have entered.
We plan to attach an onblur event to each field and use jQuery/AJAX to post each piece of data back to the application server immediately. That part is pretty straightforward. The question we have is when and how to best save this application-level information to the database. Again, our priority is data retention as opposed to performance but we also want to do this as efficiently as possible.
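For what it's worth, a rough sketch of that field-level posting could look like the following; jQuery is assumed to be on the page, and the /Form.aspx/SaveField endpoint and payload shape are hypothetical stand-ins for whatever your application server exposes:
// Post each field's value back as soon as the user leaves it.
$('form :input').on('blur', function () {
  const field = $(this);
  $.ajax({
    url: '/Form.aspx/SaveField',   // hypothetical server endpoint
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    data: JSON.stringify({ name: field.attr('name'), value: field.val() }),
  });
});
That is the straightforward part; the open question, as stated, is what the server side does with each of those little posts.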
Options as I see it are:
Have the web service immediately post each piece of data to the database server.
Store the information in a custom class on the application server, then periodically call an update method to post new data to the database.
Store the information in view or session state, then run a routine to post this information to the database server.
Something else that we haven't thought of.
Option 1 seems the most obviously failsafe, but also the most resource-intensive. Option 2 seems the most elegant, but can we be absolutely certain that the custom class instance can't be destroyed without first running its update method?
Thanks for your help!
IMHO, I'd really cut up the form into sections (if possible). Since this is ASP.NET, if you are using Web Forms then look into using wizards (cut up the form into logical steps).
You can do the same without the Form Wizard, but still cut up the process into logical steps, client-side. You can probably do this in pure JavaScript, but it would likely be easier if you used a framework (jQuery, Knockout, etc.) - the concept remains the same: cut up the form entry process into sections (aka "steps"), e.g. using display toggles, divs for each "step", etc.
"retain everything even if abandoned later": assumed that the steps are "hierarchical" where the "most critical" inputs are at the beginning. This makes the "steps" approach even more important - this is a "logical group" (of inputs you really want) so if you do the Step approach, then you can save this data (of this "step") to DB in whatever fashion you deem appropriate (e.g. Ajax, or ASP.Net Post/postback).
Hth...
I would package everything up in some XML or a DataSet (.GetXml()) and pass the XML to a stored procedure...
How to pass XML from C# to a stored procedure in SQL Server 2008?
And maybe put the call on a background thread.
http://code.msdn.microsoft.com/CSASPNETBackgroundWorker-dda8d7b6
The XML approach will be faster than inserting the values row by row (RBAR).
You can save just the XML, or shred it into relational table(s).

Should I avoid the session with a complex object in asp.net?

Here's my issue, we have a large patient object that is used on multiple screens throughout the admin. Each screen contains different information about the same patient. It can't all be on one screen.
The only time I want to persist the patient is when the user clicks save. I need to have an in-memory patient somewhere. A user may be in the admin, change patient information on various screens, run validation and decide not to save that patient. This is typical use.
Is it ok to store this patient in the session? Or, is there a better approach to do this? At most this admin would have 20 users with access.
Opinions may vary on this. Session is tricky, especially if you use something other than in-memory session: distributed session will break a non-serializable object. If this object is a simple POCO or an object you control, try your best to make it play well with serialization. If it does, you're set. For an admin tool without much load, I'd say you'd be fine.
Hey, I found this - I know nothing about the site, but it illustrates my point:
https://www.fortify.com/vulncat/en/vulncat/dotnet/asp_dotnet_bad_practices_non_serializable_object_stored_in_session.html
I had a similar situation with similar amount of users. I did it and it worked great.
My situation was about scheduling events.
Someone would create an event and through multiple web pages would modify and configure this event. When they were all done it would save all the details to SQL. In the end, I was surprised just how well it worked.
Session should be fine here. You have what appears to be a light user load... but you might want to check exactly how much memory the object takes up, multiply that by the maximum number of users, and see where you are.
If you want to avoid the session altogether, you could use System.Web.Caching to store the object instead, and key the stored object using the user's identifier plus some constant string.
In either case, you'll want to be aware of how many web servers are running the application. If it's just one web server, no worries. If you have multiple web servers, you'll want to make sure they are "sticky" - then the user is guaranteed to have all requests processed by the same server. How this is done is entirely dependent on your flavor of load balancing... normally the "IT folks" handle this for you.

Simple Shopping Cart Using Session Variables Now Using AJAX

I know there are a million questions out there on how to implement a shopping cart in your site. However, I think my problem may be somewhat different. I currently have a working shopping cart that I wrote back in the 1.1 days that uses ASP.NET session variables to keep track of everything. This has been in place for about 6 years and has served its purpose well. However, it has come time to upgrade the site, and part of what I have been tasked with is creating a more, erm, user-friendly site. Part of this is removing UpdatePanels and implementing real AJAX solutions.
My problem comes in where I need to persist this shopping cart over several pages. Sure I could use cookies, but I would like to keep track of carts for statistical purposes (abandonment stats, items added but not bought, those kinds of stats) as well as user-friendliness, like persisting their cart so that if they come back it is remembered. This is easy enough if a user is logged in, but I don't want to force a user to create an account if they don't want.
Additionally, the way we were processing orders was a bit, ehh, slapped-together. All of the details (color selected, type selected, etc.) are passed to PayPal via their description string, which for the most part is OK, but if a product has selections that are too long for the string (255 chars, I believe), they are cut off and we have to call the customer to confirm what they bought. If I were to implement a more "solid" shopping cart, we wouldn't have to do that, because all the customer's choices would be stored, in addition to the order automatically being entered into the order processing system (they're manually entered into an Excel spreadsheet. I know, right).
I want to do this the right way, but I don't want to use any sort of overblown software that won't really work with our current business model. Do I use a cookie to "label" each visitor to match them with their cart (give them a cookie with a GUID) across pages, keep their whole cart client side, keep the cart server side and just pull it from the db on each page refresh? Any help would be greatly appreciated.
Thanks!
So this isn't really the answer to your question, but it is part of the answer. I'm trying to find the duplicate that this is of (it may not be) but you can keep a lot of the same code if you'll use IRequiresSessionState. I didn't find any exact duplicates, but I recognized the subject matter.
Handler with IRequiresSessionState does not extend session timeout
other answers:
ASP.NET: How to access Session from handler?
Authentication in ASP.NET HttpHandler
So what you really want to do is look at implementing PageMethods in your pages, and then you can reduce a LOT of the overhead of communicating with the pages. But if you want to migrate away from what you're doing now, you want to start implementing handlers (and configure them to use JSON - there's a decorator for that), and you can use those with the likes of jQuery.ajax() as a direct URL and keep it within the same scope of your project. Note that it will send the cookies for you by default, so that's no big deal. The reason I mention that is that the cookie holds the Forms identifier that lets the session be identified.
So if you're using IRequiresSessionState then you can still use all the session state information that you're used to using. There's nothing wrong with using Session in combination with AJAX. The two really don't have a lot to do with each other. One is used for server storage, the other is used for server communication.
Now, if you're trying for a completely clientside app and a RESTful server solution, then you're going to need to start passing complex JSON structures back and forth (not a big deal, just a matter of making sure you define your datatypes for yourself pretty well in your own documentation) and you can keep everything restricted to only what's passed.
I actually use all three types of these methods in my applications on the same server, depending on what I'm trying to do with each of my apps. I have found that each has its own strengths and weaknesses. (OK, I don't use the session part, because I handle my state in other ways, but I could use session state.)
What can I clarify further here?
There are a few ways you can handle this. The main question is how long you need to persist their shopping cart information (30 minutes, 1 hour, 1 day, 1 week, etc.).
Short Storage Requirements - Easiest Implementation (30 min - 1 hr)
You can use session with Page Methods by using HttpContext.Current.Session["key"], so you can keep your session storage working the same way it currently does for you. You can call these Page Methods using jQuery AJAX pretty easily, which would eliminate the need for UpdatePanels, a ScriptManager, etc. So it gets you halfway there, in my opinion. Your pages will load faster and be more responsive, and you don't have to throw away any of the code involved with caching stuff in session. The main downside is that you are still using session, so you really don't want to persist sessions for too long, as that will bog down the server hosting the site if it is fairly active.
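A rough sketch of what that client-side call might look like (the Cart.aspx page and AddToCart method are hypothetical; ASP.NET page methods expect a JSON content type and wrap the return value in a "d" property):
$.ajax({
  url: 'Cart.aspx/AddToCart',               // hypothetical [WebMethod] on the page
  type: 'POST',
  contentType: 'application/json; charset=utf-8',
  dataType: 'json',
  data: JSON.stringify({ productId: 42, quantity: 1 }),
}).done((response) => {
  // The server-side method can read/write HttpContext.Current.Session as usual;
  // whatever it returns comes back in response.d.
  console.log(response.d);
});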
Long Storage Requirements - Server Side Implementation
The same stuff above applies, except you are not using session, and you can use stateless web services if you like. You would generate a GUID for each visitor and store that GUID in a cookie. On each AJAX call you would send this GUID along with the data to persist. This info would get stored in a database, identified by the GUID. If the customer completes their order, the information can be moved from the cache database to the completed orders database. In this implementation you would want to write some service or scheduled job that deletes cached (not completed) orders after a certain amount of time, to keep the cache database lean.
A nice thing about this solution is that you can have a pretty long-lived cache, you can write reports that key off this cached data, and the load on your web server is reduced. Additionally, if your site becomes more popular, it is easy to scale out, because you don't have to worry about keeping sessions in sync across multiple web servers.
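A minimal sketch of the visitor-GUID part might look like this; the cookie name, endpoint, and payload are made up, jQuery is assumed for the POST helper, and crypto.randomUUID() assumes a reasonably modern browser (any GUID generator will do):
// Reuse the visitor's GUID if the cookie exists, otherwise mint one.
function getCartId(): string {
  const match = document.cookie.match(/(?:^|; )cartId=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID();
  // Persist for 30 days so the cart survives beyond the browser session.
  document.cookie = `cartId=${id}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return id;
}
// Send the GUID along with every cart change so the server can cache it by that key.
function saveCartItem(productId: number, quantity: number): void {
  $.post('/CartService.asmx/SaveItem',        // hypothetical stateless web service
    { cartId: getCartId(), productId, quantity });
}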
Long Storage Requirements - Client Side Implementation
This approach still uses web services or page methods but there is no caching database involved. Essentially you jam all your information into a cookie or set of cookies and key off that. You might still be able to get some information if you read out the contents of the cookie(s) on each POST and store that somewhere to report off of.
If you don't need to track which customers added things but didn't order, then the major benefit of this solution is that you can cut down the number of POSTs you have to do by a lot. You can write to the cookie(s) in JavaScript and just POST everything when they are completing their order. Just be careful not to put any sensitive information in the cookies unencrypted (contact info, billing info, etc.), as there are ways to mine data in cookies from other domains in some less secure browsers. For the sensitive stuff, you would POST it to the server and have it return the encrypted information for storage in the cookie(s).
The downside to this solution is that if the information you need to store is large, you could run up against the max cookie size and/or the max number of cookies per domain. With a good strategy (i.e. storing product IDs, not product descriptions) you will probably be OK.
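As a rough illustration of keeping the cart itself in a cookie (storing only product IDs and quantities, as suggested above), something like the following; the cookie name is hypothetical:
interface CartLine { productId: number; quantity: number; }
// The whole cart lives as JSON in one cookie; only IDs and quantities, nothing sensitive.
function readCart(): CartLine[] {
  const match = document.cookie.match(/(?:^|; )cart=([^;]+)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : [];
}
function writeCart(lines: CartLine[]): void {
  const value = encodeURIComponent(JSON.stringify(lines));
  document.cookie = `cart=${value}; path=/; max-age=${60 * 60 * 24 * 30}`;
}
function addToCart(productId: number, quantity: number): void {
  const cart = readCart();
  const line = cart.find((l) => l.productId === productId);
  if (line) { line.quantity += quantity; } else { cart.push({ productId, quantity }); }
  writeCart(cart);
  // Nothing is POSTed here; the single POST of readCart() happens at checkout.
}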
Let me know if any of the above is unclear or if you have additional questions.
EDIT: Didn't see the answer above that essentially lays out the Short Storage Requirements one I have. If that is the accepted solution give him the check mark (he beat me to it (= ). Leaving my answer as it lays out some additional options.

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option, but there is a heck of a lot of it, so that seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this allows a very straightforward way to lock/unlock access to specific entities.
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
Long transactions make the system less scalable. If you issue an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in those tables. The only way around that is to lock them => see no. 1.
Serializable transactions in some database engines use versions of the data in your tables, so after the first command is executed the transaction sees exactly the data that was available at execution time. This might help you show the changes made by the user, but you have no guarantee of being able to save them back into storage.
DataSets contain old/new versions of the data, but that is unfortunately outside of your technology choices.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is possible theoretically and is implemented in Oracle using flashbacks, SQL Server does not support it natively, since it has no means to query previous versions of the records.
You can issue a query like this:
SELECT *
FROM mytable
AS OF TIMESTAMP TO_TIMESTAMP('2010-01-17', 'YYYY-MM-DD')
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Available only in SQL Server 2005 and up, Enterprise edition.)
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply all that well to your problem. Hopefully the ideas here help you work something out.
