Redux performance in a large-scale JS app

I recently started studying Redux, and after a few hours a question came to mind about its performance in a large-scale web application.
Since Redux maintains the previous states of the store over time, suppose the application is big enough and has a substantial number of state changes over time: doesn't the growing memory consumption needed to keep all of those previous states eventually reduce the performance of the app?
FYI: I'm thinking from a Java background, where garbage collection releases unused memory after some time.

Assuming you use immutable data structures like the ones provided by immutable.js, there is no extra cost to remembering the previous states when adding or updating an existing data structure other than keeping track of references. This is one of the big advantages of using immutable data structures. Of course, when a state change consists of replacing the complete state with something else, these advantages are mitigated.
That said, you don't have to keep track of the previous states, it's just much easier and more effective to do it with immutable data structures.
Also, Redux doesn't remember previous states by default; that behavior comes from the Redux DevTools, which provide the "time travel" functionality you seem to be thinking of. This is very handy during development.
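To make the reference-sharing point concrete, here is a minimal sketch in plain JavaScript (the state shape and action type are invented for illustration): a reducer that follows the immutable-update convention copies only the branch it changes, so even if a previous state object is retained somewhere, it shares all unchanged branches with the next state.

// Hypothetical state shape and action, for illustration only.
const initialState = {
  todos: [{ id: 1, text: 'learn redux' }],
  settings: { theme: 'dark' } // imagine a large branch that rarely changes
};

function rootReducer(state = initialState, action) {
  switch (action.type) {
    case 'ADD_TODO':
      return {
        ...state,                            // copies top-level references only
        todos: [...state.todos, action.todo] // new array, but existing todo objects are reused
      };
    default:
      return state;
  }
}

const prev = initialState;
const next = rootReducer(prev, { type: 'ADD_TODO', todo: { id: 2, text: 'profile memory' } });
console.log(next.settings === prev.settings); // true: the unchanged branch is shared, not duplicated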


How to provide exclusive copies of a big data repository to many developers?

Here's a situation I am facing right now at work:
We currently have 300GB+ of production data (and it grows larger every day), stored in a MongoDB cluster.
The data science team members are working on a few algorithms that require access to all of this data at once, and those algorithms may update data in place; hence, they have replicated the data into a dev environment for their use until they are sure their code works.
If multiple devs run their algorithms at the same time, some or all of them may end up with unexpected output because the other algorithms are also updating the data.
This problem could easily be solved if everyone had their own copy of the data!
However, given the volume of data, it's not feasible for me to provide the developers (8 of them right now) with their own exclusive copy every day. Even if I automate the process, we'd have to wait until the copy completes over the wire.
I am hoping for a future-proof approach, considering we'll be dealing with TBs of data quite soon.
I assume many organizations face such issues, and I'm wondering how other folks approach this case.
I'd highly appreciate any pointers, leads, or solutions for this.
Thanks
You can try using snapshots of the replicated data so each developer can have their own "copy" of it. See the Snapshots definition and check with your cloud provider whether it can provide writable snapshots.
Note that snapshots are created almost instantly and, at the moment of creation, require almost no storage space, because the technology relies on pointers rather than copying the data itself. Unfortunately, each snapshot can grow up to the original volume size, because any change to the data triggers a physical copy: the technology behind the process is usually CoW (copy-on-write). So there is a real danger that uncontrolled snapshots will "eat" all your free storage space.
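For intuition about why a snapshot is nearly free to create but can grow toward the full volume size, here is a toy JavaScript sketch of the copy-on-write idea (a conceptual analogy only, not how a real storage system is implemented):

// Toy copy-on-write "snapshot" over an in-memory key/value store.
function makeSnapshot(base) {
  const overlay = new Map(); // holds only the blocks written after the snapshot was taken
  return {
    read(key) {
      return overlay.has(key) ? overlay.get(key) : base.get(key); // unchanged data comes from the base
    },
    write(key, value) {
      overlay.set(key, value); // the first write to a block is what actually consumes new space
    },
    extraBlocks() {
      return overlay.size; // grows toward base.size as more of the data is modified
    }
  };
}

const production = new Map([['block1', 'a'], ['block2', 'b']]);
const devCopy = makeSnapshot(production); // "instant", and initially needs almost no extra storage
devCopy.write('block1', 'a-modified');
console.log(devCopy.read('block1'), devCopy.read('block2'), devCopy.extraBlocks()); // a-modified b 1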

Redux with a large number of entities in the state

I have a Redux application. On first load (initial state) I get data from the server and put it into the store. The application has an entity named "Task". With about 500 tasks the app works perfectly (fast), but with over 2000 tasks I see it slow down. These tasks are used in several different areas of the app.
How can I optimize my application? I don't want to call the API separately for each area, because the areas use the same tasks.
I've read about Immutable.js. Is that the way to go, or not?
Thanks a lot.
The actual number of entities or JS objects in the store shouldn't matter in and of itself - it's a question of what your code is doing with those entities. If you're doing expensive transformations or filtering operations every time your components re-render, then yes, an increase in the number of entities will slow things down. Also, using Immutable.js won't magically improve speed - it can make certain operations faster, but it has overhead of its own and can be misused.
I'll point you to some resources for improving Redux performance:
My blog post Practical Redux, Part 6: Connected Lists, Forms, and Performance
A slideshow called High Performance Redux
The articles in the Redux Performance section of my React/Redux links list
The Redux FAQ entry on "scaling" Redux
The Redux FAQ entry on speeding up mapState functions
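As a concrete illustration of the kind of optimization those resources describe, here is a minimal sketch assuming the reselect library (the state shape and field names are invented): an expensive derived list is memoized so it is recomputed only when its inputs actually change, rather than on every re-render.

import { createSelector } from 'reselect';

// Plain input selectors over an assumed state shape.
const selectTasks = state => state.tasks;
const selectAreaFilter = state => state.areaFilter;

// The result function runs only when tasks or areaFilter change identity;
// otherwise the cached array is returned.
const selectVisibleTasks = createSelector(
  [selectTasks, selectAreaFilter],
  (tasks, areaFilter) => tasks.filter(task => task.area === areaFilter)
);

// Used in mapStateToProps (or useSelector), repeated calls with unchanged inputs
// return the same array reference, so connected components can skip re-rendering.
// const mapStateToProps = state => ({ tasks: selectVisibleTasks(state) });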

Shouldn't a Redux app with immutable data run out of memory after a while?

I come from a background in embedded systems, where you have to be really careful about memory management, and with Redux I'm puzzled in particular by its concept of immutability.
Let's say I'm modifying a member of an array. I have to create a new array that links to all the original members plus the modified item.
I understand why immutability improves speed, but my concern is that we essentially never remove the old copies of the objects when creating new ones, and Redux still keeps references to the old objects because of its time-travel features.
Most machines these days have quite a lot of memory, but shouldn't a Redux app, at least in theory, eventually crash because the tab/process runs out of memory? After long use, maybe?
No. First, Redux itself doesn't keep old data around - that's something the Redux DevTools addon does. Second, I believe the DevTools addon has limits on how many actions it will track. Third, JavaScript is a garbage-collected language, so objects that are no longer referenced will be cleaned up. (Hand-waving a bit there, but that generally covers things.)
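A minimal sketch of the second point, assuming the redux-devtools-extension package and its maxAge option (the reducer here is made up): the action history lives in the DevTools, not in Redux itself, and its length can be capped.

import { createStore } from 'redux';
import { devToolsEnhancer } from 'redux-devtools-extension';

// Trivial reducer, purely for illustration.
const counter = (state = { count: 0 }, action) =>
  action.type === 'INCREMENT' ? { count: state.count + 1 } : state;

// Without the enhancer, Redux holds only the current state object.
// With it, the DevTools record dispatched actions - capped here at the last 50 -
// so older entries are dropped and their states can be garbage-collected.
const store = createStore(counter, devToolsEnhancer({ maxAge: 50 }));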
Immutable.js leverages the idea of structural sharing when creating new versions of collections. It implements persistent data structures and internally uses concepts like tries to achieve structural sharing. So if you created a list with 10 items, adding a new item will not copy those 10 items; the new list shares most of its structure with the old one.
Persistent data structures provide the benefits of immutability while maintaining high performance reads and writes and present a familiar API.
Immutable.js data structures are highly efficient on modern JavaScript VMs by using structural sharing via hash maps tries and vector tries as popularized by Clojure and Scala, minimizing the need to copy or cache data.
I suggest you watch this awesome talk by Lee Byron, the creator of Immutable.js.
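A minimal sketch of what that structural sharing looks like in practice, assuming the immutable package (the values are arbitrary):

import { List } from 'immutable';

const list1 = List([1, 2, 3]); // original collection
const list2 = list1.push(4);   // returns a new List, but reuses list1's internal nodes

console.log(list1.size, list2.size); // 3 4 - the original is untouched
console.log(list1 === list2);        // false - the new version has its own identity
// Internally list2 shares almost all of list1's structure (trie nodes),
// so memory grows with the size of the change, not the size of the collection.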

Does Redux have a nicer way to handle a very large state object?

We're planning an Electron app, and we're considering Redux. The app is going to have a huge state, with data potentially being read from dozens or hundreds of files. While doing some research to understand Redux, I found that a reducer must never mutate the state and must instead create a new state with any changes. This seems like a problem if the state of the app is very large, since we would need to deep-copy everything in the state (it's going to be a bunch of nested objects), temporarily taking up double the memory used to hold the state until the old state is deleted. This doesn't seem ideal at all.
Is there a better way to handle this situation in Redux?
You shouldn't deep-copy. In fact, I'd say deep-copying runs counter to the Redux way.
You'll be copying by reference most of the time, which is very fast.
If you're worried about your state tree being too large, I recommend redux-ignore, to break things down.
I'm currently running a redux app that has over 200 reducers. I've yet to detect a bottleneck due to redux, even on cheap android mobile devices.
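To illustrate the copying-by-reference point, here is a minimal sketch (the slice names are invented) showing that with combineReducers only the slice that actually changes is replaced; every other slice keeps its existing object, so the whole tree is never deep-copied.

import { combineReducers, createStore } from 'redux';

// Hypothetical slices: one huge and mostly static, one small and frequently updated.
const filesReducer = (state = { byPath: {} }, action) =>
  action.type === 'FILE_LOADED'
    ? { byPath: { ...state.byPath, [action.path]: action.contents } }
    : state; // returning the same reference means "nothing changed here"

const uiReducer = (state = { selectedPath: null }, action) =>
  action.type === 'SELECT_FILE' ? { selectedPath: action.path } : state;

const store = createStore(combineReducers({ files: filesReducer, ui: uiReducer }));

const before = store.getState();
store.dispatch({ type: 'SELECT_FILE', path: '/tmp/report.txt' });
const after = store.getState();

console.log(before.files === after.files); // true: the big slice was never copied
console.log(before.ui === after.ui);       // false: only the slice that changed was replaced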

Architecture for Satellite Parts of a Larger Application

I work for a firm that provides certain types of financial consulting services in most states in the US. We currently have a fairly straightforward CRUD application that manages clients and information about assets and services we perform for each. It only concerns itself with the fundamental data points and processes that are common to all locations--the least common denominator.
Now we want to implement support for tracking disparate data points and processes that vary from state to state, while preserving the core nationally-oriented system.
The stack I'm working with is ASP.Net and SQL Server 2008. The national application is a fairly straightforward web forms thing. Its data access layer is a repository wrapper around LINQ to SQL entities and datacontext. There is little business logic beyond CRUD operations currently, but there would be more as the complexities of each state were introduced.
So, how to implement the satellite pieces...
Just start glomming on the functionality and pursue a big ball of mud
Build a series of satellite apps that re-use the data-access layer but are otherwise stand-alone
Invest (money and/or time) in a rules engine (a la Windows Workflow) and isolate the unique bits for each state as separate rule-sets
Invest (time) in a plugin framework a la MEF and implement each state's functionality as a plugin
Something else
The ideal user experience would appear as a single application that seamlessly adapts its presentation and processes to whatever location the user is working with. This is particularly useful because some users work with assets in multiple states. So there is a strike against number two.
I have no experience with MEF or WF so my question in large part is whether or not mine is even the type of problem either is intended to address. They both kinda sound like it based on the hype, but could turn out to be a square peg for a round hole.
In all cases each state introduces new data points, not just new processes, so I would imagine the data access layer would grow to accommodate the addition of new tables and columns, but I'm all for alternatives to that as well.
Edit: I tried to think of some examples I could share. One might be that in one state we submit certain legal filings involving client assets. The filing has attributes and workflow that are different from other states that may require similar filings, and the assets involved may have quite different attributes. Other states may not have comparable filings at all, still others may have a series of escalating filings that require knowledge of additional related entities unique to that state.
Start with the Strategy design pattern, which basically allows you to define a "placeholder" to be replaced by concrete classes at runtime.
You'll have to sketch out a clear interface between the core app and the "plugins", and have each strategy implement it. Then, at runtime, when you know which state the user is working on, you can instantiate the appropriate state strategy class (perhaps using a factory method) and call the generic methods on it, e.g. something like:
IStateStrategy stateStrategy = StateSelector.GetStateStrategy("TX"); //State id from db, of course...
stateStrategy.Process(nationalData); //Run the state-specific logic against the shared national data
Of course, each of these strategies should use the existing data layer, etc.
The (apparent) downside of this solution is just that you'll be hard-coding the rules for each state, and you cannot transparently add new rules (or new states) without changing the code. Don't be fooled, that's not a bad thing - your business logic should be implemented in code, even if it's dependent on runtime data.
Just a thought: whatever you do, completely code 3 states first (with 2 you're still tempted to repeat identical code, with more it's too time-consuming if you decide to change the design).
I must admit I'm completely ignorant about rules or WF. But wouldn't it be possible to just have one big stupid ASP.Net include file with instructions for states separated from main logic without any additional language/program?
Edit: Or is it just the fact that each state has quite a lot of completely different functionality, not just some bits?
