Shouldn't a Redux app with immutable data run out of memory after a while?

I come from a background in embedded systems, where you're really careful about memory management, so Redux's emphasis on immutability caught my attention.
So let's say I'm modifying a member of an array. I have to create a new array that links to all the original members plus the modified item.
I understand why immutability improves speed (changes can be detected with cheap reference comparisons), but here is my question: since we essentially never remove the old copies of the objects and keep creating new ones, and since Redux keeps references to the old objects for its time-traveling features, shouldn't memory use keep growing?
Most machines these days have quite a lot of memory, but shouldn't a Redux app, at least in theory, crash because the tab/process runs out of memory, maybe after long use?
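For concreteness, here's the kind of update I mean (plain JavaScript, made-up data):

```javascript
// Mark todo #2 as done without mutating the original array.
const items = [{ id: 1, done: false }, { id: 2, done: false }];
const next = items.map(item =>
  item.id === 2 ? { ...item, done: true } : item
);

console.log(next === items);       // false: a new array was created
console.log(next[0] === items[0]); // true: untouched elements are shared
```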

No. First, Redux itself doesn't keep old data around - that's something the Redux DevTools addon does. Second, I believe the DevTools addon has limits on how many actions it will track. Third, JavaScript is a garbage-collected language, so items that are no longer referenced will be cleaned up. (Hand-waving a bit there, but that generally covers things.)
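For example, here is a sketch of capping the DevTools history. This assumes the redux-devtools-extension package; its maxAge option bounds how many actions (and hence how many retained states) the extension keeps:

```javascript
import { createStore } from 'redux';
import { composeWithDevTools } from 'redux-devtools-extension';
import rootReducer from './reducers'; // hypothetical app reducer

// maxAge caps the number of actions DevTools retains; once the limit
// is reached, the oldest entries are dropped and become collectible.
const composeEnhancers = composeWithDevTools({ maxAge: 50 });
const store = createStore(rootReducer, composeEnhancers());
```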

Immutable.js leverages the idea of structural sharing when creating new versions of collections. It implements persistent data structures and internally uses concepts like tries to achieve the sharing. So, if you created a list with 10 items, adding a new item produces a new list, but one that reuses the existing items rather than copying all 10 of them.
Persistent data structures provide the benefits of immutability while maintaining high performance reads and writes and present a familiar API.
Immutable.js data structures are highly efficient on modern JavaScript VMs by using structural sharing via hash maps tries and vector tries as popularized by Clojure and Scala, minimizing the need to copy or cache data.
I suggest you watch this awesome talk by Lee Byron, the creator of Immutable.js.
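To make the sharing concrete, a minimal sketch (assuming the immutable npm package; the elements are objects so identity can be checked):

```javascript
import { List } from 'immutable';

// Ten items, each an object so we can check identity rather than value.
const items = Array.from({ length: 10 }, (_, i) => ({ id: i }));
const listA = List(items);
const listB = listA.push({ id: 10 }); // a NEW list; listA is unchanged

console.log(listA.size, listB.size);        // 10 11
console.log(listA.get(0) === listB.get(0)); // true: entries are shared,
                                            // not copied
```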

Related

Won't an app based on immutable data structures run out of memory?

Never mind Redux or anything like that - I am asking solely about Immutable.js, Ramda, etc.
If new versions of a data structure are created by structural sharing, that means every new version needs a pointer to the previous version in order to share anything. That in turn means older versions of a structure cannot be garbage collected, which means that in an app that holds state, the state will use a monotonically increasing amount of memory. If this is the case, then that data structure will at some point have used all available memory, if it keeps getting modified.
Am I missing something here? I can see that for many (most) use cases on the web (in a browser), this won't be a problem, as you are probably just changing a tiny part of the structure each time, and you will probably leave the page or reload it way before you use all memory, but for long running processes this should pose a problem. Right? Riiight?
If new versions of a data structure are created by structural sharing, that means every new version needs a pointer to the previous version in order to share anything.
This is not correct in general. The new version has pointers to subparts of the previous version, so it shares a fraction (often almost all) of the data of the older version.
For example, OCaml's maps (implemented as self-balancing binary search trees) are immutable: see the documentation of Map. But if you add (or remove) a binding to (from) a map, you get a new map sharing most (but not all) of its internal nodes with the old one.
So the garbage collector will eventually reclaim those old internal nodes that are no longer reachable from the current "state".
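A minimal plain-JavaScript sketch of that pointer direction (made-up state shape):

```javascript
// The NEW version points at parts of the OLD one, not the other way around.
const v1 = {
  user: { name: 'Ada' },
  todos: [{ id: 1, text: 'ship' }],
};
const v2 = { ...v1, todos: [...v1.todos, { id: 2, text: 'test' }] };

console.log(v2.user === v1.user);   // true: the untouched subtree is shared
console.log(v2.todos === v1.todos); // false: only the changed path is new

// If nothing references v1 anymore, v1's root object and its old todos
// array become unreachable and can be collected; v2 keeps `user` alive
// on its own.
```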
BTW, web programming (and web navigation) is related to continuations and continuation-passing style. See e.g. Byrd's Web Programming with Continuations and several papers by C. Queinnec.
Also read more about monads in functional programming.

Redux performance in a large scale JS app

I recently started studying Redux. After a few hours, a question came to mind about the performance of Redux in a large-scale web application.
My question: since Redux maintains the previous states of the store over time, suppose the application is big enough and goes through an ample number of state changes. Doesn't that degrade the app's performance, since memory consumption keeps increasing over time in order to maintain all the previous states?
FYI: I'm thinking of this from a Java background, where garbage collection releases unused memory after some time.
Assuming you use immutable data structures like the ones provided by immutable.js, there is no extra cost to remembering the previous states when adding or updating an existing data structure other than keeping track of references. This is one of the big advantages of using immutable data structures. Of course, when a state change consists of replacing the complete state with something else, these advantages are mitigated.
That said, you don't have to keep track of the previous states, it's just much easier and more effective to do it with immutable data structures.
Also, Redux doesn't remember previous states by default; that is done by the Redux DevTools, which provide the "time travel" functionality you seem to be aiming at. It is very handy during development.
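As a sketch of why the per-action cost stays small (plain Redux, hypothetical state shape): each action returns a new root object, but every slice you didn't touch is carried over by reference.

```javascript
// Hypothetical state shape; only `todos` is rebuilt on ADD_TODO,
// while `settings` (and anything else) is reused by reference.
const initialState = { todos: [], settings: { theme: 'dark' } };

function rootReducer(state = initialState, action) {
  switch (action.type) {
    case 'ADD_TODO':
      return { ...state, todos: [...state.todos, action.payload] };
    default:
      return state;
  }
}
```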

Meteor: Business Objects

I started Meteor a few months ago.
I would like to know if using cursor.observeChanges for business objects is a good idea.
I want to separate operations and views so I can use the same operations in many views/events, and I want to know if it is a good idea.
Someone told me we should not separate operations on Mongo from the view.
So my question is: is it a good idea to do business objects with Meteor?
Thanks for reading.
cursor.observeChanges is essentially what you get behind the scenes when you do normal find() queries and bind them to template helpers, since that context is reactive.
In the meteor world, the traditional model/view/controller paradigm is shifted towards a reactive data-on-the-wire concept including features like latency compensation.
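For reference, a minimal sketch of what observeChanges looks like when used directly (hypothetical Tasks collection):

```javascript
import { Mongo } from 'meteor/mongo';

// Hypothetical collection; observeChanges fires callbacks as the
// query's result set changes, which is what Meteor's template
// reactivity uses under the hood.
const Tasks = new Mongo.Collection('tasks');

const handle = Tasks.find({ done: false }).observeChanges({
  added(id, fields)   { console.log('added', id, fields); },
  changed(id, fields) { console.log('changed', id, fields); },
  removed(id)         { console.log('removed', id); },
});

// Stop observing when the surrounding context is torn down.
handle.stop();
```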
What you refer to as a business object is basically a strongly typed representation of your business data: it has a type of its own, is atomic, and has the single task of representing that data.
You can achieve that kind of separation of concerns in any language/framework, including meteor. That only depends on how you lay out, structure and abstract your code.
What Meteor brings into the equation is the toolset to build up an interface to your data with modern ux features that are otherwise very hard/expensive to get.
The only concern over business-class applications could be the fact that Meteor currently employs MongoDB by default. MongoDB has its own discussions around business applications whether they need transaction support, ad-hoc aggregation, foreign key relationships etc. But that is another topic.

Can ASP.NET performance be improved with modules/static classes?

Can using Modules or Shared/Static references to the BLL/DAL improve the performance of an ASP.NET website?
I am working on a site that consists of two projects: the website, and a VB.NET class library which acts as a combination of DAL and BLL.
The library is used to communicate with databases and sometimes transform/validate the data going into/coming from the DBs.
Currently each page on the site that needs db access (vast majority) will create an instance of the relevant class in the library to access specific tables.
As I understand it this leads to a class from the library being instantiated and garbage collected for each request, with the possibility of multiple concurrent instances if multiple users view the same page.
If I converted the classes to modules (shared/static class) would performance increase and memory be saved as only one instance of each module exists at a time and a new instance is not having to be created for each request?
(if so, does anyone know if having TableAdapters as global variables in the modules would cause problems due to threading?)
Alternatively, would making the references to the library class in the ASP.NET page shared/static have the same effect (except that I would have to rewrite a lot less)?
I'm no expert, but think that the absence of examples of this static class / session object model in books and online is indicative of it being a bad idea.
I inherited a Linq-To-Sql application where the db contexts were static, and after n requests the whole thing just fell apart. The standard model for L2Sql is the Unit-of-Work pattern (define a task or set of tasks - do them and close). Let the framework worry about connection pooling and efficient GC.
Are you just trying to be efficient or do you have performance issues? If the latter it's usually more effective to look at caching or improving query efficiency (use stored procedures, root out queries in loops) than looking at object instantiation.
Statics don't play well with unit tests either (another reason why they have dropped out of fashion).
Instances are only a problem if they are not collected by the GC (a memory leak). Instances are also more flexible than statics because you can configure each instance for the specific context in which it is used.
When an application has poor performance or memory problems, it's usually a sign that:
instances are not properly released (IDisposable)
the amount of data retrieved is too big (not paging large sets of data)
a large number of queries are executed (select n+1, or just a lot of queries)
poorly constructed SQL statements (missing indexes, FKs, too many joins, etc.)
too many remote calls (either to other servers, or disk)
These are the first things I would check; then start looking at the number of instantiated objects. Chances are that correcting the list above will solve most performance bottlenecks.
Can using Modules or Shared/Static references to the BLL/DAL improve the performance of an ASP.NET website?
It's possible, but it depends heavily on how you use your data. One tradeoff in using a single shared instance of an object instead of one per request is that you will need to apply locking unless the objects are strictly read-only, and locking can both slow things down and complicate your code (not to mention being a common source of bugs).
However, if each object is going to contain the exact same data, then the tradeoff may be worth it -- even more so if it can save a DB round-trip.
You might consider using either a Singleton or a small number of parameterized objects rather than a static, though -- and use caching to manage them. That would give you the flexibility to let go of objects that you no longer need, which is harder to do when you're dealing with statics.

Should I cache instances of frequently accessed classes

New to .NET and was wondering if there is a performance gain to keeping an instance of, for example, a DAL object in scope?
Coming from the ColdFusion world, I would instantiate a component and store it in the application scope, so that every time my code needed that component it would not have to be instantiated over and over again, affecting performance.
Is there any benefit to doing this in ASP.NET apps?
Unless you are actually experiencing a performance problem, you need not worry yourself with optimizations like this.
Solve the business problems first, and use good design. As long as you have a decent abstraction layer for your data access code, then you can always implement a caching solution later down the road if it becomes a problem.
Remember that any caching solution increases complexity dramatically.
No. In the multi-tier world of ASP.NET this would be considered a case of "premature optimization". Once a site's suite of stubs, scripts and programs has scaled up and been running for a few months, you can look at logs and traces to see what might be cached, spawned or rewritten to improve performance. And as the infamous Jeff Atwood says, "Most code optimizations for web servers will benefit from money being spent on new and improved hardware rather than tweaking code for hours and hours."
Yes indeed you can and probably should. Oftentimes the storage for this is in the Session; you store data that you want for the user.
If it's a global thing, you may load it in the Application_Start event and place it somewhere, possibly the HttpCache.
And just a note, some people use "Premature Optimisation" to avoid optimising at all; this is nonsense. It is reasonable to cache in this case.
It is very important to do a cost-benefit analysis before caching any object; one must consider factors like:
Performance advantage
Frequency of use
Hardware
Scalability
Maintainability
Time available for delivery (one of the most important factors)
Finally, it is always useful to cache objects which are very costly to create or which you use very frequently, e.g. a table's data (from the DB) or XML data.
Does the class you are considering this for have state? If not (and DAL classes often do not have state, or do not need state), then you should make its methods static, and then you don't need to instantiate it at all. If the only state it holds is a connection string, you can make that field static too, and avoid the requirement of instantiating the class that way.
Otherwise, take a look at the design pattern called Flyweight
