I am checking the performance of my Angular project, and I was looking at memory usage in the Memory tab of the developer tools. Since I am using a Redux store, what I see is the memory used by the Angular project plus the Redux store combined. How can I find out how much memory the Redux store alone is taking?
Since your Redux store is just a single variable, you can run
console.log(JSON.stringify(store.getState()).length)
to get a rough estimate of its size. Strictly, that counts UTF-16 code units rather than bytes; the sketch below gives a slightly closer byte count.
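A slightly more careful sketch, assuming a standard Redux setup (the reducer and state shape here are illustrative): serialize the state and measure its UTF-8 byte size. Bear in mind this measures the serialized form, not the actual heap footprint of the live objects; for that, compare heap snapshots in the Memory tab with and without the store populated.

import { createStore, Store } from 'redux';

// Illustrative reducer and state shape; substitute your real store.
const reducer = (state = { items: [] as string[] }, _action: { type: string }) => state;
const store: Store = createStore(reducer);

function approximateStateSize(s: Store): number {
  const json = JSON.stringify(s.getState());
  // Blob counts UTF-8 bytes; json.length counts UTF-16 code units,
  // so this is usually the closer estimate of the serialized size.
  return new Blob([json]).size;
}

console.log(`~${approximateStateSize(store)} bytes of serialized state`);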
As per the documentation, Firebase Functions are currently supported in only four regions: "us-central1", "us-east1", "europe-west1" and "asia-northeast1".
That means locations farther away incur more latency, and that often translates to lower performance.
How can this limitation be worked around?
1) Choose the location that is closest to you. You can set up test Cloud Functions in different regions and measure the round-trip latency (a minimal timing sketch follows this list). Only you can discover the specifics of your location.
2) Focus your software architecture on infrastructure that is locally available.
Use the client-side Firestore library directly as much as possible. It supports offline data, queueing writes to send out later if you don't have internet, and caching read data locally - you can't get lower latency than that! So make sure you use Firestore for CRUD operations.
3) Architect to use Cloud Functions for batch and background processing. If any business-logic processing is required, write the data to Firestore (using the client libraries) and have a Cloud Functions trigger do the processing when that write event fires. Have the trigger update the record with the results of the additional processing and its state (see the trigger sketch after this list). I believe that if you're using the client-side libraries, there is a way to have the updated data automatically pushed back to the client side.
You also have the bonus benefit of being able to control authorisation with Firestore's auth and security rules, whereas Functions don't have an equivalent authorisation control built in.
4) Reduce chatter - minimise the number of Cloud Function calls overall, and ensure your Cloud Functions themselves do more in one go and return more complete data in one go.
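Here is the timing sketch mentioned in point 1. It is only an illustration: you would deploy a trivial HTTPS function (called "ping" here, a made-up name) to each region, and <project-id> is a placeholder for your own Firebase project ID.

const regions = ['us-central1', 'us-east1', 'europe-west1', 'asia-northeast1'];

// Time one round trip to the same trivial HTTPS function deployed per region.
async function measureLatency(region: string): Promise<number> {
  const start = performance.now();
  await fetch(`https://${region}-<project-id>.cloudfunctions.net/ping`);
  return performance.now() - start;
}

async function compareRegions(): Promise<void> {
  for (const region of regions) {
    console.log(`${region}: ${(await measureLatency(region)).toFixed(0)} ms`);
  }
}

compareRegions();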
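And here is a minimal sketch of the write-then-trigger pattern from points 2 and 3, using the firebase-functions v1 API. The "orders" collection and its fields are made up for illustration; the shape of the trigger itself is the documented API.

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Fires when a client writes a new document. The heavy lifting happens in the
// background, and the result plus a status flag is written back onto the record.
// A client subscribed with onSnapshot sees the update pushed automatically.
export const processOrder = functions
  .region('us-central1') // pick the supported region closest to your users
  .firestore.document('orders/{orderId}')
  .onCreate(async (snapshot) => {
    const order = snapshot.data();
    const total = (order.items ?? []).reduce(
      (sum: number, item: { price: number }) => sum + item.price,
      0
    );
    return snapshot.ref.update({ total, status: 'processed' });
  });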
I am just wondering where NgRx keeps its state data. Is it in-memory storage only, or does it use localStorage or IndexedDB? I mean, what happens to the state when the app refreshes?
By default it's in memory only. There are packages that add support for persisting state on the client.
IndexedDB:
https://github.com/ngrx/db
localStorage:
https://github.com/btroncone/ngrx-store-localstorage
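For the localStorage route, a minimal sketch based on the ngrx-store-localstorage README; the slice names ('todos', 'auth') are illustrative:

import { ActionReducer, MetaReducer } from '@ngrx/store';
import { localStorageSync } from 'ngrx-store-localstorage';

// Meta-reducer that mirrors the listed state slices into localStorage
// and rehydrates them when the app starts up again.
export function localStorageSyncReducer(reducer: ActionReducer<any>): ActionReducer<any> {
  return localStorageSync({ keys: ['todos', 'auth'], rehydrate: true })(reducer);
}

export const metaReducers: MetaReducer<any>[] = [localStorageSyncReducer];

// Then register it: StoreModule.forRoot(reducers, { metaReducers })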
We are currently using Google Cloud Datastore and Objectify to return query results to the front end. I am currently doing performance comparisons between Datastore and Cloud Storage for returning lists of key values.
My question is whether using Objectify will perform better than the Java or Python low-level APIs, or whether they should be the same. If the performance is not better with Objectify then I can safely use the regular APIs for my performance tests.
Any help appreciated.
Thanks,
b/
This is a weird question. The performance of the Python and Java low-level APIs differs wildly because of the performance of the runtimes themselves. Objectify is a thin object-mapping layer on top of the Java low-level API. In general it does not add significant computational cost to do this mapping, although it is possible to create structures and patterns that do (especially with lifecycle callbacks). The "worst" of it is that Objectify does some class introspection on your entities at boot, which might or might not be significant depending on how many entity classes you have.
If you are asking this question, you are almost certainly prematurely optimizing.
Objectify allows you to write code faster and makes it easier to maintain, at the expense of a very small, usually negligible, performance penalty.
You can mix the low-level API with Objectify in the same application as necessary. If you ever notice a spot where the performance difference is significant (which is unlikely, if you use Objectify correctly), you can always rewrite that part using the low-level API.
Thanks for the responses. I am not currently trying to optimise the application as such, but to assess whether our data can be stored in Cloud Storage instead of Datastore without incurring a significant performance hit when retrieving the keys.
We constantly reload our data and thus incur a large ingestion cost with Datastore each time we do so. If we used Cloud Storage instead, this cost would be minimal.
This is an option which Google's architects have suggested so we are just doing some due diligence on it.
Is it possible to check the remaining storage within my SQLite DB using PhoneGap?
I've created a DB and defined its size as 10MB.
var db = window.openDatabase("SampleDB", "0.1", "Name DB", 10000000); // name, version, display name, estimated size in bytes (~10 MB)
What I want to do is notify the client, before they add a record through a form in the PhoneGap app, that the DB size has been exceeded, so they can't enter any more records. I'll sync up with a server to clear down the DB when I have a connection.
Is this possible? I don't want to create a 100MB DB and just hope the size isn't maxed out before a connection to the server is found.
Thanks.
An informative list of all the storage approaches available in Cordova, and their memory limitations, can be found here.
But if you really want to know how much memory a DB can hold on an individual device, there is no way around modifying a plugin for each platform concerned.
At the moment there are still only two global players in the mobile market:
Android:
But there is a way to get the maximum memory usage of an SQLite DB on Android here.
Have a look at the answer that states: Android's SQLiteDatabase class includes a method that sets a maximum database size, setMaximumSize(long numBytes).
I made a plugin for Android recently, so I'm pretty confident that making an Android db-max-checker plugin is a piece of cake; a sketch of the JavaScript side follows below. Of course you have to implement the code that handles the SQLite DB operations on your own, but there are tons of tutorials out there.
And here is a starter (Hello World) app for iOS, Android and Windows Phone.
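For what it's worth, the JavaScript side of such a db-max-checker plugin might look like the sketch below. Everything except cordova.exec itself is hypothetical: 'DbSizeChecker' and 'getUsage' are made-up names for the plugin you would write, with the native side wrapping SQLiteDatabase's setMaximumSize()/getMaximumSize() on Android.

// Loose typing for the Cordova bridge; real projects get this from plugin typings.
declare const cordova: {
  exec: (ok: (r: any) => void, err: (e: any) => void,
         service: string, action: string, args: any[]) => void;
};

function getRemainingDbBytes(onResult: (remaining: number) => void): void {
  cordova.exec(
    (result: { maxBytes: number; usedBytes: number }) =>
      onResult(result.maxBytes - result.usedBytes),
    (error) => console.error('DbSizeChecker failed', error),
    'DbSizeChecker', // hypothetical native plugin class
    'getUsage',      // hypothetical action returning { maxBytes, usedBytes }
    []
  );
}

// Warn the user before a form submit if less than ~50 kB remains.
getRemainingDbBytes((remaining) => {
  if (remaining < 50 * 1024) {
    alert('Local database is nearly full; please sync before adding more records.');
  }
});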
iOS (Apple):
Unfortunately I've not found any reference that matches your request, but here is a good note stating that an SQLite DB is limited only by the storage of your device, and that the program's footprint (how many objects are allocated, imported resources like images, etc.) is rather moderate. With Cordova there is no need to allocate a great number of objects, and nowadays an iOS device comes with plenty of storage, so a memory shortage is less likely than on Android.
Based on this answer, it looks like the Meteor server keeps an in-memory copy of the cache for each connected client. My understanding is that this is used to avoid sending multiple copies of data when dealing with overlapping subscriptions on a client.
The relevant part of the linked answer (emphasis is mine):
The merge box: The job of the merge box is to combine the results (added, changed and removed calls) of all of a client's active publish functions into a single data stream. There is one merge box for each connected client. It holds a complete copy of the client's minimongo cache.
Assuming that answer is still accurate for the current version of Meteor, couldn't that create a huge waste of memory on the server as the number of users increases?
As an off-the-cuff calculation, if an app had about a 100kB cache per client, then 10,000 concurrent users would use up 1GB of memory on the server, and 100,000 users a whopping 10GB! This would be true even if each client was looking at almost identical data. It seems plausible for an app to use much more data than that per client, which would further exacerbate the problem.
Does this problem exist in the current version of Meteor? If so, what techniques can be used to limit the amount of memory the server needs to use to manage all the client subscriptions?
Take a look at this post by Arunoda at his meteorhacks.com blog:
http://meteorhacks.com/making-meteor-500-faster-with-smart-collections.html
which talks about his Smart Collections package:
http://meteorhacks.com/introducing-smart-collections.html
He created an alternative collection stack which has succeeded in its goals of speed, efficiency (memory & CPU) and scalability (you can see a graphed comparison in the post). Admittedly, in his tests RAM usage was negligible with both collection types, although given the way he's implemented things, there should be a very obvious difference with the type of use case you mentioned. A minimal usage sketch follows.
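If the API shown in those posts is still accurate, switching is close to a drop-in change; a sketch under that assumption, with the collection and publication names being illustrative:

declare const Meteor: any; // Meteor global, typed loosely for this sketch

// Drop-in replacement for Meteor.Collection, per the meteorhacks posts.
const Posts = new Meteor.SmartCollection('posts');

// Publications and queries keep the same shape as with regular collections.
Meteor.publish('recentPosts', function () {
  return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
});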
Also, you can see in this post on meteor-core:
https://groups.google.com/d/msg/meteor-core/jG1KLObX1bM/39aP4kxqWZUJ
that the Meteor developers are aware of his work and are cooperating in implementing some of the improvements into Meteor itself (but until then his smart package works great).
Important note! Smart Collections relies on access to the MongoDB oplog. This is easy if you're running on your own machine or hosted infrastructure. If you're using a cloud-based database, this option might not be available, or, if it is, it will cost a lot more than the smaller plans.