I am new to Ionic. I want to know what the ideal choice for storage is. Is it Ionic Storage?
Also, I need to manually enter some data for the app. I can't find any way to store data beforehand; is this possible in Ionic?
For example, let's say I need a database/storage pre-filled with some values. How can I do that? Is that possible, or do I need to get the data from the cloud?
I have posted my question in the Ionic forum, and this is the best answer I got:
Super simple solution: On start you check if a special value is set, e.g. databasePreloaded. If it is missing, you open a file, read the content and write it to the storage. Then you set databasePreloaded. On next start it will be present and the data won’t be loaded again.
My question is: if the data is around 5-6 MB, then what is the ideal way to do that?
Using a check to see if the data has been loaded is a simple choice.
It would be easy to work with.
The main issue for me is whether the data will change at all once it has been deployed. You would need to think about how that would be handled.
Finally, I'm under the impression that IndexedDB can only handle up to 5 MB, so you will need to store this data in SQLite?
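For what it's worth, a minimal sketch of that flag-check approach in an Ionic (Angular) app might look like the following. It assumes @ionic/storage-angular and a bundled seed file at assets/seed-data.json; the flag name databasePreloaded comes from the forum answer, everything else (file path, function name) is illustrative.

import { Storage } from '@ionic/storage-angular';

// One-time seeding on first launch, guarded by the databasePreloaded flag.
export async function preloadDatabase(storage: Storage): Promise<void> {
  // Ionic Storage v3 must be initialised before use.
  const store = await storage.create();

  // Already seeded on a previous launch? Then do nothing.
  if (await store.get('databasePreloaded')) {
    return;
  }

  // Load the bundled seed file (assumed path); a few MB of JSON shipped in assets is fine.
  const response = await fetch('assets/seed-data.json');
  const seed: Record<string, unknown> = await response.json();

  // Write each top-level entry into storage under its own key.
  for (const [key, value] of Object.entries(seed)) {
    await store.set(key, value);
  }

  await store.set('databasePreloaded', true);
}

If you are worried about browser quotas for 5-6 MB, Ionic Storage can also be configured with a SQLite driver on device, and the same code keeps working.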
I have just published an app that uses Firestore as a backend.
I want to change how the data is structured.
For example, if some documents are stored in subcollections like 'PostsCollection/userId/SubcollectionPosts/postIdDocument', I want to move each of those postIdDocument documents into the top-level collection 'PostsCollection'.
Obviously, doing so would prevent users of the previous app version from writing and reading the right collection, and all data would be lost.
Since I don't know how to approach this issue, I want to ask what the best approach is that big companies also use when changing the data structure of their projects.
So the approach I have used is document versioning. There is an explanation here.
You basically version your documents so that when your app reads them, it knows how to update them to the desired version. So in your case, you would have documents with no version and need to get to version 1, which means reading the sub-collection documents into the top collection and removing the sub-collection before working with the document.
Yes, it is more work, but it allows an iterative approach to document changes. And sometimes a script is written to update everything to the desired state and new code is deployed 😛. That usually happens when someone wants it done yesterday, and with many documents it can have its own issues.
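As an illustration only (not the explanation linked above), the "no version to version 1" step for your example might look roughly like this with the Firebase Admin SDK; the collection names and the schemaVersion field are assumptions taken from the path in your question.

import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

// Copy one user's subcollection posts up into the top-level collection
// and delete the old copies, tagging the new documents with version 1.
async function migrateUserPosts(userId: string): Promise<void> {
  const userDoc = db.collection('PostsCollection').doc(userId);
  const posts = await userDoc.collection('SubcollectionPosts').get();

  // Firestore batches are capped at 500 writes; chunk for bigger users.
  const batch = db.batch();
  posts.forEach((post) => {
    const target = db.collection('PostsCollection').doc(post.id);
    batch.set(target, { ...post.data(), ownerId: userId, schemaVersion: 1 });
    batch.delete(post.ref);
  });
  await batch.commit();
}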
Is there any way to list the kinds that are not being used by our App Engine app in Google's Datastore, without having to look into our code and/or logic? : )
I'm not talking about indexes, which I can list by issuing a
gcloud datastore indexes list
and then compare with the datastore-indexes.xml or index.yaml.
I tried to check datastore kinds statistics and other metadata but I could not find anything useful to help me on this matter.
Should I give up on finding a way for Datastore to give me useful stats, and instead code something to keep collecting Datastore statistics (like data size) over a long period so that I have at least a clue of which kinds are not being used, and only after this research take a look into our app code to see if the kind's model was removed?
Example:
select bytes from __Stat_Kind__
Store it somewhere and keep updating it for a period. If the kind's byte size does not change, then probably the kind is not being used anymore.
The idea is to do some cleaning in datastore.
I would like to find which kinds are not being used anymore, maybe for a long time, or were created manually to be used once... You know, like a table in Oracle that no one knows what it is used for, and when you look into that table's statistics you see it was only used once, five years ago. I'm trying to achieve the same in Datastore: I want to know which kinds are not being used anymore, or were last used a while ago, then ask around and back them up / delete them if no owner is found.
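For reference, here is a rough sketch of what I mean with the Node.js Datastore client (@google-cloud/datastore): snapshot the per-kind byte counts from the built-in __Stat_Kind__ statistics and diff the snapshots over time. Only __Stat_Kind__ and its kind_name/bytes properties come from Datastore's documented statistics entities; everything else is just my naming.

import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

// Return a map of kind name -> total bytes, taken from Datastore's
// built-in statistics entities.
async function snapshotKindSizes(): Promise<Record<string, number>> {
  const [stats] = await datastore.runQuery(datastore.createQuery('__Stat_Kind__'));
  const sizes: Record<string, number> = {};
  for (const stat of stats) {
    // Skip Datastore's own bookkeeping kinds (names starting with "__").
    if (!String(stat.kind_name).startsWith('__')) {
      sizes[stat.kind_name] = stat.bytes;
    }
  }
  return sizes;
}

// Store each snapshot somewhere and diff them periodically; a kind whose
// byte count never changes is a candidate for being unused.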
It's an interesting question.
I think you would be best placed to audit your code and instill an organizational practice that requires this documentation to be maintained in future as a business/technical pre-production requirement.
IIRC, Datastore doesn't automatically timestamp entities, and keys (rightly) aren't incremental. So there appears to be no intrinsic mechanism to track changes short of taking a snapshot (expensive) and comparing your in-flight and backup copies for changes (also expensive and inconclusive).
One challenge with identifying a Kind that appears to be non-changing is that it could be referenced (rarely) by another Kind and so, while it does not change, it is required.
Auditing your code and documenting it for posterity should not only give you a definitive answer (and identify owners), it also pays off a significant technical debt and avoids this problem and probably future ones (e.g. GDPR-like requirements) as well.
Assuming you are referring to records being created/updated, I can think of the following options:
Via the Cloud Console (Datastore > Dashboard) - This lists all your 'Kinds' and the number of records in each Kind. Theoretically, you can take a screen shot and compare the counts so that you know which one has experienced an increase or not.
Use of Created/LastModified Date columns - I usually add these 2 columns to most of my datastore tables. If you have them, then you can have a stored function that queries them. For example, you run a query to sort all of your Kinds in descending order of creation (or last modified date) and you only pull the first record from each one. This tells you the last time a record was created or modified.
I would write a function as part of my App, put it behind a page which requires admin privilege (only app creator can run it) and then just clicking a link on my App would give me the information.
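A hypothetical sketch of that second option with the Node.js Datastore client; the lastModified property name and the list of kinds are assumptions, and the property has to be indexed for the sort to work:

import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

// Fetch the timestamp of the most recently modified entity of a kind, if any.
async function lastTouched(kind: string): Promise<Date | undefined> {
  const query = datastore
    .createQuery(kind)
    .order('lastModified', { descending: true })
    .limit(1);
  const [entities] = await datastore.runQuery(query);
  return entities.length ? entities[0].lastModified : undefined;
}

// Report kinds that have not changed in over a year.
async function reportStaleKinds(kinds: string[]): Promise<void> {
  const cutoff = Date.now() - 365 * 24 * 60 * 60 * 1000;
  for (const kind of kinds) {
    const touched = await lastTouched(kind);
    if (!touched || touched.getTime() < cutoff) {
      console.log(`${kind} looks stale (last modified: ${touched ?? 'never'})`);
    }
  }
}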
If I have User and Profile objects, what is the best way to structure my collections in Firestore, given that the following scenarios can take place?
Users have a single Profile
Users can update their Profile
Users can save other users' profiles
Users can delete their saved profiles
The same profile can't be saved twice
If Users and Profiles are separate collections, what is the best way to store saved profiles?
One way that came to mind was that each user has a subcollection called SavedProfiles. The id of each document is the id of the profile. Each saved profile only contains a reference to the user whose profile it belongs to.
The other option was to do the same thing but store the whole profile of each saved profile.
The benefit of the first approach is that when a user updates their own profile, there's no need to update any of the profiles that have already been saved, since only a reference is stored. However, reading a user's saved profiles (which will happen quite often) may require two read operations: one to get all the references, then one to query for all the profiles with those references (if that's even possible???). This seems quite expensive.
The second approach seems like the right way to go, as it solves the problem of reading all the saved profiles. But updating multiple saved profiles seems like an issue, as each user's set of saved profiles may be unique. I understand that it's possible to do a batch update, but would it be necessary to query each user in the db for their saved profiles, check if the updated profile exists, and if so update it? I'm not too sure which way to go. I'm not very used to NoSQL data structures, and it already seems like I've done something wrong by using a subcollection, since it's advised to keep everything as denormalized as possible, so please let me know if the structure of my whole db is wrong too, which is also quite possible...
Please provide some examples of how to get and update profiles/saved profiles.
Thank you.
Welcome to the conundrum that is designing a NoSQL database. There is no right or wrong answer, here. It's whatever works best for you.
As you have identified, querying will be much easier with your second option. You can easily create a Cloud Function which updates any profiles which have been modified.
Your first option will require multiple gets to the database. It really depends how you plan to scale this and how quick you want your app to run.
Option 1 will be a slower user experience while all of the data is fetched. Option 2 will be a much faster user experience, but will require your Cloud Function to update every saved profile. However, this is a background task, so it wouldn't matter if it takes a few seconds.
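As a rough example of option 2 (a sketch, not a definitive implementation), a Cloud Function can fan a profile edit out to every saved copy with a collection group query. The collection names users and SavedProfiles and the ownerId field are assumptions based on the structure you described.

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

// Whenever a user's profile document changes, update every saved copy of it.
export const syncSavedProfiles = functions.firestore
  .document('users/{userId}')
  .onUpdate(async (change, context) => {
    const profile = change.after.data();
    const userId = context.params.userId;

    // Find every copy saved under any user's SavedProfiles subcollection.
    const copies = await db
      .collectionGroup('SavedProfiles')
      .where('ownerId', '==', userId)
      .get();

    // Batches are capped at 500 writes; chunk if a profile is saved very widely.
    const batch = db.batch();
    copies.forEach((doc) => batch.set(doc.ref, profile, { merge: true }));
    await batch.commit();
  });

Reading a user's saved profiles then stays a single query on users/{uid}/SavedProfiles, since each saved document is a full copy of the profile.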
In my multi-user Meteor application design I want to enable users to create and store their own reactive dashboards to visualize data that they own within the application's database. For example, a user may have an object in the database representing the real-time disk usage of a processor. I want them to be able to submit/store HTML, say, to represent a dynamic dial as their dashboard. Another user may have their own weather station and want a dashboard with a last-24-hours thermometer and pressure trend. When they call up one of their stored dashboards, it is rendered and updates as their data changes.
Can anyone point to example code or explain how to accomplish this? Or, authoritatively explain why it cannot be done in the framework. I have come across various dynamic APIs, e.g. UI.renderWithData and Meteor._def_template, but nothing that fits the bill.
The following topic was very similar to my question; it got me a good start, I figured it out, and I posted an answer there:
How to make meteor evaluate user defined template text
I'm writing a simple WordPress plugin for work and am wondering if using the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple. I'm making a call to USZip Web Service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team is using a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of setting a transient for each zip code as the key and storing the incoming data (city and zip). If the corresponding data for a given zip code already exists, then there is no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that the transient data is stored in the wp_options table, and storing the data would balloon that table in no time. Would this cause a significant performance issue if the db becomes huge?
2. Is it horrible practice to create this many transient keys? It could easily become thousands in a few months' time.
If using transients is not the best way, could you please help point me in the right direction? Thanks!
P.S. I opted for the Transients API vs the Options API. I know zip codes don't change often, but they sometimes do. I set an expiration time of 3 months.
A less-inflated solution would be:
Store a single option called uszip with a serialized array inside the option
Grab the entire array each time and simply check if the zip code exists
If it doesn't exist, grab the data, add it to the array, and save the whole option again
You should make sure you don't hit the upper bound of a serialized array in this table (9,000 elements), considering 43,000 zip codes exist in the US. However, you will most likely have a very localized subset of zip codes.