Setting up default preloaded data in tests with RTK Query - redux

I am migrating certain reducers written with the duck pattern to RTK and RTK Query.
As part of that I am moving data out of the reducer and into a common createApi call in a file.
There are some tests written for the older redux code where, instead of calling an api endpoint, the data is set directly in the redux store, passed as initialData.
import React from 'react'
import { Provider } from 'react-redux'
import store from 'singleton-store'

// initialData used to be preloaded straight into the plain redux store
function renderApp(children, initialData) {
  return <Provider store={store}>{children}</Provider>
}
// write a test using renderApp to test a redux-connected component
But now, since I am using RTK Query, I cannot set this data in the older redux store.
It can only be set once the api is called and the data is stored via the createApi call.
How can I prefill api data in the store instead of calling an api, as part of running my test suite with React Testing Library?
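One option, sketched here as an assumption rather than a confirmed recipe: RTK Query ships api.util.upsertQueryData(endpointName, arg, value), a thunk you can dispatch during test setup to write data straight into the query cache as if that endpoint had already resolved. The api import and the getItems endpoint below are placeholder names, not part of the code above.

import React from 'react'
import { render } from '@testing-library/react'
import { Provider } from 'react-redux'
import { configureStore } from '@reduxjs/toolkit'
// "api" is a placeholder for your createApi slice; "getItems" for one of its endpoints
import { api } from './services/api'

function renderApp(children, initialData) {
  // Build a fresh store per test so prefilled cache data does not leak between tests
  const store = configureStore({
    reducer: { [api.reducerPath]: api.reducer },
    middleware: (getDefault) => getDefault().concat(api.middleware),
  })
  // Prefill the cache as if getItems(undefined) had already been fetched
  store.dispatch(api.util.upsertQueryData('getItems', undefined, initialData))
  return render(<Provider store={store}>{children}</Provider>)
}

upsertQueryData was added in RTK 1.9; on older versions the usual fallback is to stub the network layer (for example with msw) so the real endpoint resolves with the fixture data.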
I saw this issue and what I want to do is exactly this:

Related

Get the same object from a service into several components in Vue.js

Here is the context:
On a Vue project I have a service, workspaces.js, that returns the current Workspace for my user.
I have to use this workspace from several components (header, main content, menu, etc.).
This object is bound to Firestore with the onSnapshot API.
This object is also reactive, using Vue's reactive API.
In Vuex I use the vuex-persist package for persistence on reload.
I tried to save the current workspace in the Vuex store's state, but with the different tools involved (binding, persist, reactive) this doesn't work properly and data binding is not always working.
Now I want to save only the id of the workspace in Vuex and get the current Workspace from my service.
However, to avoid multiple requests to Firestore, I want to use the same object in each component that needs the workspace. I need a kind of singleton.
I thought about exporting a variable from the service that stores the current Workspace. I wonder whether this is reliable; I don't know exactly how imports work. Will it be the same instance of my service in each component that uses it? Is there a risk that one component gets the workspace and another one gets null?
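For reference, ES modules are evaluated once and cached, so every component that imports the service shares the same module instance. A minimal sketch of that singleton idea, with illustrative names (currentWorkspace, bindWorkspace) and assuming the Firebase v9 modular SDK:

// workspaces.js - evaluated once, so the exported ref is shared by every importer
import { ref } from 'vue'
import { doc, onSnapshot } from 'firebase/firestore'
import { db } from './firebase' // assumed initialized Firestore instance

export const currentWorkspace = ref(null)

export function bindWorkspace(workspaceId) {
  // onSnapshot keeps the shared ref up to date; every component reading
  // currentWorkspace sees the same reactive object
  return onSnapshot(doc(db, 'workspaces', workspaceId), (snapshot) => {
    currentWorkspace.value = { id: snapshot.id, ...snapshot.data() }
  })
}

A component can still see null briefly before the first snapshot arrives, so templates reading currentWorkspace should guard against that.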

NgRx How to launch a feature module from another module

I have three modules: AppModule, ModuleX, ModuleY.
In ModuleX I fetch and store data with NgRx, but it only does so if I route to one of ModuleX's components and trigger its OnInit lifecycle hook. I need to access that data in ModuleY.
Is there any way to do that? Can I globally access data stored in the state from anywhere?
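A hedged sketch of the usual answer: NgRx has a single global store, so a selector written for ModuleX's feature slice can be used from ModuleY too, provided the feature state is registered (StoreModule.forFeature) and the load action has actually been dispatched somewhere other than ModuleX's OnInit (for example from an effect). The 'moduleX' feature key and the data property below are assumptions:

// module-x.selectors.js - selectors read from the one global store, so any module can use them
import { createFeatureSelector, createSelector } from '@ngrx/store';

export const selectModuleXState = createFeatureSelector('moduleX');
export const selectModuleXData = createSelector(
  selectModuleXState,
  (state) => state.data
);

// In a ModuleY component: this.store.select(selectModuleXData).subscribe(...)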

Firebase Realtime Database is not generating dynamic Id for Imported Data

Firebase Realtime Database is not generating a dynamic id for data imported via a JSON file, but it generates ids for data posted via the frontend using a POST request. For example, the cartItemess object has a dynamically generated id, "MOsH2-zPBHMM06f1dNI", but the init data below, which I imported manually using the Import option, never got any id.
That's expected. If you use the import function in the Firebase console, it simply imports all the data exactly as it appears in the JSON file. It doesn't generate any extra data or do any manipulation of it. This is different from POST calls to the REST API or calls to push() in the client SDKs, which generate a random id.
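For comparison, a small sketch of how a client-side push() produces such a key, using the Firebase v9 modular Realtime Database SDK; the cartItemess path mirrors the question, and the payload is invented:

import { getDatabase, ref, push } from 'firebase/database'

const db = getDatabase()
// push() creates a new child location with an auto-generated key (the "-M..." style id)
const newItemRef = push(ref(db, 'cartItemess'), { name: 'example', qty: 1 })
console.log(newItemRef.key)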

Restore previously saved state into redux

We're using redux and immutable objects in our redux store.
The scenario is that a user might dump the current store state into a database and later be able to restore it.
I'm a newbie to redux.
Are there any keywords to search for this kind of technique?
We will try to dump the state into JSON format and reload it from the database.
The key word is "persistence". There are dozens of existing libraries for persisting Redux state already - you can either try using them as-is, or look at how they work and implement some of the approaches yourself.
To actually persist the state, you'd normally either do it in a store subscription callback, or in a middleware. Then, as part of your app's setup process, retrieve the persisted state (from the server or localStorage or wherever you persisted it), and pass it as the second argument to createStore(rootReducer, preloadedState).
I have used window.localStorage for this.
const MyReducer = (state, action) => {
  switch (action.type) {
    // ...other cases
    case 'SAVE_STATE': {
      // Serialize the Immutable state and stash it in localStorage
      const stateString = JSON.stringify(state.toJS())
      window.localStorage.setItem('applicationState', stateString)
      return state
    }
    default:
      return state
  }
}
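To round out the answer, a minimal sketch of the load side, assuming a rootReducer module and the same 'applicationState' key; since the store above holds Immutable.js objects, the parsed JSON would still need to be rehydrated (for example with Immutable's fromJS) before or inside the reducers:

import { createStore } from 'redux'
import rootReducer from './reducers'

// Read whatever was persisted earlier; fall back to undefined so reducers use their defaults
const saved = window.localStorage.getItem('applicationState')
const preloadedState = saved ? JSON.parse(saved) : undefined

const store = createStore(rootReducer, preloadedState)

// Or persist on every change via a store subscription instead of a SAVE_STATE action
store.subscribe(() => {
  window.localStorage.setItem('applicationState', JSON.stringify(store.getState()))
})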

Import large data (json) into Firebase periodically

We are in the situation that we will have to update large amounts of data (ca. 5 million records) in Firebase periodically. At the moment we have a few JSON files that are around ~1 GB in size.
Since existing third-party solutions (here and here) have some reliability issues (importing object per object, or needing an open connection) and are quite disconnected from the Google Cloud Platform ecosystem, I wonder if there is now an "official" way, e.g. using the new Google Cloud Functions? Or a combination with App Engine / Google Cloud Storage / Google Cloud Datastore.
I would really like to avoid dealing with authentication, something that Cloud Functions seems to handle well, but I assume the function would time out (?)
With the new Firebase tooling available, how to:
Have long-running Cloud Functions to do data fetching / inserts? (Does it make sense?)
Get the JSON files into and out of somewhere inside the Google Cloud Platform?
Does it make sense to first throw the large data into Google Cloud Datastore (i.e. if it is too expensive ($$$) to store in Firebase), or can the Firebase Realtime Database be reliably treated as large-scale data storage?
I am finally posting the answer, as it aligns with the new Google Cloud Platform tooling of 2017.
The newly introduced Google Cloud Functions have a limited run time of approximately 9 minutes (540 seconds). However, Cloud Functions are able to create a Node.js read stream from Cloud Storage like so (@google-cloud/storage on npm):
var gcs = require('@google-cloud/storage')({
  // You don't need extra authentication when running the function
  // online in the same project
  projectId: 'grape-spaceship-123',
  keyFilename: '/path/to/keyfile.json'
});

// Reference an existing bucket.
var bucket = gcs.bucket('json-upload-bucket');
var remoteReadStream = bucket.file('superlarge.json').createReadStream();
Even though it is a remote stream, it is highly efficient. In tests I was able to parse JSON files larger than 3 GB in under 4 minutes, doing simple JSON transformations.
Since we are working with Node.js streams, the JSONStream library can efficiently transform the data on the fly (JSONStream on npm), handling the data asynchronously like a large array with event streams (event-stream on npm):
var JSONStream = require('JSONStream')
var es = require('event-stream')

remoteReadStream.pipe(JSONStream.parse('objects.*'))
  .pipe(es.map(function (data, callback) {
    console.error(data)
    // Insert data into Firebase here.
    callback(null, data) // ! Return data if you want to make further transformations.
  }))
At the end of the pipe, return only null in the callback (no data) to prevent a memory leak from blocking the whole function.
If you do heavier transformations that require a longer run time, either use a "job db" in Firebase to track where you are, do only, say, 100,000 transformations per invocation and call the function again, or set up an additional function that listens for inserts into a "forimport db" and asynchronously transforms the raw JSON records into your target format for the production system. This splits import from computation.
Additionally, you can run Cloud Functions code in a Node.js App Engine app, but not necessarily the other way around.
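To illustrate the "return only null" advice together with the Firebase insert step, here is a hedged variant of the map stage; remoteReadStream is the Cloud Storage stream from the snippet above, while the records.* path and the imports ref are placeholders:

var JSONStream = require('JSONStream')
var es = require('event-stream')
var admin = require('firebase-admin')
admin.initializeApp() // default credentials are available inside Cloud Functions

remoteReadStream
  .pipe(JSONStream.parse('records.*'))
  .pipe(es.map(function (record, callback) {
    admin.database().ref('imports').push(record)
      .then(
        function () {
          // Calling back with null and no data drops the record from the stream,
          // so processed objects are not buffered downstream
          callback(null)
        },
        function (err) { callback(err) }
      )
  }))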
