NgRx: first time an effect is called is much faster than the following calls - ngrx

I'm trying to optimize a page in an application and noticed, using console.time, that the time from dispatching an action to reaching the effect is long (>500ms).
When the page is fully loaded, I have a button that triggers the action (LoadBudgetListRequestAction) again and fetches data from the server to render on the page.
The first time the page loads it takes 27-50ms; the following times I refresh the data using the button, it takes around 600-1000ms.
Code dispatching action:
loadBudgets() {
  console.time('SINCE DISPATCHING UNTIL EFFECT');
  console.time('DISPATCHING LoadBudgetListRequestAction until reducer');
  this.store$.dispatch(
    new BudgetListActions.LoadBudgetListRequestAction(),
  );
}
Code reducer:
case BudgetListActionTypes.BUDGET_LIST_LOAD_REQUEST: {
  console.timeEnd('DISPATCHING LoadBudgetListRequestAction until reducer');
  return {
    ...state,
    isLoading: true,
    error: null,
    isDataAvailable: false
  };
}
Code effect:
@Injectable()
export class BudgetListStoreEffects {
  @Effect()
  loadRequestEffect$ = this.actions$.pipe(
    ofType(
      featureActions.BudgetListActionTypes.BUDGET_LIST_LOAD_REQUEST,
    ),
    switchMap(() => {
      console.timeEnd('SINCE DISPATCHING UNTIL EFFECT');
      // API CALL
    }),
  );

  constructor(
    private budgetApi: BudgetService,
    private actions$: Actions,
    private store$: Store<any>,
  ) {
  }
}
The time from dispatching the action until the reducer is always under 1ms (console.timeEnd('DISPATCHING LoadBudgetListRequestAction until reducer')), but what I don't understand is why it takes so long to reach the effect code.
I have more than one effect because, after retrieving the data from the server, I also have to apply the filters on the page (which are also in the store) to filter the data.
I'm assuming that on the following reloads the store holds much more data than the first time; however, the amount of data I'm retrieving isn't bigger than 1500 records.
Would appreciate any insight.
Thanks

Apparently the issue was the StoreDevtoolsModule. When it is active, it records the store's state so it can be inspected in the Redux DevTools, and this slows the app down.
While that is not a concern in dev environments, it can become an issue if it remains active in the production environment: it gets slower as the amount of data in the store grows.
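A common way to avoid paying that cost in production is to register the DevTools instrumentation only for non-production builds. A minimal sketch, assuming the standard Angular environment files and leaving the rest of the store setup as it already is:

import { NgModule } from '@angular/core';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';
import { environment } from '../environments/environment';

@NgModule({
  imports: [
    // ...StoreModule.forRoot(...), EffectsModule.forRoot(...) as before
    // Only instrument the store for the DevTools extension outside production
    ...(environment.production ? [] : [StoreDevtoolsModule.instrument({ maxAge: 25 })]),
  ],
})
export class AppModule {}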

Cypress not stubbing json data in intercept?

I've been searching for a solution all day, googling and StackOverflowing, but nothing appears to be working.
I've got a very simple NextJS app. On page load, I load a fact from a third party API automatically. Then a user can enter a search query, press enter, and search again based on that query. I want to create a Cypress test that checks for the functionality of that search feature.
Right now, I'm getting a timeout on cy.wait(), and it states that "No request ever occurred".
app.spec.js
import data from '../fixtures/data';

describe('Test search functionality', () => {
  it('renders new fact when search is performed', () => {
    // Visit page
    cy.visit('/');
    // Wait for page to finish loading initial fact
    cy.wait(1000);
    // Intercept call to API
    cy.intercept("GET", `${process.env.NEXT_PUBLIC_API_ENDPOINT}/jokes/search?query=Test`, {
      fixture: "data.json",
    }).as("fetchFact");
    // Type in search input
    cy.get('input').type('Test');
    // Click on search button
    cy.get('.submit-btn').click();
    // Wait for the request to be made
    cy.wait('@fetchFact').its('response.statusCode').should('eq', 200);
    cy.get('p.copy').should('contain', data.result[0].value);
  });
});
One thing I've noticed is that the data being displayed on the page is coming from the actual API response, rather than from the JSON file I'm attempting to stub with. None of the React code is rendered server-side either; this is all client-side.
As you can see, the test is pretty simple, and I feel like I've tried every variation of intercept, changing the order of things, etc. What could be causing this timeout? Why isn't the JSON being stubbed in place of the network request?
And of course, I figured out the issue minutes after posting this question.
I realized that Cypress doesn't like Next's way of handling env variables, and that I instead needed to create a cypress.env.json. I've updated my test to look like this:
import data from '../fixtures/data';

describe('Test search functionality', () => {
  it('renders new fact when search is performed', () => {
    // Visit page
    cy.visit('/');
    // Wait for page to finish loading initial fact
    cy.wait(1000);
    // Intercept call to API
    const url = `${Cypress.env('apiEndpoint')}/jokes/search?query=Test`;
    cy.intercept("GET", url, {
      fixture: "data",
    }).as("fetchFact");
    // Type in search input
    cy.get('input').type('Test');
    // Click on search button
    cy.get('.submit-btn').click();
    // Wait for the request to be made
    cy.wait('@fetchFact').its('response.statusCode').should('eq', 200);
    cy.get('p.copy').should('contain', data.result[0].value);
  });
});
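For reference, cypress.env.json is just a JSON file at the project root, and the key has to match the name passed to Cypress.env(). The URL below is a placeholder, not the real endpoint:

{
  "apiEndpoint": "https://api.example.com"
}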

How to properly handle simultaneous persistence actions in Redux?

I have a React application using Redux, with a combined reducer consisting of appStateReducer and contestReducer. Each of the two takes care of some part of the application data.
When an action is performed, I want not only the respective state to change, but I also want to persistently save the new state, so that if the user reloads the application page in the browser, the state is preserved.
My idea is to add a third reducer that takes care only of save and load actions (for each of the two sub-states separately).
Save and load will use IndexedDB, through the localbase package. All of the db actions (add, get, update, delete) appear to be synchronous, i.e. there seems to be no real need to implement asynchronous actions. UPDATE: this is wrong, it is asynchronous; some basic examples just ignore it.
I am not sure how to handle the problem properly.
I will need a database connection object, a singleton, initialized once after the page is loaded, which should be shared by all save/load actions regardless of which part of the state is to be stored or loaded. That would lead to a separate reducer working only with the db object. If I do this, the db reducer would have to have access to all the other sub-states, which is normally not the case in Redux.
Or, I could implement save and load actions in each reducer separately, which is not a big deal, actually. But how do I make the global db object accessible to the reducers?
It is a React application written in TypeScript and all components are implemented as classes.
You already have access to all of the data if you are using middleware (here a thunk). Example:
export const requestPost = (id) => (dispatch, getState) => {
  // You can keep a "bank" for posts and check if the data exists or not
  const postState = getState().bank.posts.data;
  const found = postState?.find((post) => post.id === id);
  if (found) {
    dispatch({ type: SUCCESS.POST, data: found });
  } else {
    dispatch({ type: REQUEST.POST });
    API.get(`/post/v2?id=${id}`)
      .then((res) => dispatch({ type: SUCCESS.POST, data: res.data[0] }))
      .catch((err) => errorHandler(err, FAILURE.POST));
  }
};
Just make a reducer (or a small helper) for saving the data to the DB, and read it back at startup.
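As a rough sketch of how that could look (this assumes localbase's collection()/doc()/set()/get() API; appStateReducer and contestReducer are the two reducers from the question, and the './reducers' path is just a placeholder): keep the Localbase instance outside the store, since it is infrastructure rather than state, and persist from store.subscribe instead of from a reducer so the reducers stay free of side effects.

import Localbase from 'localbase';
import { createStore, combineReducers } from 'redux';
// The two reducers from the question; the path is a placeholder
import { appStateReducer, contestReducer } from './reducers';

const db = new Localbase('app-db'); // single shared connection, created once

const rootReducer = combineReducers({
  appState: appStateReducer,
  contest: contestReducer,
});

export async function configurePersistedStore() {
  // Loading is asynchronous, even though the basic localbase examples ignore the promise
  const saved = await db.collection('redux').doc({ id: 'root' }).get();
  const store = createStore(rootReducer, saved ? saved.state : undefined);

  // Persist the whole state after every dispatched action (debounce this in a real app)
  store.subscribe(() => {
    db.collection('redux').doc({ id: 'root' }).set({ id: 'root', state: store.getState() });
  });

  return store;
}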

NGRX bulk effect of already defined single effect

So, I'm working on an app with a concept of "Plans", and on each plan you can add a comment. That part works fine, but it seems to fail and get confused if I try to run it in a loop.
The Action:
export class AddComment implements Action {
  readonly type = CommentActionTypes.AddComment;
  constructor(public payload: Comment) {}
}

export class AddCommentSuccess implements Action {
  readonly type = CommentActionTypes.AddCommentSuccess;
  constructor(public payload: Comment) {}
}
Effect
@Effect()
addComment$: Observable<Action> = this.actions$
  .ofType<AddComment>(CommentActionTypes.AddComment).pipe(
    switchMap(action => this.commentService.addComment(this.disciplineType, action.payload)),
    map((comment: any) => new AddCommentSuccess(comment)),
    catchError(err => of(new AddCommentFail(err)))
  );
Implementation
What I'm struggling with is firing this off in rapid succession. I have a situation where I want to add a duplicate comment to multiple plans.
saveSet.forEach(x => {
  comment.plan_id = x.id;
  this.store.dispatch(this.buildAddCommentAction(comment));
});
For reference:
buildAddCommentAction(comment: DisciplineComment): Action {
  return new CommentActions.AddComment(comment);
}
What is Happening
If I have a list of 5 plans and want to add a duplicate comment to all of them, I'm only getting a successful response for the last item in the loop.
Now I know that this is overly chatty; that's 5 separate client/service calls. What I can't figure out is what the prescribed approach to this should be:
1.) A new BulkAddComment action, effect, etc. I'm loath to do this because I have Comments, Concerns (similar in function and need), and one of each for every "discipline". That'd be about 36 new effects and twice that in actions. A serious refactor is needed.
2.) Modify the actions and effects to handle 1 or multiple
3.) ?
Thanks for the input
This is because you're using the switchMap operator, which cancels the currently running inner observable, in your case the service call.
You'll have to use concatMap or mergeMap: if the order is important, use concatMap; if not, use mergeMap, because it will make your service calls in parallel.
For more info, watch this.
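As a sketch of that change, keeping the decorator-based effect syntax from the question (and moving map/catchError into the inner pipe, so one failed call errors only that request instead of completing the whole effect stream):

@Effect()
addComment$: Observable<Action> = this.actions$
  .ofType<AddComment>(CommentActionTypes.AddComment).pipe(
    // concatMap queues the service calls instead of cancelling the previous one
    concatMap(action =>
      this.commentService.addComment(this.disciplineType, action.payload).pipe(
        map((comment: any) => new AddCommentSuccess(comment)),
        catchError(err => of(new AddCommentFail(err)))
      )
    )
  );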

Firebase React Native fetch data

I am trying to build an app in react-native with a feed. On my main screen, I fetch the data:
fetchData() {
  firebase.database().ref(`/posts/${group}`).on('value', async snapshot => { ... });
}
When I want, for example, to like a comment on a post, I first push the data into Firebase with different queries, for example:
export const likeComment = (...) => {
  firebase.database().ref(`/posts/${group}/${post}`).update({
    updatedAt: firebase.database.ServerValue.TIMESTAMP
  });
  firebase.database().ref(`/posts/${group}/${post}/lastComments`).set(...);
};
But I realized that my fetchData function was being called 3 times.
Then I grouped my queries like this:
let updates = {}
updates[`/posts/${group}/${post}/lastComments`] = {...};
updates[`/posts/${group}/${post}`] = { ... };
firebase.database().ref().update(updates);
Then fetchData was still called 2 times.
I was wondering if that is the best way to do it, and why my fetchData function is still being called twice.
Thanks for your help
It's hard to tell from the limited code, but my guess is that fetchData is not actually being called more than once; instead, the snapshot listener is firing when you make updates to your Firebase database.
In the part of your code where you write .on('value', async snapshot => ...), you're setting up a listener that sends you updates any time that value changes.
So my guess is that your three invocations are:
1. The first time you actually call fetchData and set up the listener
2. Your .on( snapshot listener firing when you call update
3. Your .on( snapshot listener firing again when you call set
This push-based database workflow is one of the main benefits of Firebase. If you only want to get the data once and not receive live updates, you can call .once('value') instead of .on('value').
https://firebase.google.com/docs/database/web/read-and-write
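For example, a one-time read of the same ref would look roughly like this (the ref path is the one from the question, and the rendering part is elided):

// Read the data a single time instead of subscribing to every change
firebase.database().ref(`/posts/${group}`).once('value').then(snapshot => {
  // ...render snapshot.val()
});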

"Thread safety" in Redux?

Let's pretend I have a long-running function working on computing my new state.
Meanwhile another action comes in and changes the state while the first one did not finish and is working on stuff.
If I am imagining things correctly there is no actions queue and the state might be resolved in some unpredictable manner.
Should I be worried about this at all?
I don't mean real threads, just a concept for the lack of better wording. Actions are asynchronous and state keys are being accessed by reference.
I was concerned about the same thing, so I did some digging. It looks like two threads concurrently calling dispatch() (if that were possible) could raise an exception, but it shouldn't be possible, and that error message points to a particular, different cause. The "actions queue" is the browser's own event loop: that event loop runs async/interaction callbacks (from which we call dispatch()) one at a time.
That's the responsibility of your own action creators and your own reducers, and it is heavily related to how you structure your actions and reducers conceptually. The Redux FAQ question on structuring "business logic" is very relevant here: Redux FAQ.
Thunk action creators have access to getState, so it's very common to have a thunk check the current state and only dispatch under certain conditions, such as this example:
// An example of conditional dispatching based on state
const MAX_TODOS = 5;

function addTodosIfAllowed(todoText) {
  return (dispatch, getState) => {
    const state = getState();
    if (state.todos.length < MAX_TODOS) {
      dispatch({ type: "ADD_TODO", text: todoText });
    }
  };
}
Your reducer can also have sanity checks as well:
function todosReducer(state, action) {
  switch (action.type) {
    case "ADD_TODO": {
      if (state.todos.length >= state.maxTodos) {
        return state;
      }
      return {
        ...state,
        todos: state.todos.concat(action.newTodo)
      };
    }
    default:
      return state;
  }
}
Personally, I don't like to have my reducers just blindly merge in whatever data's in the action, unless it's very small (like, say, the name of the currently selected tab or something). I prefer to have a reasonable amount of logic in my action creator to set up the action, a minimal-ish amount of data included in the action itself, and a sufficiently smart reducer to do the work based on that action.
