How to structurally compare the previous and new value of nested objects that are being used in `watch`, in Options API? - vuejs3

I have a question that mixes both the Composition API and the Options API.
What I want to do: I want to watch an object. That object is deeply nested and contains all kinds of data types.
Whenever any of the nested properties inside it changes, I want the watcher to be triggered.
(This can be done using the deep: true option.)
AND I want to be able to see the previous value and the current value of the object.
(This doesn't seem to be possible, because Vue stores references to the objects, so value and prevValue end up pointing to the same thing.)
The Vue 3 docs say this about the watch API:
However, watching a reactive object or array will always return a reference to the
current value of that object for both the current and previous value of the state.
To fully watch deeply nested objects and arrays, a deep copy of values may be required.
This can be achieved with a utility such as lodash.cloneDeep
And the following example is given:
import _ from 'lodash'

const state = reactive({
  id: 1,
  attributes: {
    name: ''
  }
})

watch(
  () => _.cloneDeep(state),
  (state, prevState) => {
    console.log(state.attributes.name, prevState.attributes.name)
  }
)

state.attributes.name = 'Alex' // Logs: "Alex" ""
Link to docs here - https://v3.vuejs.org/guide/reactivity-computed-watchers.html#watching-reactive-objects
However, this is the Composition API (if I'm not wrong).
How do I use this cloneDeep approach in a watch defined with the Options API?
As an example, this is my code
watch: {
  items: {
    handler(value, prevValue) {
      // check if value and prevValue are STRUCTURALLY EQUAL
      let isEqual = this.checkIfStructurallyEqual(value, prevValue)
      if (isEqual) return
      else this.doSomething()
    },
    deep: true,
  },
}
I'm using Vue 3 with Options API.
How would I go about doing this in Options API?
Any help would be appreciated! If there's another way of doing this then please do let me know!

I also asked this question on the Vue forums and it was answered.
We can use the same approach as the docs in the Options API by using this.$watch():
data() {
  return {
    id: 1,
    attributes: {
      name: ''
    }
  }
},
created() {
  // register the watcher imperatively so the source can be a cloneDeep getter
  this.$watch(
    () => _.cloneDeep(this.attributes),
    (state, prevState) => {
      console.log(state.name, prevState.name)
    }
  )
}

this.attributes.name = 'Alex' // Logs: "Alex" ""
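Another option, not from the forum answer but a sketch of the same idea kept inside the component options: watch a computed property that returns a deep clone. Because the clone visits every nested property, the computed re-evaluates on deep changes, and the watcher receives two independent snapshots it can actually compare. The items, doSomething, and structural check (via lodash's isEqual) below are placeholders based on the question.

import _ from 'lodash'

export default {
  data() {
    return {
      items: [{ id: 1, attributes: { name: '' } }]
    }
  },
  computed: {
    // cloning here registers every nested property as a reactive dependency
    itemsSnapshot() {
      return _.cloneDeep(this.items)
    }
  },
  watch: {
    itemsSnapshot(value, prevValue) {
      // value and prevValue are distinct clones, so a structural comparison works
      if (_.isEqual(value, prevValue)) return
      this.doSomething(value, prevValue)
    }
  },
  methods: {
    doSomething(value, prevValue) {
      console.log('items changed', value, prevValue)
    }
  }
}

The trade-off is that every reactive change deep-clones the whole object, which can get expensive for large structures.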

Related

Updating normalised data without causing more re-renders outside of the state slice that has been updated

I have some normalised data (items) within my redux store:
{
  items: {
    index: ['a', 'b'],
    dict: {
      a: {
        title: "red",
      },
      b: {
        title: "car",
      }
    }
  },
  ...
}
So, if I want to update anything within an item object, the reducer looks like this:
...
const itemsReducer = (state = initialState.items, action) => {
  switch (action.type) {
    case itemsActions.types.UPDATE_ITEM: {
      return {
        ...state,
        [action.payload.itemId]: {
          title: action.payload.title,
        }
      }
    }
    default: return state;
  }
};
But this technique creates a new object for items, which can cause components to re-render unnecessarily, when really only the components that subscribe to state changes of the individual object should re-render.
Is there any way to get around this?
That is how immutable updates are required to work - you must create copies of every level of nesting that needs to be updated.
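For the dict-based shape in the question, "copies of every level of nesting" would look roughly like this (a sketch; only the levels along the update path get new references, and unchanged items keep theirs):

case itemsActions.types.UPDATE_ITEM: {
  const { itemId, title } = action.payload;
  return {
    ...state,                    // copy the items level
    dict: {
      ...state.dict,             // copy the dict level
      [itemId]: {
        ...state.dict[itemId],   // copy the one item being changed
        title,
      }
    }
  }
}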
In general, components should extract the smallest amount of data that they need from the store, to help minimize the chance of unnecessary re-renders. For example, most of the time a component probably shouldn't be reading the entire state.items slice.
FWIW, it looks like you're hand-writing your reducer logic. You should be using our official Redux Toolkit package to write your Redux logic in general. RTK also specifically has a createEntityAdapter API that will do most typical normalized state updates for you, so you don't have to write reducer logic by hand.
I'll also note that the recently released Reselect 4.1 version has new options you can use for customizing memoized selectors as well.
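As a rough sketch of what the createEntityAdapter approach might look like for the items slice above (the payload field names are assumed from the question's action, and note the adapter stores items as { ids, entities } rather than the index/dict shape shown earlier):

import { createEntityAdapter, createSlice } from '@reduxjs/toolkit'

const itemsAdapter = createEntityAdapter()

const itemsSlice = createSlice({
  name: 'items',
  initialState: itemsAdapter.getInitialState(),
  reducers: {
    // updateOne writes the nested immutable update for you
    itemUpdated(state, action) {
      itemsAdapter.updateOne(state, {
        id: action.payload.itemId,
        changes: { title: action.payload.title }
      })
    }
  }
})

export const { itemUpdated } = itemsSlice.actions
export default itemsSlice.reducer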

React Redux accessing dynamically filtered state in mapStateToProps - rerendering woes

I have a functional component that is passed instructions on what to pull from the redux store.
Using mapStateToProps = (state, ownProps), I can happily pull the required items from the state (store), but at the cost of any change anywhere in the state tree re-running mapStateToProps and triggering a gazillion re-renders.
Let me unpack.
Here's a snapshot of part of the store:
{
  settings: { ...stuff... },
  projects: [ ...stuff... ],
  definitions: [ ...stuff... ],
  themes: [ ...stuff... ],
  surfaces: {
    '6': {                      // <--- VARIABLE PASSED TO COMPONENT
      surface: {
        STRIP: [ ..stuff.. ],
        GLOBAL: {               // <--- CATEGORY PASSED TO COMPONENT
          DISPLAY: { ...stuff... },
          ASSIGNMENT: {         // <--- LIST OF REQUIRED OBJECTS HAS
            A_TRACK: {          //      SUBCATEGORY AND TARGET (A_TRACK etc...)
              value: 0,
              type: 'switch',
              label: 'TRACK'
            },
            A_SEND: {           // <--- ANOTHER OBJECT I NEED TO GET
              value: 0,
              type: 'switch',
              label: 'SEND'
            },
            A_PAN: {
              value: 0,
              type: 'switch',
              label: 'PAN'
            },
          },
          FADER_BANKS: { ...stuff... },
          STATUS: { ...stuff... },
          LOTS_MORE_STUFF
My parent component passes the required instructions to the child.
<RefMixerGroup
  portId={this.props.portId}
  items={[
    { parent: 'GLOBAL', group: "ASSIGNMENT", target: "A_TRACK" },
    { parent: 'GLOBAL', group: "ASSIGNMENT", target: "A_SEND" },
  ]}
/>
mapStateToProps is pretty simple:
const mapStateToProps = (state, ownProps) => {
  return {
    groupItems: getItemsFromState(state.surfaces[ownProps.portId].surface, ownProps.items)
  }
}
and the work is done in a simple function:
const getItemsFromState = (subState, items) => {
  let groupItems = []
  for (let i = 0; i < items.length; i++) {
    const item = items[i];
    const base = subState[item.parent];
    let groupItem = base[item.group][item.target]
    groupItems.push({ ...groupItem, target: item.target })
  }
  return groupItems
}
But because I am creating this array of matches, I think redux thinks I should be subscribing to every item in the tree...when I only want changes on the found elements, in this case:
surfaces[6].surface[GLOBAL][ASSIGNMENT][A_TRACK]
surfaces[6].surface[GLOBAL][ASSIGNMENT][A_SEND]
I tried using reselect and re-reselect instead of my getItemsFromState function above,
but all with the same result: any change in that tree, starting at surfaces[6], triggers mapStateToProps and a re-render.
There must be a way around this, but I can't figure it out. I tried using areStatesEqual, but it only provides nextState and prevState, and I need ownProps to compute equality. I could possibly use areStatePropsEqual, but that only works AFTER recomputing mapStateToProps unnecessarily.
There must be a way!
getItemsFromState is creating a new groupItems array reference every time it runs. It will be called after every dispatched action. Since connect re-renders any time any of the fields returned by mapState have changed to a new reference, your code is forcing React-Redux to re-render every time.
This is specifically why you should use memoized selectors to only return new derived data references if the input references have changed, typically with Reselect's createSelector. If your use of Reselect isn't helping here, it's likely that your selectors aren't being set up correctly, but I'd need to see specific examples to give advice there.
It's also why components should subscribe to the smallest amount of data that they actually need.
If you are using a function component, I'd suggest using useSelector instead of connect as well.
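As a sketch of the memoized-selector setup being described (it assumes the RefMixerGroup, portId, and items props from the question, and uses a selector factory so each component instance gets its own memoization; the input selectors here still read the whole surface object, so narrowing what they return is what ultimately limits recomputation):

import { connect } from 'react-redux'
import { createSelector } from 'reselect'

// factory: each connected instance gets its own memoized selector
const makeGetGroupItems = () =>
  createSelector(
    [
      (state, ownProps) => state.surfaces[ownProps.portId].surface,
      (state, ownProps) => ownProps.items
    ],
    (surface, items) =>
      items.map(item => ({
        ...surface[item.parent][item.group][item.target],
        target: item.target
      }))
  )

const makeMapStateToProps = () => {
  const getGroupItems = makeGetGroupItems()
  return (state, ownProps) => ({
    groupItems: getGroupItems(state, ownProps)
  })
}

export default connect(makeMapStateToProps)(RefMixerGroup)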

How do I return an entire paged set from the Jira API using Ramda?

I'm using the Node.js library for talking to Jira called jira-connector. I can get all of the boards on my Jira instance by calling:
jira.board.getAllBoards({ type: "scrum" })
  .then(boards => { ...not important stuff... })
the return set looks something like the following:
{
  maxResults: 50,
  startAt: 0,
  isLast: false,
  values: [ { id: ... } ]
}
Then, while isLast === false, I keep calling like so:
jira.board.getAllBoards({ type: "scrum", startAt: XXX })
until isLast is true. Then I can organize all of my returns from the promises and be done with it.
I'm trying to reason out how I can get all of the paged data with Ramda. I have a feeling it's possible, I just can't seem to sort out the how of it.
Any help? Is this possible using Ramda?
Here's my Rx attempt to make this better:
const pagedCalls = new Subject();
pagedCalls.subscribe(value => {
  jira.board.getAllBoards({ type: "scrum", startAt: value })
    .then(boards => {
      console.log('calling: ' + value);
      allBoards.push(boards.values);
      if (boards.isLast) {
        pagedCalls.complete()
      } else {
        pagedCalls.next(boards.startAt + 50);
      }
    });
})
pagedCalls.next(0);
Seems pretty terrible. Here's the simplest solution I have so far with a do/while loop:
let returnResult = [];
let result;
let startAt = -50;
do {
  result = await jira.board.getAllBoards({ type: "scrum", startAt: startAt += 50 })
  returnResult.push(result.values); // there's an array of results under the values prop.
} while (!result.isLast)
Many of the interactions with Jira use this model and I am trying to avoid writing this kind of loop every time I make a call.
I had to do something similar today, calling the Gitlab API repeatedly until I had retrieved the entire folder/file structure of the project. I did it with a recursive call inside a .then, and it seems to work all right. I have not tried to convert the code to handle your case.
Here's what I wrote, if it will help:
const getAll = (project, perPage = 10, page = 1, res = []) =>
  fetch(`https://gitlab.com/api/v4/projects/${encodeURIComponent(project)}/repository/tree?recursive=true&per_page=${perPage}&page=${page}`)
    .then(resp => resp.json())
    .then(xs => xs.length < perPage
      ? res.concat(xs)
      : getAll(project, perPage, page + 1, res.concat(xs))
    )

getAll('gitlab-examples/nodejs')
  .then(console.log)
  .catch(console.warn)
The technique is pretty simple: Our function accepts whatever parameters are necessary to be able to fetch a particular page and an additional one to hold the results, defaulting it to an empty array. We make the asynchronous call to fetch the page, and in the then, we use the result to see if we need to make another call. If we do, we call the function again, passing in the other parameters needed, the incremented page number, and the merge of the current results and the ones just received. If we don't need to make another call, then we just return that merged list.
Here, the repository contains 21 files and folders. Calling for ten at a time, we make three fetches and when the third one is complete, we resolve our returned Promise with that list of 21 items.
This recursive method definitely feels more functional than your versions above. There is no assignment except for the parameter defaulting, and nothing is mutated along the way.
I think it should be relatively easy to adapt this to your needs.
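Adapted to the jira-connector call from the question, that recursion might look something like this (a sketch only; it assumes getAllBoards resolves with the { values, startAt, isLast } shape shown above):

const getAllScrumBoards = (startAt = 0, acc = []) =>
  jira.board.getAllBoards({ type: 'scrum', startAt })
    .then(page => {
      const boards = acc.concat(page.values)
      return page.isLast
        ? boards
        // ask for the next page, starting right after the last item received
        : getAllScrumBoards(page.startAt + page.values.length, boards)
    })

getAllScrumBoards()
  .then(allBoards => console.log(allBoards.length))
  .catch(console.warn)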
Here is a way to get all the boards using rubico:
import { pipe, fork, switchCase, get } from 'rubico'

const getAllBoards = boards => pipe([
  fork({
    type: () => 'scrum',
    startAt: get('startAt'),
  }),
  jira.board.getAllBoards,
  switchCase([
    get('isLast'),
    response => boards.concat(response.values),
    response => getAllBoards(boards.concat(response.values))({
      startAt: response.startAt + response.values.length,
    }),
  ]),
])

getAllBoards([])({ startAt: 0 }) // => [...boards]
getAllBoards will recursively get more boards and append to boards until isLast is true, then it will return the aggregated boards.

How to implement redux-search

I am trying to implement a search filter in my application which uses react/redux using redux-search. The first gotcha I get is when I try to add the store enhancer as in the example.
// Compose :reduxSearch with other store enhancers
const enhancer = compose(
  applyMiddleware(...yourMiddleware),
  reduxSearch({
    // Configure redux-search by telling it which resources to index for searching
    resourceIndexes: {
      // In this example Books will be searchable by :title and :author
      books: ['author', 'title']
    },
    // This selector is responsible for returning each collection of searchable resources
    resourceSelector: (resourceName, state) => {
      // In our example, all resources are stored in the state under a :resources Map
      // For example "books" are stored under state.resources.books
      return state.resources.get(resourceName)
    }
  })
)
I understand everything up to the resourceSelector. I tried to take a deep dive into the example to see how it works, but I can barely see how the resources are generated, and the last line returns an error: Cannot read property 'get' of undefined.
My state object looks like this
state: {
  // books is an array of objects... each object represents a book
  books: [
    // a book has these properties
    { name, id, author, datePublished }
  ]
}
Any help from anyone who understands redux-search would be appreciated.
If this line:
return state.resources.get(resourceName)
Is causing this error:
Cannot read property 'get' of undefined
That indicates that state.resources is not defined. And sure enough, your state doesn't define a resources attribute.
The examples were written with the idea in mind of using redux-search to index many types of resources, eg:
state: {
  resources: {
    books: [...],
    authors: [...],
    // etc
  }
}
The solution to the issue you've reported would be to either:
A: Add an intermediary resources object (if you think you might want to index other things in the future and you like that organization).
B: Replace state.resources.get(resourceName) with state[resourceName] or similar.
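With option B, the enhancer from the question might look like this (a sketch against the state shape shown in the question, indexing books by the fields they actually have):

const enhancer = compose(
  applyMiddleware(...yourMiddleware),
  reduxSearch({
    resourceIndexes: {
      // the question's book objects have name and author
      books: ['author', 'name']
    },
    // books live at the top level of the state, so read them from there
    // instead of a state.resources Map
    resourceSelector: (resourceName, state) => state[resourceName]
  })
)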

How to get multiple objects in list at a point in time

I want to provide my users with an API (pointing to my server) that will fetch data from Firebase and return it to them. I want it to be a 'normal' point-in-time request (as opposed to streaming).
My data is 'boxes' within 'projects'. A user can query my API to get all boxes for a project.
My data is normalised, so I will look up the project and get a list of keys of boxes in that project, then go get each box record individually. Once I have them all, I will return the array to the user.
My question: what is the best way to do this?
Here's what I have, and it works. But it feels so hacky.
const projectId = req.params.projectId; // this is passed in by the user in their call to my server.
const boxes = [];
let totalBoxCount = 0;
let fetchedBoxCount = 0;

const projectBoxesRef = db
  .child('data/projects')
  .child(projectId)
  .child('boxes'); // a list of box keys

function getBox(boxSnapshot) {
  totalBoxCount++;
  db
    .child('data/boxes') // a list of box objects
    .child(boxSnapshot.key())
    .once('value')
    .then(boxSnapshot => {
      boxes.push(boxSnapshot.val());
      fetchedBoxCount++;
      if (fetchedBoxCount === totalBoxCount) {
        res.json(boxes); // leap of faith that getBox() has been called for all boxes
      }
    });
}

projectBoxesRef.on('child_added', getBox);

// 'value' fires after all initial 'child_added' things are done
projectBoxesRef.once('value', () => {
  projectBoxesRef.off('child_added', getBox);
});
There are some other questions/answers on separating the initial set of child_added objects, and they have influenced my current decision, but they don't seem to relate directly.
Thanks a truck-load for any help.
Update: JavaScript version of Jay's answer below:
db
  .child('data/boxes')
  .orderByChild(`projects/${projectId}`)
  .equalTo(true)
  .once('value', boxSnapshot => {
    const result = // some parsing of response
    res.json(result);
  });
This may be too simple a solution, but suppose you have projects, and each project has boxes.
Your projects node:
projects
  project_01
    boxes
      box_id_7: true
      box_id_9: true
      box_id_34: true
  project_37
    boxes
      box_id_7: true
      box_id_14: true
      box_id_42: true
and the boxes node
boxes
  box_id_7
    name: "a 3D box"
    shape: "Parallelepiped"
    belongs_to_project
      project_01: true
  box_id_14
    name: "I have unequal lenghts"
    shape: "Rhumboid"
    belongs_to_project
      project_37: true
  box_id_34
    name: "Kinda like a box but with rectangles"
    shape: "cuboid"
    belongs_to_project
      project_01: true
With that, just one (deep) query on the boxes node will load all of the boxes that belong to project_01, which in this case are box_id_7 and box_id_34.
You could go the other way: since you know the box ids for each project in the projects node, you could do a series of observers to load each box via its specific path, /boxes/box_id_7 etc. I like the query better; faster and less bandwidth.
You could expand on this if a box can belong to multiple projects:
box_id_14
  name: "I have unequal lenghts"
  shape: "Rhumboid"
  belongs_to_project
    project_01: true
    project_37: true
Now a query on the boxes node for all boxes that are part of project_01 will get box_id_7, box_id_14 and box_id_34.
Edit:
Once that structure is in place, use a Deep Query to then get the boxes that belong to the project in question.
For example: suppose you want to craft a Firebase Deep Query to return all boxes where the box's belongs_to_project list contains an item with key "project_37"
boxesRef.queryOrderedByChild("belongs_to_project/project_37")
  .queryEqualToValue(true)
  .observeSingleEventOfType(.Value, withBlock: { snapshot in
    print(snapshot)
  })
OK I think I'm happy with my approach, using Promise.all to respond once all the individual 'queries' are returned:
I've changed my approach to use promises, then call Promise.all() to indicate that all the data is ready to send.
const projectId = req.params.projectId;
const boxPromises = [];

const projectBoxesRef = db
  .child('data/projects')
  .child(projectId)
  .child('boxes');

function getBox(boxSnapshot) {
  boxPromises.push(db
    .child('data/boxes')
    .child(boxSnapshot.key())
    .once('value')
    .then(boxSnapshot => boxSnapshot.val())
  );
}

projectBoxesRef.on('child_added', getBox);

projectBoxesRef.once('value', () => {
  projectBoxesRef.off('child_added', getBox);
  Promise.all(boxPromises).then(boxes => res.json(boxes));
});
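A variation on the same idea (a sketch using the same legacy Firebase API as above) reads the key list once and maps it straight to fetches, which avoids pairing on('child_added') with off():

const projectId = req.params.projectId;

db.child('data/projects')
  .child(projectId)
  .child('boxes')
  .once('value')
  .then(keysSnapshot => {
    const boxPromises = [];
    // visit each box key listed under the project
    keysSnapshot.forEach(keySnapshot => {
      boxPromises.push(
        db.child('data/boxes')
          .child(keySnapshot.key())
          .once('value')
          .then(boxSnapshot => boxSnapshot.val())
      );
    });
    return Promise.all(boxPromises);
  })
  .then(boxes => res.json(boxes));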
