functions.database.DeltaSnapshot#changed unexpected result - firebase

I am using the changed() function in some functions, and the result is not what I expect given the documentation.
I added a few tests to the following file to illustrate the issue.
https://github.com/firebase/firebase-functions/blob/master/src/providers/database.ts
it('should be false when the current value has not changed', () => {
  populate({ a: { b: 'c' } }, { a: { b: 'c' } });
  expect(subject.child('a').changed()).to.be.false;
});

it('should be false when the current value has not changed, child path exists', () => {
  populate({ a: { b: 'c' } }, { a: { b: 'c' } });
  expect(subject.child('a/b').changed()).to.be.false;
});

it('should be false when the current value has not changed, child path does not exist', () => {
  populate({ a: { b: 'c' } }, { a: { b: 'c' } });
  expect(subject.child('a/d').changed()).to.be.false;
});
The first two tests fail, but I expected changed() to return false. Am I misunderstanding the documentation?
Refs
https://firebase.google.com/docs/reference/functions/functions.database.DeltaSnapshot#changed

In the unit tests of the Firebase SDK for Cloud Functions, the signature of populate is populate(data, delta). That is, the original data and the change from the original data.
When the SDK checks for changes, it does so by seeing if the value is present in the delta payload. In a deployed function, anything that hasn't changed is explicitly not sent in the delta payload.
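Given those semantics, "no change" is expressed by an empty delta rather than by repeating the same value. A hypothetical rewrite of the first test under that assumption (not from the SDK's test suite):

it('should be false when the current value has not changed', () => {
  // an empty delta means nothing changed at any path
  populate({ a: { b: 'c' } }, {});
  expect(subject.child('a').changed()).to.be.false;
});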

How do I force an observable to complete?

Kind of a niche question, but I know what the issue is so hopefully someone here can help me out. This is an Observable/RXFire issue, not an xstate issue.
I have this machine that invokes an observable:
export const tribeMachine = Machine(
  {
    id: "council",
    initial: "init",
    context: {},
    states: {
      init: {
        invoke: {
          id: "gettribes",
          src: () =>
            collectionData(database.collection("tribes")).pipe(
              concatAll(),
              map(x => ({ type: "STORE", x }))
            ),
          onDone: "loaded"
        },
        on: {
          STORE: {
            actions: "storetribes"
          },
          CANCEL: "loaded"
        }
      },
      loaded: {
        entry: () => console.log("loaded")
      },
      error: {
        entry: () => console.log("error")
      }
    }
  },
  {
    actions: {
      storetribes: (context, event) => console.log("hello")
    }
  }
);
The way it's supposed to work is that the machine invokes the observable on load, and then once the obs is done emitting its values and calls complete(), invoke.onDone is called and the machine transitions to the 'loaded' state.
When I use a normal observable that I created with a complete() call, or when I add take(#) to the end of my .pipe(), the transition works.
But for some reason the observable that comes from collectionData() from RXFire doesn't send out a 'complete' signal... and the machine just sits there.
I've tried adding an empty() to the end and concat()-ing the observables to add a complete signal to the end of the pipe... but then I found out that empty() is deprecated, and it didn't seem to work anyway.
Been banging my head against the wall for a while. Any help is appreciated.
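To illustrate what I mean by a "normal observable", a hand-made finite stream like this (a minimal sketch, not part of my app) does complete and does drive the transition:

import { of } from "rxjs";
import { map } from "rxjs/operators";

// Emits three values and then completes, so invoke.onDone fires
// and the machine transitions to 'loaded'.
const finite$ = of(1, 2, 3).pipe(map(x => ({ type: "STORE", x })));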
Edit:
Solution:
I misunderstood the purpose of collectionData(). It is a listener, so it's not supposed to complete. I was putting a square peg in a round hole. The solution is to refactor the xstate machine so I don't need to call onDone at all.
Thank you for the answers nonetheless.
EDIT2: GOT IT TO WORK.
take(1) can be called BEFORE concatAll(). I thought calling it first would end the stream, but it doesn't; the rest of the operators in the pipe still apply. So I take(1) to get the single array, use concatAll() to flatten the array into a stream of individual objects, then map that data to a new object which triggers the TRIBESTORE action. The store action then writes the data into the machine's context.
export const tribeMachine = Machine(
  {
    id: 'council',
    initial: 'init',
    context: {
      tribes: {},
      markers: []
    },
    states: {
      init: {
        invoke: {
          id: 'gettribes',
          src: () =>
            collectionData(database.collection('tribes')).pipe(
              take(1),
              concatAll(),
              map(value => ({ type: 'TRIBESTORE', value }))
            ),
          onDone: 'loaded'
        },
        on: {
          TRIBESTORE: {
            actions: ['storetribes', 'logtribes']
          },
          CANCEL: 'loaded'
        }
      },
      loaded: {},
      error: {}
    }
  },
  {
    actions: {
      storetribes: assign((context, event) => {
        return {
          tribes: {
            ...context.tribes,
            [event.value.id]: event.value
          },
          markers: [
            ...context.markers,
            {
              lat: event.value.lat,
              lng: event.value.lng,
              title: event.value.tribeName
            }
          ]
        };
      })
    }
  }
);
Thanks for everyone's help!
Observables can emit multiple values over time, so it is up to collectionData() to decide when to finish (i.e., when complete() is called).
However, if you only want to take 1 value from the observable, you can try:
collectionData(database.collection("tribes")).pipe(
  take(1),
  concatAll(),
  map(x => ({ type: "STORE", x }))
)
This will cause the observable to complete once you take 1 value from collectionData().
Note: This may not be the best solution, as it depends on how the observable streams you are using work. I am just highlighting that you can use take(1) to take just one value and then complete.

Firebase database transactional search and update

I have a collection in firebase real time database that is a pool of codes that can be used once per 'store'. I need to search for an unused code, then mark it reserved by a store in an atomic fashion. The problem is I can't figure out how to do a transactional search and update in firebase, and the unused code is being 'used' multiple times until it gets updated.
const getUnusedCode = (storeID) => {
  const codeRef = rtdb.ref('codes');
  return codeRef
    .orderByChild(storeID)
    .equalTo(null)
    .limitToFirst(1)
    .once('child_added')
    .then(snap => {
      // setting the map {[storeID]: true} reserves the code
      return snap.ref.update({ [storeID]: true }).then(() => {
        return snap.key;
      });
    });
};
Edit: Here is the structure of the 'codes' collection:
{
  "-LQl9FFD39PAeN5DnrGE" : {
    "code" : 689343821901,
    "i" : 0,
    "5s6EgdItKW7pBIawgulg" : true,
    "ZK0lFbDnXcWJ6Gblg0tV" : true,
    "uKbwxPbZu2fJlsn998vm" : true
  },
  "-LQl9FOxT4eq6EbwrwOx" : {
    "code" : 689343821918,
    "i" : 1,
    "5s6EgdItKW7pBIawgulg" : true
  },
  "-LQl9FPaUV33fvkiFtv-" : {
    "code" : 689343821925,
    "i" : 2
  },
  "-LQl9FQEwKKO9T0z4LIP" : {
    "code" : 689343821932,
    "i" : 3,
    "ZK0lFbDnXcWJ6Gblg0tV" : true
  },
  "-LQl9FQsEVSNZyhgdHmI" : {
    "code" : 689343821949,
    "i" : 4,
    "5s6EgdItKW7pBIawgulg" : true,
    "uKbwxPbZu2fJlsn998vm" : true
  }
}
In this data, "5s6EgdItKW7pBIawgulg" is a store ID, and true means the code has been used for that store.
When new items are being imported, this function may get called hundreds of times a minute, and it returns duplicates since it's not an atomic search-then-update. Is this possible in Firebase?
From what I understand you have a structure like this:

codes: {
  "code1": {
    storeid: "store1"
  },
  "code2": {
    storeid: "store2"
  }
}
And you're trying to transactionally update it per store.
If this is the only update you're trying to do, I'd highly recommend inverting your data structure:
codes: {
  "store1": "code1",
  "store2": "code2"
}
On this structure the transaction for a store is quite simple, since the path is known:
var storeRef = firebase.database().ref("codes").child("store1");
storeRef.transaction(function(current) {
  if (current) {
    // remove the code from the database
    return null;
  }
  else {
    // abort the transaction, since the code no longer exists
    return undefined;
  }
});
If you can't change the data structure, I'd probably use your current code to find the DatabaseReference to the code, and then use a transaction within the callback to update it:
codeRef
  .orderByChild(storeID)
  .equalTo(null)
  .limitToFirst(1)
  .once('child_added')
  .then(snap => {
    // setting the map {[storeID]: true} reserves the code
    return snap.ref.transaction(function(current) {
      if (!current || current[storeID]) {
        // the node no longer exists, or it was already claimed for this store
        return undefined; // abort the transaction
      }
      else {
        current[storeID] = true;
        return current;
      }
    });
  });

How to initialize 'settings' at an empty Firebase node in Polymer 2.x using firebase-document?

I want to implement <firebase-document> to fetch and apply user settings. I am using this page as a guide.
When I load the page with no data at the /users/{{userId}}/settings node in my firebase, I expect to see the object { new: 'initial', j: 5, o: 'N' } loaded there. However, I actually see no change to the node in my firebase.
What am I doing wrong, and how can I achieve my desired behavior?
settings.html
<firebase-document
    path="/users/{{userId}}/settings"
    data="{{data}}">
</firebase-document>

<script>
  Polymer({
    properties: {
      uid: String,
      data: {
        type: Object,
        observer: 'dataChanged'
      }
    },
    dataChanged: function (newData, oldData) {
      // if settings have not been set yet, initialize with the initial value
      if (!newData && !oldData) {
        this.set('data', { new: 'initial', j: 5, o: 'N' });
      }
    }
  });
</script>
Edit: Answering the questions from @HakanC's comment:
Does {{userId}} have a value when executed?
Yes.
Is there data at the path /users/{{userId}}/settings?
No. There is no /settings node when the user first logs in. But I do have code that successfully creates a node at /users/{{userId}}. And that node is present when this element executes its script.
Console log whether you reach the this.set line.
I do arrive there—multiple times. The first time, the logged value of data is undefined. The second time, the logged value of data is {}.
Since you mentioned above that there is no data at the path, you need to change the if condition. Something like:

if (newData === undefined) {

instead of:

if (!newData && !oldData) {
EDIT:
static get properties() {
  return {
    uid: String,
    data: {
      type: Object,
      value() { return {}; }
    }
  };
}

static get observers() { return ['dataChanged(data)']; }

constructor() {
  super();
}

ready() {
  super.ready();
  setTimeout(() => {
    this.set('data', undefined); // Retrieving data async from firebase.
  }, 500);
}

dataChanged(data) {
  // if data has not been set yet, initialize with the new value
  console.log(data);
  if (!data) {
    this.set('data', { new: 'initial', j: 5, o: 'N' });
  }
}
DEMO

Meteor nested publications

I have two collections A and B in Meteor. For A I have a publication where I filter out a range of documents in A. Now I want to create a publication for B where I publish all documents in B that have a field B.length matching A.length.
I have not been able to find any example where this is shown but I feel it must be a standard use case. How can this be done in Meteor?
This is a common pattern for reywood:publish-composite
import { publishComposite } from 'meteor/reywood:publish-composite';

const query = ...; // your filter

publishComposite('parentChild', {
  find() {
    return A.find(query, { sort: { score: -1 }, limit: 10 });
  },
  children: [
    {
      find(a) {
        return B.find({ length: a.length });
      }
    }
  ]
});
This is a quite different pattern from serverTransform: on the client you end up with two collections, A and B, as opposed to a single synthetic collection A that has some fields of B. The latter is more like a SQL JOIN.
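For example, with the publication above, client code would subscribe once and then query the two client-side collections separately (a hypothetical sketch, assuming the same A and B collections):

// Client side: one subscription feeds two ordinary client collections.
Meteor.subscribe('parentChild');

const as = A.find({}, { sort: { score: -1 } }).fetch(); // the 10 filtered A documents
const bs = B.find().fetch(); // the B documents whose length matched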
Use serverTransform
Meteor.publishTransformed('pub', function() {
  const filter = {};
  return A.find(filter)
    .serverTransform({
      'B': function(doc) {
        // this will feed directly into miniMongo as if it were a separate publication
        return B.find({
          length: doc.length
        });
      }
    });
});

Efficient Redux reducers, avoiding unnecessary object copies

I guess my question could also be summed up as:
Is there an idiomatic ES6 way to have:

array.map(identity) === array ?
array.filter(i => true) === array ?
{ ...obj, attr: obj.attr } === obj ?

I know it has not been implemented like that in ES6, but is there some syntax I'm missing, or simple helper functions, that would make these properties true without resorting to an immutable lib?
I use Babel and new JS features, and I treat my JS objects as immutable.
I would like to know how to make my reducers more efficient and generate fewer unnecessary object copies.
I'm not interested in a lib (Mori/ImmutableJS) solution.
I have a reducer that manages a paginated list.
The pages attribute is actually an Array[Array[item]]
Here is my reducer:
const initialState = {
  isLoading: false,
  pages: [],
  allStamplesLoaded: false
};

function reducer(state = initialState, event) {
  switch (event.name) {
    case Names.STAMPLE_DELETED:
      return {
        ...state,
        pages: removeStampleFromPages(state.pages, event.data.stampleId)
      };
    case Names.STAMPLE_UPDATED:
      return {
        ...state,
        pages: updateStampleInPages(state.pages, event.data.apiStample)
      };
    case Names.STAMPLE_PAGES_CLEANED:
      return {
        ...initialState
      };
    case Names.STAMPLE_PAGE_REQUESTED:
      return {
        ...state,
        isLoading: true
      };
    case Names.STAMPLE_PAGE_LOADED: {
      const { stamplePage, isLastPage } = event.data;
      return {
        ...state,
        isLoading: false,
        pages: [...state.pages, stamplePage],
        isLastPage: isLastPage
      };
    }
    case Names.STAMPLE_PAGE_ERROR:
      return {
        ...state,
        isLoading: false
      };
    default:
      return state;
  }
}
I also have these helper functions:
function removeStampleFromPages(pages, deletedStampleId) {
  return pages.map(page => {
    return page.filter(apiStample => apiStample.id !== deletedStampleId);
  });
}

function updateStampleInPages(pages, newApiStample) {
  return pages.map(page => {
    return updateStampleInPage(page, newApiStample);
  });
}

function updateStampleInPage(page, newApiStample) {
  return page.map(apiStample => {
    if (apiStample.id === newApiStample.id) {
      return newApiStample;
    }
    else {
      return apiStample;
    }
  });
}
As you can see, every time an event such as STAMPLE_UPDATED is fired, my reducer returns a new state with a new array of arrays of pages, even if none of the items in the arrays were actually updated. This creates unnecessary object copies and GC pressure.
I don't want to optimize this prematurely nor introduce an immutable library into my app, but I'd like to know if there are any idiomatic ES6 ways to solve this problem.
Immutable data structures such as Immutable.js and Mori use a clever trick to avoid recreating the whole structure all the time.
The strategy is fairly simple: when you update a property, drill down to it, change it, and rewrap every object from that node up to the root.
Let's assume you want to change the property c to 4 in the following state:
const state1 = {
  a: {
    b: {
      c: 1
    },
    d: [2, 3, 4],
    e: 'Hello'
  }
}
The first step is to update c to 4. After that you need to create:

a new object for b (because c changed)
a new object for a (because b changed)
a new object for the state (because a changed).
Your new state will look like this (a * next to an object means the object has been recreated)
const state2 = *{
  a: *{
    b: *{
      c: 4
    },
    d: [2, 3, 4],
    e: 'Hello'
  }
}
Notice how d and e have not been touched.
You can now verify things are properly working:
state1 === state2 // false
state1.a === state2.a // false
state1.a.b === state2.a.b // false
state1.a.d === state2.a.d // true
state1.a.e === state2.a.e // true
You may notice that d and e are shared between state1 and state2.
You could use a similar strategy to share information in your state without recreating a whole new state all the time.
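Applied to the reducer above, one library-free way to get that sharing is a small helper that returns the original array when nothing actually changed (a hand-rolled sketch; mapOrSame is a hypothetical name, not a built-in):

// Like Array#map, but preserves identity: returns the original
// array when fn returned every element unchanged.
function mapOrSame(array, fn) {
  let changed = false;
  const result = array.map(item => {
    const next = fn(item);
    if (next !== item) changed = true;
    return next;
  });
  return changed ? result : array;
}

// Hypothetical rewrite of one of the question's helpers using sharing:
function updateStampleInPages(pages, newApiStample) {
  return mapOrSame(pages, page =>
    mapOrSame(page, apiStample =>
      apiStample.id === newApiStample.id ? newApiStample : apiStample
    )
  );
}

The reducer can then skip creating a new state object entirely when the helper hands back the same array (e.g. if (newPages === state.pages) return state;).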
As for your initial question:
array.map(identity) !== array
array.filter(i => true) !== array
{ ...obj, attr: obj.attr } !== obj
the answer is very simple.
When an array or an object is created, the JavaScript VM internally assigns it a unique identity; no two separately created arrays/objects share one.
When you perform an identity check (===) on arrays or objects, only that internal identity is compared, never the contents.
const a = []; // internal identifier 1
[];           // internal identifier 2
const b = []; // internal identifier 3

a === b // id 1 === id 3 is FALSE!
a === a // id 1 === id 1 is TRUE!
