I have code that looks similar to this:
const attr = event.target.value;
const query = filter.merge({
  [attr]: !filter[attr]
});
where filter is an Immutable Record. Flow complains because an index signature declaring the expected key / value type is missing.
Can I somehow tell Flow that this is OK, or do I have to use a $FlowFixMe?
I'm using Immutable v4.0.0-rc.14
Solved by using a switch statement:
switch (attr) {
  case "something":
    const query = filter.merge({
      something: !filter.something
    });
    break;
  case etc...
I have an endpoint which accepts a parameter and I'm trying to access the cached data using endpoint.select() in a Redux slice. The problem is I can't figure out how to pass in the cache key. I've done the following:
export const selectProductsResult = (storeId) =>
  storeApi.endpoints.listProductsByStore.select(storeId);
This works fine if I use it within a component like this:
const currentStoreProducts = useSelector(selectProductsResult(currentStoreId))
What I don't understand is how I can use this in another selector, for example this does not work:
const selectCurrentProducts = createSelector(selectCurrentStoreId, (currentStoreId) =>
  selectProductsResult(currentStoreId)
);
If I try to use this in a component like so:
const currentProducts = useSelector(selectCurrentProducts)
The value obtained is a memoized function. I've played around quite a bit and can't seem to build the desired selector.
The call to someEndpoint.select() generates a new selector function that is specialized to look up the data for that cache key. Loosely put, imagine it looks like this:
const createEndpointSelector = (cacheKey) => {
  return function selectorForCacheKey(state) {
    return state.api.queries['someEndpointName'][cacheKey];
  };
};
const selectDataForPikachu = createEndpointSelector('pikachu');
So, you need to call someEndpoint.select() with the actual cache key itself, and it returns a new selector that knows how to retrieve the data for that cache key:
const selectDataForPikachu = apiSlice.endpoints.getPokemon.select('pikachu');
// later, in a component
const pikachuData = useSelector(selectDataForPikachu);
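To compose this with another selector, as in the question: the generated selector is itself just a function of state, so one option (a sketch, reusing selectCurrentStoreId and selectProductsResult from the question) is a plain selector that derives the cache key first and then calls the generated selector with the same state:
// Sketch: derive the cache key, then invoke the generated endpoint selector
// with the same state object.
const selectCurrentProducts = (state) => {
  const currentStoreId = selectCurrentStoreId(state);
  return selectProductsResult(currentStoreId)(state);
};

// In a component:
// const currentProducts = useSelector(selectCurrentProducts);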
I am struggling to find the correct syntax to search a String Set/List target attribute (tags) by an array of strings. The idea would be that if the SS attribute contains all of the passed strings, it passes the filter. The passed strings do not need to match all of the strings within the target attribute. The more strings you pass, the more accurate your results.
// Compile tags into a list
let tagSqlValues = {};
let tagSqlStatement = query.tags.map((tag: string, index: number) => {
  let tagParam = `:tag${index}`;
  tagSqlValues[tagParam] = tag;
  return `${tagParam} in tags`;
}).join(" and ");
// Console Logs
// tagSqlStatement = :tag0 in tags and :tag1 in tags (also tried tags contains :tag0 and tags contains :tag1)
// tagSqlValues = {":tag0":"Modern",":tag1":" Spring"}
let params = {
  TableName: "Art",
  FilterExpression: tagSqlStatement,
  ExpressionAttributeValues: tagSqlValues,
};
let results = await this.DDB_CLIENT.scan(params).promise();
// Console Logs
// "Invalid FilterExpression: Syntax error; token: \"tags\", near: \"in tags and\""
// "Invalid FilterExpression: Syntax error; token: \"contains\", near: \"tags contains :tag0\""
I've tried several variations with IN and CONTAINS without luck. Is this possible with DynamoDB?
It looks like my CONTAINS syntax was wrong. I did a little digging and found this answer by Zanon. With a minor modification to join the conditions with "and", the filter seems to be working as expected!
// Compile tags into a list
let tagSqlValues = {};
let tagSqlStatement = query.tags.map((tag: string, index: number) => {
  let tagParam = `:tag${index}`;
  tagSqlValues[tagParam] = tag;
  return `contains(tags, ${tagParam})`;
}).join(" and ");
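For completeness, here is a sketch of how the corrected expression plugs into the same scan call as before (table name and DocumentClient usage taken from the question):
let params = {
  // e.g. "contains(tags, :tag0) and contains(tags, :tag1)"
  FilterExpression: tagSqlStatement,
  // e.g. { ":tag0": "Modern", ":tag1": "Spring" }
  ExpressionAttributeValues: tagSqlValues,
  TableName: "Art",
};
let results = await this.DDB_CLIENT.scan(params).promise();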
I am new to Flutter and I am sure there is a simple way of doing this. Let me first give you some background. I have 2 tables (collections). The first one stores a mapping: it returns a key based on an id, which is then used to query the second table and retrieve the data from Firebase.
I have written 2 data models and 2 functions which return Future<> data. They are as follows:
Future<SpecificDevice> getSpecificDevice(String deviceId) {
  Future<SpecificDevice> obj = _database.reference().child("deviceMapping").orderByChild("deviceId").equalTo(deviceId).once().then((snapshot) {
    SpecificDevice specificDevice = new SpecificDevice(deviceId, "XXXX", new List<String>());
    if (snapshot.value.isNotEmpty) {
      print(snapshot.value);
      snapshot.value.forEach((key, values) {
        if (values["deviceId"] == deviceId) {
          specificDevice.deviceKey = values["deviceDesc"];
          specificDevice.vendorList = List.from(values["vendorList"]);
        }
      });
    }
    return specificDevice;
  });
  return obj;
}
This function gets the mapping deviceId -> deviceKey. That key identifies a record stored in another table; the following function queries it.
Future<Device> getDeviceDescription(String deviceKey) {
  Future<Device> device = _database.reference().child("deviceDescription").once().then((snapshot) {
    Device deviceObj = new Device("YYYY", "YYYY", "YYY", "YYYY", "YYYY");
    if (snapshot.value.isNotEmpty) {
      print(snapshot.value);
      //Future<SpecificDevice> obj = getSpecificDevice(deviceId);
      //obj.then((value) {
      snapshot.value.forEach((key, values) {
        if (key == deviceKey) { // compare with value.deviceKey instead
          print(values["deviceDescription"]); // I get the correct data here.
          deviceObj.manual = values["deviceManual"];
          deviceObj.deviceType = values["deviceType"];
          deviceObj.description = values["deviceDescription"];
          deviceObj.brand = values["deviceBrand"];
          deviceObj.picture = values["devicePicture"];
        }
        // });
      });
    }
    return deviceObj;
  });
  return device;
}
Now both of these functions work on their own. I want to make them run one after the other. In the above function, if I uncomment the lines of code, the data is retrieved properly in the inner function, but the initial default values are returned because the function returns before the SpecificDevice object is set.
Here is where I am getting the error: I am calling the second function in FutureBuilder<> code with the above lines uncommented, passing deviceId as the input parameter.
return FutureBuilder<Device>(
  future: getDeviceDescription(deviceId),
  builder: (BuildContext context, AsyncSnapshot snapshot) {... // using snapshot.data in its child.
Here snapshot.data would give me YYYY, but it should give me the value from the database.
I have been stuck on this for a while. Any help in fixing this? Or, if what I am trying to do is clear, please suggest a better way to approach it. Thanks in advance!
The answer is rather simple:
first and foremost - you forgot to use the async / await keywords, which let you wait for the asynchronous database call to complete before using its result. Always use them if you are connecting to any network service.
to make one call run after another - use .then((value) {}). It receives the data from the first function (which you pass back using return) and lets you use it in the second function.
Solved the problem by changing the calling code to:
return FutureBuilder<Device>(
  future: getSpecificDevice(deviceId).then((value) {
    return getDeviceDescription(value.deviceKey);
  }),
  builder: (BuildContext context, AsyncSnapshot snapshot) {
My Cloud Firestore database structure looks like this. Now, I'd like to delete array items at specific index positions (Index 0, Index 1, and so on) like this:
const arrayLikedImagesRef = {imageurl: image, isliked: true};
const db = firebase.firestore();
const deleteRef = db.collection('userdata').doc(`${phno}`);
deleteRef.update({
  likedimages: firebase.firestore.FieldValue.arrayRemove(arrayLikedImagesRef)
});
As explained here, “bad things can happen when trying to update or delete array elements at specific indexes”. This is why the official Firestore documentation indicates that the arrayRemove() function takes the elements themselves (values) as arguments, not indexes.
As suggested in this answer, if you prefer using indexes, then you should get the entire document, read the array, modify it, and write it back to the database.
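A minimal sketch of that read-modify-write approach, reusing the document reference from the question (indexToRemove is a hypothetical variable holding the position you want to delete):
// Sketch: read the document, remove the item at the chosen index,
// and write the whole array back. `indexToRemove` is hypothetical.
const docRef = db.collection('userdata').doc(`${phno}`);
docRef.get().then((doc) => {
  const likedimages = doc.data().likedimages;
  likedimages.splice(indexToRemove, 1);
  return docRef.update({ likedimages: likedimages });
});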
You can't use FieldValue to remove array items by index. Instead, you could use a transaction to remove the array items. Using a transaction ensures you are actually writing back the exact array you expect, and can deal with other writers.
For example (the reference I use here is arbitrary, of course, you would need to provide the correct reference):
db.runTransaction(t => {
  const ref = db.collection('arrayremove').doc('targetdoc');
  return t.get(ref).then(doc => {
    const arraydata = doc.data().likedimages;
    // It is at this point that you need to decide which index
    // to remove -- to ensure you get the right item.
    const removeThisIndex = 2;
    arraydata.splice(removeThisIndex, 1);
    t.update(ref, {likedimages: arraydata});
  });
});
Of course, as noted in the above code, you can only be sure you are about to delete the correct index once you are actually inside the transaction itself -- otherwise the array you fetch might not line up with the array from which you originally chose the index. So be careful!
That said, you might be asking what to do given that FieldValue.arrayRemove doesn't support nested arrays (so you can't pass it multiple maps to remove). In that case, you just want a variant of the above that actually checks values (this example only works with a single value and a fixed object type, but you could easily modify it to be more generic):
const db = firebase.firestore();
const imageToRemove = {isliked: true, imageurl: "url1"};
db.runTransaction(t => {
  const ref = db.collection('arrayremove').doc('byvaluedoc');
  return t.get(ref).then(doc => {
    const arraydata = doc.data().likedimages;
    const outputArray = [];
    arraydata.forEach(item => {
      if (!(item.isliked == imageToRemove.isliked &&
            item.imageurl == imageToRemove.imageurl)) {
        outputArray.push(item);
      }
    });
    t.update(ref, {likedimages: outputArray});
  });
});
(I do note that in your code you are using a raw boolean, but the database stores the isliked items as strings. I tested the above code and it appears to work despite that, but it would be better to be consistent in your use of types.)
Hello kind Stackoverflow folks,
I'm trying to create a function that guards code from being executed at run-time with an incorrect Flow type present.
My understanding is that the way to do this at run-time is by refining - that is, checking - that the value matches the required type, and using Flow to make sure no cases are missed along the way.
A simple case is where I have a string input that I would like to confirm matches an enum/union type. I have this working as I would expect with literals, e.g.
/* @flow */
type typeFooOrBaa = "foo" | "baa"

const catchType = (toCheck: string): void => {
  // Working check
  if (toCheck === "foo" || toCheck === "baa") {
    // No Flow errors
    const checkedValue: typeFooOrBaa = toCheck
    // ... do something with the checkedValue
  }
};
Try it over here
Naturally, I would like to avoid embedding literals.
One of the things I've tried is the equivalent object key test, which doesn't work :-( e.g.
/* @flow */
type typeFooOrBaa = "foo" | "baa"
const fooOrBaaObj = {"foo": 1, "baa": 2}

const catchType = (toCheck: string): void => {
  // Non-working check
  if (fooOrBaaObj[toCheck]) {
    /*
      The next assignment generates the following Flow error:

      Cannot assign `toCheck` to `checkedVariable` because: Either string [1] is incompatible
      with string literal `foo` [2]. Or string [1] is incompatible with string literal `baa` [3].
    */
    const checkedVariable: typeFooOrBaa = toCheck
  }
};
Try it over here
Is it possible to achieve something like this without having to go down the full flow-runtime route? If so, how is it best done?
Thanks for your help.
One approach that appears to work is to use the const object which defines the allowed values to:
Generate a union type using the $Keys utility.
Use that union type to create a map object where the keys are the desired input (in our case strings) and the values are "maybe"s of the type that needs refining.
Here's the example from earlier reworked so that it:
Sets the type up as we'd expect to allow either "foo" or "baa" but nothing else.
Detects when a string is suitably refined so that it only contains "foo" or "baa".
Detects when a string might contain something else other than what's expected.
Credit to @vkurchatkin for his answer that helped me crack this (finally).
/* @flow */
// Example of how to persuade Flow to detect safe, adequately refined usage of a union type
// at runtime, and its unsafe, inadequately refined counterparts.
const fooOrBaaObj = {foo: 'foo', baa: 'baa'}

type typeFooOrBaa = $Keys<typeof fooOrBaaObj>
// NB: $Keys is used so that the type definition avoids aliasing typeFooOrBaa === string,
// which allows things like the line below to correctly spot problems.
//const testFlowSpotsBadDefinition: typeFooOrBaa = "make_flow_barf"

const fooOrBaaMap: { [key: string]: ?typeFooOrBaa } = fooOrBaaObj;
// NB: The "?" maybe signifier in the definition is essential to inform Flow that indexing into
// the map "might" produce a "null". Without it, the subsequent correct detection of unsafe,
// unrefined variables fails.

const catchType = (toCheck: string): void => {
  const myValue = fooOrBaaMap[toCheck];
  if (myValue) {
    // Detects refined, safe usage
    const checkedVariable: typeFooOrBaa = myValue
  }
  // Uncommenting the following line correctly causes Flow to flag the unsafe type. You must have
  // the "?" in the map definition for Flow to spot this.
  //const testFlowSpotsUnrefinedUsage: typeFooOrBaa = myValue
}
Have a play with it over here
You can type the object as {[fooOrBaa]: number}, but Flow will not enforce that all members of fooOrBaa exist in the object.
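A quick sketch of that typing and its limitation (a hypothetical illustration reusing the names above):
/* @flow */
type fooOrBaa = "foo" | "baa"

// Flow accepts this even though the "baa" member is missing,
// because an indexer does not require every key to be present.
const partial: { [fooOrBaa]: number } = { foo: 1 };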