I need to fetch from a BaaS data store all records that don't match a condition.
I use a query string like:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and Partnership eq 'Reject'
That works correctly (I don't URL-encode the string after ql).
But any attempt to put "not" in this query causes "The query cannot be parsed".
I also tried <>, !=, NE, and several variations of "not".
How do I write the query to fetch all records in the range where Partnership is NOT equal to 'Reject'?
Not operations are supported, but they are not performant because they require a full scan. When coupled with a geolocation call, this can be quite slow. We are working on improving this in the Usergrid core.
Having said that, in general it is much better to invert the call if possible. For example, instead of adding the property only when the case is true, always write the property to every new entity (even when false), then edit the property when the case becomes true.
Instead of doing this:
POST
{
"name": "fred"
}
PUT
{
"name": "fred",
"had_cactus_cooler": true
}
Do this:
POST
{
"name": "fred",
"had_cactus_cooler": "no"
}
PUT
{
"name": "fred",
"had_cactus_cooler": "yes"
}
}
In general, try to put your data in the way you want to get it out. Since you know up front that you want to query on whether this property exists, simply add it, but with a negative value. Then update it when the condition becomes true.
You should be able to use this syntax:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and not Partnership eq 'Reject'
Notice that the not operator comes before the expression (as indicated in the docs).
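If it helps, here is a minimal sketch of issuing that query from client code (TypeScript with the browser fetch API; the <org>/<app>/<collection> placeholders, the accessToken parameter, and the helper name are hypothetical):
// Sketch only: fills in the query above; URL-encoding the ql value is safe
// even though the unencoded form may also work.
async function fetchNonRejected(accessToken: string): Promise<unknown[]> {
  const base = "https://api.usergrid.com/<org>/<app>/<collection>";
  const ql =
    "location within 10 of 30.494697,50.463509 and not Partnership eq 'Reject'";
  const response = await fetch(`${base}?ql=${encodeURIComponent(ql)}`, {
    headers: { Authorization: `Bearer ${accessToken}` }, // assumes token auth
  });
  const body = await response.json();
  return body.entities ?? []; // matched records come back in "entities"
}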
My application uses keywords extensively; everything is tagged with keywords, so whenever the user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in another collection, as below:
export interface IKeyword {
Id:string;
Name:string;
CreatedBy:IUserMin;
CreatedOn:firestore.Timestamp;
}
export interface IUserMin {
UserId:string;
DisplayName:string;
}
export interface IKeywordMin {
Id:string;
Name:string;
}
My main document holds an array of keywords:
export interface MainDocument{
Field1:string;
Field2:string;
// ... other fields ...
Keywords:IKeywordMin[];
}
The problem is that autocomplete reads data frequently, so my document-read quota increases very fast.
Is there a way to implement this without increasing reads for keywords? The keywords are not the real data we need to get.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the query below to get keywords for the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types in the autocomplete search, which causes more document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind (though I'm not sure it suits your use case) is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and only falls back to the cached data on the client device if it cannot reach the server. You can take advantage of this feature by using the cache first and reaching the server only when you want to. For web applications this feature is disabled by default; you can enable it as described in:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
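For example, with the modular web SDK the cache is enabled once at startup (a sketch only; firebaseConfig stands in for your own project configuration, and whether cache-first reads suit your autocomplete is something you would need to evaluate):
import { initializeApp } from "firebase/app";
import { getFirestore, enableIndexedDbPersistence } from "firebase/firestore";

// Placeholder config; use your real project settings here.
const firebaseConfig = { projectId: "your-project-id" };
const app = initializeApp(firebaseConfig);
const db = getFirestore(app);

// Enables the local IndexedDB cache so repeated reads can be served
// from the device instead of always hitting the server.
enableIndexedDbPersistence(db).catch((err) => {
  // "failed-precondition" (multiple open tabs) and "unimplemented"
  // (unsupported browser) are the documented failure cases.
  console.warn("Persistence not enabled:", err.code);
});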
I found a solution and thought I would share it here.
Create a new collection named typeaheads in the format below:
export interface ITypeAHead {
Prefix:string;
CollectionName:string;
FieldName:string;
MatchingValues:ILookupItem[]
}
export interface ILookupItem {
Key:string;
Value:string;
}
Depending on the minimum number of letters, store either 2 or 3 letters in Prefix, and search based on the prefix, collection, and field. That way you will most likely end up with only 2 or 3 document reads per search.
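A sketch of the lookup against that collection (modular Firestore web SDK; the "MainDocument"/"Keywords" values and the 3-letter prefix are just assumptions matching the structures above):
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

interface ILookupItem { Key: string; Value: string; }

// One (or a few) document reads per keystroke: fetch the prefix document(s),
// then narrow the suggestions client-side.
async function suggestKeywords(searchTerm: string): Promise<ILookupItem[]> {
  const db = getFirestore();
  const prefix = searchTerm.toLowerCase().slice(0, 3);
  const q = query(
    collection(db, "typeaheads"),
    where("Prefix", "==", prefix),
    where("CollectionName", "==", "MainDocument"),
    where("FieldName", "==", "Keywords")
  );
  const snapshot = await getDocs(q);
  const matches: ILookupItem[] = [];
  snapshot.forEach((doc) => {
    const data = doc.data() as { MatchingValues: ILookupItem[] };
    matches.push(...data.MatchingValues.filter((item) =>
      item.Value.toLowerCase().startsWith(searchTerm.toLowerCase())
    ));
  });
  return matches.slice(0, 20);
}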
Hope this helps someone else.
These two filters return zero results:
resource.labels:* AND resource.labels.namespace_name:*
resource.labels:* AND NOT resource.labels.namespace_name:*
While this one returns plenty:
resource.labels:*
I have three questions about this:
What's going on here?
More importantly, how do I exclude a particular value of namespace_name while not excluding records that don't define namespace_name?
Similarly, how do I write a filter for all records that don't define namespace_name?
I work on Stackdriver Logging and have worked with the code that handles queries.
You are correct: something's up with the presence operator (:*), and it works differently than the other operators. As a result the behavior of a negated presence operator is not intuitive (or particularly useful).
We consider this a bug, and it's something that I'd really like to fix; however, fixing this class of bug is a lengthy process, so I've proposed some workarounds.
What's going on here?
I cannot reproduce your first "zero result" filter: resource.labels:* AND resource.labels.namespace_name:*
This gives me a large list of logs that contain the namespace_name label. For what it's worth, resource.labels.namespace_name:* implies resource.labels:*, so really you only need the latter half of this filter.
Your second "zero result" filter: resource.labels:* AND NOT resource.labels.namespace_name:*
... runs into a bug where the field-presence check (:*) does not interact properly with negation.
More importantly, how do I exclude a particular value of namespace_name while not excluding records that don't define namespace_name ?
While not required by the logging API, GCP-emitted resources generally emit the same sets of labels for a given resource type. You can take advantage of this by using resource.type to isolate resources-with-label from resources-without-label, then only apply the label constraint to the resources-with-label clause:
(resource.type != "k8s_container") OR
(resource.type = "k8s_container" AND resource.labels.namespace_name != "my-value")
Here, we are relying on all k8s_container-type entries having the namespace_name label, which should generally be the case. You can modify this to select multiple Kubernetes-prefixed resources:
(NOT resource.type:"k8s_") OR
(resource.type:"k8s_" AND resource.labels.namespace_name != "my-value")
... or use a complex resource.type clause to specifically select which you want to include/exclude from the namespace matching.
(NOT (resource.type = "k8s_container" OR resource.type = "k8s_pod")) OR
((resource.type = "k8s_container" OR resource.type = "k8s_pod") AND resource.labels.namespace_name != "my-value")
You cannot query for a k8s_container type that does not have the namespace_name label, but those should generally not be emitted in the first place.
Similarly, how do I write a filter for all records that don't define namespace_name?
You can't do this right now because of the bug. I think your best bet is to identify all of the resource types that use namespace_name and exclude those types with a resource.type filter:
NOT (
resource.type = "k8s_container" OR
resource.type = "k8s_pod" OR
resource.type = "knative_revision")
Note that, as mentioned earlier, while it's possible (allowed by the API) to have a k8s_container resource without a namespace_name label, emitted k8s_container logs should generally have the label.
Say that I have nodes user and item, and user_items used to join them.
Typically one would (as advised in the official documents and videos) use a structure like this:
"user_items": {
"$userKey": {
"$itemKey1": true,
"$itemKey2": true,
"$itemKey3": true
}
}
I would like to use the following structure instead:
"user_items": {
"$userKey": {
"$itemKey1": 1494912826601,
"$itemKey2": 1494912826602,
"$itemKey3": 1494912826603
}
}
with the values being timestamps, so that I can order items by creation date while also being able to tell the associated time. Seems like a killing-two-birds-with-one-stone situation. Or is it?
Any down sides to this approach?
EDIT: I'm also using this approach for boolean-style fields such as approved_at, seen_at, etc., instead of using two fields like:
"some_message": {
"is_seen": true,
"seen_timestamp": 1494912826602,
}
You can model your database any way you want, as long as you follow the Firebase guidelines. The most important one is to keep the data as flat as possible. According to this rule, your database is structured correctly. There is no 100% perfect database structure, but depending on your needs, any of the following options can be considered good practice.
1. "$itemKey1": true,
2. "$itemName1": true,
3. "$itemKey1": 1494912826601,
4. "$itemName1": 1494912826601,
What is the meaning of "$itemKey1": 1494912826601? Because you have already set a timestamp, it means that your item was uploaded into your database and is linked to the specific user, which in other words also means true. So it is not a bad approach to do something like this.
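If you go with the timestamp value, the write is a single call, sketched here with the modular Realtime Database SDK (uid and itemKey are placeholders):
import { getDatabase, ref, set, serverTimestamp } from "firebase/database";

// Links an item to a user; the stored value doubles as "true" and as the
// creation time, so no separate boolean field is needed.
async function addUserItem(uid: string, itemKey: string): Promise<void> {
  const db = getDatabase();
  await set(ref(db, `user_items/${uid}/${itemKey}`), serverTimestamp());
}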
Hope it helps.
Great minds must think alike, because I do the exact same thing :) In my case, the "items" are posts that the user has upvoted. I use the timestamps with orderBy(), along with limitToLast(50) to get the "last 50 posts that the user has upvoted". And from there they can load more. I see no downsides to doing this.
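For reference, a sketch of the read side with the modular SDK (for this structure the ordering call would be orderByValue()):
import { getDatabase, ref, query, orderByValue, limitToLast, get } from "firebase/database";

// Fetches the keys of the 50 most recently upvoted items for a user,
// ordered by the stored timestamp value.
async function lastUpvotedItemKeys(uid: string): Promise<string[]> {
  const db = getDatabase();
  const q = query(ref(db, `user_items/${uid}`), orderByValue(), limitToLast(50));
  const snapshot = await get(q);
  const keys: string[] = [];
  snapshot.forEach((child) => {
    keys.push(child.key as string); // child keys are the item keys
  });
  return keys; // ascending by timestamp; reverse for newest-first
}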
I have a query API for my service that looks like this (JSON-ish format):
{
filter: {
,attribute2: [val21, val22]
,attribute3: []
}
}
This effectively means select data WHERE attribute2 IN ("val21", "val22") AND attribute3 IS NOT NULL in SQL-ish syntax (meaning the object being returned has attribute3 set, but I really don't care what its value is; SQL isn't very good at expressing this, of course, since my data is in a key-value store where a key may be "not set" at all instead of being null-valued).
I need to expand this API to be able to express IS NOT SET predicate, and I'm at a loss as to what a good way to do so would be.
The only thing I can think of is to add a special "NOT_SET" value in the request API that would produce IS NOT SET semantics, but it seems really clunky and hard to grasp; see the example below.
The API syntax can be thought of as JSON as far as its expressiveness/capability goes.
An ideal answer would reference some well-accepted rules of API design to show that it is "good".
{
filter: {
,attribute2: [val21, val22]
,attribute4: [__NOT_SET__]
}
}
My suggestion would be to move away from trying to use a key-value pair to represent a predicate phrase. You should have a lot more flexibility with a structure similar to:
{
filters: [
{ attribute: "attribute2", verb: "IN", values: [val21, val22] },
{ attribute: "attribute2", verb: "NOT IN", values: [val21, val22] },
{ attribute: "attribute4", verb: "IS NOT SET" },
]
}
You'd want an enum of verbs, of course, and values would have to be optional. You can add more verbs later if you need them, and you're no longer putting quite so much pressure on the poor :. You can also provide to the client a list of supported verbs and how many values (if any) of what type they take, so the client can build the UI dynamically, if desired.
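In TypeScript terms, a sketch of that request shape (the verb names and field spellings are illustrative, not a fixed contract):
// Illustrative verb set; extend it later without breaking existing clients.
type FilterVerb = "IN" | "NOT IN" | "IS SET" | "IS NOT SET";

interface FilterClause {
  attribute: string;
  verb: FilterVerb;
  values?: string[]; // optional: presence verbs take no values
}

interface QueryRequest {
  filters: FilterClause[];
}

// Equivalent to: attribute2 IN (val21, val22) AND attribute4 IS NOT SET
const example: QueryRequest = {
  filters: [
    { attribute: "attribute2", verb: "IN", values: ["val21", "val22"] },
    { attribute: "attribute4", verb: "IS NOT SET" },
  ],
};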
Of course, this is a breaking change, which may or may not be an issue.
Taking the car dashboard example, I altered the initial #genre node to be #genre:classical. I also added a list to the context:
"choices":["Beethoven","Mahler 9","Brahms 3rd"]
and the Watson response is "I have 3 selections". The condition on the next node is $choices.contains(input.text). The "Found a match" response is just for testing.
When I test this in the API tool and type "Beethoven", both "Found a match" and "Great choice!..." appear. The same goes for the other two choices, but only if I type the exact choice, e.g., "Mahler 9". Typing "Mahler" or "mahler" doesn't get a match. I read through the SpEL documentation but couldn't see a way, in a one-line condition, to parse through the list looking for partial matches.
So my question is: is there a condition expression that would match partial user input, e.g., "Mahler"? I'll be using the Java SDK to code the app server, so alternatively I wondered whether I could add a temporary #entity just for this sequence instead of using the context list, then delete it when the conversation is done. Or is there a way to construct a more complex condition in the MessageRequest that Watson will recognize? Or is this just not the right way to go about this? Any pointers, examples, or docs much appreciated.
So my question is, is there a condition expression that would match partial user input
You can't add temporary entities or intents, as adding them forces Watson to start training itself (even if you could do it through code).
You can, however, create quite complex regular expressions and pass them in as a context variable.
For example, your advanced node can have:
{
"output": {
"text": "Please ask me a question."
},
"context": {
"rx": "fish|[0-9]+"
}
}
Then in your condition you would write:
input.text.matches(context.rx)
This will then trigger if the person mentions a number or the word "fish". So you can create your partial user-input checking that way.
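Applied to the composer choices from the original question, a hypothetical version of that context variable could look like this (the .* wrapping and the lowercasing keep partial, case-insensitive mentions like "mahler" matching):
{
"output": {
"text": "I have 3 selections"
},
"context": {
"choices_rx": ".*(beethoven|mahler|brahms).*"
}
}
with the condition written as:
input.text.toLowerCase().matches(context.choices_rx)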