These two filters return zero results:
resource.labels:* AND resource.labels.namespace_name:*
resource.labels:* AND NOT resource.labels.namespace_name:*
While this one returns plenty:
resource.labels:*
I have three questions about this:
What's going on here?
More importantly, how do I exclude a particular value of namespace_name while not excluding records that don't define namespace_name?
Similarly, how do I write a filter for all records that don't define namespace_name?
I work on Stackdriver Logging and have worked with the code that handles queries.
You are correct: something's up with the presence operator (:*), and it works differently than the other operators. As a result the behavior of a negated presence operator is not intuitive (or particularly useful).
We consider this a bug, and it's something that I'd really like to fix; however, fixing this class of bug is a lengthy process, so I've proposed some workarounds.
What's going on here?
I cannot reproduce your first "zero result" filter: resource.labels:* AND resource.labels.namespace_name:*
This gives me a large list of logs that contain the namespace_name label. For what it's worth, resource.labels.namespace_name:* implies resource.labels:*, so really you only need the latter half of this filter.
Your second "zero result" filter: resource.labels:* AND NOT resource.labels.namespace_name:*
... runs into a bug where the field-presence check (:*) does not interact properly with negation.
More importantly, how do I exclude a particular value of namespace_name while not excluding records that don't define namespace_name?
While not required by the logging API, GCP-emitted resources generally emit the same sets of labels for a given resource type. You can take advantage of this by using resource.type to isolate resources-with-label from resources-without-label, then only apply the label constraint to the resources-with-label clause:
(resource.type != "k8s_container") OR
(resource.type = "k8s_container" AND resource.labels.namespace_name != "my-value")
Here, we are relying on all k8s_container-type entries having the namespace_name label, which should generally be the case. You can modify this to select multiple Kubernetes-prefixed resources:
(NOT resource.type:"k8s_") OR
(resource.type:"k8s_" AND resource.labels.namespace_name != "my-value")
... or use a more complex resource.type clause to select specifically which resource types you want to include in, or exclude from, the namespace matching.
(NOT (resource.type = "k8s_container" OR resource.type = "k8s_pod")) OR
((resource.type = "k8s_container" OR resource.type = "k8s_pod") AND resource.labels.namespace_name != "my-value")
You cannot query for a k8s_container type that does not have the namespace_name label, but those should generally not be emitted in the first place.
Similarly, how do I write a filter for all records that don't define namespace_name?
You can't do this right now because of the bug. I think your best bet is to identify all of the resource types that use namespace_name and exclude those types with a resource.type filter:
NOT (
resource.type = "k8s_container" OR
resource.type = "k8s_pod" OR
resource.type = "knative_revision")
Note that, as mentioned earlier, while it's possible (allowed by the API) to have a k8s_container resource without a namespace_name label, emitted k8s_container logs should generally have the label.
I am new to Java and Spring, and I am building a system using Spring Data JPA. I am now working on my service and controller classes, and I would like to create a dynamic query. I have created a form in which the user can enter values in the fields or leave them blank. I then use an ExampleMatcher to create an Example based on the non-null fields, and query for objects in the database that match those fields.
It works fine with Strings, and it works OK with numbers when the number entered by the user exactly matches the number in the database. What I would like to ask the community is: how can we, using Spring's ExampleMatcher, add logic so that the query on numbers is not Select * from Projects where project.return = 10 but, for instance, Select * from Projects where project.return >= 10?
It is a pretty basic question, but I have looked everywhere on the web and could not find an answer. All the sources I found said that ExampleMatcher deals only with Strings, but I find it strange that such a powerful system has no way to express greater-than / lower-than criteria on numbers.
My code for the example matcher:
ExampleMatcher matcher = ExampleMatcher.matching()
        .withIgnoreNullValues()
        .withIgnoreCase()
        .withIgnorePaths("projectId", "businessPlans", "projectReturn", "projectAddress.addressId");
I would like to add something like:
.withMatcher("projectAmountRaised", IsMoreThan(Long.parseLong()));
What I would have loved to use, but which is deprecated:
public static List getStockDailyRecordCriteria(Date startDate, Date endDate,
        Long volume, Session session) {
    Criteria criteria = session.createCriteria(StockDailyRecord.class);
    if (startDate != null) {
        criteria.add(Expression.ge("date", startDate));
    }
    if (endDate != null) {
        criteria.add(Expression.le("date", endDate));
    }
    if (volume != null) {
        criteria.add(Expression.ge("volume", volume));
    }
    criteria.addOrder(Order.asc("date"));
    return criteria.list();
}
I am thus looking for something similar... I could create a broad result list from just the String criteria using ExampleMatcher, and then write my own logic to discard objects that do not fit the number criteria, but I am sure there is a more elegant approach.
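That fallback would look something like this plain-Java sketch. Everything here (the Project class, its projectReturn field) is a hypothetical stand-in for my real entity, not actual Spring API:

```java
import java.util.ArrayList;
import java.util.List;

public class PostFilterSketch {
    // Hypothetical stand-in for the real Projects entity.
    static class Project {
        final String name;
        final long projectReturn;
        Project(String name, long projectReturn) {
            this.name = name;
            this.projectReturn = projectReturn;
        }
    }

    // Keep only projects whose return meets the (optional) lower bound;
    // a null bound means "user left the field blank", so nothing is filtered.
    static List<Project> filterByMinReturn(List<Project> candidates, Long minReturn) {
        List<Project> result = new ArrayList<>();
        for (Project p : candidates) {
            if (minReturn == null || p.projectReturn >= minReturn) {
                result.add(p);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Pretend this list came back from the ExampleMatcher query.
        List<Project> fromQbe = List.of(
                new Project("a", 5), new Project("b", 10), new Project("c", 20));
        System.out.println(filterByMinReturn(fromQbe, 10L).size()); // prints 2
    }
}
```

This is the brute-force version of what I want the database to do for me, which is why it feels inelegant for large result sets.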
Thank you a lot for your help, and for your indulgence!
This is how you can use QBE and pageable with additional filters:
ExampleMatcher matcher = UntypedExampleMatcher.matching()
        .withIgnoreCase()
        .withIgnorePaths("startDate");
MyDao probe = new MyDao();
final Example<MyDao> example = Example.of(probe, matcher);
// Build the query from the example, then layer the range filter on top.
Query q = new Query(new Criteria().alike(example)).with(pageable);
q.addCriteria(Criteria.where("startDate").gte(probe.getStartDate()));
List<MyDao> list = mongoTemplate.find(q, example.getProbeType(), "COLLECTION_NAME");
PageableExecutionUtils.getPage(list, pageable,
        () -> mongoTemplate.count(q, example.getProbeType(), "COLLECTION_NAME"));
Say that I have nodes user, item, and user_items used to join them.
Typically one would (as advised in the official documents and videos) use a structure like this:
"user_items": {
"$userKey": {
"$itemKey1": true,
"$itemKey2": true,
"$itemKey3": true
}
}
I would like to use the following structure instead:
"user_items": {
"$userKey": {
"$itemKey1": 1494912826601,
"$itemKey2": 1494912826602,
"$itemKey3": 1494912826603
}
}
with the values being timestamps, so that I can order items by creation date while also being able to tell the associated time. Seems like a kill-two-birds-with-one-stone situation. Or is it?
Any downsides to this approach?
EDIT: I'm also using this approach for boolean fields such as approved_at, seen_at, etc., instead of using two fields like:
"some_message": {
"is_seen": true,
"seen_timestamp": 1494912826602,
}
You can model your database any way you want, as long as you follow the Firebase guidelines. The most important rule is to keep the data as flat as possible, and according to that rule your database is structured correctly. There is no single 100% perfect structure, but depending on your needs, any of the following options can be considered good practice:
1. "$itemKey1": true,
2. "$itemName1": true,
3. "$itemKey1": 1494912826601,
4. "$itemName1": 1494912826601,
What is the meaning of "$itemKey1": 1494912826601? Because you have already set a timestamp, it means that your item was uploaded into your database and is linked to the specific user, which in other words also means true. So it is not a bad approach to do something like this.
Hope it helps.
Great minds must think alike, because I do the exact same thing :) In my case, the "items" are posts that the user has upvoted. I use the timestamps with orderBy(), along with limitToLast(50) to get the "last 50 posts that the user has upvoted". And from there they can load more. I see no downsides to doing this.
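As a local illustration of why this works, ordering the item keys by their timestamp values (the same thing an order-by-value query does server-side) can be sketched in plain Java; the keys and timestamps here are made up:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

public class UserItemsSketch {
    // Return item keys ordered by their timestamp value, oldest first,
    // mimicking a query ordered by value.
    static List<String> keysByTimestamp(Map<String, Long> userItems) {
        return userItems.entrySet().stream()
                .sorted(Entry.comparingByValue())
                .map(Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // Insertion order is deliberately scrambled; the timestamp values
        // recover the creation order.
        Map<String, Long> userItems = new LinkedHashMap<>();
        userItems.put("itemKey3", 1494912826603L);
        userItems.put("itemKey1", 1494912826601L);
        userItems.put("itemKey2", 1494912826602L);
        System.out.println(keysByTimestamp(userItems));
        // prints [itemKey1, itemKey2, itemKey3]
    }
}
```

With a boolean-valued join node you could only ask "is it there?"; with timestamp values the same node also answers "in what order, and when?".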
I have a query API against my service, that looks like this (JSON-ish format):
{
  filter: {
    attribute2: [val21, val22],
    attribute3: []
  }
}
This effectively means select data WHERE attribute2 IN ("val21", "val22") AND attribute3 IS NOT NULL in SQL-ish syntax (meaning the object being returned has attribute3 set, but I really don't care what its value is; SQL isn't very good at expressing this, of course, as my data is a key-value store where a key may be "not set" at all instead of being null-valued).
I need to expand this API to be able to express an IS NOT SET predicate, and I'm at a loss as to what a good way to do so would be.
The only thing I can think of is to add a special "NOT_SET" value to the request API that would produce IS NOT SET semantics, but it seems really clunky and hard to grasp:
The API syntax can be thought of as JSON as far as its expressiveness/capabilities go.
An ideal answer would reference some well accepted rules on API design, to show that it's "good".
{
  filter: {
    attribute2: [val21, val22],
    attribute4: [__NOT_SET__]
  }
}
My suggestion would be to move away from trying to use a key-value pair to represent a predicate phrase. You should have a lot more flexibility with a structure similar to:
{
filters: [
{ attribute: "attribute2", verb: "IN", values: [val21, val22] },
{ attribute: "attribute2", verb: "NOT IN", values: [val21, val22] },
{ attribute: "attribute4", verb: "IS NOT SET" },
]
}
You'd want an enum of verbs, of course, and values would have to be optional. You can add more verbs later if you need them, and you're no longer putting quite so much pressure on the poor :. You can also provide the client with a list of supported verbs and how many values (if any) of what type each takes, so the client can build its UI dynamically, if desired.
Of course, this is a breaking change, which may or may not be an issue.
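As an illustration only (the names are invented, not a real API), a server-side evaluator for this verb-based structure might be sketched like so:

```java
import java.util.List;
import java.util.Map;

public class FilterSketch {
    enum Verb { IN, NOT_IN, IS_SET, IS_NOT_SET }

    // One predicate phrase: attribute + verb + optional values.
    static class Filter {
        final String attribute;
        final Verb verb;
        final List<Object> values;
        Filter(String attribute, Verb verb, List<Object> values) {
            this.attribute = attribute;
            this.verb = verb;
            this.values = values;
        }
    }

    // An entity is a key-value map where a key may be absent entirely,
    // which is distinct from being present with a null value.
    static boolean matches(Map<String, Object> entity, Filter f) {
        boolean present = entity.containsKey(f.attribute);
        switch (f.verb) {
            case IS_SET:     return present;
            case IS_NOT_SET: return !present;
            case IN:         return present && f.values.contains(entity.get(f.attribute));
            case NOT_IN:     return !present || !f.values.contains(entity.get(f.attribute));
            default:         return false;
        }
    }

    public static void main(String[] args) {
        Map<String, Object> entity = Map.of("attribute2", "val21");
        System.out.println(matches(entity,
                new Filter("attribute2", Verb.IN, List.of("val21", "val22")))); // true
        System.out.println(matches(entity,
                new Filter("attribute4", Verb.IS_NOT_SET, null)));              // true
    }
}
```

Note how IS_NOT_SET falls out naturally as its own verb instead of being smuggled in as a magic sentinel value in a list.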
I need to fetch from the BaaS data store all records that don't match a condition.
I use query string like:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and Partnership eq 'Reject'
That works correctly (I don't URL-encode the string after ql).
But any attempt to put "not" in this query causes "The query cannot be parsed".
I also tried <>, !=, NE, and some variations of "not".
How do I write the query to fetch all records in the range where Partnership is NOT equal to 'Reject'?
Not operations are supported, but they are not performant because they require a full scan. When coupled with a geolocation call, they can be quite slow. We are working on improving this in the Usergrid core.
Having said that, in general it is much better to invert the call if possible. For example, instead of adding the property only when the case is true, always write the property to every new entity (even when false), then edit the property when the case becomes true.
Instead of doing this:
POST
{
  'name':'fred'
}
PUT
{
  'name':'fred',
  'had_cactus_cooler':true
}
Do this:
POST
{
  'name':'fred',
  'had_cactus_cooler':'no'
}
PUT
{
  'name':'fred',
  'had_cactus_cooler':'yes'
}
In general, try to store your data in the way you want to get it out. Since you know up front that you want to query on whether this property exists, always add it, but with a negative value, then update it when the condition becomes true.
You should be able to use this syntax:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and not Partnership eq 'Reject'
Notice that the not operator comes before the expression (as indicated in the docs).
For a web application, I need to get a list or collection of all SalesOrders that meet the following criteria:
Have a WarehouseKey.ID equal to "test", "lucmo" or "Inno"
Have Lines that have a QuantityToBackorder greater than 0
Have Lines that have a RequestedShipDate greater than current day.
I've successfully used these two methods to retrieve documents, but I can't figure out how to return only the ones that meet the above criteria:
http://msdn.microsoft.com/en-us/library/cc508527.aspx
http://msdn.microsoft.com/en-us/library/cc508537.aspx
Please help!
Short answer: your query isn't possible through the GP Web Services. Even your warehouse key isn't an accepted criteria for GetSalesOrderList. To do what you want, you'll need to drop to eConnect or direct table access. eConnect has come a long way in .Net if you use the Microsoft.Dynamics.GP.eConnect and Microsoft.Dynamics.GP.eConnect.Serialization libraries (which I highly recommend). Even in eConnect, you're stuck with querying based on the document header rather than line item values, though, so direct table access may be the only way you're going to make it work.
In eConnect, the key piece you'll need is generating a valid RQeConnectOutType. Note the "FORLIST = 1" part; that's important. Since I've done something similar, here's what it might start out as (you'd need to experiment with the capabilities of the WhereClause; I've never done more than a straightforward equals):
private RQeConnectOutType getRequest(string warehouseId)
{
eConnectOut outDoc = new eConnectOut()
{
DOCTYPE = "Sales_Transaction",
OUTPUTTYPE = 1,
FORLIST = 1,
INDEX1FROM = "A001",
INDEX1TO = "Z001",
WhereClause = string.Format("WarehouseId = '{0}'", warehouseId)
};
RQeConnectOutType outType = new RQeConnectOutType()
{
eConnectOut = outDoc
};
return outType;
}
If you have to drop to direct table access, I recommend going through one of the built-in views. In this case, it looks like ReqSOLineView has the fields you need (LOCNCODE for the warehouseIds, QTYBAOR for backordered quantity, and ReqShipDate for requested ship date). Pull the SOPNUMBE and use them in a call to GetSalesOrderByKey.
And yes, hybrid solutions kinda suck rocks, but I've found you really have to adapt if you're going to use GP Web Services for anything with any complexity to it. Personally, I isolate my libraries by access type and then use libraries specific to whatever process I'm using to coordinate them. So I have Integration.GPWebServices, Integration.eConnect, and Integration.Data libraries that I use practically everywhere and then my individual process libraries coordinate on top of those.