The reason is CDN cache entries. Those are specific strings used as cache keys, and to invalidate an entry the strings need to match exactly. Preferably, I'd like to enforce the query parameter order so that the URLs always match the CDN cache keys.
The solution is simply to enforce a deterministic ordering when building the query string.
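A minimal Java sketch of that idea (the class and method names are made up for illustration, not from any library mentioned above): sort the parameters before serializing them, so the same logical request always produces the same URL and therefore the same CDN cache key.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public final class CanonicalQueryString {

    // Copy the parameters into a TreeMap so they are always emitted in
    // alphabetical key order, keeping the resulting URL stable.
    static String build(Map<String, String> params) {
        return new TreeMap<>(params).entrySet().stream()
                .map(e -> encode(e.getKey()) + "=" + encode(e.getValue()))
                .collect(Collectors.joining("&"));
    }

    private static String encode(String s) {
        // URL-encode keys and values so the result is a valid query string.
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}

For example, {b=2, a=1} and {a=1, b=2} both serialize to a=1&b=2, regardless of insertion order.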
Is there a way to store common query statements in Kusto.Explorer for future use? For example:
Most of my queries start with:
set notruncation;
set maxmemoryconsumptionperiterator=68719476736;
set servertimeout = timespan(15m);
I would like to use a 'variable name' to reference these instead of explicitly calling them out every time. Something like this:
Setlimitations
T
| summarize count() by Key
set statements, when used, must be specified as part of each request.
However, you can define a request limits policy with the same settings on the default workload group (or on a custom one), and it will apply to all requests classified to that workload group.
Also see: https://y0nil.github.io/kusto.blog/blog-posts/workload-groups.html
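A rough sketch of that approach, assuming a custom workload group named HeavyQueries; the property names and value formats should be verified against the workload-groups documentation above, and the values simply mirror the set statements in the question (a very large MaxResultRecords standing in for notruncation):

.create-or-alter workload_group ['HeavyQueries'] ```
{
  "RequestLimitsPolicy": {
    "MaxMemoryPerIterator": { "IsRelaxable": true, "Value": 68719476736 },
    "MaxExecutionTime": { "IsRelaxable": true, "Value": "00:15:00" },
    "MaxResultRecords": { "IsRelaxable": true, "Value": 2147483647 }
  }
} ```

Requests still have to be routed to this workload group by the cluster's request classification policy, as described in the blog post above.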
Do note that always running with notruncation, a very high maxmemoryconsumptionperiterator, and an extended servertimeout probably indicates some inefficiency in your workload, and you may want to revisit why these are needed to begin with.
For example, if you're frequently exporting large volumes of data, you may prefer exporting it to cloud storage instead of retrieving it via a query.
DAO.fetch(query) allows us to get a collection of entities from the SQLite database that meet the query condition; query can be a map or a string[]. How can we specify ordering with the ORDER BY clause, and how do we apply the LIMIT and OFFSET clauses? Or do we have to fall back to db.execute(query)?
Currently ORDER BY, LIMIT, and OFFSET clauses aren't supported. It wouldn't be hard to add. Please file an RFE.
Alternatively, it wouldn't be difficult to add this in your own DAO subclass. You can see how fetch(query) is implemented here.
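Until then, the fallback the question mentions (db.execute) works with hand-written SQL, since ORDER BY, LIMIT, and OFFSET are plain SQLite syntax; the table and column names below are made up for illustration:

SELECT *
FROM users
WHERE status = 'active'
ORDER BY name ASC        -- sort the matching rows
LIMIT 20 OFFSET 40;      -- page size 20, skipping the first 40 rows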
I'm using CloudFormation to construct an AWS::DynamoDB::Table resource, and I have my DeletionPolicy set to Retain. Suppose I make a change to the AttributeDefinitions properties of this logical resource, such as renaming a hash key, and then perform a CloudFormation update_stack; such a change requires a 'replacement' of the resource. So far so good; I expect that the existing DynamoDB table is 'deleted' and a new one created in its place with the changed key definition.
However, I'm surprised that the original table is not 'left behind' as a result of the DeletionPolicy. Certainly, it would be possible to block the update entirely via a stack policy, but I was hoping that the DeletionPolicy would result in the now-defunct table being ejected from the CloudFormation stack, with a new one arising in its place, while the old table is nonetheless not actually deleted.
Is this expected behaviour?
Yes, it is expected behavior.
The DeletionPolicy is only applied when you actually delete the whole CloudFormation stack.
Source: DeletionPolicy (docs.aws.amazon.com)
If you want to keep your former DynamoDB tables during an update, you will need to back them up manually beforehand. You may use AWS Data Pipeline to back up your DynamoDB tables to Amazon S3.
Use the CloudFormation resource attribute "UpdateReplacePolicy: Retain".
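For example, a minimal template fragment (resource name and properties abridged) that keeps the old table both on stack deletion and on replacement during an update:

Resources:
  UsersTable:
    Type: AWS::DynamoDB::Table
    # Keeps the table if the whole stack is deleted.
    DeletionPolicy: Retain
    # Keeps the old table when an update forces a replacement;
    # the replaced table is left behind outside the stack.
    UpdateReplacePolicy: Retain
    Properties:
      # AttributeDefinitions, KeySchema, etc. go here.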
I have a user object with these attributes.
id (key), name and email
And I am trying to make sure these attributes are unique in the DB.
How do I prevent a put/create/save operation from succeeding if either of the non-key attributes, email or name, already exists in the DB?
I have a table, tblUsers, with one key attribute, the id.
I then have two global secondary indexes, each also with a single key attribute: email for the first index and name for the second.
I am using the Microsoft .NET Identity framework, which itself checks for existing users with a given name or email before creating a user.
The problem I foresee is the delay between checking for existing users and creating a new one. There is no guarantee that multiple threads won't end up creating two users with the same name or email.
DynamoDB can enforce uniqueness only for the table's hash/range key (not for global secondary index keys).
So in your case there are two options:
1) Enforce it at the application level - if your concern is safety, then use locks (cache locks).
2) Don't use DynamoDB (maybe it doesn't meet your requirements).
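To make the key-uniqueness point concrete: the only check DynamoDB can enforce atomically is on the table key itself, via a conditional write; nothing equivalent exists for the email/name GSI keys. A sketch with the AWS SDK for Java v2, using the table and attribute names from the question:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class UserWriter {

    private final DynamoDbClient dynamo = DynamoDbClient.create();

    // Succeeds only if no item with this id exists yet; the existence check
    // and the put are a single atomic operation on the key attribute.
    public boolean createUser(String id, String name, String email) {
        try {
            dynamo.putItem(PutItemRequest.builder()
                    .tableName("tblUsers")
                    .item(Map.of(
                            "id", AttributeValue.builder().s(id).build(),
                            "name", AttributeValue.builder().s(name).build(),
                            "email", AttributeValue.builder().s(email).build()))
                    .conditionExpression("attribute_not_exists(id)")
                    .build());
            return true;
        } catch (ConditionalCheckFailedException e) {
            return false; // an item with this id already exists
        }
    }
}

Uniqueness for name and email still has to be handled at the application level, as the answer above says.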
I am using Ehcache locally to check for duplicates, with one more check added: if Ehcache is empty (the cache has been reset for some reason), repopulate the cache by querying DynamoDB.
I'm concerned about read performance; I want to know whether setting an indexed field's value to null is faster than giving it a value.
I have lots of items with a status field. The status can be "pending", "invalid", "banned", etc.
My typical request is to find items whose status is "ok" (or null). Since null fields are not saved to the datastore, it is already a win to avoid a "useless" default value that I can replace with null, so I already use less disk space.
But I was wondering: since the datastore is NoSQL, it doesn't know about the data structure, and it doesn't know that the status column is missing. So how does it handle the status = null check in a request?
Does it have to check all columns of each row trying to find my column, or is there some smarter mechanism?
For example, is there an index entry like (null = entity key) when we explicitly pass a column saying it is null? (If that is the case, does Objectify respect this and keep the field in the list it passes to the native API when the field is null?)
And mainly, which request is more efficient?
The low level API (and Objectify) stores and indexes nulls if you specify that a field/property should be indexed. For Objectify, you can specify @IgnoreSave(IfNull.class) or @Unindex(IfNull.class) if you want to alter this behavior. You are probably confusing this with documentation for other data access APIs.
Since GAE only allows you to query for indexed fields, your question is really: Is it better to index nulls and query for them, or to query for everything and filter out non-null values?
This is purely a question of sparsity. If the overwhelming majority of your records contain null values, then you're probably better off querying for everything and filtering out the ones you don't want manually. A handful of extra entity reads are probably cheaper than updating and storing an extra index. On the other hand, if null records are a small percentage of your data, then you will certainly want the index.
This indexing dilemma is not unique to GAE. All databases present this question with respect to low-cardinality fields; it's just that they'll do the table scan (testing and skipping rows) for you.
If you really want to fine-tune this behavior, read Objectify's documentation on Partial Indexes.
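A small sketch of what that looks like in Objectify (the Item entity and the status field are stand-ins for your model): index status only when it is non-null, so the common null/"ok" case doesn't pay for index writes.

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Index;
import com.googlecode.objectify.condition.IfNotNull;

@Entity
public class Item {
    @Id
    Long id;

    // Partial index: an index entry is written only when status is non-null,
    // so the many null records do not bloat the index (but they also cannot
    // be found by an equality query on status).
    @Index(IfNotNull.class)
    String status;
}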
null is also treated as a value in the datastore, and there will be entries for null values in indexes. The Datastore documentation says, "Datastore distinguishes between an entity that does not possess a property and one that possesses the property with a null value."
Datastore will never check all columns or all records. If you have this property indexed, it will get records from the index only. If it is not indexed, you cannot query by that property.
In terms of query performance, it should be the same, but you can always profile and check.
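For completeness, a sketch of the "query for null" case with Objectify, assuming the status field is indexed even when null (i.e. no partial-index condition excluding nulls, unlike the earlier sketch); whether this beats loading everything and filtering in code comes down to the sparsity argument above, so profile both.

import java.util.List;
import com.googlecode.objectify.ObjectifyService;

public class NullStatusQuery {

    // Returns only entities whose status property exists and is set to null;
    // entities that lack the property entirely are not matched.
    static List<Item> loadNullStatus() {
        return ObjectifyService.ofy()
                .load()
                .type(Item.class)
                .filter("status", null)
                .list();
    }
}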