I am going to create a policy update by taking the policy definition from another table.
Let's assume we have a sampleTable table with the following update policy definition:
.alter table sampleTable policy update @'[{"Source": "sourceTable", "Query": "function()", "IsEnabled": "True", "IsTransactional": false}]';
I would like to use the same policy for a newTable that was used for sampleTable. I have tried something like the below:
let definition = (.show table sampleTable policy update | project Policy);
.alter table sampleTable policy update definition ;
I strongly believe that this is doable, but I don't know the syntax.
Could you please help me?
A control command must start with a dot (.): https://stackoverflow.com/a/55387571/7861807
You need to explicitly specify the policy as a string literal; you can't base it on the result of a different query/command.
You can orchestrate this programmatically using the API: run a command to get the policy definition (as a string), then generate the follow-up command using that string, and then invoke the generated command.
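For illustration, a minimal sketch of that orchestration with the azure-kusto-data Python client (the cluster URI, database name, and authentication method here are placeholder assumptions):

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster/database; substitute your own and a suitable auth method.
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://mycluster.kusto.windows.net")
client = KustoClient(kcsb)
database = "MyDatabase"

# 1. Run a command to get the policy definition (as a string).
response = client.execute_mgmt(database, ".show table sampleTable policy update")
policy = response.primary_results[0].rows[0]["Policy"]

# 2. Generate the .alter command, embedding the policy as a verbatim string
#    literal (single quotes inside it must be doubled to keep the literal valid).
command = ".alter table newTable policy update @'{}'".format(policy.replace("'", "''"))

# 3. Invoke the generated command.
client.execute_mgmt(database, command)
```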
I went through some posts and came to know that case-insensitive search is not possible in DynamoDB, hence I am trying to update an existing DynamoDB table's column values to lowercase.
I searched for the syntax but haven't found any satisfactory result. In MySQL we achieve the same thing with:
SET name = LOWER(name)
Please help me write the same thing in DynamoDB.
I wrote this query:
aws dynamodb update-item --profile test --table-name test-event-tickets --key '{"university_id": {"S": "112"}}' --update-expression 'SET #nameAttribute = :inputname' --expression-attribute-names '{"#nameAttribute": "name"}' --expression-attribute-values '{":inputname": {"S": "george philips"}}'
but here I have hardcoded inputname to "george philips". Instead of this, I want to read the column value and convert it to lowercase.
Unfortunately, there is no such syntax in DynamoDB. Although DynamoDB is capable of doing some transformations to data in-place, such as incrementing a counter, the syntax for this is very limited, and lowercasing a value is NOT one of the things you can do.
So you'll have to scan the entire table, reading the old value of the attribute, calculating the lowercase version in your application, and writing the value back. If your application is doing regular writes in parallel to this transformation, you'll need to be very careful not to overwrite data that is being written in parallel.
You can do this with a condition expression, but I think it will be easier if the new lowercase attribute has a different name from the old not-always-lowercase attribute: your transformation process can then write the new attribute (using ConditionExpression) only if it is not yet set.
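A minimal sketch with boto3 follows; the table and attribute names are taken from the question, while name_lower is a hypothetical new attribute for the lowercased values, per the approach described above:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("test-event-tickets")

scan_kwargs = {
    # "name" is a DynamoDB reserved word, so it needs a placeholder.
    "ProjectionExpression": "university_id, #n",
    "ExpressionAttributeNames": {"#n": "name"},
}
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        if "name" not in item:
            continue
        try:
            table.update_item(
                Key={"university_id": item["university_id"]},
                # Write the lowercased value into the new attribute only if it
                # hasn't been set yet, so parallel writers aren't clobbered.
                UpdateExpression="SET name_lower = :v",
                ConditionExpression="attribute_not_exists(name_lower)",
                ExpressionAttributeValues={":v": item["name"].lower()},
            )
        except ClientError as e:
            # A failed condition just means another writer got there first.
            if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
    if "LastEvaluatedKey" not in page:
        break
    # Paginate through the whole table.
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```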
I am building a parameterised Mapping Data Flow pipeline and have run into a problem that I need help with.
My ADF Load is based on a config file, a sample of which is given below:
I would like the ability to join using the Stagekeys column in my config file using the EXISTS transformation shown below
Any suggestions on how I can achieve it?
Kind Regards
If my understanding is right, we can parameterize the key columns and prepare the Exists expression.
FYI, the attached condition is for a single key; we can extend that to multiple keys as "source1#keyColumn1 == source2#keyColumn1 && source1#keyColumn2 == source2#keyColumn2"
(Screenshots in the original answer: the dataflow parameter and the Exists expression.)
For multiple keys from the same target table, you can use the following expression and send the key columns as an array:
array(byNames($pKeyColumns,'sourceADLSCSV')) == array(byNames($pKeyColumns,'targetASQL'))
(Screenshots in the original answer: the pipeline parameter, the dataflow parameter, and the Exists expressions.)
Can we insert data into a Kusto table using Flow?
I tried to insert data into a Kusto table using the .ingest inline command, but it throws the bad request error shown below:
Bad request: Control commands (starting with a dot '.') cannot be served from the query endpoint unless they are .show control commands. Please provide the following information when contacting the Kusto.
So can we insert data into a Kusto table using Flow?
It is possible, you just need to choose the "Run control command..." action instead of the "Run query..." option (as .ingest, like any other command that starts with a dot (.), is a control command, and not a query).
That said, using direct ingestion is not necessarily recommended for large scale - you can read more about why here: https://learn.microsoft.com/en-us/azure/kusto/management/data-ingestion/
Inline ingestion (push): A control command (.ingest inline) is sent to the engine, with the data to be ingested being a part of the command text itself. This method is primarily intended for ad-hoc testing purposes, and should not be used for production purposes.
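For reference, the same distinction applies outside of Flow when using the client libraries: control commands must go through the management endpoint. A minimal sketch with the azure-kusto-data Python package (the cluster, database, and table are placeholder assumptions, and the inline data must match the table's schema):

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://mycluster.kusto.windows.net")  # hypothetical cluster
client = KustoClient(kcsb)

# .ingest inline is a control command, so it is sent via execute_mgmt (the
# management endpoint); sending it as a query raises the error quoted above.
client.execute_mgmt("MyDatabase",
                    ".ingest inline into table MyTable <| hello,2023-01-01T00:00:00Z")
```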
@Yoni L
Thank you, it worked, but we need to choose the Chart Type as HTML Table.
I am trying to create a function that accepts the name of a tag and a datetime value, drops the extent within a specific table which has that tag, and then ingests a new record into that table with the same tag and the input datetime value -- a sort of 'update' simulation. I am not bothered about performance; it's just going to hold metadata -- maybe 20-30 rows at most.
So this is how the table creation looks:
.create table MyTable(sometext:string,somevalue:datetime)
And shown below is my function creation step, which is failing:
.create-or-alter function MyFunction(arg_sometext:string,arg_somedate:datetime)
{
.drop extents <| .show table MyTable extents where tags has arg_sometext;
.ingest inline into table MyTable with (tags="[arg_sometext]") <| arg_somedate
}
So you can see I am trying to do something simple -- I suspect that Kusto won't allow commands in a function. Is there any workaround for achieving this?
Generally:
Kusto mandates that control commands start with a dot (.), and that this must be the first character in the text of the command. As queries, functions, etc. don't start with a dot, this precludes them from invoking control commands.
This is an intentional limitation that prevents a wide range of code injection attacks. By imposing this rule, Kusto makes it easy to guarantee that any query that does not begin with a dot will only have read access to the data and metadata, and never be able to alter them.
Specifically, with regard to your scenario:
I'm assuming this logic is triggered automatically (even if you did have the option to create such a function), which suggests you should be able to achieve your goal using Kusto's API / client libraries and a simple script/app.
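A minimal sketch of such a script with the azure-kusto-data Python client (the cluster and database are placeholder assumptions; the table, tag, and datetime come from the question):

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://mycluster.kusto.windows.net")  # hypothetical cluster
client = KustoClient(kcsb)
database = "MyDatabase"  # hypothetical database

def upsert(sometext: str, somedate: str) -> None:
    # Drop the extents in MyTable that carry the given tag...
    client.execute_mgmt(
        database,
        f".drop extents <| .show table MyTable extents where tags has '{sometext}'")
    # ...then ingest the replacement record (sometext,somevalue) with the same tag.
    client.execute_mgmt(
        database,
        f".ingest inline into table MyTable with (tags='[\"{sometext}\"]') <| "
        f"{sometext},{somedate}")

upsert("my-tag", "2023-01-01T00:00:00Z")
```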
An alternative, and perhaps even better, approach would be to reconsider whether you actually need to delete or update specific records, or whether you can use summarize arg_max() to query only the latest "versions" of the records (you could also create a function that encapsulates that logic and overrides the table, by naming the function with the table's name).
Can we create multiple schemas for a particular user? I am currently logged in as the X/Y user, and when I tried creating a schema using create schema authorization sample_schema, I got the error "the schema name is missing or is incorrect in an authorization clause of a create schema statement". I do know that a default schema X would have been created.
Contrary to its name, CREATE SCHEMA in Oracle does not create a new schema.
It is merely a shorthand to create several tables in a single statement.
Quote from the manual:
Use the CREATE SCHEMA statement to create multiple tables and views and perform multiple grants in your own schema in a single transaction
and further down, the explanation of what the "schema" name parameter is:
The schema name must be the same as your Oracle Database username.
Well, you could create a user named sample_schema (from the above example) and give the X/Y user permission to use the sample_schema tablespace.
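To illustrate with the python-oracledb driver (all names and credentials here are hypothetical): creating the user is what creates the schema, and grants then let it be used.

```python
import oracledb

# Connect as a privileged user that has the CREATE USER privilege.
conn = oracledb.connect(user="admin", password="secret", dsn="localhost/XEPDB1")
cur = conn.cursor()

# Creating the user implicitly creates the schema of the same name.
cur.execute('CREATE USER sample_schema IDENTIFIED BY "StrongPassword1"')
cur.execute("ALTER USER sample_schema QUOTA UNLIMITED ON users")

# Allow the sample_schema user to log in and create tables in its own schema.
cur.execute("GRANT CREATE SESSION, CREATE TABLE TO sample_schema")
```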