I am using the REST API (https://learn.microsoft.com/en-us/azure/kusto/api/rest/request) to interact with the database in ADX.
I want to create more databases in the same cluster. How should I do it using Java?
I am not using the Java SDK. I have relied on the REST APIs so far.
I think I cannot create a new database using the REST API, so I am looking for an alternative.
It would have been really helpful if there was a command like ".create table tablename", just for databases.
Clusters and databases can be managed using the "Control Plane", aka ARM APIs. These APIs have libraries in different languages (as well as REST).
For instance, for the Java library use this link; for C#, use this link.
Example for how to create a database in C# library (Java should be very similar):
var database = managementClient.Databases.CreateOrUpdate(resourceGroup, clusterName, databaseName, new Database(location, softDeletePeriod: softDeletePeriod, hotCachePeriod: hotCachePeriod));
Read more here
I think you'll need to use the Azure ARM REST API since the database is treated as a resource. From that point you can interact with it through the ADX APIs.
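Since the database is just an ARM resource under the cluster (Microsoft.Kusto/clusters/{cluster}/databases/{database}), the call is a PUT against that resource. Here is a minimal sketch in C# (mirroring the library example above); the api-version, retention periods and resource names are placeholders/assumptions, so check the current ARM reference for the exact payload:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholder names - substitute your own subscription, resource group, cluster and database.
var url = "https://management.azure.com/subscriptions/<subscription-id>"
        + "/resourceGroups/<resource-group>/providers/Microsoft.Kusto"
        + "/clusters/<cluster-name>/databases/<database-name>?api-version=2019-05-15";

// The body shape can vary by api-version; the retention periods are optional ISO 8601 durations.
var body = @"{ ""location"": ""West Europe"",
               ""properties"": { ""softDeletePeriod"": ""P365D"", ""hotCachePeriod"": ""P31D"" } }";

using var http = new HttpClient();
// ARM expects an AAD bearer token issued for https://management.azure.com/.
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<aad-token>");

var response = await http.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);

The same request can be issued from Java with any HTTP client; only the URL, the bearer token and the JSON body matter.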
We process and refresh our AAS models in ADFv2 using API calls.
We want to be able to vertically scale up (change the tier) just before we refresh the model and then scale back down once it's done.
I can't find an API call to do this or any kind of command to execute within a pipeline in ADF.
It's simple for an Azure SQL database:
ALTER DATABASE <db_name> MODIFY (SERVICE_OBJECTIVE = 'service-tier')
For reasons out of my control, using runbooks is not an option.
There must be alternatives.
You can leverage the REST API for altering the tier; the equivalent PowerShell cmdlet is:
Set-AzureRmAnalysisServicesServer -ResourceGroupName $ResourceGroupName -Name $AnalysisServerName -Sku "S1"
Note: you cannot change between the Basic and Standard tiers, either manually or via the API.
You can only move around within the same tier family, e.g. S1, S2, S4, etc.
You can use an ADF Web activity to do the same; a sketch of the underlying call is below.
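For the Web activity (or your own code), the underlying call is a PATCH against the Analysis Services server resource in ARM. A minimal sketch, assuming placeholder resource names and an api-version you should verify against the current reference:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholder resource identifiers - substitute your own values.
var url = "https://management.azure.com/subscriptions/<subscription-id>"
        + "/resourceGroups/<resource-group>/providers/Microsoft.AnalysisServices"
        + "/servers/<server-name>?api-version=2017-08-01";

// Same effect as Set-AzureRmAnalysisServicesServer -Sku "S1": patch only the sku.
var body = @"{ ""sku"": { ""name"": ""S1"", ""tier"": ""Standard"" } }";

using var http = new HttpClient();
// ARM expects an AAD bearer token; in ADF, the Web activity can obtain one via MSI authentication.
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<aad-token>");

var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
{
    Content = new StringContent(body, Encoding.UTF8, "application/json")
};
var response = await http.SendAsync(request);
Console.WriteLine(response.StatusCode);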
I need to regenerate the Service Bus primary and secondary keys on a periodic basis. I am able to do it in the .NET Framework, but I need to do it in .NET Core or .NET 6, as it will be an Azure Function with a timer trigger.
I am using Azure.Messaging.ServiceBus, but I cannot find the methods corresponding to those in Microsoft.ServiceBus.Messaging... for generating the keys or updating the rules.
Can someone please direct me to the documentation or sample code?
Thanks.
You can regenerate the keys using the RegenerateKeysAsync method, which is available in Microsoft.Azure.Management.ServiceBus.Fluent. It regenerates the keys at the namespace level.
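A minimal sketch of that management-plane call, shown here with the non-fluent Microsoft.Azure.Management.ServiceBus client (the service principal values and resource names are placeholders; the Fluent package exposes the same operation):

using System;
using Microsoft.Azure.Management.ServiceBus;
using Microsoft.Azure.Management.ServiceBus.Models;
using Microsoft.Rest.Azure.Authentication;

// Authenticate with a service principal (placeholder values).
var credentials = await ApplicationTokenProvider.LoginSilentAsync(
    "<tenant-id>", "<client-id>", "<client-secret>");

var client = new ServiceBusManagementClient(credentials) { SubscriptionId = "<subscription-id>" };

// Regenerate the primary key of the namespace-level authorization rule.
var keys = await client.Namespaces.RegenerateKeysAsync(
    "<resource-group>", "<namespace-name>", "RootManageSharedAccessKey",
    new RegenerateAccessKeyParameters(KeyType.PrimaryKey));

Console.WriteLine($"New primary key: {keys.PrimaryKey}");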
Alternatively, you can use the code below to generate a new key:
// Generate a new random key and assign it as the rule's primary key.
string newPrimaryKey = SharedAccessAuthorizationRule.GenerateRandomKey();
rule.PrimaryKey = newPrimaryKey;
Here is a sample that covers Azure Service Bus queues with ASP.NET Core services.
REFERENCES:
azure-sdk-for-net/ScenarioTests.TopicsTests.CRUDAuthorizationRules.cs
Update azure service bus queue shared access policy programmatically
I currently create a CosmosDB with the following properties:
cosmosDb = await azure.CosmosDBAccounts
.Define(cosmosDbResource.Name)
.WithRegion(cosmosDbResource.Region)
.WithExistingResourceGroup(cosmosDbResource.ResourceGroup.Name)
.WithKind(DatabaseAccountKind.GlobalDocumentDB)
.WithStrongConsistency()
.WithTags(cosmosDbResource.ResourceGroup.Tags)
.CreateAsync();
The only place I have seen to be able to set Zone Redundancy on is the ReadReplication database, like so:
cosmosDb = await azure.CosmosDBAccounts
.Define(cosmosDbResource.Name)
.WithRegion(cosmosDbResource.Region)
.WithExistingResourceGroup(cosmosDbResource.ResourceGroup.Name)
.WithKind(DatabaseAccountKind.GlobalDocumentDB)
.WithStrongConsistency()
.WithReadReplication(Region.USEast, true)
.WithTags(cosmosDbResource.ResourceGroup.Tags)
.CreateAsync();
The problem is that I don't care about a Read Replication database. I want to set Zone Redundancy on the initial database I create. I noticed that in the Azure Portal when I create a CosmosDB manually, it gives me the option to set Zone Redundancy. Is this not possible via the Azure Libraries for NET SDK?
To specify the write region with Zone Redundancy, do this:
.WithWriteReplication(Region.USWest2, true)
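Put together with the definition from your question, the chain would look roughly like this (the region and the boolean zone-redundancy flag are the only additions):

// Same fluent definition as in the question, with the write region marked zone redundant.
cosmosDb = await azure.CosmosDBAccounts
    .Define(cosmosDbResource.Name)
    .WithRegion(cosmosDbResource.Region)
    .WithExistingResourceGroup(cosmosDbResource.ResourceGroup.Name)
    .WithKind(DatabaseAccountKind.GlobalDocumentDB)
    .WithStrongConsistency()
    .WithWriteReplication(Region.USWest2, true)   // the second argument enables zone redundancy
    .WithTags(cosmosDbResource.ResourceGroup.Tags)
    .CreateAsync();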
PS: If at all possible, I would recommend you use the Auto-rest generated version of this SDK. The fluent API is generally not as up to date as the Auto-rest generated APIs. The Auto-rest SDK is built directly off the Cosmos DB swagger spec, and everything downstream is built upon it, including ARM, PowerShell and CLI.
There is a repository with a fairly complete set of examples as well that you can use to help build your own management libraries. It also includes fluent samples, but those are also out of date: Cosmos DB Samples
This is the repo for the Auto-rest generated SDK: Cosmos DB Management SDK for .NET
Context: We are trying to load some CSV format data into GCP BigQuery using GCP Dataflow (Apache Beam). As part of this, we are creating the BQ tables for the first time (for each table) through the BigQueryIO API. One of the customer requirements is that the data on GCP needs to be encrypted using customer-supplied/managed encryption keys.
Problem Statement: We are not able to find any way to specify "custom encryption keys" through the APIs while creating tables. The GCP documentation details how to specify custom encryption keys through the GCP BQ console, but we could not find anything for specifying them through the APIs from within the Dataflow code.
Code Snippet:
String tableSpec = new StringBuilder().append(PipelineConstants.PROJECT_ID).append(":")
.append(dataValue.getKey().target_dataset).append(".").append(dataValue.getKey().target_table_name)
.toString();
ValueProvider<String> valueProvider = StaticValueProvider.of("gs://bucket/folder/");
dataValue.getValue().apply(Count.globally()).apply(ParDo.of(new RowCount(dataValue.getKey())))
.apply(ParDo.of(new SourceAudit(runId)));
dataValue.getValue().apply(ParDo.of(new PreProcessing(dataValue.getKey())))
.apply(ParDo.of(new FixedToDelimited(dataValue.getKey())))
.apply(ParDo.of(new CreateTableRow(dataValue.getKey(), runId, timeStamp)))
.apply(BigQueryIO.writeTableRows().to(tableSpec)
.withSchema(CreateTableRow.getSchema(dataValue.getKey()))
.withCustomGcsTempLocation(valueProvider)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Query: Could anybody let us know:
Is it possible to provide the encryption key through the Beam API?
If it's not possible with the current version, what could be a possible workaround?
Kindly let us know if additional information is required.
Customer-supplied encryption keys are a new feature; not all libraries have been updated to support them yet.
If you know the table name in advance, you can use the UI/CLI or the API to create the table, then run your normal flow to load data into that table. That might be a workaround for you.
https://cloud.google.com/bigquery/docs/customer-managed-encryption#create_table
API to create table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
You need to set this section on the table object:
"encryptionConfiguration": {
"kmsKeyName": string
}
More details on table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
I am interested to know what commands allow me to write and read data to and from Amazon ElastiCache using the AWS SDK for .NET. I've viewed the online documentation but couldn't figure out how it is done.
What I did in the code: I created two keys in the web.config to store the ID and access password.
AmazonElastiCacheClient client = new AmazonElastiCacheClient(ElasticCache_Id, ElasticCache_Pass);
This initializes the AmazonElastiCacheClient object and passes in the credential strings.
I need sample code that demonstrates how to put data into and retrieve data from the ElastiCache cluster. Thanks.
It looks like you can only manage ElastiCache clusters through the AWS SDK.
You can use any memcached client to read from and write to ElastiCache, since memcached is the underlying technology.
Here is an example:
http://geekswithblogs.net/shaunxu/archive/2010/04/07/first-round-playing-with-memcached.aspx
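For example, with the EnyimMemcached client the read/write side looks roughly like this (the cluster endpoint is a placeholder; use your ElastiCache configuration or node endpoint):

using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

// Point the client at the ElastiCache endpoint (default memcached port 11211).
var config = new MemcachedClientConfiguration();
config.AddServer("my-cluster.abc123.cfg.use1.cache.amazonaws.com", 11211);

using (var client = new MemcachedClient(config))
{
    // Write a value to the cluster...
    client.Store(StoreMode.Set, "greeting", "hello from ElastiCache");

    // ...and read it back.
    string greeting = client.Get<string>("greeting");
}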