I am developing a set of scripts for Azure, and I would like to know how to populate a Cosmos DB collection with az.
Currently I know how to create a database and collection, but how do I load data into the database?
az cosmosdb create \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT
az cosmosdb database create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--db-name $COSMOS_DB_NAME
az cosmosdb collection create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--collection-name $COSMOS_DB_COLLECTION_NAME \
--db-name $COSMOS_DB_NAME \
--partition-key-path $COSMOS_DB_COLLECTION_PARTITION_KEY
Reading the documentation, I didnĀ“t see a solution.
az doesn't provide any data-movement options for Cosmos DB.
For the SQL API, you'll either need to create your own command-line tool, or use the Cosmos DB-supplied Data Migration Tool (Windows-only, unlike az), which provides a command-line interface. For example:
dt /s:JsonFile /s.Files:.\inputdata.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<name>;AccountKey=<key>;Database=<db>;" /t.Collection:<collname> /t.CollectionThroughput:<throughput>
This has support for the MongoDB API as well, but you can also use native command-line tools such as mongoimport.
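For the MongoDB API, a mongoimport invocation against a Cosmos DB account might look like the sketch below; the account name, key, database, and collection are placeholders for your own values:

```shell
# Bulk-load a JSON file into a Cosmos DB MongoDB-API collection.
# <account> and <key> are your Cosmos DB account name and primary key;
# Cosmos DB's MongoDB endpoint requires SSL on port 10255.
mongoimport \
  --host <account>.documents.azure.com:10255 \
  --ssl \
  --username <account> \
  --password <key> \
  --db <db> \
  --collection <collname> \
  --file inputdata.json
```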
I have two different background Cloud Functions doing similar things, but using different algorithms and libraries for testing.
Now I want to measure the total GB-seconds and GHz-seconds per function to optimize for pricing.
I think this is available in the Functions metrics explorer, but I can't create the report.
One option is to attach labels to your Cloud Functions, then go to your GCP billing report, group by SKU, and filter it by labels to see the breakdown per function.
The only downside is that labels can currently only be configured via the gcloud command, the GCP Client Libraries, or the REST API; it's not yet available in the Firebase CLI (feature request here).
In this first approach, you'll have to redeploy your functions using gcloud. Here's an example; further information can be found in this documentation:
gcloud functions deploy FUNCTION_NAME \
--runtime RUNTIME \
--trigger-event "providers/cloud.firestore/eventTypes/document.write" \
--trigger-resource "projects/YOUR_PROJECT_ID/databases/(default)/documents/messages/{pushId}" \
--update-labels KEY=VALUE
The second approach, which avoids redeployment, is to send a PATCH request to add or update labels on your function. It can be done by running this command (replace the all-caps placeholders with your input):
curl \
--request PATCH \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"labels\":{\"KEY\":\"VALUE\"}}" \
https://cloudfunctions.googleapis.com/v1/projects/PROJECT-ID/locations/REGION/functions/FUNCTION-NAME?updateMask=labels
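Whichever approach you use, you can confirm the labels were applied with gcloud functions describe (FUNCTION_NAME and REGION below are placeholders for your own values):

```shell
# Show only the labels attached to a deployed function.
gcloud functions describe FUNCTION_NAME \
  --region REGION \
  --format="value(labels)"
```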
Is it possible to switch a Cosmos DB container from manual to autoscale throughput using ARM templates?
I'm trying to achieve this with the following ARM template, but the throughput settings are still set to manual:
{
  "name": "db/collection/container/default",
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings",
  "apiVersion": "2020-03-01",
  "properties": {
    "resource": {
      "throughput": "4000",
      "autoscaleSettings": {
        "maxThroughput": "800000"
      }
    }
  }
}
It is not possible to do this, as this migration is a POST on the Cosmos DB resource provider rather than a template deployment.
The only way to migrate from standard to autoscale throughput is to use the Azure Portal, PowerShell, or the Azure CLI. You can then update your ARM templates and change the throughput amount by redeploying the template with the appropriate throughput JSON in the resource options.
Here is a PowerShell example for migrating a container from standard to autoscale.
Invoke-AzCosmosDBSqlContainerThroughputMigration `
-ResourceGroupName $resourceGroupName `
-AccountName $accountName `
-DatabaseName $databaseName `
-Name $containerName `
-ThroughputType Autoscale
More PowerShell examples
Here is a CLI example for migrating a container from standard to autoscale.
az cosmosdb sql container throughput migrate \
-a $accountName \
-g $resourceGroupName \
-d $databaseName \
-n $containerName \
-t 'autoscale'
More CLI examples
If doing this for other database APIs, find the PowerShell or CLI examples in the docs. There are examples for all database APIs.
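After the migration, the throughputSettings resource in the template should carry only autoscaleSettings, with no fixed throughput property alongside it. A sketch of what that could look like, assuming the same resource names as in the question (the maxThroughput value is illustrative):

```json
{
  "name": "db/collection/container/default",
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings",
  "apiVersion": "2020-03-01",
  "properties": {
    "resource": {
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
```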
Background
I'm trying to learn to use Azure Blob storage through the Azure docs.
Troubles
I got an error when I tried to run these commands.
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
These commands set the container's public access level.
This is my container status.
This is my storage account status.
Error
Please remove the character \ if the command is on one line. The backslash \ is only used to continue a command onto another line when it is too long.
The below command is working:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
The test result:
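To double-check the change took effect, you can read the permission back with az storage container show-permission, reusing the same variables as above:

```shell
# Show the container's current public-access setting.
az storage container show-permission \
  --account-name $blobStorageAccount \
  --account-key $blobStorageAccountKey \
  --name thumbnails
```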
Microsoft documentation for az cosmosdb collection create
says that --partition-key-path can be used to name a key to use for a collection. In this case, it's a "MongoDB" collection:
name='myName'
databaseName='myDatabase'
resourceGroupName='myRg'
collectionName='myCollection'
echo "Create database account"
az cosmosdb create --name $name --kind MongoDB --locations "Central US"=0 --resource-group $resourceGroupName
echo "Create database"
az cosmosdb database create --name $name --db-name $databaseName --resource-group $resourceGroupName
echo "Create collection $collectionName"
az cosmosdb collection create --name $name --db-name $databaseName --resource-group $resourceGroupName --collection-name $collectionName --partition-key-path '/partition'
What do I need to change to avoid the following error and create a partitioned collection?
Create database account
...
Create database
...
Create collection myCollection
ERROR: Operation Failed: Invalid Arg {"Errors":["The partition key component definition path
'C:\/Apps\/Git\/partition' could not be accepted, failed near position '0'. Partition key paths must contain only
valid characters and not contain a trailing slash or wildcard character."]}
So after all the back and forth, it appears to be a path-escaping issue: the partition key, when provided in single quotes, was treated as a filesystem path by the shell (Git Bash on Windows automatically converts POSIX-style paths to Windows paths, which is likely why '/partition' became 'C:/Apps/Git/partition').
However, changing single quotes to double quotes worked.
When run with cmd /C, the following also seems to work:
cmd "/C az cosmosdb collection create --name $name --db-name $databaseName --resource-group $resourceGroupName --collection-name $collectionName --partition-key-path "/partition""
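If the shell is Git Bash on Windows, its automatic POSIX-to-Windows path conversion can also be suppressed for a single invocation via the MSYS_NO_PATHCONV environment variable; a sketch reusing the same variables as above:

```shell
# Disable MSYS path conversion for this one invocation so that
# /partition is passed through to az unmodified.
MSYS_NO_PATHCONV=1 az cosmosdb collection create \
  --name $name \
  --db-name $databaseName \
  --resource-group $resourceGroupName \
  --collection-name $collectionName \
  --partition-key-path '/partition'
```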
The docs mention that it can be accessed via CLI, but there are no samples for CLI in the docs, only PowerShell.
Any advice on where to get started?
You can find the details on how to manage Cognitive Services in the Cognitive Services Azure CLI documentation.
As usual with Azure Government:
Set your CLI to Azure Government via az cloud set --name AzureUSGovernment.
Use the Azure Government regions.
Use the subset of APIs available in Azure Government (see this doc for the literal values to use with --kind).
Use the subset of SKUs available in Azure Government.
Here's a quick sample on how to get going:
az cloud set --name AzureUSGovernment
az login
az group create -n cogstestrg -l usgovvirginia
az cognitiveservices account create -n cogstestcv -g cogstestrg --sku S0 --kind ComputerVision -l usgovvirginia
az cognitiveservices account show -g cogstestrg -n cogstestcv
az cognitiveservices account keys list -g cogstestrg -n cogstestcv
#Make sure you REPLACE_WITH_YOUR_KEY in the curl command below using the key from the previous command
curl -v -X POST "https://virginia.api.cognitive.microsoft.us/vision/v1.0/analyze?visualFeatures=Categories,Description,Color&details=&language=en" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: REPLACE_WITH_YOUR_KEY" --data-ascii "{'url' : 'http://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg'}"
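To avoid pasting the key by hand, you can capture it with a JMESPath query and reference it in the request; a sketch assuming the same resource names as above (key1 is the field returned by az cognitiveservices account keys list):

```shell
# Grab the first key as plain text and use it in the request header.
COGS_KEY=$(az cognitiveservices account keys list \
  -g cogstestrg -n cogstestcv --query key1 -o tsv)

curl -X POST "https://virginia.api.cognitive.microsoft.us/vision/v1.0/analyze?visualFeatures=Description" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: $COGS_KEY" \
  --data-ascii "{'url' : 'http://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg'}"
```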