Switch Cosmos DB from manual to autoscale

Is it possible to switch a Cosmos DB container from manual to autoscale throughput using ARM templates?
I'm trying to achieve this with the following ARM template, but the throughput settings remain set to manual:
{
  "name": "db/collection/container/default",
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/throughputSettings",
  "apiVersion": "2020-03-01",
  "properties": {
    "resource": {
      "throughput": "4000",
      "autoscaleSettings": {
        "maxThroughput": "800000"
      }
    }
  }
},

It is not possible to do this from an ARM template: the migration is a POST action on the Cosmos DB resource provider, and ARM template deployments only perform PUTs.
The only way to migrate from standard (manual) to autoscale throughput is to use the Azure Portal, PowerShell, or the Azure CLI. Once migrated, you can modify your ARM templates with the appropriate autoscale throughput JSON and change the throughput amount by redeploying the template.
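For that redeploy step, a minimal sketch (assuming your updated template is saved as template.json; the file name and resource group variable are placeholders):
# Redeploy the template after migration to change the autoscale max throughput
az deployment group create \
  --resource-group $resourceGroupName \
  --template-file template.json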
Here is a PowerShell example that migrates a container from standard to autoscale:
Invoke-AzCosmosDBSqlContainerThroughputMigration `
-ResourceGroupName $resourceGroupName `
-AccountName $accountName `
-DatabaseName $databaseName `
-Name $containerName `
-ThroughputType Autoscale
More PowerShell examples can be found in the Azure docs.
Here is an Azure CLI example that migrates a container from standard to autoscale:
az cosmosdb sql container throughput migrate \
-a $accountName \
-g $resourceGroupName \
-d $databaseName \
-n $containerName \
-t 'autoscale'
More CLI examples can be found in the Azure docs.
If you are doing this for one of the other database APIs, you can find the PowerShell or CLI examples in the docs; there are examples for all of the database APIs.
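To confirm the migration, and to adjust the autoscale maximum later, the CLI can read and update the container's throughput. A small sketch using the same variables as above:
# Show current throughput settings; autoscaleSettings appears once migrated
az cosmosdb sql container throughput show \
  -a $accountName -g $resourceGroupName \
  -d $databaseName -n $containerName
# Update the autoscale maximum RU/s afterwards
az cosmosdb sql container throughput update \
  -a $accountName -g $resourceGroupName \
  -d $databaseName -n $containerName \
  --max-throughput 4000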

Related

How can I determine the managed identity of the Azure VM a script is running on?

For post-processing with AzD (the Azure Developer CLI) I need to authorize the managed identity of the Azure VM the script is currently running on against the subscription selected by AzD. How can I determine the VM's managed identity with the help of the metadata endpoint?
I created the script authorize-vm-identity.sh below, which determines the VM's resourceId (which could be in a different subscription than the actual resources managed by AzD) from the metadata endpoint, and then obtains the managed identity's principalId to make the actual role assignment:
#!/bin/bash

# Export the azd environment values (e.g. AZURE_SUBSCRIPTION_ID) into this shell
source <(azd env get-values | sed 's/AZURE_/export AZURE_/g')

# Ask the instance metadata service (IMDS) for the resource ID of this VM
AZURE_VM_ID=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq -r '.compute.resourceId')

if [ -n "$AZURE_VM_ID" ]; then
    # Resolve the principalId of the VM's system-assigned managed identity
    AZURE_VM_MI_ID=$(az vm show --id "$AZURE_VM_ID" --query 'identity.principalId' -o tsv)
fi

if [ -n "$AZURE_VM_MI_ID" ]; then
    # Grant the managed identity Contributor rights on the azd subscription
    az role assignment create --role Contributor --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
fi
Prerequisites:
Azure CLI
jq
curl
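To verify the assignment afterwards, a quick check along these lines should work (same variables as in the script):
# List the role assignments for the VM's managed identity at subscription scope
az role assignment list \
  --assignee "$AZURE_VM_MI_ID" \
  --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID" \
  -o table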

How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that publishes the rules to Firebase in the cloud every time I merge a PR into develop. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says:
#firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!
I found a way to do that.
By using firebase-tools-docker I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'

stage('Pull docker image') {
    sh "docker pull $firebaseDocker"
}

stage('Unit tests') {
    // Start the emulator suite in the background, mounting the workspace
    sh "docker run -d --rm \
            --user $jenkinsUser:$jenkinsUser \
            -p 8080:8080 \
            -v ${pwd()}:/home/node \
            --name firebase-emulators \
            $firebaseDocker \
            firebase emulators:start"
    // Give the emulators a moment to come up before running the tests
    sleep(5)
    sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
    sh "docker stop firebase-emulators"
}
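As an aside, the fixed sleep(5) can be flaky on slow agents. Polling the emulator port until it answers is more robust; a minimal sketch (assuming the Firestore emulator listens on 8080, as in the docker run above), which could replace the sleep as an sh step:
# Wait up to ~30s for the Firestore emulator to answer on port 8080
for i in $(seq 1 30); do
  curl -s http://localhost:8080 >/dev/null && break
  sleep 1
done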
Hope this helps 😉

How to populate a CosmosDB collection by command line?

I am developing a set of scripts for Azure and I would like to know how to populate a Cosmos DB collection with az.
Currently, I know how to create a database and a collection, but how do I initialize the database with data?
az cosmosdb create \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT
az cosmosdb database create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--db-name $COSMOS_DB_NAME
az cosmosdb collection create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--collection-name $COSMOS_DB_COLLECTION_NAME \
--db-name $COSMOS_DB_NAME \
--partition-key-path $COSMOS_DB_COLLECTION_PARTITION_KEY
Reading the documentation, I didn't find a solution.
az doesn't provide any data-movement options for Cosmos DB.
For the SQL API, you'll either need to create your own command-line tool, or use the Cosmos DB-supplied Data Migration Tool (Windows-only, unlike az), which provides a command-line interface. For example:
dt /s:JsonFile /s.Files:.\inputdata.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<name>;AccountKey=<key>;Database=<db>;" /t.Collection:<collname> /t.CollectionThroughput:<throughput>
This has support for the MongoDB API as well, but you can also use native command-line tools such as mongoimport.
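For the MongoDB API route, a hedged sketch of a mongoimport call (the account, key, database, and collection names are placeholders for your own values):
# Import a JSON array file into a Cosmos DB (MongoDB API) collection
mongoimport \
  --uri "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/<db>?ssl=true" \
  --collection <collname> \
  --file inputdata.json \
  --jsonArray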

Azure storage container permission cannot be set

Background
I am trying to learn to use Azure Blob storage by following the Azure docs.
Troubles
I got an error when I ran these commands:
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
These commands are meant to set the container's public access.
(Screenshots of the container status, the storage account status, and the resulting error are omitted here.)
Please remove the \ characters if the command is on one line. The backslash is only used when a command is too long and needs to continue on another line.
The command below works:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
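Alternatively, keep the line continuations, but make sure each \ is the last character on its line:
az storage container set-permission \
  --account-name $blobStorageAccount \
  --account-key $blobStorageAccountKey \
  --name thumbnails \
  --public-access off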

Spinnaker Nexus Integration

I'm facing an issue while integrating Spinnaker with Nexus.
Basically, here is my process: build a Docker image with Jenkins and upload it to Nexus. Next, I want to trigger Spinnaker pipelines when a new image becomes available in Nexus, to deploy apps on Kubernetes.
I've used these two commands:
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--username <userName> \
--password
I'm getting the error below:
+ Get current deployment
Success
- Add the my-docker-registry account
Failure
Problems in default.provider.dockerRegistry.my-docker-registry:
! ERROR Unable to fetch tags from the docker repository:
repository/test-docker-snapshots/, Unrecognized SSL message, plaintext
connection?
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
- Failed to add account my-docker-registry for provider
dockerRegistry.
Is it mandatory to have Nexus on HTTPS? I'm running it on HTTP, and using it in an internal network only.
Please advise. Thanks.
If your Nexus repo is running on HTTP, then you should set the --insecure-registry flag in your command. Your final command would then be as follows:
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--insecure-registry true \
--username <userName> \
--password
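After the account is added successfully, remember that Halyard changes only take effect once applied; assuming a standard Halyard setup:
# Apply the updated configuration to the running Spinnaker deployment
hal deploy apply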
