The docs mention that it can be accessed via the CLI, but there are no CLI samples in the docs, only PowerShell ones.
Any advice on where to get started?
You can find the details on how to manage Cognitive Services in the Cognitive Services Azure CLI documentation.
As usual with Azure Government:
Set your CLI to Azure Government via az cloud set --name AzureUSGovernment.
Use the Azure Government regions.
Use the subset of APIs available in Azure Government (see this doc for the literal values to use with --kind).
Use the subset of SKUs available in Azure Government.
Here's a quick sample on how to get going:
az cloud set --name AzureUSGovernment
az login
az group create -n cogstestrg -l usgovvirginia
az cognitiveservices account create -n cogstestcv -g cogstestrg --sku S0 --kind ComputerVision -l usgovvirginia
az cognitiveservices account show -g cogstestrg -n cogstestcv
az cognitiveservices account keys list -g cogstestrg -n cogstestcv
# Replace REPLACE_WITH_YOUR_KEY in the curl command below with one of the keys returned by the previous command
curl -v -X POST "https://virginia.api.cognitive.microsoft.us/vision/v1.0/analyze?visualFeatures=Categories,Description,Color&details=&language=en" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: REPLACE_WITH_YOUR_KEY" --data-ascii "{'url' : 'http://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg'}"
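If you'd rather script the key retrieval instead of copy-pasting it, something like the following should work (a sketch; key1 is the property name in the keys list output):
# Capture the first account key and reuse it in the request header
COGS_KEY=$(az cognitiveservices account keys list -g cogstestrg -n cogstestcv --query key1 -o tsv)
curl -v -X POST "https://virginia.api.cognitive.microsoft.us/vision/v1.0/analyze?visualFeatures=Categories,Description,Color&details=&language=en" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: $COGS_KEY" --data-ascii "{'url' : 'http://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg'}"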
For post-processing with AzD (Azure Developer CLI), I need to authorize the managed identity of the Azure VM that the script is currently running on for the subscription selected by AzD. How can I determine the managed identity of the VM with the help of the metadata endpoint?
I created this script, authorize-vm-identity.sh, which determines the VM's resource ID (which could be in a different subscription than the actual resources managed by AzD) from the metadata endpoint and then obtains the managed identity's principalId to make the actual role assignment:
#!/bin/bash
# Export the azd environment values (AZURE_SUBSCRIPTION_ID etc.) into this shell
source <(azd env get-values | sed 's/AZURE_/export AZURE_/g')

# Query the instance metadata service for this VM's resource ID
AZURE_VM_ID=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq -r '.compute.resourceId')

if [ -n "$AZURE_VM_ID" ]; then
    # Resolve the VM's system-assigned managed identity
    AZURE_VM_MI_ID=$(az vm show --id "$AZURE_VM_ID" --query 'identity.principalId' -o tsv)
fi

if [ -n "$AZURE_VM_MI_ID" ]; then
    # Grant the managed identity Contributor on the subscription selected by azd
    az role assignment create --role Contributor --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
fi
Prerequisites:
Azure CLI
jq
curl
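After the script runs, you can sanity-check the result with something like this (a quick verification using the same variables the script sets, not part of the script itself):
# List the role assignments held by the VM's managed identity at the subscription scope
az role assignment list --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID" -o table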
I have two different background Cloud Functions doing similar things but using different algorithms and libraries, for testing.
Now I want to measure the total GB-seconds and GHz-seconds consumed per function to optimize for pricing.
I think this is available in Functions metrics explorer, but I can't create the report.
One option is to attach labels to your Cloud Functions, then go to your GCP billing report, group by SKU, and filter it by labels to see the breakdown per function.
The only downside is that labels can only be configured through the gcloud command, the GCP client libraries, or the REST API; it's currently not yet available in the Firebase CLI (feature request here).
With this first approach, you'll have to redeploy your functions using gcloud. Here's an example; further information can be found in this documentation:
gcloud functions deploy FUNCTION_NAME \
--runtime RUNTIME \
--trigger-event "providers/cloud.firestore/eventTypes/document.write" \
--trigger-resource "projects/YOUR_PROJECT_ID/databases/(default)/documents/messages/{pushId}" \
--update-labels KEY=VALUE
The second approach, which avoids redeployment, is to send a PATCH request to add or update labels on your function. It can be done by running this command (replace the all-caps placeholders with your values):
curl \
--request PATCH \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"labels\":{\"KEY\":\"VALUE\"}}" \
https://cloudfunctions.googleapis.com/v1/projects/PROJECT-ID/locations/REGION/functions/FUNCTION-NAME?updateMask=labels
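Either way, you can confirm the labels were applied with a describe call (FUNCTION_NAME and REGION are your own values):
# Print the labels currently set on the function
gcloud functions describe FUNCTION_NAME --region REGION --format="value(labels)"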
I've run Artifactory using Docker.
I downloaded the JFrog CLI inside the container and configured it.
So ./jfrog rt ping returns
OK
Is there a way to perform a system-level export/import using the JFrog CLI?
I succeeded in doing it using the web UI, but couldn't find information on how to perform a system-level export/import in the documentation.
Edit
I succeeded in performing the export using the REST API:
curl -u admin:pass -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json http://localhost:8081/artifactory/api/export/system
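For reference, a minimal export-settings.json for this call might look like the sketch below (field names per the system export settings; exportPath must point to a writable path inside the container):
# Write a minimal settings file for the export call
cat > /tmp/export-settings.json <<'EOF'
{
  "exportPath": "/tmp/artifactory-export",
  "includeMetadata": true,
  "createArchive": false,
  "excludeContent": false
}
EOF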
You can invoke the same REST API using JFrog CLI's curl command as shown below. This way, you don't need to provide the URL and credentials. JFrog CLI's config storage will be used. You can manage this storage using the jfrog rt c command.
If you have multiple Artifactory servers configured and you don't want to use the default one, the jfrog rt curl command also accepts the --server-id option, with the preconfigured Artifactory server ID as the value.
jfrog rt curl -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json api/export/system
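For example, to register a second server and target it explicitly (the server ID, URL, and credentials below are placeholders):
# Store a named server configuration in JFrog CLI's config storage
jfrog rt c my-rt --url=http://localhost:8081/artifactory --user=admin --password=pass
# Run the same export against that specific server
jfrog rt curl --server-id=my-rt -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json api/export/system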
This feature is currently not supported by the CLI.
Feel free to create a feature request.
I am developing a set of scripts for Azure and I would like to know how to populate a Cosmos DB collection with az.
Currently, I know how to create a database and a collection, but how do I initialize the database with data?
az cosmosdb create \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT
az cosmosdb database create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--db-name $COSMOS_DB_NAME
az cosmosdb collection create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--collection-name $COSMOS_DB_COLLECTION_NAME \
--db-name $COSMOS_DB_NAME \
--partition-key-path $COSMOS_DB_COLLECTION_PARTITION_KEY
Reading the documentation, I didn't see a solution.
az doesn't provide any data-movement options for Cosmos DB.
For the SQL API, you'll either need to create your own command-line tool, or use the Cosmos DB-supplied Data Migration Tool (Windows-only, unlike az), which provides a command-line interface. For example:
dt /s:JsonFile /s.Files:.\inputdata.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<name>;AccountKey=<key>;Database=<db>;" /t.Collection:<collname> /t.CollectionThroughput:<throughput>
This has support for the MongoDB API as well, but you can also use native command-line tools such as mongoimport.
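For the MongoDB API route, a mongoimport call against Cosmos DB might look like this (the host, port, and legacy --ssl flag are assumptions; take the exact values from your account's connection string):
# Bulk-load a JSON array into a Cosmos DB MongoDB API collection (placeholder values)
mongoimport --host <account>.documents.azure.com:10255 --ssl -u <account> -p "<account-key>" --db $COSMOS_DB_NAME --collection $COSMOS_DB_COLLECTION_NAME --file ./inputdata.json --jsonArray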
Background
I'm trying to learn how to use Azure Blob storage through the Azure docs.
Troubles
I got an error when I ran this code.
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
This code is meant to set the container's public access.
This is my container status.
This is my storage account status.
Error
Please remove the \ characters if the command is on one line. The \ symbol is only used when a command is too long and needs to continue on another line.
The below command is working:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
The test result:
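If you want to verify from the CLI rather than the portal, this check should show the container's current access level (assuming your CLI version includes the show-permission subcommand):
# Inspect the container's public access setting
az storage container show-permission --name thumbnails --account-name $blobStorageAccount --account-key $blobStorageAccountKey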