Access denied due to invalid subscription key or wrong API endpoint (Cognitive Services Custom Vision) - microsoft-cognitive

I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem. I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that had no effect either.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");
const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects()
I was also able to run it using the REST API, got HTTP 200.
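For reference, a REST call of roughly this shape is what returns the 200 (the v3.3 version segment in the path is an assumption and may differ for your resource):
# List Custom Vision projects via the training REST API; the key goes in the Training-Key header
curl -i -H "Training-Key: <MY_API_KEY>" "https://eastus.api.cognitive.microsoft.com/customvision/v3.3/training/projects"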

You can clone this Microsoft Cognitive Services sample (a UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do that through Azure CLI commands from bash / Git Bash:
Create resource group
# Create resource group, replace resource group name and location of resource group as required
az group create -n kiosk-cog-service-keys -l westus
Generate keys and echo the keys
Please note: jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the JSON, where you will find the keys.
To get the keys with the default parameters execute the following commands
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters, you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands:
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by modifying the parameter values
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of above commands is as follows:
# Sample output of above command
{
  "bingAugosuggestKey1": {
    "type": "String",
    "value": "cb4******************************"
  },
  "bingSearchKey1": {
    "type": "String",
    "value": "88*********************************"
  },
  "compVisionEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
  },
  "compVisionKey1": {
    "type": "String",
    "value": "fa5**************************************"
  },
  "faceEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
  },
  "faceKey1": {
    "type": "String",
    "value": "87f7****************************************"
  },
  "textAnalyticsEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
  },
  "textAnalyticsKey1": {
    "type": "String",
    "value": "ba3*************************************"
  }
}
Also note that you can follow similar steps to generate only the Computer Vision key and endpoint and use them in your application.
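If you want to skip the full template, a minimal Azure CLI sketch for just the Custom Vision key and endpoint (the resource name and group below are placeholders) would be:
# Create a Custom Vision training resource (placeholder name and resource group)
az cognitiveservices account create -n <cv-name> -g <group> --kind CustomVision.Training --sku S0 -l eastus --yes
# Print its endpoint and keys
az cognitiveservices account show -n <cv-name> -g <group> --query properties.endpoint -o tsv
az cognitiveservices account keys list -n <cv-name> -g <group>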

The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
const credentials = new ApiKeyCredentials({ inHeader: { "Training-key": "<MY_API_KEY>" } });
Documentation updated, full discussion at #10362

Related

How to verify existence of resources and components in Jfrog artifactory

We are trying to automate repo, group, and permission creation in JFrog as part of an Azure DevOps pipeline. I decided to use the JFrog CLI and API calls as Azure DevOps tasks, but I'm facing some difficulties because idempotent behavior is not available for most of the API operations and CLI commands.
Scenario: For application app-A1,
2 repos (local-A1 and virtual-A1),
2 groups (appA1-developers, appA1-contributors),
1 permission target (appA1-permission) where we will include the 2 created repos and groups with the permissions.
Below is what I tried to achieve so far.
For creating the groups
jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
jf rt group-create appA1-contributors --url https://myrepo/artifactory --user jfuser --password jfpass
Create repos using the command below.
Repo creation and update template:
jfrog rt rc local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
Repo Update Command
jfrog rt ru local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
For creating permission
curl -u 'jfuser' -X PUT "https://myrepo/artifactory/api/v2/security/permissions/java-developers" -H "Content-type: application/json" -T permision.json
{
  "name": "appA1-developers",
  "repo": {
    "include-patterns": ["**"],
    "exclude-patterns": [""],
    "repositories": ["appA1-local"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "annotate"]
      }
    }
  },
  "build": {
    "include-patterns": ["testmaven/**"],
    "exclude-patterns": [""],
    "repositories": ["artifactory-build-info"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "write", "annotate", "delete"]
      }
    }
  }
}
But when I try all the above tasks in Azure DevOps, I am not able to identify and conditionally handle resources that already exist.
I'm looking for a way to first identify:
If the specified group name already exists: if it exists, skip group creation, else create the groups.
Check if the repos already exist:
if they don't exist, create them as per the template;
if a repo exists and all the properties are the same, skip it;
if it exists and there are property changes, use the update command.
Similarly, the permission target should only be updated if there are changes; existing properties or settings shouldn't be altered (see the sketch below for one way to probe for existence).
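One possible approach (a sketch only, assuming the standard Artifactory REST endpoints for groups, repositories, and permission targets) is to probe each resource with a GET and branch on the HTTP status code:
# Probe for an existing group: 200 means it exists, 404 means it does not
status=$(curl -s -o /dev/null -w "%{http_code}" -u jfuser:jfpass "https://myrepo/artifactory/api/security/groups/appA1-developers")
if [ "$status" = "404" ]; then
  jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
else
  echo "Group appA1-developers already exists, skipping creation"
fi
# The same pattern works for repositories (GET api/repositories/<repo-key>)
# and for permission targets (GET api/v2/security/permissions/<name>)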

Disable local authentication methods for Cosmos DB database accounts using Azure CLI

I am trying to create a cosmos DB account using Azure CLI.
One of the required policies I have to comply with is "Cosmos DB database accounts should have local authentication methods disabled". In the following document I see how to set it using Azure Resource Manager templates; see below:
"resources": [
{
"type": " Microsoft.DocumentDB/databaseAccounts",
"properties": {
"disableLocalAuth": true,
// ...
},
// ...
},
// ...
]
Now my question is how to do the same using AZ CLI?
The command I am using is => az cosmosdb create ...
I don't see any flag that will allow the similar setting in AZ CLI.
As of January 2022 this is only supported via ARM Templates but support for PS and CLI is planned. No ETA to share at this time.
You can always use an Azure REST API invocation to apply any change to the Cosmos DB account; see here:
https://learn.microsoft.com/en-us/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/create-or-update
I've used Postman for that; below is a curl example with which I was able to modify a couple of properties (you need to get an OAuth2 token first):
curl --location --request PUT 'https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-account-name>?api-version=2021-10-15' \
--header 'Authorization: Bearer <oauth2-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "location": "North Europe",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "disableLocalAuth": true,
    "disableKeyBasedMetadataWriteAccess": true,
    "locations": [
      {
        "isVirtualNetworkFilterEnabled": false,
        "locationName": "North Europe",
        "failoverPriority": 0,
        "isZoneRedundant": false
      }
    ]
  }
}'
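If you need the bearer token for the request above and are already signed in with the Azure CLI, one convenient way (a sketch) is to ask the CLI for a management-plane access token:
# Print an OAuth2 access token for the ARM management endpoint (requires an active az login session)
az account get-access-token --resource https://management.azure.com/ --query accessToken -o tsv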
No, this is not supported through the Azure CLI when you are creating an Azure Cosmos DB account via az cosmosdb create.
It's not supported through the az cosmosdb commands but you could use the az resource update command to update this property:
$cosmosdbname = "<cosmos-db-account-name>"
$resourcegroup = "<resource-group-name>"
$cosmosdb = az cosmosdb show --name $cosmosdbname --resource-group $resourcegroup | ConvertFrom-Json
az resource update --ids $cosmosdb.id --set properties.disableLocalAuth=true --latest-include-preview
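To confirm the property was actually applied, you can read it back from the raw resource (a sketch; the account name and resource group are placeholders):
# Read back the ARM-level property to verify local auth is now disabled
az resource show --resource-group <resource-group-name> --name <cosmos-db-account-name> --resource-type "Microsoft.DocumentDB/databaseAccounts" --query properties.disableLocalAuth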

How to configure dynamodb-to-lambda trigger using amplify framework/cli

The amplify docs here say that we can configure a lambda function as a DynamoDB trigger by running **amplify add function** and selecting the "Lambda Trigger" option, but when I run **amplify add function** (with Python selected as the runtime language) I am not getting the Lambda Trigger option; I'm only getting the "Serverless function" and "Lambda layer" options.
Please help me resolve this issue so I can access this feature.
docs snapshot - showing 4 options
my CLI snapshot - showing only 2 options
I know it works for nodejs runtime lambda, but I want this option for Python Lambda as well.
Just followed these steps with amplify CLI version 4.50.2.
To create a lambda function that is triggered by changes to a DynamoDB table, you can use the following command line actions, which are walked through inside the CLI after entering the command below:
amplify add function
Select which capability you want to add:
❯ Lambda function (serverless function)
Provide an AWS Lambda function name:
<YourFunctionsName>
Choose the runtime that you want to use:
> NodeJS # IMPORTANT: Must be NodeJS as of now, you can change this later by manually editing ...-cloudformation-template.json file inside function directory
Choose the function template you want to use
> Lambda Trigger
What event source do you want to associate with the lambda trigger
> Amazon DynamoDB Stream
Choose a DynamoDB event source option
> Use API category graphql @model backend DynamoDB table(s) in the current Amplify project
Choose the graphql @model(s)
<Select any models (using spacebar) you want to trigger the function after editing>
Do you want to configure advanced settings?
Y # IMPORTANT: If you are using a dynamodb event source based on a table defined by graphql schema, you will need to give this function read access to the api resource that contains the graphql schema that defines the table that drives the event
Do you want to access other resources in this project from your Lambda function?
y # See above, select your api that contains the data model and make sure that the function has at least read access.
After this, the other options (layer, call scheduling) are up to you.
After creating the function via the above CLI options, you can change the "Runtime" field inside the -cloudformation-template.json file inside the function directory, e.g. if you want a Python lambda function, change the runtime to "python3.8". You will also need to create a file called index.py inside your function's directory which has a handler(event, context) function. See the example below:
import json

def handler(event, context):
    print("Triggered via DynamoDB")
    print(event)
    return json.dumps({'status_code': 200, "message": "Received from DynamoDB"})
After making these edits, you can run amplify push and, if you open your function in the management console online, it should show an attached DynamoDB stream.
Doesn't appear to be available anymore in the CLI codebase - see Supported-service.json deleted and replaced by supported-services.ts
https://github.com/aws-amplify/amplify-cli/commit/607ae21287941805f44ea8a9b78dd12d16d71f85#diff-a0fd8c5607fd81977cb4745b9af3af2c6649ded748991bf9968a7d782b000c6b
https://github.com/aws-amplify/amplify-cli/commits/4e974007d95c894ab4108a2dff8d5996e7e3ce25/packages/amplify-category-function/src/provider-utils/supported-services.ts
Select nodejs and you will be able to view lambda trigger
Just add the following to {YOUR_FUNCTION_NAME}-cloudformation-template.json; remember to replace (YOUR_TABLE_NAME) with your table name.
"LambdaTriggerPolicyPurchase": {
"DependsOn": [
"LambdaExecutionRole"
],
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "amplify-lambda-execution-policy-Purchase",
"Roles": [
{
"Ref": "LambdaExecutionRole"
}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:ListStreams"
],
"Resource": {
"Fn::ImportValue": {
"Fn::Sub": "${apilanguageGraphQLAPIIdOutput}:GetAtt:(YOUR_TABLE_NAME):StreamArn"
}
}
}
]
}
}
},
"LambdaEventSourceMappingPurchase": {
"Type": "AWS::Lambda::EventSourceMapping",
"DependsOn": [
"LambdaTriggerPolicyPurchase",
"LambdaExecutionRole"
],
"Properties": {
"BatchSize": 100,
"Enabled": true,
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Sub": "${apilanguageGraphQLAPIIdOutput}:GetAtt:(YOUR_TABLE_NAME):StreamArn"
}
},
"FunctionName": {
"Fn::GetAtt": [
"LambdaFunction",
"Arn"
]
},
"StartingPosition": "LATEST"
}
},
I got these by creating a dummy function using the template that shows up after you choose NodeJS, and comparing its -cloudformation-template.json with my own function's.

Unable to update IAM policy in AI Platform Notebooks

I can't update IAM policy in my AI Platform Notebook.
I created a new AI Platform Notebooks instance:
gcloud beta notebooks instances create nb1 \
--vm-image-project=deeplearning-platform-release \
--vm-image-family=tf-latest-cpu \
--machine-type=n1-standard-4 \
--location=us-west1-b
When I try to apply a new IAM policy I get an Error:
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy
ERROR: (gcloud.beta.notebooks.instances.set-iam-policy) INTERNAL: An
internal error has occurred (506011f7-b62e-4308-9bde-10b97dd7b99c)
My policy looks like this:
{
  "bindings": [
    {
      "members": [
        "user:myuser@gmail.com"
      ],
      "role": "roles/notebooks.admin"
    }
  ],
  "etag": "BwWlgdvxWT0=",
  "version": 1
}
when I do a
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json
I get:
ACAB
As if there were no policy set.
Please take a look at the etag field:
An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
From documentation here
string (bytes format)
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
A base64-encoded string.
You can just easily change your policy etag to ACAB, which is the default one.
{
  "bindings": [
    {
      "members": [
        "user:myuser@gmail.com"
      ],
      "role": "roles/notebooks.admin"
    }
  ],
  "etag": "ACAB",
  "version": 1
}
Or you can use the add-iam-policy-binding command to create a new policy, then extract the etag using get-iam-policy, update your JSON file with it, and finally run set-iam-policy.
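For example (a sketch: the instance name, location, and member come from the question, and it assumes your gcloud version exposes the add-iam-policy-binding subcommand for notebooks):
# Add a single binding directly; gcloud handles the etag/read-modify-write for you
gcloud beta notebooks instances add-iam-policy-binding nb1 --location=us-west1-b --member="user:myuser@gmail.com" --role="roles/notebooks.admin"
# Or do the read-modify-write cycle yourself: fetch the current policy (including its etag),
# edit the JSON while keeping that etag, then apply it back
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json > notebooks.policy
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy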
You may also use this format:
{
  "policy": {
    "bindings": [
      {
        "members": [
          "user:myuser@gmail.com"
        ],
        "role": "roles/notebooks.admin"
      }
    ],
    "etag": "ACAB",
    "version": 1
  }
}

Openstack API Authentication

Openstack noob here. I have setup an Ubuntu VM with DevStack, and am trying to authenticate with Keystone to obtain a token to be used for subsequent Openstack API calls. The identity endpoint shown on the “API Access” page in Horizon is: http://<DEVSTACK_IP>/identity.
When I post the below JSON payload to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "name": "admin",
          "domain": {
            "name": "Default"
          },
          "password": "AdminPassword"
        }
      }
    }
  }
}
Based on the Openstack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint, I get 404 Not Found.
I'm currently using Postman for testing this, but will eventually be doing it programmatically.
Does anybody have any experience with authenticating against the Openstack API that can help?
Not sure whether you want to do it the Python way, but if you do, here is one way to do it:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")
v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3 since Keystone uses port 5000 by default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be default.
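If you would rather keep testing over plain HTTP (Postman or curl) as in the question, the same password payload should work against the port-5000 endpoint; a minimal sketch, reusing the admin credentials from the question (the token is returned in the X-Subject-Token response header):
# Request a token from Keystone v3; look for X-Subject-Token in the response headers
curl -i -X POST http://<DEVSTACK_IP>:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"name": "Default"}, "password": "AdminPassword"}}}}}'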
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
HTH
