I am trying the command below after running the aws configure command:
aws dynamodb create-table \
    --table-name MusicCollection2 \
    --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
    --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
Output: Nothing
Please suggest how to create a DynamoDB table using the AWS CLI.
Create the JSON file create-table-movies.json with the below content
{
  "TableName": "MusicCollection2",
  "KeySchema": [
    { "AttributeName": "Artist", "KeyType": "HASH" },
    { "AttributeName": "SongTitle", "KeyType": "RANGE" }
  ],
  "AttributeDefinitions": [
    { "AttributeName": "Artist", "AttributeType": "S" },
    { "AttributeName": "SongTitle", "AttributeType": "S" }
  ],
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 5,
    "WriteCapacityUnits": 5
  }
}
Navigate to the file's directory in a command prompt (assuming Windows) and run the command below.
It creates the table on local DynamoDB:-
aws dynamodb create-table --cli-input-json file://create-table-movies.json --endpoint-url http://localhost:8000
To create the table on the AWS DynamoDB service, provide the correct region name. If your configuration is already done, it should work:
aws dynamodb create-table --cli-input-json file://create-table-movies.json --region us-west-2
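Once the call returns, you can confirm the table reached ACTIVE status; a quick check using the table name from the JSON above (standard AWS CLI commands):
# Wait until the table is created, then print its status
aws dynamodb wait table-exists --table-name MusicCollection2 --region us-west-2
aws dynamodb describe-table --table-name MusicCollection2 --region us-west-2 --query "Table.TableStatus"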
AWS CLI Configure:-
$ aws configure
AWS Access Key ID [None]: accesskey
AWS Secret Access Key [None]: secretkey
Default region name [None]: us-west-2
Default output format [None]:
Once you execute the above command, it stores the data in your profile directory (on Windows):
C:\Users\<username>\.aws\
Check the following files:-
config - should have the region name
credentials - should have the access key and secret key
Credentials Sample:-
[default]
aws_access_key_id = aaaadffewe
aws_secret_access_key = t45435egfdg456retgfg
Config File Sample:-
[default]
region = us-east-1
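To confirm what the CLI actually resolves at run time, you can also use the built-in commands below (default profile assumed):
# Show the profile, region, and key source the CLI will use
aws configure list
# Verify the credentials resolve to a valid identity
aws sts get-caller-identity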
Use this command instead:
aws dynamodb create-table --table-name contact --attribute-definitions AttributeName=name,AttributeType=S AttributeName=email,AttributeType=S --key-schema AttributeName=name,KeyType=HASH AttributeName=email,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
We are trying to automate repository, group, and permission creation in JFrog Artifactory as part of an Azure DevOps pipeline. I decided to use the JFrog CLI and REST API calls as Azure DevOps tasks, but I am facing difficulties because most of the API operations and CLI commands are not idempotent.
Scenario:- for application app-A1, we need
2 repos (local-A1 and virtual-A1),
2 groups (appA1-developers, appA1-contributors),
1 permission target (appA1-permission), which will include the 2 created repos and groups with their permissions.
Below is what I have tried so far.
For creating the groups:
jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
jf rt group-create appA1-contributors --url https://myrepo/artifactory --user jfuser --password jfpass
Create the repos using the commands below.
Repo creation and update template:
jfrog rt rc local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
Repo Update Command
jfrog rt ru local-repo-template
{
  "description": "$variable",
  "excludesPattern": "$variable",
  "includesPattern": "$variable",
  "key": "$variable",
  "notes": "$variable",
  "packageType": "$variable",
  "rclass": "$variable"
}
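For reference, the "$variable" placeholders in such a template are filled in at run time; below is a hedged example of invoking the create command with values, assuming your JFrog CLI version supports the --vars option for repo-create (the values are illustrative only):
# Values are illustrative; match the variable names used in your template
jfrog rt rc local-repo-template --vars "key=local-A1;rclass=local;packageType=maven;description=App A1 local repo;includesPattern=**/*;excludesPattern=;notes="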
For creating the permission target:
curl -u 'jfuser' -X PUT "https://myrepo/artifactory/api/v2/security/permissions/java-developers" -H "Content-type: application/json" -T permision.json
{
  "name": "appA1-developers",
  "repo": {
    "include-patterns": ["**"],
    "exclude-patterns": [""],
    "repositories": ["appA1-local"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "annotate"]
      }
    }
  },
  "build": {
    "include-patterns": ["testmaven/**"],
    "exclude-patterns": [""],
    "repositories": ["artifactory-build-info"],
    "actions": {
      "groups": {
        "appA1-developers": ["manage", "read", "write", "annotate", "delete"]
      }
    }
  }
}
But when running all the above tasks in Azure DevOps, I am not able to detect and handle the case where any of the resources above already exist.
I am looking for a way to first identify the following (a rough existence-check sketch is shown after this list):
If the specified group name already exists: if it exists, skip group creation; otherwise create the groups.
Check whether the repos already exist:
if they do not exist, create them as per the template;
if they exist and all the properties are the same, skip them;
if they exist and there are property changes, use the update command.
Similarly, the permission target should only be updated when there are changes; existing properties or settings should not be altered.
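A minimal bash sketch of such an existence check, assuming the standard Artifactory REST API GET endpoints for groups and repositories (the URL, credentials, and names below are the illustrative values from the question):
#!/usr/bin/env bash
BASE_URL="https://myrepo/artifactory"
AUTH="jfuser:jfpass"

# Succeeds (exit 0) when the GET endpoint answers 200, i.e. the resource exists
exists() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' -u "$AUTH" "$BASE_URL$1")
  [ "$code" = "200" ]
}

# Group: create only if missing
if exists "/api/security/groups/appA1-developers"; then
  echo "group appA1-developers already exists, skipping"
else
  jf rt group-create appA1-developers --url "$BASE_URL" --user jfuser --password jfpass
fi

# Repo: create from the template if missing, otherwise run the update template
if exists "/api/repositories/local-A1"; then
  jfrog rt ru local-repo-template
else
  jfrog rt rc local-repo-template
fi
To skip a no-op update, one option is to fetch the current configuration with GET /api/repositories/<key> and diff it against the rendered template before calling the update command.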
I was trying to mint some NFTs with Candy Machine, but when I try to execute:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts set_collection \
-e devnet \
-k ~/.config/solana/devnet.json \
-c example \
-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG
I got this error:
throw new Error(`Invalid mint owner: ${JSON.stringify(info.owner)}`);
^
Error: Invalid mint owner: "11111111111111111111111111111111"
at Token.getMintInfo (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/node_modules/@solana/spl-token/client/token.js:731:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async parseCollectionMintPubkey (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/helpers/various.ts:438:5)
at async Command.<anonymous> (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:941:34)
Does anyone know why? I have tried passing a different address from the one I used to create the candy machine and do the upload, and also the same one, but the issue is the same. Maybe there is something wrong with it or with something else?
This is an example of my json:
{
  "name": "#1",
  "description": "description",
  "external_url": "",
  "image": "0.png",
  "attributes": [
    {
      "trait_type": "Background Color Woman",
      "value": "Light Blue"
    },
    {
      "trait_type": "Background color man",
      "value": "Metal Grey"
    }
  ],
  "properties": {
    "files": [
      {
        "uri": "0.png",
        "type": "image/png"
      }
    ],
    "creators": [
      {
        "address": "GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE",
        "share": 100
      }
    ]
  },
  "compiler": "https://the-nft-generator.com",
  "symbol": "Test",
  "collection": {
    "name": "test",
    "family": "test"
  }
}
If I upload without executing set_collection it works, but with a different collection name from the one specified in the JSON files.
set_collection is used to set the collection field on all the NFTs inside a Candy Machine that has not started minting (0 minted NFTs). To set a collection you can pass any NFT (that is a MasterEditionV2) that has the same updateAuthority as the wallet you used to create your Candy Machine.
In this case you are trying to set the collection using this NFT (-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG), and you said your Candy Machine was created with the wallet whose pubkey is GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE. That NFT has updateAuthority 42NevAWA6A8m9prDvZRUYReQmhNC3NtSZQNFUppPJDRB, which is a completely different pubkey from the one you used to create the Candy Machine.
You can always use the collection webpage. It allows you to create and mint a collection NFT with certain metadata, and it will also migrate (change the on-chain collection to the newly created collection) the NFTs that you pass to the website; this can be updated at any time with more NFTs. This website WILL NOT migrate unminted NFTs from a candy machine.
If you want to use set_collection, make sure to provide, in the -m parameter, an NFT that has the same updateAuthority as your Candy Machine. Also make sure that your Candy Machine has 0 minted NFTs.
I am trying to create a Cosmos DB account using the Azure CLI.
One of the required policies I have to comply with is "Cosmos DB database accounts should have local authentication methods disabled". In the following document I see how to set it using Azure Resource Manager templates; see below:
"resources": [
{
"type": " Microsoft.DocumentDB/databaseAccounts",
"properties": {
"disableLocalAuth": true,
// ...
},
// ...
},
// ...
]
Now my question is how to do the same using AZ CLI?
The command I am using is => az cosmosdb create ...
I don't see any flag that allows a similar setting in the AZ CLI.
As of January 2022 this is only supported via ARM templates, but support for PowerShell and the CLI is planned. No ETA to share at this time.
You can always use an Azure REST API invocation to apply any change to the Cosmos DB account; see:
https://learn.microsoft.com/en-us/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/create-or-update
I've used Postman for that; below is a curl example with which I was able to modify a couple of properties (you need to obtain an OAuth2 token first):
curl --location --request PUT 'https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-account-name>?api-version=2021-10-15' \
--header 'Authorization: Bearer <oauth2-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "location": "North Europe",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "disableLocalAuth": true,
    "disableKeyBasedMetadataWriteAccess": true,
    "locations": [
      {
        "isVirtualNetworkFilterEnabled": false,
        "locationName": "North Europe",
        "failoverPriority": 0,
        "isZoneRedundant": false
      }
    ]
  }
}'
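As a side note, if you are already logged in with the Azure CLI, one convenient way to obtain the bearer token for the call above is the standard az account get-access-token command against the ARM endpoint:
# Prints an access token usable as the Bearer value in the Authorization header above
az account get-access-token --resource https://management.azure.com/ --query accessToken -o tsv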
No, this is not supported through the Azure CLI when you are creating an Azure Cosmos DB account via az cosmosdb create.
It's not supported through the az cosmosdb commands but you could use the az resource update command to update this property:
$cosmosdbname = "<cosmos-db-account-name>"
$resourcegroup = "<resource-group-name>"
$cosmosdb = az cosmosdb show --name $cosmosdbname --resource-group $resourcegroup | ConvertFrom-Json
az resource update --ids $cosmosdb.id --set properties.disableLocalAuth=true --latest-include-preview
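If you prefer plain bash over PowerShell, here is an equivalent sketch of the same approach (the account and resource group names are placeholders):
cosmosdbname="<cosmos-db-account-name>"
resourcegroup="<resource-group-name>"
# Resolve the account's resource ID, set the property, then read it back to verify
id=$(az cosmosdb show --name "$cosmosdbname" --resource-group "$resourcegroup" --query id -o tsv)
az resource update --ids "$id" --set properties.disableLocalAuth=true --latest-include-preview
az resource show --ids "$id" --query properties.disableLocalAuth -o tsv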
I have a DynamoDB messages table with id, content, createdAt (int), userID fields.
I can obtain a user's messages using the resolver below:
{
  "version" : "2017-02-28",
  "operation" : "Query",
  "index" : "userid-createdat-index",
  "query" : {
    "expression": "userID = :userID",
    "expressionValues" : {
      ":userID" : $util.dynamodb.toDynamoDBJson($context.arguments.userID)
    }
  }
}
My objective is to get a user's messages within the last 5 seconds using the createdAt field, which is epoch time in milliseconds. I would like to avoid using the Scan operation as my table will be large.
How do I do that? What kind of DynamoDB index do I need for it?
Assuming the id field is unique, create the table with a partition key of id, then add a Global Secondary Index on (userID, createdAt). The query to access the result you are looking for should use a key condition like --key-condition-expression "userID = :userID and createdAt >= :createdAt".
Table creation
aws dynamodb create-table \
--table-name messages \
--attribute-definitions \
AttributeName=id,AttributeType=S \
AttributeName=userID,AttributeType=S \
AttributeName=createdAt,AttributeType=N \
--key-schema AttributeName=id,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5 \
--global-secondary-indexes \
"[{\"IndexName\": \"UserIDIndex\",
\"KeySchema\": [{\"AttributeName\":\"userID\",\"KeyType\":\"HASH\"},
{\"AttributeName\":\"createdAt\",\"KeyType\":\"RANGE\"}],
\"Projection\":{\"ProjectionType\":\"ALL\"},
\"ProvisionedThroughput\": {\"ReadCapacityUnits\":10,\"WriteCapacityUnits\":10}}]"
Example query with GSI
aws dynamodb query \
--table-name messages \
--index-name UserIDIndex \
--key-condition-expression "userID = :userID and createdAt >= :createdAt" \
--expression-attribute-values '{":userID":{"S":"u1"} , ":createdAt":{"N":"2"} }'
More information on GSIs can be found in the DynamoDB documentation.
If you are running DynamoDB locally, you can add --endpoint-url http://localhost:8000 to both of the above commands.
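Since the question asks for messages from the last 5 seconds, here is a sketch of computing the epoch-millisecond cutoff on the CLI side (assumes GNU date for the %N format; table and index names follow the snippets above):
# "5 seconds ago" in epoch milliseconds (GNU date; on macOS use gdate or compute the value another way)
cutoff=$(( $(date +%s%3N) - 5000 ))
aws dynamodb query \
  --table-name messages \
  --index-name UserIDIndex \
  --key-condition-expression "userID = :userID and createdAt >= :createdAt" \
  --expression-attribute-values "{\":userID\":{\"S\":\"u1\"},\":createdAt\":{\"N\":\"$cutoff\"}}"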
Using suggestions from CruncherBigData's answer, I successfully ran such a query.
I created an index with a partition key of userID and a sort key of createdAt. My mistake here was forgetting to select Number as the data type for createdAt during index creation; keeping it as a String made my query fail.
I then used a query resolver to fetch a user's messages from the last 5 seconds:
#set( $messageTimeLimit = 5000 )
#set( $lastFiveSeconds = $util.time.nowEpochMilliSeconds() - $messageTimeLimit )
{
  "version" : "2018-05-29",
  "operation" : "Query",
  "index" : "userID-createdAt-index",
  "query" : {
    "expression": "userID = :userID and createdAt >= :lastFiveSeconds",
    "expressionValues" : {
      ":lastFiveSeconds" : $util.dynamodb.toDynamoDBJson($lastFiveSeconds),
      ":userID" : $util.dynamodb.toDynamoDBJson($context.arguments.userID)
    }
  }
}
I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem. I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that had no effect either.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");
const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects()
I was also able to run it using the REST API, got HTTP 200.
You can clone this Microsoft Cognitive Services sample (UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do that through Azure CLI commands from bash / Git Bash:
Create resource group
# Create the resource group; replace the resource group name and location as required
az group create -n kiosk-cog-service-keys -l westus
Generate the keys and echo them
Please note: jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the resulting JSON, where you will find the keys.
To get the keys with the default parameters, execute the following commands:
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters, you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands:
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by modifying the parameter values
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of above commands is as follows:
# Sample output of above command
{
"bingAugosuggestKey1": {
"type": "String",
"value": "cb4******************************"
},
"bingSearchKey1": {
"type": "String",
"value": "88*********************************"
},
"compVisionEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
},
"compVisionKey1": {
"type": "String",
"value": "fa5**************************************"
},
"faceEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
},
"faceKey1": {
"type": "String",
"value": "87f7****************************************"
},
"textAnalyticsEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
},
"textAnalyticsKey1": {
"type": "String",
"value": "ba3*************************************"
}
}
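For example, once the deployment above has run, you can read back just the Computer Vision key and endpoint shown in that output (same deployment name and resource group as above; az group deployment show is the read counterpart of the create command used here):
# Read individual outputs back from the deployment (values match the sample output above)
az group deployment show -n cog-keys-deploy -g kiosk-cog-service-keys --query properties.outputs.compVisionKey1.value -o tsv
az group deployment show -n cog-keys-deploy -g kiosk-cog-service-keys --query properties.outputs.compVisionEndpoint.value -o tsv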
Also note that you can follow similar steps to generate only the Computer Vision key and endpoint and use them in your application.
The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
Documentation updated; full discussion at #10362.