I'm importing Java Cucumber test executions using the REST endpoint:
/rest/raven/1.0/import/execution/cucumber
I want to add the "Test environment" where this test was executed.
Is this possible for Cucumber test executions? The documentation refers to "testEnvironments" for Xray JSON format, but I don't see it for Cucumber JSON output format.
For Cucumber, you have to use the multipart endpoint, as the standard one you referred to doesn't provide that ability (the team aims to improve this in the future).
You should look at this documentation.
Example:
curl -u admin:admin -F info=@createTestExec.json -F result=@results.json http://yourserver/rest/raven/1.0/import/execution/cucumber/multipart
Here you would use the following createTestExec.json to specify the fields of the Test Execution. You need to find the id of the "Test Environments" custom field in your Jira administration.
{
  "fields": {
    "project": {
      "key": "XRAY"
    },
    "summary": "Test Execution for cucumber Execution",
    "issuetype": {
      "id": "10009"
    },
    "customfield_10030": [
      "iOS"
    ]
  }
}
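If you need to look up that id programmatically, Jira's REST API can list every field; a minimal sketch, assuming jq is installed and reusing the admin:admin credentials from the example above:
# Print the id (e.g. customfield_10030) of the "Test Environments" custom field
curl -s -u admin:admin http://yourserver/rest/api/2/field | jq -r '.[] | select(.name == "Test Environments") | .id'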
I was trying to mint some NFTs with Candy Machine, but when I try to execute:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts set_collection \
-e devnet \
-k ~/.config/solana/devnet.json \
-c example \
-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG
I got this error:
throw new Error(`Invalid mint owner: ${JSON.stringify(info.owner)}`);
^
Error: Invalid mint owner: "11111111111111111111111111111111"
at Token.getMintInfo (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/node_modules/@solana/spl-token/client/token.js:731:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async parseCollectionMintPubkey (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/helpers/various.ts:438:5)
at async Command.<anonymous> (/Users/btk-macmini-01/Desktop/repo/peppermint/docs/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts:941:34)
Does anyone know why? I have tried passing a different address from the one I used to create the candy machine and do the upload, and also the same one, but the issue is the same. Maybe there is something wrong with it or with something else?
This is an example of my JSON:
{
  "name": "#1",
  "description": "description",
  "external_url": "",
  "image": "0.png",
  "attributes": [
    {
      "trait_type": "Background Color Woman",
      "value": "Light Blue"
    },
    {
      "trait_type": "Background color man",
      "value": "Metal Grey"
    }
  ],
  "properties": {
    "files": [
      {
        "uri": "0.png",
        "type": "image/png"
      }
    ],
    "creators": [
      {
        "address": "GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE",
        "share": 100
      }
    ]
  },
  "compiler": "https://the-nft-generator.com",
  "symbol": "Test",
  "collection": {
    "name": "test",
    "family": "test"
  }
}
If I upload without executing set_collection, it works, but with a collection name different from the one specified in the JSON files.
set_collection is used to set the collection field on all the NFTs inside a Candy Machine that has not started minting (0 minted NFTs). To set a collection you can pass any NFT (that is a MasterEditionV2) that has the same updateAuthority as the wallet that you used to create your Candy Machine.
In this case you are trying to set a collection using this NFT (-m C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG), and you said that your CM was created with the wallet that has pubkey GM1ByqbTfgRwXEQCLJ2N4bsA3P1WcuyL9kZT79gLqYuE. The NFT has updateAuthority 42NevAWA6A8m9prDvZRUYReQmhNC3NtSZQNFUppPJDRB, and that's a completely different pubkey from the one you used to create the Candy Machine.
You can always use the collection webpage. This webpage allows you to create and mint a collection NFT with certain metadata, and it will also migrate (change the on-chain collection to the newly created collection) the NFTs that you pass on the website; this can be updated at any time with more NFTs. This website WILL NOT migrate unminted NFTs from a candy machine.
If you want to use set_collection, make sure to provide, in the -m parameter, an NFT that has the same updateAuthority as your Candy Machine. Also make sure that your Candy Machine has 0 minted NFTs.
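If you want to verify the mismatch yourself, one option (an assumption on my part, not something the Candy Machine CLI requires) is the third-party metaboss tool, which can decode a mint's on-chain metadata so you can read its update authority:
# Decode the on-chain metadata for the mint; the output includes the update authority
# (metaboss is a separate install, see https://metaboss.rs)
metaboss decode mint -a C2eGm8iQPnKVWxakyo8QhwJUvYrZHKF52DPQuAejpTWG
# Compare the printed update authority with the wallet that created your Candy Machine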
The Amplify docs here say that we can configure a Lambda function as a DynamoDB trigger by running amplify add function and selecting the "Lambda Trigger" option, but when I run amplify add function (with Python selected as the runtime language) I am not getting the Lambda Trigger option; I'm only getting the "Serverless function" and "Lambda layer" options.
Please help me resolve this issue so I can access this feature.
docs snapshot - showing 4 options
my CLI snapshot - showing only 2 options
I know it works for the NodeJS runtime, but I want this option for a Python Lambda as well.
Just followed these steps with amplify CLI version 4.50.2.
To create a Lambda function that is triggered by changes to a DynamoDB table, you can use the following command line actions, which are walked through inside the CLI after entering the command below:
amplify add function
Select which capability you want to add:
❯ Lambda function (serverless function)
Provide an AWS Lambda function name:
<YourFunctionsName>
Choose the runtime that you want to use:
> NodeJS # IMPORTANT: Must be NodeJS as of now; you can change this later by manually editing the ...-cloudformation-template.json file inside the function directory
Choose the function template you want to use
> Lambda Trigger
What event source do you want to associate with the lambda trigger
> Amazon DynamoDB Stream
Choose a DynamoDB event source option
> Use API category graphql @model backend DynamoDB table(s) in the current Amplify project
Choose the graphql @model(s)
<Select any models (using spacebar) whose edits should trigger the function>
Do you want to configure advanced settings?
Y # IMPORTANT: If you are using a dynamodb event source based on a table defined by graphql schema, you will need to give this function read access to the api resource that contains the graphql schema that defines the table that drives the event
Do you want to access other resources in this project from your Lambda function?
y # See above, select your api that contains the data model and make sure that the function has at least read access.
After this, the other options (layer, call scheduling) are up to you.
After creating the function via the above CLI options, you can change the "Runtime" field inside the -cloudformation-template.json file in the function's directory, e.g. if you want a Python Lambda function, change the runtime to "python3.8". You will also need to create a file called index.py inside your function's directory which has a handler(event, context) function. See example below:
import json

def handler(event, context):
    # Log the DynamoDB stream event that triggered this invocation
    print("Triggered via DynamoDB")
    print(event)
    return json.dumps({"status_code": 200, "message": "Received from DynamoDB"})
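For the "Runtime" edit itself, here's a minimal sketch using jq (assuming jq is installed and that the Lambda resource in the generated template is named LambdaFunction, which is what Amplify generates by default; the path depends on your function name):
# Point TPL at your function's generated CloudFormation template
TPL=amplify/backend/function/<YourFunctionsName>/<YourFunctionsName>-cloudformation-template.json
# Rewrite the Lambda runtime to Python; jq cannot edit in place, so write to a temp file first
jq '.Resources.LambdaFunction.Properties.Runtime = "python3.8"' "$TPL" > "$TPL.tmp" && mv "$TPL.tmp" "$TPL"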
After making these edits, you can run amplify push and, if you open your function in the management console online, it should show an attached DynamoDB stream.
This doesn't appear to be available anymore in the CLI codebase - see Supported-service.json deleted and replaced by supported-services.ts:
https://github.com/aws-amplify/amplify-cli/commit/607ae21287941805f44ea8a9b78dd12d16d71f85#diff-a0fd8c5607fd81977cb4745b9af3af2c6649ded748991bf9968a7d782b000c6b
https://github.com/aws-amplify/amplify-cli/commits/4e974007d95c894ab4108a2dff8d5996e7e3ce25/packages/amplify-category-function/src/provider-utils/supported-services.ts
Select NodeJS and you will be able to see the Lambda Trigger option.
Just add the following to {YOUR_FUNCTION_NAME}-cloudformation-template.json, remembering to replace (YOUR_TABLE_NAME) with your table name.
"LambdaTriggerPolicyPurchase": {
"DependsOn": [
"LambdaExecutionRole"
],
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "amplify-lambda-execution-policy-Purchase",
"Roles": [
{
"Ref": "LambdaExecutionRole"
}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:ListStreams"
],
"Resource": {
"Fn::ImportValue": {
"Fn::Sub": "${apilanguageGraphQLAPIIdOutput}:GetAtt:(YOUR_TABLE_NAME):StreamArn"
}
}
}
]
}
}
},
"LambdaEventSourceMappingPurchase": {
"Type": "AWS::Lambda::EventSourceMapping",
"DependsOn": [
"LambdaTriggerPolicyPurchase",
"LambdaExecutionRole"
],
"Properties": {
"BatchSize": 100,
"Enabled": true,
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Sub": "${apilanguageGraphQLAPIIdOutput}:GetAtt:(YOUR_TABLE_NAME):StreamArn"
}
},
"FunctionName": {
"Fn::GetAtt": [
"LambdaFunction",
"Arn"
]
},
"StartingPosition": "LATEST"
}
},
I got these by creating a dummy function using the template that shows up after you choose NodeJS and comparing its -cloudformation-template.json with my own function's.
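In other words, something like this (function names and paths here are just illustrative):
# Scaffold a throwaway NodeJS trigger function using the walkthrough above
amplify add function
# Then diff the generated template against your own function's template
diff amplify/backend/function/dummyTrigger/dummyTrigger-cloudformation-template.json \
  amplify/backend/function/myFunction/myFunction-cloudformation-template.json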
I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem. I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that also had no effect.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");
const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects()
I was also able to run it using the REST API, got HTTP 200.
You can clone this Microsoft Cognitive Services sample (UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do that via Azure CLI commands from bash / git bash:
Create resource group
# Create resource group; replace the resource group name and location as required
az group create -n kiosk-cog-service-keys -l westus
Generate keys and echo the keys
Please note! jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the JSON, where you will find the keys.
To get the keys with the default parameters, execute the following commands:
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by modifying the parameter values
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command below and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of the above commands is as follows:
# Sample output of above command
{
  "bingAugosuggestKey1": {
    "type": "String",
    "value": "cb4******************************"
  },
  "bingSearchKey1": {
    "type": "String",
    "value": "88*********************************"
  },
  "compVisionEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
  },
  "compVisionKey1": {
    "type": "String",
    "value": "fa5**************************************"
  },
  "faceEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
  },
  "faceKey1": {
    "type": "String",
    "value": "87f7****************************************"
  },
  "textAnalyticsEndpoint": {
    "type": "String",
    "value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
  },
  "textAnalyticsKey1": {
    "type": "String",
    "value": "ba3*************************************"
  }
}
Also note that you can follow similar steps to generate the CV key and endpoint only, and use them in your application.
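If you only need a Custom Vision training key and endpoint, a CLI-only sketch (resource and group names are placeholders; the create command mirrors the one used in the question):
# Create a Custom Vision training resource, then read back its key and endpoint
az cognitiveservices account create -n <name> -g <group> --kind CustomVision.Training --sku S0 -l eastus --yes
az cognitiveservices account keys list -n <name> -g <group> --query key1 -o tsv
az cognitiveservices account show -n <name> -g <group> --query properties.endpoint -o tsv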
The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
Documentation updated, full discussion at #10362
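For what it's worth, ApiKeyCredentials is typically constructed so that it sends the key in a Training-key header, which you can reproduce against the REST API directly (the v3.0 path below is an assumption; check the API version your SDK targets):
# List projects on the Custom Vision training endpoint using the raw REST API
curl -H "Training-key: <MY_API_KEY>" \
  https://eastus.api.cognitive.microsoft.com/customvision/v3.0/training/projects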
I can't update the IAM policy on my AI Platform Notebook.
I created a new AI Platform Notebooks instance:
gcloud beta notebooks instances create nb1 \
--vm-image-project=deeplearning-platform-release \
--vm-image-family=tf-latest-cpu \
--machine-type=n1-standard-4 \
--location=us-west1-b
When I try to apply a new IAM policy I get an error:
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy
ERROR: (gcloud.beta.notebooks.instances.set-iam-policy) INTERNAL: An internal error has occurred (506011f7-b62e-4308-9bde-10b97dd7b99c)
My policy looks like this:
{
  "bindings": [
    {
      "members": [
        "user:myuser@gmail.com"
      ],
      "role": "roles/notebooks.admin"
    }
  ],
  "etag": "BwWlgdvxWT0=",
  "version": 1
}
When I run:
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json
I get:
ACAB
as there is no policy set.
Please take a look at the etag field:
An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
From the documentation here:
string (bytes format)
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
A base64-encoded string.
You can just easily change your policy etag to ACAB, which is the default one.
{
  "bindings": [
    {
      "members": [
        "user:myuser@gmail.com"
      ],
      "role": "roles/notebooks.admin"
    }
  ],
  "etag": "ACAB",
  "version": 1
}
Or you can use the add-iam-policy-binding command to create a new policy; then you can extract the etag using get-iam-policy, update your JSON file with it, and finally run set-iam-policy.
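A sketch of that cycle, using the instance name and location from the question:
# Option 1: let gcloud merge the binding (and handle the etag) for you
gcloud beta notebooks instances add-iam-policy-binding nb1 --location=us-west1-b \
  --member="user:myuser@gmail.com" --role="roles/notebooks.admin"
# Option 2: read-modify-write by hand
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json > notebooks.policy
# ... edit notebooks.policy (add your bindings, keep the returned "etag" value) ...
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy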
You may also use this format:
{
  "policy": {
    "bindings": [
      {
        "members": [
          "user:myuser@gmail.com"
        ],
        "role": "roles/notebooks.admin"
      }
    ],
    "etag": "ACAB",
    "version": 1
  }
}
I need a script to export data from a Grafana dashboard to a CSV file.
Input: dashboard slug/name and a time frame like -1h or -24h.
Any link to the Grafana API/docs would be fine.
Well, this is a 3-year-old question, but I've actually done a whole bunch of exactly this to produce some reporting on dashboards in our Grafana. You can use whatever you want (including bash) to pull dashboard data out based on the UID, and you can certainly search for the slugs, but the API returns all of its information in JSON like below:
DASH:
{
  "dashboard": {
    "id": 1,
    "uid": "cIBgcSjkk",
    "title": "Production Overview",
    "tags": [
      "templated"
    ],
    "timezone": "browser",
    "schemaVersion": 16,
    "version": 0
  },
  "meta": {
    "isStarred": false,
    "url": "/d/cIBgcSjkk/production-overview"
  }
}
This output can then be piped through jq for your reporting. You can pull any variable through simple pathing of the dashboard JSON, with the option to use loops and lots of other features.
JQ:
$ curl -s https://grafana.local/api/dashboards/uid/cIBgcSjkk \
| jq -r '.dashboard | [.uid, .title, .version] | @csv'
"cIBgcSjkk","Production Overview",0
Refs:
https://grafana.com/docs/grafana/latest/http_api/dashboard/
https://stedolan.github.io/jq/manual/#Formatstringsandescaping