How to verify the existence of resources and components in JFrog Artifactory

We are trying to automate repository, group, and permission creation in JFrog Artifactory as part of an Azure DevOps pipeline, using the JFrog CLI and REST API calls as Azure DevOps tasks. But we are facing difficulties because idempotent behavior is not available for most of the API operations and CLI commands.
Scenario: for application app-A1,
2 repos (local-A1 and virtual-A1),
2 groups (appA1-developers, appA1-contributors),
1 permission target (appA1-permission) that includes the 2 created repos and groups with their permissions.
Below is what I have tried so far.
For creating the groups:
jf rt group-create appA1-developers --url https://myrepo/artifactory --user jfuser --password jfpass
jf rt group-create appA1-contributors --url https://myrepo/artifactory --user jfuser --password jfpass
Creating the repos using the commands below.
Repo creation and update template:
jfrog rt rc local-repo-template
{
"description": "$variable",
"excludesPattern":"$variable",
"includesPattern":"$variable",
"key":"$variable",
"notes":"$variable",
"packageType":"$variable",
"rclass":"$variable"
}
Repo Update Command
jfrog rt ru local-repo-template
{
"description": "$variable",
"excludesPattern":"$variable",
"includesPattern":"$variable",
"key":"$variable",
"notes":"$variable",
"packageType":"$variable",
"rclass":"$variable"
}
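For reference, here is a hedged sketch of how the template variables could be filled in at run time with the CLI's --vars flag (the flag is part of the JFrog CLI's template mechanism; the field values below are placeholders for app-A1 and assume the template's placeholders are named after the JSON fields):
jfrog rt rc local-repo-template --vars "key=local-A1;rclass=local;packageType=maven;description=app-A1 local repo;includesPattern=**/*;excludesPattern=;notes="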
For creating the permission target:
curl -u 'jfuser' -X PUT "https://myrepo/artifactory/api/v2/security/permissions/appA1-developers" -H "Content-Type: application/json" -T permission.json
{
"name": "appA1-developers",
"repo": {
"include-patterns": ["**"],
"exclude-patterns": [""],
"repositories": ["appA1-local"],
"actions": {
"groups" : {
"appA1-developers" : ["manage","read","annotate"]
}
}
},
"build": {
"include-patterns": ["testmaven/**"],
"exclude-patterns": [""],
"repositories": ["artifactory-build-info"],
"actions": {
"groups" : {
"appA1-developers" : ["manage","read","write","annotate","delete"]
}
}
}
}
But when I run all of the above tasks in Azure DevOps, I am not able to detect, and condition on, whether any of the above resources already exist.
I am looking for a way to check first:
If the specified group name already exists, skip group creation; otherwise create the group.
Check whether the repos already exist:
if not, create them as per the template;
if a repo exists and all its properties are the same, skip it;
if it exists but there are property changes, use the update command.
Similarly, the permission target should be updated only if there are changes; existing properties and settings shouldn't be altered. (A sketch of such existence checks follows below.)
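For illustration, a minimal bash sketch of the check-then-act flow I am after, assuming the standard REST endpoints (GET api/security/groups/{name}, api/repositories/{key}, and api/v2/security/permissions/{name} return 404 for missing resources; desired-local-A1.json is a hypothetical rendered copy of the repo template):
#!/usr/bin/env bash
BASE="https://myrepo/artifactory"
AUTH="jfuser:jfpass"

# Return success (0) if the given API path answers HTTP 200
exists() {
  [ "$(curl -s -o /dev/null -w '%{http_code}' -u "$AUTH" "$BASE/$1")" = "200" ]
}

# Groups: skip creation when the group is already there
for group in appA1-developers appA1-contributors; do
  if exists "api/security/groups/$group"; then
    echo "group $group exists, skipping"
  else
    jf rt group-create "$group" --url "$BASE" --user jfuser --password jfpass
  fi
done

# Repo: create when missing, otherwise update only on drift.
# Note: GET api/repositories/{key} also returns server-side defaults,
# so in practice compare only the fields you manage.
if exists "api/repositories/local-A1"; then
  current=$(curl -s -u "$AUTH" "$BASE/api/repositories/local-A1" | jq -S .)
  desired=$(jq -S . desired-local-A1.json)  # hypothetical rendered template
  [ "$current" = "$desired" ] || jfrog rt ru local-repo-template
else
  jfrog rt rc local-repo-template
fi

# Permission target: PUT only when the stored JSON differs from permission.json
if exists "api/v2/security/permissions/appA1-developers"; then
  current=$(curl -s -u "$AUTH" "$BASE/api/v2/security/permissions/appA1-developers" | jq -S .)
  [ "$current" = "$(jq -S . permission.json)" ] || \
    curl -u "$AUTH" -X PUT "$BASE/api/v2/security/permissions/appA1-developers" \
      -H "Content-Type: application/json" -T permission.json
else
  curl -u "$AUTH" -X PUT "$BASE/api/v2/security/permissions/appA1-developers" \
    -H "Content-Type: application/json" -T permission.json
fi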

Related

Disable local authentication methods for Cosmos DB database accounts using Azure CLI

I am trying to create a Cosmos DB account using the Azure CLI.
One of the required policies I have to comply with is "Cosmos DB database accounts should have local authentication methods disabled". In the following document I see how to set it using Azure Resource Manager templates; see below:
"resources": [
{
"type": " Microsoft.DocumentDB/databaseAccounts",
"properties": {
"disableLocalAuth": true,
// ...
},
// ...
},
// ...
]
Now my question is: how do I do the same using the Azure CLI?
The command I am using is az cosmosdb create ...
I don't see any flag that allows a similar setting in the Azure CLI.
As of January 2022 this is only supported via ARM templates, but support for PowerShell and the CLI is planned. No ETA to share at this time.
You can always use an Azure REST API invocation to apply any change to the Cosmos DB account; see here:
https://learn.microsoft.com/en-us/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/create-or-update
I've used Postman for that; below is a curl example with which I was able to modify a couple of properties (you need to obtain an OAuth2 token first):
curl --location --request PUT 'https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-account-name>?api-version=2021-10-15' \
--header 'Authorization: Bearer <oauth2-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
"location": "North Europe",
"properties": {
"databaseAccountOfferType": "Standard",
"disableLocalAuth": true,
"disableKeyBasedMetadataWriteAccess":true,
"locations": [
{
"isVirtualNetworkFilterEnabled": false,
"locationName": "North Europe",
"failoverPriority": 0,
"isZoneRedundant": false
}
]
}
}'
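If you are already signed in with the Azure CLI, a sketch of one way to obtain the bearer token used above (az account get-access-token is the standard command for this):
# Fetch an OAuth2 token for the ARM management endpoint
token=$(az account get-access-token --resource https://management.azure.com/ --query accessToken -o tsv)
# then pass it as: --header "Authorization: Bearer $token"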
No, this is not supported through the Azure CLI when you are creating an Azure Cosmos DB account via az cosmosdb create.
It's not supported through the az cosmosdb commands but you could use the az resource update command to update this property:
$cosmosdbname = "<cosmos-db-account-name>"
$resourcegroup = "<resource-group-name>"
$cosmosdb = az cosmosdb show --name $cosmosdbname --resource-group $resourcegroup | ConvertFrom-Json
az resource update --ids $cosmosdb.id --set properties.disableLocalAuth=true --latest-include-preview
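To confirm the change, you can read the property back afterwards (a sketch; az cosmosdb show surfaces disableLocalAuth on the returned account object):
az cosmosdb show --name $cosmosdbname --resource-group $resourcegroup --query disableLocalAuth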

Access denied due to invalid subscription key or wrong API endpoint (Cognitive Services Custom Vision)

I'm trying to connect to my Cognitive Services resource but I'm getting the following error:
(node:3246) UnhandledPromiseRejectionWarning: Error: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
I created the resource with kind CognitiveServices like this:
az cognitiveservices account create -n <name> -g <group> --kind CognitiveServices --sku S0 -l eastus --yes
Using kind CustomVision.Training didn't work either.
I have already looked at this answer, but it is not the same problem. I believe I am entering the correct credentials and endpoint.
I checked both the Azure Portal and the customvision.ai resource; I'm using the correct URL and key, but it is not working.
I even tried resetting the key, but that also had no effect.
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";
const { CognitiveServicesCredentials } = require("@azure/ms-rest-azure-js");
const cognitiveServiceCredentials = new CognitiveServicesCredentials("<MY_API_KEY>");
const client = new TrainingAPIClient(cognitiveServiceCredentials, "https://eastus.api.cognitive.microsoft.com");
const projects = client.getProjects()
I was also able to run it using the REST API, got HTTP 200.
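For reference, a sketch of that kind of REST call (the v3.0 training path and the Training-Key header follow the documented Custom Vision conventions; adjust to your API version):
# List projects directly against the Custom Vision training REST API
curl -H "Training-Key: <MY_API_KEY>" \
  "https://eastus.api.cognitive.microsoft.com/customvision/v3.0/training/projects"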
You can clone this Microsoft Cognitive Services sample (a UWP application) and check out the Computer Vision feature in the sample. You will have to set up App Settings in the app before you proceed.
You can follow the steps below to do this with Azure CLI commands from bash / Git Bash:
Create resource group
# Create resource group; replace the resource group name and location as required
az group create -n kiosk-cog-service-keys -l westus
Generate keys and echo the keys
Please note: jq needs to be installed to execute the commands below. If you do not want to use jq, you can just execute the az group deployment command and then search the outputs section of the returned JSON, where you will find the keys.
To get the keys with the default parameters execute the following commands
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command, and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-uri https://raw.githubusercontent.com/Microsoft/Cognitive-Samples-IntelligentKiosk/master/Kiosk/cognitive-keys-azure-deploy.json
If instead you want to modify the default parameters, you need to get the cognitive-keys-azure-deploy.json and cognitive-keys-azure-deploy.parameters.json files locally and execute the following commands:
# Change working directory to Kiosk
cd Kiosk
# The command below creates the cognitive service keys required by the KIOSK app, and then prints the keys. You can modify the tiers associated with the generated keys by modifying the parameter values
echo $(az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json) | jq '.properties.outputs'
# If you don't have jq installed you can execute the command, and manually search for the outputs section
# az group deployment create -n cog-keys-deploy -g kiosk-cog-service-keys --template-file cognitive-keys-azure-deploy.json --parameters @cognitive-keys-azure-deploy.parameters.json
Sample output of above commands is as follows:
{
"bingAugosuggestKey1": {
"type": "String",
"value": "cb4******************************"
},
"bingSearchKey1": {
"type": "String",
"value": "88*********************************"
},
"compVisionEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/vision/v1.0"
},
"compVisionKey1": {
"type": "String",
"value": "fa5**************************************"
},
"faceEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/face/v1.0"
},
"faceKey1": {
"type": "String",
"value": "87f7****************************************"
},
"textAnalyticsEndpoint": {
"type": "String",
"value": "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
},
"textAnalyticsKey1": {
"type": "String",
"value": "ba3*************************************"
}
}
Also note that you can follow similar steps to generate the Computer Vision key and endpoint only, and use them in your application.
The correct credentials object is this one:
import { ApiKeyCredentials } from "@azure/ms-rest-js";
Documentation updated, full discussion at #10362
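Wired into the question's snippet, that would look roughly like this (a sketch: ApiKeyCredentials takes a map of headers, and Custom Vision training expects the key in a Training-key header; verify the header name against the SDK docs):
import { ApiKeyCredentials } from "@azure/ms-rest-js";
import { TrainingAPIClient } from "@azure/cognitiveservices-customvision-training";

// Pass the key in the Training-key header instead of using CognitiveServicesCredentials
const credentials = new ApiKeyCredentials({ inHeader: { "Training-key": "<MY_API_KEY>" } });
const client = new TrainingAPIClient(credentials, "https://eastus.api.cognitive.microsoft.com");
const projects = await client.getProjects();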

Unable to update IAM policy in AI Platform Notebooks

I can't update the IAM policy on my AI Platform Notebooks instance.
I created a new AI Platform Notebooks instance:
gcloud beta notebooks instances create nb1 \
--vm-image-project=deeplearning-platform-release \
--vm-image-family=tf-latest-cpu \
--machine-type=n1-standard-4 \
--location=us-west1-b
When I try to apply a new IAM policy, I get an error:
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy
ERROR: (gcloud.beta.notebooks.instances.set-iam-policy) INTERNAL: An
internal error has occurred (506011f7-b62e-4308-9bde-10b97dd7b99c)
My policy looks like this:
{
"bindings": [
{
"members": [
"user:myuser#gmail.com",
],
"role": "roles/notebooks.admin"
}
],
"etag": "BwWlgdvxWT0=",
"version": 1
}
When I do a
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json
I get only:
ACAB
as there is no policy set.
Please take a look at the etag field:
An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
From documentation here
string (bytes format)
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.
Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
A base64-encoded string.
You can simply change your policy's etag to ACAB, which is the default one:
{
"bindings": [
{
"members": [
"user:myuser#gmail.com",
],
"role": "roles/notebooks.admin"
}
],
"etag": "ACAB",
"version": 1
}
Or you can use the add-iam-policy-binding command to create a new policy; then extract the etag using get-iam-policy, update your JSON file with it, and finally run set-iam-policy (sketched below).
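A sketch of that read-modify-write cycle, using the instance and location from the question (add-iam-policy-binding is available for gcloud beta notebooks instances; confirm with --help):
# Seed a binding; this also makes the server generate a fresh etag
gcloud beta notebooks instances add-iam-policy-binding nb1 --location=us-west1-b \
  --member="user:myuser@gmail.com" --role="roles/notebooks.admin"
# Read the current policy, including its etag, into a local file
gcloud beta notebooks instances get-iam-policy nb1 --location=us-west1-b --format=json > notebooks.policy
# Edit the bindings in notebooks.policy as needed, then apply with the matching etag
gcloud beta notebooks instances set-iam-policy nb1 --location=us-west1-b notebooks.policy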
You may also use this format:
{
"policy": {
"bindings": [
{
"members": [
"user:myuser#gmail.com"
],
"role": "roles/notebooks.admin"
}
],
"etag": "ACAB",
"version": 1
}
}

How to generate HTTPS Git credentials for AWS CodeCommit?

I use Terraform to create an IAM user.
How can I use Terraform to generate HTTPS Git credentials for AWS CodeCommit?
My code:
resource "aws_iam_user" "gitlab" {
name = "user-gitlab"
}
resource "aws_iam_policy_attachment" "gitlab" {
name = "iam-gitlab"
users = ["${aws_iam_user.gitlab.name}"]
policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitPowerUser"
}
Use data.external to execute a CLI script:
#!/bin/bash
# jenkins.sh: emit existing (or newly created) CodeCommit credentials as JSON for data.external
credentials=$(aws --profile dev iam list-service-specific-credentials \
--user-name jenkins --service-name codecommit.amazonaws.com --query 'ServiceSpecificCredentials[0]')
if [[ $credentials == "null" ]]; then
credentials=$(aws --profile dev iam create-service-specific-credential --user-name jenkins \
--service-name codecommit.amazonaws.com --query ServiceSpecificCredential)
fi
echo "$credentials"
Then the terraform:
data "external" "jenkins" {
program = ["${path.root}/jenkins.sh"]
}
resource "aws_ssm_parameter" "jenkins_cc_id" {
name = "${local.jenkins}/codecommit_https_user"
value = "${lookup(data.external.jenkins.result, "ServiceUserName", "")}"
}
resource "aws_ssm_parameter" "jenkins_cc_p" {
name = "${local.jenkins}/codecommit_https_pass"
value = "${lookup(data.external.jenkins.result, "ServicePassword", "")}"
}
Unfortunately, there appears to be no support for this API in Terraform. I recommend that you post a feature request in the AWS provider GitHub repo.
The feature has been implemented in version 4.1.0 of the AWS provider (https://github.com/hashicorp/terraform-provider-aws/blob/v4.1.0/CHANGELOG.md).
Take a look at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_service_specific_credential#service_specific_credential_id
resource "aws_iam_service_specific_credential" "example" {
service_name = "codecommit.amazonaws.com"
user_name = aws_iam_user.example.name
}
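To consume the generated credentials elsewhere (for example in the SSM parameters from the earlier answer), a hedged sketch of the exported attributes (service_user_name and service_password are the attributes documented on the registry page linked above):
output "codecommit_https_user" {
  value = aws_iam_service_specific_credential.example.service_user_name
}
output "codecommit_https_pass" {
  value     = aws_iam_service_specific_credential.example.service_password
  sensitive = true
}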

Artifactory delete all artifacts older than 6 months

How would you delete all artifacts that match a pattern (e.g. older than 6 months) from Artifactory?
Using either curl or the Go library.
The JFrog CLI takes a 'spec file' to search for artifacts; see the JFrog CLI documentation for information on spec files.
Create an AQL search query to find just the artifacts you want.
Say your AQL search query looks like this:
/tmp/foo.query
items.find(
{
"repo":"foobar",
"modified" : { "$lt" : "2016-10-18T21:26:52.000Z" }
}
)
And you could find the artifacts like so:
curl -X POST -u admin:<api_key> https://artifactory.example.com/artifactory/api/search/aql -T foo.query
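To keep the response small, AQL also lets you append an .include() clause listing only the fields you need (a sketch; .include is standard AQL syntax):
items.find(
{
"repo":"foobar",
"modified" : { "$lt" : "2016-10-18T21:26:52.000Z" }
}
).include("repo","path","name","modified")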
Then the spec file would be
/tmp/foo.spec
{
"files": [
{
"aql": {
"items.find": {
"repo": "foobar",
"$or": [
{
"$and": [
{
"modified": { "$lt": "2016-10-18T21:26:52.000Z"}
}
]
}
]
}
}
}
]
}
And you would use the JFrog CLI (which is built on the Go client library) like so:
jfrog rt del --spec /tmp/foo.spec --dry-run
Instead of modified, you can also do a relative date
"modified": { "$before":"6mo" }
If you get error 405 Method Not Allowed, verify that your API key or password is correct, and try using PUT instead of POST.
Install the JFrog CLI.
Configure the JFrog CLI to make API calls:
jfrog rt c Artifactory --url=https://<url>/artifactory --apikey=<api key> # you can generate an API key from your user profile
Create a spec file called artifactory.spec. You can change the find query as required. The query below will display artifacts with:
registry - docker-staging
path - *
download status = null # not downloaded
created before 6 months
{
"files": [
{
"aql": {
"items.find": {
"repo": {"$eq":"docker-staging"},
"path": {"$match":"*"},
"name": {"$match":"*"},
"stat.downloads":{"$eq":null},
"$or": [
{
"$and": [
{
"created": { "$before":"6mo" }
}
]
}
]
}
}
}
]
}
You can search for the packages available to delete with:
jfrog rt s --spec artifactory.spec
Run the delete command
jfrog rt del --spec artifactory.spec
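As in the first answer, it may be worth previewing first (the --dry-run flag appears in the jfrog rt del example above):
# Preview what would be deleted before running the real delete
jfrog rt del --spec artifactory.spec --dry-run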
We've built a tool for Artifactory artifact management (cleanup): https://github.com/devopshq/artifactory-cleanup
It would look like this:
# artifactory-cleanup.yaml
artifactory-cleanup:
server: https://repo.example.com/artifactory
# $VAR is auto populated from environment variables
user: $ARTIFACTORY_USERNAME
password: $ARTIFACTORY_PASSWORD
policies:
- name: Remove all files from repo-name-here older than 7 days
rules:
- rule: Repo
name: "reponame"
- rule: DeleteOlderThan
days: 7
Then you should run it to see what it will remove, and when you are ready, add the --destroy flag to it:
artifactory-cleanup
