Unable to clone AWS CodeCommit with IAM Role - aws-codecommit

I have the following settings on my EC2 instance, but no luck.
There is the same issue on the AWS forum, but no answer.
~/.gitconfig:
[credential]
    helper = !aws --region us-east-1 codecommit credential-helper $@
    UseHttpPath = true
IAM Role Policy for the EC2 Instance:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:*"
      ],
      "Resource": "*"
    }
  ]
}
Then the following command works:
echo -e "protocol=https\npath=/v1/repos/my-repo\nhost=git-codecommit.us-east-1.amazonaws.com" | aws --region us-east-1 codecommit credential-helper get
However, with git, it doesn't.
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
Cloning into 'my-repo'...
fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo/': The requested URL returned error: 403
Any ideas?
UPDATE
After some investigation, I figured out that the attached IAM Role doesn't work for git operations, while an IAM User works fine.
| Type | list-repositories | credential-helper | git operation |
| IAM User with CodeCommitFullAccess | OK | OK | OK |
| IAM Role with CodeCommitFullAccess | OK | OK | NG |
I tried the following commands:
list-repositories
aws codecommit list-repositories
credential-helper
echo -e "protocol=https\npath=/v1/repos/my-repo\nhost=git-codecommit.us-east-1.amazonaws.com" | aws --region=us-east-1 codecommit credential-helper get
git operation
git clone --config credential.helper='!aws --region=us-east-1 codecommit credential-helper $@' --config credential.UseHttpPath=true https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
My awscli version is as follows:
$ aws --version
aws-cli/1.10.44 Python/2.7.5 Linux/3.10.0-327.10.1.el7.x86_64 botocore/1.4.34
UPDATE 2
My git and curl versions are as follows:
$ git --version
git version 1.8.3.1
$ curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.19.1 Basic ECC zlib/1.2.7 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz

You need to be using at least curl 7.33; the curl shown above is 7.29.0. From the CodeCommit documentation:
AWS CodeCommit requires curl 7.33 and later. However, there is a known issue with HTTPS and curl update 7.41.0.
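If in doubt, the installed curl can be checked against that minimum with a small shell sketch (the 7.33.0 floor comes from the documentation quoted above; `sort -V` does the version comparison):

```shell
#!/bin/sh
# Compare the local curl version against CodeCommit's documented minimum (7.33.0).
ver=$(curl --version | awk 'NR==1 {print $2}')
min=7.33.0
# sort -V orders version strings numerically; if the minimum sorts first
# (or they are equal), the installed version is new enough.
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  echo "curl $ver is new enough for CodeCommit"
else
  echo "curl $ver is too old; upgrade to 7.33 or later"
fi
```

With the curl 7.29.0 shown in the question, this prints the "too old" branch, which matches the 403 behaviour described.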

How can I determine the managed identity of the Azure VM a script is running on?

For post-processing with AzD (Azure Developer CLI), I need to authorize the managed identity of the Azure VM the script is currently running on for the subscription selected by AzD. How can I determine the managed identity of the VM with the help of the metadata endpoint?
I created this script, authorize-vm-identity.sh, which determines the VM's resourceId (which could be in a different subscription than the actual resources managed by AzD) from the metadata endpoint and then obtains the managed identity's principalId to make the actual role assignment with:
#!/bin/bash
source <(azd env get-values | sed 's/AZURE_/export AZURE_/g')

# Ask the instance metadata service for this VM's resource ID
AZURE_VM_ID=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq -r '.compute.resourceId')

if [ -n "$AZURE_VM_ID" ]; then
    AZURE_VM_MI_ID=$(az vm show --id "$AZURE_VM_ID" --query 'identity.principalId' -o tsv)
fi

if [ -n "$AZURE_VM_MI_ID" ]; then
    az role assignment create --role Contributor --assignee "$AZURE_VM_MI_ID" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
fi
Prerequisites:
Azure CLI
jq
curl
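One subtlety the script above glosses over: the VM's resourceId encodes the subscription the VM itself lives in, which may differ from the AZURE_SUBSCRIPTION_ID selected by AzD. A quick sketch of pulling that subscription out of the resourceId (the GUID and resource names below are made-up placeholders):

```shell
#!/bin/sh
# Hypothetical resourceId, shaped like what the metadata endpoint returns.
AZURE_VM_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm"
# The subscription GUID is the second path segment (field 3 when split on "/",
# because the string starts with a slash).
VM_SUBSCRIPTION=$(printf '%s' "$AZURE_VM_ID" | cut -d/ -f3)
echo "VM lives in subscription: $VM_SUBSCRIPTION"
```

Comparing this value with $AZURE_SUBSCRIPTION_ID is a cheap way to confirm you are in the cross-subscription case the script was written for.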

Curl connection refused on circleci but works on local machine

I have a CircleCI pipeline, and after deployment I run a smoke test to check the application status. This is the code:
smoke-test:
  docker:
    - image: python:3.10.5-alpine3.16
  steps:
    - checkout
    - run:
        name: Install dependencies
        command: |
          apk add --update --no-cache curl aws-cli tar gzip jq
    - run:
        name: Backend smoke test
        command: |
          export BACKEND_IP=$(aws ec2 describe-instances \
            --filters "Name=tag:Name,Values=UdaPeople-backend-${CIRCLE_WORKFLOW_ID:0:5}" \
            'Name=instance-state-name,Values=running' \
            --query 'Reservations[*].Instances[*].PublicIpAddress' \
            --output text)
          export API_URL="http://${BACKEND_IP}:3030/api/status"
          echo "${API_URL}"
          wget "${API_URL}"
          if curl -s -v "${API_URL}" | grep "ok"
          then
            return 0
          else
            return 1
          fi
More details:
The server I am trying to query is an EC2 instance with a security group that allows all IP addresses on port 3030.
I downloaded the container image I am using in CircleCI and tested the curl and wget commands locally. They work perfectly.
I have made more than 30 deployments, and the result is the same.
The error output from CircleCI shows that it actually hits the IP address.
I increased the timeout seconds and also set the retries to 5.
What could I be missing?
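One robustness tweak worth trying, independent of the root cause: retry the probe a few times before failing, in case the instance is still booting when the smoke test runs. A minimal sketch, where probe() is a local stub standing in for the real `curl -sf --connect-timeout 5 "$API_URL"` call from the job above:

```shell
#!/bin/sh
# probe() is a stub so the sketch is self-contained; in the real job it
# would be: curl -sf --connect-timeout 5 "$API_URL"
probe() { printf '{"status":"ok"}'; }

attempts=0
until probe | grep -q '"ok"'; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    echo "smoke test failed after $attempts attempts"
    exit 1
  fi
  sleep 10
done
echo "smoke test passed after $attempts retries"
```

The same effect can often be had with curl's own `--retry` flags, but an explicit loop also covers the case where the TCP connection is refused outright.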

Cannot push asp.net container to aws ecr

I have an asp.net project hosted on GitLab, and I try to build and push it to AWS ECR.
The build completes successfully, but I get this error on the push.
Here is a screen of the permissions that I have on the IAM user,
and the pipeline .yml file:
step-deploy-development:
  stage: development
  image: docker:stable
  services:
    - docker:18.09.7-dind
  before_script:
    # - export DOCKER_HOST="tcp://localhost:2375"
    # - docker info
    - export DYNAMIC_ENV_VAR=DEVELOPMENT
    - apk update
    - apk upgrade
    - apk add util-linux pciutils usbutils coreutils binutils findutils grep
    - apk add python3 python3-dev py3-pip
    - pip install awscli
  script:
    - echo setting up env $DYNAMIC_ENV_VAR
    - $(aws ecr get-login --no-include-email --region eu-west-2)
    - docker build --build-arg ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT_DEV} --build-arg DB_CONNECTION=${DB_CONNECTION_DEV} --build-arg CORS_ORIGINS=${CORS_ORIGINS_DEV} --build-arg SERVER_ROOT_ADDRESS=${SERVER_ROOT_ADDRESS_DEV} -f src/COROI.Web.Host/Dockerfile -t $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA .
    - docker push $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA
    - cd deployment
    - sed -i -e "s/TAG/$CI_COMMIT_SHA/g" ecs_task_dev.json
    - aws ecs register-task-definition --region $ECS_REGION --cli-input-json file://ecs_task_dev.json >> temp.json
    - REV=`grep '"revision"' temp.json | awk '{print $2}'`
    - aws ecs update-service --cluster $ECS_DEV_CLUSTER --service $ECS_DEV_SERVICE --task-definition $ECS_DEV_TASK --region $ECS_REGION
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main
Where could the problem be?
You need to set the ECR repository policy to allow your IAM user to push and pull images, as explained in Setting a repository policy statement. For example, the following repository policy allows the IAM user to push and pull images to and from a repository:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPushPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::account-id:user/iam-user"
        ]
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
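Separately from the policy, note that `aws ecr get-login` (used in the pipeline above) only exists in AWS CLI v1; it was removed in v2, where the login step becomes `get-login-password` piped into `docker login`. A sketch with a placeholder account id (the registry hostname is always built as `<account>.dkr.ecr.<region>.amazonaws.com`):

```shell
#!/bin/sh
ACCOUNT_ID=123456789012   # placeholder account id
REGION=eu-west-2
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "registry: $REGISTRY"

# AWS CLI v2 login (shown as a comment so the sketch runs without credentials):
#   aws ecr get-login-password --region "$REGION" \
#     | docker login --username AWS --password-stdin "$REGISTRY"
```

If the pipeline image ever moves to awscli v2, the `$(aws ecr get-login ...)` line will fail outright rather than produce a permissions error.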

Error - Artifactory response: 405 Method Not Allowed

I'm trying to download a file from my JFrog Artifactory instance to my local machine with this CLI command:
jfrog rt dl --user *username* --password *password* --url https://*domain*.jfrog.io/artifactory/*my-folder-name*/ --flat=false * c:/jfrog/
I'm getting:
Log path: C:\Users\Administrator\.jfrog\logs\jfrog-cli.2020-08-19.18-38-11.3780.log
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
[Error] Download finished with errors, please review the logs.
From the logs:
[Error] Artifactory response: 405 Method Not Allowed
but when I run jfrog rt ping I get "OK".
The reason you are getting a 405 is that JFrog CLI tries to ping Artifactory using the URL you passed, https://*domain*.jfrog.io/artifactory/*my-folder-name*/, and the folder name must not be part of --url. To overcome this, pass the server root as --url and move the folder into the download pattern:
jfrog rt dl --user *username* --password *password* --url https://*domain*.jfrog.io/artifactory/ "<repository_key>/*" --flat=false c:/jfrog/
For example, if I want to download artifacts from the "jars" folder of the "generic-local" repository, the JFrog CLI command would be:
$ jfrog rt dl --user admin --password password --url http://localhost:8081/artifactory "generic-local/jars/" --flat=false
It should download all the artifacts under "generic-local/jars" into the current directory.
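The distinction matters because the CLI composes the request itself from the server root plus the repository path. A tiny sketch of the split (the domain and repository names are illustrative placeholders):

```shell
#!/bin/sh
# --url must stop at the Artifactory root; the repository and folder go into
# the source pattern instead.
BASE_URL="https://example.jfrog.io/artifactory"   # server root only
PATTERN="generic-local/jars/*"                    # <repo>/<folder>/<files>
echo "jfrog rt dl --url $BASE_URL \"$PATTERN\" --flat=false"
```

Folding the folder into --url makes the CLI ping a path that does not accept that request, which is where the 405 comes from.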

Openstack-Folsom keystone script fail to configure

Based on this guide https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst#openstack-folsom-install-guide , I tried running these scripts, but they fail despite me setting HOST_IP and EXT_HOST_IP.
./keystone_basic.sh
./keystone_endpoints_basic.sh
Below is the error log received:
keystone: error: unrecognized arguments: service id of 18ea5916544429bed2c84af0303077
I have provided information such as tenant_name, tenant_id and so on in a source file, but the arguments the script passes are not recognized by the system. Below are the details of the OS I use: I created VMs instead of using physical machines, installed with Ubuntu 12.04 LTS.
Please advise on how to tackle this issue.
Thanks.
I had the same problem. I am using Ubuntu 12.04 LTS. After running:
keystone help user-create
the option appears as follows:
Optional arguments:
...
--service_id <service-id>
Change --service-id to --service_id with a global replace [using the command line]:
# sed -i 's/--service-id/--service_id/g' /path/to/script.sh
Then restart keystone and recreate its database entries:
mysql -u root -ppassword -e "drop database keystone"
mysql -u root -ppassword -e "create database keystone"
mysql -u root -ppassword -e "grant all privileges on keystone.* TO 'keystone'@'%' identified by 'password'"
mysql -u root -ppassword -e "grant all privileges on keystone.* TO 'keystone'@'localhost' identified by 'password'"
service keystone restart
keystone-manage db_sync
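The sed replacement can be rehearsed on a throwaway copy first to confirm it only touches the flag spelling (the file path and command line here are illustrative):

```shell
#!/bin/sh
# Demo the global replace on a scratch file rather than the real script.
demo=$(mktemp)
printf 'keystone user-create --service-id abc123\n' > "$demo"
sed -i 's/--service-id/--service_id/g' "$demo"
cat "$demo"    # now reads: keystone user-create --service_id abc123
rm -f "$demo"
```

Once the scratch run looks right, point the same sed invocation at the actual keystone scripts.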
