{
  "Sid": "Some_ID",
  "Effect": "Allow",
  "Principal": {
    "Service": "sqs.amazonaws.com"
  },
  "Action": [
    "kms:GenerateDataKey",
    "kms:Decrypt"
  ],
  "Resource": "*"
}
Messages are encrypted at rest so that unauthorised users cannot read them, and SQS decrypts them automatically for authorised users/queues.
gusto2: the difference is that the data in the underlying storage will be encrypted, but the client itself won't see that.
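The statement above is only a fragment of a key policy. As a sketch (the Sid is a placeholder, and note that a real key policy also needs a statement granting your key administrators access, or you can lock yourself out), the complete document might look like this when assembled programmatically:

```python
import json

def sqs_sse_key_policy(sid="Allow-SQS-SSE"):
    """Build the full key-policy document embedding the statement above:
    it lets the SQS service generate data keys and decrypt with this CMK."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": sid,  # placeholder Sid, not from the original post
                "Effect": "Allow",
                "Principal": {"Service": "sqs.amazonaws.com"},
                "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
                "Resource": "*",
            }
        ],
    }

print(json.dumps(sqs_sse_key_policy(), indent=2))
```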
Related
I have an AWS ECS EC2 instance in one account, and it is trying to access DynamoDB tables in another AWS account. I am not using an AWS access key and secret; instead, I am using an AWS IAM role attached to the EC2 instance.
This is a .NET project, and my appsettings.Staging.json is:
{
  "aws": {
    "region": "ap-southeast-1"
  },
  "DynamoDbTables": {
    "BenefitCategory": "stag_table1",
    "Benefit": "stag_table2"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}
Here is my inline policy attached to the "ecsInstanceRole" role
("xxxxxxxxxxxxx" is the AWS account in which the DynamoDB tables reside):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:DescribeTable",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteTable",
        "dynamodb:UpdateTable",
        "dynamodb:GetRecords"
      ],
      "Resource": [
        "arn:aws:dynamodb:ap-southeast-1:xxxxxxxxxxx:table/stag_table1",
        "arn:aws:dynamodb:ap-southeast-1:xxxxxxxxxxx:table/stag_table2"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "dynamodb:ListGlobalTables",
        "dynamodb:ListTables"
      ],
      "Resource": "*"
    }
  ]
}
In this setup the API is trying to connect to the table in the same account. I have added the other AWS account as a trusted entity in the ecsInstanceRole role, but it is still not working.
Is there any way for the AWS SDK or the ECS/EC2 instance to automatically find the DynamoDB table in the other AWS account?
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
A role will be needed in both accounts, along with trust policies allowing the EC2 service to assume those roles. The role policy in the Destination account must grant permissions on the DynamoDB table.
The Source EC2 instance will then have to assume that role to get access to the table.
Grant the EC2 Server access to assume the role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "abcdTrustPolicy",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": "arn:aws:iam::SOURCE_ACCOUNT_ID:role/NAME_A"
      }
    }
  ]
}
Allowing NAME_A Instance Profile Role to Switch to a Role in Another Account
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowToAssumeCrossAccountRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::DESTINATION_ACCOUNT_ID:role/ACCESS_DYNAMODB"
    }
  ]
}
Role granting access to Dynamodb named ACCESS_DYNAMODB
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDDBActions",
      "Effect": "Allow",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": "*"
    }
  ]
}
Trust policy in Destination
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DestinationTrustPolicy",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }
    }
  ]
}
I am trying to deploy my sample application code via AWS CodeDeploy for Bitbucket.
I used this tutorial and followed all the steps. The trust relationship for the role is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "connectionId"
        }
      }
    }
  ]
}
While creating a deployment group, I get a "can't assume role" error when I select the above role as the Service role ARN.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "codedeploy.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
But when I use the trust relationship above instead, I can create the deployment group; the AWS integration on Bitbucket then stops working and throws an error asking me to add sufficient permissions.
Neither of your posted roles grants permission to CodeCommit or S3.
As per the tutorial you linked, you must provide access to CodeCommit and S3. These are likely the permissions you are missing:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:PutObject"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["codedeploy:*"],
      "Resource": "*"
    }
  ]
}
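One possible way to resolve the original conflict is worth noting: a role's trust policy may contain multiple statements, so the service-principal trust and the Bitbucket external-ID trust do not have to replace each other. A sketch (in Python, with the account ID and external ID as placeholders; whether one role should serve both purposes is a design choice) that assembles such a combined trust policy:

```python
import json

def merged_trust_policy(account_id, external_id):
    """Combine the EC2/CodeDeploy service-principal trust with the
    cross-account external-ID trust in ONE trust-policy document."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": ["ec2.amazonaws.com", "codedeploy.amazonaws.com"]
                },
                "Action": "sts:AssumeRole",
            },
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
            },
        ],
    }

print(json.dumps(merged_trust_policy("accountId", "connectionId"), indent=2))
```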
I have a SNS topic in account "A", which is a trigger of a Lambda function in that same account. This Lambda function sends a message to a private Slack channel.
This works fine, as long as the CloudWatch alarm is in the same account (Account A).
I also want to do this from "Account B", but there I get:
{
  "error": "Resource: arn:aws:cloudwatch:REGION:ACCOUNT_B:alarm:ALARM is not authorized to perform: SNS:Publish on resource: arn:aws:sns:REGION:ACCOUNT_A:TOPIC",
  "actionState": "Failed",
  "notificationResource": "arn:aws:sns:REGION:ACCOUNT_A:TOPIC",
  "stateUpdateTimestamp": 1495732611020,
  "publishedMessage": null
}
So how do I allow the CloudWatch Alarm ARN access to publish to the topic?
Trying to add the policy fails with:
Invalid parameter: Policy Error: PrincipalNotFound (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 7f5c202e-4784-5386-8dc5-718f5cc55725)
I see that someone else has had the same problem (years ago!) at https://forums.aws.amazon.com/thread.jspa?threadID=143607, but it was never answered.
Update:
Trying to solve this, I'm now using a local SNS topic, which then forwards to the remote account. However, I'm still getting:
"error": "Resource: arn:aws:cloudwatch:REGION:LOCAL_ACCOUNT:alarm:ALARM is not authorized to perform: SNS:Publish on resource: arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC"
This, with this SNS policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaAccountToSubscribe",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::REMOTE_ACCOUNT:root"
      },
      "Action": [
        "sns:Subscribe",
        "sns:Receive"
      ],
      "Resource": "arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC"
    },
    {
      "Sid": "AllowLocalAccountToPublish",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "LOCAL_ACCOUNT"
        }
      }
    }
  ]
}
If I manually send a message to the topic using "Publish to topic", I can see that it reaches the Lambda function, so everything works except the CloudWatch access rights.
By trial and error, I discovered it was the Condition that didn't work, for some reason. I'm not sure why it didn't see the source account...
A more extensive policy made it work:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaAccountToSubscribe",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::REMOTE_ACCOUNT:root"
      },
      "Action": [
        "sns:Subscribe",
        "sns:Receive"
      ],
      "Resource": "arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC"
    },
    {
      "Sid": "AllowLocalAccountToPublish",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "LOCAL_ACCOUNT"
        }
      }
    },
    {
      "Sid": "AllowCloudWatchAlarmsToPublish",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:REGION:LOCAL_ACCOUNT:TOPIC",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:cloudwatch:REGION:LOCAL_ACCOUNT:alarm:*"
        }
      }
    }
  ]
}
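The statement that finally made it work can be generated programmatically. A sketch (in Python; the region, account ID, and topic name below are illustrative placeholders) of building the alarm-publish statement, which matches on the alarm's source ARN via ArnLike rather than on an account principal:

```python
import json

def cloudwatch_alarm_publish_statement(region, account_id, topic_name):
    """Statement letting any CloudWatch alarm in the given account and
    region publish to the topic, matched by source ARN, not principal."""
    topic_arn = f"arn:aws:sns:{region}:{account_id}:{topic_name}"
    return {
        "Sid": "AllowCloudWatchAlarmsToPublish",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "sns:Publish",
        "Resource": topic_arn,
        "Condition": {
            "ArnLike": {
                "AWS:SourceArn": f"arn:aws:cloudwatch:{region}:{account_id}:alarm:*"
            }
        },
    }

# Illustrative values only:
print(json.dumps(
    cloudwatch_alarm_publish_statement("eu-west-1", "111122223333", "TOPIC"),
    indent=2,
))
```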
I have a 3-node cluster with SX running on Ubuntu 14.04.5 LTS on ports 80 & 443, and LibreS3 running on the same servers on ports 8008 & 8443.
libres3 1.3-1-1~wheezy
sx 2.1-1-1~wheezy
s3cmd info s3://test-dev

s3://test-dev/ (bucket):
   Location:         us-east-1
   Payer:            BucketOwner
   Expiration Rule:  none
   policy:           {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::test-dev/"
       }
     ]
   }
   cors:             Option PUT POST GET HEAD 3000 *
   ACL:              admin: FULL_CONTROL
   ACL:              test: FULL_CONTROL
I'm trying to put files from a Meteor application using the Slingshot package: https://github.com/CulturalMe/meteor-slingshot
but getting
'Access Denied':
Sep 6 11:10:46: main: Replying with code 403: Access Denied (libres3_1ff0aa644987498111ea4c91bca7b532_13817_587_1473174646.21, AccessDenied)
I can use S3 Browser and Cloudberry Explorer with the same credentials and access the buckets no problem.
Any thoughts or directions to solve putting files from the web?
Thanks,
-Matt
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-dev/*"
    }
  ]
}
You need to add "*" after "test-dev/"
I get this error in WordPress: "Error retrieving a list of your S3 buckets from AWS: Access Denied." I have a write policy for the IAM user. I don't know where I am going wrong. Please help me.
My IAM policy is here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxxxx",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
First of all, in order to list all your buckets, you need a statement that allows your IAM user to perform "s3:ListAllMyBuckets" across the entire S3 account:
{
  "Effect": "Allow",
  "Action": "s3:ListAllMyBuckets",
  "Resource": "*",
  "Condition": {}
}
It also seems you have trouble with bucket listing because the actions you are trying to allow:
"Action": [
  "s3:GetBucketLocation",
  "s3:ListBucket",
  "s3:ListBucketMultipartUploads"
],
must be applied to the bucket itself:
"Resource": "arn:aws:s3:::bucketname",
whereas you are applying them to the bucket's contents:
"Resource": "arn:aws:s3:::bucketname/*",
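This bucket-level vs object-level distinction can be made mechanical. A small hypothetical helper (the function name is ours, for illustration) showing which resource ARN each class of action needs:

```python
def s3_resource_arns(bucket_name):
    """Bucket-level actions (ListBucket, GetBucketLocation,
    ListBucketMultipartUploads) target the bucket ARN itself;
    object-level actions (GetObject, PutObject) target bucket/*."""
    return {
        "bucket": f"arn:aws:s3:::{bucket_name}",
        "objects": f"arn:aws:s3:::{bucket_name}/*",
    }

arns = s3_resource_arns("bucketname")
print(arns["bucket"])   # use for ListBucket and friends
print(arns["objects"])  # use for GetObject/PutObject
```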
Anyway, please try the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxxxx",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::bucketname",
      "Condition": {}
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*",
      "Condition": {}
    }
  ]
}
I tested it on my site and it works.
If you have any other questions, feel free to ask.
Thanks,
Vlad