IAM policy to allow access to DynamoDB console for specific tables

Is it possible to create an AWS IAM policy that provides access to the DynamoDB console only for specific tables? I have tried:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    }
  ]
}
but for a user with this policy attached, the DynamoDB tables list says Not Authorized (as it does when no policy is attached).
Setting "Resource" to "*" and adding a new statement like below lets the user perform <other actions> on FooTable and BarTable, but they can also see all other tables in the tables list.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    },
    {
      "Sid": "Stmt0000000002",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Sorry for the bad news, but the AWS Management Console requires both DescribeTable and ListTables permissions against the whole of DynamoDB in order to operate correctly.
However, there is a small workaround: you can give console users a URL that takes them directly to the table, which works fine for viewing and adding items, etc.
Just copy the URL from a user that has the correct permissions, e.g.:
https://REGION.console.aws.amazon.com/dynamodb/home?region=REGION#explore:name=TABLE-NAME

I found that, apart from DynamoDB, users also need wildcard permissions for CloudWatch and SNS (see "Example 5: Set Up Permissions Policies for Separate Test and Production Environments").
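For example, a statement along these lines can sit alongside the DynamoDB statements (a sketch; the Sid is made up, and the wildcard cloudwatch:* and sns:* actions are deliberately broad and can likely be narrowed to just the describe/list actions the console needs):
{
  "Sid": "ConsoleMetricsAndTopics",
  "Effect": "Allow",
  "Action": [
    "cloudwatch:*",
    "sns:*"
  ],
  "Resource": "*"
}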
You can also attach the AmazonDynamoDBReadOnlyAccess managed policy.

Related

Error: Error executing "ListBuckets" on Amazon S3 using the W3TC plugin for WordPress

I was trying to use the W3TC plugin for WordPress in order to use Amazon S3 as storage for my files.
I had no problem (well, after a little head-scratching anyway) creating a new IAM user and getting the connection from the plugin to S3. However, when I clicked "Test S3 Upload" it came back with the following error:
Error: Error executing "ListBuckets" on "https://s3.eu-west-2.amazonaws.com/"; AWS HTTP error: Client error: `GET https://s3.eu-west-2.amazonaws.com/` resulted in a `403 Forbidden` response: AccessDeniedAccess Denied3G27GE (truncated...) AccessDenied (client): Access Denied - AccessDeniedAccess Denied
The IAM user had the following policy attached, which is the standard policy given in pretty much all the examples I could find online of how to set up a user that allows uploads to an S3 bucket:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
}
]
}```
It seems that the "Test S3 Upload" button was trying to search for my bucket, rather than going directly there.
Allowing the IAM user to list all of my buckets at a level above the bucket itself using the following code solved the problem:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}```

Apply an Azure Policy to a management group using ARM

Goal: Deploy an Azure Policy to a management group so that when certain tags are missing from a resource within its remit, the specified tag is applied from the resource group.
Problem: Deploying this template to the management group results in "The template function 'RESOURCEGROUP' is not expected at this location."
There is a fairly plain structure similar to:
<Management Group>
  <Subscription 1>
    <Resource Group 1>
      <Resource A>
    <Resource Group 2>
      <Resource B>
  <Subscription 2>
    <Resource Group 3>
      <Resource C>
      <Resource D>
There is a fairly simple template using a nested policy definition:
......
"resources": [
{
"type": "Microsoft.Authorization/policyDefinitions",
"apiVersion": "2019-09-01",
"name": ".",
"properties": {
"policyType": "Custom",
"mode": "Indexed",
"displayName": ".",
"description": ".",
"metadata": {
"category": "Tags"
},
"policyRule": {
"if": {
"anyOf": [
{
"field": "tags['costCenter']",
"exists": "false"
},
{
"field": "tags['CostCenter']",
"notin": "[parameters('allowedCostCenter')]"
}
]
},
"then": {
"effect": "modify",
"details": {
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
],
"operations": [
{
"operation": "add",
"field": "tags['CostCenter']",
"value": "[resourcegroup().tags['CostCenter']]"
}
]
}
}
}
}
}
]
I realise that you cannot use resourcegroup() on items that are not within a resource group, but the guides suggested that using it within the nested policy definition, with mode set to "Indexed", should work.
I'm fairly sure the pipeline is correct, as I already have several audit policies deploying.
From experimenting in the portal, this looks like it should be possible.
There is a decent amount of reading around, but I have not read (or at least understood) anything that seems to help with this.
Is what I am trying to achieve possible? If so, can you see what I am doing wrong?
Thanks for your help!
You need to add an escape character if you want the resourcegroup() function to be evaluated as part of the Azure Policy, not as part of the management-group-scoped ARM template:
"value": "[[resourcegroup().tags['CostCenter']]"

How to "dependsOn" all copies of a resource?

How can I set up a dependsOn to depend on all copies of a certain resource? Hypothetically, I deploy 0..N websites and I need them all to complete before I deploy my Traffic Manager, because the TM needs their resource IDs.
Currently I'm only deploying two, so I'm just enumerating two items in the dependsOn array, but if I decide I want to deploy more copies (as determined by the [variables('tdfConfiguration')] array), it would be nice for dependsOn to figure this out dynamically.
"apiVersion": "[variables('apiVersion')]",
"type": "Microsoft.Resources/deployments",
"name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[0]['roleName'], '-tmprofile')]",
"dependsOn": [
"[concat(resourceGroup().Name, '-', variables('tdfConfiguration')[0]['roleName'], '-website')]",
"[concat(resourceGroup().Name, '-', variables('tdfConfiguration')[1]['roleName'], '-website')]"
],
Fairly easy: use the copy name. Suppose you have a resource like so:
{
  "name": "xxx",
  "type": "zzz",
  ...
  "copy": {
    "name": "myCopy",
    "count": <0..N>
  }
}
You can use the following dependsOn to depend on all copies:
"dependsOn": [ "myCopy" ]
Reading: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#depend-on-resources-in-a-loop
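Applied to the question's setup, a sketch (assuming the website deployments themselves are created in a copy loop over [variables('tdfConfiguration')]; the copy name websiteCopy is hypothetical, and properties are omitted for brevity):
{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Resources/deployments",
  "name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[copyIndex()]['roleName'], '-website')]",
  "copy": {
    "name": "websiteCopy",
    "count": "[length(variables('tdfConfiguration'))]"
  }
},
{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Resources/deployments",
  "name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[0]['roleName'], '-tmprofile')]",
  "dependsOn": [
    "websiteCopy"
  ]
}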

Select AWS RDS Aurora into S3 encrypted bucket with KMS

I'm trying to use the AWS RDS Aurora feature SELECT * INTO OUTFILE S3 :some_bucket/object_key, where some_bucket has default server-side encryption with KMS.
I'm receiving this error, which makes sense:
InternalError: (InternalError) (1871, u'S3 API returned error: Unknown:Unable to parse ExceptionName: KMS.NotFoundException Message: Invalid keyId')
How can I make this work, i.e. give Aurora the KMS key so that it can upload the file into S3?
As per the documentation
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html#AuroraMySQL.Integrating.SaveIntoS3.Statement
Compressed or encrypted files are not supported.
But you could carve out an exception in the bucket policy, using NotResource for a particular key prefix, select into that location, and from there trigger a Lambda to move the file to its actual path with encryption.
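A sketch of what that exception might look like, assuming the bucket policy otherwise denies unencrypted puts (the aurora-staging/ prefix, Sid, and bucket name are all hypothetical):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPutsExceptStaging",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::<bucket-name>/aurora-staging/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
A Lambda (not shown) would then copy objects out of aurora-staging/ to their final path with SSE-KMS applied.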
Aurora MySQL currently supports this. Follow the above official documentation for adding an IAM role to your RDS cluster, and make sure the role has policies granting both S3 read/write and KMS encryption/decryption, e.g.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "kms:ReEncrypt*",
        "kms:Encrypt",
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:GenerateDataKey*"
      ],
      "Resource": "arn:aws:kms:<region>:<account>:key/<key id>"
    }
  ]
}
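The cluster's role also needs a trust relationship letting RDS assume it; a minimal sketch:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "rds.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}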

Query regarding Artifactory APIs

I am trying to integrate with Artifactory using the REST APIs, and to do that I need to be able to do the following:
1. Get a filtered list of repositories based on tool type, e.g. only repositories that are NuGet or npm based. I tried using https://user.jfrog.io/user/api/repositories, but it doesn't return the type of each repository, so I cannot filter the list. I see that https://user.jfrog.io/user/api/storageinfo returns a repositoriesSummaryList, which includes the package type of each repository. Is it OK to use this API for getting and filtering the list of repositories?
2. Given a repository, I want to get the list of packages in it. The only way I could find was making a POST call to https://user.jfrog.io/user/api/search/aql with the body
items.find(
  {
    "repo": {"$eq": "myawesome-remotenugetrepo-cache"}
  }
)
Is there any way to get this information using a GET call instead of POST?
3. In Artifactory, different versions of the same package are treated as different packages. For example, the query in 2 returns something like this:
[
  {
    "repo": "myawesome-remotenugetrepo-cache",
    "path": ".",
    "name": "bootstrap.3.3.2.nupkg",
    "type": "file",
    "size": 264693,
    "created": "2016-05-27T16:07:12.138Z",
    "created_by": "admin",
    "modified": "2015-12-03T12:57:47.000Z",
    "modified_by": "admin",
    "updated": "2016-05-27T16:07:12.166Z"
  },
  {
    "repo": "myawesome-remotenugetrepo-cache",
    "path": ".",
    "name": "bootstrap.3.3.6.nupkg",
    "type": "file",
    "size": 290372,
    "created": "2016-05-27T10:55:47.576Z",
    "created_by": "admin",
    "modified": "2015-12-03T12:57:48.000Z",
    "modified_by": "admin",
    "updated": "2016-05-27T10:55:47.613Z"
  },
  {
    "repo": "myawesome-remotenugetrepo-cache",
    "path": ".",
    "name": "jQuery.1.9.1.nupkg",
    "type": "file",
    "size": 240271,
    "created": "2016-05-27T10:55:43.895Z",
    "created_by": "admin",
    "modified": "2015-12-07T15:58:51.000Z",
    "modified_by": "admin",
    "updated": "2016-05-27T10:55:43.930Z"
  }
]
As you can see, the result includes entries for both versions of bootstrap, 3.3.2 and 3.3.6. What I hoped was that the list of packages would just include bootstrap and jQuery. Is there any way to get such a list?
Also, given the package bootstrap, is there any way to query for its different versions?
Taking these in order:
1. Yes.
2. You can use the Folder Info GET request.
3. There are two questions there :) For the grouping, not really; you'll probably have to write a simple script to group by the common part of the artifact name. For the versions, you can get information about an artifact's versions once you have a repository layout set up; then you can use queries like Artifact Version Search.
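For reference, Folder Info is a plain GET against the storage API (e.g. https://user.jfrog.io/user/api/storage/myawesome-remotenugetrepo-cache/), and returns a response shaped roughly like this (the field values here are illustrative, not taken from a real instance):
{
  "uri": "https://user.jfrog.io/user/api/storage/myawesome-remotenugetrepo-cache",
  "repo": "myawesome-remotenugetrepo-cache",
  "path": "/",
  "created": "2016-05-27T10:00:00.000Z",
  "createdBy": "admin",
  "children": [
    {
      "uri": "/bootstrap.3.3.2.nupkg",
      "folder": false
    },
    {
      "uri": "/jQuery.1.9.1.nupkg",
      "folder": false
    }
  ]
}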