Error: Error executing "ListBuckets" on Amazon S3 using W3TC plugin for WordPress

I was trying to use the W3TC plugin for WordPress in order to use Amazon S3 as storage for my files.
I had no problem (well, after a little head-scratching anyway) creating a new IAM user and getting the plugin connected to S3. However, when I clicked on "Test S3 Upload" it came back with the following error:
Error: Error executing "ListBuckets" on "https://s3.eu-west-2.amazonaws.com/"; AWS HTTP error: Client error: `GET https://s3.eu-west-2.amazonaws.com/` resulted in a `403 Forbidden` response: AccessDeniedAccess Denied3G27GE (truncated...) AccessDenied (client): Access Denied - AccessDeniedAccess Denied
The IAM user had the following policy attached, which is the standard policy given in pretty much all examples I could find online of how to set up a user that allows uploads to an S3 bucket:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
}
]
}```

It seems that the "Test S3 Upload" button was trying to list my buckets (the ListBuckets call in the error) rather than going directly to the bucket.
Allowing the IAM user to list all of my buckets, at a level above the bucket itself, with the following policy solved the problem:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}```

Related

Getting error "FeatureSwitchNotEnabled" while creating MySQL through ARM template

Hi All,
When I create MySQL through an ARM template, there are two tasks on which I get FeatureSwitchNotEnabled:
1 - Microsoft.DBforMySQL/servers/firewallRules
2 - Microsoft.DBforMySQL/servers/virtualNetworkRules
Do we need to enable any setting to configure these through an ARM template? All other 20-30 operations on MySQL completed successfully, except the above two.
Any help would be appreciated.
Here is a snippet:
{
"type": "Microsoft.DBforMySQL/servers/firewallRules",
"apiVersion": "2017-12-01",
"name": "[concat(parameters('DEV-SIT_mysql_name'), '/AllowAllWindowsAzureIps')]",
"dependsOn": [
"[resourceId('Microsoft.DBforMySQL/servers', parameters('DEV-SIT_mysql_name'))]"
],
"properties": {
"startIpAddress": "0.0.0.0",
"endIpAddress": "0.0.0.0"
}
},

Apply an Azure Policy to a management group using ARM

Goal: Deploy an Azure Policy to a management group so that, when certain tags are missing from a resource within its remit, the specified tag is applied from the resource group.
Problem: Deploying this template to the management group results in "The template function 'RESOURCEGROUP' is not expected at this location."
There is a fairly plain structure similar to:
<Management Group> - <Subscription 1> - <Resource Group 1> - <Resource A>
                                      - <Resource Group 2> - <Resource B>
                   - <Subscription 2> - <Resource Group 3> - <Resource C>
                                                           - <Resource D>
There is a fairly simple template using a nested policy definition:
......
"resources": [
{
"type": "Microsoft.Authorization/policyDefinitions",
"apiVersion": "2019-09-01",
"name": ".",
"properties": {
"policyType": "Custom",
"mode": "Indexed",
"displayName": ".",
"description": ".",
"metadata": {
"category": "Tags"
},
"policyRule": {
"if": {
"anyOf": [
{
"field": "tags['costCenter']",
"exists": "false"
},
{
"field": "tags['CostCenter']",
"notin": "[parameters('allowedCostCenter')]"
}
]
},
"then": {
"effect": "modify",
"details": {
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
],
"operations": [
{
"operation": "add",
"field": "tags['CostCenter']",
"value": "[resourcegroup().tags['CostCenter']]"
}
]
}
}
}
}
}
]
I realise that you cannot use "resourcegroup()" on items that are not within a resource group, but the guides suggested that using this within the nested template, and on "indexed" resources, should work.
I'm fairly sure the pipeline is correct, as I already have several audit policies deploying.
From experimenting in the portal, this looks like it should be possible.
There is a decent amount of reading around, but I have not found (or at least understood) anything that seems to help with this.
Is what I am trying to achieve possible? If so, can you see what I am doing wrong?
Thanks for your help!
You need to add an escape character if you want the resourcegroup() function to be evaluated as part of the Azure Policy rather than as part of the MG-scope ARM template:
"value": "[[resourcegroup().tags['CostCenter']]"

How do I prevent Microsoft.Automation/automationAccounts/Compilationjobs from always running in an ARM deployment?

My ARM template is below; it is a nested template inside a bigger ARM template. For some reason the DSC compilation job always runs on each deployment. I expected it not to run if it had already run before. How do I control this behavior? I tried using "incrementNodeConfigurationBuild": "false" but it did not do the trick.
{
  "name": "WorkerNodeDscConfiguration",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "resourceGroup": "[parameters('automationAccountRGName')]",
  "dependsOn": [],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.1",
      "resources": [
        {
          "apiVersion": "2015-10-31",
          "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
          "name": "[parameters('automationAccountName')]",
          "type": "Microsoft.Automation/automationAccounts",
          "properties": {
            "sku": {
              "name": "Basic"
            }
          },
          "tags": {},
          "resources": [
            {
              "name": "workernode",
              "type": "configurations",
              "apiVersion": "2018-01-15",
              "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
              ],
              "properties": {
                "state": "Published",
                "overwrite": "false",
                "incrementNodeConfigurationBuild": "false",
                "Source": {
                  "Version": "1.2",
                  "type": "uri",
                  "value": "[parameters('WorkerNodeDSCConfigURL')]"
                }
              }
            },
            {
              "name": "[guid(resourceGroup().id, deployment().name)]",
              "type": "Compilationjobs",
              "apiVersion": "2018-01-15",
              "tags": {},
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'),'/Configurations/workernode')]"
              ],
              "properties": {
                "configuration": {
                  "name": "workernode"
                },
                "incrementNodeConfigurationBuild": "false",
                "parameters": {
                  "WebServerContentURL": "[parameters('WebServerContentURL')]"
                }
              }
            }
          ]
        }
      ]
    }
  }
}
In short, AFAIK you should be able to control this behaviour with 'condition'.
To explain it in detail: the DSC compilation jobs resource always runs on each deployment because, when we use the compilation jobs resource (i.e., Microsoft.Automation/automationAccounts/compilationjobs) in the ARM template, IMHO what it does behind the scenes is essentially click the 'Compile' button of the DSC configuration.
If you click that 'Compile' button, the compilation job runs even if the job has already been compiled. You can check the same thing manually as well.
So AFAIK that is the reason the compilation job always runs on each deployment.
What you could do is update your ARM template with 'condition' (for more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-templates-resources#condition and https://learn.microsoft.com/en-us/azure/architecture/building-blocks/extending-templates/conditional-deploy) and then wrap your deployment in the sample piece of PowerShell code below. It determines whether the compilation job for the particular DSC configuration has already completed, and then deploys the template by passing an inline parameter value, or by updating the condition parameter in the parameters file with a 'new' or 'existing' value accordingly. (For more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy#pass-parameter-values.)
$DscCompilationJob = Get-AzAutomationDscCompilationJob -AutomationAccountName AUTOMATIONACCOUNTNAME -ResourceGroupName RESOURCEGROUPNAME |
    Sort-Object -Descending -Property CreationTime |
    Select-Object -First 1 |
    Select-Object Status
$DscCompilationJobStatus = $DscCompilationJob.Status
if ($DscCompilationJobStatus -ne "Completed") {
    $DscCompilationJobStatusInlineParameter = "new"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters file with the value "new" and use the command below to deploy the template
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
} else {
    $DscCompilationJobStatusInlineParameter = "existing"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters file with the value "existing" and use the command below to deploy the template
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
}
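On the template side, a minimal sketch of what the 'condition' element might look like on the compilation job resource, keyed off the same parameter the inline deployment above passes in (the parameter name 'exampleString' and its 'new'/'existing' values are just the ones used in that PowerShell sample, and the nested template would need to declare that parameter):
{
  "name": "[guid(resourceGroup().id, deployment().name)]",
  "type": "Compilationjobs",
  "apiVersion": "2018-01-15",
  "condition": "[equals(parameters('exampleString'), 'new')]",
  "dependsOn": [
    "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
    "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'),'/Configurations/workernode')]"
  ],
  "properties": {
    "configuration": {
      "name": "workernode"
    },
    "parameters": {
      "WebServerContentURL": "[parameters('WebServerContentURL')]"
    }
  }
}
With the condition evaluating to false, ARM simply skips the resource, so no new compilation is triggered.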
And regarding the incrementNodeConfigurationBuild property: IMHO this property only concerns whether a new build version of the node configuration is created. That is, when incremental node configuration build is set to false, it does not supersede the existing node configuration by creating a new one named CONFIGNAME[<2>] (where the version number is incremented based on the version already present).
Hope this helps!! Cheers!! :)

Select AWS RDS Aurora into S3 encrypted bucket with KMS

I'm trying to use the AWS RDS Aurora feature SELECT * INTO OUTFILE S3 :some_bucket/object_key, where some_bucket has default server-side encryption with KMS.
I'm receiving this error, which makes sense:
InternalError: (InternalError) (1871, u'S3 API returned error: Unknown:Unable to parse ExceptionName: KMS.NotFoundException Message: Invalid keyId')
How can I make this work, i.e. give Aurora the KMS key so that it can upload a file into S3?
As per the documentation
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html#AuroraMySQL.Integrating.SaveIntoS3.Statement
Compressed or encrypted files are not supported.
But you could create an exception policy for the bucket, using "NotResource" for a particular prefix, select into that location, and from there trigger a Lambda to move the file to the actual path with encryption; a rough sketch follows below.
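One way to read that suggestion, assuming encryption is enforced through a bucket policy rather than (or in addition to) default bucket encryption; the bucket name and staging prefix below are hypothetical:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPutsOutsideStaging",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::some_bucket/aurora-staging/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}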
Aurora MySQL currently supports this. Follow the above official documentation for adding an IAM role to your RDS cluster, and make sure the role has policies granting both S3 read/write and KMS encryption/decryption, e.g.
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "s3:AbortMultipartUpload",
    "s3:DeleteObject",
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:ListMultipartUploadParts",
    "s3:PutObject"
  ],
  "Resource": "arn:aws:s3:::<bucket-name>/*"
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::<bucket-name>"
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "kms:ReEncrypt*",
    "kms:Encrypt",
    "kms:DescribeKey",
    "kms:Decrypt",
    "kms:GenerateDataKey*"
  ],
  "Resource": "arn:aws:kms:<region>:<account>:key/<key id>"
}

IAM policy to allow access to DynamoDB console for specific tables

Is it possible to create an AWS IAM policy that provides access to the DynamoDB console only for specific tables? I have tried:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    }
  ]
}
but for a user with this policy attached, the DynamoDB tables list says Not Authorized (as it does when no policy is attached).
Setting "Resource" to "*" and adding a new statement like below lets the user perform <other actions> on FooTable and BarTable, but they can also see all other tables in the tables list.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    },
    {
      "Sid": "Stmt0000000002",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Sorry for the bad news, but the AWS Management Console requires both DescribeTable and ListTables permissions against the whole of DynamoDB in order to operate correctly.
However, there is a small workaround... You can give console users a URL that takes them directly to the table, which works fine for viewing and adding items, etc.
Just copy the URL from a user that has correct permissions, eg:
https://REGION.console.aws.amazon.com/dynamodb/home?region=REGION#explore:name=TABLE-NAME
I found that, apart from DynamoDB, users need to have wildcard permissions for CloudWatch and SNS (consider "Example 5: Set Up Permissions Policies for Separate Test and Production Environments"); a rough sketch of such a statement is below.
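A sketch of the extra statement that observation implies, added alongside Stmt0000000002 above (the full wildcards here are an assumption based on "wildcard permissions"; you may want to narrow them to the read-only CloudWatch and SNS actions the console actually uses):
{
  "Sid": "Stmt0000000003",
  "Action": [
    "cloudwatch:*",
    "sns:*"
  ],
  "Effect": "Allow",
  "Resource": "*"
}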
You may also add the AmazonDynamoDBReadOnlyAccess managed policy to the user's attached policies.
