I'm trying to use the AWS RDS Aurora feature SELECT * INTO OUTFILE S3 :some_bucket/object_key, where some_bucket has default server-side encryption with KMS enabled.
I'm receiving this error, which makes sense:
InternalError: (InternalError) (1871, u'S3 API returned error: Unknown:Unable to parse ExceptionName: KMS.NotFoundException Message: Invalid keyId')
How can I make this work, i.e. give Aurora access to the KMS key so that it can upload a file into S3?
As per the documentation
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html#AuroraMySQL.Integrating.SaveIntoS3.Statement
Compressed or encrypted files are not supported.
But you could create an exception in the bucket policy using a "NotResource" element for a particular suffix, select into that location, and from there trigger a Lambda to move the file to its actual path with encryption.
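A minimal sketch of such a carve-out, assuming the bucket enforces encryption through a bucket policy (rather than only default bucket encryption) and using a hypothetical .tmp suffix for the staging objects:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploadsExceptTmpSuffix",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::some_bucket/*.tmp",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
Aurora can then SELECT ... INTO OUTFILE S3 targeting a *.tmp key, and an S3 event notification can trigger the Lambda that copies the object to its final key with SSE-KMS and deletes the temporary one.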
Aurora MySQL currently supports this. Follow the official documentation above for adding an IAM role to your RDS cluster, and make sure the role has policies granting both S3 read/write and KMS encryption/decryption, e.g.:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "kms:ReEncrypt*",
        "kms:Encrypt",
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:GenerateDataKey*"
      ],
      "Resource": "arn:aws:kms:<region>:<account>:key/<key id>"
    }
  ]
}
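Once the role is attached to the cluster and referenced in the aurora_select_into_s3_role (or aws_default_s3_role) cluster parameter, the statement itself follows the documented form; my_table here is just a placeholder:
SELECT * FROM my_table
INTO OUTFILE S3 's3-<region>://<bucket-name>/object_key'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';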
I was trying to use the W3TC plugin for WordPress in order to use Amazon S3 as storage for my files.
I had no problem (well, after a little head-scratching anyway) creating a new IAM user and getting the plugin connected to S3. However, when I clicked on "Test S3 Upload" it came back with the following error:
Error: Error executing "ListBuckets" on "https://s3.eu-west-2.amazonaws.com/"; AWS HTTP error: Client error: `GET https://s3.eu-west-2.amazonaws.com/` resulted in a `403 Forbidden` response: AccessDeniedAccess Denied3G27GE (truncated...) AccessDenied (client): Access Denied - AccessDeniedAccess Denied
The IAM user had the following policy attached, which is the standard policy given in pretty much every example I could find online of how to set up a user that is allowed to upload to an S3 bucket:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
}
]
}```
It seems that the "Test S3 Upload" button was trying to search for my bucket rather than going directly to it.
Allowing the IAM user to list all of my buckets, at a level above the bucket itself, with the following policy solved the problem:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteObject",
"s3:Put*",
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage",
"arn:aws:s3:::com.fatpigeons.fatpigeons-object-storage/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}```
I am trying to get an existing .NET Functions app running locally. It was developed on Windows with Visual Studio, but I am on a Mac (M1 CPU) using VS Code. I am pretty new to .NET and am struggling to figure out what needs to be configured to get the project running.
I have added a launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to .NET Functions",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:azureFunctions.pickProcess}"
    }
  ]
}
and a local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
and there is a tasks.json already in the project:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "clean (functions)",
      "command": "dotnet",
      "args": [
        "clean",
        "/property:GenerateFullPaths=true",
        "/consoleloggerparameters:NoSummary"
      ],
      "type": "process",
      "problemMatcher": "$msCompile",
      "options": {
        "cwd": "${workspaceFolder}/Naboor.Statistics"
      }
    },
    {
      "label": "build (functions)",
      "command": "dotnet",
      "args": [
        "build",
        "/property:GenerateFullPaths=true",
        "/consoleloggerparameters:NoSummary"
      ],
      "type": "process",
      "dependsOn": "clean (functions)",
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "problemMatcher": "$msCompile",
      "options": {
        "cwd": "${workspaceFolder}/Naboor.Statistics"
      }
    },
    {
      "label": "clean release (functions)",
      "command": "dotnet",
      "args": [
        "clean",
        "--configuration",
        "Release",
        "/property:GenerateFullPaths=true",
        "/consoleloggerparameters:NoSummary"
      ],
      "type": "process",
      "problemMatcher": "$msCompile",
      "options": {
        "cwd": "${workspaceFolder}/Naboor.Statistics"
      }
    },
    {
      "label": "publish (functions)",
      "command": "dotnet",
      "args": [
        "publish",
        "--configuration",
        "Release",
        "/property:GenerateFullPaths=true",
        "/consoleloggerparameters:NoSummary"
      ],
      "type": "process",
      "dependsOn": "clean release (functions)",
      "problemMatcher": "$msCompile",
      "options": {
        "cwd": "${workspaceFolder}/Naboor.Statistics"
      }
    },
    {
      "type": "func",
      "dependsOn": "build (functions)",
      "options": {
        "cwd": "${workspaceFolder}/Naboor.Statistics/bin/Debug/net6.0"
      },
      "command": "host start",
      "isBackground": true,
      "problemMatcher": "$func-dotnet-watch"
    }
  ]
}
Should I be able to run this project from the command line somehow? Do I need to point to a task in tasks.json?
If I run it with F5 in VS Code, I get this error:
Executing task: func host start
Can't determine project language from files. Please use one of [--csharp, --javascript, --typescript, --java, --python, --powershell, --custom]
Can't determine project language from files. Please use one of [--csharp, --javascript, --typescript, --java, --python, --powershell, --custom]
Can't determine project language from files. Please use one of [--csharp, --javascript, --typescript, --java, --python, --powershell, --custom]
Azure Functions Core Tools
Core Tools Version: 4.0.4544 Commit hash: N/A (64-bit)
Function Runtime Version: 4.3.2.18186
Can't determine project language from files. Please use one of [--csharp, --javascript, --typescript, --java, --python, --powershell, --custom]
Can't determine project language from files. Please use one of [--csharp, --javascript, --typescript, --java, --python, --powershell, --custom]
[2022-05-25T12:24:12.674Z] Failed to initialize worker provider for: /opt/homebrew/Cellar/azure-functions-core-tools#4/4.0.4544/workers/python
[2022-05-25T12:24:12.682Z] Microsoft.Azure.WebJobs.Script: Architecture Arm64 is not supported for language python.
[2022-05-25T12:24:12.991Z] Failed to initialize worker provider for: /opt/homebrew/Cellar/azure-functions-core-tools#4/4.0.4544/workers/python
[2022-05-25T12:24:12.991Z] Microsoft.Azure.WebJobs.Script: Architecture Arm64 is not supported for language python.
[2022-05-25T12:24:13.118Z] A host error has occurred during startup operation 'a0f1f8a3-92f6-434a-9ab1-17055f0828f4'.
[2022-05-25T12:24:13.118Z] Microsoft.Azure.WebJobs.Script.WebHost: Secret initialization from Blob storage failed due to missing both an Azure Storage connection string and a SAS connection uri. For Blob Storage, please provide at least one of these. If you intend to use files for secrets, add an App Setting key 'AzureWebJobsSecretStorageType' with value 'Files'.
Value cannot be null. (Parameter 'provider')
The terminal process "/opt/homebrew/bin/zsh '-c', 'func host start'" terminated with exit code: 1.
I thought that was what the "FUNCTIONS_WORKER_RUNTIME": "dotnet" part of local.settings.json was for?
I am pretty new to this. Can anybody guide me onto the correct path?
Thank you
Søren
In order to configure the VS Code launch tasks etc., I would recommend installing the Azure Functions extension from the marketplace:
https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
Once that is installed you can open the project, and it will likely detect the Functions project and ask if you want to initialise it for use with VS Code. If it does not, you can use the option from the command palette.
You may also be able to just run func init against the project to initialise any files that may be missing.
Please ensure all files are tracked in git or backed up before making changes to the existing files.
Having worked with Azure Functions on both Windows and Mac (non-M1), I would highly recommend using dev containers for development. That way you don't need the SDK, runtime, or Functions Core Tools installed locally, and anyone using the project can just spin up the container and begin debugging without having to install a bunch of dependencies.
https://code.visualstudio.com/docs/remote/containers
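As a rough illustration, a minimal .devcontainer/devcontainer.json for this kind of .NET 6 Functions project could look like the sketch below; the image tag, the Node feature, and the npm-based Core Tools install are assumptions on my part, not something from the question (devcontainer.json permits comments):
{
  "name": "Azure Functions (.NET 6)",
  // Hypothetical base image choice; any image with the .NET 6 SDK would do.
  "image": "mcr.microsoft.com/devcontainers/dotnet:6.0",
  // Node is only included so Core Tools can be installed via npm below.
  "features": {
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-azuretools.vscode-azurefunctions",
        "ms-dotnettools.csharp"
      ]
    }
  },
  "postCreateCommand": "npm install -g azure-functions-core-tools@4 --unsafe-perm true"
}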
We have tried the same thing locally and were able to run it successfully.
I believe you are just missing some configuration on your machine.
Here are the steps:
Make sure the Azure Functions runtime, the .NET SDK, and a storage emulator are installed locally. The old Storage Emulator has been deprecated, so install Azurite instead (it is available as a VS Code extension).
In VS Code, install the Azure extension (which brings in all the tools), the C# extension (or whichever language you prefer), and the Azure Functions extension.
If you want to create a new project, press F1 and select "Create new Azure Function". Since you have an existing project, there is no need to point to the tasks.json file. Once the above is done, test your project by running the commands below:
dotnet build, and once the build succeeds,
func host start (whether the project is existing or new, don't run func init, as it will create one more .csproj file and the app may then fail to run)
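In short, from the project folder (Naboor.Statistics, going by the tasks.json above):
cd Naboor.Statistics
dotnet build
func host start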
[Snapshots for reference: the storage emulator started locally.]
For more information, please refer to the Microsoft documentation: a step-by-step tutorial for creating an Azure Function in VS Code.
Alternatively, if you want to learn how to create an Azure Function on macOS using Visual Studio, please refer to the Microsoft documentation for that as well.
How can I set up a dependsOn to depend on all copies of a certain resource? Hypothetically, I deploy 0..N number of websites and I need them all to complete before I deploy my traffic manager because the TM needs resource IDs.
Currently I'm only deploying 2, so I'm just enumerating two items in the dependsOn array, but if I decide I want to deploy more copies (as determined by the [variables('tdfConfiguration')] array), it would be nice for dependsOn to figure this out dynamically.
"apiVersion": "[variables('apiVersion')]",
"type": "Microsoft.Resources/deployments",
"name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[0]['roleName'], '-tmprofile')]",
"dependsOn": [
"[concat(resourceGroup().Name, '-', variables('tdfConfiguration')[0]['roleName'], '-website')]",
"[concat(resourceGroup().Name, '-', variables('tdfConfiguration')[1]['roleName'], '-website')]"
],
Fairly easy: use the copy name. Suppose you have a resource like so:
{
  "name": "xxx",
  "type": "zzz",
  ...
  "copy": {
    "name": "myCopy",
    "count": "<0..N>"
  }
}
You can use the following dependsOn to depend on all copies:
"dependsOn": [ "myCopy" ]
Reading: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#depend-on-resources-in-a-loop
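Applied to the question's template, a sketch could look like the following; "websiteCopy" is a made-up copy name, and the resource bodies are elided:
{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Resources/deployments",
  "name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[copyIndex()]['roleName'], '-website')]",
  "copy": {
    "name": "websiteCopy",
    "count": "[length(variables('tdfConfiguration'))]"
  },
  ...
},
{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Resources/deployments",
  "name": "[concat(resourceGroup().name, '-', variables('tdfConfiguration')[0]['roleName'], '-tmprofile')]",
  "dependsOn": [ "websiteCopy" ],
  ...
}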
Is it possible to make Riak CS apply an ACL with public access by default when a new bucket, or a file in some bucket, is created? I mean, I want to be able to put files simply using, for example,
s3cmd put file.jpg s3://my-bucket
and have file.jpg in my-bucket be publicly accessible.
Do you mean "objects are anonymously readable" by "public access"? I will continue with the assumption that you do.
Because an ACL is per bucket or per object, a bucket policy will be more suitable for this use case. After creating the bucket my-bucket, one can set a particular bucket policy through the PUT Bucket Policy API [1]. An example policy JSON allowing public access to the bucket looks like this:
{
  "Version": "2008-10-17",
  "Id": "Policy1355283297687",
  "Statement": [
    {
      "Sid": "Stmt1355283289",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Principal": { "AWS": ["*"] }
    }
  ]
}
Then you can PUT it at the proper URL, as described in the API doc [1], by any means; a simple way is to use s3cmd:
s3cmd setpolicy </path/to/above/json/as/file> s3://my-bucket
After that, each object written under the bucket can be accessed by any user, including anonymous ones.
Unfortunately there is no way to apply such a bucket policy at bucket creation but, I hope, it is not difficult to write a wrapper script that creates a bucket and applies the policy to it, as sketched below.
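A minimal sketch of such a wrapper, assuming the policy JSON above is saved as public-policy.json:
#!/bin/sh
# Usage: ./make-public-bucket.sh my-bucket
# Create the bucket, then immediately apply the public-read policy to it.
s3cmd mb "s3://$1"
s3cmd setpolicy public-policy.json "s3://$1"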
[1] http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTpolicy.html
Is it possible to create an AWS IAM policy that provides access to the DynamoDB console only for specific tables? I have tried:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    }
  ]
}
but for a user with this policy attached, the DynamoDB tables list says Not Authorized (as it does when no policy is attached).
Setting "Resource" to "*" and adding a new statement like the one below lets the user perform <other actions> on FooTable and BarTable, but they can also see all other tables in the tables list.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt0000000001",
      "Action": [
        <other actions>
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:<region>:<account>:table/FooTable",
        "arn:aws:dynamodb:<region>:<account>:table/BarTable"
      ]
    },
    {
      "Sid": "Stmt0000000002",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:ListTables"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Sorry for the bad news, but the AWS Management Console requires both DescribeTable and ListTables permissions against the whole of DynamoDB in order to operate correctly.
However, there is a small workaround: you can give console users a URL that takes them directly to the table, which works fine for viewing and adding items, etc.
Just copy the URL from a user that has the correct permissions, e.g.:
https://REGION.console.aws.amazon.com/dynamodb/home?region=REGION#explore:name=TABLE-NAME
I found that, apart from DynamoDB, users also need wildcard permissions for CloudWatch and SNS (consider "Example 5: Set Up Permissions Policies for Separate Test and Production Environments").
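For illustration, such a statement could be appended to the table-scoped policy above; the wildcard scope reflects this answer's observation rather than an official AWS recommendation:
{
  "Sid": "Stmt0000000003",
  "Effect": "Allow",
  "Action": [
    "cloudwatch:*",
    "sns:*"
  ],
  "Resource": "*"
}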
You may also attach the predefined AmazonDynamoDBReadOnlyAccess policy.