Serve assets from s3 bucket in Rails 4.2.1 - css

I am just wondering if the configuration setup has changed at all since Rails 4 with regards to setting up asset_sync and serving my assets from an S3 bucket.
I cannot seem to serve my CSS or JS assets (images such as JPGs and PNGs are fine). I can upload everything to my S3 bucket.
I am trying to get this working in my development environment before I push to production.
My asset_sync.rb initializer is set up like so:
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider = 'AWS'
    config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
    config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    config.fog_directory = ENV['FOG_DIRECTORY']
    config.existing_remote_files = "delete"
    config.gzip_compression = true
    config.manifest = true
    config.custom_headers = { '.*' => { cache_control: 'max-age=315576000', expires: 1.year.from_now.httpdate } }
  end
end
So I know that this part is OK, as assets are uploaded to the bucket, but when I try to render my page no CSS/JS or images are shown. I have set this in my development.rb:
config.action_controller.asset_host = "https://#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
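For context, with asset_host set, the Rails asset helpers prefix every asset path with that host, so each request goes straight to the bucket. A minimal Ruby sketch of the resulting URL shape (the bucket name and fingerprint below are made up for illustration):

```ruby
# Sketch of how asset_host changes the URLs that helpers like
# stylesheet_link_tag generate. Bucket name and digest are hypothetical.
fog_directory = "my-bucket"
asset_host = "https://#{fog_directory}.s3.amazonaws.com"
asset_path = "/assets/application-1a2b3c4d.css"
asset_url = asset_host + asset_path
puts asset_url
```

If URLs of this shape return 403, the failure is happening at S3, not in the Rails asset pipeline.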
All I get back at the moment on each asset is a 403 Forbidden error.
This is my IAM policy; maybe someone can spot something in here that isn't correct?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
I have also added a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
What I am looking for is for someone to show how they set up their Rails app with asset_sync, please, to see if I have missed anything.

Related

Using Amazon Appflow in AWS to move data

I work with AWS and I mainly use Terraform for a lot of things.
I want to implement Amazon AppFlow to be able to move data from Salesforce to an S3 bucket.
AppFlow is a wizard and needs to be set up step by step.
I assume you cannot use Terraform to implement this, right? Is this thinking correct?
Yes, you can use Terraform to deploy AppFlow resources. There are two providers you can use: the AWS provider or the AWS Cloud Control provider. I have had more luck with AWS Cloud Control, as it is designed to support new resources sooner. It supports connectors, profiles, and flows, as well as custom connectors. The AWS provider only supports connectors and profiles (no flows), and I have found it doesn't have good support for custom connectors yet.
Right now I'd recommend Cloud Control.
Here is a good introduction:
https://www.hashicorp.com/resources/using-the-terraform-aws-cloud-control-provider
And the AWS Cloud Control provider:
https://registry.terraform.io/providers/hashicorp/awscc/latest/docs/resources/appflow_connector
And here are the AWS provider AppFlow resources:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appflow_connector_profile
resource "aws_s3_bucket" "example_source" {
  # S3 bucket names cannot contain underscores, so the names use hyphens
  bucket = "example-source"
}

resource "aws_s3_bucket_policy" "example_source" {
  bucket = aws_s3_bucket.example_source.id
  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Sid": "AllowAppFlowSourceActions",
      "Principal": {
        "Service": "appflow.amazonaws.com"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-source",
        "arn:aws:s3:::example-source/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

resource "aws_s3_object" "example" {
  bucket = aws_s3_bucket.example_source.id
  key    = "example_source.csv"
  source = "example_source.csv"
}

resource "aws_s3_bucket" "example_destination" {
  bucket = "example-destination"
}

resource "aws_s3_bucket_policy" "example_destination" {
  bucket = aws_s3_bucket.example_destination.id
  policy = <<EOF
{
  "Statement": [
    {
      "Effect": "Allow",
      "Sid": "AllowAppFlowDestinationActions",
      "Principal": {
        "Service": "appflow.amazonaws.com"
      },
      "Action": [
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketAcl",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::example-destination",
        "arn:aws:s3:::example-destination/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

resource "aws_appflow_flow" "example" {
  name = "example"

  source_flow_config {
    connector_type = "S3"
    source_connector_properties {
      s3 {
        bucket_name   = aws_s3_bucket_policy.example_source.bucket
        bucket_prefix = "example"
      }
    }
  }

  destination_flow_config {
    connector_type = "S3"
    destination_connector_properties {
      s3 {
        bucket_name = aws_s3_bucket_policy.example_destination.bucket
        s3_output_format_config {
          prefix_config {
            prefix_type = "PATH"
          }
        }
      }
    }
  }

  task {
    source_fields     = ["exampleField"]
    destination_field = "exampleField"
    task_type         = "Map"
    connector_operator {
      s3 = "NO_OP"
    }
  }

  trigger_config {
    trigger_type = "OnDemand"
  }
}

Error "Access Denied" while uploading file to s3 bucket asp.net core for UploadAsync method

My bucket policy is below; I have shown the sensitive values as <HIDDEN>.
{
  "Version": "2012-10-17",
  "Id": "****",
  "Statement": [
    {
      "Sid": "<HIDDEN>",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<HIDDEN>"
      },
      "Action": "s3:*",
      "Resource": "<HIDDEN>"
    }
  ]
}
Your AWS credentials file is stored in ~/.aws/credentials on Linux or %UserProfile%\.aws\credentials on Windows. By default, the .NET SDK retrieves your credentials from the default profile in this file.
To set up a default profile, write your access key and secret access key in this file as follows:
[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
See Configuration and Credential File Settings for more details.
If you don't have those credentials, you can go to your IAM user in the AWS console and download them as a .csv file, for example.

Cloudfront / S3 Bucket with W3TC

I have a setup with Amazon CloudFront / S3 and the WordPress W3TC plugin. The bucket has no ACL or public read settings, but it has a policy:
{
  "Id": "Policy1502130814505",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1502130814505",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::xxxxxxx/*",
      "Principal": "*"
    }
  ]
}
In IAM the user has full CloudFront and S3 access.
In CloudFront the origin is not restricted by a user.
The connection with W3TC works fine. So far I am using only the CloudFront URL, and uploading media etc. works with no problem.
But the CloudFront URL gives only
AccessDenied
so the WordPress site cannot fetch any media, styles, or theme files from CloudFront.
Any help is highly appreciated!
Thanks

Bitbucket integration with AWS CodeDeploy Roles Trust Relationship Error

I am trying to deploy my sampleApplication code via AWS CodeDeploy for Bitbucket.
I have used this tutorial and followed all the steps. The trust relationship for the role is like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "connectionId"
        }
      }
    }
  ]
}
While creating a deployment group, I get a "can't assume role" error when I select the above role as the Service role ARN. I then tried this trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "codedeploy.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
But when I use the above trust relationship I can create the deployment group; the AWS integration on Bitbucket then doesn't work, however, and throws an error asking me to add sufficient permissions.
Neither of your posted roles grants permission to CodeCommit or S3.
As per the tutorial you linked, you must provide access to CodeCommit and S3. These are likely the permissions you are missing:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:PutObject"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["codedeploy:*"],
      "Resource": "*"
    }
  ]
}

Libres3 Access Denied from Meteor Slingshot

I have a 3-node cluster with SX running on Ubuntu 14.04.5 LTS on ports 80 and 443, and LibreS3 running on the same servers on ports 8008 and 8443.
libres3 1.3-1-1~wheezy
sx 2.1-1-1~wheezy
s3cmd info s3://test-dev
s3://test-dev/ (bucket):
   Location:  us-east-1
   Payer:     BucketOwner
   Expiration Rule: none
   policy:    {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::test-dev/"
       }
     ]
   }
   cors:      OptionPUTPOSTGETHEAD3000*
   ACL:       admin: FULL_CONTROL
   ACL:       test: FULL_CONTROL
I'm trying to put files from a Meteor application using the Slingshot package (https://github.com/CulturalMe/meteor-slingshot) but am getting "Access Denied":
Sep 6 11:10:46: main: Replying with code 403: Access Denied libres3_1ff0aa644987498111ea4c91bca7b532_13817_587_1473174646.21 AccessDenied
I can use S3 Browser and Cloudberry Explorer with the same credentials and access the buckets no problem.
Any thoughts or directions on how to solve putting files from the web?
Thanks,
-Matt
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-dev/*"
    }
  ]
}
You need to add "*" after "test-dev/" in the Resource ARN, so it reads "arn:aws:s3:::test-dev/*".
