I have a setup with Amazon CloudFront / S3 and WordPress with W3TC. The bucket has no public ACL or read settings, but it has this policy:
{
  "Id": "Policy1502130814505",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1502130814505",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::xxxxxxx/*",
      "Principal": "*"
    }
  ]
}
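As a quick local sanity check (a sketch only; "xxxxxxx" is the placeholder bucket name from above), the policy document can at least be validated as JSON before pasting it into the console:

```python
import json

# Placeholder policy mirroring the one above; "xxxxxxx" stands in for the real bucket.
policy = """
{
  "Id": "Policy1502130814505",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1502130814505",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::xxxxxxx/*",
      "Principal": "*"
    }
  ]
}
"""

doc = json.loads(policy)  # raises ValueError if the JSON is malformed
print(doc["Statement"][0]["Action"])
```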
In IAM, the user has full CloudFront and S3 access.
In CloudFront, the origin is not restricted to a particular user.
The connection with W3TC works fine; so far I am using only the CloudFront URL, and uploading media etc. works without problems.
But the CloudFront URL gives only
AccessDenied
so the WordPress site cannot fetch any media, styles, or theme files from CloudFront.
Any help is highly appreciated!
Thanks
I have a couple of Firebase-hosted sites pointing to the same directory.
For this particular instance, I would like a specific site to use a different default index than the public folder's index.html file.
I've set the sub-site's Firebase Hosting deployment to something similar to this:
{
  "target": "subsite",
  "public": "hosting/public_mysite",
  "headers": [
    {
      "source": "/",
      "headers": [
        {
          "key": "Cache-Control",
          "value": "no-cache, no-store, must-revalidate"
        }
      ]
    }
  ],
  "ignore": [
    "firebase.json",
    "**/node_modules/**"
  ],
  "rewrites": [
    {
      "source": "/about",
      "destination": "/subsite/about.html"
    },
    {
      "source": "/reviews",
      "destination": "/subsite/reviews.html"
    },
    {
      "source": "**",
      "destination": "/subsite/home.html"
    }
  ]
}
On the sub-site, the URLs /about and /reviews do indeed load the alternative pages listed in the rewrites section.
The last rule, "source": "**", seems to be completely ignored, and Firebase loads /index.html anyway.
Why is it not loading /subsite/home.html instead?
The file is definitely there with the other ones.
See Priority order of Hosting responses:
1. Reserved namespaces that begin with a /__/* path segment
2. Configured redirects
3. Exact-match static content
4. Configured rewrites
5. Custom 404 page
6. Default 404 page
Visiting / will prioritize the root index.html before matching against any rewrites. If you want to render a different resource, you'll have to deploy without the root index.html.
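The effect of that ordering can be sketched with a toy resolver (illustrative only; the file set and matching logic below are simplified stand-ins for Firebase Hosting's real behavior):

```python
# Toy model of the Hosting priority order for this config (not the real resolver).
static_files = {"/index.html", "/subsite/home.html",
                "/subsite/about.html", "/subsite/reviews.html"}
rewrites = [("/about", "/subsite/about.html"),
            ("/reviews", "/subsite/reviews.html"),
            ("**", "/subsite/home.html")]

def resolve(path):
    # Exact-match static content is checked before rewrites;
    # a request for "/" matches the deployed root index.html.
    candidate = path + "index.html" if path.endswith("/") else path
    if candidate in static_files:
        return candidate
    # Only when no static file matches are the rewrites consulted, in order.
    for source, dest in rewrites:
        if source == path or source == "**":
            return dest
    return "/404.html"

print(resolve("/"))       # the root index.html shadows the ** rewrite
print(resolve("/about"))  # no static match, so the first rewrite applies
```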
My bucket policy is below; the sensitive values are shown as <HIDDEN>.
{
  "Version": "2012-10-17",
  "Id": "****",
  "Statement": [
    {
      "Sid": "<HIDDEN>",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<HIDDEN>"
      },
      "Action": "s3:*",
      "Resource": "<HIDDEN>"
    }
  ]
}
Your AWS credentials file is stored in ~/.aws/credentials on Linux, or %UserProfile%\.aws\credentials on Windows. By default, the .NET SDK retrieves your credentials from this file's default profile.
To set up a default profile, write your access key and secret access key in this file as follows:
[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
See Configuration and Credential File Settings for more details.
If you don't have those credentials, you can go to your IAM user in the AWS console and download them, for example as a .csv file.
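To confirm the file is laid out the way the SDK expects, the profile can be parsed with an INI reader. A minimal sketch, using a sample file written to a temporary path (the key values are placeholders, not real credentials):

```python
import configparser
import os
import tempfile

# Sample credentials file with a [default] profile (placeholder values).
sample = (
    "[default]\n"
    "aws_access_key_id=AKIAEXAMPLE\n"
    "aws_secret_access_key=secretEXAMPLE\n"
)
path = os.path.join(tempfile.mkdtemp(), "credentials")
with open(path, "w") as f:
    f.write(sample)

# The credentials file uses INI syntax, so configparser can read it.
config = configparser.ConfigParser()
config.read(path)
print("default" in config)                     # the profile section exists
print(config["default"]["aws_access_key_id"])  # the stored access key
```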
I am trying to deploy my sampleApplication code via AWS CodeDeploy from Bitbucket.
I have used this tutorial and followed all the steps. The trust relationship for the role is this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "connectionId"
        }
      }
    }
  ]
}
While creating a deployment group, I get a "can't assume role" error when I select the above role as the Service role ARN.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "codedeploy.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
But when I use the above trust relationship instead, I can create the deployment group; however, the AWS integration on Bitbucket then stops working and throws an error asking for sufficient permissions.
Neither of the roles you posted grants permissions for S3 or CodeDeploy.
As per the tutorial you linked, you must provide access to S3 and CodeDeploy. These are likely the permissions you are missing:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:PutObject"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["codedeploy:*"],
      "Resource": "*"
    }
  ]
}
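Note also that a trust policy's Statement array may contain more than one entry, so the two trust relationships from the question could be combined rather than swapped, keeping both the Bitbucket external-ID condition and the CodeDeploy service principal. A sketch, with accountId and connectionId remaining the placeholders used above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "connectionId"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "codedeploy.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```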
I have a 3-node cluster with SX running on Ubuntu 14.04.5 LTS on ports 80 and 443, and LibreS3 running on the same servers on ports 8008 and 8443.
libres3 1.3-1-1~wheezy
sx 2.1-1-1~wheezy
s3cmd info s3://test-dev

s3://test-dev/ (bucket):
   Location:        us-east-1
   Payer:           BucketOwner
   Expiration Rule: none
   policy:          { "Version": "2012-10-17", "Statement": [
                      { "Effect": "Allow", "Principal": "",
                        "Action": "s3:GetObject",
                        "Resource": "arn:aws:s3:::test-dev/" } ] }
   cors:            CORS rule allowing PUT, POST, GET, HEAD (MaxAgeSeconds 3000, AllowedOrigin *)
   ACL:             admin: FULL_CONTROL
   ACL:             test: FULL_CONTROL
I'm trying to put files from a Meteor application using the Slingshot package: https://github.com/CulturalMe/meteor-slingshot
but I'm getting
'Access Denied':
"Sep 6 11:10:46: main: Replying with code 403: Access Denied (request ID libres3_1ff0aa644987498111ea4c91bca7b532_13817_587_1473174646.21, code AccessDenied)"
I can use S3 Browser and Cloudberry Explorer with the same credentials and access the buckets no problem.
Any thoughts or directions on how to solve putting files from the web?
Thanks,
-Matt
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-dev/*"
    }
  ]
}
You need to add "*" after "test-dev/" in the Resource, so the policy applies to the objects inside the bucket rather than to nothing.
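The difference between the two Resource strings is easy to see with Python's fnmatch, whose wildcard semantics are similar to IAM resource matching (an illustration only, not the actual policy evaluator):

```python
from fnmatch import fnmatchcase

# An object ARN such as a policy evaluator would compare against the Resource.
obj = "arn:aws:s3:::test-dev/photo.jpg"

# Without the trailing "*", the pattern matches no object key at all.
print(fnmatchcase(obj, "arn:aws:s3:::test-dev/"))

# With "test-dev/*", every object in the bucket matches.
print(fnmatchcase(obj, "arn:aws:s3:::test-dev/*"))
```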
I am just wondering whether the configuration for asset_sync has changed at all since Rails 4 with regard to serving my assets from an S3 bucket.
I cannot seem to serve my CSS or JS assets, though images (jpg, png) are OK. I can upload everything to my S3 bucket.
I am trying to get this working in my development environment before I push to production.
So my asset_sync.rb initializer is set up like so:
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider = 'AWS'
    config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
    config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    config.fog_directory = ENV['FOG_DIRECTORY']
    config.existing_remote_files = "delete"
    config.gzip_compression = true
    config.manifest = true
    config.custom_headers = { '.*' => { cache_control: 'max-age=315576000', expires: 1.year.from_now.httpdate } }
  end
end
So I know that this part is OK, as assets are uploaded to the bucket, but when I try to render my page, no CSS/JS or images are shown. I have set this in my development.rb:
config.action_controller.asset_host = "https://#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
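For reference, that asset_host setting makes Rails prefix asset paths with the bucket's virtual-hosted URL; a rough illustration of the resulting URL shape (with "mybucket" as a made-up stand-in for FOG_DIRECTORY, and a hypothetical fingerprinted filename):

```python
# "mybucket" is a placeholder for the real ENV['FOG_DIRECTORY'] value.
fog_directory = "mybucket"
asset_host = f"https://{fog_directory}.s3.amazonaws.com"

# Rails emits asset URLs of roughly this shape for a compiled stylesheet.
print(asset_host + "/assets/application-0123abcd.css")
```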
All I get back at the moment for each asset is a 403 Forbidden error.
This is my IAM policy; maybe someone can spot something in here that isn't correct?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
I have also added a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
What I am looking for is for someone to show how they set up their Rails app with asset_sync, to see if I have missed anything.