Duplicate permission setting instructions with AWS CodeDeploy

I'm trying to use CodeDeploy to deploy an application onto EC2, but I am facing the following error:
Duplicate permission setting instructions for /var/www/html/storage/framework
My appspec.yml is below:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
permissions:
  - object: /var/www/html
    owner: apache
    group: apache
    mode: 644
    except:
      - storage/*
    type:
      - directory
  - object: /var/www/html/storage
    owner: apache
    group: apache
    mode: 777
    type:
      - directory
I've tried various formats for except, including:
Explicitly listing relative paths:
except:
  - storage
  - storage/app
  - storage/logs
  - storage/framework
  - storage/framework/views
  - storage/framework/cache
  - storage/framework/sessions
Using a wildcard:
except:
  - storage/*
Using just the folder name:
except:
  - storage
None of which seem to resolve the issue.

The except option needs to be specified as an inline array (use [], not a nested list).
You can see this in the permissions examples in the AppSpec reference guide (scroll down a little): http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html#app-spec-ref-permissions
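Applied to the appspec above, that would look something like this (a sketch only, reusing the paths and modes from the question):
permissions:
  - object: /var/www/html
    owner: apache
    group: apache
    mode: 644
    except: [storage/*]
    type:
      - directory
  - object: /var/www/html/storage
    owner: apache
    group: apache
    mode: 777
    type:
      - directory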

Related

Salt External Pillar Issues

I am trying to configure an external pillar in GitHub, but no matter what I do I cannot get the minions to successfully read top.sls. Below is my ext_pillar and pillar_roots config:
pillar_roots:
  base:
    - /srv/pillar

fileserver_backend:
  - gitfs
  - roots

gitfs_update_interval: 60
gitfs_base: main

gitfs_remotes:
  - https://gituser:gittoken@github.com/gitaccount/saltstack.git:
    - mountpoint: salt://

ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git
I have the following in the root of my saltpillar repo:
top.sls:
base:
  '*':
    - data
data.sls:
info: some test data from remote pillar
Repos are accessible with the URIs provided. When I run salt '*' saltutil.refresh_pillar and then salt '*' pillar.items I get no results. However, I can put top.sls and data.sls directly into /srv/pillar and it works. I put the master in debug mode and don't see any errors running the commands. Any help is appreciated.
Does the following ext_pillar configuration fix your issue? I'm assuming the top.sls you posted is still in the main branch of your git repo.
ext_pillar:
  - git:
    - main https://gituser:gittoken@github.com/gitaccount/saltpillar.git:
      - env: base
git_pillar exposes each branch as a pillar environment of the same name, so your top.sls must reference your actual branch name, or you can add the env option (as above) to map the branch to a different environment.
https://docs.saltproject.io/en/latest/ref/pillar/all/salt.pillar.git_pillar.html
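Alternatively, if you keep your original ext_pillar entry without the env remapping, the top.sls in your main branch would need to target the main environment instead of base (a sketch):
main:
  '*':
    - data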

Platform.sh deployment aborting: "Missing encryption key to decrypt file with. Ask your team for your master key ... put it in the ENV['RAILS_MASTER_KEY']"

ERROR MESSAGE:
W: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
When deploying my project on Platform.sh, the operation failed because of the missing decryption key. From my Google search, I understand the key has to be provided via config/master.key or ENV['RAILS_MASTER_KEY'].
My Ubuntu .bashrc:
export RAILS_MASTER_KEY='ad5e30979672cdcc2dd4f4381704292a'
Rails project configuration for Platform.sh
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'

# The size of the persistent disk of the application (in MB).
disk: 5120

mounts:
    'web/uploads':
        source: local
        source_path: uploads

relationships:
    postgresdatabase: 'dbpostgres:postgresql'

hooks:
    build: |
        gem install bundler:2.2.5
        bundle install
        RAILS_ENV=production bundle exec rake assets:precompile
    deploy: |
        RACK_ENV=production bundle exec rake db:migrate

web:
    upstream:
        socket_family: "unix"
    commands:
        start: "unicorn -l $SOCKET -E production config.ru"
    locations:
        '/':
            root: "public"
            passthru: true
            expires: "24h"
            allow: true
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"

"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
services.yaml
# The name given to the PostgreSQL service (lowercase alphanumeric only).
dbpostgres:
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 5120

db:
    type: postgresql:13
    disk: 5120
    configuration:
        extensions:
            - pgcrypto
            - plpgsql
            - uuid-ossp
environments/production.rb
config.require_master_key = true
I suspect that the master.key is not accessible during deployment, and I don't understand how to solve the problem.
From what I understand, your export is in your .bashrc on your local machine, so it won't be accessible when deploying on Platform.sh. (The logs you see in your terminal when building and deploying are streamed; none of this happens on your machine.)
You need to make the RAILS_MASTER_KEY accessible on Platform.sh. To do so, this variable needs to be declared in your project.
Given the nature of the variable, I would suggest using the Platform.sh CLI to create it.
If this variable should be accessible on all your environments, you can make it a project level variable.
$ platform variable:create --level project --sensitive true env:RAILS_MASTER_KEY <your_key>
If it should only be accessible for a specific environment, then you need an environment level variable:
$ platform variable:create --level environment --environment '<your_environment>' --inheritable false --sensitive true env:RAILS_MASTER_KEY '<your_key>'
The env: prefix in the variable name tells Platform.sh to expose the variable with the rest of the environment variables. More information about this can be found in the variables prefix section of the environment variables documentation page.
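You can verify the difference from an SSH session into the app container once the variable exists (a sketch):
# exposed as a plain environment variable thanks to the env: prefix
echo "$RAILS_MASTER_KEY"
# variables without the prefix only appear inside this base64-encoded JSON blob,
# which Rails won't read on its own
echo "$PLATFORM_VARIABLES" | base64 --decode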
You could do the same via the management console if you prefer to avoid the command line.
Environment variables can also be configured directly in your .platform.app.yaml file, as described here. Keep in mind that since this file is versioned, you should not use this method for sensitive information such as encryption keys, API keys, and other kinds of secrets.
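For non-sensitive values, that looks something like the following in .platform.app.yaml (a sketch; the variable name is only an example):
variables:
    env:
        RAILS_SERVE_STATIC_FILES: "true"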
The RAILS_MASTER_KEY environment variable should now be accessible during your Platform.sh deployment.

NGINX error when deploying static website with Concourse CI

I encounter an error when I try to deploy a static website to Pivotal Web Services with Concourse CI. I want to push the site using the staticfile_buildpack. The index.html is placed in the root folder. When I push the code from the command line directly to Pivotal Web Services using cf push, everything works fine.
When I use the Concourse pipeline, the build terminates successfully, but I get an nginx 403 Forbidden error when trying to access the website. I tried the following manifest with the following pipeline (see below). When using Concourse CI the container is created successfully, the buildpack is used, nginx is installed and the droplet is uploaded. The app itself starts successfully.
The Cloud Foundry Logs show the following error:
2017/09/05 08:42:54 [error] 70#0: *3 directory index of "/home/vcap/app/public/" is forbidden, client: <ip>, server: localhost, request: "GET / HTTP/1.1", host: "agencydemo.cfapps.io"
manifest.yml
---
applications:
  - name: agencyDemo
    memory: 64M
    buildpack: staticfile_buildpack
    host: agencyDemo
pipeline.yml
resources:
  - name: app_sources
    type: git
    source:
      uri: https://github.com/smichard/CloudFoundryDemo
      branch: master
  - name: staging_CloudFoundry
    type: cf
    source:
      api: {{pws_api}}
      username: {{pws_user}}
      password: {{pws_password}}
      organization: {{pws_org}}
      space: {{pws_space}}
      skip_cert_check: false

jobs:
  - name: deploy-website
    public: true
    serial: true
    plan:
      - get: app_sources
        trigger: true
      - put: staging_CloudFoundry
        params:
          manifest: app_sources/manifest.yml
The source code can be found on GitHub
An nginx 403 Forbidden error happens mainly when index.html is not found. Suggested steps:
Check your buildpack (note that the singular buildpack attribute has since been replaced by buildpacks in the manifest file).
Check your push command and your dist folder:
cf push -p ./dist/ -f manifest-{your_environment}.yml --no-start (if your index.html is directly under the dist folder)
or
cf push -p ./dist/{your_app_name} -f manifest-{your_environment}.yml --no-start (if your index.html is under the dist/{your_app_name} folder)
You must ensure that index.html and the other Angular static files are directly present inside the public/ folder and not in something like public/your-app-name/.
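If the files have to stay in a subfolder instead, the staticfile_buildpack can also be pointed at it with a Staticfile in the app root (a sketch, assuming the build output lives in dist/):
# Staticfile (read by the Cloud Foundry staticfile_buildpack)
root: dist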
Another solution is to fix the path attribute in your manifest.yml as follows:
---
applications:
  - name: agencyDemo
    memory: 64M
    buildpack: staticfile_buildpack
    host: agencyDemo
    path: ./dist/your-app-name

Symfony, Liip Imagine bundle not working on server in prod environment

I have a project where I keep my uploaded images in src/My/Bundle/Resources/uploads/images/full and use the Twig filter imagine_filter to dynamically create thumbnails.
On my local machine it works flawlessly, and so it does on my server, but there only under the dev environment. When I delete the previously created thumbnails (leaving only the full directory), clear the prod cache and load any web page, the images are not created, their URL always remains under media/cache, and the logger gives me this request.ERROR:
"No route found for "GET /uploads/images/avatar/354026c94b773b77ca945b4a6323e15c84102f6b.jpg"" at /<path>/app/cache/prod/classes.php line 1964 {"exception":"[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException: No route found for \"GET /uploads/images/avatar/354026c94b773b77ca945b4a6323e15c84102f6b.jpg\" at /<path>/app/cache/prod/classes.php:1964, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException: at /<path>/app/cache/prod/appProdUrlMatcher.php:1816
Some additional info:
I have symlinked src/My/Bundle/Resources/uploads to web/uploads
my config is (thumbnail_min is a custom filter):
liip_imagine:
    loaders:
        default:
            filesystem:
                data_root: %kernel.root_dir%/../web/uploads/images/full
    resolvers:
        default:
            web_path:
                web_root: %kernel.root_dir%/../web
                cache_prefix: /uploads/images
    cache: default
    filter_sets:
        avatar:
            quality: 90
            filters:
                thumbnail_min: { size: 50, mode: inset }
        ....
        full:
            quality: 100
Permissions are always at least group rw (that's what is needed on the server).
I just experienced a similar problem on my local machine, though still in the dev environment.
For me, group write permission was missing on the default media/cache folder, which stayed empty after a liip:imagine:cache:remove and page reload. Thus the images were linking to an empty folder.
To fix it, I first made sure the group was set to Apache's _www (macOS) / www-data (Linux), e.g.
sudo chgrp -R www-data web/media/cache
Then made sure the folder was set to readable, writable and executable, e.g.
sudo chmod -R g+rwx web/media/cache
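To confirm afterwards that the web server user can actually write there, a quick test (a sketch, assuming the www-data group from above):
# should succeed silently if group permissions are correct
sudo -u www-data touch web/media/cache/.write_test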
Hope this helps...

Redmine under sub-directory on nginx

My vhost configuration: http://pastebin.com/ZyXUmQtx (only one domain on this installation)
I've been racking my brain and Google for a solution for the last two days and can't quite seem to come up with one that works.
My setup (from the above configuration):
IP.Board 3.4 installation in %root_domain%/forums/
IP.Content 2.3 installation in %root_domain%/forums/ (with external access via index.php on the top level)
Redmine 2.2.2 install at /usr/share/redmine (this is working because Thin is running and there are no errors in either log file)
Stale phpMyAdmin configuration at /usr/share/phpmyadmin/ that also doesn't load HTML/CSS properly
Symlink from /srv/www/tiberian-genesis.net/public_html/redmine to /usr/share/redmine/public
I'm trying to get Redmine set up to run under %root_domain%/redmine/, but I keep getting a 404 page from my IP.Content installation.
Accessing it takes me to the URL /redmine/login?back_url=http://redmine_thin_servers/redmine/ (which, now that I notice it, seems to leak my upstream name into the redirect...)
In case someone requests the Thin configuration file:
---
pid: /var/run/thin/redmine.pid
group: tgmod
prefix: /redmine
timeout: 30
log: /var/log/thin/redmine.log
max_conns: 1024
require: []
max_persistent_conns: 512
environment: production
user: tgmod
servers: 1
daemonize: true
chdir: /usr/share/redmine
socket: /var/run/thin/redmine.sock
I'm out of ideas here.
Thanks in advance!
I just ended up setting it up on a sub-domain. I wanted to proxy it from a sub-directory, but my main website's rules kept interfering. A sketch of the sub-domain approach is below.
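For anyone attempting the same, a minimal sub-domain vhost could look like this (a sketch only: the socket path and upstream name are taken from the question, the server_name is a placeholder, and prefix: /redmine would need to be removed from the Thin config since the app now lives at the root):
upstream redmine_thin_servers {
    server unix:/var/run/thin/redmine.sock;
}

server {
    listen 80;
    server_name redmine.example.com;
    root /usr/share/redmine/public;

    # serve Redmine's static assets directly, hand everything else to Thin
    location / {
        try_files $uri @redmine;
    }

    location @redmine {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://redmine_thin_servers;
    }
}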
