The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml" - aws-code-deploy

Based on the error message below, CodeDeploy copies my archive to a temporary location; after deployment I can locate my unpacked archive under the deployment-archive folder.
The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at AWS website
But the agent looks for appspec.yml immediately inside the deployment-archive folder, whereas mine is located at deployment-archive/archive/appspec.yml.
The appspec.yml and my WAR file are all zipped together in S3.
How do I resolve this issue?

I also encountered this because I found that CodeDeploy was filling up disk space with logs/deploy info under "/opt/codedeploy-agent/deployment-root/#####yourNumberWillBeDifferent#####". I had deleted all the directories in this location, and on the very next deploy I hit this issue. I found that if you keep the latest directory in this location, you will not get the error. What I wound up doing is having a script run every hour that deletes all the directories in this location except the latest one.
You probably deleted the folder "d-WZDFGDBHU". CodeDeploy looks at the logs/info from the very last deployment it performed on the instance and reuses some information from there, but it could not find it. That is why it mentioned -
'''
The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive/appspec.yml"
'''
d-WZDFGDBHU is the deployment ID of the last deployment that was performed prior to the one you just tried.
I don't know why CodeDeploy needs to refer to the last deployment, but it clearly does!
Note that this only happens on in-place deployments, not blue/green.
I also discovered this -
CodeDeploy keeps a number of the last deployments to allow you to roll back to previous versions. By default it keeps the last 5, but this is configurable using the CodeDeploy agent config:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-agent-configuration.html
The setting that controls this is :max_revisions:
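As a rough sketch (the config file path and default value here are assumptions based on the Amazon Linux agent install; adjust for your distribution), lowering :max_revisions: on the instance looks something like this:

# Sketch: cap how many old deployment revisions the agent keeps on disk.
# Assumes the agent config lives at /etc/codedeploy-agent/conf/codedeployagent.yml.
sudo sed -i 's/^:max_revisions:.*/:max_revisions: 2/' /etc/codedeploy-agent/conf/codedeployagent.yml

# Restart the agent so the new setting takes effect.
sudo service codedeploy-agent restart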

Found the issue. Instead of selecting appspec.yml and my app file and zipping them directly, I had created a folder for them and then zipped that folder.
I should have created the zip file by selecting the files themselves, without putting them inside a folder first.
Wasted a lot of time on this issue :(
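For anyone hitting the same thing, a minimal sketch of the difference (the file and folder names here are just examples, not from the original answer):

# Wrong: zipping a containing folder puts appspec.yml one level too deep
# (deployment-archive/myfolder/appspec.yml), which the agent will not find.
zip -r archive.zip myfolder/

# Right: zip the files themselves so appspec.yml sits at the root of the archive.
cd myfolder
zip -r ../archive.zip appspec.yml myapp.war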

I also encountered this issue with CodeDeploy, and it kept failing my CodePipeline deployment with "The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml"".
Steps I took:
1) Copy the appspec.yml from the AWS test template and modify it into your new appspec.yml (a minimal sketch follows below).
2) Remember to zip the files from within the folder itself (do not create a folder containing the files and zip that).
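A minimal EC2/On-Premises appspec.yml sketch, packaged so it lands at the archive root. The destination path and hook script name below are hypothetical placeholders, not from the answer above:

# Sketch only: a minimal appspec.yml for an EC2/On-Premises deployment.
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp   # example destination, adjust to your app
hooks:
  AfterInstall:
    - location: scripts/restart_app.sh   # example hook script
      timeout: 300
      runas: root
EOF

# Zip from inside the folder so appspec.yml ends up at the archive root.
zip -r ../myapp.zip .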

You might also see this error when your CodeDeploy lifecycle scripts have issues, so check the CodeDeploy logs for errors on the deployment server.
If it is a Linux server:
/opt/codedeploy-agent/deployment-root/<DEPLOYMENT-GROUP-ID>/<DEPLOYMENT-ID>/logs/scripts.log
If it is a Windows server:
C:\temp\CodeDeploy\b394d44e-ca20-4956-a3ba-d90b99afa87f\d-1K1K9PR1D\logs\scripts.log
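To follow the logs while a deployment runs on Linux, something like this works (the agent log path below is the usual default, but treat it as an assumption for your install):

# Agent-level log (deployment lifecycle, agent errors) - default Linux location.
sudo tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log

# Per-deployment hook script output (substitute your own deployment group and deployment IDs).
sudo tail -f /opt/codedeploy-agent/deployment-root/<DEPLOYMENT-GROUP-ID>/<DEPLOYMENT-ID>/logs/scripts.log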

In my experience, sometimes the revision ends up in a different deployment folder than the one the agent is looking for.
For example, CodeDeploy created and saved the deployment in the folder d-ERABTKHGF on the server, but it was looking for the folder d-G9EZDPEGF.
The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/56474d41-fa14-41e0-9018-1bef9db19995/d-G9EZDPEGF/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/56474d41-fa14-41e0-9018-1bef9db19995/d-G9EZDPEGF/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html.
And if you navigate to the server:
total 0
drwxr-xr-x 2 root root 6 Jan 18 10:57 d-ERABTKHGF
[root@ip-173-31-56-188 d-ERABTKHGF]# pwd
/opt/codedeploy-agent/deployment-root/56474d41-fa14-41e0-9018-1bef9db19995/d-ERABTKHGF
Solution:
Navigate to /opt/codedeploy-agent/deployment-root/ and remove all the folders and files within it.
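If you go this route, a hedged sketch of the cleanup (stop the agent first so it is not mid-deployment, and be aware this deletes the local rollback history):

# Sketch: clear the agent's local deployment history and let the next deploy rebuild it.
sudo service codedeploy-agent stop

# Removes cached revisions and the deployment-instructions tracking files.
sudo rm -rf /opt/codedeploy-agent/deployment-root/*

sudo service codedeploy-agent start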

The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/59a04892-4afd-4e82-9335-52e8b6047d4b/d-WZDFGDBHU/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at AWS website
I ran into this issue after deleting the directory the error message is referring to inside of the deployment-root directory.
I took a look at the directories inside of deployment-root and found a directory called deployment-instructions. Inside this directory I found two files <<fingerprint>>_last_successful_install and <<fingerprint>>_most_recent_install.
I renamed these two files to <<fingerprint>>_last_successful_install.old and <<fingerprint>>_most_recent_install.old.
After doing this I re-ran my deployment and it generated the files again, but this time with the new deployment version instead of the old one (d-WZDFGDBHU in your case).
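In shell terms, the rename looks roughly like this (the <<fingerprint>> part is a placeholder for whatever hash appears in your own deployment-instructions directory):

cd /opt/codedeploy-agent/deployment-root/deployment-instructions

# Move the stale tracking files aside; the next deployment recreates them
# pointing at the new deployment ID instead of the deleted one.
sudo mv <<fingerprint>>_last_successful_install <<fingerprint>>_last_successful_install.old
sudo mv <<fingerprint>>_most_recent_install <<fingerprint>>_most_recent_install.old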

Two things can cause this issue:
1) You deployed the scripts to an ASG or EC2 instance and then added additional scripts for a hook.
For this, you need to restart the ASG or EC2 instance before adding the additional scripts.
2) You reused the same pipeline name for a different deployment.
For this, you need to delete the input artifact in S3 before running the deployment, since AWS will not delete the artifacts even if you delete the pipeline (see the sketch below).
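For the second case, deleting the stale input artifact is just an S3 delete. The bucket and prefix below are hypothetical, so check your pipeline's artifact store for the real names:

# List the leftover artifacts from the old pipeline (bucket name is an example;
# CodePipeline's default artifact buckets are usually named codepipeline-<region>-<number>).
aws s3 ls s3://codepipeline-us-east-1-123456789012/my-pipeline/ --recursive

# Remove the stale input artifact before re-running the deployment.
aws s3 rm s3://codepipeline-us-east-1-123456789012/my-pipeline/ --recursive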

Where should I add AppSpec.yml file in CodeDeploy

I am deploying code from Amazon S3 to an EC2 instance with CodeDeploy. I have configured the deployment group and the application, but I am getting this error:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
When I check the logs I see:
"The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml""
What exactly is an appspec.yml file and where should I place it? I am new to AWS, so any help would be appreciated.
All the information you need should be present in the Amazon documentation: AppSpec File.
File location
Verify that you have placed your AppSpec file in the root directory of the application's source content's directory structure.
Add an Application Specification File to a Revision for CodeDeploy
Add an AppSpec File for an AWS Lambda Deployment
For a deployment to an AWS Lambda compute platform:
The AppSpec file contains instructions about the Lambda functions to be deployed and used for deployment validation.
A revision is the same as an AppSpec file.
An AppSpec file can be written using JSON or YAML.
An AppSpec file can be saved as a text file or entered directly into a console AppSpec editor when creating a deployment. For more information, see Create an AWS Lambda Compute Platform Deployment (Console).
To create an AppSpec file:
Copy the JSON or YAML template into a text editor or into the AppSpec editor in the console.
Modify the template as needed.
Use a JSON or YAML validator to validate your AppSpec file. If you use the AppSpec editor, the file is validated when you choose Create deployment.
If you use a text editor, save the file. If you use the AWS CLI to create your deployment, reference the AppSpec file if it's on your hard drive or in an Amazon S3 bucket. If you use the console, you must push your AppSpec file to Amazon S3.
Add an AppSpec File for an EC2/On-Premises Deployment
To add an AppSpec file to a revision:
Copy the template into a text editor.
Modify the template as needed.
Use a YAML validator to check the validity of your AppSpec file.
Save the file as appspec.yml in the root directory of the revision.
Run one of the following commands to verify that you have placed your AppSpec file in the root directory:
For Linux, macOS, or Unix:
find /path/to/root/directory -name appspec.yml
There will be no output if the AppSpec file is not found there.
For Windows:
dir path\to\root\directory\appspec.yml
A File Not Found error will be displayed if the AppSpec file is not stored there.
Push the revision to Amazon S3 or GitHub (see the CLI sketch below).
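A rough sketch of those last steps with the AWS CLI; the application name, bucket, and key are placeholders:

# Run from the directory that contains appspec.yml at its root.
# Check the file is really there first:
find . -maxdepth 1 -name appspec.yml

# Bundle the current directory into a zip and upload it to S3 as a CodeDeploy revision.
aws deploy push \
  --application-name MyApp \
  --s3-location s3://my-deploy-bucket/myapp.zip \
  --source .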

Artifactory cli - download existing files

I'm using the JFrog CLI to download content from Artifactory. It seems that even though the destination already contains the same files, the CLI tries to download them again. If I re-run the command without cleaning the destination folder, it takes the same amount of time.
Is there any option to speed up the process, e.g., skip a file if the destination folder already has one with the same SHA1?
Our command (download all folders a* in the repo):
jfrog rt dl --threads=`nproc` repo_name/a*/ $TMP_FOLDER/
JFrog CLI already skips the download when the file exists locally, which it validates using a checksum.
You can see this by setting the environment variable "JFROG_CLI_LOG_LEVEL=DEBUG" and then running the same download command again. In the debug log you will see the following line for some files: "File already exists locally" - this means the download was skipped because the file already exists.
The relevant code can be found in GitHub - see the method "downloadFileIfNeeded".
Keep in mind that the CLI still has to get the file info from Artifactory and calculate the local file checksum, so with a lot of small files this won't have as strong an effect as it does on large file downloads.
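For example, the same command as in the question, just with debug logging turned on:

# Enable debug logging so skipped files are reported, then re-run the same download.
export JFROG_CLI_LOG_LEVEL=DEBUG
jfrog rt dl --threads=`nproc` repo_name/a*/ $TMP_FOLDER/

# Files that were skipped show up in the output as "File already exists locally".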

AWS beanstalk wordpress

I was trying to set up AWS Elastic Beanstalk by following the implementation guide provided by AWS.
But when I got to the "Launch an Elastic Beanstalk Environment" section, this message appeared, which basically said the app was not created.
Here's the message:
[Instance: i-088472611e1ef4405] Command failed on instance. Return code: 1 Output: ln: failed to create symbolic link 'wp-content/uploads': No such file or directory. container_command 2link in wordpress-beanstalk/.ebextensions/efs-mount.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Does anyone have the same problem or know how to resolve this?
Try changing the efs-mount config to read the following; the directory clearly doesn't exist, so let's just create it.
container_commands:
  1chown:
    command: "chown webapp:webapp /wpfiles"
  2create:
    command: "sudo -u webapp mkdir -p wp-content/uploads"
  3link:
    command: "sudo -u webapp ln -s /wpfiles wp-content/uploads"
2create will create the directory owned by the webapp user and should let you continue.
I just faced the same issue. I am going to assume you are deploying via the AWS console; that is how I started.
STEP 1: I checked whether there was an actual directory wp-content/uploads in wordpress-beanstalk, and there was not (it might get created on the first WP upload). So I created the folder, re-zipped the application, and deployed to Beanstalk via the AWS console.
I still received the same error and moved on to step 2
STEP 2: Run eb deploy from the command line, from my local wordpress-beanstalk directory:
eb init
- choose your region (if you already created your app, it should be that region)
- if you already created the application, choose it (wordpress-beanstalk, for example)
eb use <name of your environment>
eb deploy
I am not certain that Step 1 is related to Step 2, but I was able to successfully deploy after facing the same issue by using the EB CLI.
This is an EFS mounting error.
EB uses EFS storage to store the WordPress files.
Please check step 7 in the documentation:
"Modify the configuration files in the .ebextensions folder with the IDs of your default VPC and subnets, and your public IP address."
Please edit the efs-create.config file inside the .ebextensions folder.
A bit late here, so for anyone else having this issue: it's caused when that directory does not exist. Here are some reasons this might happen:
1) WP has not created it - check manually that it exists.
2) .gitignore - when a .ebignore file is not present, EB uses your .gitignore instead. This can cause the directory to not be uploaded with the eb deploy command. If this is the case, create a .ebignore and EB will stop using the .gitignore.
3) Document root - if you have modified the document root to something like /src, you have to modify the efs-mount.config file.
##############################################
#### Do not modify values below this line ####
##############################################
container_commands:
  1chown:
    command: "chown webapp:webapp /wpfiles"
  2link:
    command: "sudo -u webapp ln -s /wpfiles src/wp-content/uploads"
Even though the file mentions not to modify it, you have to add your document root path in the 2link entry. Change wp-content/uploads to src/wp-content/uploads (replace src with your document root)
Finally, I would not include a command to automatically make this directory, as that only puts a band-aid on the problem.
Hope this helps

Oozie: Is there anything that needs to be done after placing an updated jar under lib folder?

I am trying to place an updated JAR under the lib path and remove the old JAR. Unfortunately, I still see the old log statements from the old JAR in the Oozie console. For confidentiality reasons I cannot show the logs here, but I am doing the following steps:
Replacing a JAR (mycode.jar) under the lib folder that is referenced in workflow.xml
Submitting the Oozie job using oozie job -oozie http://host -config job.properties -run
When I look at the logs in the console, I can see logs from the old JAR (the older version of mycode.jar) even though the JAR has been replaced.
If you are talking about the lib directory in the Oozie workflow application, then you do not need to do anything. The next execution of the workflow will automatically pick up the new (updated) JAR.
If you are updating JARs in the sharelib (/user/oozie/share/lib/lib_*/*), then after replacing the JAR you need to execute the following command to refresh the sharelib on the Oozie server:
oozie admin -sharelibupdate
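For example, pointed at your Oozie server (the host name and port below are placeholders):

# Tell the Oozie server to pick up the newly replaced sharelib jars
# (run as a user with Oozie admin rights; the URL is an example).
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate

# Optionally confirm which sharelib entries the server now sees.
oozie admin -oozie http://oozie-host:11000/oozie -shareliblist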
Hope this will help. Thanks.
To make sure the issue is the same, I'll describe what I was facing:
I created a MapReduce JAR and placed it in the lib folder.
I ran the Oozie (MapReduce action) job; it picked up the JAR as expected and ran fine.
I made some functionality changes in my code (JAR) and added new log statements to verify that the new JAR was being picked up. I built the JAR and replaced the old JAR with the newly built one in the lib folder (HDFS).
I ran the Oozie job again; code from the old JAR was executed, because the new log statements did not show up.
After some searching I found the following tips:
Clear the YARN cache: found this on the HortonWorks site (https://community.hortonworks.com/articles/92339/how-to-clear-local-file-cache-and-user-cache-for-y.html) - pasting the content below for reference.
Short description:
To use a different version of a JAR file with the same name, clear the cache on all NodeManager hosts to prevent the application from using the old JAR.
a. Find out the cache location by checking the value of the yarn.nodemanager.local-dirs property
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/hadoop/yarn/local</value>
</property>
b. Remove filecache and usercache folder located inside the folders that is specified in yarn.nodemanager.local-dirs.
[yarn@node2 ~]$ cd /hadoop/yarn/local/
[yarn@node2 local]$ ls
filecache  nmPrivate  spark_shuffle  usercache
[yarn@node2 local]$ rm -rf filecache/ usercache/
c. Restart YARN service.
I was unable to clear the cache because I did not have the necessary access, so I used the following workaround instead:
Rename the package or class. Since this package/class was written by me, I had the liberty to simply rename the class; when Oozie looked up the new class name, the new functionality was executed automatically.
Option 2 may not be viable for many, and the question remains open as to why Oozie does not pick up the new JAR/class.

OpsWorks war deployment failure from S3

I have a WAR file, myapp.war (it happens to be a Grails app, but that is not material).
I upload this to an s3 bucket, say myapp in us-west-2
I set up an OpsWorks using the S3 repository type:
Repository Type: S3
Repository URL: https://myapp.s3-us-west-2.amazonaws.com/myapp.war
Access key ID: A key with read permission on the above bucket
Secret access key: the secret for this key
Deploy to an instance in Java layer (Tomcat 7)
All lights are green, deployments succeeded
But the app isn't actually deployed
Shelling in to the instance and looking in /usr/share/tomcat7/webapps I find a directory called 'myapp'. Inside this directory is a file called 'archive'. 'archive' appears to be a war file, but it is not named 'archive.war', and it is in a subdirectory of webapps, so tomcat isn't going to deploy it anyway.
Now, the OpsWorks docs say the archive should be a 'zip' file. But:
zipping up myapp.war into a zip archive 'myapp.war.zip' and changing the path to this file results in 'myapp' containing 'myapp.war'. No deployment, since tomcat isn't looking for war files in 'webapps/myapp'
Changing the name of 'myapp.war' to 'myapp.zip' and changing the repository path results in 'myapp' containing the single file 'archive' again.
So. Can anyone describe how to properly provide a war file to OpsWorks from S3?
It appears that the problem has to do with how the zip archive is made.
JARs, WARs, and the like created with the Java 'jar' tool do not work. Zip archives created with a zip tool and then renamed to have a '.war' extension do.
This is explained here: https://forums.aws.amazon.com/thread.jspa?messageID=559582&#559582
Quoting that post's answer:
Our current extract script doesn't correctly identify WAR files. If you unpack the WAR file and use zip to pack it, it should work until we update our script.
So the procedure that works is to:
Explode the WAR made by your development environment. (In the case of Grails, the war build cleans up the staging directory for the WAR, so you don't have an exploded WAR directory lying around to zip up yourself; you have to unzip the WAR first.)
Zip the contents of the directory created by exploding the WAR using a zip tool (or, if your build tool leaves the exploded WAR directory there, just zip it directly), as sketched below.
Optionally, rename the new zip archive to have a '.war' extension.
Resume the procedure from the original question at step 3 - that is, upload the WAR to the S3 bucket and
specify the S3 path to the WAR file as the repository in the OpsWorks setup.
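A rough sketch of that repacking (file names are examples, not from the answer above):

# Unpack the WAR produced by the build (it is in jar/zip format, but OpsWorks'
# extract script does not recognize it as-is).
mkdir exploded && cd exploded
unzip ../myapp.war

# Re-pack the *contents* with a zip tool; optionally keep the .war extension.
zip -r ../myapp-repacked.war .
cd ..

# Upload to S3 and point the OpsWorks app's repository URL at this object.
aws s3 cp myapp-repacked.war s3://myapp/myapp-repacked.war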
EDIT:
After answering this, I discovered that Grails can produce an exploded war directory after all.
// BuildConfig.groovy
...
grails.project.war.exploded.dir = "path/to/exploded/war-directory"
grails.war.exploded=true
...
That directory can be zipped or jarred or whatever you want by your builder/deployer.
From this wiki page you can see that a WAR file is just a special JAR file, and if you check what a JAR is here, you'll see it is just zipped-up compiled Java code.
This SuperUser question also touches on the .WAR vs .zip business. Basically, a WAR is just a special ZIP, so when you upload a WAR, you are uploading a ZIP.
Make sure it's a WAR file in the S3 bucket.
Provide the entire link to the S3 WAR file. To get this, right-click the WAR file in S3, select Properties, and copy the link.
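From the command line, that amounts to something like this (the bucket and region come from the question above; treat the exact URL format as an assumption for your region):

# Upload the WAR to the bucket used by the OpsWorks app.
aws s3 cp myapp.war s3://myapp/myapp.war

# The repository URL for the OpsWorks app then looks like:
#   https://myapp.s3-us-west-2.amazonaws.com/myapp.war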
