Capistrano 3 deploy WordPress with GIT available in release dir - wordpress

I use Capistrano 3 to deploy my WordPress projects (as implemented in the Bedrock WP stack: https://github.com/roots/bedrock).
WordPress specifically supports a number of features that update the actual code of the production/staging sites (plugin updates, settings files for certain plugins etc) and there are various scenarios where I might want to commit these code changes to the project GIT repo directly from a server.
So, the question is: is there a way to configure the Capistrano deploy to keep the .git repo in the release dir?
I gather this was doable with the 'copy strategy' settings in Cap 2, but I can't find any info about this for Cap 3.

I've solved this by modifying the custom deployment strategy implemented by the https://github.com/Mixd/wp-deploy project.
Note the changed context.execute line.
# Usage:
# 1. Drop this file into lib/capistrano/submodule_strategy.rb
# 2. Add the following to your Capfile:
#      require 'capistrano/git'
#      require './lib/capistrano/submodule_strategy'
# 3. Add the following to your config/deploy.rb:
#      set :git_strategy, SubmoduleStrategy
module SubmoduleStrategy
  # do all the things a normal Capistrano git session would do
  include Capistrano::Git::DefaultStrategy

  # check for a .git directory
  def test
    test! " [ -d #{repo_path}/.git ] "
  end

  # same as in Capistrano::Git::DefaultStrategy
  def check
    test! :git, :'ls-remote', repo_url
  end

  def clone
    git :clone, '-b', fetch(:branch), '--recursive', repo_url, repo_path
  end

  # same as in Capistrano::Git::DefaultStrategy
  def update
    git :remote, :update
  end

  # put the working tree in a release branch,
  # make sure the submodules are up to date
  # and copy everything to the release path
  def release
    release_branch = fetch(:release_branch, File.basename(release_path))
    git :checkout, '-B', release_branch,
        fetch(:remote_branch, "origin/#{fetch(:branch)}")
    git :submodule, :update, '--init'
    # context.execute "rsync -ar --exclude=.git\* #{repo_path}/ #{release_path}"
    context.execute "rsync -ar #{repo_path}/ #{release_path}"
  end
end
This solution now deploys the release as a GIT repo set to a custom branch based on the release id.
This can then be committed and pushed up to the master repo for merging as required.
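For example, committing plugin changes from the server might look roughly like this (the paths and remote name are assumptions; adjust them to your setup):
cd /var/www/mysite/current          # the current release dir (assumed path)
git status                          # review what the plugin/settings updates changed
git add -A
git commit -m "Plugin and settings changes made on staging"
git push origin HEAD                # push the release branch for merging as required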
All credit goes to Aaron Thomas, the creator of the WP-Deploy project.

Related

How to package Qt5 application with snapcraft

I'm trying to create a snap package of a Qt/QML application. The application packages fine, but when I try to run it I get the error /snap/swipe-app/x2/bin/qt5-launch: 74: exec: application: not found.
Here's my snapcraft.yaml file:
name: swipe-app # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: description
grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  swipe-app:
    command: qt5-launch application
    plugs:
      - unity7
      - home

parts:
  application:
    # See 'snapcraft plugins'
    plugin: qmake
    project-files: ["snap.pro"]
    source: .
    build-packages:
      - qtbase5-dev
    stage-packages:
      # Here for the plugins -- they're not linked in automatically.
      - libqt5gui5
    after: [qt5conf] # A wiki part
As you have told the launch script that your program is called application, it will try to execute application from the current working directory when you run your snap. There are two things to note here:
The working directory is preserved from the terminal outside the snap context. For example, if you are in your home directory /home/your-user, then the working directory for swipe-app will also be /home/your-user.
Because the working directory is your home directory, commands without any anchor, such as application, will try to execute from your home directory. So in your example the launch script will attempt to run the command equivalent of /home/your-user/application.
You can fix this either by ensuring that the launch script executes a cd to change the working directory, e.g. cd $SNAP, or by anchoring your command, e.g. command: qt5-launch $SNAP/application.
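For instance, the apps section of the snapcraft.yaml above would become something like this (a sketch only):
apps:
  swipe-app:
    command: qt5-launch $SNAP/application
    plugs:
      - unity7
      - home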
Another thing you might need to check is that your qmake build actually outputs a binary called application. If you have not set TARGET in your snap.pro project file, the binary will default to being called snap, not application. The line should read TARGET = application to make a binary called application (ref: https://doc.qt.io/qt-5/qmake-variable-reference.html#target).
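A minimal snap.pro along those lines might look like this (a sketch; the module list and sources are assumptions about your project):
TEMPLATE = app
TARGET = application      # name of the produced binary
QT += quick               # typical for a Qt/QML app
SOURCES += main.cpp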

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, letting me bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, so I guess that means SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
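For reference, that hack could be as small as something like this, run before and after the unarchive tasks (a sketch only; ls -s lists block sizes rather than timestamps):
ls -Rs wp-content/plugins | md5sum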
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_before.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_after.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
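Applied to the playbook above, that could look like this (the creates path is an assumption based on how the plugin unpacks):
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https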
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.

Command 'generate' not found, compiling with rebar

I am following this blog:
http://maplekeycompany.blogspot.se/2012/03/very-basic-cowboy-setup.html
In short, I am trying to compile an application with rebar just as the person in the blog.
Everything goes smoothly until I want to run the command:
./rebar get-deps compile generate
This then gives me the following errors and warnings:
user@user:~/simple_server/rebar$ ./rebar get-deps compile generate
==> rebar (get-deps)
==> rebar (compile)
Compiled src/simple_server.erl
Compiled src/simple_server_http.erl
src/simple_server_http_static.erl:5: Warning: behaviour cowboy_http_handler undefined
Compiled src/simple_server_http_static.erl
src/simple_server_http_catchall.erl:2: Warning: behaviour cowboy_http_handler undefined
Compiled src/simple_server_http_catchall.erl
WARN: 'generate' command does not apply to directory /home/harri/simple_server/rebar
Command 'generate' not understood or not applicable
I have found a similar post with the same error:
Command 'generate' not understood or not applicable
I think the problem is in reltool.config but I do not know how to proceed. I changed the path to the following: {lib_dirs, ["home/user/simple_server/rebar"]}
Is there a problem with the path? How can rebar get access to all the src files and also the necessary rebar file to compile and build the application?
You need to make sure your directory structure and its contents are arranged so that rebar knows how to build everything in your system and generate a release for it. Your directory structure should look like this:
project
  |
  -- rel
  |
  -- deps
  |
  -- apps
      |
      -- myapp
      |    |
      |    -- src
      |    -- priv
      |
      -- another_app
The rel directory holds all the information needed to generate a release, and the apps directory is where the applications that make up your project live. Application dependencies live in the deps directory. Each app such as myapp and another_app under the apps directory can have their own rebar.config files. While two or more such applications are possible here, normally you'd have just one and all others would be dependencies.
In the top-level project directory there's also a rebar.config file with contents that look like this:
{sub_dirs, ["rel", "apps/myapp", "apps/another_app"]}.
{lib_dirs, ["apps"]}.
If necessary, you can use rebar to generate your apps from application skeletons:
cd apps
mkdir myapp another_app
( cd myapp && rebar create-app appid=myapp )
( cd another_app && rebar create-app appid=another_app )
If an application has dependencies, you'll have to add a rebar.config to its directory and declare each dependency there. For example, if myapp depends on application foo version 1.2, create apps/myapp/rebar.config with these contents:
{deps,
[{foo, "1.*", {git, "git://github.com/user/foo.git", {tag, "foo-1.2"}}}]
}.
When you run rebar get-deps, rebar will populate the top-level deps directory to hold all dependencies, creating deps if necessary. The top-level rebar.config can also declare dependencies if necessary.
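For example, a top-level rebar.config that also pulls in cowboy as a dependency might look roughly like this (the repository URL and tag are placeholders, not taken from the question):
{sub_dirs, ["rel", "apps/myapp", "apps/another_app"]}.
{lib_dirs, ["apps"]}.
{deps,
 [{cowboy, ".*", {git, "git://github.com/ninenines/cowboy.git", {tag, "0.8.6"}}}]
}.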
You also need to generate a node, necessary for your releases:
cd ../rel
rebar create-node nodeid=project
You then need to modify the reltool.config file generated by the previous step. You need to change
{lib_dirs, []},
to
{lib_dirs, ["../apps", "../deps"]},
and just after the line {incl_cond, derived}, add {mod_cond, derived}, so that releases contain only the applications needed for correct execution.
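After those edits, the relevant part of the sys section in reltool.config would look something like this (a sketch; the other generated entries are omitted):
{sys, [
       {lib_dirs, ["../apps", "../deps"]},
       {incl_cond, derived},
       {mod_cond, derived},
       %% ... the rel, boot_rel and app entries described below go here
      ]}.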
Next, wherever the atom 'project' appears, you need to replace it with the applications under the apps directory. For our example, we'd change this part:
{rel, "project", "1",
[
kernel,
stdlib,
sasl,
project
]},
to this:
{rel, "project", "1",
[
kernel,
stdlib,
sasl,
myapp,
another_app
]},
and change this part:
{app, project, [{mod_cond, app}, {incl_cond, include}]}
to this:
{app, myapp, [{mod_cond, app}, {incl_cond, include}]},
{app, another_app, [{mod_cond, app}, {incl_cond, include}]}
You might also need to add the line:
{app, hipe, [{incl_cond, exclude}]},
to exclude the hipe application, since it sometimes causes errors during release generation or when trying to run the release. Try without it first, but add it if you see errors related to hipe when generating a release, or if attempts to run the generated release result in this sort of error:
{"init terminating in do_boot",{'cannot load',elf_format,get_files}}
With all this in place you can now execute:
rebar get-deps compile generate
and you should be able to successfully generate the release. Note that running rebar generate at the top level rather than in the rel dir will result in a harmless warning like this, which you can ignore:
WARN: 'generate' command does not apply to directory /path/to/project
Finally, you can run the release. Here's how to run it with an interactive console:
$ ./rel/project/bin/project console
Exec: /path/to/project/rel/project/erts-6.2/bin/erlexec -boot /path/to/project/rel/project/releases/1/project -mode embedded -config /path/to/project/rel/project/releases/1/sys.config -args_file /path/to/project/rel/project/releases/1/vm.args -- console
Root: /path/to/project/rel/project
Erlang/OTP 17 [erts-6.2] [source] [64-bit] [smp:8:8] [async-threads:10] [kernel-poll:false]
Eshell V6.2 (abort with ^G)
(project@127.0.0.1)1>
or you could run ./rel/project/bin/project start to start it in the background. Run ./rel/project/bin/project with no arguments to see all available options.

Deleting artifacts older than 2 years from local nexus repository

We're running nexus on some old hardware which is limited in disk space and would like to remove artifacts older than a certain threshold.
Is there any way to do this other than a combination of find and curl?
There is a scheduled task that can automatically remove old snapshot releases:
http://www.sonatype.com/people/2009/09/nexus-scheduled-tasks/
http://www.sonatype.com/books/nexus-book/reference/confignx-sect-managing-tasks.html
Unfortunately, this does not work for hosted release repositories.
As mentioned in a Sonatype blog post linked from a comment on the blog in gavenkoa's answer, since Nexus 2.5 there is a built-in "Remove Releases From Repository" scheduled task which can be configured to delete old releases while keeping a defined number of them.
This is sufficient to meet our needs.
Delete all files that no one has accessed in more than 100 days and that have not been modified in more than 200 days:
find . -type f -atime +100 -mtime +200 -delete
To cleanup empty directories:
find . -type d -empty -delete
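For example, run from the repository's storage directory (the path below is an assumption; adjust it to your sonatype-work location):
cd /opt/sonatype-work/nexus/storage/releases
find . -type f -atime +100 -mtime +200 -delete
find . -type d -empty -delete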
Or alternatively look to https://github.com/akquinet/nexus_cleaner/blob/master/nexus_clean.sh and corresponding blog entry http://blog.akquinet.de/2013/12/09/how-to-clean-your-nexus-release-repositories/ (delete all except last 10 releases).
To auto-purge Docker images from Nexus 3 that are older than 30 days (you can change this) and have not been downloaded, see:
https://gist.github.com/anjia0532/4a7fee95fd28d17f67412f48695bb6de
# nexus3's username and pwd
username = 'admin'
password = 'admin123'
# nexus host
nexusHost = 'http://localhost:8081'
# purge repo
repoName = 'docker'
# older than days
days = 30
#change and run it
For Nexus 2, you can use my Spring Boot application https://github.com/vernetto/nexusclean. You can define rules based on date and on a minimum number of artifacts to retain, and it generates "rm -rf" commands (using the REST API is damn slow).
For Nexus 3, I would definitely use a Groovy script run as an "Execute Admin Task". One is posted here: groovy script to delete artifacts on nexus 3 (not nexus 2)

Generate xcarchive into a specific folder from the command line

For the purposes of CI, I need to be able to generate an XCARCHIVE and an IPA file in our nightly build. The IPA is for our testers, to be signed with our ad-hoc keys, and the XCARCHIVE is to send to the client so that they can import it into Xcode and submit it to the app store when they're happy with it.
Generating the IPA is simple enough with a bit of googling, however how to generate the .XCARCHIVE file is what eludes me. The closest I've found is:
xcodebuild -scheme myscheme archive
However, this stores the .xcarchive in some hard-to-find folder, eg:
/Users/me/Library/Developer/Xcode/Archives/2011-12-14/MyApp 14-12-11 11.42 AM.xcarchive
Is there some way to control where the archive is put, what its name is, and how to avoid having to re-compile it? I guess the best possible outcome would be to generate the xcarchive from the DSYM and APP that are generated when you do an 'xcodebuild build' - is this possible?
Xcode 5 now supports an -archivePath option:
xcodebuild -scheme myscheme archive -archivePath /path/to/AppName.xcarchive
You can also now export a signed IPA from the archive you just built:
xcodebuild -exportArchive -exportFormat IPA -exportProvisioningProfile my_profile_name -archivePath /path/to/AppName.xcarchive -exportPath /path/to/AppName.ipa
Starting with Xcode 4 Preview 5 there are three environment variables that are accessible in the scheme archive's post-actions.
ARCHIVE_PATH: The path to the archive.
ARCHIVE_PRODUCTS_PATH: The installation location for the archived product.
ARCHIVE_DSYMS_PATH: The path to the product’s dSYM files.
You could move/copy the archive in here. I wanted to have a little more control over the process in a CI script, so I saved a temporary file that could easily be sourced in my CI script that contained these values.
BUILD_DIR=$PROJECT_DIR/build
echo "ARCHIVE_PATH=\"$ARCHIVE_PATH\"" > $BUILD_DIR/archive_paths.sh
echo "ARCHIVE_PRODUCTS_PATH=\"$ARCHIVE_PRODUCTS_PATH\"" >> $BUILD_DIR/archive_paths.sh
echo "ARCHIVE_DSYMS_PATH=\"$ARCHIVE_DSYMS_PATH\"" >> $BUILD_DIR/archive_paths.sh
echo "INFOPLIST_PATH=\"$INFOPLIST_PATH\"" >> $BUILD_DIR/archive_paths.sh
Then in my CI script I can run the following:
xcodebuild -alltargets -scheme [Scheme Name] -configuration [Config Name] clean archive
source build/archive_paths.sh
ARCHIVE_NAME=AppName-$APP_VERSION-$APP_BUILD.xcarchive
cp -r "$ARCHIVE_PATH" "$BUILD_DIR/$ARCHIVE_NAME"
I have just solved this one: add the -archivePath argument to your xcodebuild command line. Given the initial question, that would mean:
xcodebuild -scheme myscheme archive
becomes ...
xcodebuild -scheme myscheme archive -archivePath Build/Archive
(Note: paths are relative, I output my build to $PWD/Build)
This will then place your .app folder in:
Build/Archive.xcarchive/Products/Application
If your build target already has your signing certificate and provisioning profile in it you can then create your IPA file without re-signing using the following command:
xcrun -v -sdk iphoneos PackageApplication -v `pwd`'/Build/Archive.xcarchive/Products/Application/my.app' -o `pwd`'/myapp.ipa'
(Note: xcrun doesn't like relative paths hence the pwd)
The -v args dump lots of useful information - this command can fail to sign properly and still exit with code 0, sigh!
If you are finding that you can't run the built .ipa it's probably a signing issue that you can do a double check on using:
codesign --verify -vvvv myapp.app
If it's signed correctly and un-tampered with the output will have this in:
myapp.app: valid on disk
myapp.app: satisfies its Designated Requirement
If not you will see something similar to this:
Codesign check fails : /blahpath/myapp.app: a sealed resource is missing or invalid
file modified: /blahpath/ls-ios-develop.app/Assets.car
... which generally means you are trying to use an intermediate output directory rather than the proper archive.
My current solution is to rename the user's existing archives folder, run the build, do a 'find' to copy the archives where I want them, then delete the archives folder and rename the old folder back as it was, with code like this in my Ruby build script:
# Move the existing archives out of the way
system('mv ~/Library/Developer/Xcode/Archives ~/Library/Developer/Xcode/OldArchivesTemp')
# Build the .app, the .DSYM, and the .xcarchive
system("xcodebuild -scheme \"#{scheme}\" clean build archive CONFIGURATION_BUILD_DIR=\"#{build_destination_folder}\"")
# Find the xcarchive wherever it was placed and copy it where i want it
system("find ~/Library/Developer/Xcode/Archives -name *.xcarchive -exec cp -r {} \"#{build_destination_folder}\" \";\"")
# Delete the new archives folder with this new xcarchive
system('rm -rf ~/Library/Developer/Xcode/Archives')
# Put the old archives back
system('mv ~/Library/Developer/Xcode/OldArchivesTemp ~/Library/Developer/Xcode/Archives')
It's a bit hacky but I don't see a better solution currently. At least it preserves the user's 'archives' folder and all their pre-existing archives.
--Important note!--
I since found out that the line of code where I find the archive and cp it to the folder I want doesn't copy the symlinks inside the archive correctly, thus breaking the code signing in the app. You'll want to replace that with a 'mv' or something that maintains symlinks. Cheers!
Here's a bit of bash that I've come up with for our Jenkins CI system. These commands should be run in a script immediately after the xcodebuild archive command finishes.
BUILD_DIR="${WORKSPACE}/build"
XCODE_SCHEME="myscheme"
# Common path and partial filename
ARCHIVE_BASEPATH="${HOME}/Library/Developer/Xcode/Archives/$(date +%Y-%m-%d)/${XCODE_SCHEME}"
# Find the latest .xcarchive for the given scheme
NEW_ARCHIVE=$(ls -td "${ARCHIVE_BASEPATH}"* | head -n 1)
# Zip it up so non-Apple systems won't treat it as a dir
pushd "${NEW_ARCHIVE%/*}"
zip -r "${BUILD_DIR}/${NEW_ARCHIVE##*/}.zip" "${NEW_ARCHIVE##*/}"
popd
# Optional, disk cleanup
rm -rf "${NEW_ARCHIVE}"
The BUILD_DIR is used to collect artifacts so that it's easy to archive them from Jenkins with a glob such as build/*.ipa,build/*.zip
Similar to the others, but perhaps a little simpler since I try to record the .xcarchive file's location. (I also don't move the archives folder, so this will work better if you're doing multiple builds at the same time.)
My caller build script generates a new tempfile and sets its path in an environment variable named XCARCHIVE_PATH_TMPFILE. This environment variable is available in my scheme's Archive post-action shell script, which then writes the .xcarchive's path to that file. The build script can then read that file after it calls xcodebuild archive.
post-action shell script
echo $ARCHIVE_PATH > "$XCARCHIVE_PATH_TMPFILE"
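The caller side might then look roughly like this (a sketch based on the description above; the scheme name is a placeholder):
caller build script
XCARCHIVE_PATH_TMPFILE=$(mktemp)
export XCARCHIVE_PATH_TMPFILE
xcodebuild -scheme myscheme archive
ARCHIVE_PATH=$(cat "$XCARCHIVE_PATH_TMPFILE")
echo "Archive written to: $ARCHIVE_PATH"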
On Xcode 4.6 it is possible to specify a post-build action for the scheme to be compiled into an xcarchive:
echo "ARCHIVE_PATH=\"$ARCHIVE_PATH\"" > $PROJECT_DIR/archive_paths.sh
A build script can be used to check if $ARCHIVE_PATH is defined after running xcodebuild and if this is the case, the output xcarchive can be moved into a designated folder.
This method is not very maintainable if the project has a large number of targets, as for each one it is necessary to mark the corresponding scheme as 'shared' and add the post-build action.
To address this problem, I have created a build script that generates the archive path programmatically by extracting the last build that matches the target name on the current day. This method works reliably as long as there aren't multiple builds with the same target name running on the machine (this may be a problem in production environments where multiple concurrent builds are run).
#!/bin/bash
#
# Script to archive an existing xcode project to a target location.
# The script checks for a post-build action that defines the $ARCHIVE_PATH as follows:
# echo "ARCHIVE_PATH=\"$ARCHIVE_PATH\"" > $PROJECT_DIR/archive_paths.sh
# If such post-build action does not exist or sourcing it doesn't define the $ARCHIVE_PATH
# variable, the script tries to generate it programmatically by finding the latest build
# in the expected archiving folder
#
post_build_script=archive_paths.sh
build_errors_file=build_errors.log
OUTPUT=output/
XCODEBUILD_CMD='/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild'
TARGET_SDK=iphoneos
function archive()
{
echo "Archiving target '$1'"
# Delete $post_build_script if it already exists as it should be generated by a
# post-build action
rm -f $post_build_script
# Use custom provisioning profile and code sign identity if specified, otherwise
# default to project settings
# Note: xcodebuild always returns 0 even if the build failed. We look for failure in
# the stderr output instead
if [[ ! -z "$2" ]] && [[ ! -z "$3" ]]; then
${XCODEBUILD_CMD} clean archive -scheme $1 -sdk "${TARGET_SDK}" \
"CODE_SIGN_IDENTITY=$3" "PROVISIONING_PROFILE=$2" 2>$build_errors_file
else
${XCODEBUILD_CMD} clean archive -scheme $1 -sdk "${TARGET_SDK}" \
2>$build_errors_file
fi
errors=`grep -wc "The following build commands failed" $build_errors_file`
if [ "$errors" != "0" ]
then
echo "BUILD FAILED. Error Log:"
cat $build_errors_file
rm $build_errors_file
exit 1
fi
rm $build_errors_file
# Check if archive_paths.sh exists
if [ -f "$post_build_script" ]; then
source "$post_build_script"
if [ -z "$ARCHIVE_PATH" ]; then
echo "'$post_build_script' exists but ARCHIVE_PATH was not set.
Enabling auto-detection"
fi
fi
if [ -z "$ARCHIVE_PATH" ]; then
# This is the format of the xcarchive path:
# /Users/$USER/Library/Developer/Xcode/Archives/`date +%Y-%m-%d`/$1\
# `date +%d-%m-%Y\ %H.%M`.xcarchive
# In order to avoid mismatches with the hour/minute of creation of the archive and
# the current time, we list all archives with the correct target that have been
# built in the current day (this may fail if the build wraps around midnight) and
# fetch the correct file with a combination of ls and grep.
# This script can break only if there are multiple targets with exactly the same
# name running at the same time.
EXTRACTED_LINE=$(ls -lrt /Users/$USER/Library/Developer/Xcode/Archives/`date +%Y-%m-%d`/ | grep $1\ `date +%d-%m-%Y` | tail -n 1)
if [ "$EXTRACTED_LINE" == "" ]; then
echo "Error: couldn't fetch archive path"
exit 1
fi
# ls -lrt prints lines with the following format
# drwxr-xr-x 5 mario 1306712193 170 25 Jul 17:17 ArchiveTest 25-07-2013
# 17.17.xcarchive
# We can split this line with the " " separator and take the latest bit:
# 17.17.xcarchive
FILE_NAME_SUFFIX=$(echo $EXTRACTED_LINE | awk '{split($0,a," "); print a[11]}')
if [ "$FILE_NAME_SUFFIX" == "" ]; then
echo "Error: couldn't fetch archive path"
exit 1
fi
# Finally, we can put everything together to generate the path to the xcarchive
ARCHIVE_PATH="/Users/$USER/Library/Developer/Xcode/Archives/`date
+%Y-%m-%d`/$1 `date +%d-%m-%Y` $FILE_NAME_SUFFIX/"
fi
# Create output folder if it doesn't already exist
mkdir -p "$OUTPUT"
# Move archived xcarchive build to designated output folder
mv -v "$ARCHIVE_PATH" "$OUTPUT"
}
# Check number of command line args
if [ $# -lt 1 ]; then
echo "Syntax: `basename $0` <target name> [/path/to/provisioning-profile]
[<code sign identity]"
exit 1
fi
if [ ! -z "$2" ]; then
PROVISIONING_PROFILE="$2"
fi
if [ ! -z "$3" ]; then
SIGN_PROVISIONING_PROFILE="$3"
else
if [ ! -z "$PROVISIONING_PROFILE" ]; then
SIGN_PROVISIONING_PROFILE=$(cat "$PROVISIONING_PROFILE" | egrep -a -o '[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}')
fi
fi
archive "$1" "$PROVISIONING_PROFILE" "$SIGN_PROVISIONING_PROFILE"
Full source code with an example Xcode project can be found here:
https://github.com/bizz84/Xcode-xcarchive-command
