I have a Meteor application with Circle CI as its continuous integration service.
Facebook Flow runs locally with the following .flowconfig:
[ignore]
.*/node_modules/.*
[options]
module.name_mapper='^\/\(.*\)$' -> '<PROJECT_ROOT>/\1'
module.name_mapper='^meteor\/\(.*\):\(.*\)$' -> '<PROJECT_ROOT>/.meteor/local/build/programs/server/packages/\1_\2'
module.name_mapper='^meteor\/\(.*\)$' -> '<PROJECT_ROOT>/.meteor/local/build/programs/server/packages/\1'
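For illustration, with the last mapper an import like the one below is rewritten into Meteor's build output, which is why that directory has to exist (my reading of the rules above):
import { Meteor } from 'meteor/meteor';
// resolved by Flow as <PROJECT_ROOT>/.meteor/local/build/programs/server/packages/meteor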
In CI I get errors like:
client/main.jsx:4
4: import { Meteor } from 'meteor/meteor';
^^^^^^^^^^^^^^^ meteor/meteor. Required module not found
Flow does not seem to find my modules; the rewrite rules do not apply. With SSH access to Circle CI I found out that the <PROJECT_ROOT>/.meteor/local directory is not present.
Once I run meteor on the CI machine, the directory appears.
Problem: if I run meteor, the Meteor server starts up and my tests time out.
As far as I see I need to either
Adapt my .flowconfig or
Find a way to get Meteor to create the directory without running meteor or
Find a way to kill the meteor process once the server is running.
I went with the third option:
bbaja42 shared a script that saves the output of a program and terminates the program once a keyword is reached.
Adapted to my case I have two files:
ci-tests.sh
#!/bin/sh
# Check if the output directory exists. Flow needs the modules there.
if [ ! -d ".meteor/local/build/programs/server/packages" ]; then
echo "Meteor build directory does not exist. Starting Meteor."
# Run Meteor so the output directory is built.
./build-and-kill-meteor.sh
else
echo "Meteor build directory exists"
fi
./node_modules/.bin/flow --json
if [ $? -ne 0 ] ; then
exit 1
fi
build-and-kill-meteor.sh
#!/bin/bash
OUTPUT=/tmp/meteor-launch.log
PROGRAM=meteor
$PROGRAM > $OUTPUT &
PID=$!
echo Program is running under pid: $PID
# Every 10 seconds, check whether the server has started
while true; do
tail -1 $OUTPUT
grep "App running at: http://localhost" $OUTPUT
if [ $? -eq 0 ] ; then
break
fi
sleep 10
done
kill $PID || echo "Killing process with pid $PID failed, try manual kill with -9 argument"
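To use these in CI, both scripts just need to be executable and invoked from the build step, for example:
chmod +x ci-tests.sh build-and-kill-meteor.sh
./ci-tests.sh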
I ran into the same issue and came up with my own derivation based on the OP's answer. I run this script on every CI build to ensure that Meteor always installs any new Atmosphere packages that I'm shipping.
#!/bin/bash
# Install meteor
if [ -d ~/.meteor ]; then sudo ln -s ~/.meteor/meteor /usr/local/bin/meteor; fi
if [ ! -e $HOME/.meteor/meteor ]; then curl https://install.meteor.com | sh; fi
OUTPUT=/tmp/meteor-launch.log
PROGRAM=meteor
$PROGRAM > $OUTPUT &
PID=$!
echo Program is running under pid: $PID
# Start meteor to install atmosphere packages
while true; do
tail -1 $OUTPUT
grep "Your application is crashing." $OUTPUT
# Cancel the program once meteor has started
if [ $? -eq 0 ] ; then
break
fi
sleep 10
done
kill $PID || echo "Killing process with pid $PID failed, try manual kill with -9 argument."
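For reference, a hypothetical circle.yml hook (CircleCI 1.0 syntax) that runs the script on every build, assuming it is saved as install-meteor.sh:
dependencies:
  pre:
    - ./install-meteor.sh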
Related
I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all its dependencies in offline mode? Thanks in advance for your answers.
I have recently gone through this procedure, and this is what worked for me on CentOS 7:
You need an online Linux server to download the dependencies. You can use a virtual machine or anything else.
On your online server, create a .sh file and copy the script below into it. (I named it download_dependencies.sh.)
#!/bin/bash
# This script is used to fetch external packages that are not available in standard Linux distribution
# Example: ./fetch-external-dependencies ubuntu18.04
# Script will create nms-dependencies-ubuntu18.04.tar.gz in local directory which can be copied
# into target machine and packages inside can be installed manually
set -eo pipefail
# current dir
PACKAGE_PATH="."
mkdir -p $PACKAGE_PATH
declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"
CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"
declare -A CLICKHOUSE_PACKAGES
# for Clickhouse package names are static between distributions
# we use ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}
declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
download_packages() {
local target_distribution=$1
if [ -z $target_distribution ]; then
echo "$0 - no target distribution specified"
exit 1
fi
mkdir -p "${PACKAGE_PATH}/${target_distribution}"
# just in case delete all files in target dir
rm -f "${PACKAGE_PATH}/${target_distribution}/*"
readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
echo "Downloading Clickhouse signing keys"
curl -fs ${CLICKHOUSE_KEY} --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
echo "Downloading Nginx signing keys"
curl -fs ${NGINX_KEY} --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
for package_file in "${clickhouse_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
for package_file in "${nginx_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${NGINX_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
tar -zcf $bundle_file -C "${PACKAGE_PATH}/${target_distribution}" .
echo "Bundle file saved as $bundle_file"
}
target_distribution=$1
if [ -z $target_distribution ]; then
echo "Usage: $0 target_distribution"
echo "Supported target distributions: ${!CLICKHOUSE_REPO[#]}"
exit 1
fi
# check if target distribution is supported
if [ -z "${CLICKHOUSE_REPO[$target_distribution]:-}" ]; then
echo "Target distribution is not supported."
echo "Supported distributions: ${!CLICKHOUSE_REPO[@]}"
exit 1
fi
download_packages "${target_distribution}"
Then, in the same directory that contains download_dependencies.sh, run the command below:
./download_dependencies.sh <your linux version>
In my case, I ran the command below (run it with no argument to see the supported options):
./download_dependencies.sh centos7
It should start downloading, and when it finishes you should see nms-dependencies-centos7.tar.gz in your directory.
Copy that file (.tar.gz) to your offline target.
Now, on your target machine, go to the directory where you copied the file and run the commands below:
tar -zxvf nms-dependencies-centos7.tar.gz
sudo yum install *.rpm
After installation, you can start Nginx and ClickHouse using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your Nginx service should now be running!
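A couple of optional sanity checks, assuming a systemd-based target:
sudo systemctl status nginx --no-pager
curl -I http://localhost   # an HTTP response header means Nginx is answering
sudo systemctl enable nginx clickhouse-server   # start both on boot, if desired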
You can download the tar file on another system and copy it over.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc
I've just gotten DDEV set up, and I have multisite working by manually running ddev import-db --target-db=[db-name]. It's working just fine, but I would like to figure out how to get database pulls from Acquia to work where I can specify the site to pull from.
I have this script working but is there a way to do this with DDEV commands that would be a little cleaner?
First I modified acquia.yaml to this:
environment_variables:
project_id: mysite.dev
uri: mysite.com
db_name: mysite_us
#uri: mysite.ca
#db_name: mysite_canada
#uri: mysite.co.uk
#db_name: mysite_unitedkingdom
# etc etc
db_pull_command:
command: |
# set -x # You can enable bash debugging output by uncommenting
ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
pushd /var/www/html/.ddev/.downloads >/dev/null
acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I wrote the following script, which I call like:
./ddev-refresh-db.sh mysite_us mysite.com
#!/bin/bash
site="$1"
uri="$2"
ddev pull acquia
ddev import-db --target-db=${site} --src=.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
However, this still requires us to change the site and URI in the acquia.yaml file before running the command.
Is there a way to pass a variable through to ddev pull acquia ? And also a way to mimic what this script is doing with a real DDEV command?
Here's a more complete answer for an Acquia multisite pull, pulling all sites. As of DDEV v1.18.0, ddev pull itself really isn't robust enough to pull multiple sites, because it assumes one database and one set of files. This works where @kelly howard's answer in https://stackoverflow.com/a/68553116/215713 is inadequate. (In her example she pulls just one of the multisites, and it works great for that situation.)
But here we'll put all the logic in a DDEV custom command and pull all databases and files for any named site, via ddev acquiapull <sitename>.
Place this file in the project as .ddev/commands/web/acquiapull
#!/bin/bash
# This DDEV custom command is set up to pull database and files from Acquia for several subsites.
# Usage: `ddev acquiapull [ --skip-db ] [ --skip-files ] <site1> <site2>`
# Example: `ddev acquiapull subsite1`
# This assumes that each subsite has its own database (named for the site)
# and that each subsite has its own files in sites/<sitename>/files
# To use it set up the needed ACQUIA_API_KEY and ACQUIA_API_SECRET in global
# or project config, just as described in
# https://ddev.readthedocs.io/en/stable/users/providers/acquia/
acquia_project_id=myprojectid.dev
tmpdir=/tmp #inside web container
set -eu -o pipefail
while :; do
case ${1:-} in
-h | -\? | --help)
echo "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] [ -y ] <site1> <site2>"
exit
;;
-y|--yes)
SKIP_CONFIRMATION=true
;;
--skip-files)
SKIP_FILES=true
;;
--skip-db)
SKIP_DB=true
;;
--) # End of all options.
shift
break
;;
-?*)
printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
;;
*) # Default case: No more options, so break out of the loop.
break ;;
esac
shift
done
# Map sitename to database name
function target_db_name() {
site_name=$1
echo $site_name
}
# Map sitename to files dir
function target_files_dir() {
site_name=$1
echo "sites/${site_name}/files"
}
# Get the files from upstream and load them.
function files_pull() {
#set -x # You can enable bash debugging output by uncommenting
set -eu -o pipefail
site_name=$1
files_dir=$(target_files_dir $1)
mkdir -p ${DDEV_DOCROOT}/${files_dir}/
echo "Using drush rsync to update files for ${site_name}..."
drush rsync --alias-path=~/.drush -q -y -r ${DDEV_DOCROOT} --verbose @${acquia_project_id}:${files_dir}/ ${DDEV_DOCROOT}/${files_dir}/
}
# Get the db from upstream and load it
function db_pull() {
#set -x # You can enable bash debugging output by uncommenting
set -eu -o pipefail
site_name=$1
target_db=$(target_db_name ${site_name})
echo "Downloading ${site_name} database..."
acli remote:drush -n ${acquia_project_id} -- sql-dump --uri=${site_name} --extra-dump=--no-tablespaces >${tmpdir}/${site_name}.sql
echo "Loading ${site_name} into database '${target_db}'..."
mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${target_db}; GRANT ALL ON ${target_db}.* TO 'db'@'%'"
mysql -uroot -proot ${target_db} <${tmpdir}/${site_name}.sql
drush -r root --uri=${site_name} cr
}
# Handle initial authentication via Acquia secrets and ssh
function authenticate() {
if [ -z "${ACQUIA_API_KEY:-}" ] || [ -z "${ACQUIA_API_SECRET:-}" ]; then echo "Please make sure you have set ACQUIA_API_KEY and ACQUIA_API_SECRET in your project or global config" && exit 1; fi
if ! command -v drush >/dev/null; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
ssh-add -l >/dev/null || (echo "Please 'ddev auth ssh' before running this command." && exit 1)
acli auth:login -n --key="${ACQUIA_API_KEY}" --secret="${ACQUIA_API_SECRET}"
acli remote:aliases:download -n >/dev/null
}
# Main script
authenticate || { printf "Failed to authenticate\n"; exit 1; }
if [ $# -eq 0 ]; then
printf "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] <sitename>"
exit 1
fi
if [ "${SKIP_CONFIRMATION:-}" != "true" ]; then
echo "This will overwrite your database and files for sites $*. OK?"
select yn in "Yes" "No"; do
case $yn in
Yes ) break;;
No ) exit;;
esac
done
fi
for subsite in $*; do
echo "Pulling subsite: $subsite"
if [ "${SKIP_DB:-}" != "true" ]; then
db_pull ${subsite} || { printf "Failed to pull db for ${subsite}\n"; exit 1; }
else
echo "Skipping db pull for ${subsite}"
fi
if [ "${SKIP_FILES:-}" != "true" ]; then
files_pull ${subsite} || { printf "Failed to pull files for ${subsite}\n"; exit 1; }
else
echo "Skipping files pull for ${subsite}"
fi
done
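Typical usage, with subsite1 and subsite2 as hypothetical site names:
ddev auth ssh                       # load your SSH key into the DDEV agent first
ddev acquiapull subsite1 subsite2   # pull database and files for both
ddev acquiapull --skip-files subsite1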
Thanks to the guidance from @rfay, I set up a set of files in .ddev/providers, one for each country. Each one is structured like this:
environment_variables:
uri: mysite.be
db_name: belgium
auth_command:
command: |
<no changes>
db_pull_command:
command: |
# set -x # You can enable bash debugging output by uncommenting
ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
pushd /var/www/html/.ddev/.downloads >/dev/null
acli remote:drush -n ${ACQUIA_PROJECT_ID} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I created a custom command in .ddev/commands/host that has the contents of my script. There are more cases in the real script to cover all the countries.
#!/usr/bin/env bash
## Description: Refresh a database from Acquia and run post-db commands
## Usage: refresh-db [dbname]
## Example: "ddev refresh-db france"
site="$1"
case $site in
canada)
uri="mysite.ca"
;;
australia)
uri="mysite.com.au"
;;
belgium)
uri="mysite.be"
;;
brazil)
uri="mysite.com.br"
;;
*)
site="db"
uri="mysite.com"
;;
esac
ddev pull ${site} -y 2>/dev/null # suppress the "pull failed" message, since it didn't actually fail
ddev import-db --target-db=${site} --src=${DDEV_APPROOT}/.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
ddev drush --uri=${uri} -y pmu simplesamlphp_auth
ddev drush --uri=${uri} -y config-set system.performance css.preprocess 0
ddev drush --uri=${uri} -y config-set system.performance js.preprocess 0
I tried to handle the db import during the db_pull_command as suggested, but I couldn't get past database permission errors when importing a DB that I had not already imported using ddev import-db. However, with the custom command I can also incorporate the post-db-import steps that would normally only run against the default DB if done through config.yaml.
The other change I made was to move the project ID into the web environment settings in the global_config.yaml file. This way, if we want to change the environment we pull from, we just edit the project ID there and don't have to touch the provider files.
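For example, a sketch of that edit in ~/.ddev/global_config.yaml (ACQUIA_PROJECT_ID is the variable name my provider files expect):
web_environment:
  - ACQUIA_PROJECT_ID=mysite.dev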
I'm not experienced with contributing back to open source projects, but if this can be helpful to others I'd love to work with someone to turn it into a pull request on the documentation or wherever it belongs.
I'm going to go ahead and answer in general, but you can add a full answer when you get this sorted out. (I don't have access to an Acquia multisite.)
You're on the right track, but you can do all of this in the pull script. The problem you're having is that ddev just assumes a single database, and you have multiple.
Here's a strategy for your acquia.yaml:
Create all the databases. You can use mysql -e "CREATE DATABASE IF NOT EXISTS <dbname>;" on several lines or in a for loop.
Pull all the databases. You can do this with separate acli lines, or use a for loop.
Import the databases that aren't the primary db using the mysql command: mysql <dbname> < <dbname>.sql. Again, this can be a few lines or a for loop. (You can also just import the primary db; it will simply be re-imported by ddev, no harm done if it's not large.)
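A minimal sketch of that strategy in acquia.yaml, assuming two hypothetical site databases named site_a and site_b, the ${project_id} variable from the question's environment_variables block, and DDEV's default root/root MySQL credentials:
db_pull_command:
  command: |
    pushd /var/www/html/.ddev/.downloads >/dev/null
    for db in site_a site_b; do
      # create the target db so the import below can't fail on a missing schema
      mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${db};"
      acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${db} >${db}.sql
      # import the non-primary databases directly; ddev re-imports the primary one
      mysql -uroot -proot ${db} <${db}.sql
    done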
Thanks for the great question, and I hope you'll give a full answer here. Your answer could also be incorporated into https://ddev.readthedocs.io/en/stable/users/providers/acquia/ - you could do a PR there by clicking the pencil link at the upper right.
I have a Raspberry Pi connected to an Arduino via USB. The Arduino reads data from sensors and prints the data to the serial port. I have a server on the Pi that reads the serial port and pushes the data to a view. I also have a very simple deployment script that pushes my server application to the Pi and restarts the service. The deployment script looks like this:
USER_HOST="pi#192.168.178.34"
REMOTE_DIR="/home/pi/plantmonitor"
RAILS_DIR="/home/pi/plantmonitor/plantmonitorweb"
set -ue
cd plantmonitorweb
bundle exec rake tmp:clear
bundle exec rake assets:precompile
cd ..
rsync -rvuz ./ ${USER_HOST}:${REMOTE_DIR} --exclude='.git/' --exclude='log/' --exclude='tmp' --delete
ssh ${USER_HOST} 'sudo service plant restart'
echo "Deploy Successful!"
Here is my init.d script:
PROJECT=/home/pi/plantmonitor/plantmonitorweb
PIDFILE="${PROJECT}/tmp/pids/server.pid"
start() {
if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then
echo 'Service already running' >&2
return 1
fi
echo -n "Starting : "
export SENSOR=true
# export SECRET_KEY_BASE=
export RAILS_SERVE_STATIC_FILES=true
cd $PROJECT
bundle exec rails s -e production -d > /tmp/plantservice.log 2>&1
echo 'Service started'
}
stop() {
if [ ! -f "$PIDFILE" ] || ! kill -0 $(cat "$PIDFILE"); then
echo 'Service not running'
return 1
fi
echo 'Stopping service…'
kill -15 $(cat "$PIDFILE") && rm -f "$PIDFILE"
echo 'Service stopped'
}
After deployment, the data that I read from the serial port becomes corrupted. To fix the issue I have to restart the Pi, after which it can read the data normally again.
After a restart, the data looks like this:
cat /dev/ttyACM0
{"temperature": "24", "moisture": "10", "humidity": "50"}
After running the deployment script, the data looks as follows:
idity":3{"tidity":3{"tempe":37,"moemperatuisture":e":21,"hre":21}
I am very curious why the data I read from the serial port becomes corrupted after the service restarts.
I am trying to write a script that will send a notification email after a successful server restart. What is the best way to do this?
Weblogic 8.1
Probably not the best way, but assuming you are working in a Linux/Unix environment, you could try this script. It watches your WebLogic log file for a keyword (I chose "in RUNNING mode").
COUNTER=0
while [ $COUNTER -le 5 ]
do
grep "started in RUNNING mode" <full path and name of log file>
if [ $? -eq 0 ];
then
mail -s 'Server started' your_email@mail.com </dev/null
break
fi
COUNTER=`expr $COUNTER + 1`
sleep 6
done
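To use it, save the loop above to a file and launch it in the background right before triggering the restart; the file names here are hypothetical:
nohup ./notify-on-start.sh >/tmp/notify-on-start.log 2>&1 &
./startWebLogic.sh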
So I got tired of waiting for Emacs to load anew every time, and after consulting the Emacs Wiki I wrote myself an invocation script:
#!/bin/bash
# @file: /usr/local/bin/emacs
# @version: 1
server=/tmp/emacs${UID}/server
if [ ! -S ${server} ] ; then
/opt/emacs/bin/emacs --daemon
until [ -S ${server} ] ; do
sleep 1s
done
fi
/opt/emacs/bin/emacsclient -c "$@"
Immediately, however, it failed due to a stale socket (for unrelated reasons my emacs --daemon had been killed unexpectedly), so I wrote:
#!/bin/bash
# @file: /usr/local/bin/emacs
# @version: 2
server=/tmp/emacs${UID}/server
if ! /sbin/fuser ${server} 2> /dev/null ; then
/sbin/fuser -k ${server}
rm -f ${server}
fi
if [ ! -S ${server} ] ; then
/opt/emacs/bin/emacs --daemon
until [ -S ${server} ] ; do
sleep 1s
done
fi
/opt/emacs/bin/emacsclient -c "$@"
This worked, but working with ClearCase views I noticed a wrinkle:
On Unix, the ClearCase command:
cleartool setview myview-myuser
creates a subshell that has a modified file system hierarchy: several new mounts under /vobs/ that use MVFS and are visible only to that shell.
For each such new shell, the command /sbin/fuser ${server} returns 1 (an error) the first time my Emacs invocation script runs. Thus:
For version 1: There is only one daemon, but the Emacs clients are unable to see the mvfs mounts under /vobs/.
For version 2: There are several daemons all using the same ${server} socket.
Thus, my questions are: Is it OK to use version 2? If yes, how can it work if all of the daemons apparently are using the same ${server} socket? If no, what should I do to fix this?
Progress:
So I got an answer (see the answers below) to part of the question, and now I am stuck with the "how to fix it?" part:
I am looking into putting the ${server} socket under /vobs/ and thus letting ClearCase itself solve my problem. I only need to figure out whether and how Emacs lets me do that:
According to my /opt/emacs/share/emacs/23.2/lisp/server.el, the server-socket-dir is rooted at the value of the environment variable ${TMPDIR}, so I tried:
#!/bin/bash
# @file: /usr/local/bin/emacs
# @version: 3
[ "${CCVIEW}" ] && TMPDIR="/vbos/misc/tmp" || TMPDIR="/tmp"
export TMPDIR
function is_server_up() {
local server=${TMPDIR}/emacs${UID}/server
[ -e ${server} ] && /sbin/fuser ${server}
}
if ! is_server_up ; then
/opt/emacs/bin/emacs --daemon
until is_server_up ; do
sleep 1s
echo "DEBUG: sleeping"
done
fi
/opt/emacs/bin/emacsclient -c "$@"
But when running from a ClearCase view, I see:
Loading ~/.emacs.d/this-module.el (source)...
Loading ~/.emacs.d/this-module.el (source)...done
Loading ~/.emacs.d/that-module.el (source)...
Loading ~/.emacs.d/that-module.el (source)...done
... snip ...
Starting Emacs daemon.
ESC [ A
ESC
ESC [
ESC [ a
M-[ A is undefined
... and it never exits.
I also tried to patch server.el and use a different environment variable, but to no avail.
Just a note, really, but I would avoid using setview.
As you have noted, it creates a shell (with all its issues in terms of communication with daemons).
It is only for one view at a time: you always need to do a cleartool pwv to check which view you are currently working in when using the /vobs path.
I prefer using the full path of a dynamic view:
/view/myView/vobs/...
That way: no spawned shell, no ambiguity, no trouble.
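For example, a hypothetical invocation (myView and the paths are placeholders):
cleartool startview myView
emacs /view/myView/vobs/myvob/src/file.c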
OK, part of the mystery is solved:
Version 2 should not be used. It seems to work because:
/sbin/fuser -k ${server} does not actually kill the server process. Perhaps /sbin/fuser -k -SIGKILL ${server} would, but I haven't tried that. It could also be due to ClearCase's view magic (implemented by manipulation at the kernel level).
rm -f ${server} unlinks the file, but doesn't release the socket, because it is still in use by running processes.
Thus the last view to run the invocation script for the first time becomes the owner of the ${server} socket file, and subsequent invocations of Emacs use that socket and see the version files of that view.
Just imagine the hours of fun debugging this...
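A quick way to convince yourself of the unlink behaviour, assuming a netcat build with Unix-socket support:
nc -lU /tmp/demo.sock &                        # listener binds the socket file
rm -f /tmp/demo.sock                           # unlink the file...
kill -0 $! && echo "listener still running"    # ...but the process keeps the socket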
As for the "how to fix it?" part, I will take the cowardly way out and just revert to stand-alone Emacs in ClearCase views:
#!/bin/bash
# @file: /usr/local/bin/emacs
# @version: 4
if [ "${CCVIEW}" ] ; then
/opt/emacs/bin/emacs "$@"
exit $?
fi
function is_server_up() {
local server=/tmp/emacs${UID}/server
[ -e ${server} ] && /sbin/fuser ${server}
}
if ! is_server_up ; then
/opt/emacs/bin/emacs --daemon
until is_server_up ; do
sleep 1s
done
fi
/opt/emacs/bin/emacsclient -c "$@"