I have an SBT project that uses a pure-Java plugin to create a Debian package. The plugin is defined in plugins.sbt as such:
libraryDependencies += "org.vafer" % "jdeb" % "1.5" artifacts (Artifact("jdeb", "jar", "jar"))
In the SBT console, I execute this command:
debian:packageBin
This generates a .deb file, which I am attempting to install on an Ubuntu machine using dpkg -i <filename>.
The package installs. The libraries, configuration, etc. are placed in /usr/share/, which is what I expected. That folder also includes a bin folder with a script that starts the program directly. So far so good.
However, I want to start the program using service <projectname> start, which doesn't work. The same goes for /etc/init.d/<projectname> start. The output of the latter command is:
[ ok ] Starting undagrid-api-server (via systemctl): undagrid-api-server.service.
But the process isn't running when I check it with ps. Here are the contents of /etc/init.d/<projectname>:
#! /bin/bash
### BEGIN INIT INFO
# Provides:          <projectname>
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: <projectname>
### END INIT INFO

source /lib/init/vars.sh
source /lib/lsb/init-functions

# adding bashScriptEnvConfigLocation
[[ -f /etc/default/<projectname> ]] && . /etc/default/<projectname>

# $JAVA_OPTS used in $RUN_CMD wrapper
export JAVA_OPTS

PIDFILE=/var/run/<projectname>/running.pid

if [ -z "$DAEMON_USER" ]; then
    DAEMON_USER=<projectname>
fi
if [ -z "$DAEMON_GROUP" ]; then
    DAEMON_GROUP=<projectname>
fi

RUN_CMD="/usr/share/<projectname>/bin/<projectname>"

start_daemon() {
    log_daemon_msg "Starting" "<projectname>"
    [ -d "/var/run/<projectname>" ] || install -d -o "$DAEMON_USER" -g "$DAEMON_GROUP" -m755 "/var/run/<projectname>"
    start-stop-daemon --background --chdir /usr/share/<projectname> --chuid "$DAEMON_USER" --make-pidfile --pidfile "$PIDFILE" --startas "$RUN_CMD" --start -- $RUN_OPTS
    log_end_msg $?
}

stop_daemon() {
    log_daemon_msg "Stopping" "<projectname>"
    start-stop-daemon --stop --quiet --oknodo --pidfile "$PIDFILE" --retry=TERM/60/KILL/30
    log_end_msg $?
    rm -f "$PIDFILE"
}

case "$1" in
    start)
        start_daemon
        exit $?
        ;;
    stop)
        stop_daemon
        exit $?
        ;;
    restart|force-reload)
        stop_daemon
        start_daemon
        exit $?
        ;;
    status)
        status_of_proc -p "$PIDFILE" "$RUN_CMD" <projectname> && exit 0 || exit $?
        ;;
    *)
        log_daemon_msg "Usage: /etc/init.d/<projectname> {start|stop|restart|status}"
        ;;
esac

exit 0
With some simple debugging I have found out that if the command line argument is "start", the script never makes it past the second source statement: source /lib/lsb/init-functions
Systemctl gives me the following output:
<projectname>.service loaded active exited LSB: <projectname>
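For completeness, the generated unit can be inspected further with the standard systemd commands (unit name assumed to match the script name):
sudo systemctl status <projectname>.service
sudo journalctl -u <projectname>.service --no-pager | tail -n 50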
I have also run the script using bash -x but that produces quite a pile of output that I don't really know how to read.
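One way to make that trace manageable is to redirect it to a file and grep for the interesting calls; a rough sketch using the paths from above:
sudo bash -x /etc/init.d/<projectname> start 2> /tmp/init-trace.log
grep -nE 'init-functions|start-stop-daemon' /tmp/init-trace.log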
Anybody know what's going on here? The entire point of using a packager and installing it as a package is to avoid headaches like these...
I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all of its dependencies in offline mode? Thanks in advance for your answers.
I have recently gone through this procedure, and this is what worked for me on CentOS 7:
You need an internet-connected Linux server to download the dependencies; a virtual machine or anything else will do.
On the online server, create a .sh file and copy the script below into it (I named it download_dependencies.sh).
#!/bin/bash
# This script is used to fetch external packages that are not available in the standard Linux distribution
# Example: ./fetch-external-dependencies ubuntu18.04
# The script will create nms-dependencies-ubuntu18.04.tar.gz in the local directory, which can be copied
# to the target machine; the packages inside can then be installed manually.
set -eo pipefail

# current dir
PACKAGE_PATH="."
mkdir -p "$PACKAGE_PATH"

declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"

declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"

CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"

declare -A CLICKHOUSE_PACKAGES
# for ClickHouse the package names are identical across distributions,
# so we use the plain ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}

declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"

download_packages() {
    local target_distribution=$1
    if [ -z "$target_distribution" ]; then
        echo "$0 - no target distribution specified"
        exit 1
    fi
    mkdir -p "${PACKAGE_PATH}/${target_distribution}"
    # just in case, delete all files in the target dir (the glob must stay outside the quotes)
    rm -f "${PACKAGE_PATH}/${target_distribution}"/*
    readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
    readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
    echo "Downloading ClickHouse signing keys"
    curl -fs "${CLICKHOUSE_KEY}" --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
    echo "Downloading Nginx signing keys"
    curl -fs "${NGINX_KEY}" --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
    for package_file in "${clickhouse_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    for package_file in "${nginx_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${NGINX_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
    tar -zcf "$bundle_file" -C "${PACKAGE_PATH}/${target_distribution}" .
    echo "Bundle file saved as $bundle_file"
}

target_distribution=$1
if [ -z "$target_distribution" ]; then
    echo "Usage: $0 target_distribution"
    echo "Supported target distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi
# check if the target distribution is supported
if [ -z "${CLICKHOUSE_REPO[$target_distribution]}" ]; then
    echo "Target distribution is not supported."
    echo "Supported distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi

download_packages "${target_distribution}"
Then, in the same directory that contains download_dependencies.sh, run the command below:
bash download_dependencies.sh <your linux version>
In my case I ran the following (run it with no argument to see the supported options):
bash download_dependencies.sh centos7
It should start downloading, and when it finishes you should see nms-dependencies-centos7.tar.gz in your directory.
Copy that .tar.gz file to your offline target.
Now, on the target machine, go to the directory you copied the file into and run:
tar -zxvf nms-dependencies-centos7.tar.gz
sudo yum install *.rpm
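If yum complains about package signatures, or you simply want them verified, also import the signing keys the script bundled into the archive (file names assumed from the download script above):
sudo rpm --import ./clickhouse-key.gpg ./nginx-key.gpg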
After installation you can start ClickHouse and Nginx using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your Nginx service should now be running!
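If you also want both services to come back after a reboot, enable them as well (standard systemctl usage):
sudo systemctl enable clickhouse-server nginx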
You can also download the tar file on another system and copy it over.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc
I am getting the error below during platform installation:
"Required libaio package is not found. ..."
However, the package in question is already installed:
rpm -q libaio
libaio-0.3.107-10.el6.x86_64
Here is output from the installation script:
./platform-setup-x64-linux-4.4.3.10393.sh
Unpacking JRE ...
Preparing JRE ...
Starting Installer ...
May 30, 2018 6:51:23 PM java.util.prefs.FileSystemPreferences$2 run
INFO: Created system preferences directory in java.home.
Verifying if the libaio package is installed. /opt/appdynamics/platform/installer/checkLibaio.sh
I got this too... I was running from command-line as a non-root user:
./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile
I added the shell trace (-x) switch and redirected the output to a log, like so:
bash -x ./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile > install.log 2>&1
If we tail the last bit of that log, we get this response in debug mode:
Verifying if the libaio package is installed. /opt/appdynamics/platform/installer/checkLibaio.sh
Required libaio package is not found. For instructions on installing
the missing package, refer to https://docs.appdynamics.com/display/PRO44/Enterprise+Console+Requirements
and the script checkLibaio.sh is not left behind, so you cannot easily figure out what it checks. I also have a RedHat variant with the packages installed:
rpm -qa | grep libaio
libaio-0.3.109-13.el7.x86_64
Strangely enough, I have one VM from the same image that installs the distribution just fine and one that will not, and the broken one is where I really want to install it. On that machine I ran another command from the expanded view in install.log, which was a really long JVM command line. Anyway, I got it to work, and then made a looping script to retrieve the check file, because AppDynamics for some reason removes it before you can look at it.
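A rough sketch of that retrieval loop (not my exact script; the installer path is taken from the log above):
#!/bin/sh
# Poll until the installer drops checkLibaio.sh, then copy it before it is deleted again.
SRC=/opt/appdynamics/platform/installer/checkLibaio.sh
while [ ! -f "$SRC" ]; do
    sleep 1
done
cp "$SRC" /tmp/checkLibaio.captured.sh
echo "Captured to /tmp/checkLibaio.captured.sh"
The check script it captured looks like this: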
#!/bin/sh
# Script used to check if the machine has libaio on it or not.
cat /dev/null > /opt/appdynamics/platform/installer/.libaio_status
chmod 777 /opt/appdynamics/platform/installer/.libaio_status
# Check if the dpkg or rpm command exists before running it.
command -v dpkg >/dev/null 2>&1
OUT=$?
if [ $OUT -eq 0 ];
then
    if [ `dpkg -l | grep -i libaio* | wc -l` -gt 0 ];
    then
        echo SUCCESS >> /opt/appdynamics/platform/installer/.libaio_status
        exit 0
    fi
else
    command -v rpm >/dev/null 2>&1
    OUT=$?
    if [ $OUT -eq 0 ];
    then
        if [ `rpm -qa | grep -i libaio* | wc -l` -gt 0 ];
        then
            echo SUCCESS >> /opt/appdynamics/platform/installer/.libaio_status
            exit 0
        fi
    fi
fi
echo FAILURE >> /opt/appdynamics/platform/installer/.libaio_status
exit 1
If you run this check script, like me, on the faulty platform, what you will discover is that your version of Linux has both:
dpkg
and
rpm
installed. To work around this, temporarily rename one of these two package-manager executables so that the installer's shell cannot find it.
The most common case here is that you are running a RedHat variant where someone chose to install dpkg (for who knows what reason). If you prefer, remove that package instead, and the install should succeed.
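In other words, something along these lines should get you past the check (a sketch only; the dpkg path is an assumption, confirm it with command -v dpkg, and the installer arguments are the ones from above):
# temporarily hide dpkg so the installer falls through to the rpm check
sudo mv /usr/bin/dpkg /usr/bin/dpkg.hidden
./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile
sudo mv /usr/bin/dpkg.hidden /usr/bin/dpkg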
I have a Meteor application with Circle CI as continuous integration service.
Facebook Flow is running locally with the following .flowconfig:
[ignore]
.*/node_modules/.*
[options]
module.name_mapper='^\/\(.*\)$' -> '<PROJECT_ROOT>/\1'
module.name_mapper='^meteor\/\(.*\):\(.*\)$' -> '<PROJECT_ROOT>/.meteor/local/build/programs/server/packages/\1_\2'
module.name_mapper='^meteor\/\(.*\)$' -> '<PROJECT_ROOT>/.meteor/local/build/programs/server/packages/\1'
In CI I get errors like:
client/main.jsx:4
4: import { Meteor } from 'meteor/meteor';
^^^^^^^^^^^^^^^ meteor/meteor. Required module not found
Flow seems not to find my modules, and the rewrite rules do not apply. With SSH access to Circle CI I found out that the <PROJECT_ROOT>/.meteor/local directory is not present.
Once I run meteor on the CI machine the directory will appear.
Problem: if I run meteor, the Meteor server starts up and my test times out.
As far as I can see, I need to either
Adapt my .flowconfig, or
Find a way to get Meteor to create the directory without running meteor, or
Find a way to kill the meteor process once the server is running.
I went with the third option:
bbaja42 shared a script that saves the output of a program and terminates the program once a keyword is reached.
Adapted to my case I have two files:
ci-tests.sh
#!/bin/sh
# Check if the output directory exists. Flow needs the modules there.
if [ ! -d ".meteor/local/build/programs/server/packages" ]; then
    echo "Meteor build directory does not exist. Starting Meteor."
    # Run Meteor so the output directory is built.
    ./build-and-kill-meteor.sh
else
    echo "Meteor build directory exists"
fi

./node_modules/.bin/flow --json
if [ $? -ne 0 ] ; then
    exit 1
fi
build-and-kill-meteor.sh
#!/bin/bash
OUTPUT=/tmp/meteor-launch.log
PROGRAM=meteor

$PROGRAM > $OUTPUT &
PID=$!
echo Program is running under pid: $PID

# Every 10 seconds, check requirements
while true; do
    tail -1 $OUTPUT
    grep "App running at: http://localhost" $OUTPUT
    if [ $? -eq 0 ] ; then
        break
    fi
    sleep 10
done

kill $PID || echo "Killing process with pid $PID failed, try manual kill with -9 argument"
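If you do not need to key off a specific log line, a cruder alternative sketch is to give Meteor a fixed window with coreutils timeout (the 300-second budget here is an arbitrary assumption):
# let meteor build for at most 5 minutes, then send it SIGTERM; ignore the resulting non-zero exit
timeout 300 meteor > /tmp/meteor-launch.log || true
The grep-based loop above is still preferable on slow CI machines, since it stops as soon as the server is actually up rather than waiting out the full budget.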
I ran into the same issue and came up with my own derivation based on the OP's answer. I run this script on every CI build to ensure that Meteor will always install any new atmosphere packages that I'm shipping with.
#!/bin/bash
# Install meteor
if [ -d ~/.meteor ]; then sudo ln -s ~/.meteor/meteor /usr/local/bin/meteor; fi
if [ ! -e $HOME/.meteor/meteor ]; then curl https://install.meteor.com | sh; fi

OUTPUT=/tmp/meteor-launch.log
PROGRAM=meteor

$PROGRAM > $OUTPUT &
PID=$!
echo Program is running under pid: $PID

# Start meteor to install atmosphere packages
while true; do
    tail -1 $OUTPUT
    # Stop once meteor reports the (expected) crash; by then the atmosphere packages have been installed
    grep "Your application is crashing." $OUTPUT
    if [ $? -eq 0 ] ; then
        break
    fi
    sleep 10
done

kill $PID || echo "Killing process with pid $PID failed, try manual kill with -9 argument."
I have a Makefile which builds a program called monitor:
fo/monitor: fo/monitor.c fo/inotify.c
	(cd fo ; $(MAKE) monitor)
I have two types of system that I can run my make on, and I only wish to have one installer.
So I would like to add an if statement that checks for a file and, if it exists, builds the monitor.
fo/monitor:
	if [ -f path/to/file/exists ]; \
	then \
		fo/monitor.c fo/inotify.c \
		(cd fo ; $(MAKE) monitor) \
	else \
		echo "" >/dev/null \
	fi \
The problem is that when I attempt to run the Makefile, it falls over because it does not like this code. Can anyone point me in the right direction, please?
fo/monitor.c and fo/inotify.c have to be added to the target's dependencies, not put inside the if statement. You can also use make's -C option instead of a subshell. And you don't need the else branch that echoes nothing into /dev/null.
This should be good:
fo/monitor: fo/monitor.c fo/inotify.c
	if [ -f path/to/file/exists ]; then \
		$(MAKE) -C fo monitor; \
	fi
Another way is to depend on that target only if path/to/file/exists exists:
# add fo/monitor dependency only if path/to/file/exists exists
all : $(shell test -e path/to/file/exists && echo "fo/monitor")
fo/monitor: fo/monitor.c fo/inotify.c
	${MAKE} -C ${@D}
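A third variant, if you would rather make the decision when make parses the file instead of shelling out, is GNU make's $(wildcard) function. A sketch, reusing the same path/to/file/exists placeholder (recipe lines must be indented with a real tab):
# add fo/monitor as a prerequisite of `all` only when the marker file is present at parse time
all:
ifneq ($(wildcard path/to/file/exists),)
all: fo/monitor
endif

fo/monitor: fo/monitor.c fo/inotify.c
	$(MAKE) -C fo monitor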
I want to write a .sh script that will kill a process started by the Maven exec plugin. At the moment I'm trying to find the PID by start time, but this approach does not guarantee that there won't be a collision with some other process.
Could someone say whether there is a way to get the PID of, or kill, a process started by mvn exec?
Thanks.
The best approach is to start the Maven process using start-stop-daemon. I use a bash script adapted from a Tomcat init.d script to start/stop Java or Maven programs as daemons. Here is a snippet; please customize as needed:
#!/bin/sh
# log_daemon_msg & co. come from the LSB helpers
. /lib/lsb/init-functions

case "$1" in
  start)
    log_daemon_msg "Starting $DESC" "$NAME"
    # start only if there is no pidfile yet; adjust or reconstruct this guard for your setup
    if [ ! -f "$MAVEN_PID" ]; then
        start-stop-daemon --start -b -u "$MAVEN_USER" -g "$MAVEN_GROUP" \
            -c "$MAVEN_USER" -d "$MAVEN_HOME" -p "$MAVEN_PID" -m \
            -x mvn -- exec:java -Dexec.mainClass="com.company.Application" \
            -Dexec.classpathScope=runtime -Dexec.args="/home/user 192.168.1.1" 2>&1
        status="$?"
        set +a -e
        log_end_msg "$status"
    else
        log_progress_msg "(already running)"
        log_end_msg 0
    fi
    ;;
  stop)
    log_daemon_msg "Stopping $DESC" "$NAME"
    set +e
    if [ -f "$MAVEN_PID" ]; then
        start-stop-daemon --stop --pidfile "$MAVEN_PID" \
            --user "$MAVEN_USER" \
            --retry=TERM/20/KILL/5 >/dev/null
        status="$?"
        if [ "$status" -eq 1 ]; then
            log_progress_msg "$DESC is not running but pid file exists, cleaning up"
        elif [ "$status" -eq 3 ]; then
            PID="`cat $MAVEN_PID`"
            log_failure_msg "Failed to stop $NAME (pid $PID)"
            exit 1
        fi
        rm -f "$MAVEN_PID"
        rm -rf "$JVM_TMP"
    else
        log_progress_msg "(not running)"
    fi
    log_end_msg 0
    set -e
    ;;