Missing drush in OpenShift Drupal module - drupal

I have SSH'd into my OpenShift Drupal install with the intent of changing my Drupal admin password; however, the drush tool does not seem to be installed:
[drupal-.rhcloud.com 56f94fb97628e12ab700005f]> drush --help
bash: drush: command not found
[drupal-.rhcloud.com 56f94fb97628e12ab700005f]>
I am using:
Drupal 7
Is there something that needs to be done to get drush installed?

It seems the OpenShift drush does not work for everybody. This happens because the cartridge ships an old PHP version that no longer works with the current Composer, so you need to create a DIY app in OpenShift and manually install drush and a newer PHP version (PHP >= 5.5.0).
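Before going through all of this, a quick way to confirm that the gear's PHP really is too old for the current Composer (a minimal check against the 5.5.0 threshold mentioned above):
php -v
php -r 'exit(version_compare(PHP_VERSION, "5.5.0", ">=") ? 0 : 1);' && echo "PHP is new enough" || echo "PHP is too old for current Composer"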
Installing drush on OpenShift (rhc)
You can install drush manually with the following commands:
# download Composer into the gear (create srv/ if it does not exist yet)
mkdir -p ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer
curl -sS https://getcomposer.org/installer | php -- --install-dir=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer
cd /tmp
rm -rf tt
mkdir tt
cd tt
#wget http://ftp.drupal.org/files/projects/drush-7.x-5.9.tar.gz
wget https://github.com/drush-ops/drush/archive/master.zip && unzip master.zip
rm master.zip
mv * drush
chmod u+x drush/drush
#tar xzf drush-7.x-5.9.tar.gz && rm drush-7.x-5.9.tar.gz && cd drush && mv drush drush_my
mkdir -p ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush
mv /tmp/tt/drush/* ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush && cd ../..
rm -rf tt
export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush:$PATH
cd ~/app-root/runtime/repo/.openshift/action_hooks
echo "export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush:$PATH
#export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer:$PATH" >> ~/app-root/runtime/repo/.openshift/action_hooks/start
chmod 755 start
echo "export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush:$PATH
export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer:$PATH
alias drush='${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush/drush'
alias drush='/opt/rh/php54/root/usr/bin/php ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush/drush.php'" >> ~/app-root/data/.bash_profile
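# Reload the profile in the current shell so the new PATH and aliases take effect
# (assumption: the gear sources ~/app-root/data/.bash_profile on login):
source ~/app-root/data/.bash_profile
echo $PATH | tr ':' '\n' | grep srv/drush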
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush
php -r "readfile('https://getcomposer.org/installer');" | php
mv composer.phar composer.phar0
php composer.phar0 install
php composer.phar0 update
#composer config --global bin-dir /usr/local/bin
#composer config --global bin-dir /opt/rh/php54/root/usr/bin
php composer.phar0 config --global vendor-dir ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer
#php composer.phar0 install   # already run above
php ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/composer/composer.phar --no-interaction --no-ansi --no-scripts --optimize-autoloader --working-dir=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush install
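# optional sanity check, assuming the Composer steps above finished cleanly:
drush --version || php ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush/drush.php --version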
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush
# Drush settings
cp drush.php drush.php0
echo "\$options['uri'] = \$_ENV['OPENSHIFT_APP_DNS'];
\$options['root'] = \$_ENV['OPENSHIFT_REPO_DIR'].'php';" >> drush.php
if [[ -f ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush/drush.php ]]; then
echo "\$repo_top = getcwd().'/..';
\$options['config'] = \$repo_top . '/drush/drushrc.php'; " >> drush.php
else
echo "<?php
\$repo_top = getcwd().'/..';
\$options['config'] = \$repo_top . '/drush/drushrc.php'; " >> drush.php
fi
cat << EOF >>drushrc.php
<?php
ini_set('memory_limit', '256M');
if (array_key_exists('OPENSHIFT_APP_NAME', \$_SERVER)) {
\$src = \$_SERVER;
} else {
\$src = \$_ENV;
}
\$options['uri'] =\$src['OPENSHIFT_APP_DNS'];
\$options['root'] =\$src['OPENSHIFT_REPO_DIR'].'php';
\$options['db-url']=\$src['OPENSHIFT_MYSQL_DB_URL'].\$src['OPENSHIFT_APP_DNS'];
\$options['backup-dir'] = '/tmp';
?>
EOF
echo " \$options['backup-dir'] = '/tmp';">> drushrc.php
#nano drush.php
drush status
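# With drush bootstrapped, the original goal of this question (resetting the Drupal admin
# password) is a single command; "admin" and NEW_PASSWORD below are placeholders:
cd ${OPENSHIFT_REPO_DIR}php
drush user-password admin --password="NEW_PASSWORD"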
#install mysql
: <<'end_long_comment'
#
cd /tmp
wget http://wiki.diahosting.com/down/lnmp/mysql-5.1.46.tar.gz
nohup sh -c "./configure --prefix=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql --enable-assembler --with-charset=utf8 --enable-thread-safe-client --with-extra-charsets=all --with-big-tables &&
make && make install"> $OPENSHIFT_LOG_DIR/mysql_install.log /dev/null 2>&1 & tail -f $OPENSHIFT_LOG_DIR/mysql_install.log
chown -R mysql:mysql ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql
cp support-files/my-medium.cnf /etc/my.cnf
sed -i 's#\[mysqld\]#\[mysqld\]\nbasedir=/usr/local/mysql\ndatadir=/var/lib/mysql\n#' /etc/my.cnf
sed -i 's#log-bin=mysql-bin#\#log-bin=mysql-bin#' /etc/my.cnf
sed -i 's#binlog_format=mixed#\#binlog_format=mixed#' /etc/my.cnf
${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/bin/mysql_install_db --basedir=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql --datadir=/var/lib/mysql --user=mysql
cp ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/share/mysql/mysql.server /etc/init.d/mysqld
chmod 755 /etc/init.d/mysqld
/etc/init.d/mysqld start
${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/bin/mysqladmin -u root password $myrootpwd
chkconfig mysqld on
#ln -s ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/bin/myisamchk /usr/bin/
#ln -s ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/bin/mysql /usr/bin/
#ln -s ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/mysql/bin/mysqldump /usr/bin/
end_long_comment
cd
export pass=$OPENSHIFT_MYSQL_DB_PASSWORD
export user=$OPENSHIFT_MYSQL_DB_USERNAME
mysql -u $OPENSHIFT_MYSQL_DB_USERNAME -p$OPENSHIFT_MYSQL_DB_PASSWORD -h $OPENSHIFT_MYSQL_DB_HOST -P $OPENSHIFT_MYSQL_DB_PORT
DROP DATABASE drupal2;
CREATE DATABASE drupal2;
#CREATE USER druser@localhost;
#CREATE USER druser2@$OPENSHIFT_MYSQL_DB_HOST;
CREATE USER 'druser'@'$OPENSHIFT_MYSQL_DB_HOST' IDENTIFIED BY 'druser';
#CREATE USER 'juddi'@'$OPENSHIFT_MYSQL_DB_HOST' IDENTIFIED BY 'juddi';
#SET PASSWORD FOR druser@localhost = PASSWORD("password");
SET PASSWORD FOR 'druser'@'$OPENSHIFT_MYSQL_DB_HOST' = PASSWORD("password");
#GRANT ALL PRIVILEGES ON drupal2.* TO druser@localhost IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON drupal2.* TO '$user'@'$OPENSHIFT_MYSQL_DB_HOST' IDENTIFIED BY '$pass';
#GRANT ALL PRIVILEGES ON drupal.* TO '$OPENSHIFT_MYSQL_DB_USERNAME'@'$OPENSHIFT_MYSQL_DB_HOST' IDENTIFIED BY '$OPENSHIFT_MYSQL_DB_PASSWORD';
FLUSH PRIVILEGES;
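# The SQL above is typed at the interactive mysql prompt, where the shell variables do not
# expand; a non-interactive sketch of the same statements, fed through a heredoc so the
# OPENSHIFT_MYSQL_* variables expand before mysql sees them:
mysql -u "$OPENSHIFT_MYSQL_DB_USERNAME" -p"$OPENSHIFT_MYSQL_DB_PASSWORD" \
      -h "$OPENSHIFT_MYSQL_DB_HOST" -P "$OPENSHIFT_MYSQL_DB_PORT" <<SQL
DROP DATABASE IF EXISTS drupal2;
CREATE DATABASE drupal2;
GRANT ALL PRIVILEGES ON drupal2.* TO '$OPENSHIFT_MYSQL_DB_USERNAME'@'$OPENSHIFT_MYSQL_DB_HOST' IDENTIFIED BY '$OPENSHIFT_MYSQL_DB_PASSWORD';
FLUSH PRIVILEGES;
SQL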
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php
chmod 755 . -R
rm -rf *
if false; then   # disabled block, kept for reference; the same steps run below via nohup
drush dl openpublic #--drupal-project-rename=folder_name
mv open*/* ./
cd pro*/openpu*
drush make --prepare-install build-openpublic.make openpublic
rm -rf ~/app-root/data/sites/default/settings.php
fi
echo " \$options['backup-dir'] = '/tmp';">> ~/.drush/drushrc.php
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php
nohup sh -c "export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush:$PATH && drush dl openpublic"> $OPENSHIFT_LOG_DIR/drush_site_install_1_1.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/drush_site_install_1_1.log
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php && mv */* './' && cd pro*/openpu*
nohup sh -c "export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/drush:$PATH && drush make --prepare-install build-openpublic.make openpublic &&\
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php &&\
drush site-install openpublic --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/drupal2 --site-name=${OPENSHIFT_APP_NAME} --account-name='ss' --account-pass='ss' --yes"> $OPENSHIFT_LOG_DIR/drush_site_install_1_2.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/drush_site_install_1_2.log
#drush site-install weebpal --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/$OPENSHIFT_APP_NAME --site-name=${OPENSHIFT_APP_NAME} --account-name='ss' --account-pass='ss' --account-mail='ss3@elec-lab.tk' --site-mail='ss3@elec-lab.tk' --yes
#nohup sh -c "drush site-install themebrain_profile --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/drupal2 --site-name=${OPENSHIFT_APP_NAME} --account-name='ss' --account-pass='ss' --account-mail='ss3@elec-lab.tk' --site-mail='ss3@elec-lab.tk' --yes ">$OPENSHIFT_LOG_DIR/drush_site_install_1_2.log /dev/null 2>&1 & tail -f $OPENSHIFT_LOG_DIR/drush_site_install_1_2.log
#drush site-install opendeals --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/$OPENSHIFT_APP_NAME --site-name=${OPENSHIFT_APP_NAME} --account-name='ss' --account-pass='ss' --account-mail=ss3@elec-lab.tk --yes
#drush site-install openpublic --db-url=mysql://druser:password@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/drupal2 --site-name=${OPENSHIFT_APP_NAME} --account-name='ss' --account-pass='ss' --yes
#drush site-install openpublic --site-name=${OPENSHIFT_APP_NAME} --account-pass=$admin_pwd --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/$OPENSHIFT_APP_NAME --yes
#mysql -u $OPENSHIFT_MYSQL_DB_USERNAME -h $OPENSHIFT_MYSQL_DB_HOST drupal <
########DONE ##################
echo " \$options['backup-dir'] = '/tmp';">> ~/.drush/drushrc.php
### drush backup database###
drush sql-dump > /tmp/database-backup.sql
### drush restore database###
mysql -u $OPENSHIFT_MYSQL_DB_USERNAME -p$OPENSHIFT_MYSQL_DB_PASSWORD -h $OPENSHIFT_MYSQL_DB_HOST -P $OPENSHIFT_MYSQL_DB_PORT $OPENSHIFT_APP_NAME < database-backup.sql
mysqldump -u USERNAME -p'PASSWORD' DATABASENAME > /path/to/backup_dir/database-backup.sql
drush sql-cli < ~/my-sql-dump-file-name.sql
drush bam-backup
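# A matching restore for the drush sql-dump backup above (a sketch; drops the current tables
# first, then replays the dump from /tmp):
drush sql-drop --yes
drush sql-cli < /tmp/database-backup.sql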
####################################
nohup sh -c " wget -P ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php --mirror --user=u220290147 --password=ss123456 ftp://93.188.160.83:21/"> $OPENSHIFT_LOG_DIR/python_modules_install_1_1.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/python_modules_install_1_1.log
cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/repo/php/*
nohup sh -c "zip -r elec-lab.zip . "> $OPENSHIFT_LOG_DIR/zip.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/zip.log
# /tmp/tmp/tb/sites/all/modules  ->  ~/app-root/data/sites/all/modules  (moved by the commands below)
mkdir ~/app-root/data/sites/all/libraries
mv -n /tmp/tmp/tb/sites/all/libraries/* ~/app-root/data/sites/all/libraries
mkdir ~/app-root/data/sites/all/themes
mv -n /tmp/tmp/tb/sites/all/themes/* ~/app-root/data/sites/all/themes
mv -n /tmp/tmp/tb/sites/all/* ~/app-root/data/sites/all/
mv -n /tmp/tmp/tb/sites/* ~/app-root/data/sites/
mv -n /tmp/tmp/tb/sites/all/modules/* ~/app-root/data/sites/all/modules
mkdir ~/app-root/data/downloads/drupal-7.34/profiles/themebrain_profile
mv -n /tmp/tmp/tb/profiles/themebrain_profile/* ~/app-root/data/downloads/drupal-7.34/profiles/themebrain_profile
mv ~/app-root/runtime/repo/.openshift/install_profiles/standard ~/app-root/runtime/repo/.openshift/install_profiles/standard1
mkdir ~/app-root/runtime/repo/.openshift/install_profiles/standard
mv -n /tmp/op/openpublic-7.x-1.x-dev/profiles/openpublic/* ~/app-root/runtime/repo/.openshift/install_profiles/standard
chmod 755 ~/app-root/data/sites/default/settings.php
rm -rf ~/app-root/data/sites/default/settings.php
chmod 755 ~/app-root/runtime/repo/.openshift/action_hooks/deploy
nohup sh -c "./app-root/runtime/repo/.openshift/action_hooks/deploy "> $OPENSHIFT_LOG_DIR/deploy.log /dev/null 2>&1 & tail -f $OPENSHIFT_LOG_DIR/deploy.log
#tail -f $OPENSHIFT_LOG_DIR/deploy.log
#nohup sh -c "wget http://dl1.sarzamindownload.com/sdlftpuser/92/07/10/Android.Bootcamp_Part2.rar "> $OPENSHIFT_LOG_DIR/zip2.log /dev/null 2>&1 &
#tail -f $OPENSHIFT_LOG_DIR/zip2.log
rm -f drush_download.py
cat <<'EOF' >> drush_download.py
import subprocess
import ast
# full module list kept for reference; the shorter list below overrides it
st1='"Nodeblock, Follow, Securepages, Addthis, Twitter_pull, Comment_notify, Context_field, Entity_autocomplete, Views_boxes, Delta, Delta_ui, Context_condition_admin_theme, Context_breadcrumb_current_page, Context_bool_field, Nodeconnect, Openpublic_splash, Phase2_profile, Openpublic_breaking_news, Openpublic_comments, Openpublic_base_fields, Openpublic_defaults, Openpublic_home_page_feature, Openpublic_most_popular, Openpublic_person, Openpublic_person_leadership, Openpublic_site_page, Openpublic_webform, Openpublic_editors_choice, Openpublic_captcha, Openpublic_media_room, Openpublic_menu, Openpublic_menu_utility, Openpublic_menu_footer, Openpublic_pages, Openpublic_accessibility, Openpublic_filters, Openpublic_comments_default, Openpublic_webform_defaults"'
st1='"Addthis, Openpublic_splash, Phase2_profile, Openpublic_breaking_news, Openpublic_comments, Openpublic_base_fields, Openpublic_defaults, Openpublic_home_page_feature, Openpublic_most_popular, Openpublic_person, Openpublic_person_leadership, Openpublic_site_page, Openpublic_webform, Openpublic_editors_choice, Openpublic_captcha, Openpublic_media_room, Openpublic_menu, Openpublic_menu_utility, Openpublic_menu_footer, Openpublic_pages, Openpublic_accessibility, Openpublic_filters, Openpublic_comments_default, Openpublic_webform_defaults"'
st1=st1.lower()
st1=st1.replace(',',"','").replace('"',"'")
st2='"['+st1+']"';st2=st2.replace('"','')
ss=ast.literal_eval(st2)
#print ss
for module in ss:
    try:
        st='drush dl '+module+' -y '
        print st
        awk_sort = subprocess.Popen(st, stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
        awk_sort.wait()
        output = awk_sort.communicate()[0]
        print output.rstrip()
    except:
        print 'module '+module+' could not be installed !!!'
#print "END"
EOF
#python drush_download.py
drush site-install standard --site-name=${OPENSHIFT_APP_NAME} --account-pass=$admin_pwd --db-url=mysql://$OPENSHIFT_MYSQL_DB_USERNAME:$OPENSHIFT_MYSQL_DB_PASSWORD@$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/$OPENSHIFT_APP_NAME --yes
nohup sh -c "python drush_download.py"> $OPENSHIFT_LOG_DIR/drush_download.log /dev/null 2>&1 &
tail -f $OPENSHIFT_LOG_DIR/drush_download.log
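# drush dl only downloads the projects; they still need to be enabled afterwards.
# A hedged follow-up using module names from the list in drush_download.py:
drush en -y addthis openpublic_defaults openpublic_pages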
#nohup sh -c "zip -rT9 ferdowsi-elec-labs_tk.zip '${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/tmp/.'"> $OPENSHIFT_LOG_DIR/python_ftp_sync_download.log /dev/null 2>&1 &
#nohup sh -c "cd ${OPENSHIFT_HOMEDIR}/app-root/runtime/srv/tmp/ && zip -rT9 ferdowsi-elec-labs_tk.zip ."> $OPENSHIFT_LOG_DIR/python_ftp_sync_download.log /dev/null 2>&1 &
#tail -f $OPENSHIFT_LOG_DIR/python_ftp_sync_download.log

Related

wso2 API Manager AutoScaling Group Not Creating

I've been trying to use the stock templates from the wso2 website to deploy wso2 to AWS. The CloudFormation stack fails to create because the auto scaler fails to create.
I checked the EC2 instances and the actual instance is running and healthy.
I SSH'ed to the instance and ran:
grep -ni 'error\|failure' $(sudo find /var/log -name cfn-init\* -or -name cloud-init\*)
to check the log files for errors or failures. I didn't find any.
I then tried to run:
/usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
from the correct instance. I filled in the correct information manually when I ran the command on the instance. I pulled this command from the YAML file from the wso2 website. This command returned an Access Denied error for the stack.
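For reference, running it by hand means replacing the CloudFormation pseudo parameters with literal values, e.g. (hypothetical stack name and region):
/usr/bin/cfn-signal -e 0 --stack my-wso2-apim-stack --resource WSO2MINode1AutoScalingGroup --region us-east-1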
Any help would be greatly appreciated. I feel like I'm overlooking something simple. I've included the LaunchConfiguration and the template for the Auto Scaling group below in case that's useful. Happy to provide other information.
WSO2MINode1LaunchConfiguration:
  Type: 'AWS::AutoScaling::LaunchConfiguration'
  Properties:
    ImageId: !FindInMap
      - WSO2APIMAMIRegionMap
      - !Ref 'AWS::Region'
      - !Ref OperatingSystem
    InstanceType: !Ref WSO2InstanceType
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: '20'
          VolumeType: gp2
          DeleteOnTermination: 'true'
    KeyName: !Ref KeyPairName
    SecurityGroups:
      - !Ref WSO2MISecurityGroup
    UserData: !Base64
      'Fn::Sub': |
        Content-Type: multipart/mixed; boundary="//"
        MIME-Version: 1.0
        --//
        Content-Type: text/cloud-config; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="cloud-config.txt"
        #cloud-config
        cloud_final_modules:
        - [scripts-user, always]
        --//
        Content-Type: text/x-shellscript; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="userdata.txt"
        #!/bin/bash
        exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
        export PATH=~/.local/bin:$PATH
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          export DEBIAN_FRONTEND=noninteractive
          apt-get update
          apt install -y puppet nfs-common
          apt install -y python-pip
          apt install -y python3-pip
          pip3 install boto3
          pip install boto3
          sed -i '/\[main\]/a server=puppet' /etc/puppet/puppet.conf
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          yum install -y epel-release zip unzip nfs-utils
          yum install -y python-pip
          pip install boto3
          rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
          yum install -y puppet-agent
          echo $'[main]\nserver = puppet\ncertname = agent3\nenvironment = production\n\runinterval = 1h' > /etc/puppetlabs/puppet/puppet.conf
        fi
        pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
        export PuppetmasterIP=${PuppetMaster.PrivateIp}
        echo "$PuppetmasterIP puppet puppetmaster" >> /etc/hosts
        export MI_HOST=${WSO2APIMLoadBalancer.DNSName}
        export MI_PORT=8290
        service puppet restart
        sleep 150
        export FACTER_profile=mi
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          puppet agent -vt >> /var/log/puppetlog.log
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /opt/puppetlabs/bin/puppet agent -vt >> /var/log/puppetlog.log
        fi
        sleep 30
        service puppet stop
        sh /usr/lib/wso2/wso2am/4.1.0/wso2mi-4.1.0/bin/micro-integrator.sh start
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          echo "/usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}" >> /home/ubuntu/cfn-signal.txt
          /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        echo 'export HISTTIMEFORMAT="%F %T "' >> /etc/profile.d/history.sh
        cat /dev/null > ~/.bash_history && history -c
  DependsOn:
    - WSO2MISecurityGroup
    - WSO2APIMSecurityGroup
    - PuppetMaster
WSO2MINode1AutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    LaunchConfigurationName: !Ref WSO2MINode1LaunchConfiguration
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 1
    VPCZoneIdentifier:
      - !Ref WSO2APIMPrivateSubnet1
      - !Ref WSO2APIMPrivateSubnet2
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} WSO2MIInstance
        PropagateAtLaunch: 'true'
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT30M
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MaxBatchSize: '2'
      MinInstancesInService: '1'
      PauseTime: PT10M
      SuspendProcesses:
        - AlarmNotification
      WaitOnResourceSignals: false
  DependsOn:
    - WSO2APIMNode1AutoScalingGroup
    - WSO2APIMNode2AutoScalingGroup
Thank you!

/bin/sh: jar: command not found (inside nexus docker container) Nexus image: 3.34.0

When I try to run the following snippet, I get an error like:
/bin/sh: jar: command not found
RUN BASE_JAR=$(find /opt/sonatype/nexus/system/org/sonatype/nexus/nexus-base -name "nexus-base*.jar") && \
mkdir temp && cd temp && \
jar -xf $BASE_JAR static/css/nexus-content.css && \
echo ".nexus-header .product-spec { display:none }" >> static/css/nexus-content.css && \
jar uf $BASE_JAR static/css/nexus-content.css && \
cd .. && rm -rf temp
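One possible workaround sketch, assuming zip and unzip are available in the image (a jar is just a zip archive, so they can stand in for the missing jar tool):
RUN BASE_JAR=$(find /opt/sonatype/nexus/system/org/sonatype/nexus/nexus-base -name "nexus-base*.jar") && \
    mkdir temp && cd temp && \
    unzip "$BASE_JAR" static/css/nexus-content.css && \
    echo ".nexus-header .product-spec { display:none }" >> static/css/nexus-content.css && \
    zip "$BASE_JAR" static/css/nexus-content.css && \
    cd .. && rm -rf temp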
Any help or suggestions to solve this problem would be appreciated.

Error while running an R script from a bash script

I made a bash script as follows:
#! /bin/bash
OUTDIR=".//DATA/share/pipelines/results/"
INDIR="./DATA/share/pipelines/test_data/infile/"
projectname=$1
input_bam=$2
bins=$3
mkdir OUTDIR || true
of="${OUTDIR}"
ind="${INDIR}"
./DATA/share/pipelines/script.R \
-b "${bins}" \
-c "${projectname}" \
-o "${of}" \
-i "${ind}"
echo "first step is done"
when I run the script using the following command:
bash first.sh 30 beh
I will get this error:
mkdir: cannot create directory ‘OUTDIR’: File exists
first.sh: line 17: ./DATA/share/pipelines/script.R: No such file or directory
first step is done
Do you know how to solve this problem?
When you call
bash first.sh 30 beh
$1 holds 30, $2 holds beh and $3 is not defined.
input_bam is set to $2 but is never used.
With [ ! -d ${OUTDIR} ] you should be able to test if the directory exists.
#! /bin/bash
#Please check if it should be
# relative to the current working directory (starting with './')
# or absolute (starting with '/')
BASEDIR="/DATA/share/pipelines/" #"./DATA/share/pipelines/"
OUTDIR=${BASEDIR}"results/"
INDIR=${BASEDIR}"test_data/infile/"
projectname=$1
input_bam=$2 #This is never used
bins=$3 #This is not defined when callin >bash first.sh 30 beh<
[ ! -d ${OUTDIR} ] && mkdir ${OUTDIR} #Think you would create ${OUTDIR}
of="${OUTDIR}"
ind="${INDIR}"
"${BASEDIR}script.R" \
-b "${bins}" \
-c "${projectname}" \
-o "${of}" \
-i "${ind}"
echo "first step is done"

How do I deploy a artifact with maven layout using REST API?

I can do a normal deploy using the below command
curl -i -X PUT -u $artifactoryUser:$artifactoryPassword -T /path/to/file/file.zip http://localhost/artifactory/simple/repo/groupId/artifactId/version/file.zip
However, this will not resolve or update maven layout on the artifact. Is there a way I can upload without using the artifactory-maven plugin?
I found a solution to this question I had posted.
Syntax Used:
curl -i -X PUT -K $CURLPWD "http://localhost/artifactory/$REPO/$groupId/$artifactId/$versionId/$artifactId-$versionId.$fileExt"
I ended up writing a script so that the MD5 & SHA1 values are uploaded with the file; otherwise I had to go into Artifactory and fix them manually.
#!/bin/bash
usage() {
echo "Please check the Usage of the Script, there were no enough parameters supplied."
echo "Usage: ArtifactoryUpload.sh localFilePath Repo GroupID ArtifactID VersionID"
exit 1
}
if [ -z "$5" ]; then
usage
fi
localFilePath="$1"
REPO="$2"
groupId="$3"
artifactId="$4"
versionId="$5"
ARTIFAC=http://localhost/artifactory
if [ ! -f "$localFilePath" ]; then
echo "ERROR: local file $localFilePath does not exists!"
exit 1
fi
which md5sum || exit $?
which sha1sum || exit $?
md5Value="`md5sum "$localFilePath"`"
md5Value="${md5Value:0:32}"
sha1Value="`sha1sum "$localFilePath"`"
sha1Value="${sha1Value:0:40}"
fileName="`basename "$localFilePath"`"
fileExt="${fileName##*.}"
echo $md5Value $sha1Value $localFilePath
echo "INFO: Uploading $localFilePath to $targetFolder/$fileName"
curl -i -X PUT -K $CURLPWD \
-H "X-Checksum-Md5: $md5Value" \
-H "X-Checksum-Sha1: $sha1Value" \
-T "$localFilePath" \
"$ARTIFAC/$REPO/$groupId/$artifactId/$versionId/$artifactId-$versionId.$fileExt"

Overwriting target directory in catch-all rule in GNU Makefile

Consider the following:
SRCS = somefile.c util/someother.c ../src/more.c
%.o : %.c
$(GCC) -MD -c $< -o $@ -Wp,-MD,.deps/$*.d
@cp .deps/$*.d .deps/$*.P; \
sed -e 's/#.*//' -e 's/^[^:]*: *//' -e 's/ *\\$$//' \
-e '/^$$/ d' -e 's/$$/ :/' < .deps/$*.d >> .deps/$*.P; \
rm -f .deps/$*.d
-include $(SRCS:%.c=.deps/%.P)
How would I change the above so that the .o files end up in ./? Currently they build in the directory the source file exists in and that's bad.
I'd do it this way:
SRCS = somefile.c someother.c more.c
%.o : %.c
[same as before]
-include $(SRCS:%.c=.deps/%.P)
vpath %.c util ../src
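With the vpath directives in place, building from the top-level directory should leave the objects in ./ (assuming GCC is defined elsewhere in the Makefile):
make somefile.o someother.o more.o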
