I've just gotten DDEV set up and I have multisite working by manually running ddev import-db --target-db=[db-name]. It's working just fine, but I would like to figure out how to get database pulls from Acquia working in a way that lets me specify the site to pull from.
I have this script working but is there a way to do this with DDEV commands that would be a little cleaner?
First I modified acquia.yaml to this:
environment_variables:
  project_id: mysite.dev
  uri: mysite.com
  db_name: mysite_us
  #uri: mysite.ca
  #db_name: mysite_canada
  #uri: mysite.co.uk
  #db_name: mysite_unitedkingdom
  # etc etc

db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I wrote the following script, which I call like this:
./ddev-refresh-db.sh mysite_us mysite.com
#!/bin/bash
site="$1"
uri="$2"
ddev pull acquia
ddev import-db --target-db=${site} --src=.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
However, this still requires us to change the site and URI in the acquia.yaml file before running this command.
Is there a way to pass a variable through to ddev pull acquia? And is there a way to mimic what this script is doing with a real DDEV command?
Here's a more complete answer for Acquia multisite pull, pulling all sites. As of DDEV v1.18.0, ddev pull itself really isn't robust enough to pull multiple sites, because it assumes one database and one set of files. This works where @kelly howard's answer in https://stackoverflow.com/a/68553116/215713 is inadequate. (In her example, she pulls just one of the multisites, and it works great for that situation.)
But here we'll put all the logic in a DDEV custom command and pull all databases and files for any named site, so ddev acquiapull <sitename>
Place this file in the project as .ddev/commands/web/acquiapull
#!/bin/bash
# This DDEV custom command is set up to pull database and files from Acquia for several subsites.
# Usage: `ddev acquiapull [ --skip-db ] [ --skip-files ] <site1> <site2>`
# Example: `ddev acquiapull subsite1`
# This assumes that each subsite has its own database (named for the site)
# and that each subsite has its own files in sites/<sitename>/files
# To use it set up the needed ACQUIA_API_KEY and ACQUIA_API_SECRET in global
# or project config, just as described in
# https://ddev.readthedocs.io/en/stable/users/providers/acquia/
acquia_project_id=myprojectid.dev
tmpdir=/tmp #inside web container
set -eu -o pipefail
while :; do
    case ${1:-} in
    -h | -\? | --help)
        show_help
        exit
        ;;
    -y | --yes)
        SKIP_CONFIRMATION=true
        ;;
    --skip-files)
        SKIP_FILES=true
        ;;
    --skip-db)
        SKIP_DB=true
        ;;
    --) # End of all options.
        shift
        break
        ;;
    -?*)
        printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
        ;;
    *) # Default case: No more options, so break out of the loop.
        break
        ;;
    esac
    shift
done
# Map sitename to database name
function target_db_name() {
    site_name=$1
    echo $site_name
}

# Map sitename to files dir
function target_files_dir() {
    site_name=$1
    echo "sites/${site_name}/files"
}
# Get the files from upstream and load them.
function files_pull() {
    #set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    files_dir=$(target_files_dir $1)

    mkdir -p ${DDEV_DOCROOT}/${files_dir}/
    echo "Using drush rsync to update files for ${site_name}..."
    drush rsync --alias-path=~/.drush -q -y -r ${DDEV_DOCROOT} --verbose @${acquia_project_id}:${files_dir}/ ${DDEV_DOCROOT}/${files_dir}/
}
# Get the db from upstream and load it
function db_pull() {
    #set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    target_db=$(target_db_name ${site_name})

    echo "Downloading ${site_name} database..."
    acli remote:drush -n ${acquia_project_id} -- sql-dump --uri=${site_name} --extra-dump=--no-tablespaces >${tmpdir}/${site_name}.sql

    echo "Loading ${site_name} into database '${target_db}'..."
    mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${target_db}; GRANT ALL ON ${target_db}.* TO 'db'@'%'"
    mysql -uroot -proot ${target_db} <${tmpdir}/${site_name}.sql

    drush -r ${DDEV_DOCROOT} --uri=${site_name} cr
}
# Handle initial authentication via Acquia secrets and ssh
function authenticate() {
    if [ -z "${ACQUIA_API_KEY:-}" ] || [ -z "${ACQUIA_API_SECRET:-}" ]; then echo "Please make sure you have set ACQUIA_API_KEY and ACQUIA_API_SECRET in your project or global config" && exit 1; fi
    if ! command -v drush >/dev/null; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
    ssh-add -l >/dev/null || (echo "Please 'ddev auth ssh' before running this command." && exit 1)

    acli auth:login -n --key="${ACQUIA_API_KEY}" --secret="${ACQUIA_API_SECRET}"
    acli remote:aliases:download -n >/dev/null
}
# Main script
authenticate || { printf "Failed to authenticate\n"; exit 1; }

if [ $# -eq 0 ]; then
    printf "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] <sitename>\n"
    exit 1
fi

if [ "${SKIP_CONFIRMATION:-}" != "true" ]; then
    echo "This will overwrite your database and files for sites $*. OK?"
    select yn in "Yes" "No"; do
        case $yn in
        Yes ) break;;
        No ) exit;;
        esac
    done
fi

for subsite in $*; do
    echo "Pulling subsite: $subsite"
    if [ "${SKIP_DB:-}" != "true" ]; then
        db_pull ${subsite} || { printf "Failed to pull db for ${subsite}\n"; exit 1; }
    else
        echo "Skipping db pull for ${subsite}"
    fi
    if [ "${SKIP_FILES:-}" != "true" ]; then
        files_pull ${subsite} || { printf "Failed to pull files for ${subsite}\n"; exit 1; }
    else
        echo "Skipping files pull for ${subsite}"
    fi
done
Thanks to the guidance from @rfay I set up a set of files in .ddev/providers, one for each country. Each one is structured like this:
environment_variables:
  uri: mysite.be
  db_name: belgium

auth_command:
  command: |
    <no changes>

db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${ACQUIA_PROJECT_ID} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I created a custom command in .ddev/commands/host that has the contents of my script. There are more cases in the real script to cover all the countries.
#!/usr/bin/env bash

## Description: Refresh a database from Acquia and run post-db commands
## Usage: refresh-db [dbname]
## Example: "ddev refresh-db france"

site="$1"

case $site in
  canada)
    uri="mysite.ca"
    ;;
  australia)
    uri="mysite.com.au"
    ;;
  belgium)
    uri="mysite.be"
    ;;
  brazil)
    uri="mysite.com.br"
    ;;
  *)
    site="db"
    uri="mysite.com"
    ;;
esac

ddev pull ${site} -y 2>/dev/null # suppress the "pull failed" message, since it didn't really fail
ddev import-db --target-db=${site} --src=${DDEV_APPROOT}/.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
ddev drush --uri=${uri} -y pmu simplesamlphp_auth
ddev drush --uri=${uri} -y config-set system.performance css.preprocess 0
ddev drush --uri=${uri} -y config-set system.performance js.preprocess 0
I tried to handle the db import during the db_pull_command as suggested, but I couldn't get past database permission errors when importing a DB that I had not already imported using ddev import-db. With the custom command, however, I can also incorporate the post-db-import steps that would normally only run against the default DB if done through config.yaml.
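For reference, a config.yaml hook section like the following is what I mean (a sketch only, not my actual config); hooks defined this way only run against the default database:
hooks:
  post-import-db:
    - exec: drush cr
    - exec: drush -y pmu simplesamlphp_auth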
The other change I made was to move the project ID into the web environment settings in the global_config.yaml file. This way, if we want to change the environment we pull from, we just edit the project ID there and don't have to edit the provider files.
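For reference, the relevant bit of ~/.ddev/global_config.yaml looks roughly like this (the project ID is a placeholder):
web_environment:
  - ACQUIA_PROJECT_ID=mysite.dev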
I'm not experienced with contributing back to open source projects but if this can be helpful to others I'd love to work with someone to do that pull request on the documentation or wherever it belongs.
I'm going to go ahead and answer in general, but you can add a full answer when you get this sorted out. (I don't have access to an Acquia multisite.)
You're on the right track, but you can do all of this in the pull script. The problem you're having is that ddev just assumes a single database, and you have multiple.
Here's a strategy for your acquia.yaml:
Create all the databases. You can use mysql -e "CREATE DATABASE IF NOT EXISTS <dbname>;", using several lines or a for loop.
Pull all the databases. You can do this with separate acli lines, or use a for loop.
Import the databases that aren't the primary db using the mysql command: mysql <dbname> < <dbname>.sql. Again, this can be a few lines or a for loop. (You can also just import the primary db; it will just be re-imported by ddev, no harm done if it's not large.)
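A rough sketch of how those steps could be combined in acquia.yaml (the site_map variable and the URI-to-database mapping are made up for illustration; adapt them to your own sites):
environment_variables:
  project_id: mysite.dev
  # hypothetical "uri:dbname" pairs, one per multisite
  site_map: "mysite.com:mysite_us mysite.ca:mysite_canada"
db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    pushd /var/www/html/.ddev/.downloads >/dev/null
    for pair in ${site_map}; do
      uri=${pair%%:*}
      db=${pair##*:}
      # create the extra database so the import has somewhere to go
      mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${db}; GRANT ALL ON ${db}.* TO 'db'@'%'"
      # dump this site's database from Acquia
      acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db}.sql
      # import the non-primary databases directly
      mysql -uroot -proot ${db} <${db}.sql
    done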
Thanks for the great question, and I hope you'll give a full answer here. Your answer could also be incorporated into https://ddev.readthedocs.io/en/stable/users/providers/acquia/ - you could do a PR there by clicking the pencil link at the upper right.
When using WP CLI in docker, I need to execute it as root.
I need to add the flag --allow-root directly in .bashrc and I am trying to figure out why it doesn't work.
FROM webdevops/php-dev:7.3
# configure postfix to use mailhog
RUN postconf -e "relayhost = mail:1025"
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
    chmod +x wp-cli.phar && \
    mv wp-cli.phar /usr/local/bin/wp && \
    echo 'wp() {' >> ~/.bashrc && \
    echo '/usr/local/bin/wp "$@" --allow-root' >> ~/.bashrc && \
    echo '}' >> ~/.bashrc
WORKDIR /var/www/html/
my .bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022
# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'
wp() {
  /usr/local/bin/wp "$@" --allow-root
}
when I try to execute any wp command I get this error:
Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under.
If you REALLY mean to run this as root, we won't stop you, but just bear in mind that any code on this site will then have full control of your server, making it quite DANGEROUS.
If you'd like to continue as root, please run this again, adding this flag: --allow-root
If you'd like to run it as the user that this site is under, you can run the following to become the respective user:
sudo -u USER -i -- wp <command>
It looks like the command doesn't pick up the wp() function I defined in .bashrc.
Do you have any suggestions on how to fix this problem?
You are struggling with the classic conundrum: what goes in .bashrc, what goes in .bash_profile, and which one is loaded when?
The extreme short version is:
$HOME/.bash_profile: read by login shells. Should always source $HOME/.bashrc. Should only contain environment variables that can be passed on to child processes.
$HOME/.bashrc: read only by interactive shells that are not login shells (e.g. opening a terminal in X). Should only contain aliases and functions.
How does this help the OP?
The OP executes the following line:
$ sudo -u USER -i -- wp <command>
The -i flag of the sudo command initiates a login shell:
-i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
So the OP initiates a login shell, which only reads .bash_profile. The way to solve the problem is to source the .bashrc file there, as is strongly recommended:
# .bash_profile
if [ -n "$BASH" ] && [ -r ~/.bashrc ]; then
. ~/.bashrc
fi
more info on dot-files:
http://mywiki.wooledge.org/DotFiles
man bash
What's the difference between .bashrc, .bash_profile, and .environment?
About .bash_profile, .bashrc, and where should alias be written in?
related posts:
Run nvm (bash function) via sudo
Can I run a command loaded from .bashrc with sudo?
I recently had the same problem. In my Dockerfile, I was running:
RUN wp core download && wp plugin install woocommerce --activate --allow-root
I looked at the error message, and thought that from the way it was worded, the --allow-root gets ignored the first time you use it. So I added it to the first wp command, and it worked.
RUN wp core download --allow-root && wp plugin install woocommerce --activate --allow-root
The problem is that ~/.bashrc is not being sourced. It will only be sourced in an interactive Bash shell.
You might get better results doing it via executables. Something like this:
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
    chmod +x wp-cli.phar && \
    mv wp-cli.phar /usr/local/bin/wp-cli.phar && \
    echo '#!/bin/sh' >> /usr/local/bin/wp && \
    echo 'wp-cli.phar "$@" --allow-root' >> /usr/local/bin/wp && \
    chmod +x /usr/local/bin/wp
/Users/ello/.zshrc:source:3: no such file or directory: /Users/ello/Projects/config/env.sh
Ello-MacBook-Pro% /Users/ello/.zshrc:source
zsh: no such file or directory: /Users/ello/.zshrc:source
Ello-MacBook-Pro% /Users/ello/.zshrc
zsh: permission denied: /Users/ello/.zshrc
Ello-MacBook-Pro%
This has been happening, after I foolishly edited the .zshrc file. All that remains in the file now, after attempting to reset the shell, is this:
# Created by newuser for 5.3.1
# Add env.sh
How do I undo everything, reinstall zsh, or remake the .zshrc file?
This is on macOS Sierra.
Edit: I reinstalled oh-my-zsh, leading to this message:
main() {
# Use colors, but only if connected to a terminal, and that terminal
# supports them.
if which tput >/dev/null 2>&1; then
ncolors=$(tput colors)
fi
if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
RED="$(tput setaf 1)"
GREEN="$(tput setaf 2)"
YELLOW="$(tput setaf 3)"
BLUE="$(tput setaf 4)"
BOLD="$(tput bold)"
NORMAL="$(tput sgr0)"
else
RED=""
GREEN=""
YELLOW=""
BLUE=""
BOLD=""
NORMAL=""
fi
# Only enable exit-on-error after the non-critical colorization stuff,
# which may fail on systems lacking tput or terminfo
set -e
CHECK_ZSH_INSTALLED=$(grep /zsh$ /etc/shells | wc -l)
if [ ! $CHECK_ZSH_INSTALLED -ge 1 ]; then
printf "${YELLOW}Zsh is not installed!${NORMAL} Please install zsh
first!\n"
exit
fi
unset CHECK_ZSH_INSTALLED
if [ ! -n "$ZSH" ]; then
ZSH=~/.oh-my-zsh
fi
if [ -d "$ZSH" ]; then
printf "${YELLOW}You already have Oh My Zsh installed.${NORMAL}\n"
printf "You'll need to remove $ZSH if you want to re-install.\n"
exit
fi
# Prevent the cloned repository from having insecure permissions. Failing to do
# so causes compinit() calls to fail with "command not found: compdef" errors
# for users with insecure umasks (e.g., "002", allowing group writability). Note
# that this will be ignored under Cygwin by default, as Windows ACLs take
# precedence over umasks except for filesystems mounted with option "noacl".
umask g-w,o-w
printf "${BLUE}Cloning Oh My Zsh...${NORMAL}\n"
hash git >/dev/null 2>&1 || {
echo "Error: git is not installed"
exit 1
}
# The Windows (MSYS) Git is not compatible with normal use on cygwin
if [ "$OSTYPE" = cygwin ]; then
if git --version | grep msysgit > /dev/null; then
echo "Error: Windows/MSYS Git is not supported on Cygwin"
echo "Error: Make sure the Cygwin git package is installed and is
first on the path"
exit 1
fi
fi
env git clone --depth=1 https://github.com/robbyrussell/oh-my-zsh.git $ZSH || {
printf "Error: git clone of oh-my-zsh repo failed\n"
exit 1
}
printf "${BLUE}Looking for an existing zsh config...${NORMAL}\n"
if [ -f ~/.zshrc ] || [ -h ~/.zshrc ]; then
printf "${YELLOW}Found ~/.zshrc.${NORMAL} ${GREEN}Backing up to
~/.zshrc.pre-oh-my-zsh${NORMAL}\n";
mv ~/.zshrc ~/.zshrc.pre-oh-my-zsh;
fi
zsh itself does not have a default user configuration. So the default ~/.zshrc is actually no ~/.zshrc.
But as you tagged the question with oh-my-zsh I would assume that you want to restore the default oh-my-zsh configuration. For this it should be sufficient to copy templates/zshrc.zsh-template from your oh-my-zsh installation path, usually ~/.oh-my-zsh:
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
You may want to backup your current ~/.zshrc beforehand. Although it may have some problems now, you still might want to look up some settings once you reverted to default.
There is no such thing as a "default". The best you can do is check whether your system has /etc/skel/.zshrc. If yes, copy that into your home directory.
When you log in for the first time, your home is populated with everything from /etc/skel.
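A minimal sketch of that check-and-copy (assuming the usual /etc/skel location):
[ -f /etc/skel/.zshrc ] && cp /etc/skel/.zshrc ~/.zshrc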
I foolishly put a crash command into my zsh config file. Now when I open the terminal, it just kernel panics, so I deleted the config file using rm -f ~/.zshrc* and, by default, it got replaced with another copy. So good luck.
You can copy the .zshrc template from
https://github.com/ohmyzsh/ohmyzsh/blob/master/templates/zshrc.zsh-template
and paste its contents into ~/.zshrc.
[MS Windows-friendly solution, if the terminal (vim editor) steps are confusing]
Actually, there is no default .zshrc file, but if you want to edit it like a simple notepad document, do this:
Go to the /Users/ folder via the Finder app.
Press Shift + Command + . (dot) to view hidden system files.
Find the .zshrc file and double-click it; it will open in TextEdit.app by default.
Remove whichever lines need to be removed.
Retype/edit the file with the paths to be added.
Hit Command + S to save, then exit.
Make it your default shell using this command:
chsh -s $(which zsh)
I have the following $HOME/.zshrc file:
[vagrant@devel]/vagrant% cat ~/.zshrc
#!/usr/bin/env zsh
# BEGIN ANSIBLE MANAGED BLOCK
if [ $(history | wc -l) -eq 0 ]; then
# we've just shelled in; "magically" cd into the vagrant shared folder
cd "/vagrant"
fi
# END ANSIBLE MANAGED BLOCK
I'm using the same script with Bash and it works just fine; on initial login, the user has no history and is magically transported to /vagrant.
When I log in to this box with this $HOME/.zshrc, I see the following error:
/home/vagrant/.zshrc:fc:3: no such event: 1
[vagrant@devel]/vagrant%
I do not know what this means and Google isn't leading me to a result. Apparently the code works, but this appears to be some kind of error.
Any ideas?
You don't need to call the history builtin command and count the lines.
You can just check whether the HISTCMD variable is zero in your ~/.zshrc. HISTCMD holds the current command's sequence number in the history.
So your ~/.zshrc can be simply this:
# BEGIN ANSIBLE MANAGED BLOCK
if [[ $HISTCMD -eq 0 ]]; then
# we've just shelled in; "magically" cd into the vagrant shared folder
cd "/vagrant"
fi
# END ANSIBLE MANAGED BLOCK
Apparently, zsh's history command emits errors when there is no history, so:
#!/usr/bin/env zsh
# BEGIN ANSIBLE MANAGED BLOCK
if [ $(history 2>/dev/null | wc -l) -eq 0 ]; then
# we've just shelled in; "magically" cd into the vagrant shared folder
cd "/vagrant"
fi
# END ANSIBLE MANAGED BLOCK
No more errors.
I have RStudio server installed on a remote aws server (ubuntu) and want to run several projects at the same time (one of which takes lots of time to finish). On Windows there is a simple GUI solution like 'Open Project in New Window'. Is there something similar for rstudio server?
It's a simple question, but I failed to find a solution except this related question for Macs, which offers
Run multiple rstudio sessions using projects
but how?
While running batch scripts is certainly a good option, it's not the only solution. Sometimes you may still want interactive use in different sessions rather than having to do everything as batch scripts.
Nothing stops you from running multiple instances of RStudio Server on your Ubuntu server on different ports. (I find this particularly easy to do by launching RStudio through docker, as outlined here.) Because an instance will keep running even when you close the browser window, you can easily launch several instances and switch between them. You'll just have to log in again when you switch.
Unfortunately, RStudio Server still prevents you from having multiple instances open in the browser at the same time (see the help forum). This is not a big issue, as you just have to log in again, but you can work around it by using different browsers.
EDIT: Multiple instances are fine, as long as they are not on the same browser, same browser-user AND on the same IP address. e.g. a session on 127.0.0.1 and another on 0.0.0.0 would be fine. More importantly, the instances keep on running even if they are not 'open', so this really isn't a problem. The only thing to note about this is you would have to log back in to access the instance.
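As a sketch of the docker approach (the rocker/rstudio image, ports, and password here are illustrative, not from my original setup):
# two independent RStudio Server instances on different host ports
docker run -d --name rstudio-a -p 8787:8787 -e PASSWORD=changeme rocker/rstudio
docker run -d --name rstudio-b -p 8788:8787 -e PASSWORD=changeme rocker/rstudio
# then browse to http://<server-ip>:8787 and http://<server-ip>:8788 and log in as user "rstudio"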
As for projects, you'll see you can switch between projects using the 'projects' button on the top right, but while this will preserve your other sessions, I do not think it actually supports simultaneous code execution. You need multiple instances of the R environment running to actually do that.
UPDATE 2020: Okay, it's now 2020 and there are lots of ways to do this.
For running scripts or functions in a new R environment, check out:
the callr package
The RStudio jobs panel
Run new R sessions or scripts from one or more terminal sessions in the RStudio terminal panel
Log out and log in to the RStudio Server as a different user (requires multiple users to be set up in the container; obviously not a good workflow for a single user, but just noting that many different users can access the same RStudio Server instance no problem).
Of course, spinning up multiple docker sessions on different ports is still a good option as well. Note that many of the ways listed above still do not allow you to restart the main R session, which prevents you from reloading installed packages, switching between projects, etc, which is clearly not ideal. I think it would be fantastic if switching between projects in an RStudio (server) session would allow jobs in the previously active project to keep running in the background, but have no idea if that's in the cards for the open source version.
Often you don't need several instances of RStudio. In that case, just save your code in an .R file and launch it from the Ubuntu command prompt (maybe using screen; see the sketch at the end of this answer):
Rscript script.R
That will launch a separate R session which will do the work without freezing your Rstudio. You can pass arguments too, for example
# script.R -
args <- commandArgs(trailingOnly = TRUE)
if (length(args) == 0) {
start = '2015-08-01'
} else {
start = args[1]
}
console -
Rscript script.R 2015-11-01
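If you go the screen route mentioned above, one possible invocation (the session name is arbitrary):
# start a detached screen session running the script, then reattach later to check on it
screen -dmS rjob Rscript script.R 2015-11-01
screen -r rjob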
I think you need RStudio Server Pro to be able to log in with multiple users/sessions.
You can see the comparison table below for reference.
https://www.rstudio.com/products/rstudio-server-pro/
Installing another instance of rstudio server is less than ideal.
Linux server admins, fear not. You just need root access or a kind admin.
Create a group to use: groupadd Rwarrior
Create an additional user with same home directory as your primary Rstudio login:
useradd -d /home/user1 user2
Add primary and new user into Rwarrior group:
gpasswd -a user2 Rwarrior
gpasswd -a user1 Rwarrior
Take care of the permissions for your primary home directory:
cd /home
chown -R user1:Rwarrior /home/user1
chmod -R 770 /home/user1
chmod g+s /home/user1
Set password for the new user:
passwd user2
Open a new browser window in incognito/private browsing mode and login to Rstudio with the new user you created. Enjoy.
I run multiple RStudio servers by isolating them in Singularity instances. Download the Singularity image with the command singularity pull shub://nickjer/singularity-rstudio
I use two scripts:
run-rserver.sh:
Find a free port
#!/bin/env bash
set -ue
thisdir="$(dirname "${BASH_SOURCE[0]}")"
# Return 0 if the port $1 is free, else return 1
is_port_free(){
port="$1"
set +e
netstat -an |
grep --color=none "^tcp.*LISTEN\s*$" | \
awk '{gsub("^.*:","",$4);print $4}' | \
grep -q "^$port\$"
r="$?"
set -e
if [ "$r" = 0 ]; then return 1; else return 0; fi
}
# Find a free port
find_free_port(){
local lower_port="$1"
local upper_port="$2"
for ((port=lower_port; port <= upper_port; port++)); do
if is_port_free "$port"; then r=free; else r=used; fi
if [ "$r" = "used" -a "$port" = "$upper_port" ]; then
echo "Ports $lower_port to $upper_port are all in use" >&2
exit 1
fi
if [ "$r" = "free" ]; then break; fi
done
echo $port
}
port=$(find_free_port 8080 8200)
echo "Access RStudio Server on http://localhost:$port" >&2
"$thisdir/cexec" \
rserver \
--www-address 127.0.0.1 \
--www-port $port
cexec:
Create a dedicated config directory for each instance
Create a dedicated temporary directory for each instance
Use the singularity instance mechanism to avoid that forked R sessions are adopted by PID 1 and stay around after the rserver has shut down. Instead, they become children of the Singularity instance and are killed when that shuts down.
Map the current directory to the directory /data inside the container and set that as the home folder (this step might not be necessary if you don't care about reproducible paths on every machine)
#!/usr/bin/env bash
# Execute a command in the container
set -ue
if [ "${1-}" = "--help" ]; then
echo <<EOF
Usage: cexec command [args...]
Execute `command` in the container. This script starts the Singularity
container and executes the given command therein. The project root is mapped
to the folder `/data` inside the container. Moreover, a temporary directory
is provided at `/tmp` that is removed after the end of the script.
EOF
exit 0
fi
thisdir="$(dirname "${BASH_SOURCE[0]}")"
container="rserver_200403.sif"
# Create a temporary directory
tmpdir="$(mktemp -d -t cexec-XXXXXXXX)"
# We delete this directory afterwards, so its important that $tmpdir
# really has the path to an empty, temporary dir, and nothing else!
# (for example empty string or home dir)
if [[ ! "$tmpdir" || ! -d "$tmpdir" ]]; then
echo "Error: Could not create temp dir $tmpdir"
exit 1
fi
# check if temp dir is empty (this might be superfluous, see
# https://codereview.stackexchange.com/questions/238439)
tmpcontent="$(ls -A "$tmpdir")"
if [ ! -z "$tmpcontent" ]; then
echo "Error: Temp dir '$tmpdir' is not empty"
exit 1
fi
# Start Singularity instance
instancename="$(basename "$tmpdir")"
# Maybe also superfluous (like above)
rundir="$(readlink -f "$thisdir/.run/$instancename")"
if [ -e "$rundir" ]; then
echo "Error: Runtime directory '$rundir' exists already!" >&2
exit 1
fi
mkdir -p "$rundir"
singularity instance start \
--contain \
-W "$tmpdir" \
-H "$thisdir:/data" \
-B "$rundir:/data/.rstudio" \
-B "$thisdir/.rstudio/monitored/user-settings:/data/.rstudio/monitored/user-settings" \
"$container" \
"$instancename"
# Delete the temporary directory after the end of the script
trap "singularity instance stop '$instancename'; rm -rf '$tmpdir'; rm -rf '$rundir'" EXIT
singularity exec \
    --pwd "/data" \
    "instance://$instancename" \
    "$@"
I need some help understanding what's wrong.
In short: I've written a Bourne shell script which creates links to the contents of a source directory in a target directory.
It worked fine on the host system, but when targeted at directories on a mounted fs (both from chroot and from the native system) it doesn't work and produces no output at all.
Details:
mounted fs: ext3, rw
host system: 3.2.0-48-generic #74-Ubuntu SMP GNU/Linux
To narrow the question, "/usr" was taken as an example.
permissions for "/usr" in the host system: drwxr-xr-x
permissions for "/usr" on mounted partition: drwxr-xr-x
I tried both bash and dash from the host system. Same result: it works for native file systems but does not work for the mounted one.
script (cord.sh; run from root in my cases):
#!/bin/sh

SRCFOLDER=$2               # folder with package installation
DESTFOLDER=$3              # destination folder to install symlinks to ('/' - for base sys; '/usr' - userland)
TARGETS=$(ls $SRCFOLDER)   # targets to handle
SRCFOLDER=${SRCFOLDER%/}   # stripping slashes from the end, if they are present
DESTFOLDER=${DESTFOLDER%/} #

##
## LINKING
##
if [ "$1" = "-c" ];
then printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line) # had an issue with different output in different systems
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]; # stripping space helped
        then
            mkdir -v $DESTFOLDER/$line # if other package created it - it'll fail
            /usr/local/bin/cord.sh -c $SRCFOLDER/$line $DESTFOLDER/$line # RECURSION
        else
            ln -sv $SRCFOLDER/$line $DESTFOLDER/$line # will fail, if exists
        fi;
    done
##
## REMOVING LINKS
##
elif [ "$1" = "-d" ];
then printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line)
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ];
        then
            /usr/local/bin/cord.sh -d $SRCFOLDER/$line $DESTFOLDER/$line # RECURSION
        else
            rm -v $DESTFOLDER/$line
        fi;
    done
elif [ "$1" = "-h" ];
then
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
else
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
fi;
The most obvious thing to suspect was some issue with permissions, yet everything looks sane. Maybe I've missed something?
I don't know what your main problem might be (permissions or something else - you should include an example of how you run the script and how you prepare for it with the mounts and everything). But this script can be cleaned up.
First, if you want to test whether something is a directory, use
if [ -d "$something" ]
That'll get rid of the clumsy file usage.
Second, don't go through the redundant steps of converting your $TARGETS array to a series of lines and then reading the lines with a loop. Just loop over the array directly.
for line in $TARGETS
Also, instead of using ls to populate an array of filenames, I'd use a glob. But instead of either of those, I'd use find so it can take care of recursion and eliminate the tree of processes you're creating by recursing with a call to the same script. And instead of writing a symlink-tree-maker script I'd use something like lndir which already exists for that purpose...
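For example, here is a rough find-based sketch of the same link-tree idea (the paths are placeholders; lndir /path/to/pkgdir /path/to/linkdir would be the ready-made alternative):
#!/bin/sh
# Sketch: mirror a package tree into a destination as symlinks,
# letting find handle the recursion instead of the script calling itself.
SRC=/path/to/pkgdir
DEST=/path/to/linkdir

cd "$SRC" || exit 1
# recreate the directory structure first...
find . -type d | while IFS= read -r dir; do
    mkdir -p "$DEST/$dir"
done
# ...then link every regular file into place
find . -type f | while IFS= read -r f; do
    ln -sv "$SRC/$f" "$DEST/$f"
done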