Can't run sh on virtual env or macOS - zsh

I can't get this sh script to set up a virtual env on macOS:
#!/bin/bash
# This script sets up the environment for a Flask project.
echo "Starting script..."
# Initialize virtual environment.
echo "Activating virtual environment..."
source env/bin/activate
# Define environment variables.
echo "Setting environment variables..."
export FLASK_APP=app.py
export FLASK_ENV=development
echo $FLASK_APP
echo "Script completed."
The output only shows the echo lines, but neither the source nor the export commands take effect.
(base) user@xxx % sh envset.sh
Starting script...
Activating virtual environment...
Setting environment variables...
app.py
Script completed.
(base) user@xxx %
The env wasn't activated.

(base) user@xxx % sh envset.sh
Here you're invoking a shell as a child process of the terminal's shell, and a child process never changes its parent process's environment.
Instead you should . (the portable form of source) the file, which executes its commands in the terminal's current shell process. The exports will then be available to the terminal shell after the source completes:
(base) user@xxx % . envset.sh
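To see the difference, check the variable from the interactive shell after each invocation (a minimal check, assuming FLASK_APP isn't already set in your shell):
# Run in a child shell: the exports die with the child.
sh envset.sh
echo "$FLASK_APP"     # prints an empty line
# Source in the current shell: activation and exports persist.
. envset.sh
echo "$FLASK_APP"     # prints app.py
which python          # now resolves to env/bin/python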

Related

Unable to export env variable from script

I'm currently struggling with running a .sh script I'm trying to trigger from Jenkins.
Within the Jenkins "execute shell" section, I'm connecting to a remote server (the Jenkins agent does not have the right OS to build what I need), using:
cp -r . /to/shared/drive/to/have/access/on/remote
ssh -t -t username@servername << EOF
cd /to/shared/drive/to/have/access/on/remote
source build.sh dev
exit
EOF
Inside build.sh, I'm exporting R_LIBS to build a package for different R versions.
...
for path in "${!rVersionPaths[@]}"; do
export R_LIBS="${path}"
Rscript -e 'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
...
Setting R_LIBS should function here like setting lib within install.packages(...). For some reason the R_LIBS export doesn't get picked up. Setting other env variables like http_proxy is ignored as well, which causes any requests outside the network to fail.
Is there any particular way of achieving this?
Maybe pass those variables with env, like
env R_LIBS="${path}" Rscript -e 'install.packages(c("someDependency", .....
Well, I'm not able to comment on the question, so I'm posting this as an answer.
I had a similar problem when calling a remote shell script from Jenkins: somehow the bash_profile variables were not loaded when the script was called from Jenkins, even though it worked locally. Loading the bash profile in the ssh connection solved it for me.
Add a source of the bash_profile in build.sh:
. ~/.bash_profile OR source ~/.bash_profile
Or
reload the bash_profile in the ssh connection:
ssh -t -t username@servername << EOF
. ~/.bash_profile
your commands here
exit
EOF
You can set that variable on the same command line, like this:
R_LIBS="${path}" Rscript -e \
'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
It's possible to prefix several variables in this way. Note that this will set those environment variables only for the command being called after them (and its child processes).
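A quick way to convince yourself of that scoping, with a throwaway variable (FOO is just for illustration):
FOO=bar sh -c 'echo "child sees: $FOO"'   # prints: child sees: bar
echo "parent sees: $FOO"                  # prints: parent sees: (empty)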
You said that the "R_LIBS export doesn't get picked up". Question: is the value unset, or is it set to some other value that you are trying to override?
It is possible that SSH is invoking /bin/sh -c. Based on the second answer to "Why does 'cd' command not work via SSH?", you can simplify the SSH command and explicitly invoke the build.sh script in Bash:
cp -r . /to/shared/drive/to/have/access/on/remote
ssh -t -t username@servername "cd /to/shared/drive/to/have/access/on/remote && bash -f build.sh dev"
This makes the SSH invocation more similar to invoking the command within a remote interactive shell. (You can avoid sourcing scripts and exporting variables.)
You don't need export R_LIBS or env R_LIBS when it is possible to prefix any command with local environment variable overrides (this agrees with Luis' answer):
...
for path in "${!rVersionPaths[@]}"; do
R_LIBS="${path}" Rscript -e 'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
...
Rscript may be doing a lot with env vars. You can verify that you are setting the R_LIBS env var by replacing Rscript with the env command and observing the output:
...
for path in "${!rVersionPaths[@]}"; do
R_LIBS="${path}" env
...
According to this manual "Initialization at Start of an R Session", Rscript looks in several places to load "site and user files":
$R_PROFILE
$R_HOME/etc/Renviron
$R_HOME/etc/Renviron.site
$R_ENVIRON_USER
$R_PROFILE_USER
./.Rprofile
$HOME/.Rprofile
./.RData
The "Examples" section of that manual shows this:
## Not run:
## Example ~/.Renviron on Unix
R_LIBS=~/R/library
PAGER=/usr/local/bin/less
If you add the --vanilla command-line option to ignore all of these files, then you may get different results and will know that something in the site/user startup files is affecting your R_LIBS! I cannot run this system myself, but hopefully this gives you some areas to investigate.
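For example, a quick comparison along those lines (the library path is hypothetical, and the directory must exist for .libPaths() to report it):
mkdir -p /tmp/mylibs
R_LIBS=/tmp/mylibs Rscript -e '.libPaths()'
R_LIBS=/tmp/mylibs Rscript --vanilla -e '.libPaths()'
If the two runs differ, something in the startup files listed above is changing R_LIBS.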
You probably don't want to source build.sh, just invoke it directly (i.e. remove the source command).
By source-ing the file, your script is executed by the SSH shell (likely sh) rather than by bash, which is what it sounds like you intended.
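Applied to the heredoc from the question, that would look like this (same paths as the original; bash replaces source, so the script runs in its own bash process):
ssh -t -t username@servername << EOF
cd /to/shared/drive/to/have/access/on/remote
bash build.sh dev
exit
EOF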

Access a variable defined in Jenkinsfiles in Shell Script within Jenkinsfile

I am defining a shell script in one of the stages in my Jenkinsfile. How can I access a variable that I define in my Jenkinsfile with the shell script?
In the scenario below, I am writing the value of the shell variable to a file and reading it into a groovy variable. Is there a way to pass data from shell to groovy without writing to the file system?
unstash 'sources'
sh '''
source venv/bin/activate
export AWS_ROLE_ARN=arn:aws:iam::<accountid>:role/<role name>
layer_arn="$(awssume aws lambda list-layer-versions --layer-name dependencies --region us-east-1 --query \"LayerVersions[0].LayerVersionArn\" | tr -d '\"')"
echo $layer_arn > layer_arn
'''
layer_arn = readFile('layer_arn').trim()
You can pass the variable value on the shell command line:
sh "some stuff $my_var"
You can define an environment variable and use it within your shell:
withEnv(["MY_VAR=${my_var}"]) {
sh 'some stuff'
}

Run multiple instances of RStudio in a web browser

I have RStudio Server installed on a remote AWS server (Ubuntu) and want to run several projects at the same time (one of which takes a long time to finish). On Windows there is a simple GUI solution like 'Open Project in New Window'. Is there something similar for RStudio Server?
A simple question, but I failed to find a solution, except this related question for Macs, which offers:
Run multiple rstudio sessions using projects
but how?
While running batch scripts is certainly a good option, it's not the only solution. Sometimes you may still want interactive use in different sessions rather than having to do everything as batch scripts.
Nothing stops you from running multiple instances of RStudio Server on your Ubuntu server on different ports. (I find this particularly easy to do by launching RStudio through Docker, as outlined here.) Because an instance will keep running even when you close the browser window, you can easily launch several instances and switch between them. You'll just have to log in again when you switch.
Unfortunately, RStudio Server still prevents you from having multiple instances open in the browser at the same time (see the help forum). This is not a big issue, as you just have to log in again, but you can work around it by using different browsers.
EDIT: Multiple instances are fine, as long as they are not on the same browser, same browser user AND same IP address; e.g. a session on 127.0.0.1 and another on 0.0.0.0 would be fine. More importantly, the instances keep on running even if they are not 'open', so this really isn't a problem. The only thing to note is that you would have to log back in to access the instance.
As for projects, you'll see you can switch between projects using the 'projects' button on the top right, but while this will preserve your other sessions, I do not think it actually supports simultaneous code execution. You need multiple instances of the R environment running to actually do that.
UPDATE 2020: Okay, it's now 2020 and there are lots of ways to do this.
For running scripts or functions in a new R environment, check out:
the callr package
The RStudio jobs panel
Run new R sessions or scripts from one or more terminal sessions in the RStudio terminal panel
Log out and log in to the RStudio Server as a different user (this requires multiple users to be set up in the container; obviously not a good workflow for a single user, but worth noting that many different users can access the same RStudio Server instance without problems).
Of course, spinning up multiple Docker sessions on different ports is still a good option as well. Note that many of the ways listed above still do not allow you to restart the main R session, which prevents you from reloading installed packages, switching between projects, etc., which is clearly not ideal. I think it would be fantastic if switching between projects in an RStudio (Server) session would allow jobs in the previously active project to keep running in the background, but I have no idea if that's in the cards for the open source version.
Often you don't need several instances of RStudio; in that case, just save your code in an .R file and launch it from the Ubuntu command line (perhaps using screen):
Rscript script.R
That will launch a separate R session which will do the work without freezing your RStudio. You can pass arguments too, for example:
# script.R -
args <- commandArgs(trailingOnly = TRUE)
if (length(args) == 0) {
start = '2015-08-01'
} else {
start = args[1]
}
In the console:
Rscript script.R 2015-11-01
I think you need RStudio Server Pro to be able to log in with multiple users/sessions.
You can see the comparison table below for reference.
https://www.rstudio.com/products/rstudio-server-pro/
Installing another instance of RStudio Server is less than ideal.
Linux server admins, fear not. You just need root access or a kind admin.
Create a group to use: groupadd Rwarrior
Create an additional user with the same home directory as your primary RStudio login:
useradd -d /home/user1 user2
Add primary and new user into Rwarrior group:
gpasswd -a user2 Rwarrior
gpasswd -a user1 Rwarrior
Take care of the permissions for your primary home directory:
cd /home
chown -R user1:Rwarrior /home/user1
chmod -R 770 /home/user1
chmod g+s /home/user1
Set password for the new user:
passwd user2
Open a new browser window in incognito/private browsing mode and log in to RStudio with the new user you created. Enjoy.
I run multiple RStudio servers by isolating them in Singularity instances. Download the Singularity image with the command singularity pull shub://nickjer/singularity-rstudio
I use two scripts:
run-rserver.sh:
Find a free port
#!/usr/bin/env bash
set -ue
thisdir="$(dirname "${BASH_SOURCE[0]}")"
# Return 0 if the port $1 is free, else return 1
is_port_free(){
port="$1"
set +e
netstat -an |
grep --color=none "^tcp.*LISTEN\s*$" | \
awk '{gsub("^.*:","",$4);print $4}' | \
grep -q "^$port\$"
r="$?"
set -e
if [ "$r" = 0 ]; then return 1; else return 0; fi
}
# Find a free port
find_free_port(){
local lower_port="$1"
local upper_port="$2"
for ((port=lower_port; port <= upper_port; port++)); do
if is_port_free "$port"; then r=free; else r=used; fi
if [ "$r" = "used" -a "$port" = "$upper_port" ]; then
echo "Ports $lower_port to $upper_port are all in use" >&2
exit 1
fi
if [ "$r" = "free" ]; then break; fi
done
echo $port
}
port=$(find_free_port 8080 8200)
echo "Access RStudio Server on http://localhost:$port" >&2
"$thisdir/cexec" \
rserver \
--www-address 127.0.0.1 \
--www-port $port
cexec:
Create a dedicated config directory for each instance
Create a dedicated temporary directory for each instance
Use the singularity instance mechanism to avoid that forked R sessions are adopted by PID 1 and stay around after the rserver has shut down. Instead, they become children of the Singularity instance and are killed when that shuts down.
Map the current directory to the directory /data inside the container and set that as the home folder (this step might not be necessary if you don't care about reproducible paths on every machine)
#!/usr/bin/env bash
# Execute a command in the container
set -ue
if [ "${1-}" = "--help" ]; then
cat <<'EOF'
Usage: cexec command [args...]
Execute `command` in the container. This script starts the Singularity
container and executes the given command therein. The project root is mapped
to the folder `/data` inside the container. Moreover, a temporary directory
is provided at `/tmp` that is removed after the end of the script.
EOF
exit 0
fi
thisdir="$(dirname "${BASH_SOURCE[0]}")"
container="rserver_200403.sif"
# Create a temporary directory
tmpdir="$(mktemp -d -t cexec-XXXXXXXX)"
# We delete this directory afterwards, so it's important that $tmpdir
# really has the path to an empty, temporary dir, and nothing else!
# (for example empty string or home dir)
if [[ ! "$tmpdir" || ! -d "$tmpdir" ]]; then
echo "Error: Could not create temp dir $tmpdir"
exit 1
fi
# check if temp dir is empty (this might be superfluous, see
# https://codereview.stackexchange.com/questions/238439)
tmpcontent="$(ls -A "$tmpdir")"
if [ ! -z "$tmpcontent" ]; then
echo "Error: Temp dir '$tmpdir' is not empty"
exit 1
fi
# Start Singularity instance
instancename="$(basename "$tmpdir")"
# Maybe also superfluous (like above)
rundir="$(readlink -f "$thisdir/.run/$instancename")"
if [ -e "$rundir" ]; then
echo "Error: Runtime directory '$rundir' exists already!" >&2
exit 1
fi
mkdir -p "$rundir"
singularity instance start \
--contain \
-W "$tmpdir" \
-H "$thisdir:/data" \
-B "$rundir:/data/.rstudio" \
-B "$thisdir/.rstudio/monitored/user-settings:/data/.rstudio/monitored/user-settings" \
"$container" \
"$instancename"
# Delete the temporary directory after the end of the script
trap "singularity instance stop '$instancename'; rm -rf '$tmpdir'; rm -rf '$rundir'" EXIT
singularity exec \
--pwd "/data" \
"instance://$instancename" \
"$#"

How to execute bash script from any location?

In UNIX, I read that moving a shell script to /usr/local/bin will allow you to execute the script from any location by simply typing "[scriptname].sh" and pressing enter.
I have moved a script there with both normal user and root permissions, but I can't run it.
The script:
#! bin/bash
echo "The current date and time is:"
date
echo "The total system uptime is"
uptime
echo "The users currently logged in are:"
who
echo "The current user is:"
who -m
exit 0
This is what happens when I try to move and then run the script:
[myusername@VDDK13C-6DDE885 ~]$ sudo mv sysinfo.sh /usr/local/bin
[myusername@VDDK13C-6DDE885 ~]$ sysinfo.sh
bash: sysinfo.sh: command not found
If you want to run the script from anywhere, you need to add it to your PATH. Usually /usr/local/bin is in the PATH of every user, so this should work.
So check whether /usr/local/bin is in your PATH by running, in your terminal:
echo $PATH
You should see a lot of paths listed (like /bin, /sbin, etc.). If it's not listed, you can add it. An even better solution is to keep all your scripts inside one directory, for example in your home, and add that to your PATH.
To add a directory to your PATH you can modify your shell init scripts; for example, if you're using the Bash shell you can edit your .bashrc and add the line:
PATH=$PATH:/the_directory_you_want_to_add/:/another_directory/
This will append the new directories to your existing PATH.
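After editing .bashrc, reload it in the current terminal and confirm the script is found (using the sysinfo.sh from the question):
source ~/.bashrc
echo $PATH            # the new directory should now be listed
which sysinfo.sh      # should print its full path instead of an error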
You have to move it somewhere in your path. Try this:
echo $PATH
I bet /usr/local/bin is not listed.
I handle this by making a bin directory in my $HOME (i.e. mkdir ~/bin) and adding this to my ~/.bashrc file (make the file if you don't already have one):
export PATH=~/bin:$PATH
This may seem silly to mention, but did you make sure it is executable? Did you chmod +x script.sh? Does the shell script have the correct path to its shell at the top (i.e. #!/bin/bash)? Also, are you using UNIX or LINUX or FreeBSD? (The last question is important.)
To run executable from any directory:
1) Make a bin directory under your home directory and mv your executable scripts into it.
[root@ip9-114-192-179 ~]# cd /home
[root@ip9-114-192-179 home]# mkdir bin
[root@ip9-114-192-179 home]# ls
bin cloud-init-0.7.4-10.el7.noarch.rpm cloud-user epel-release-7-11.noarch.rpm
2) Move your executable scripts into the bin directory.
mv preeti.sh /home/bin
3) Now add it to your PATH variable and source it.
[root@ip9-114-192-179 ~]# echo 'export PATH="$PATH:/home/bin"' >> /etc/profile
[root@ip9-114-192-179 ~]# source /etc/profile
[root@ip9-114-192-179 ~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/bin
4) Check that the path was added to the PATH variable.
[root@ip9-114-192-179 ~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/bin
5) Verify that the script runs from any random directory, for example:
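For instance, with the preeti.sh from step 2 (its output depends on what the script does):
[root@ip9-114-192-179 ~]# cd /tmp
[root@ip9-114-192-179 tmp]# preeti.sh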

How to always have the same current directory in VIm and in Terminal?

I would like my terminal's current directory to follow my Vim one.
Example:
In TERMINAL:
> pwd
=> /Users/rege
> vim
Then in VIM
:cd /Users/rege/project
<Ctrl-z>(for suspend)
In terminal
> pwd
=> /Users/rege/project
I'm using macOS, zsh, tmux.
I need this because when I try to use tags in Vim, the tags file is looked up relative to my terminal's directory, not Vim's.
So I need the terminal's current directory to change whenever I change Vim's current directory.
What kind of command do you issue in your shell after you suspend Vim? Isn't Vim's :!command enough?
With set autochdir, Vim's current directory follows you as you jump from file to file. With this setting, a simple :!ctags -R . will always create a tags file in the directory of the current file.
Another useful setting is set tags=./tags,tags;$HOME which tells Vim to look for a tags file in the directory of the current file, then in the "current directory" and up and up until it reaches your ~/. You might modify the endpoint to suit your needs. This allows you to use a tags at the root of your project while editing any file belonging to the project.
So, basically, you can go a long way without leaving Vim at all.
If you really need to go back to the shell to issue your commands, :shell (or :sh) launches a new shell with Vim's current directory. When you are done, you only have to $ exit to go back to Vim:
$ pwd
/home/romainl
$ vim
:cd Projects
:sh
$ pwd
/home/romainl/Projects
$ exit
In bash or zsh on Linux you can do this: the current working directory of a process is exposed at /proc/{PID}/cwd as a symlink to the real directory. For zsh, the following code will do the job:
function precmd()
{
emulate -L zsh
(( $#jobstates == 1 )) || return
local -i PID=${${${(s.:.)${(v)jobstates[1]}}[3]}%\=*}
cd $(readlink /proc/$PID/cwd)
}
Note: with this code you won't be able to permanently switch directories in the terminal anymore, only in Vim or for the duration of one command (using cd other-dir && some command).
Note 2: I have no idea how to express this in bash. The straightforward way is to get the PIDs of all children of the shell (using ps --ppid $$ -o CMD), filter out the ps process (it will be shown as a child as well), check that there is only one other child, and use its PID like in the last line above. But I am pretty sure there is a better way, using some shell builtins, like I did with zsh's $jobstates associative array. I also don't remember what the analogue of precmd is in bash.
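For the record, here is a bash sketch of that idea using the jobs builtin instead of ps (the function name is mine; like the zsh version it relies on Linux's /proc, and it fires for any stopped job, not just Vim):
# In .bashrc
_follow_vim_cwd() {
    local pid
    # PID of the most recently suspended job (e.g. vim after <C-z>)
    pid=$(jobs -p %+ 2>/dev/null) || return 0
    [ -n "$pid" ] && cd "$(readlink "/proc/$pid/cwd")"
}
PROMPT_COMMAND=_follow_vim_cwd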
Another idea would be to make Vim save its current directory into some file when you press <C-z>, and make the shell read this in precmd:
" In .vimrc:
function s:CtrlZ()
call writefile([fnamemodify('.', ':p')], $CWDFILE, 'b')
return "\<C-z>"
endfunction
nnoremap <expr> <C-z> <SID>CtrlZ()
# In .zshrc
function vim()
{
local -x CWDFILE=~/.workdirs/$$
test -d $CWDFILE:h || mkdir $CWDFILE:h
command vim "$@"
}
function precmd()
{
local CWDFILE=~/.workdirs/$$
test -e $CWDFILE && cd "$(cat $CWDFILE)"
}
It should be easier to port the above code to bash.
You can open a new terminal like this:
:!xterm -e bash -c "cd %:p:h;bash" &
Actually, I put this in my .vimrc:
nmap <F3> :!xterm -e bash -c "cd %:p:h;bash" &<CR> | :redraw!
For bash users coming by:
Vim: save the working directory at <C-z> (with a map and getcwd()).
Bash: before each prompt, cd to the directory indicated by Vim, using PROMPT_COMMAND.
.bashrc
PROMPT_COMMAND='read -r line 2>/dev/null </tmp/cd_vim'\
'&& > /tmp/cd_vim && cd ${line##\r};'$PROMPT_COMMAND
vimrc
function! s:CtrlZ()
call writefile([getcwd(),''], '/tmp/cd_vim', 'b')
return "\<C-z>"
endfunction
nnoremap <expr> <C-z> <SID>CtrlZ()
This is ZyX's answer edited for bash: https://stackoverflow.com/a/12241861/2544873
