I have a conda environment called test and a requirements file, requirements.txt.
What I need to achieve is to check the versions of different packages against those in requirements.txt and display which are up to date and which are not.
I need to write a Python script for the task. For example, if requirements.txt has django==2.0.6, I have to check it against the installed version of Django in the test environment and display accordingly.
The steps I had in mind are:
Activating the environment inside the script
Running the "conda list" command and saving all the packages along with their versions in a dictionary as key-value pairs
Matching against requirements.txt
How do I activate the environment inside a Python script using "conda activate test" and then run the command "conda list"?
conda list accepts the -n argument to specify an environment, like this:
conda list -n test
so there is no need to activate the conda environment.
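Building on that, here is a minimal sketch of the comparison script. It assumes the environment is named test, that requirements.txt only contains simple name==version pins, and that conda is on your PATH:
import json
import subprocess

ENV_NAME = "test"  # the conda environment to check against

# Read the pinned versions from requirements.txt (only name==version lines).
required = {}
with open("requirements.txt") as fh:
    for line in fh:
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==", 1)
            required[name.lower()] = version.strip()

# "conda list -n test --json" prints the installed packages as JSON,
# so the environment never has to be activated.
result = subprocess.run(
    ["conda", "list", "-n", ENV_NAME, "--json"],
    capture_output=True, text=True, check=True,
)
installed = {pkg["name"].lower(): pkg["version"] for pkg in json.loads(result.stdout)}

for name, wanted in sorted(required.items()):
    have = installed.get(name)
    if have is None:
        print(f"{name}: not installed (requirements want {wanted})")
    elif have == wanted:
        print(f"{name}: up to date ({have})")
    else:
        print(f"{name}: installed {have}, requirements want {wanted}")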
I've got 10 jupyter notebooks, each with many unique package dependencies (that conflict), so I've created a different anaconda environment for each notebook. Each notebook relies on the output of the previous one, which I store and read from local csv files.
Right now I am running each jupyter notebook manually (with their own anaconda environment) to get the final result. Is there a way to run a single script that runs the code of all the jupyter notebooks sequentially (with the correct anaconda environment for each one)?
You could do it in Python and use runipy. You just have to install it with:
pip install runipy
An example on how to use it from the docs:
from runipy.notebook_runner import NotebookRunner
from IPython.nbformat.current import read
notebook = read(open("MyNotebook.ipynb"), 'json')
r = NotebookRunner(notebook)
r.run_notebook()
If you want to run each notebook in a different environment, you can activate each conda environment from a Python script. There are multiple ways to do so; one of them is this:
subprocess.run('source activate environment-name && "enter command here" && source deactivate', shell=True)
Replace the "enter command here" with the command you want to run. You don't need the "source deactivate" at the end of the command, but it's included just to be safe.
This will temporarily activate the Anaconda environment for the duration of the subprocess call, after which the environment will revert back to your original environment. This is useful for running any commands you want in a temporary environment.
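Putting the two pieces together, a minimal sketch that runs each notebook in its own environment (the notebook filenames and environment names below are placeholders, and runipy is assumed to be installed in every environment):
import subprocess

# Placeholder mapping of notebook -> conda environment; adjust to your setup.
notebooks = [
    ("step1.ipynb", "env1"),
    ("step2.ipynb", "env2"),
]

for notebook, env in notebooks:
    # Each subprocess activates the right environment, runs the notebook with
    # runipy's command-line tool, and the activation is discarded when the
    # subprocess exits.
    cmd = f"source activate {env} && runipy {notebook}"
    subprocess.run(cmd, shell=True, executable="/bin/bash", check=True)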
I am new to the Unix environment. I have a small problem: due to an issue I don't really understand, I had to create a new conda environment, and now I wonder if I can transfer all the packages from my old environment to the new one (or do I need to install them again?).
Thanks in advance for your help.
If you want to duplicate an env (say foo_env) into a new env (bar_env), you can use
conda create --clone foo_env --name bar_env
If you already have a new env (bar_env) and want to install the packages from an existing env (foo_env), you can use
conda env export --name foo_env > foo.yaml
conda env update --name bar_env --file foo.yaml
Note that the conda env commands don't prompt for changes, so make sure to check foo.yaml to verify that you really do want all the packages installed. Be aware that the update will replace any already-installed packages whose versions differ.
I'm exploring some bioinformatics data and I like to use R notebooks (i.e. Rmarkdown) when I can. Right now, I need to use a command line tool to analyze a VCF file and I would like to do it through a Bash code chunk in the Rmarkdown notebook.
The problem is that the command I want to use was installed with conda into my conda environment. The tool is bcftools. When I try to access this command, I get this error (code chunk commented out to show rmarkdown code chunk format):
#```{bash}
bcftools view -H test.vcf.gz
#```
/var/folders/9l/phf62p1s0cxgnzp4hgl7hy8h0000gn/T/RtmplzEvEh/chunk-code-6869322acde0.txt: line 3: bcftools: command not found
Whereas if I run it from the Terminal, I get output (using the conda environment called "binfo"):
> bcftools view -H test.vcf.gz | head -n 3
chr10 78484538 . A C . PASS DP=57;SOMATIC;SS=2;SSC=16;GPV=1;SPV=0.024109 GT:GQ:DP:RD:AD:FREQ:DP4 0/0:.:34:33:0:0%:0,33,0,0 0/1:.:23:19:4:17.39%:1,18,0,4
chr12 4333138 . G T . PASS DP=119;SOMATIC;SS=2;SSC=14;GPV=1;SPV=0.034921 GT:GQ:DP:RD:AD:FREQ:DP4 0/0:.:72:71:1:1.39%:71,0,1,0 0/1:.:47:42:5:10.64%:42,0,5,0
chr15 75086860 . C T . PASS DP=28;SOMATIC;SS=2;SSC=18;GPV=1;SPV=0.013095 GT:GQ:DP:RD:AD:FREQ:DP4 0/0:.:15:15:0:0%:4,11,0,0 0/1:.:13:8:5:38.46%:5,3,1,4
(binfo)
So, how do I access tools installed with conda/in my conda env from an R notebook/Rmarkdown bash code chunk? I searched for quite a while and could not find anyone talking about running conda commands in a shell chunk in Rmarkdown. Any help would be appreciated because I like the R notebook format for exploratory analysis.
Passing Arguments to Engines
If your Conda is properly configured to work in bash, then you can use engine.opts to tell bash to launch in login mode (i.e., source your .bash_profile (Mac) or .bashrc (Linux)):
bash
```{bash engine.opts='-l'}
bcftools view -H test.vcf.gz
```
zsh
If working with zsh (e.g., macOS 10.15 Catalina users), then the interactive flag, --interactive|-i, is what you want (credit: @Leo).
```{zsh engine.opts='-i'}
bcftools view -H test.vcf.gz
```
Again, this presumes you've previously run conda init zsh to set up Conda to work with the shell.
Note on Reproducibility
Since reproducibility is usually a concern in scientific work, I will add that you may want to do something to capture the state of your Conda environment. For example, if you are working in version control, then commit a conda env export > environment.yaml. Another option would be to output that info directly at the end of the Rmd, like what is usually done with sessionInfo(). That is,
```{bash engine.opts='-l', comment=NA}
conda env export
```
where the comment=NA is so that the output can be cleanly copied from the rendered version.
Quick solution for bash: prepend the following init script to your Bash scripts.
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"
# you may need to activate the "base" environment explicitly
conda activate base
Detail
When you open your terminal, an interactive shell is spawned, but your script runs in a non-interactive shell. The Bash configuration file ~/.bashrc is not read for scripts, so the conda initialization is skipped and your "base" environment is never put on PATH.
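The same idea also works when shelling out from Python, similar to the earlier question; a minimal sketch, where base and the python --version command are just placeholders:
import subprocess

# The hook line re-creates what ~/.bashrc would normally do for an
# interactive shell, so "conda activate" works inside this subprocess too.
cmd = (
    'eval "$(conda shell.bash hook)" '
    "&& conda activate base "
    "&& python --version"
)
subprocess.run(cmd, shell=True, executable="/bin/bash", check=True)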
References
Python - Activate conda env through shell script
Is there some way to tell Jupyter Notebook what the default conda env should be when creating new notebooks? Launching it on AWS Deep Learning AMIs gives me a pretty long list, but I really only care about one specific env.
If you go to your terminal first and activate the virtual environment:
$ source venv/bin/activate
or
$ conda activate venv
for a conda environment.
And after that step, do the following:
$ jupyter notebook
And when you make a new script, it should give you the option of choosing Python 3/Python 2; choose the one that serves your purpose. This script will be using the activated environment. You can check it by importing a library specific to that environment.
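For instance, a quick check from a notebook cell: sys.executable shows which Python interpreter, and therefore which environment, the kernel is actually using.
import sys

# Should point into the activated environment,
# e.g. .../envs/venv/bin/python if the env was picked up correctly.
print(sys.executable)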
I have created multiple Conda envs which I use as IPython kernels in Hydrogen for Atom using the following suggestion:
source activate thisenv
python -m ipykernel install --user --name thisenv
After deleting such a Conda env, however, Atom-Hydrogen still gives me that kernel as an option to select from when compiling the code.
How does one unlink or remove a Conda env after it is linked as a kernel to Atom-Hydrogen?
The original command you ran was to register your env as a kernel, which on OS X results in creating a folder in a common area, like so
/Users/<user>/Library/Jupyter/kernels/thisenv
If you only want to deregister the environment (but not delete it), you can simply delete the thisenv folder from this directory (or wherever the equivalent folder is on other systems). It is not necessary to remove the environment from Conda.
If you're having trouble finding where the env is registered, you can use the kernelspecs package to locate all the available kernels. This is the package that Atom uses to find kernels.
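If it helps, a small sketch using jupyter_client from Python that lists where each kernel is registered, so you can find and delete the stale thisenv folder (this assumes jupyter_client is installed in the environment you run it from):
from jupyter_client.kernelspec import KernelSpecManager

# Maps each kernel name to the directory holding its kernel.json;
# deleting that directory deregisters the kernel.
for name, path in KernelSpecManager().find_kernel_specs().items():
    print(name, "->", path)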