I installed Anaconda (with Python 2.7), and installed Tensorflow in an environment called tensorflow. I can import Tensorflow successfully in that environment.
The problem is that Jupyter Notebook does not recognize the new environment I just created. Whether I start Jupyter Notebook from the GUI Navigator or from the command line within the tensorflow env, there is only one kernel in the menu, called Python [Root], and Tensorflow cannot be imported. Of course, I clicked on that option multiple times, saved the file, and re-opened it, but none of that helped.
Strangely, I can see the two environments when I open the Conda tab on the front page of Jupyter. But when I open the Files tab and try to create a new notebook, I still end up with only one kernel.
I looked at this question:
Link Conda environment with Jupyter Notebook
But there is no such directory as ~/Library/Jupyter/kernels on my computer! The Jupyter directory only has one sub-directory, called runtime.
I am really confused. Are Conda environments supposed to become kernels automatically? (I followed https://ipython.readthedocs.io/en/stable/install/kernel_install.html to manually set up the kernels, but was told that ipykernel was not found.)
I don't think the other answers are working any more, as conda stopped automatically setting environments up as jupyter kernels. You need to manually add kernels for each environment in the following way:
source activate myenv
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
As documented here: http://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments
Also see this issue.
Addendum:
You should be able to install the nb_conda_kernels package with conda install nb_conda_kernels to add all environments automatically, see https://github.com/Anaconda-Platform/nb_conda_kernels
If your environments are not showing up, make sure you have installed
nb_conda_kernels in the environment with Jupyter
ipykernel and ipywidgets in the Python environment you want to access (note that ipywidgets is needed to enable some Jupyter functionality, not environment visibility; see the related docs).
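For example, a minimal sketch of that split, assuming Jupyter runs from the base env and the env you want to use is a hypothetical one named myenv:
conda install -n base nb_conda_kernels
conda install -n myenv ipykernel ipywidgets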
Anaconda's documentation states that
nb_conda_kernels should be installed in the environment from which
you run Jupyter Notebook or JupyterLab. This might be your base conda
environment, but it need not be. For instance, if the environment
notebook_env contains the notebook package, then you would run
conda install -n notebook_env nb_conda_kernels
Any other environments you wish to access in your notebooks must have
an appropriate kernel package installed. For instance, to access a
Python environment, it must have the ipykernel package; e.g.
conda install -n python_env ipykernel
To utilize an R environment, it must have the r-irkernel package; e.g.
conda install -n r_env r-irkernel
For other languages, their corresponding kernels must be installed.
In addition to Python, by installing the appropriate *kernel package, Jupyter can access kernels for a ton of other languages, including R, Julia, Scala/Spark, JavaScript, bash, Octave, and even MATLAB.
Note that at the time of originally posting this, there was a possible cause from nb_conda not yet supporting Python 3.6 environments.
If other solutions fail to get Jupyter to recognize other conda environments, you can always install and run jupyter from within a specific environment. You may not be able to see or switch to other environments from within Jupyter though.
$ conda create -n py36_test -y python=3.6 jupyter
$ source activate py36_test
(py36_test) $ which jupyter
/home/schowell/anaconda3/envs/py36_test/bin/jupyter
(py36_test) $ jupyter notebook
Notice that I am running Python 3.6.1 in this notebook:
Note that if you do this with many environments, the added storage space from installing Jupyter into every environment may be undesirable (depending on your system).
The annoying thing is that, in your tensorflow environment, you can run jupyter notebook without jupyter being installed in that environment. Just run
(tensorflow) $ conda install jupyter
and the tensorflow environment should now be visible in Jupyter Notebooks started in any of your conda environments as something like Python [conda env:tensorflow].
I had to run all the commands mentioned in the top 3 answers to get this working:
conda install jupyter
conda install nb_conda
conda install ipykernel
python -m ipykernel install --user --name mykernel
Just run conda install ipykernel in your new environment; only then will you get a kernel for that env. This works even if you have different versions installed in each env, and it doesn't install jupyter notebook again. You can start your notebook from any env and you will be able to see the newly added kernels.
Summary (tldr)
If you want the 'python3' kernel to always run the Python installation from the environment where it is launched, delete the User 'python3' kernel, which is taking precedence over whatever the current environment is with:
jupyter kernelspec remove python3
Full Solution
I am going to post an alternative and simpler solution for the following case:
You have created a conda environment
This environment has jupyter installed (which also installs ipykernel)
When you run the command jupyter notebook and create a new notebook by clicking 'python3' in the 'New' dropdown menu, that notebook executes python from the base environment and not from the current environment.
You would like it so that launching a new notebook with 'python3' within any environment executes the Python version from that environment and NOT the base
I am going to use the name 'test_env' for the environment for the rest of the solution. Also, note that 'python3' is the name of the kernel.
The currently top-voted answer does work, but there is an alternative. It says to do the following:
python -m ipykernel install --user --name test_env --display-name "Python (test_env)"
This will give you the option of using the test_env environment regardless of what environment you launch jupyter notebook from. But, launching a notebook with 'python3' will still use the Python installation from the base environment.
What is likely happening is that a User-level python3 kernel exists. Run the command jupyter kernelspec list to list all of your kernelspecs. For instance, if you are on a Mac you will get back something like the following (my user name is Ted).
python3 /Users/Ted/Library/Jupyter/kernels/python3
What Jupyter is doing here is searching through three different paths looking for kernels. It goes from User, to Env, to System. See this document for more details on the paths it searches for each operating system.
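You can inspect those locations yourself: kernelspecs live under the data directories that jupyter --paths prints, so a quick check (assuming only that jupyter is on your PATH) is:
jupyter --paths
jupyter kernelspec list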
The kernel above is in the User path, meaning it will be available regardless of the environment from which you launch a jupyter notebook. This also means that if there is another 'python3' kernel at the environment level, you will never be able to access it.
To me, it makes more sense that choosing the 'python3' kernel from the environment you launched the notebook from should execute Python from that environment.
You can check to see if you have another 'python3' environment by looking in the Env search path for your OS (see the link to the docs above). For me (on my mac), I issued the following command:
ls /Users/Ted/anaconda3/envs/test_env/share/jupyter/kernels
And I indeed had a 'python3' kernel listed there.
Thanks to this GitHub issue comment (look at the first response), you can remove the User 'python3' environment with the following command:
jupyter kernelspec remove python3
Now when you run jupyter kernelspec list, assuming the test_env is still active, you will get the following:
python3 /Users/Ted/anaconda3/envs/test_env/share/jupyter/kernels/python3
Notice that this path is located within the test_env directory. If you create a new environment, install jupyter, activate it, and list the kernels, you will get another 'python3' kernel located in its environment path.
The User 'python3' kernel was taking precedence over any of the Env 'python3' kernels. By removing it, the active environment 'python3' kernel was exposed and able to be chosen every time. This eliminates the need to manually create kernels. It also makes more sense in terms of software development where one would want to isolate themselves into a single environment. Running a kernel that is different from the host environment doesn't seem natural.
It also seems that this User 'python3' kernel is not installed for everyone by default, so not everyone is confronted with this issue.
To add a conda environment to Jupyter:
In Anaconda Prompt:
run conda activate <env name>
run conda install -c anaconda ipykernel
run python -m ipykernel install --user --name=<env name>
** tested on conda 4.8.3 and 4.11.0
$ conda install nb_conda_kernels
(in the conda environment where you run jupyter notebook) will make all conda envs available automatically. For access to other environments, the respective kernels must be installed. Here's the ref.
This worked for me on Windows 10 and is a current solution:
1) Go inside that conda environment ( activate your_env_name )
2) conda install -n your_env_name ipykernel
3) python -m ipykernel install --user --name build_central --display-name "your_env_name"
(NOTE : Include the quotes around "your_env_name", in step 3)
The nb_conda_kernels package is the best way to use jupyter with conda. With minimal dependencies and configuration, it allows you to use other conda environments from a jupyter notebook running in a different environment. Quoting its documentation:
Installation
This package is designed to be managed solely using conda. It should be installed in the environment from which you run Jupyter Notebook or JupyterLab. This might be your base conda environment, but it need not be. For instance, if the environment notebook_env contains the notebook package, then you would run
conda install -n notebook_env nb_conda_kernels
Any other environments you wish to access in your notebooks must have an appropriate kernel package installed. For instance, to access a Python environment, it must have the ipykernel package; e.g.
conda install -n python_env ipykernel
To utilize an R environment, it must have the r-irkernel package; e.g.
conda install -n r_env r-irkernel
For other languages, their corresponding kernels must be installed.
Then all you need to do is start the jupyter notebook server:
conda activate notebook_env # only needed if you are not using the base environment for the server
# conda install jupyter # in case you have not installed it already
jupyter notebook
Despite the plethora of answers and @merv's efforts to improve them, it is still hard to find a good one. I made this one CW, so please vote it to the top or improve it!
This is an old thread, but running this in Anaconda prompt, in my environment of interest, worked for me:
ipython kernel install --name "myenvname" --user
We have struggled a lot with this issue, and here's what works for us. If you use the conda-forge channel, it's important to make sure you are using updated packages from conda-forge, even in your Miniconda root environment.
So install Miniconda, and then do:
conda config --add channels conda-forge --force
conda update --all -y
conda install nb_conda_kernels -y
conda env create -f custom_env.yml -q --force
jupyter notebook
and your custom environment will show up in Jupyter as an available kernel, as long as ipykernel was listed for installation in your custom_env.yml file, like this example:
name: bqplot
channels:
- conda-forge
- defaults
dependencies:
- python>=3.6
- bqplot
- ipykernel
Just to prove it working with a bunch of custom environments, here's a screen grab from Windows:
I ran into this same problem where my new conda environment, myenv, couldn't be selected as a kernel for a new notebook. And running jupyter notebook from within the env gave the same result.
My solution, and what I learned about how Jupyter notebooks recognizes conda-envs and kernels:
Installing jupyter and ipython to myenv with conda:
conda install -n myenv ipython jupyter
After that, running jupyter notebook outside any env listed myenv as a kernel, along with my previous environments.
Python [conda env:old]
Python [conda env:myenv]
Running the notebook once I activated the environment:
source activate myenv
jupyter notebook
hides all my other environment-kernels and only shows my language kernels:
python 2
python 3
R
This has been so frustrating. My problem was that within a newly constructed conda python36 environment, Jupyter refused to load seaborn, even though seaborn was installed within that environment. It seemed able to import plenty of other packages from the same environment, for example numpy and pandas, but just not seaborn. I tried many of the fixes suggested here and on other threads without success, until I realised that Jupyter was not running the kernel Python from within that environment but the system Python, even though a decent-looking kernel and kernel.json were already present in the environment. It was only after reading this part of the ipython documentation:
https://ipython.readthedocs.io/en/latest/install/kernel_install.html#kernels-for-different-environments
and using these commands:
source activate other-env
python -m ipykernel install --user --name other-env --display-name "Python (other-env)"
I was able to get everything going nicely. (I didn't actually use the --user flag.)
One thing I have not yet figured out is how to set the default Python to be the "Python (other-env)" one. At present an existing .ipynb file opened from the Home screen will use the system Python, and I have to use the Kernel menu's "Change kernel" option to select the environment Python.
I had a similar issue and found a solution that works on Mac, Windows and Linux. It takes a few key ingredients, all from the answers above:
To be able to see a conda env in Jupyter Notebook, you need:
the following package in your base env:
conda install nb_conda
the following package in each env you create:
conda install ipykernel
check the configuration of jupyter_notebook_config.py
first check whether you have a jupyter_notebook_config.py in one of the locations given by jupyter --paths
if it doesn't exist, create it by running jupyter notebook --generate-config
add or be sure you have the following: c.NotebookApp.kernel_spec_manager_class='nb_conda_kernels.manager.CondaKernelSpecManager'
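Putting those steps together, a minimal sketch, assuming the config lives in the default ~/.jupyter location (check jupyter --paths if yours differs):
jupyter notebook --generate-config
echo "c.NotebookApp.kernel_spec_manager_class = 'nb_conda_kernels.manager.CondaKernelSpecManager'" >> ~/.jupyter/jupyter_notebook_config.py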
The envs you can see in your terminal:
In JupyterLab you can see the same envs as above, in both the Notebook and Console sections:
And you can choose your env when you have a notebook open:
The safe way is to create a specific env from which you will run your jupyter lab command. Activate that env, then add the jupyter lab extensions you need. Then you can run jupyter lab.
While @coolscitist's answer worked for me, there is also a way that does not clutter your kernel environment with the complete jupyter package and its dependencies.
It is described in the ipython docs and is (I suspect) only necessary if you run the notebook server in a non-base environment.
conda activate name_of_your_kernel_env
conda install ipykernel
python -m ipykernel install --prefix=/home/your_username/.conda/envs/name_of_your_jupyter_server_env --name 'name_of_your_kernel_env'
You can check if it works using
conda activate name_of_your_jupyter_server_env
jupyter kernelspec list
First you need to activate your environment.
pip install ipykernel
Next you can add your virtual environment to Jupyter by typing:
python -m ipykernel install --name=my_env
Follow the instructions in the IPython documentation for adding different conda environments to the list of kernels to choose from in Jupyter Notebook. In summary, after installing ipykernel, you must activate each conda environment one by one in a terminal and run the command python -m ipykernel install --user --name myenv --display-name "Python (myenv)", where myenv is the environment (kernel) you want to add.
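For example, a rough sketch of registering several envs in one go (env1 and env2 are hypothetical names; assumes a bash shell with conda initialized):
for env in env1 env2; do
    conda activate "$env"                 # switch to the env being registered
    conda install -y ipykernel            # make sure ipykernel is present there
    python -m ipykernel install --user --name "$env" --display-name "Python ($env)"
done
conda deactivate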
Possible Channel-Specific Issue
I had this issue (again) and it turned out I had installed nb_conda_kernels from the conda-forge channel; removing it and reinstalling from the anaconda channel instead fixed it for me.
Update: I again had the same problem with a new env, this time I did install nb_conda_kernels from anaconda channel, but my jupyter_client was from the conda-forge channel. Uninstalling nb_conda_kernels and reinstalling updated that to a higher-priority channel.
So make sure you've installed from the correct channels :)
I encountered this problem when using vscode server.
In the conda environment named "base", I installed the 1.2.0 version of opennmt-py, but I want to run jupyter notebook in the conda environment "opennmt2", which contains code using opennmt-py 2.0.
I solved the problem by reinstalling jupyter in conda(opennmt2).
conda install jupyter
After reinstalling, executing jupyter notebook in the opennmt2 environment will execute the newly installed jupyter
where jupyter
/root/miniconda3/envs/opennmt2/bin/jupyter
/root/miniconda3/bin/jupyter
For conda 4.5.12, what works for me is (my virtual env is called nwt)
conda create --name nwt python=3
after that I need to activate the virtual environment and install the ipykernel
activate nwt
pip install ipykernel
then what works for me is:
python -m ipykernel install --user --name env_name --display-name "name of your choosing."
As an example, I am using 'nwt' as the display name for the virtual env. After running the commands above, run 'jupyter notebook' in Anaconda Prompt again. What I get is:
Using only environment variables:
python -m ipykernel install --user --name $(basename $VIRTUAL_ENV)
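A usage sketch, assuming a hypothetical virtualenv at ~/venvs/myproj:
source ~/venvs/myproj/bin/activate        # sets $VIRTUAL_ENV
pip install ipykernel
python -m ipykernel install --user --name "$(basename "$VIRTUAL_ENV")"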
I just wanted to add to the previous answers: in case installing nb_conda_kernels, ipywidgets and ipykernel doesn't work, make sure your version of Jupyter is up to date. My envs suddenly stopped showing up after a period of everything working fine, and they resumed working after I simply updated jupyter through the Anaconda Navigator.
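From a terminal, the rough equivalent would be (a sketch, assuming Jupyter lives in your base env):
conda update -n base jupyter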
In my case, using Windows 10 and conda 4.6.11, by running the commands
conda install nb_conda
conda install -c conda-forge nb_conda_kernels
from the terminal while having the environment active didn't do the job after I opened Jupyter from the same command line with jupyter notebook.
The solution was apparently to open Jupyter from the Anaconda Navigator by going to my environment in Environments: open Anaconda Navigator, select the environment under Environments, press the "play" button on the chosen environment, and select 'Open with Jupyter Notebook'.
Environments in Anaconda Navigator to run Jupyter from the selected environment
While installing LabelImg on an M1 Mac using the command below
pip install pyqt5 lxml
This is the error I got
ERROR: pyqt5 from https://files.pythonhosted.org/packages/7c/5b/e760ec4f868cb77cee45b4554bf15d3fe6972176e89c4e3faac941213694/PyQt5-5.14.0.tar.gz#sha256=0145a6b7de15756366decb736c349a0cb510d706c83fda5b8cd9e0557bc1da72 has a pyproject.toml file that does not comply with PEP 518: 'build-system.requires' contains an invalid requirement: 'sip >=5.0.1 <6'
How do I install the labelImg annotation tool on an M1 Mac?
I got it to work by using the following commands
brew install pyqt@5
pip install labelimg
And that's it, it just works
You just need to type labelimg in the Terminal and the app will start running
I don't know why they don't tell you this in the installation guide
Alrighty!
On MacOS Monterey, none of the other solutions posted here solved this problem for me. However, I managed to easily solve the issue, without a virtual environment or too much fiddling about like so:
Firstly, you have to download all labelImg packages from this link:
https://github.com/tzutalin/labelImg#macos
(You can download it as a .zip file or clone it)
Unzip and then in your terminal cd into whatever directory you downloaded the above files to.
Then run the following commands in order:
pyrcc5 -o libs/resources.py resources.qrc
Then,
pip3 install lxml
Finally,
python3 labelImg.py
It should run without an issue now.
You can go one of two ways:
Using brew:
You can use homebrew to install the dependencies, like qt and libxml2. This will let your package manager handle everything and generally should solve the problem. Then you can run
python3 labelImg.py
Using Virtual Environments:
This is the more recommended way to go about in such cases. You can use conda, pipenv or venv to create a virtual environment which is isolated from your system python installation. Then you can try to install it as explained in the README.rst in the root of the repository:
brew install python3
pip3 install pipenv
pipenv run pip install pyqt5==5.12.1 lxml
pipenv run make qt5py3
pipenv run python3 labelImg.py
[Optional] rm -rf build dist; python setup.py py2app -A;mv "dist/labelImg.app" /Applications
You can try the two methods and get back with the errors if there are any.
This is my note.
I just succeeded on my Mac with the M1 chip.
CHECK THIS OUT!
Installation of labelimg on mac m1 chip
my first reference
my second reference
First, you must use a terminal running under Rosetta
Then, you already have python3
Then...
[Done]
# check where python3 is
$ where python3
# create env
$ /usr/bin/python3 -m venv env
# check where the env is
$ where env
# activate the env
$ source env/bin/activate
# updated to the newest
$ pip install --upgrade pip
# installation of PyQt5
$ pip install PyQt5
# start to run labelImg.py
$ cd Documents/repos/labelImg
$ pip3 install pyqt5 lxml
$ make qt5py3
# [run ok!!]
$ python3 labelImg.py
Using Conda
Create a virtual environment in conda and activate it
conda create -n venv
conda activate venv
Install pyqt using conda
conda install pyqt
Install lxml using pip
pip install lxml
Change directory to the downloaded/cloned labelImg folder
cd path/to/labelImg/folder/
Make qt5py3
make qt5py3
Run LabelImg
python labelImg.py
On my Ubuntu 18.04.1 LTS I have installed the pipenv package using the pip package manager. The package is accessible from an ssh login bash shell.
$ pipenv --version
will print out following output:
pipenv, version 2018.10.13
What I want:
I need to run the $ pipenv --version command using an absolute path. This is how it should look:
$ /absolute/path/to/pipenv --version
However, so far it does not work this way.
What I tried:
$ pip show pipenv
Name: pipenv
Version: 2018.10.13
Location: /user/.local/lib/python2.7/site-packages
Requires: enum34, virtualenv, typing, certifi, virtualenv-clone, pip, setuptools
...
I copied the location from the output above and tried these, but they still do not work:
$ /user/.local/lib/python2.7/site-packages/pipenv --version
$ /user/.local/lib/python2.7/site-packages/pipenv/pipenv --version
I also tried:
which pipenv - outputs empty string
Recapping the comments, if pipenv command is available, you can:
run command -v pipenv or which pipenv if pipenv is an executable in PATH
run type pipenv if pipenv is an alias or a function
If the command is not available, you can extract the info about the executable from the package metadata: run
$ pip show -f pipenv
to list the files belonging to the pipenv package (If the output is empty, it means that pipenv is not installed for the Python version pip refers to). Among other things, it will print you the package location, similar to
Location: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages
and with other files, the executable:
../../../bin/pipenv
This is the path relative to the Location above - the resolved path leads you to the executable file.
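Putting the two together, a sketch using the example Location above (your paths will differ; realpath is available on most Linux systems):
realpath /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/../../../bin/pipenv
# -> /Library/Frameworks/Python.framework/Versions/3.5/bin/pipenv
/Library/Frameworks/Python.framework/Versions/3.5/bin/pipenv --version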
I am new to Linux, but I am having a lot of trouble installing an R package that does not have Windows binaries. I would rather not set up a full Linux install and move everything over. Judging by Windows Interoperability, it seems like this should be possible.
I want to do any one of the options from the GNU R package cplexAPI documentation below in the block quote. I have tried:
C:\Users\zejas>bash
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ R CMD INSTALL cplexAPI_1.3.2.tar.gz
The program 'R' is currently not installed. You can install it by typing:
sudo apt-get install r-base-core
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$
Based on the example:
$/mnt/c/Windows/System32/notepad.exe
I have tried:
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ $/mnt/C/Program Files/Microsoft/MRO-3.3.1/bin
bash: $/mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ /mnt/C/Program Files/Microsoft/MRO-3.3.1/bin
bash: /mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ /mnt/C/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: /mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ $/mnt/C/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: $/mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ C/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ $/mnt/C/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: $/mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ /mnt/C/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: /mnt/C/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ /mnt/c/Program Files/Microsoft/MRO-3.3.1/bin/R.exe
bash: /mnt/c/Program: No such file or directory
zejas#DESKTOP-JASON:/mnt/c/Users/zejas$ /mnt/c/Windows/System32/notepad.exe
bash: /mnt/c/Windows/System32/notepad.exe: cannot execute binary file: Exec format error
Any ideas?
----------------------------------------------------------------------------
Linux and MacOS X installation
----------------------------------------------------------------------------
The locations of the CPLEX callable library and the CPLEX include
directory can be found in <cplex_dir>/README.html, where <cplex_dir>
is the CPLEX installation directory. Also have a look at the
variables CLNFLAGS and COPT in the example Makefile of your CPLEX
installation. There, the variable CPLEXLIBDIR points to the callable
library directory.
There are several ways of installing the cplexAPI package:
1) Set variables PKG_CFLAGS, PKG_CPPFLAGS and PKG_LIBS directly, e.g.:
R CMD INSTALL --configure-args=" \ PKG_CFLAGS='-m64 -fPIC' \
PKG_CPPFLAGS='-I/cplex/include' \
PKG_LIBS='-L${CPLEXLIBDIR} -lcplex -m64 -lm -pthread'" \
cplexAPI_x.x.x.tar.gz
PKG_CFLAGS is optional, but both PKG_CPPFLAGS and PKG_LIBS must be
given.
2) Use --with-cplex-include and --with-cplex-lib:
--with-cplex-include=PATH with PATH being the include directory
of CPLEX
--with-cplex-lib=PATH with PATH being the directory
containing the
callable library of CPLEX.
R CMD INSTALL --configure-args=" \
--with-cplex-include=/path/to/include/dir \
--with-cplex-lib=/path/to/lib/dir" cplexAPI_x.x.x.tar.gz
When using these options, both arguments --with-cplex-lib and
--with-cplex-include must be given.
--with-cplex-link=-l... libraries to pass to the linker during
compilation.
If --with-cplex-link is not given, '-lcplex -lm -pthread' will be
used as default.
--with-cplex-cflags=... optional CFLAGS
A further argument can be used in order to use the debugging
routines included in the C API of CPLEX:
--with-cplex-check=PATH with PATH being the directory
containing the
file check.c from the CPLEX examples directory.
R CMD INSTALL --configure-args=" \
--with-cplex-lib='/path/to/lib/dir' \ --with-cplex-include='/path/to/include/dir' \ --with-cplex-link='-lcplex -m64 -lm -pthread' \ --with-cplex-cflags='-m64 -fPIC' \ --with-cplex-check='/path/to/examples/dir/examples/src/c'" \ cplexAPI_x.x.x.tar.gz
3) Give the location of the CPLEX installation:
--with-cplex-dir=PATH
with PATH being the CPLEX directory. This is not the CPLEX
installation directory <cplex_dir>; it is the directory containing
the lib/, include/ and examples/ directories. Usually this is
<cplex_dir>/cplex.
R CMD INSTALL --configure-args=" \
--with-cplex-dir='<cplex_dir>/cplex'" cplexAPI_x.x.x.tar.gz
This procedure will take the first system type and library format
it finds. Information required for the compilation is taken from the
example Makefile.
4) Give no information:
R CMD INSTALL cplexAPI_x.x.x.tar.gz
This procedure will try to find the CPLEX interactive optimizer, or the CPLEX_BIN environment variable pointing to the
CPLEX interactive optimizer will be used. The directory two levels
above is used as CPLEX directory, all other information is taken
from the CPLEX example Makefile as in #3 above.
First, to access a path with spaces in it, use double quotes:
"/mnt/c/Program Files/Microsoft/MRO-3.3.1/bin/R.exe"
Second, you can only run Windows programs from bash if you have build 14951 of Windows 10 or later. This is noted at the top of the MSDN page you linked to:
The Windows Subsystem for Linux can invoke native Windows binaries and be invoked from a Windows command line. This feature is available to Windows 10 users running Anniversary Update build 14951.
This build is still in Windows Insider release, so isn't generally available yet (latest GA is build 14393 as of 16 Jan 2017). For now, you can install cbwin if you want this functionality.
Third, running R from a Linux shell won't magically solve the reason why a precompiled binary package isn't available: cplexAPI depends on the CPLEX Studio application from IBM, and you still need to have this available for the R package to work. Assuming you do have this available, you can download the cplexAPI source and compile the package from Windows, without touching the bash shell.