I defined JUPYTER_PATH to contain only a trusted set of kernels.
Is there an option to disable the search for kernel spec files in all other directories?
I solved it by running:
jupyter notebook '--KernelSpecManager.whitelist=["ir"]'
Source of solution in this comment: https://github.com/jupyter/jupyter_client/issues/144#issuecomment-242148826
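If you prefer not to pass the flag on every launch, the same trait can be set in a Jupyter config file instead. A minimal sketch, assuming the default config location (newer jupyter_client versions rename the setting to allowed_kernelspecs):
# ~/.jupyter/jupyter_notebook_config.py
c = get_config()

# Only kernel specs whose names are listed here are exposed to the notebook server
c.KernelSpecManager.whitelist = {"ir"}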
I'm coding on a .ipynb file on a linux server.
The linux server I use has multiple GPUs on it, but I should only use an idle GPU so as not to accidentally abort other people's programs.
I already know that for an ordinary .py file we can choose a particular GPU from the command line (e.g. export CUDA_VISIBLE_DEVICES=#), but will that work for a Jupyter notebook? If not, how can I specify a GPU to work on?
You have to choose the device by its name. For instance, there may be 3 GPU devices available, namely "cuda:0", "cuda:1" and "cuda:2". To choose the third one with PyTorch, run the following code:
import torch

# Use the third GPU if CUDA is available, otherwise fall back to the CPU
if torch.cuda.is_available():
    dev = "cuda:2"
else:
    dev = "cpu"
device = torch.device(dev)
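Once device is set, move your tensors and model onto it so the computation actually runs on that GPU; a minimal sketch (the tensor shapes and the Linear layer are just placeholders):
x = torch.randn(8, 16).to(device)           # move an input batch to the chosen device
model = torch.nn.Linear(16, 4).to(device)   # move the model parameters as well
out = model(x)                              # this forward pass now runs on "cuda:2"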
With TensorFlow it looks like this:
import tensorflow as tf

# Restrict TensorFlow to the first GPU it detects
gpus = tf.config.list_physical_devices('GPU')
if len(gpus) > 0:
    tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
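As for the CUDA_VISIBLE_DEVICES approach from the question: it works in a notebook too, provided you set the variable before the framework is imported for the first time in that kernel. A small sketch (GPU index 2 is just an example):
import os

# Must run before the first "import torch" / "import tensorflow" in this kernel;
# afterwards the process only sees physical GPU 2, exposed as "cuda:0" / "GPU:0".
os.environ["CUDA_VISIBLE_DEVICES"] = "2"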
I have a Jupyter notebook running within an Amazon SageMaker Studio Lab (https://studiolab.sagemaker.aws/) environment, and I want to use TensorBoard to monitor my model's performance inside the notebook.
I have used the following commands to set up TensorBoard:
%load_ext tensorboard
# tb_log_dir variable holds the path to the log directory
%tensorboard --logdir tb_log_dir
But nothing shows up in the output of the cell where I execute the commands. (The two buttons that do appear in the output are not responding either, by the way.)
How can I solve this problem? Any suggestions would be appreciated.
I would try the canonical way of using TensorBoard in AWS SageMaker, which should also be supported by Studio Lab; it is described here. Basically, install tensorboard and launch it against your EFS_PATH_LOG_DIR from the embedded console (you can also do the following from a cell):
pip install tensorboard
tensorboard --logdir <EFS_PATH_LOG_DIR>
Be careful with the EFS_PATH_LOG_DIR: make sure this folder is a valid path from the location you are in. For example, by default you are located in studio-lab-user/sagemaker-studiolab-notebooks/, so the proper command would be !tensorboard --logdir logs/fit.
Then open a browser to:
https://<YOUR URL>/studiolab/default/jupyter/proxy/6006/
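For completeness, the logs/fit directory referenced above has to be produced by the training code itself; a minimal sketch with Keras (the toy model and random data are placeholders, only the log_dir matters):
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Write TensorBoard event files under logs/fit, the directory passed to --logdir
tb_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/fit")
model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
          epochs=2, callbacks=[tb_cb])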
I usually use Jupyter as my interactive environment for Julia; now I am switching to JuliaPro, as they claim it is the fastest and easiest way to program in Julia. But I cannot load my .ipynb notebooks in JuliaPro. Are they compatible with each other? How can I work with my notebooks in JuliaPro? Thanks!
As was explained in the comments, the .ipynb file format was designed to be rendered in a browser, while Juno/Atom is a text editor that expects a plain text file for display. In general therefore you wouldn't be able to directly use an .ipynb file in Juno.
There is however an option to convert your notebooks to .jl scripts, which is exactly what Juno is expecting: in your Jupyter notebook click on File > Download as > Julia (.jl).
There's also an answer here that discusses a command line option if you need to batch convert a lot of files.
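If you do need to batch-convert, nbconvert can also be driven from Python rather than the command line; a rough sketch using its ScriptExporter (the notebooks/ directory is a placeholder):
from pathlib import Path
from nbconvert import ScriptExporter

exporter = ScriptExporter()
for nb_path in Path("notebooks").glob("*.ipynb"):
    # For a Julia notebook the exporter emits a .jl script
    body, resources = exporter.from_filename(str(nb_path))
    out_path = nb_path.with_suffix(resources.get("output_extension", ".jl"))
    out_path.write_text(body)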
Also note that your choice of editor / programming environment is unrelated to the version of Julia you're using: while JuliaPro ships with Juno as standard (or potentially with the Julia VS Code extension in the future), nothing is keeping you from just running using Pkg; Pkg.add("IJulia"); using IJulia; notebook() in your JuliaPro installation and continuing to work on your notebooks in Jupyter.
I am encountering package compatibility issues within my global Julia environment for specific packages I want to use in a Jupyter notebook. Is there a way to tell IJulia to use a different environment instead of my global one?
The default IJulia kernel sets --project=@., so the most convenient way (IMO) is to just keep your project in the same folder as the notebook. The result is that the correct project is used from the start and you don't have to worry about activating it while in the notebook.
You can always start up a notebook and, within a cell, run:
using Pkg
Pkg.activate("./path/to/folder")  # activate the environment defined by that folder's Project.toml
When starting the notebook, type:
using IJulia
notebook(dir="/path/to/your/environment/")
This will launch Jupyter notebook loading the environment (Project.toml) in the directory that you have specified. If there is no Project.toml in that directory, the default (global) environment will be used.
Depending on the complexity of your setup, you might want to consider Lmod.
I use this with a module hierarchy: 1. core module, 2. compiler modules, 3. MPI modules.
With this, it's possible to quickly switch between different branches.
Disclaimer: I use jupyter kernel, but the question is also relevant for jupyter notebook.
According to jupyter kernel --help-all, I should be able to change the jupyter kernel JSON connection file by setting a parameter called --KernelManager.connection_file.
If I understand this correctly, that means that the following command:
jupyter kernel --KernelManager.connection_file=connection.json
should start a kernel and give me a connection file called connection.json.
However, this is what I get:
→ jupyter kernel --KernelManager.connection_file='test-this-thing.json'
[KernelApp] Starting kernel 'python3'
[KernelApp] Connection file: /Users/me/Library/Jupyter/runtime/kernel-1e65d0fe-bf8e-1234-8208-463bd4a1234a.json
Now, jupyter doesn't complain that I've passed a wrong argument or anything; it just doesn't change the connection file.
Am I doing something wrong? How can I correctly change the connection filename?
Essentially, nothing you are doing in the above code is wrong. Previously the kernel overrode whatever you set as the connection file with a hard coded file location.
This has now been fixed as per the following pull requests:
https://github.com/jupyter/jupyter_client/pull/399
Removed the static connection file name declaration in the kernel app's initialize method.
https://github.com/jupyter/jupyter_client/pull/432
Set the default connection_file such that it preserves an existing configuration.
A useful workaround for setting the connection file is to not call jupyter kernel directly, but rather to invoke the kernel launcher module, which is more flexible:
python -m ipykernel_launcher -f ~/kernels/file.json
The above works for current and previous versions of jupyter, so I'd consider it to be more reliable.
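Once the connection file exists at a known path, you can check that it works by connecting to the kernel programmatically; a small sketch with jupyter_client (the file path matches the workaround above):
import os
from jupyter_client import BlockingKernelClient

kc = BlockingKernelClient(connection_file=os.path.expanduser("~/kernels/file.json"))
kc.load_connection_file()        # read ports and the HMAC key from the JSON file
kc.start_channels()
kc.wait_for_ready(timeout=30)    # block until the kernel answers a kernel_info request
kc.execute("print('hello from the kernel')")
kc.stop_channels()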