Running any MetPy function is fine until plotting, when the kernel repeatedly dies. I think this may be a hardware issue, but I'm curious for answers.
I have been getting into the Python language, especially through Jupyter Notebook. I think Jupyter is great for prototyping code in a very convenient way. I've been working on code following this tutorial over the past 2 days:
https://medium.com/#omar.ps16/stereo-3d-reconstruction-with-opencv-using-an-iphone-camera-part-iii-95460d3eddf0, and it's been working fine.
However, when I woke up this morning, it seems that a memory issue is causing Jupyter to crash. When I start Jupyter there is no memory issue; it only appears when I click on my particular notebook file. Then the memory usage gradually increases (as seen in the task manager). The page is also unresponsive, so I cannot reach the restart option or anything else in the Kernel menu. After about 30 seconds, the entire Jupyter session crashes due to a memory overflow.
I would greatly appreciate any help with this problem.
Okay, I figured out that I was printing out a huge matrix, which was clogging up the system. I had to open the notebook with Notepad++ and delete that output data by hand, and now everything is running fine. Stupid mistake.
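For anyone who hits the same thing: a quicker way to do that cleanup is to strip the stored outputs with nbformat. This is just a minimal sketch, and "notebook.ipynb" is a placeholder path, not my actual file:

# Strip stored cell outputs (e.g. a huge printed matrix) from a notebook file
import nbformat

path = "notebook.ipynb"
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop the saved output
        cell.execution_count = None
nbformat.write(nb, path)

Opening the cleaned file afterwards should no longer trigger the memory spike, since the output is no longer stored inside the .ipynb.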
I am sharing my notebooks with colleagues by exposing the Jupyter port on our internal network.
When they leave a notebook, a lot of kernels keep running.
I am looking for a way to automatically shut down a Jupyter notebook kernel when they exit.
Does anyone know if it is possible?
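One option to look into (this is an assumption about the setup, not something stated in the question): recent notebook servers can cull idle kernels automatically via settings in jupyter_notebook_config.py, for example:

c.MappingKernelManager.cull_idle_timeout = 3600   # shut down kernels idle for an hour
c.MappingKernelManager.cull_interval = 300        # check for idle kernels every 5 minutes
c.MappingKernelManager.cull_connected = True      # cull even if a browser tab is still connected

The timeout values here are only illustrative; they would need tuning to how long colleagues actually stay idle.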
I have a piece of code (Python 3.7) in a Jupyter notebook cell that generates a pandas data frame (df) containing numpy arrays.
I am checking the memory consumption of the df just by looking at the system monitor app preinstalled in Ubuntu.
The problem is that if I run the cell a second time, the memory consumption doubles, even though the df is assigned to the same variable.
If I run the same cell multiple times, the system runs out of memory and the kernel dies by itself.
Using del df or gc.collect() doesn't free the memory either.
Restarting the notebook kernel is the only way to free the memory.
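Roughly, the pattern looks like this; the column name and array sizes are placeholders for illustration, not my real data:

import gc
import numpy as np
import pandas as pd

# the cell that gets re-run: a data frame whose column holds numpy arrays
df = pd.DataFrame({"signal": [np.random.rand(10_000) for _ in range(1_000)]})

# attempts to reclaim memory that do not help inside the notebook
del df
gc.collect()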
In practice, I would expect the memory to stay roughly the same because I am just reassigning a new df to the same variable over and over again.
Indeed, the memory accumulates only if I run the code on a Linux machine and inside the notebook. If I run the same code from the terminal with python script.py, or if I run the very same notebook on macOS, the memory pressure does not change: I can run the same cell multiple times and the occupied memory stays stable (as expected).
Can you help me figure out where the problem is coming from and how to solve it?
P.S. Both Python and Jupyter are installed with Anaconda 2018.12 on Ubuntu 18.04.
I have asked the same question on the Ubuntu community, since I am not sure this is strictly related to Python itself, but I have had no answers so far.
I'm using Jupyter notebooks to train neural networks with GPUs. Specifically, I have 3 notebooks open, and each is training a different neural network.
I'm seeing a decrease in performance: each network is taking longer to train.
When I run nvidia-smi I see the processes on each GPU and the associated GPU memory used. The GPU memory used is not being maxed out on any 1 of the GPUs.
Is running multiple jupyter notebooks causing a problem?
If Jupyter notebook isn't the cause, what else could it be, and how might I check?
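One thing that might help narrow it down (a sketch on my part, assuming nvidia-smi is on the PATH): log GPU utilisation alongside memory, since memory usage alone doesn't show whether the GPUs themselves are the bottleneck:

import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader"]

for _ in range(10):                                       # ten samples, one per second
    print(subprocess.check_output(QUERY, text=True).strip())
    time.sleep(1)

If utilization.gpu sits near 100% while the notebooks train, the slowdown is more likely contention for GPU compute than anything Jupyter-specific.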
I've got a Windows HPC Server running with some nodes in the backend. I would like to run Parallel R using multiple nodes from the backend. I think Parallel R might be using SNOW on Windows, but I'm not too sure about it. My question is: do I also need to install R on the backend nodes?
Say I want to use two nodes, 32 cores per node:
cl <- makeCluster(c(rep("COMP01",32),rep("COMP02",32)),type="SOCK")
Right now, it just hangs.
What else do I need to do? Do the backend nodes need some kind of sshd running to be able to communicate with each other?
Setting up snow on a Windows cluster is rather difficult. Each of the machines needs to have R and snow installed, but that's the easy part. To start a SOCK cluster, you would need an sshd daemon running on each of the worker machines, but you can still run into trouble, so I wouldn't recommend it unless you're good at debugging and Windows system administration.
I think your best option on a Windows cluster is to use MPI. I don't have any experience with MPI on Windows myself, but I've heard of people having success with the MPICH and DeinoMPI MPI distributions for Windows. Once MPI is installed on your cluster, you also need to install the Rmpi package from source on each of your worker machines. You would then create the cluster object using the makeMPIcluster function. It's a lot of work, but I think it's more likely to eventually work than trying to use a SOCK cluster due to the problems with ssh/sshd on Windows.
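For reference, a minimal sketch of that last step, assuming MPI, Rmpi and snow are already installed on every node (the worker count of 64 just matches the two 32-core nodes from the question):

library(snow)

cl <- makeMPIcluster(64)                                  # spawn 64 MPI worker processes
clusterCall(cl, function() Sys.info()[["nodename"]])      # sanity check: which node each worker landed on
stopCluster(cl)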
If you're desperate to run a parallel job once or twice on a Windows cluster, you could try using manual mode. It allows you to create a SOCK cluster without ssh:
workers <- c(rep("COMP01",32), rep("COMP02",32))
cl <- makeSOCKcluster(workers, manual=TRUE)
The makeSOCKcluster function will prompt you to start each one of the workers, displaying the command to use for each. You have to manually open a command window on the specified machine and execute the specified command. It can be extremely tedious, particularly with many workers, but at least it's not complicated or tricky. It can also be very useful for debugging in combination with the outfile='' option.