How to run Jupyter Lab in a conda environment on a Google Compute Engine (Deep Learning VM)?

I made a conda environment in my Deep Learning VM. When I SSH into it (by clicking the SSH button of my instance on the VM instances page) and type source activate <environment_name>, it gets activated correctly in the shell.
I successfully connect to Jupyter Lab from my local machine as explained in the docs.
How can I use Jupyter in a specific conda environment on this VM?
The accepted way to run Jupyter in a specific conda environment seems to be:
Activate a conda environment in your terminal using source activate <environment_name> before you run jupyter notebook.
But the Deep Learning VM docs say:
A Jupyter Lab session is started when your Deep Learning VM instance is initialized
so I cannot source activate before the Jupyter Lab session is created.
Any ideas?
Run a standard Jupyter notebook myself instead of using the Jupyter Lab provided by the VM?
Activate the environment in the VM's startup scripts before the Jupyter Lab session is created?

Please try out the steps below:
source activate <env_name>
conda install ipykernel
ipython kernel install --name <env_name> --user
After this, launch your Python code from hub.colfaxresearch.com and select Kernel --> Change Kernel --> <env_name>
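As a worked example, the same steps for a hypothetical environment named ml-env (the environment name and display name here are placeholders, not from the original answer):
source activate ml-env
conda install -y ipykernel
ipython kernel install --name ml-env --display-name "Python (ml-env)" --user
# the kernel spec lands under ~/.local/share/jupyter/kernels/ml-env,
# so an already-running Jupyter picks it up without a restart
The --display-name flag controls the label shown in the Jupyter launcher and kernel menu.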

The only way we've found to make it see all your environments (conda and new Python environments) is to run a new Jupyter Lab instance.
When connecting over SSH, map port 8888 (or any other port) instead of 8080: gcloud compute ssh ... -L 8888:localhost:8888
After connecting, run jupyter lab from the console. The default port is 8888.
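Put together, a sketch of the whole flow (the instance name and zone are placeholders):
gcloud compute ssh my-dlvm --zone us-central1-a -- -L 8888:localhost:8888
# then, on the VM:
source activate <environment_name>
jupyter lab --no-browser --port 8888
# and open http://localhost:8888 in the local browser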
This is one of the ugliest issues I've seen with GCE so far!

Related

Cannot add a Linux image to my MicroStack / add an instance to OpenStack

How do I install a Linux image (https://cloud-images.ubuntu.com) in MicroStack (https://ubuntu.com/tutorials/microstack-get-started#2-install-microstack)? This question is close: How to install packages in cirros OS. But it doesn't share the command that is needed. For example, I've tried the following:
microstack launch cirros --name test
The above works, but I can't use it as it has no packages, so it is unusable.
So I tried the following, but nothing works:
microstack launch ubuntu --name test
microstack launch debian --name test
microstack launch focal --name test
Second, I downloaded an Ubuntu ISO and attempted to upload the image under Compute > Images. While it uploads in the OpenStack GUI (aka Horizon), the following error occurs: "Flavor's disk is too small for requested image."
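For context, the commonly suggested route is to register a qcow2 cloud image, which boots directly, rather than a desktop installer ISO; a sketch, assuming MicroStack's bundled CLI is exposed as microstack.openstack:
wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
# register the image with Glance as qcow2, not as an installer ISO
microstack.openstack image create focal --file focal-server-cloudimg-amd64.img --disk-format qcow2 --container-format bare --public
# launch against the registered image name
microstack launch focal --name test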

Unable to connect to Jupyter Notebook on Google Cloud

I am unable to access Jupyter Notebook after starting my virtual machine on Google Cloud. I type the command below at the shell prompt:
jupyter notebook
This returns some information about the notebook server including:
[I 02:28:31.858 NotebookApp] The Jupyter Notebook is running at:
[I 02:28:31.858 NotebookApp] http://(my-fastai-instance2 or 127.0.0.1):8081/
However, when I try to access Jupyter Notebook at this address, the browser just returns a message saying it is unable to establish a connection to that server address.
Resolved using:
gcloud compute ssh <zone> <instance name> <port number>
Thank you for your help.
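For reference, a sketch of what that resolution likely expands to in full gcloud syntax (the zone is a placeholder; the instance name and port come from the log above):
gcloud compute ssh my-fastai-instance2 --zone us-central1-a -- -L 8081:localhost:8081
# then browse to http://localhost:8081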
Maybe you need to try this address in your web browser:
http://localhost:{your port}/tree
Install localtunnel:
localtunnel exposes your localhost to the world for easy testing and sharing! No need to mess with DNS.
npm install -g localtunnel
After that, run this command on the port you are using:
lt --port 8000
If the URL doesn't work,
set the configuration mentioned in the message from Jupyter:
c.NotebookApp.allow_remote_access = True
in jupyter_notebook_config.py, e.g. /etc/jupyter/jupyter_notebook_config.py in your user image if using container-based deployment.
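A sketch of the relevant lines in that file (allow_remote_access comes from the answer; the ip and port settings are assumptions often paired with it):
# jupyter_notebook_config.py
c.NotebookApp.allow_remote_access = True
c.NotebookApp.ip = '0.0.0.0'  # listen on all interfaces (assumption)
c.NotebookApp.port = 8081     # match the port you use (assumption)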
Try like this:
gcloud compute ssh --zone=YOURZONE jupyter@INSTANCENAME -- -L 8080:localhost:8080
After logging in to the cloud with that, open a browser, type localhost:8080, and you should have Jupyter.
Tunneling Jupyter via SSH should also work: ssh -i ~/.ssh/google_compute_engine -nNT -L 8888:localhost:8888 vm_external_IP, then open localhost:8888 in your browser.

Airflow installation issue on Windows 7

How do I install Airflow on Windows 7? I get the error below while installing it using pip install apache-airflow:
Command "c:\users\shrgupta5\appdata\local\programs\python\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SHRGUP~1\\AppData\\Local\\Temp\\pip-build-_yptw7sa\\psutil\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SHRGUP~1\AppData\Local\Temp\pip-_cwm0nu7-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\SHRGUP~1\AppData\Local\Temp\pip-build-_yptw7sa\psutil\
I wouldn't bother trying to install Airflow on Windows; even after you install it successfully, you cannot run the airflow script due to a dependency on the Unix-only module pwd.
You can run Airflow on Windows by using the Docker setup from puckel: https://github.com/puckel/docker-airflow.
Use VirtualBox and Docker Toolbox (legacy) and set up a docker-machine on your Windows computer (docker-machine create -d virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" default).
Make sure to fork puckel's git repo underneath c:/Users/yourusername/documents, otherwise mounting of DAGs won't work.
You should now be able to spin up Airflow, e.g. by using the celery-executor setup with docker-compose -f docker-compose-CeleryExecutor.yml up -d (see the sketch after this answer).
I have set up an environment where I develop the DAGs on Windows, test them within the Docker container, and then push the Docker image to Linux for production. I have added a more detailed tutorial here.
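Collected in one place, a sketch of those steps (the clone path is an example; the commands follow the answer):
# create the Docker Toolbox VM
docker-machine create -d virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
# clone the repo somewhere under c:/Users/<yourusername>/documents so the DAG mount works
git clone https://github.com/puckel/docker-airflow.git
cd docker-airflow
# bring up Airflow with the Celery executor
docker-compose -f docker-compose-CeleryExecutor.yml up -d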
Airflow cannot be installed on Windows within the standard command prompt.
You need to use bash and afterwards change the config:
How to run Airflow on Windows
Download the source of Airflow from PyPI:
https://pypi.org/project/airflow/#files
Unzip it and edit setup.cfg: go to the install_requires section and change the version of psutil to the following: 'psutil>=5.4.7',
Finally, run python setup.py install in the source directory.
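A sketch of the edit described above, assuming the usual install_requires list layout (the surrounding lines are illustrative; only the psutil pin comes from the answer):
# setup.cfg, install_requires section (illustrative excerpt)
install_requires =
    ...
    psutil>=5.4.7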

How to RE-connect to a Remote Jupyter instance (Running Code)

I have several long-running scripts (in Jupyter Notebook) on a remote Google Cloud Compute instance.
If I lose the SSH connection, I cannot reconnect to the (running) Notebook without stopping the scripts executing within the Notebook.
It seems that closing my MacBook severs my connection to the remote (running) Jupyter notebook. Is there some way to reconnect without stopping the script?
On Google Cloud, Jupyter is still running. I just can't connect to the notebook executing the code without stopping code execution.
I'm sure other Jupyter users have figured this out :)
Thanks in advance
My GCloud Tunneling Script
gcloud compute ssh --zone us-central1-c my-compute-instance -- -N -p 22 -D localhost:5000
Bash Script that Launches Chrome
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
"localhost:22"
--proxy-server="socks5://localhost:5000"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost"
--user-data-dir=/tmp/
Nohup that launches Jupyter on Gcloud
nohup jupyter notebook --no-browser > log.txt 2>&1 &
On my macOS Sierra MacBook, no proxy settings (System Preferences) are enabled.
On Google Cloud, I'm NOT using a static IP, just an ephemeral IP.
Much appreciation in advance
What do you mean by "cannot reconnect"? Do you mean you can't see the notebook interface anymore? (In which case this is likely a Google Cloud question.) Or do you mean you can't run code or see previous results?
If the second, this is a known issue the Jupyter team is working on. The way to work around it is to wrap your code in Python Futures, which store intermediate results; re-accessing the future will then not trigger re-computation, but will show you the intermediate results.
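A minimal sketch of that workaround (the job function is a placeholder for the real computation):
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def long_running_job():
    # stand-in for the real long-running computation
    return sum(range(10_000_000))

future = executor.submit(long_running_job)  # runs in the background

# later, e.g. in another cell after the browser reconnects:
future.result()  # returns the stored result without re-running the job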

Instance creation in devstack icehouse

I want to create a few instances with Ubuntu installed on them, using OpenStack.
I tried the following steps.
Approach 1
Installed icehouse DevStack:
git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git
cd devstack
./stack.sh
After a successful installation I uploaded an Ubuntu image:
glance image-create --name Ubuntu --disk-format iso --container-format bare <~/sumit/images/ubuntu-14.04.2-desktop-amd64.iso
Logged in to the dashboard and launched an instance (m1.small, RAM GB, total disk 20GB) using this image.
Opened the instance console from the Horizon dashboard and tried to install Ubuntu.
But it shows the required space (6.5GB) is not available.
Then I tried to install neutron and heat as well.
Approach 2
installed icehouse devstack
git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git
cd devstack
vi localrc
My localrc looks like this:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=False
SCREEN_LOGDIR=$DEST/logs/screen
ADMIN_PASSWORD=password
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=tokentoken
GLANCE_BRANCH=stable/icehouse
HORIZON_BRANCH=stable/icehouse
KEYSTONE_BRANCH=stable/icehouse
NOVA_BRANCH=stable/icehouse
NEUTRON_BRANCH=stable/icehouse
HEAT_BRANCH=stable/icehouse
CEILOMETER_BRANCH=stable/icehouse
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
ENABLED_SERVICES+=,q-lbaas
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
HEAT_STANDALONE=True
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval
After this:
./stack.sh
After a successful installation I uploaded an Ubuntu image:
glance image-create --name Ubuntu --disk-format iso --container-format bare <~/sumit/images/ubuntu-14.04.2-desktop-amd64.iso
Logged in to the dashboard and launched an instance (m1.small, RAM GB, total disk 20GB) using this image.
But now it displays:
Error: Unable to connect to Neutron
Every time I list the instances it displays the same error.
Can anyone help me overcome these problems so that I can launch some instances and install Ubuntu on them?
"Unable to connect" can occur because the Neutron service is not running. Through the Dashboard you cannot create an instance without a network. Use the screen command in DevStack to check whether Neutron is running properly.
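A sketch of that check (the session name "stack" is DevStack's default; window names follow its icehouse-era conventions):
screen -ls        # list screen sessions; DevStack's is usually named "stack"
screen -x stack   # attach to the running DevStack session
# inside screen, Ctrl-a " lists windows; look for q-svc (neutron-server)
# then confirm the API answers:
neutron net-list  # should print the networks if Neutron is up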
