Unable to connect to Jupyter Notebook on Google Cloud

I am unable to access Jupyter Notebook after starting my virtual machine on Google Cloud. I type the command below at the shell prompt:
jupyter notebook
This returns some information about the notebook server, including:
[I 02:28:31.858 NotebookApp] The Jupyter Notebook is running at:
[I 02:28:31.858 NotebookApp] http://(my-fastai-instance2 or 127.0.0.1):8081/
However, when I try to access Jupyter Notebook at this address, the browser just returns a message saying it is unable to establish a connection to that server address.

Resolved using:
gcloud compute ssh --zone <zone> <instance name> -- -L <port>:localhost:<port>
Thank you for your help.
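For reference, a filled-in version of that command, assuming the instance name and port from the question (the zone is a placeholder, substitute your own):
gcloud compute ssh --zone us-west1-b my-fastai-instance2 -- -L 8081:localhost:8081
With the tunnel open, browse to http://localhost:8081.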

Maybe you need to try this address in your web browser:
http://localhost:{your port}/tree

Install localtunnel:
localtunnel exposes your localhost to the world for easy testing and sharing! No need to mess with DNS
npm install -g localtunnel
After that, run this command with the port you are using:
lt --port 8000
If the URL doesn't work, set the configuration option mentioned in the message from Jupyter:
c.NotebookApp.allow_remote_access = True
in jupyter_notebook_config.py, e.g. /etc/jupyter/jupyter_notebook_config.py in your user image if using container-based deployment.
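A minimal sketch of making that change on the VM itself, assuming a user-level config file rather than a container image:
jupyter notebook --generate-config   # writes ~/.jupyter/jupyter_notebook_config.py if it does not exist
echo "c.NotebookApp.allow_remote_access = True" >> ~/.jupyter/jupyter_notebook_config.py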

Try it like this:
gcloud compute ssh --zone=YOURZONE jupyter@INSTANCENAME -- -L 8080:localhost:8080
After logging in to the instance with that command, open a browser, go to localhost:8080, and you should have Jupyter.

This should also work by tunneling Jupyter via ssh -i ~/.ssh/google_compute_engine -nNT -L 8888:localhost:8888 vm_external_IP and then opening localhost:8888 in your browser.
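If you prefer that tunnel to run detached in the background, ssh's -f flag should also work here (same command, just backgrounded):
ssh -i ~/.ssh/google_compute_engine -f -nNT -L 8888:localhost:8888 vm_external_IP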

Related

az acr login raises DOCKER_COMMAND_ERROR with message docker daemon not running

Windows 11 with WSL2 Ubuntu 22.04.
In Windows Terminal I open a PowerShell window and start wsl with command:
wsl
Then I start the docker daemon in this window with the following command:
sudo dockerd
It prompts for the admin password, which I enter and then it starts the daemon.
Next I open a new PowerShell window in Windows Terminal, run wsl and run a container to verify everything is working. So far so good.
Now I want to login to Azure Container Registry with the following command:
az acr login -n {name_of_my_acr}
This returns the following error:
You may want to use 'az acr login -n {name_of_my_acr} --expose-token' to get an access token,
which does not require Docker to be installed.
An error occurred: DOCKER_COMMAND_ERROR
error during connect: This error may indicate that the docker daemon is not running.:
Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json":
open //./pipe/docker_engine: The system cannot find the file specified.
The error suggests the daemon is not running, but since I can run a container I assume the daemon is running - otherwise I would not be able to run a container either, right? What can I do to narrow down or resolve this issue?
Docker version info using docker -v command:
Docker version 20.10.12, build 20.10.12-0ubuntu4
An error occurred: DOCKER_COMMAND_ERROR error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.
This error can occur because Docker is sometimes disabled from starting on boot or login.
The following suggestions may help:
Open PowerShell and type dockerd, which will start the daemon.
Open Docker with "Run as administrator" and run the command below:
C:\Program Files\Docker\Docker\DockerCli.exe -SwitchDaemon
Check the version of WSL2; if it is an older one, that might be the problem, in which case download the latest WSL2 Linux kernel update package for x64 machines on Windows 11.
Reference:
Manual installation steps for older versions of WSL | Microsoft Docs
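One additional check worth trying inside WSL (my own suggestion, not part of the steps above): the //./pipe/docker_engine path in the error is the Windows named pipe used by Docker Desktop, which suggests the CLI that az invokes is targeting the Windows engine rather than the daemon you started with sudo dockerd. You can inspect and override the endpoint:
docker context ls                                # shows which endpoint the CLI currently targets
export DOCKER_HOST=unix:///var/run/docker.sock   # point the CLI at the daemon started by `sudo dockerd`
az acr login -n {name_of_my_acr}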

How to run jupyter lab in a conda environment on a google compute engine (Deep Learning VM)?

I made a conda environment in my Deep Learning VM. When I SSH to it (clicking the SSH button of my instance in the VM instances page) and type source activate <environment_name>, it gets activated correctly in the shell.
I successfully connect to Jupyter Lab from my local machine as explained in the docs.
How can I use Jupyter in a specific conda environment on this VM?
The accepted way to run jupyter in a specific conda environment seems to be
Activate a conda environment in your terminal using source activate <environment_name> before you run jupyter notebook.
but the Deep Learning VM docs say
A Jupyter Lab session is started when your Deep Learning VM instance is initialized
so I cannot source activate before the Jupyter Lab session is created.
Any ideas? Should I:
run a standard Jupyter Notebook myself instead of using the Jupyter Lab provided by the VM?
activate the environment in the startup scripts of the VM before the Jupyter Lab session is created?
Please try out the below steps:
source activate <env_name>
conda install ipykernel
ipython kernel install --name <env_name> --user
After this, launch your Python code from hub.colfaxresearch.com and select Kernel --> Change Kernel --> <env_name>.
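To confirm that the kernel was registered (a quick sanity check, not part of the original steps), you can list the installed kernelspecs:
jupyter kernelspec list   # the new <env_name> kernel should appear here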
The only way we've found to make it see all your environments (conda and new Python environments) is to run a new Jupyter Lab instance.
When connecting over SSH, map port 8888 (or any other port) instead of 8080: gcloud compute ssh ... -L 8888:localhost:8888
After connecting, run jupyter lab from the console; the default port is 8888 (see the combined sketch below).
This is one of the ugliest issues I've seen with GCE so far!
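Putting those two steps together, a hypothetical session might look like this (zone, instance, and environment names are placeholders):
gcloud compute ssh --zone <zone> <instance_name> -- -L 8888:localhost:8888
source activate <env_name>
jupyter lab --no-browser --port 8888
Then open localhost:8888 in your local browser.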

Host key verification failed. in docker

I want to automatically launch, in a browser, the Jenkins instance that is installed through Docker. I am working on Windows; the base OS in Docker is Ubuntu. I used the solution from this link1. Now I am getting the following error: when I ssh using the -v option, I find "read_passphrase: can't open /dev/tty: No such device or address".
Going through many websites, I created the SSH files on Windows using Git Bash; this gave me the id_rsa, id_rsa.pub, and known_hosts files.
Now what should I do to launch, in a browser, the Jenkins instance that was built using Docker?
I'm just going to address the error message you pasted for now.
ssh is trying to get keyboard input for the passphrase on your private key, but can't open the terminal correctly. Are you running the ssh command directly in the terminal, or from a script? If from a script, try running ssh directly in a terminal first. If you need to run ssh from a script:
Maybe try with keys that don't have a passphrase.
If you can use ssh-agent: run eval $(ssh-agent), then run ssh-add and enter your passphrase. ssh will no longer prompt for a passphrase (see the example below).
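For example, assuming your key lives at the default ~/.ssh/id_rsa path (adjust if yours differs):
eval $(ssh-agent)       # start the agent and export its environment variables
ssh-add ~/.ssh/id_rsa   # prompts once for the passphrase, then caches the key
ssh -v user@host        # hypothetical host; should no longer ask for the passphrase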

How to RE-connect to a Remote Jupyter instance (Running Code)

I have several long running scripts (in Jupyter Notebook) on a remote Google Cloud Compute Instance.
If I lose the SSH connection, I cannot reconnect to the (running) Notebook without stopping the scripts executing within the Notebook.
It seems that closing my MacBook will sever my connection to the remote (running) Jupyter notebook. Is there some way to reconnect without stopping the script?
On Google Cloud, Jupyter is still running. I just can't connect to the notebook executing the code without stopping code execution.
I'm sure other Jupyter users have figured this out :)
Thanks in advance
My GCloud Tunneling Script
gcloud compute ssh --zone us-central1-c my-compute-instance -- -N -p 22 -D localhost:5000
Bash Script that Launches Chrome
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
"localhost:22"
--proxy-server="socks5://localhost:5000"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost"
--user-data-dir=/tmp/
Nohup that launches Jupyter on Gcloud
nohup jupyter notebook --no-browser > log.txt 2>&1 &
On my macOS Sierra MacBook, no proxy settings (System Preferences) are enabled.
On Google Cloud, I'm NOT using a static ip, just an ephemeral ip.
Much appreciation in advance
What do you mean by "cannot reconnect"? Do you mean you can't see the notebook interface anymore? (In that case this is likely a Google Cloud question.) Or do you mean you can't run code or see previous results?
If the second, this is a known issue that the Jupyter team is working on. The way to work around it is to wrap your code in Python Futures, which store intermediate results; re-accessing the future will then not trigger re-computation, but will show you the intermediate results.

Rsync command not working

I am trying to run rsync as follows and am running into the error sshpass: Failed to run command: No such file or directory. I verified that the source /local/mnt/workspace/common/sectool and the destination /prj/qct/wlan_rome_su_builds directories are available and accessible. What am I missing, and how do I fix this?
username@xxx-machine-02:~$ sshpass -p 'password' rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
sshpass: Failed to run command: No such file or directory
Would it be possible for you to check whether rsync works without sshpass? (See the command below.)
Also, check whether the port used by rsync is enabled. You can find the port info via cat /etc/services | grep rsync
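For that first check, this would be the command from the question with sshpass stripped off (you will be prompted for the password interactively):
rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds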
The first thing is to make sure that the SSH connection is working smoothly. You can check this via "sudo ssh -vvv cnssbldsw@hydwclnxbld4" (please post the output). If you receive a message such as "ssh: connect to host hydwclnxbld4 port 22: Connection refused", the issue is with the openssh-server (not installed or a broken package). Let's see what you get for the first command.