Jupyter Lab: Server Connection Error: Failed to fetch

I have created a Google Dataproc cluster with the optional components Anaconda and Jupyter. When I try to open Jupyter Lab or the Jupyter UI from the cluster web interfaces, I sometimes get the following errors as pop-ups in the UI, which prevent me from creating or using any notebooks.
Server Connection Error: Failed to fetch
Server Connection Error: 500. That’s an error. That’s all we know
I am using Dataproc image version 1.3-debian9 with Python 2.7, which was created using the following gcloud command:
REGION=us-central1
CLUSTER=spark-jupyter-1-3
gcloud beta dataproc clusters create ${CLUSTER} \
--region=${REGION} \
--image-version=1.3 \
--optional-components=ANACONDA,JUPYTER \
--enable-component-gateway
Any advice on how to resolve this issue?
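One way to narrow this down is to SSH to the master node (named ${CLUSTER}-m on Dataproc) and look at the Jupyter service logs. A minimal sketch, assuming the component runs as a systemd unit named jupyter on this image (confirm the unit name first; the zone below is only an example, use your master's zone):
gcloud compute ssh ${CLUSTER}-m --zone=us-central1-a
systemctl list-units | grep -i jupyter
sudo journalctl -u jupyter --no-pager -n 50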

Related

az acr login raises DOCKER_COMMAND_ERROR with message docker daemon not running

I am on Windows 11 with WSL2 (Ubuntu 22.04).
In Windows Terminal I open a PowerShell window and start WSL with the command:
wsl
Then I start the docker daemon in this window with the following command:
sudo dockerd
It prompts for the admin password, which I enter, and then the daemon starts.
Next I open a new PowerShell window in Windows Terminal, run wsl and run a container to verify everything is working. So far so good.
Now I want to login to Azure Container Registry with the following command:
az acr login -n {name_of_my_acr}
This returns the following error:
You may want to use 'az acr login -n {name_of_my_acr} --expose-token' to get an access token,
which does not require Docker to be installed.
An error occurred: DOCKER_COMMAND_ERROR
error during connect: This error may indicate that the docker daemon is not running.:
Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json":
open //./pipe/docker_engine: The system cannot find the file specified.
The error suggests the daemon is not running, but since I can run a container I assume the daemon is running - otherwise I would not be able to run a container either, right? What can I do to narrow down or resolve this issue?
Docker version info using docker -v command:
Docker version 20.10.12, build 20.10.12-0ubuntu4
An error occurred: DOCKER_COMMAND_ERROR error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.
The above error can occur when Docker is disabled from starting on boot or at login.
The following suggestions may help:
Open PowerShell and run dockerd, which will start the daemon.
Run Docker Desktop as administrator and run the command below:
C:\Program Files\Docker\Docker\DockerCli.exe -SwitchDaemon
Check your WSL2 version; if it is outdated, that may be the problem, so download the latest WSL2 Linux kernel update package for x64 machines on Windows 11 (a quick version check is shown after this list).
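To see which WSL version each distro is using, run this in PowerShell (the output columns are NAME, STATE, VERSION):
wsl -l -v
On newer Windows 11 builds with the Store-based WSL, wsl --update fetches the latest kernel directly.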
Reference:
Manual installation steps for older versions of WSL | Microsoft Docs
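As the error message itself suggests, another workaround is to skip the Windows docker client entirely: fetch a token with --expose-token and log in manually from inside WSL. A minimal sketch, where myregistry is a placeholder for your registry name (the all-zeros GUID is the documented username for ACR token logins):
TOKEN=$(az acr login -n myregistry --expose-token --output tsv --query accessToken)
docker login myregistry.azurecr.io -u 00000000-0000-0000-0000-000000000000 -p $TOKEN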

Connecting Colab to Local Runtime Jupyter Notebook

I'm trying to connect my Google Colab to a local runtime via Jupyter Notebook. There is one part I can't figure out, which is this:
#Ensure that the notebook server on your machine is running on port 8888 and accepting requests from https://colab.research.google.com.
jupyter notebook \
--NotebookApp.allow_origin='https://colab.research.google.com' \
--port=8888 \
--NotebookApp.port_retries=0
I tried copy-pasting it into my Anaconda Prompt, but only "jupyter notebook" is pasted and executed. How do you get all of that typed into the prompt? Is it some cmd feature that I'm completely oblivious to?
The command as written is meant for a Linux shell. If you are on a Windows machine, replace the \ at the end of each line with ^. So your command becomes:
jupyter notebook ^
--NotebookApp.allow_origin='https://colab.research.google.com' ^
--port=8888 ^
--NotebookApp.port_retries=0
Or, the complete command with all the parameters can be run on a single line like this:
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
I have tested this in the Command Prompt, and the local runtime connection succeeds.
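Note that ^ works in the classic Command Prompt (cmd.exe). If your Anaconda Prompt runs PowerShell instead, the line-continuation character is a backtick:
jupyter notebook `
--NotebookApp.allow_origin='https://colab.research.google.com' `
--port=8888 `
--NotebookApp.port_retries=0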

How to install Apache Superset on Ubuntu 18.04

I am following the steps laid out here:
https://superset.apache.org/docs/installation/installing-superset-using-docker-compose
I am running the steps one by one. The docs say:
We recommend that you check out and run the code from the last tagged release
$ git checkout latest
Then, run the following command:
$ docker-compose up
And I am getting this error:
WARNING: The CYPRESS_CONFIG variable is not set. Defaulting to a blank string.
ERROR: Couldn't connect to Docker daemon - you might need to run docker-machine start default.
I am not able to find out how to install docker-machine and start the default machine.
Try using this command before running docker-compose up:
export DOCKER_HOST=unix:///var/run/docker.sock
This exports DOCKER_HOST into the environment and helps the Docker client find the connection to the Docker daemon.
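To confirm the client can actually reach the daemon before bringing up the stack, a quick check (docker info fails immediately if the daemon is unreachable):
export DOCKER_HOST=unix:///var/run/docker.sock
docker info
docker-compose up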

How to run jupyter lab in a conda environment on a google compute engine (Deep Learning VM)?

I made a conda environment on my Deep Learning VM. When I SSH into it (by clicking the SSH button of my instance on the VM instances page) and type source activate <environment_name>, the environment activates correctly in the shell.
I can successfully connect to Jupyter Lab from my local machine as explained in the docs.
How can I use Jupyter in a specific conda environment on this VM?
The accepted way to run jupyter in a specific conda environment seems to be
Activate a conda environment in your terminal using source activate <environment_name> before you run jupyter notebook.
but the Deep Learning VM docs say
A Jupyter Lab session is started when your Deep Learning VM instance is initialized
so I cannot source activate before the Jupyter Lab session is created.
Any ideas? Should I:
run a standard jupyter notebook myself instead of using the Jupyter Lab provided by the VM?
activate the environment in the VM's startup scripts before Jupyter Lab is created?
Please try the steps below:
source activate <env_name>
conda install ipykernel
ipython kernel install --name <env_name> --user
After this, launch Jupyter Lab and select Kernel --> Change Kernel --> <env_name>.
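To verify the kernel was registered, list the installed kernelspecs; a <env_name> entry should appear:
jupyter kernelspec list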
The only way we've found to make it see all your environments (conda and new Python environments) is to run a new Jupyter Lab instance.
When connecting over SSH, map port 8888 (or any other port) instead of 8080: gcloud compute ssh ... -L 8888:localhost:8888
After connecting, run jupyter lab from the console. The default port is 8888.
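Putting the two steps together, a sketch with placeholder instance and zone names (everything after -- is passed through to ssh):
gcloud compute ssh my-dl-vm --zone=us-central1-a -- -L 8888:localhost:8888
Then, on the VM:
jupyter lab --port=8888 --no-browser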
This is one of the ugliest issues I've seen with GCE so far!

Error: Error trying install composer runtime. Error: Connect Failed

Prog:dist abhishek$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString
Deploying business network from archive: my-network.bna
Business network definition:
Identifier: my-network#0.1.6
Description: My Commodity Trading network
✖ Deploying business network definition. This may take a minute...
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
Command failed
When trying to install the composer runtime, it returns:
Prog:dist abhishek$ composer runtime install -n my-network -p hlfv1 -i PeerAdmin -s randomString
✖ Installing runtime for business network my-network. This may take a minute...
Error: Error trying install composer runtime. Error: Connect Failed
Command failed
I've been working through the Hyperledger Composer tutorial (https://hyperledger.github.io/composer/tutorials/developer-guide.html) on an older Mac, running OS X Mavericks 10.9.5, which means I'm using Docker Toolbox instead of Docker for Mac. I encountered the same error message when deploying the sample Trading network .bna file on my local dev environment Fabric network.
Here is the command in Terminal:
$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString -A admin -S
And here is the error log:
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
In my case, it was because Docker Toolbox answers on an IP address assigned when you start Docker, instead of localhost or 127.0.0.1.
If you are also using Docker Toolbox and are getting the same error, first find the Docker IP, which should be listed under the Docker whale logo in Terminal when you started it, and then edit the following files (TextEdit is fine), changing all references to localhost and 127.0.0.1 to that IP (leave the ports, such as :7050, in place):
fabric-tools/fabric-scripts/hlfv1/composer/configtx.yaml
fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml
fabric-tools/fabric-scripts/hlfv1/createComposerProfile.sh
fabric-tools/fabric-scripts/hlfv1/createPeerAdminCard.sh
Then, back in Terminal, navigate back to fabric-tools; if Fabric is already started, stop it, and then recreate the Composer profile, as documented:
$ ./stopFabric.sh
$ ./createComposerProfile.sh
The log should now show the Docker Toolbox IP for the orderers, CA, and peers. Now restart Fabric:
$ ./startFabric.sh
Navigate back to fabric-tools/my-network/dist and re-run the composer network deploy command; if all goes well, it should connect properly.
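If you would rather script those edits than use TextEdit, a sketch run from the fabric-tools directory (docker-machine ip default prints your Toolbox IP; note that BSD sed on OS X needs -i ''):
DOCKER_IP=$(docker-machine ip default)
for f in fabric-scripts/hlfv1/composer/configtx.yaml fabric-scripts/hlfv1/composer/docker-compose.yml fabric-scripts/hlfv1/createComposerProfile.sh fabric-scripts/hlfv1/createPeerAdminCard.sh; do
  sed -i '' "s/localhost/${DOCKER_IP}/g; s/127\.0\.0\.1/${DOCKER_IP}/g" "$f"
done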
Is your Fabric running? What is the output of docker ps?
Try the following:
Pick a directory of your choice and install Hyperledger Fabric and Hyperledger Composer Playground by running:
curl -sSL https://hyperledger.github.io/composer/install-hlfv1.sh | bash
Then run your command.
Try the commands below:
$ composer runtime install -c PeerAdmin@hlfv1 -n basic
$ composer network deploy -a basic.bna -A admin -S adminpw -c PeerAdmin@hlfv1 -f admincard