How to install Apache Superset in Ubuntu 18.04

I am following the steps laid out here:
https://superset.apache.org/docs/installation/installing-superset-using-docker-compose
I am running the steps one by one:
We recommend that you check out and run the code from the last tagged
release
$ git checkout latest
Then, run the following command:
$ docker-compose up
And I am getting this error:
WARNING: The CYPRESS_CONFIG variable is not set. Defaulting to a blank
string.
ERROR: Couldn't connect to Docker daemon - you might need to run docker-machine start default.
I am not able to find out how to install and start the default machine with docker-machine.

Try to use this command before docker-compose up:
export DOCKER_HOST=unix:///var/run/docker.sock
This exports DOCKER_HOST as an environment variable and helps the Docker client find the connection to the Docker daemon.
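If the daemon itself is not running, setting DOCKER_HOST alone will not help. A minimal check-and-start sketch, assuming Docker was installed from the Ubuntu docker.io or docker-ce package and is managed by systemd:
$ docker info                      # fails if the daemon is unreachable
$ sudo systemctl start docker      # start the daemon
$ sudo systemctl enable docker     # optional: also start it on every boot
$ docker-compose up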

az acr login raises DOCKER_COMMAND_ERROR with message docker daemon not running

Windows 11 with wsl2 ubuntu-22.04.
In Windows Terminal I open a PowerShell window and start wsl with command:
wsl
Then I start the docker daemon in this window with the following command:
sudo dockerd
It prompts for the admin password, which I enter and then it starts the daemon.
Next I open a new PowerShell window in Windows Terminal, run wsl and run a container to verify everything is working. So far so good.
Now I want to login to Azure Container Registry with the following command:
az acr login -n {name_of_my_acr}
This returns the following error:
You may want to use 'az acr login -n {name_of_my_acr} --expose-token' to get an access token,
which does not require Docker to be installed.
An error occurred: DOCKER_COMMAND_ERROR
error during connect: This error may indicate that the docker daemon is not running.:
Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json":
open //./pipe/docker_engine: The system cannot find the file specified.
The error suggests the daemon is not running, but since I can run a container I assume the daemon is running - otherwise I would not be able to run a container either, right? What can I do to narrow down or resolve this issue?
Docker version info using docker -v command:
Docker version 20.10.12, build 20.10.12-0ubuntu4
An error occurred: DOCKER_COMMAND_ERROR error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.
The above error can occur because Docker is sometimes disabled from starting on boot or login.
The following suggestions can be used:
Open PowerShell and type dockerd, which will start the daemon.
Open Docker with "Run as administrator" and run the command below:
C:\Program Files\Docker\Docker\DockerCli.exe -SwitchDaemon
Check the WSL2 version; if it is old, that might be the problem, so download the latest WSL2 Linux kernel update package for x64 machines on Windows 11.
Reference:
Manual installation steps for older versions of WSL | Microsoft Docs
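If the daemon started with sudo dockerd inside WSL is listening on the Linux socket while the docker client that az acr login shells out to is still looking for the Windows named pipe, explicitly pointing the client at the socket is worth trying. A sketch, assuming az is run from inside the same WSL distribution (this mirrors the DOCKER_HOST fix above and is not an official az workaround):
$ export DOCKER_HOST=unix:///var/run/docker.sock
$ docker ps                        # should list containers without the named-pipe error
$ az acr login -n {name_of_my_acr}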

I keep getting this error "docker: invalid reference format: repository name must be lowercase."

I keep getting this error even though my repo name is lowercase. The command I'm running is: sudo docker container run --rm -p 3838:3838 -v /home/ubuntu/la-liga-2018-2019-stats/stats/:/srv/shiny-server/stats -v /home/ubuntu/log/shiny-server/:/var/log/shiny-server/
BorisRendon/shinyauth. I'm trying to deploy a Shiny app to AWS using Docker and I can't get past this step.
You should probably use docker run and not docker container run.
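Note also that the image reference BorisRendon/shinyauth contains uppercase letters, and Docker requires repository names to be lowercase, which is exactly what the error message says. A sketch with the reference lowercased (assuming the image actually exists under the lowercase name borisrendon/shinyauth):
$ sudo docker run --rm -p 3838:3838 -v /home/ubuntu/la-liga-2018-2019-stats/stats/:/srv/shiny-server/stats -v /home/ubuntu/log/shiny-server/:/var/log/shiny-server/ borisrendon/shinyauth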

how to resolve "Error: No module named 'airflow.www'" while starting airflow websever

Getting the error below while starting the Airflow webserver:
balajee@Balajees-MacBook-Air.local:~$ airflow webserver -p 8080
[2018-12-03 00:29:37,066] {__init__.py:51} INFO - Using executor SequentialExecutor
[2018-12-03 00:29:38,776] {models.py:271} INFO - Filling up the DagBag from /Users/balajee/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Error: No module named 'airflow.www'
Fixed for me
pip3 uninstall -y gunicorn
pip3 install gunicorn==19.4.0
I got this problem this morning and found a strange solution; maybe it will help you. I think you may just need to change the directory you run the command from.
I installed the basic Airflow dependencies in my virtualenv directory venv with PyCharm's help, used PyCharm's built-in Terminal tab to access that venv directly, ran airflow initdb to initialize the SQLite database that stores all the logs and operations, and then, following the official tutorial, ran airflow webserver to start the webserver. But today I used my Mac terminal instead, activated the virtualenv, ran airflow webserver, and hit this problem:
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
Error: No module named 'airflow.www'
[2019-05-26 07:45:27,130] {cli.py:833} ERROR - No response from gunicorn master within 120 seconds
[2019-05-26 07:45:27,130] {cli.py:834} ERROR - Shutting down webserver
I tried @Evgeniy Sobolev's solution of reinstalling gunicorn and nothing changed, but when I use my PyCharm Terminal it still runs successfully. I guess the directory you are in when you first init the db and run the webserver is critical. By default, when I used the PyCharm Terminal to init the db and start the webserver, I was in the project root directory, like:
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
But today I changed into the subdirectory to activate the virtualenv, so the working directory was different:
root@root:~/GitHub/FakeProject/SubDir$ source venv/bin/activate
(venv) root@root:~/GitHub/FakeProject/SubDir$ airflow webserver
** Error **
Run this way, it hits Error: No module named 'airflow.www'. I then changed back to the parent directory, and the webserver ran successfully, just like in the PyCharm Terminal:
(venv) root@root:~/GitHub/FakeProject/SubDir$ cd ..
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
** It works **
I thought maybe Airflow stores some metadata (like a path, maybe) the first time you init your airflow db, so you cannot change the directory you run the commands from.
I hope this helps somebody in the future. Just check your directory!
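A minimal sketch of the working pattern described above, using the example paths from this answer (initialize and start Airflow from the same directory you used the first time):
$ cd ~/GitHub/FakeProject
$ source SubDir/venv/bin/activate
$ airflow initdb              # only needed the first time (airflow 1.x syntax)
$ airflow webserver -p 8080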
Looks like you have a problem with gunicorn.
Try executing these two commands:
sudo -H pip3 uninstall -y gunicorn
sudo -H pip3 install gunicorn
It should resolve your problem, because Airflow shows an unclear error message when gunicorn has problems.
These are the steps I took when the problem appeared:
create a separate virtualenv only for airflow (I use the Anaconda distribution)
activate this env with conda activate
install airflow: pip install apache-airflow
at this point the error No module named 'airflow.www' showed up for me
To fix it, follow these steps:
Look for where your gunicorn is: whereis gunicorn
gunicorn should exist only in your virtualenv directory: /home/yourname/anaconda3/envs/airflow_env/bin/gunicorn
If it exists in two directories, keep it only in your airflow environment and remove it from the others.
Another way to verify whether gunicorn is in other directories is to print your PATH variable: echo $PATH. Look for gunicorn in /home/yourname/.local/bin and in other anaconda directories on PATH, and remove all such references. Remove gunicorn from the conda base env as well: pip uninstall gunicorn.
With these steps, I think your problem will be solved.
I used the Anaconda distribution, but I think the same process can be done without it. I used airflow 1.10.0 and python 3.6.
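A quick sketch of that check, assuming a bash shell (which -a lists every gunicorn on PATH, not just the first match):
$ which -a gunicorn
/home/yourname/anaconda3/envs/airflow_env/bin/gunicorn   # only this one should remain
$ pip uninstall gunicorn    # run inside whichever environment owns a stray copy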
If you have defined a custom home directory for airflow other than the default one (~/airflow) during the installation:
You first need to export the custom path:
export AIRFLOW_HOME=/your/custom/path/airflow
Go to the airflow directory and then run the webserver:
airflow webserver -p 8080
Run the scheduler too:
airflow scheduler
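If you want AIRFLOW_HOME to persist across shell sessions (so the webserver and scheduler always pick up the custom path), appending the export to your shell profile is one option; a sketch assuming bash:
$ echo 'export AIRFLOW_HOME=/your/custom/path/airflow' >> ~/.bashrc
$ source ~/.bashrc
$ airflow webserver -p 8080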
Please check whether gunicorn is already installed elsewhere on the server. For me it was installed in /usr/local/bin and was taking precedence over the gunicorn version installed with airflow. Uninstall the earlier one or fix the $PATH variable.
I solved this by starting the webserver from the airflow folder itself.
I was previously trying to start the server from the home directory, where the required modules could not be found, which may be the case here.
Late to the party, but this could help others who get here.
I got the same issue using the latest airflow version, 2.5.0.
Make sure the env variable AIRFLOW_HOME is pointing to the right location.
Thanks all for sharing.
I added sudo and it actually worked just fine.
I got the same error today and a sudo did the trick for me.
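For reference, that just means prefixing the same command with sudo; note that plain sudo may not preserve your environment (including a custom AIRFLOW_HOME), so -E is sometimes needed as well:
$ sudo -E airflow webserver -p 8080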

Airflow installation issue on Windows 7

How do I install Airflow on Windows 7? I am getting the error below while installing it using pip install apache-airflow:
---------------------------------------- Command "c:\users\shrgupta5\appdata\local\programs\python\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SHRGUP~1\\AppData\\Local\\Temp\\pip-build-_yptw7sa\\psutil\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SHRGUP~1\AppData\Local\Temp\pip-_cwm0nu7-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\SHRGUP~1\AppData\Local\Temp\pip-build-_yptw7sa\psutil\
I wouldn't bother trying to install Airflow on Windows; even after you install it successfully, you cannot run the airflow script due to a dependency on the Unix-only module pwd.
You can run Airflow on Windows by using the Docker setup from puckel https://github.com/puckel/docker-airflow.
Use VirtualBox and Docker Toolbox (legacy) and set up a docker-machine on your Windows computer (docker-machine create -d virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" default)
Make sure to fork puckel's git repo underneath c:/Users/yourusername/documents, otherwise mounting of DAGs won't work.
You should now be able to spin up Airflow, e.g. by using the celery-executor setup with docker compose -f docker-compose-CeleryExecutor.yml up -d
I have set up an environment where I develop the DAGs on Windows, test them within the Docker container, and then push the Docker image to Linux for production. I have added a more detailed tutorial here.
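A consolidated sketch of the steps above, assuming a Git Bash style shell, the puckel/docker-airflow repo cloned under your Documents folder, and the compose v1 binary (docker-compose) that ships with Docker Toolbox:
$ docker-machine create -d virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
$ eval "$(docker-machine env default)"     # point the docker client at the new machine
$ cd /c/Users/yourusername/documents/docker-airflow
$ docker-compose -f docker-compose-CeleryExecutor.yml up -d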
Airflow cannot be installed on Windows within the standard command prompt.
You need to use bash and afterwards change the config:
How to run Airflow on Windows
Download the source of airflow from pypi:
https://pypi.org/project/airflow/#files
Unzip it, edit setup.cfg, go to the install_requires section, and change the version of psutil to the following: 'psutil>=5.4.7',
Finally, run python setup.py install in the source directory

Error: Error trying install composer runtime. Error: Connect Failed

Prog:dist abhishek$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString
Deploying business network from archive: my-network.bna
Business network definition:
Identifier: my-network#0.1.6
Description: My Commodity Trading network
✖ Deploying business network definition. This may take a minute...
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
Command failed
When trying to install the composer runtime, it returns:
Prog:dist abhishek$ composer runtime install -n my-network -p hlfv1 -i PeerAdmin -s randomString
✖ Installing runtime for business network my-network. This may take a minute...
Error: Error trying install composer runtime. Error: Connect Failed
Command failed
I've been working through the Hyperledger Composer tutorial (https://hyperledger.github.io/composer/tutorials/developer-guide.html) on an older Mac, running OS X Mavericks 10.9.5, which means I'm using Docker Toolbox instead of Docker for Mac. I encountered the same error message when deploying the sample Trading network .bna file on my local dev environment Fabric network.
Here is the command in Terminal:
$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString -A admin -S
And here is the error log:
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
In my case, it was because Docker Toolbox answers on an IP address assigned when you start Docker, instead of localhost or 127.0.0.1.
If you are also using Docker Toolbox and are getting the same error, first find the Docker IP address, which should be listed under the Docker whale logo in Terminal when you started it, and then edit the following files (TextEdit should be fine), changing all references to localhost and 127.0.0.1 to that IP address (leave the ports, such as :7050, in place):
fabric-tools/fabric-scripts/hlfv1/composer/configtx.yaml
fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml
fabric-tools/fabric-scripts/hlfv1/createComposerProfile.sh
fabric-tools/fabric-scripts/hlfv1/createPeerAdminCard.sh
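A rough scripted alternative to doing those edits by hand in TextEdit; a sketch assuming the Docker Toolbox machine is named default and you are in the fabric-tools directory (verify the IP with docker-machine ip default first):
$ DOCKER_IP=$(docker-machine ip default)
$ sed -i.bak "s/localhost/$DOCKER_IP/g; s/127\.0\.0\.1/$DOCKER_IP/g" \
    fabric-scripts/hlfv1/composer/configtx.yaml \
    fabric-scripts/hlfv1/composer/docker-compose.yml \
    fabric-scripts/hlfv1/createComposerProfile.sh \
    fabric-scripts/hlfv1/createPeerAdminCard.sh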
Then, back in Terminal, navigate back to fabric-tools, and if Fabric is already started, stop it, and then recreate the Composer Profile, as documented:
$ ./stopFabric.sh
$ ./createComposerProfile.sh
The log should now show the Docker Toolbox IP for the orderers, CA and peers. Now restart Fabric:
$ ./startFabric.sh
Navigate back to fabric-tools/my-network/dist and re-run the composer network deploy command, and if all goes well, it should connect properly.
Is your Fabric running? What is the output of docker ps?
Try the following:
Pick a directory that you want and install Hyperledger Fabric and Hyperledger Composer Playground by running:
curl -sSL https://hyperledger.github.io/composer/install-hlfv1.sh | bash
Then run your command.
Try the code below:
$ composer runtime install -c PeerAdmin@hlfv1 -n basic
$ composer network deploy -a basic.bna -A admin -S adminpw -c PeerAdmin@hlfv1 -f admincard
