I usually run 'jupyter notebook' to start Jupyter Notebook, but this message shows up:
Traceback (most recent call last):
  File "/home/jake/venv/bin/jupyter-notebook", line 8, in <module>
    sys.exit(main())
  File "/home/jake/venv/lib/python3.7/site-packages/jupyter_core/application.py", line 268, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/home/jake/venv/lib/python3.7/site-packages/traitlets/config/application.py", line 663, in launch_instance
    app.initialize(argv)
  File "", line 2, in initialize
  File "/home/jake/venv/lib/python3.7/site-packages/traitlets/config/application.py", line 87, in catch_config_error
    return method(app, *args, **kwargs)
  File "/home/jake/venv/lib/python3.7/site-packages/notebook/notebookapp.py", line 1720, in initialize
    self.init_webapp()
  File "/home/jake/venv/lib/python3.7/site-packages/notebook/notebookapp.py", line 1482, in init_webapp
    self.http_server.listen(port, self.ip)
  File "/home/jake/venv/lib/python3.7/site-packages/tornado/tcpserver.py", line 151, in listen
    sockets = bind_sockets(port, address=address)
  File "/home/jake/venv/lib/python3.7/site-packages/tornado/netutil.py", line 174, in bind_sockets
    sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Go to /etc/hosts and check that localhost has the IP 127.0.0.1.
How do you get to the hosts file? If you are using Linux, open a terminal and type
cd /etc/
Then type
cat hosts
This will display the contents of hosts. You will see localhost there. Change its value to 127.0.0.1 if it isn't already, and that should get your notebook running.
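To actually change it you will need root privileges, for example (a sketch, assuming nano is installed; use any editor you like):
sudo nano /etc/hosts
and make sure it contains a line like
127.0.0.1 localhost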
If you find that localhost is already 127.0.0.1, then try this command in your terminal:
jupyter notebook --ip=0.0.0.0 --port=8080
to run the jupyter notebook.
The second one is an immediate fix, but every time you want to start Jupyter Notebook you will have to provide those two arguments. The first one, on the other hand, is a permanent fix (recommended), and next time you can just type "jupyter notebook".
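If you end up relying on the second fix, one way to avoid retyping the flags (a sketch, assuming the classic notebook server; the values below are only examples) is to put them into the Jupyter configuration file:
jupyter notebook --generate-config
# then edit ~/.jupyter/jupyter_notebook_config.py and set:
# c.NotebookApp.ip = '0.0.0.0'
# c.NotebookApp.port = 8080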
Related
I am having difficulty running a Python script that imports the "requests" module from crontab. This was fine a few days ago; then I had to change my authentication for Google (to send emails), and "requests" stopped working in crontab. The Python script runs fine in a terminal but will not execute in crontab. "requests" is available, and when I type "pip3 show requests" the following is displayed (note that I replaced my user name with "user"):
$pip3 show requests
Name: requests
Version: 2.27.1
Summary: Python HTTP for Humans.
Home-page: https://requests.readthedocs.io
Author: Kenneth Reitz
Author-email: me@kennethreitz.org
License: Apache 2.0
Location: /home/user/.local/lib/python3.6/site-packages
Requires: certifi, idna, urllib3, charset-normalizer
A simplified version of the python file I would like to execute in crontab is:
#!/usr/bin/env python...
# -*- coding: utf-8 -*-
import requests
print ('End of code')
The file test_request.py executes fine in a terminal.
I created a bash script called test_request.sh based on directions from this stack overflow page:
"ImportError: No module named requests" in crontab Python script
That bash script is this:
#!/bin/bash
echo test_request.sh called: `date`
HOME=/home/user/
PYTHONPATH=/home/user/.local/lib/python3.6/site-packages
cd /home/user/Documents/bjg_code/
python ./test_request.py 2>&1 1>/dev/null
When I try to run the bash script in a terminal or in crontab I receive this error:
$bash test_request.sh
test_request.sh called: Sat Jun 11 14:18:46 EDT 2022
Traceback (most recent call last):
File "./test_request.py", line 4, in <module>
import requests
ImportError: No module named requests
Any advice would be welcomed and appreciated.
Thank you in advance.
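One thing worth checking here (a sketch, not a confirmed fix; python3 and the paths below are assumptions based on the pip3 output above): the plain "python" call inside test_request.sh may resolve to a different interpreter than the python3.6 that has requests installed, and PYTHONPATH is assigned but never exported, so the Python process may not see it.
# which interpreters are on PATH, and what does python3's module search path contain?
command -v python python3
python3 -c "import sys; print(sys.path)"
# export the variable so the child process actually sees it (path taken from pip3 show above)
export PYTHONPATH=/home/user/.local/lib/python3.6/site-packages
python3 /home/user/Documents/bjg_code/test_request.py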
I tried to install Apache Airflow 2.2.4 on Windows 10. When I finish and run airflow, here are the errors it gives me:
Traceback (most recent call last):
File "/home/david/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/home/david/.local/lib/python3.6/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/home/david/.local/lib/python3.6/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/home/david/.local/lib/python3.6/site-packages/airflow/configuration.py", line 1127, in <module>
conf = initialize_config()
File "/home/david/.local/lib/python3.6/site-packages/airflow/configuration.py", line 890, in initialize_config
shutil.copy(_default_config_file_path('default_webserver_config.py'), WEBSERVER_CONFIG)
File "/usr/lib/python3.6/shutil.py", line 245, in copy
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib/python3.6/shutil.py", line 121, in copyfile
with open(dst, 'wb') as fdst:
PermissionError: [Errno 13] Permission denied: '/webserver_config.py'
The following steps resolved a similar issue for me, but I am not sure exactly what resolved it:
1) Make sure your WSL version is 2. (Restart the PC if you change the WSL version.)
2) Enable Windows Subsystem for Linux and Virtual Machine Platform. (Restart the PC.)
After this, I followed this tutorial:
https://towardsdatascience.com/run-apache-airflow-on-windows-10-without-docker-3c5754bb98b4
If you follow it, you will not be installing Airflow 1.10.12 but Apache Airflow 2.2.4, and instead of "airflow initdb" use the "airflow db init" command.
Also, before running the "airflow db init" command, create a user. The command for this (optional, but I suggest running it) is:
airflow users create --username admin --password admin --firstname <firstname> --lastname <lastname> --role Admin --email abc@gmail.com
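Once the database is initialised and the user exists, the server is typically started with the two commands below (a sketch, assuming the default standalone setup; the port is just an example):
airflow webserver --port 8080
airflow scheduler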
I am trying to install DevStack as a non-root user, but I am getting errors.
The log directory contains only broken symbolic links stack.sh.log and stack.sh.log.summary (pointing to nonexistent files).
I've used the sample local.conf - the only change is that I defined the $DEST.
OS: RHEL 6.6
STDOUT/ERR:
/home/john/scripts/openstack/devstack/functions-common: line 68: conditional binary operator expected
/home/john/scripts/openstack/devstack/functions-common: line 68: syntax error near `"$1"'
/home/john/scripts/openstack/devstack/functions-common: line 68: ` [[ -v "$1" ]]'
./stack.sh: line 119: GetDistro: command not found
/home/john/scripts/openstack/devstack/functions-common: line 68: conditional binary operator expected
/home/john/scripts/openstack/devstack/functions-common: line 68: syntax error near `"$1"'
/home/john/scripts/openstack/devstack/functions-common: line 68: ` [[ -v "$1" ]]'
/home/john/scripts/openstack/devstack/stackrc: line 48: isset: command not found
/home/john/scripts/openstack/devstack/.localrc.auto: line 84: enable_service: command not found
/home/john/scripts/openstack/devstack/stackrc: line 498: is_package_installed: command not found
/home/john/scripts/openstack/devstack/stackrc: line 666: get_default_host_ip: command not found
/home/john/scripts/openstack/devstack/stackrc: line 668: die: command not found
WARNING: this script has not been tested on
./stack.sh: line 179: die: command not found
./stack.sh: line 197: export_proxy_variables: command not found
./stack.sh: line 202: disable_negated_services: command not found
./stack.sh: line 209: is_package_installed: command not found
./stack.sh: line 209: install_package: command not found
[sudo] password for john:
./stack.sh: line 231: is_ubuntu: command not found
./stack.sh: line 238: is_fedora: command not found
./stack.sh: line 301: safe_chown: command not found
./stack.sh: line 302: safe_chmod: command not found
./stack.sh: line 310: safe_chown: command not found
Traceback (most recent call last):
File "/home/john/scripts/openstack/devstack/tools/outfilter.py", line 24, in <module>
import argparse
ImportError: No module named argparse
First, fix the missing module by using yum:
yum install python-argparse.noarch
Also you will need to run ./unstack.sh to clear the logs.
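In other words, something like this (a sketch, using the DevStack path from the log output above):
cd /home/john/scripts/openstack/devstack
./unstack.sh
./stack.sh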
I still faced this issue after that, so further debugging led me to a problem that occurs when both python-zaqarclient and python-openstackclient are installed. As a quick solution I removed python-zaqarclient:
sudo pip uninstall python-zaqarclient
Then
- apt-get upgrade
- apt-get dist-upgrade
- ./stack.sh
Hope this helps!
I'm trying to copy files from a remote Windows server to a Unix server. I was able to copy files from the Windows server using the command prompt, but when I run these commands from a script it does not work as expected.
commands used:
sftp user@remoteserver.com
lcd local_dir
cd remote_dir
get file_name
exit
When I run these commands from a script, the script stops after it connects to the remote server.
Can anybody tell me how to fix this issue?
The commands lcd through exit are sftp commands, so you cannot just write them into a script line by line; you have to redirect them to sftp's stdin:
# all lines till "EOF" will be redirected to sftp
sftp user@remoteserver.com <<- EOF
lcd local_dir
cd remote_dir
get file_name
exit
EOF
# here you are back in your shell script again, e.g.:
SFTPRES=$?
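An alternative with the same effect, if you would rather keep the sftp commands in a separate file, is sftp's batch mode (a sketch; batch.txt is a hypothetical file containing the same lcd/cd/get lines as above):
# run the commands stored in batch.txt against the remote host
sftp -b batch.txt user@remoteserver.com
echo "sftp exited with status $?"
In batch mode sftp stops at the first failing command and exits non-zero, so the exit status is meaningful here.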
I'm trying to install OpenStack Grizzly on a fresh Ubuntu 12.04 server.
The script runs fine until it reaches this point:
screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug || touch "/opt/stack/status/stack/key.failure"'
2013-07-16 17:33:03 + echo 'Waiting for keystone to start...'
2013-07-16 17:33:03 Waiting for keystone to start...
2013-07-16 17:33:03 + timeout 60 sh -c 'while ! http_proxy= curl -s http://192.168.20.69:5000/v2.0/ >/dev/null; do sleep 1; done'
2013-07-16 17:34:03 + die 311 'keystone did not start'
2013-07-16 17:34:03 + local exitcode=0
2013-07-16 17:34:03 + set +o xtrace
2013-07-16 17:34:03 [ERROR] ./stack.sh:311 keystone did not start
The log file:
File "/opt/stack/keystone/bin/keystone-all", line 112, in <module>
options = deploy.appconfig('config:%s' % paste_config)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 261, in appconfig
global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
return loader.get_context(object_type, name, global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 413, in get_context
defaults = self.parser.defaults()
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 68, in defaults
defaults[key] = self.get('DEFAULT', key) or val
File "/usr/lib/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 75, in _interpolate
self, section, option, rawval, vars)
File "/usr/lib/python2.7/ConfigParser.py", line 669, in _interpolate
option, section, rawval, e.args[0])
ConfigParser.InterpolationMissingOptionError: Error in file /etc/keystone/keystone.conf:
Bad value substitution:
section: [DEFAULT]
option : admin_endpoint
key : admin_port
rawval : http://192.168.20.69:%(admin_port)s/
The parsing instructions:
https://github.com/openstack/keystone/blob/master/keystone/common/config.py
The ConfigParser.InterpolationMissingOptionError:
Exception raised when an option referenced from a value does not exist. Subclass of InterpolationError.
I actually don't understand which referenced option does not exist.
Thank you in advance for your help.
Damien
I had the same problem when I ran stack.sh. The localrc file at the time of running stack.sh was:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# enable_service q-lbass
disable_service mysql
enable_service postgresql
# enable_service swift
# SWIFT_HASH=devstack
#
LOGFILE=$DEST/logs/stack.log
SCREEN_LOGDIR=$DEST/logs/screens
#
SERVICE_TOKEN=devstack
SCHEDULER=nova.scheduler.chance.ChanceScheduler
# Repositories
GLANCE_BRANCK=stable/grizzly
HORIZON_BRANCH=stable/grizzly
KEYSTONE_BRANCH=stable/grizzly
NOVA_BRANCH=stable/grizzly
NEUTRON_BRANCH=stable/grizzly
CINDER_BRANCH=stable/grizzly
SWIFT_BRANCH=stable/grizzly
PBR_BRANCH=master
REQUIREMENTS_BRANCH=stable/grizzly
CEILOMETER_BRANCH=stable/grizzly
...
However, after I removed the repository definitions and let the defaults in stackrc take over, i.e. all branches pointed to 'master', the problem went away.
Further, the contents of the /opt/stack/keystone/bin/keystone-all script are different between the stable/grizzly and master branches. I think the one in the 'master' branch seems to work with neutron enabled.
This error occurs because
you ran "stack.sh" as root,
or you forgot to chmod your config in /etc/keystone/keystone.conf:
chmod 777 /etc/keystone/keystone.conf
Then run unstack.sh and re-run stack.sh.
Just simply run
visudo
and add stack as a user who can do the same as root:
stack ALL=(ALL:ALL) ALL
su stack
cp -r /root/devstack /home/stack/
cd /home/stack/devstack/
./stack.sh
Clean everything first if necessary.
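Note that the sudoers line above will still prompt for a password; if you want passwordless sudo for the stack user (a commonly used variant for DevStack), the line would instead be:
stack ALL=(ALL) NOPASSWD: ALL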
Looks like a bug that has been filed for Keystone (https://bugs.launchpad.net/keystone/+bug/1201861) and is still open.
Modify devstack/lib/keystone as follows:
iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:35357/"
iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:5000/"
I just ran into this myself. The problem is that DevStack is building a Keystone configuration file in /etc/keystone/keystone.conf in which the option "admin_port" is used before it's been set. And you can't just edit keystone.conf and re-run stack.sh, because your edited version will be overwritten. I'm still chasing down the code that borks the configuration file....
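One way to confirm this on your own machine (a sketch; the path is the one from the error above) is to look for the interpolated option in the generated file:
grep -nE 'admin_port|admin_endpoint' /etc/keystone/keystone.conf
If admin_endpoint references %(admin_port)s but no admin_port key exists in the [DEFAULT] section, you get exactly the InterpolationMissingOptionError shown above.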