Bokeh server is not recognized - bokeh

Trying to run a simple app from Kevin Jolly's Hands-On Data Visualization with Bokeh (Packt):
# Import the required packages
from bokeh.layouts import widgetbox
from bokeh.models import Slider
from bokeh.io import curdoc

# Create a slider widget
slider_widget = Slider(start=0, end=100, step=10, title='Single Slider')

# Create a layout for the widget
slider_layout = widgetbox(slider_widget)

# Add the slider widget to the application
curdoc().add_root(slider_layout)
Then I tried to start the Bokeh server:
...\Python_Scripts\Sublime> bokeh serve --show bokeh.py
bokeh : The term 'bokeh' is not recognized as the name of a cmdlet, function, script file, or operable program.
Yet bokeh info works:
Python version : 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
IPython version : 7.8.0
Tornado version : 6.0.3
Bokeh version : 1.3.4
BokehJS static path : C:\Users\k S\Anaconda3\lib\site-packages\bokeh\server\static
node.js version : (not installed)
npm version : (not installed)
A previous post describing the same problem did not provide a working solution. Please help.

First, I would strongly suggest renaming your file to something other than bokeh.py. Because Python puts the script's own directory at the front of sys.path, a file named bokeh.py can shadow the real bokeh package and cause Python to load the wrong module.
It's exceedingly strange that bokeh info would work but bokeh serve would not, since they are subcommands of literally the same program file. If renaming the script does not help, you can always invoke the server using Python's -m command-line option:
python -m bokeh serve --show app.py
If this does not work, it can mean only one thing: the python executable you are running belongs to a different Python environment than the one you installed Bokeh into.
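If in doubt, you can ask the interpreter itself where it and Bokeh live (a quick diagnostic sketch; the printed paths will differ on your machine):
python -c "import sys; print(sys.executable)"
python -c "import bokeh; print(bokeh.__version__, bokeh.__file__)"
If the second command fails, or the paths do not point into your Anaconda3 installation, the shell is picking up a different Python than the one Bokeh was installed into.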

Related

IPython REPL anywhere: how to share application context with IPython console for future interaction?

The IPython console is an extremely powerful instrument for development. It is used for research, for application and algorithm development, and for making sense of code as it runs.
Is there a way to connect the current context of a running Python app to an IPython console? Something like import ipyconsole; ipyconsole.set_interactive_from_here().
Here is a more complete picture of the situation.
The first flow:
Below is a sketch of a running Python app with an initialized DB connection and a web-app route.
class App:
    def __init__(self):
        self.db = DB.new_connection("localhost:27018")
        self.var_A = "Just an example variable"

    def run(self):
        self.console = IPythonAppConsole(self)  # console creation
        self.console.initialize()
        self.kernel = self.console.start()
        # print(self.kernel.connection_file)
        # << "kernel-12345.json"

    # let the app be some kind of web/flask application
    # @app.route('/items/<item_id>')
    def get_item(self, item_id=0):
        ### GOOD PLACE for
        ### <import ipyconsole; ipyconsole.set_interactive_from_here(self.kernel)> CODE
        item = self.db.find_one({'_id': item_id})
        print(item)
The second, interactive flow. This is the real target:
$: ipython console --existing "kernel-12345.json"
<< print(self.db.uri)
>> "localhost:27018"
<< print(item_id)
>> 1234567890
Is there a sensible way to implement these two flows? Maybe there is some magic combination of pdb and an IPython kernel?
By the way, there are other interactive ways to communicate with applications:
Debugging: debug the app with pdb/ipdb/web-pdb by placing a snippet such as import pdb; pdb.set_trace() on any line of the code.
Generating an IPython notebook snippet from Python code anywhere, e.g. with pyvt.
So far I have been looking for an answer inside the IPython shellapp and kernelapp sources, the Jupyter console, and dynamic variable sharing through Redis. Thanks for any kind of ideas!
Maybe the Flask shell is what you are looking for: https://flask.palletsprojects.com/en/1.1.x/shell/
One possible way to achieve this is to use an ipdb and IPython combination:
import ipdb

x = 2
ipdb.set_trace()
x = 4
When you run the code, it drops into an ipdb shell:
❯ python3 test.py
> /Users/tarunlalwani/Documents/Projects/SO/opa/test.py(7)<module>()
5 ipdb.set_trace()
6
----> 7 x = 4
ipdb>
And then you can drop into an IPython shell from there:
ipdb> from IPython import embed
ipdb> embed()
Python 3.9.1 (default, Jan 8 2021, 17:17:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: x
Out[1]: 2
In [2]:
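For the exact two-flow setup in the question, IPython also ships embed_kernel, which starts a full kernel inside the running process. Below is a minimal sketch that slots into the question's App class (the route and variable names mirror the question and are assumptions); note that embed_kernel() blocks the calling thread while it serves the kernel, so a real app would typically run it in a dedicated thread:
from IPython import embed_kernel

def get_item(self, item_id=0):
    item = self.db.find_one({'_id': item_id})
    # Starts a kernel here and writes a kernel-XXXX.json connection file;
    # attach to it from another terminal with:
    #   jupyter console --existing kernel-XXXX.json
    embed_kernel(local_ns={'self': self, 'item_id': item_id, 'item': item})
    return item
Once attached, self.db, item_id, and item are all available in the remote console, which matches the second flow above.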

Python/R ctypes library error: "OSError: lib/libRrefblas.so: undefined symbol: xerbla_"

Firstly, I'm a newbie at R, AWS, and Python. I'm trying to get a Python script with embedded R code running in AWS Lambda using rpy2. I created a Lambda package on an EC2 instance following the instructions here (modified for Python 3.4). Something funky seems to happen when loading the R libs using ctypes, as per the following error received in the console:
OSError: lib/libRrefblas.so: undefined symbol: xerbla_
The test file (py_test.py) looks like this:
import os
import ctypes

# Pre-load every shared library bundled in lib/ so R's dependencies
# are resolved before rpy2 is imported
for file in os.listdir('lib'):
    if os.path.isfile(os.path.join('lib', file)):
        ctypes.cdll.LoadLibrary(os.path.join('lib', file))

os.environ["R_HOME"] = os.getcwd()
os.environ["R_USER"] = os.path.join(os.getcwd(), 'rpy2')
os.environ["R_LIBS"] = os.path.join(os.getcwd(), 'library')
os.environ["LD_LIBRARY_PATH"] = os.path.join(os.getcwd(), 'lib')

import sys
sys.path.append(os.path.join(os.getcwd(), 'rpy2'))

import rpy2
from rpy2 import robjects

def test_handler(event, context):
    robjects.r('''
        f <- function(r, verbose=FALSE) {
            if (verbose) {
                cat("I am calling f().\n")
            }
            2 * pi * r
        }
        print(f(3))
    ''')

test_handler(None, None)
I have lib/libRrefblas.so in my virtual environment. I have scoured Google looking for answers but have come up empty. Any suggestions would be greatly appreciated!
If you can get the traceback, that could help, but I suspect the problem is that it's looking for xerbla_ in the wrong place. Is xerbla_ defined anywhere on the R_LIBS path? Maybe in libR.so?
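A quick way to test that suspicion from Python (a sketch reusing the question's paths, and assuming libR.so sits in the same lib directory) is to load libR.so with ctypes and probe for the symbol:
import ctypes

# RTLD_GLOBAL makes libR's symbols visible to libraries loaded afterwards
libR = ctypes.CDLL('lib/libR.so', mode=ctypes.RTLD_GLOBAL)
print(hasattr(libR, 'xerbla_'))  # True if libR.so exports xerbla_

# If xerbla_ was the only missing symbol, this load should now succeed
ctypes.cdll.LoadLibrary('lib/libRrefblas.so')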
It turns out the BLAS that ships with R is corrupt. Your best bet is to make sure that BLAS and LAPACK are installed on the machine you are building R on, and see if you can get R to build against those libraries instead.
So the steps would be to uninstall R, then run
yum -y install lapack-devel.x86_64 lapack.x86_64
yum -y install blas-devel
yum -y install R.x86_64
Check whether R has still installed libRrefblas.so. If it has, try deleting that file and see if R defaults to the system BLAS. If you then get an error because R is still looking for libRrefblas.so, replace it with the system BLAS:
rm lib/libRrefblas.so
cp /usr/lib64/libblas.so.3 lib/
mv lib/libblas.so.3 lib/libRrefblas.so

How to export a Python project as an executable file? The project is a browser-based one

I have a Python project which has to be exported as an executable file so that it can be used on other systems as well. The project's UI runs in a browser, served on localhost.
So far I have tried PyInstaller and cx_Freeze without success. I encountered some errors with PyInstaller which I was unable to solve, so I switched to cx_Freeze. I was able to freeze the scripts and create an .exe file. But when I open (double-click) the .exe file, I get nothing, not even an error message. I tried running it from the command prompt as well, but there too I got no message or output.
Can anyone suggest how my objective can be achieved, or what needs to be checked?
Here is my setup.py:
import sys
import os
from cx_Freeze import setup, Executable

base = None
# if sys.platform == "win32":
#     base = "Win32GUI"

os.environ['TCL_LIBRARY'] = "C:\\Users\\M******\\AppData\\Local\\Continuum\\Anaconda3\\tcl\\tcl8.6"
os.environ['TK_LIBRARY'] = "C:\\Users\\M******\\AppData\\Local\\Continuum\\Anaconda3\\tcl\\tk8.6"

setup(
    name="Network Analysis",
    version="0.1",
    description="Network Analysis Project",
    options={"build_exe": {"packages": ['encodings', 'asyncio', 'pandas', 'numpy', 'geopy', 'networkx', 'configparser', 'json']}},
    executables=[Executable("run.py", base=base)],
)
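One way to make such a silent failure visible (a debugging sketch, not from the original thread; main() stands in for whatever run.py actually starts) is to wrap the entry point so any startup exception is written to a log file next to the executable:
# at the top of run.py
import traceback

def main():
    # ... start the local server and open the browser UI here ...
    pass

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Frozen apps often lose stderr; persist the traceback instead
        with open("startup_error.log", "w") as f:
            traceback.print_exc(file=f)
        raise
After rebuilding, double-clicking the .exe should at least leave a startup_error.log behind if an import or initialization error is the culprit.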

Apache spark-shell: error importing jars

I have a local Spark 1.5.2 (Hadoop 2.4) installation on Windows, set up as explained here.
I'm trying to import a jar file that I created in Java using Maven (the jar is jmatrw, which I uploaded here on GitHub). Note the jar does not include a Spark program and has no dependencies on Spark. I tried the following steps, but none of them seems to work in my installation:
I copied the library to "E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"
I edited spark-env.sh and added SPARK_CLASSPATH="E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"
In a command window I ran > spark-shell --jars "E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar", but it says "Warning: skip remote jar"
In the Spark shell I tried scala> sc.addJar("E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"); it says "INFO: added jar ... with timestamp"
When I type scala> import it.prz.jmatrw.JMATData, spark-shell replies with error: not found: value it.
I spent a lot of time searching on Stack Overflow and Google; indeed a similar Stack Overflow question is here, but I'm still not able to import my custom jar.
Thanks
There are two settings in 1.5.2 to reference an external jar. You can add it for the driver or for the executor(s).
I do this by adding the settings to spark-defaults.conf, but you can also set them when launching spark-shell or in SparkConf.
spark.driver.extraClassPath /path/to/jar/*
spark.executor.extraClassPath /path/to/jar/*
I don't see anything really wrong with the way you are doing it, but you could try the conf approach above, or set these using SparkConf:
val conf = new SparkConf()
conf.set("spark.driver.extraClassPath", "/path/to/jar/*")
val sc = new SparkContext(conf)
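For reference, the same two settings from Python (a sketch against the same Spark 1.5-era API, in case you drive Spark from PySpark rather than the Scala shell):
from pyspark import SparkConf, SparkContext

# Same classpath settings as above, set through the Python API
conf = SparkConf()
conf.set("spark.driver.extraClassPath", "/path/to/jar/*")
conf.set("spark.executor.extraClassPath", "/path/to/jar/*")
sc = SparkContext(conf=conf)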
In general, I haven't enjoyed working with Spark on Windows. Try to get onto Unix/Linux.

In IPython Qt Console sp.info doesn't print inside the console

I have installed IPython 1.1.0 on Ubuntu 12.04 from source.
Similarly, I have installed NumPy 1.8.0, SciPy 0.13.1, and Matplotlib 1.3.1 from source.
When I use the IPython Qt Console, the command sp.info(optimize.fmin) doesn't print its output in the console; it prints it in the terminal (pylab). Is there any way to make it print in the console too?
import numpy as np
import scipy as sp
from scipy import optimize
sp.info(optimize.fmin)
In pylab the output looks like this:
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
Parameters
----------
func : callable func(x,*args)
You can use IPython's ? syntax to get information about any object:
optimize.fmin?
That will work in all IPython environments.
However, scipy.info() and numpy.info() both work in the Qt console when I try them, whether or not I start it in pylab mode. I'm not sure why they don't for you.
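If sp.info still goes to the wrong stream in your setup, note that numpy.info (which scipy.info wraps) accepts an output argument; passing the kernel's current sys.stdout explicitly should force the text into the frontend (a sketch, assuming your NumPy version exposes that parameter):
import sys
import scipy as sp
from scipy import optimize

# Route the help text to whatever stdout the kernel has wired
# to the Qt console frontend
sp.info(optimize.fmin, output=sys.stdout)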
