How can I use Asynchronous Widgets on jupyter lab?
I'm trying to reproduce the official Asynchronous Widgets example in jupyter lab, but the await never continues.
Setup / reproduction
docker run --rm -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes jupyter/datascience-notebook start-notebook.sh --NotebookApp.token=''
firefox 0.0.0.0:8888
create a new Python 3 notebook
create a cell and enter the code below
run the cell
move the slider
Code for the cell
%gui asyncio
import asyncio

def wait_for_change(widget, value):
    future = asyncio.Future()
    def getvalue(change):
        # make the new value available
        future.set_result(change.new)
        widget.unobserve(getvalue, value)
    widget.observe(getvalue, value)
    return future

from ipywidgets import IntSlider
slider = IntSlider()

async def f():
    for i in range(10):
        print('did work %s' % i)
        #x = await asyncio.sleep(1)
        x = await wait_for_change(slider, 'value')
        print('async function continued with value %s' % x)

asyncio.ensure_future(f())
#task = asyncio.create_task(f())
slider
Expected result
The cell outputs
did work 0
async function continued with value 1
did work 1
async function continued with value 2
[...]
Actual output
Nothing is printed after the first did work 0.
Notes
I'm specifically talking about jupyter lab and not about regular jupyter notebooks
There is no error message or anything; the expected output simply never appears.
The minimal asyncio example does work in jupyter lab:
import asyncio

async def main():
    print('hello')
    await asyncio.sleep(1)
    print('world')

await main()
When you leave out -e JUPYTER_ENABLE_LAB=yes, you get a regular jupyter notebook instead of jupyter lab, and the expected result appears.
This is not a duplicate of "ipywidgets widgets values not changing" or "Jupyter Interactive Widget not executing properly", because those questions involve neither jupyter lab nor asyncio.
Actually it does work, but jupyter lab loses the print output.
Try this code:
from IPython.display import display
import ipywidgets as widgets
out = widgets.Output()
import asyncio

def wait_for_change(widget, value):
    future = asyncio.Future()
    def getvalue(change):
        # make the new value available
        future.set_result(change.new)
        widget.unobserve(getvalue, value)
    widget.observe(getvalue, value)
    return future

from ipywidgets import IntSlider
slider = IntSlider()

# Now the key: the container is displayed (while empty) in the main thread
async def f():
    for i in range(10):
        out.append_stdout('did work %s' % i)
        x = await wait_for_change(slider, 'value')
        out.append_stdout('async function continued with value %s' % x)

asyncio.ensure_future(f())
display(slider)
display(out)
You can find more details here: https://github.com/jupyter-widgets/ipywidgets/issues/2567#issuecomment-535971252
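The Future-based bridge in wait_for_change is not specific to widgets; the same pattern works with any callback source, which makes it easy to test outside a notebook. A stdlib-only sketch follows (FakeWidget is a hypothetical stand-in for an ipywidgets widget, not a real ipywidgets class):

```python
import asyncio

class FakeWidget:
    """Hypothetical stand-in for an ipywidgets widget: it keeps a list of
    observers and notifies them when its value is set."""
    def __init__(self):
        self._observers = []
    def observe(self, handler, name):
        self._observers.append(handler)
    def unobserve(self, handler, name):
        self._observers.remove(handler)
    def set(self, new):
        # ipywidgets passes a change object exposing the new value as .new
        change = type('Change', (), {'new': new})()
        for handler in list(self._observers):
            handler(change)

def wait_for_change(widget, name):
    future = asyncio.get_running_loop().create_future()
    def getvalue(change):
        future.set_result(change.new)
        widget.unobserve(getvalue, name)
    widget.observe(getvalue, name)
    return future

async def main():
    w = FakeWidget()
    fut = wait_for_change(w, 'value')
    # simulate a slider drag shortly after we start waiting
    asyncio.get_running_loop().call_later(0.01, w.set, 42)
    return await fut

result = asyncio.run(main())
print(result)  # -> 42
```

The coroutine really does suspend at the await and resume only when the callback fires, which is exactly the behavior the question expects from the slider.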
I've had luck with jupyter-ui-poll to synchronize widget activity with the Jupyter Python kernel:
https://github.com/Kirill888/jupyter-ui-poll
In particular I used it here:
https://github.com/AaronWatters/jp_doodle/blob/master/jp_doodle/auto_capture.py
Works for me.
Hope that helps!
Related
I'm trying to create a DatePicker widget in Jupyter Lab with a button: when the user selects a date from the date picker and hits the button, all the cells below should run for that date. The code below works as expected in the Anaconda jupyter notebook, but it does not work in jupyter lab. Can you help me with this?
from IPython.display import Javascript, display
import ipywidgets as widgets

fromDatePicker = widgets.DatePicker(description='fromdate')
ToDatePicker = widgets.DatePicker(description='todate')

def fromDate():
    return f"'{fromDatePicker.value}'"

def toDate():
    return f"'{ToDatePicker.value}'"

def run_all(ev):
    display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1, IPython.notebook.ncells())'))

button = widgets.Button(description="Run all")
button.on_click(run_all)
widgets.HBox([fromDatePicker, ToDatePicker, button])
After this, to check whether it is working or not, I'm printing the start date below:
print("start date is :", fromDate())
I expected that after clicking the Run all button it would start executing the cells below, but instead I have to go to each cell and run it manually. Kindly post your answers or suggestions below.
After using ipylab:
import ipywidgets as widgets
from ipylab import JupyterFrontEnd
from IPython.display import Javascript, display

app = JupyterFrontEnd()

fromDatePicker = widgets.DatePicker(
    description='From Date'
)
ToDatePicker = widgets.DatePicker(
    description='To Date'
)
button = widgets.Button(
    description='Execute',
    disabled=False,
    button_style='success',
)

def fromDate():
    return f"'{fromDatePicker.value}'"

def toDate():
    return f"'{ToDatePicker.value}'"

def run_all(ev):
    app.commands.execute('notebook:run-all-below')

display(button)
button.on_click(run_all)
widgets.HBox([fromDatePicker, ToDatePicker, button])
The code above also behaves the same as the first one.
jupyter labextension list
JupyterLab v3.4.5
jupyterlab-execute-time v2.1.0 enabled OK (python, jupyterlab_execute_time)
jupyterlab_pygments v0.2.2 enabled OK (python, jupyterlab_pygments)
nbdime-jupyterlab v2.1.1 enabled OK
@jupyter-widgets/jupyterlab-manager v3.1.1 enabled OK (python, jupyterlab_widgets)
@jupyterlab/geojson-extension v3.1.2 enabled OK (python, jupyterlab-geojson)
@jupyterlab/git v0.32.4 enabled OK (python, jupyterlab-git)
Disabled extensions:
@jupyterlab/extensionmanager-extension (all plugins)
@jupyterlab/running-extension (all plugins)
I tried to run the solids in parallel, but it didn't work as I expected.
The progress doesn't behave the way I thought it would.
I think both operations should execute at the same time,
but find_highest_calorie_cereal runs first, and find_highest_protein_cereal only starts after it finishes.
import csv
import time

import requests
from dagster import pipeline, solid

# start_complex_pipeline_marker_0
@solid
def download_cereals():
    response = requests.get("https://docs.dagster.io/assets/cereal.csv")
    lines = response.text.split("\n")
    return [row for row in csv.DictReader(lines)]

@solid
def find_highest_calorie_cereal(cereals):
    time.sleep(5)
    sorted_cereals = list(
        sorted(cereals, key=lambda cereal: cereal["calories"])
    )
    return sorted_cereals[-1]["name"]

@solid
def find_highest_protein_cereal(context, cereals):
    time.sleep(10)
    sorted_cereals = list(
        sorted(cereals, key=lambda cereal: cereal["protein"])
    )
    # for i in range(1, 11):
    #     context.log.info(str(i) + '~~~~~~~~')
    #     time.sleep(1)
    return sorted_cereals[-1]["name"]

@solid
def display_results(context, most_calories, most_protein):
    context.log.info(f"Most caloric cereal (test): {most_calories}")
    context.log.info(f"Most protein-rich cereal: {most_protein}")

@pipeline
def complex_pipeline():
    cereals = download_cereals()
    display_results(
        most_protein=find_highest_protein_cereal(cereals),
        most_calories=find_highest_calorie_cereal(cereals),
    )
I am not sure, but I think you need to set up an executor that supports parallelism. You could use multiprocess_executor.
"Executors are responsible for executing steps within a pipeline run.
Once a run has launched and the process for the run, or run worker,
has been allocated and started, the executor assumes responsibility
for execution."
Modes provide the possible set of executors one can use; set them via the executor_defs property on ModeDefinition.
MODE_DEV = ModeDefinition(name="dev", executor_defs=[multiprocess_executor])

@pipeline(mode_defs=[MODE_DEV], preset_defs=[Preset_test])
The execution config section of the run config determines the actual executor.
In the yml file or run_config, set:
execution:
  multiprocess:
    config:
      max_concurrent: 4
Retrieved from: https://docs.dagster.io/deployment/executors
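The same execution block can be expressed programmatically as a run_config dict; the nesting must mirror the YAML exactly. The launch call is left as a comment because it depends on your pipeline and dagster version, so treat this as a sketch rather than the exact API:

```python
# Python-dict equivalent of the YAML execution block above
run_config = {
    "execution": {
        "multiprocess": {
            "config": {
                "max_concurrent": 4,
            }
        }
    }
}

# Hypothetical launch, assuming the pipeline and mode from this answer:
# execute_pipeline(complex_pipeline, run_config=run_config, mode="dev")
print(run_config["execution"]["multiprocess"]["config"]["max_concurrent"])  # -> 4
```

A mistake in the nesting (for example putting max_concurrent directly under multiprocess) is silently wrong YAML but an immediate KeyError here, which makes the dict form easier to debug.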
If I have a simple script such as this:
from time import sleep

for i in range(100):
    sleep(1)
    print(i)
Is there a way to only show the last 5 lines of the output, similar to a "tail -f" command?
I found a solution that works for me; I also realized that I needed the process to work in the background.
I first had to enable widgets:
jupyter nbextension enable --py --sys-prefix widgetsnbextension
from IPython.display import display
from ipywidgets import Label
from time import sleep
import threading

class App(object):
    def __init__(self, nloops=2000):
        self.nloops = nloops
        self.pb = Label(description='Thread loops', value="0")

    def start(self):
        display(self.pb)
        for i in range(10):
            self.pb.value += str(i)
            sleep(1)

app = App(nloops=20000)
t = threading.Thread(target=app.start)
t.start()
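As for only showing the last 5 lines, one stdlib option (a sketch, independent of the widget approach above) is to keep the tail in a collections.deque and redraw it each iteration. In a notebook you would pair the reprint with IPython.display.clear_output; that call is shown only as a comment so the snippet runs anywhere:

```python
from collections import deque
from time import sleep

tail = deque(maxlen=5)          # only ever holds the last 5 lines
for i in range(100):
    sleep(0.001)                # shortened from sleep(1) for the example
    tail.append(str(i))
    # in a notebook, redraw the cell output each pass:
    # clear_output(wait=True); print('\n'.join(tail))
print('\n'.join(tail))          # final state: lines 95..99
```

deque(maxlen=5) discards the oldest entry automatically on append, so the loop body stays O(1) per line, much like tail -f.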
My app interfaces with the IPython Qt shell with code something like this:
from IPython.core.interactiveshell import ExecutionResult

shell = self.kernelApp.shell  # ZMQInteractiveShell
code = compile(script, file_name, 'exec')
result = ExecutionResult()
shell.run_code(code, result=result)
if result:
    self.show_result(result)
The problem is: how can show_result show the traceback resulting from exceptions in code?
Neither the error_before_exec nor the error_in_exec ivars of ExecutionResult seem to give references to the traceback. Similarly, neither sys nor shell.user_ns.namespace.get('sys') have sys.exc_traceback attributes.
Any ideas? Thanks!
Edward
IPython/core/interactiveshell.py contains InteractiveShell._showtraceback:
def _showtraceback(self, etype, evalue, stb):
    """Actually show a traceback. Subclasses may override..."""
    print(self.InteractiveTB.stb2text(stb), file=io.stdout)
The solution is to monkey-patch InteractiveShell._showtraceback so that it writes to sys.stderr (the Qt console):
from __future__ import print_function
...
shell = self.kernelApp.shell  # ZMQInteractiveShell
code = compile(script, file_name, 'exec')

def show_traceback(etype, evalue, stb, shell=shell):
    print(shell.InteractiveTB.stb2text(stb), file=sys.stderr)
    sys.stderr.flush()  # <==== Oh, so important

old_show = getattr(shell, '_showtraceback', None)
shell._showtraceback = show_traceback
shell.run_code(code)
if old_show: shell._showtraceback = old_show
Note: there is no need to pass an ExecutionResult object to shell.run_code().
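A small refinement to the patch above: restoring the old attribute in a finally block guarantees cleanup even when run_code raises. The save/patch/restore pattern itself is plain Python and can be sketched with a hypothetical Dummy object standing in for the shell:

```python
class Dummy:
    """Hypothetical stand-in for the ZMQInteractiveShell."""
    def _showtraceback(self):
        return 'original'

shell = Dummy()

def patched():
    return 'patched'

old_show = getattr(shell, '_showtraceback', None)   # save the bound original
shell._showtraceback = patched                      # install the patch
try:
    during = shell._showtraceback()                 # patched hook active here
finally:
    if old_show is not None:
        shell._showtraceback = old_show             # restored even on exceptions

print(during, shell._showtraceback())  # -> patched original
```

Without the try/finally, an exception while the patch is active would leave the shell permanently patched, which is a subtle bug in long-lived kernels.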
EKR
I'm creating a module for OpenERP in which I have to launch an ongoing process.
OpenERP runs in a continuous loop. My process has to be launched when I click on a button, and it has to keep running without holding up OpenERP's execution.
To simplify it, I have this code:
#!/usr/bin/python
import multiprocessing
import time

def f(name):
    while True:
        try:
            print 'hello', name
            time.sleep(1)
        except KeyboardInterrupt:
            return

if __name__ == "__main__":
    count = 0
    while True:
        count += 1
        print "Pass %d" % count
        pool = multiprocessing.Pool(1)
        result = pool.apply_async(f, args=['bob'])
        try:
            result.get()
        except KeyboardInterrupt:
            #pass
            print 'Interrupted'
        time.sleep(1)
When executed, Pass 1 is printed once and then an endless series of hello bob is printed until CTRL+C is pressed. Then Pass 2 is obtained and so on, as shown below:
Pass 1
hello bob
hello bob
hello bob
^CInterrupted
Pass 2
hello bob
hello bob
hello bob
hello bob
I would like the passes to keep increasing in parallel with the hello bob's.
How do I do that?
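The loop above stalls because result.get() blocks until the worker returns, so the main loop never advances while the child runs. The non-blocking shape of the fix can be sketched with stdlib threading and bounded loops (names and durations are illustrative, not OpenERP code):

```python
import threading
import time

def worker(stop, log):
    # bounded stand-in for the endless "hello bob" loop
    while not stop.is_set():
        log.append('hello bob')
        time.sleep(0.01)

stop = threading.Event()
log = []
t = threading.Thread(target=worker, args=(stop, log))
t.start()                       # launch the worker...

passes = 0
for _ in range(5):              # ...and keep making passes without waiting on it
    passes += 1
    time.sleep(0.02)

stop.set()                      # ask the worker to finish, then reap it
t.join()
print(passes, len(log) > 0)
```

The key difference from the question's code is that nothing after t.start() waits on the worker's result, so both loops make progress concurrently.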
What you can do is create a multi-threaded implementation in Python inside the server process, which will run independently of the server's execution thread.
The trick is to fork a thread from the server when your button is clicked and give it its own copy of the server variables, so that the thread executes independently; at the end of the process you have to commit the transaction yourself, because this code does not run in the main server process. Here is a small example of how you can do it.
import pprint
import StringIO
import traceback
import datetime
import logging
from threading import Thread

import pooler

pp = pprint.PrettyPrinter(indent=4)

class myThread(Thread):
    """
    """
    def __init__(self, obj, cr, uid, context=None):
        Thread.__init__(self)
        self.external_id_field = 'id'
        self.obj = obj
        self.cr = cr
        self.uid = uid
        self.context = context or {}
        self.logger = logging.getLogger(__name__)
        self.initialize()

    """
    Abstract methods to be implemented in the real instance
    """
    def initialize(self):
        """
        init before import,
        usually for the login
        """
        pass

    def init_run(self):
        """
        called after initialize; runs in the thread, not in the main process.
        To be used for long initialization operations
        """
        pass

    def run(self):
        """
        this is the entry point to launch the process (Thread)
        """
        try:
            self.init_run()
            # Your code goes here
            # TODO add business logic
            self.cr.commit()
        except Exception, err:
            sh = StringIO.StringIO()
            traceback.print_exc(file=sh)
            error = sh.getvalue()
            print error
        self.cr.close()
You can add code like this in some module (e.g. the import_base module in 6.1 or trunk).
Next, you can build an extended implementation of this and create an instance of the service, or directly start forking threads like the following code:
service = myService(self, cr, uid, context)
service.start()
Now this starts a background service which runs independently and leaves you free to use the UI.
Hope this will help you.
Thank you