IPython REPL anywhere: how to share application context with IPython console for future interaction? - jupyter-notebook

The IPython console is an extremely powerful instrument for development. It is used for research in general, for developing applications and algorithms, and simply for making sense of code.
Is there a way to connect the current context of a running Python app to an IPython console? Something like import ipyconsole; ipyconsole.set_interactive_from_here().
Here is a more complete description of the situation.
First flow.
Below is some sort of running Python app with an initialized DB connection and a web-app route.
class App:
    def __init__(self):
        self.db = DB.new_connection("localhost:27018")
        self.var_A = "Just an example variable"

    def run(self):
        self.console = IPythonAppConsole(self)  # Console creation
        self.console.initialize()
        self.kernel = self.console.start()
        # print(self.kernel.connection_file)
        # << "kernel-12345.json"

    # let the app be some kind of web/flask application
    # @app.route('/items/<item_id>')
    def get_item(self, item_id=0):
        ### GOOD PLACE for
        ### <import ipyconsole; ipyconsole.set_interactive_from_here(self.kernel)> CODE
        item = self.db.find_one({'_id': item_id})
        print(item)
Second, the interactive flow. This is the valuable target.
$: ipython console --existing "kernel-12345.json"
<< print(self.db.uri)
>> "localhost:27018"
<< print(item_id)
>> 1234567890
Is there a common-sense way to implement these two flows? Maybe there is some magic combination of pdb and an IPython kernel?
By the way, there are other interactive ways to communicate with applications:
Debugging. Debug the app with pdb/ipdb/web-pdb by putting a snippet such as import pdb; pdb.set_trace() on any line of the code.
Generating an IPython notebook snippet from Python code anywhere, e.g. with pyvt.
Today I am looking for the answer in the IPython shellapp and kernelapp sources, in the Jupyter console, and in dynamic variable sharing through Redis. Thanks for any kind of ideas!

Maybe the Flask shell is what you are looking for: https://flask.palletsprojects.com/en/1.1.x/shell/
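To make that concrete, here is a minimal sketch (not from the original answer; the file name and variables are hypothetical) of how flask shell can expose application objects for interactive use:
# app.py
from flask import Flask

app = Flask(__name__)
app.config['DB_URI'] = "localhost:27018"  # placeholder for the question's DB connection

@app.shell_context_processor
def make_shell_context():
    # Everything returned here is available by name inside "flask shell".
    return {'app': app, 'db_uri': app.config['DB_URI']}
Running FLASK_APP=app.py flask shell then opens an interactive session with the application context pushed and app and db_uri already defined.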

One possible way to achieve this is to use the ipdb and IPython combo.
import ipdb

x = 2
ipdb.set_trace()
x = 4
When you run the code, it drops into an ipdb shell
❯ python3 test.py
> /Users/tarunlalwani/Documents/Projects/SO/opa/test.py(7)<module>()
5 ipdb.set_trace()
6
----> 7 x = 4
ipdb>
And then you can drop into an IPython shell from there
ipdb> from IPython import embed
ipdb> embed()
Python 3.9.1 (default, Jan 8 2021, 17:17:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: x
Out[1]: 2
In [2]:
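If the goal is specifically the question's second flow (attaching with ipython console --existing), a related option is IPython.embed_kernel(), which starts an in-process kernel and exposes a chosen namespace. A minimal sketch (the function and variable names only mirror the question and are hypothetical):
from IPython import embed_kernel

def get_item(db, item_id=0):
    item = db.find_one({'_id': item_id})
    # Blocks this thread and starts a kernel here; IPython prints connection
    # info such as "--existing kernel-12345.json", which you can attach to with:
    #   ipython console --existing kernel-12345.json
    embed_kernel(local_ns={'db': db, 'item_id': item_id, 'item': item})
    return item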

Related

Python3.8 asyncio behavior difference between Windows and Unix

I am working on a script where I will be dealing with a huge amount of data to process via Python.
I have written a script using asyncio on Python 3.8 on a Windows box which works perfectly fine, but when I execute the same script on Unix with Python 3.8 it completes execution without terminating the program at the end. It seems like it is not releasing resources/locks.
When I debugged further, I found that on Windows asyncio uses ProactorEventLoop whereas on Unix it uses _UnixSelectorEventLoop, but I am not sure whether this makes any difference here.
I can't share the full script, but it follows the structure below:
import asyncio

async def myCoroutine():
    print("My Coroutine")

try:
    loop = asyncio.get_event_loop()
    loop.run_until_complete(myCoroutine())
    print("Execution Completed")
finally:
    print("Closing the loop")
    loop.close()
    print("loop Closed")
Output:
Execution Completed
Closing the loop
loop closed
But the program is not terminating.
Has anyone faced a similar issue before? Any inputs?
Thanks in advance!!
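Edit: for reference, here is a minimal sketch (separate from my actual script) that prints which event loop class each platform selects:
import asyncio

async def my_coroutine():
    print("My Coroutine")

# Expected to print ProactorEventLoop on Windows (Python 3.8+) and
# _UnixSelectorEventLoop on Unix.
loop = asyncio.new_event_loop()
print(type(loop).__name__)
try:
    loop.run_until_complete(my_coroutine())
finally:
    loop.close()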

Bokeh server is not recognized

Trying to run a simple app from Kevin Jolly's Hands-On Data Visualization with Bokeh (Packt).
# Import the required packages
from bokeh.layouts import widgetbox
from bokeh.models import Slider
from bokeh.io import curdoc

# Create a slider widget
slider_widget = Slider(start=0, end=100, step=10, title='Single Slider')

# Create a layout for the widget
slider_layout = widgetbox(slider_widget)

# Add the slider widget to the application
curdoc().add_root(slider_layout)
Then I tried to start the Bokeh server:
...\Python_Scripts\Sublime> bokeh serve --show bokeh.py
bokeh : The term 'bokeh' is not recognized as the name of a cmdlet, function, script file, or operable program.
bokeh info
Python version : 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
IPython version : 7.8.0
Tornado version : 6.0.3
Bokeh version : 1.3.4
BokehJS static path : C:\Users\k S\Anaconda3\lib\site-packages\bokeh\server\static
node.js version : (not installed)
npm version : (not installed)
A previous post with the same problem did not provide a working solution; please help.
First I would strongly suggest renaming your file to something other than bokeh.py. Due to the way Python itself works, this can sometimes result in Python trying to load the wrong module.
It's exceedingly strange that bokeh info could work but bokeh serve would not, since they are subcommands of literally the same program file. If renaming the script does not help, then you can always invoke the server using the Python -m command line option:
python -m bokeh serve --show app.py
If this does not work, it can mean only one thing: the python executable you are running belongs to a different Python environment than the one you installed Bokeh into.
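A quick way to check which environment a bare python resolves to is to ask it where it imports Bokeh from (bokeh.__file__ and bokeh.__version__ are standard module attributes):
python -c "import bokeh; print(bokeh.__file__); print(bokeh.__version__)"
If that import fails, or the printed path is not inside the Anaconda installation shown by bokeh info above, then that interpreter belongs to a different environment than the one Bokeh was installed into.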

E-mail (or similar) notification when code execution is finished

I am currently running several simulations in R that each take quite a long time to execute, and the time each one takes to finish varies from case to case. To use the time in between more efficiently, I wondered if it would be possible to set up something (like an e-mail notification system or similar) that would notify me as soon as a chunk of simulation is completed.
Does somebody here have any experience with setting up something similar or does someone know a resource that could teach me to implement a notification system via R?
I recently saw an R package for this kind of thing: pushoverr. However, I haven't used it myself, so I can't say how well it works, but it seems like it might be useful in your case.
I assume you run the time-consuming simulations on a server, correct? If they run on your own PC, your PC will be slow as hell anyway, and I would not see much benefit in sending a mail to myself.
For long calculations: run them on a virtual machine. I use the following workflow for my own calculations.
Write your R script. Important: make the R script write a .txt file when the calculation is finished at the end. The shell script below will poll in a loop until that file exists.
Copy the code below and save it as a Python script. At one point I tried to get mailR running on Linux and it did not work; this code worked on the first try.
#!/usr/bin/env python3
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders

email_user = 'youownmail@gmail.com'
email_password = 'password'
email_send = 'theothersmail.com'
subject = 'yourreport'

msg = MIMEMultipart()
msg['From'] = email_user
msg['To'] = email_send
msg['Subject'] = subject

body = 'Calculation is done'
msg.attach(MIMEText(body, 'plain'))

# Attach the result file written by the R script (adjust the path as needed).
attachment_path = '/tmp/finished.txt'
part = MIMEBase('application', 'octet-stream')
with open(attachment_path, 'rb') as attachment:
    part.set_payload(attachment.read())
encoders.encode_base64(part)
msg.attach(part)

text = msg.as_string()
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(email_user, email_password)
server.sendmail(email_user, email_send, text)
server.quit()
Make sure you are allowed to run the scripts:
sudo chmod 777 /path/script.R
sudo chmod 777 /path/script.py
Run both your script.R and script.py inside a script.sh file. It looks like the following:
R < /path/script.R --no-save

while [ ! -f /tmp/finished.txt ]
do
    sleep 2
done

python /path/script.py
This may sound a bit overwhelming if you are not familiar with these technologies, but I think it is a pretty well automated workflow that spares your own resources and can be used "in production" (I use this workflow to send myself my own stock reports).

What to do when a py.test hangs silently?

While using py.test, I have some tests that run fine with SQLite but hang silently when I switch to PostgreSQL. How would I go about debugging something like that? Is there a "verbose" mode I can run my tests in, or a breakpoint I can set? More generally, what is the standard plan of attack when pytest stalls silently? I've tried using pytest-timeout and ran the tests with $ py.test --timeout=300, but the tests still hang with no activity on the screen whatsoever.
I ran into the same SQLite/Postgres problem with Flask and SQLAlchemy, similar to Gordon Fierce. However, my solution was different. Postgres is strict about table locks and connections, so explicitly closing the session connection on teardown solved the problem for me.
My working code:
@pytest.yield_fixture(scope='function')
def db(app):
    # app is an instance of a flask app, _db a SQLAlchemy DB
    _db.app = app
    with app.app_context():
        _db.create_all()

        yield _db

        # Explicitly close DB connection
        _db.session.close()
        _db.drop_all()
Reference: SQLAlchemy
To answer the question "How would I go about debugging something like that?":
Run with py.test -m trace --trace to get a trace of Python calls.
One option (useful for any stuck Unix binary) is to attach to the process using strace -p <PID> and see which system call it might be stuck on, or which loop of system calls, e.g. stuck calling gettimeofday.
For more verbose py.test output, install pytest-sugar (pip install pytest-sugar) and run the tests with py.test --verbose . . .
https://pypi.python.org/pypi/pytest-sugar
I had a similar problem with pytest and Postgresql while testing a Flask app that used SQLAlchemy. It seems pytest has a hard time running a teardown using its request.addfinalizer method with Postgresql.
Previously I had:
@pytest.fixture
def db(app, request):
    def teardown():
        _db.drop_all()

    _db.app = app
    _db.create_all()
    request.addfinalizer(teardown)
    return _db
(_db is an instance of SQLAlchemy that I import from extensions.py)
But if I drop the database every time the database fixture is called:
@pytest.fixture
def db(app, request):
    _db.app = app
    _db.drop_all()
    _db.create_all()
    return _db
Then pytest won't hang after your first test.
Not knowing what is breaking in the code, the best way is to isolate the test that is failing and set a breakpoint in it to have a look. Note: I use pudb instead of pdb, because it's really the best way to debug python if you are not using an IDE.
For example, you can put the following in your test file:
import pudb
...

def test_create_product(session):
    pudb.set_trace()
    # Create the Product instance
    # Create a Price instance
    # Add the Product instance to the session.
    ...
Then run it with
py.test -s --capture=no test_my_stuff.py
Now you'll be able to see exactly where the script locks up, and examine the stack and the database at this particular moment of execution. Otherwise it's like looking for a needle in a haystack.
I ran into this problem and was stuck on it for quite some time (though I wasn't using SQLite). The test suite ran fine locally but failed in CircleCI (Docker).
My problem was ultimately that:
An object's underlying implementation used threading
The object's __del__ normally would end the threads
My test suite wasn't calling __del__ as it should have
I figured I'd add how I tracked this down, since other answers suggest the same tools:
pytest-timeout didn't help; the test hung after completion (invoked via pytest --timeout 5, with pytest==6.2.2 and pytest-timeout==1.4.2).
Running pytest -m trace --trace or pytest --verbose yielded no useful information either.
I ended up having to comment literally everything out, including all conftest.py code and test code, then slowly uncommented/re-commented regions until I identified the root cause.
Ultimate solution: use a factory fixture to add a finalizer that calls __del__, as in the sketch below.
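A minimal sketch of that idea (make_worker and ThreadedWorker are hypothetical names, not from my actual suite):
import pytest

@pytest.fixture
def make_worker(request):
    created = []

    def _make(*args, **kwargs):
        # ThreadedWorker stands in for the object whose implementation uses threads.
        worker = ThreadedWorker(*args, **kwargs)
        created.append(worker)
        return worker

    def _cleanup():
        # Explicitly call __del__ so the background threads are ended even if
        # the garbage collector never gets around to collecting the objects.
        for worker in created:
            worker.__del__()

    request.addfinalizer(_cleanup)
    return _make
A test then requests make_worker and creates its objects through it, so every object is torn down at the end of the test.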
In my case the Flask application did not check if __name__ == '__main__': so it executed app.start() when that was not my intention.
You can read many more details here.
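For reference, a minimal sketch of that guard, using Flask's standard app.run() for illustration:
from flask import Flask

app = Flask(__name__)

# Only start the development server when the module is executed directly,
# not when it is merely imported (for example by the test suite).
if __name__ == '__main__':
    app.run()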
In my case, computing the diff when an assert failed on 4 MB of data was very slow.
with open(path, 'rb') as f:
    assert f.read() == data
Fixed by:
with open(path, 'rb') as f:
    eq = f.read() == data
assert eq

In Ipython Qt Console sp.info doesn't print inside the console

I have installed IPython 1.1.0 on Ubuntu 12.04 from source.
Similarly, I have installed NumPy 1.8.0, SciPy 0.13.1, and Matplotlib 1.3.1 from source.
When I use the IPython Qt console, the command sp.info(optimize.fmin) doesn't print the output in the console; it prints it in the terminal (pylab) instead. Is there any way to make it print in the console too?
import numpy as np
import scipy as sp
from scipy import optimize
sp.info(optimize.fmin)
The output looks like this in pylab:
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
Parameters
----------
func : callable func(x,*args)
You can use IPython's ? syntax to get information about any object:
optimize.fmin?
That will work in all IPython environments.
However, scipy.info() and numpy.info() both work in the Qt console when I try them, whether or not I start it in pylab mode. I'm not sure why they don't for you.
