Create a custom magic in a Glue Studio Notebook - jupyter-notebook

I've been trying to create a custom magic for a Glue Studio Notebook, like the following example (taken from here).
I added the IPython module by running the Glue magic
%additional_python_modules IPython
and then ran this in a cell:
from IPython.core.magic import (register_line_magic,
                                register_cell_magic)

@register_line_magic
def hello(line):
    if line == 'french':
        print("Salut tout le monde!")
    else:
        print("Hello world!")
However, I get this error:
AttributeError: 'NoneType' object has no attribute 'register_magic_function'
Thanks.
I think it is related to the fact that if I run
from IPython import get_ipython
get_ipython()
get_ipython() returns None.
This means the code is not running inside IPython, but what is it running in, then? How can I add a custom magic? My goal is to have a magic that runs SQL queries against a PostgreSQL database connected through a Glue connection.
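For context, register_line_magic needs an active IPython shell to attach to, and get_ipython() returning None means there is none. A minimal sketch (plain Python, not Glue-specific; behavior outside a real IPython shell is the assumption being illustrated) that guards registration accordingly:

```python
from IPython import get_ipython
from IPython.core.magic import register_line_magic

ip = get_ipython()

if ip is not None:
    # Only valid inside a running IPython shell.
    @register_line_magic
    def hello(line):
        print("Salut tout le monde!" if line == "french" else "Hello world!")
else:
    # Run as a plain script (or under a non-IPython kernel, as in Glue),
    # there is no shell to register the magic on -- which is exactly
    # why register_magic_function is called on None above.
    print("No IPython shell available; cannot register magics.")
```

Run as a plain script this takes the else branch, matching the NoneType error seen in the Glue kernel.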

Related

How can I successfully call a Python script in R using reticulate when the script uses the multiprocessing module?

I am trying to use the reticulate package to call a Python script from R. Previously this ran successfully, but this time the Python script includes the multiprocessing module and the call never completes; RStudio just hangs.
Here is the tested python script, named test_multiprocessing.py.
# test_multiprocessing.py
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
I can run the above script directly in Python, but when I call it from RStudio as below:
library(reticulate)
condaEnvName = 'myEnv'
reticulate::use_condaenv(condaEnvName, required = TRUE)
reticulate::source_python('./test_multiprocessing.py')
R always gets stuck.
Can you please guide me on how to successfully call a Python script from R when the script uses the multiprocessing module, as above?
I am using Windows 10 OS.
Thanks.
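One likely factor: on Windows, multiprocessing uses the spawn start method, and reticulate::source_python() executes the file in a context where the __main__ guard does not behave as it does from the command line. A common workaround (a sketch, not verified against every reticulate version; run_pool is a name introduced here) is to keep the Pool inside a function and call it explicitly from R:

```python
# test_multiprocessing.py (reworked sketch)
from multiprocessing import Pool

def f(x):
    return x * x

def run_pool(values):
    # Nothing runs at import time, so source_python() only defines
    # f and run_pool; the pool starts only when R calls run_pool().
    # This also keeps spawned children (which re-import this module
    # on Windows) from re-executing the pool themselves.
    with Pool(2) as p:
        return p.map(f, values)

if __name__ == '__main__':
    print(run_pool([1, 2, 3]))
```

From R, after sourcing the file, you would then call run_pool(c(1, 2, 3)).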

Run a jupyter notebook from another notebook and accept notebook parameters

I know that we can call one notebook from another using %run <jupyter-notebook>
But is there a way to pass in a string parameter when calling a notebook this way? Or is there any other way to share information from the caller notebook with the callee notebook?
I tried executing the following:
%run /root/notebook.ipynb "/root/abc.csv"
Inside the notebook, I print sys.argv and see this on the console:
['/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-fa157ce3-83be-4e70-bf95-fe7f83530d4d.json']
I was expecting to see my argument /root/abc.csv in the output for sys.argv according to the documentation. I may be misunderstanding something.
I ended up using papermill python library to do this.
Syntax:
import papermill as pm
pm.execute_notebook('input.ipynb', 'output.ipynb', {'param1': 'value1', 'param2': ['value2', 'value3']})
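For this to work, the input notebook should contain a cell tagged "parameters" holding default values; papermill injects a new cell right after it that overrides those names with the values passed to execute_notebook. A hypothetical defaults cell (names are illustrative, matching the call above) might look like:

```python
# Cell in input.ipynb tagged "parameters":
# papermill injects an override cell immediately after this one.
param1 = "default1"
param2 = []
```

The rest of the notebook then just reads param1 and param2 as ordinary variables.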

IPython REPL anywhere: how to share application context with IPython console for future interaction?

The IPython console is an extremely powerful instrument for development. It is useful for research in general, for developing applications and algorithms, and for making sense of code.
Is there a way to connect the current context of a Python app to an IPython console? Something like import ipyconsole; ipyconsole.set_interactive_from_here().
Here is a more complete picture of the situation.
First flow.
Below is some sort of running Python app with an initialized DB and a web-app route.
class App:
    def __init__(self):
        self.db = DB.new_connection("localhost:27018")
        self.var_A = "Just an example variable"

    def run(self):
        self.console = IPythonAppConsole(self)  # console creation
        self.console.initialize()
        self.kernel = self.console.start()
        # print(self.kernel.connection_file)
        # << "kernel-12345.json"

    # let the app be some kind of web/flask application
    # @app.route('/items/<item_id>')
    def get_item(self, item_id=0):
        ### GOOD PLACE for
        ### <import ipyconsole; ipyconsole.set_interactive_from_here(self.kernel)> CODE
        item = self.db.find_one({'_id': item_id})
        print(item)
Second, the interactive flow. This is the valuable target.
$: ipython console --existing "kernel-12345.json"
<< print(self.db.uri)
>> "localhost:27018"
<< print(item_id)
>> 1234567890
Is there a common-sense way to implement these two flows? Maybe some magic combination of pdb and an IPython kernel?
By the way, there are other interactive ways to communicate with applications:
Debugging: debug the app with pdb/ipdb/web-pdb, using a snippet such as import pdb; pdb.set_trace() on any line of code.
Generating an IPython notebook snippet from Python code anywhere, with pyvt.
Today I am looking for the answer in the IPython shellapp and kernelapp sources, in the Jupyter console, and in dynamic variable sharing through Redis. Thanks for any kind of ideas!
Maybe Flask shell is what you are looking for: https://flask.palletsprojects.com/en/1.1.x/shell/
One possible way to achieve this is to use an ipdb and IPython combo:
import ipdb

x = 2
ipdb.set_trace()
x = 4
When you run the code, it drops into an ipdb shell:
❯ python3 test.py
> /Users/tarunlalwani/Documents/Projects/SO/opa/test.py(7)<module>()
5 ipdb.set_trace()
6
----> 7 x = 4
ipdb>
And then you can drop into an IPython shell from there:
ipdb> from IPython import embed
ipdb> embed()
Python 3.9.1 (default, Jan 8 2021, 17:17:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: x
Out[1]: 2
In [2]:

Sympy Attribute Error: module 'sympy' has no attribute 'derive_by_array'

I keep receiving this error when trying to take derivatives with sympy.
I updated sympy via the Anaconda Prompt with conda update sympy, but it made no difference when using the derive_by_array function in jupyter-notebook. Perhaps the update isn't registering in Jupyter?
What can I do to fix this issue?
Here is a general example of the code where I receive the error:
import sympy as sp

x = sp.symbols('x')
f = x**2
sp.derive_by_array(f, x)
After uninstalling via the Anaconda Prompt and then installing a newer version of Anaconda, sympy started working.
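For reference, derive_by_array was added around SymPy 1.0, so a kernel that still resolves to an older SymPy will raise exactly this AttributeError even after a conda update in a different environment. A quick check of what the kernel actually sees:

```python
import sympy as sp

# Verify which sympy the notebook kernel actually imports;
# an old version here explains the missing attribute.
print(sp.__version__)

x = sp.symbols('x')
f = x**2
result = sp.derive_by_array(f, x)  # derivative of x**2 with respect to x
print(result)
```

If the printed version is old, the kernel is not using the environment that was updated.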

In Ipython Qt Console sp.info doesn't print inside the console

I have installed IPython 1.1.0 on Ubuntu 12.04 from source.
Similarly, I have installed Numpy-1.8.0, Scipy-0.13.1, and Matplotlib-1.3.1 from source.
When I use the IPython Qt Console, the command sp.info(optimize.fmin) doesn't print its output in the console; instead it prints it in the terminal (pylab). Is there any way it can print in the console too?
import numpy as np
import scipy as sp
from scipy import optimize
sp.info(optimize.fmin)
The output looks like this in the pylab terminal:
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
Parameters
----------
func : callable func(x,*args)
You can use IPython's ? syntax to get information about any object:
optimize.fmin?
That will work in all IPython environments.
However, scipy.info() and numpy.info() both work in the Qt console when I try them, whether or not I start it in pylab mode. I'm not sure why they don't for you.
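If the text still ends up in the launching terminal, one workaround worth trying (a sketch; it assumes np.info's documented output parameter, which defaults to a stdout captured at import time) is to pass the console's current stdout explicitly:

```python
import io
import sys
import numpy as np

# np.info writes to its `output` stream; passing the frontend's current
# sys.stdout explicitly routes the help text into the console rather
# than the terminal that launched the kernel.
np.info(np.sin, output=sys.stdout)

# The same mechanism can capture the text into a string instead:
buf = io.StringIO()
np.info(np.sin, output=buf)
help_text = buf.getvalue()
```

scipy.info is historically the same function re-exported from numpy, so the same parameter should apply there.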
