How can I successfully call a Python script in R using [reticulate] when the Python script uses the 'multiprocessing' module?

I am trying to use the reticulate package to call a Python script from R. Previously this ran successfully, but this time the Python script uses the multiprocessing module and the call never completes; RStudio just gets stuck.
Here is the tested python script, named test_multiprocessing.py.
# test_multiprocessing.py
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
I can run the above script directly in Python, but when I call it from RStudio as below:
library(reticulate)
condaEnvName = 'myEnv'
reticulate::use_condaenv(condaEnvName, required = TRUE)
reticulate::source_python('./test_multiprocessing.py')
R always gets stuck there.
Can you please guide me on how to successfully call a Python script that uses the multiprocessing module, like the one above, from R?
I am using Windows 10 OS.
Thanks.
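A likely culprit: on Windows, multiprocessing uses the 'spawn' start method, which relaunches `sys.executable` to import the worker module; when Python is embedded in R via reticulate, `sys.executable` points at RStudio rather than python.exe, so the spawned workers hang. A commonly suggested workaround (a sketch to verify against your setup, not a confirmed fix) is to redirect multiprocessing to the real interpreter with the documented `multiprocessing.set_executable()` call, and to keep the pool behind a function that R calls explicitly:

```python
# test_multiprocessing.py -- a reticulate-friendly layout (sketch).
import sys
import multiprocessing
from multiprocessing import Pool

if sys.platform == 'win32':
    # Assumption: the conda env's python.exe lives under sys.exec_prefix.
    # Point the 'spawn' start method at it instead of the embedding host.
    multiprocessing.set_executable(sys.exec_prefix + r'\python.exe')

def f(x):
    return x * x

def run_pool():
    # Called explicitly from R (py$run_pool()) rather than at import time,
    # so source_python() itself does not spawn workers.
    with Pool(5) as p:
        return p.map(f, [1, 2, 3])

if __name__ == '__main__':
    print(run_pool())  # → [1, 4, 9]
```

From R, after `source_python()`, calling `py$run_pool()` then triggers the pool deliberately instead of as a side effect of sourcing the file.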

Related

IPython REPL anywhere: how to share application context with IPython console for future interaction?

The IPython console is an extremely powerful instrument for development. It is used for research in general, for developing applications and algorithms, and for making sense of things.
Is there a way to connect the current context of a Python app to an IPython console? Something like import ipyconsole; ipyconsole.set_interactive_from_here().
Here is a more complete description of the situation.
First flow.
Below is some sort of running Python app with an initialized DB and a web-app route.
class App:
    def __init__(self):
        self.db = DB.new_connection("localhost:27018")
        self.var_A = "Just an example variable"

    def run(self):
        self.console = IPythonAppConsole(self)  # console creation
        self.console.initialize()
        self.kernel = self.console.start()
        # print(self.kernel.connection_file)
        # << "kernel-12345.json"

    # let the app be some kind of web/flask application
    # @app.route('/items/<item_id>')
    def get_item(self, item_id=0):
        ### GOOD PLACE for
        ### <import ipyconsole; ipyconsole.set_interactive_from_here(self.kernel)> CODE
        item = self.db.find_one({'_id': item_id})
        print(item)
Second, the interactive flow. This is the valuable target.
$: ipython console --existing "kernel-12345.json"
<< print(self.db.uri)
>> "localhost:27018"
<< print(item_id)
>> 1234567890
Is there a common-sense way to implement these two flows? Maybe there is some magic combination of pdb and an IPython kernel?
By the way, there are other interactive ways to communicate with applications:
Debugging: debug the app with pdb/ipdb/web-pdb, using a snippet like import pdb; pdb.set_trace() on any line of code.
Generating an IPython notebook snippet from Python code, anywhere: pyvt.
Today I am looking for the answer inside the IPython/shellapp and kernelapp sources, with Jupyter console and dynamic variable sharing through Redis. Thanks for any kind of ideas!
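A minimal sketch of the pattern the question describes, built on IPython's real embed_kernel API (provided by the ipykernel package; the helper name and namespace argument below are illustrative, not an existing module):

```python
def attach_kernel(namespace):
    """Hypothetical helper: start an embedded IPython kernel at this point.

    When it runs, the process writes a kernel-XXXX.json connection file,
    and a separate `ipython console --existing kernel-XXXX.json` can attach
    with the names in `namespace` available interactively. Note that the
    call blocks the thread it runs on.
    """
    # Imported lazily so the app still starts without ipykernel installed.
    from IPython import embed_kernel  # real API from the ipykernel package
    embed_kernel(local_ns=namespace)

# Usage sketch inside the route from the question:
# def get_item(self, item_id=0):
#     attach_kernel({'self': self, 'item_id': item_id})
#     item = self.db.find_one({'_id': item_id})
```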
Maybe the Flask shell is what you are looking for: https://flask.palletsprojects.com/en/1.1.x/shell/
One possible way for you to achieve this is to use an ipdb and IPython combo.
import ipdb

x = 2
ipdb.set_trace()
x = 4
When you run the code, it drops into an ipdb shell:
❯ python3 test.py
> /Users/tarunlalwani/Documents/Projects/SO/opa/test.py(7)<module>()
5 ipdb.set_trace()
6
----> 7 x = 4
ipdb>
And then you can drop into an IPython shell from there:
ipdb> from IPython import embed
ipdb> embed()
Python 3.9.1 (default, Jan 8 2021, 17:17:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: x
Out[1]: 2
In [2]:

Use get_log of selenium library to extract all the logs of a session

In Robot Framework, I created a Python script that uses get_log. What I want is to log all console output during a whole test or session; is that possible?
My script is as follows:
from robot.libraries.BuiltIn import BuiltIn

def get_selenium_browser_log():
    selib = BuiltIn().get_library_instance('SeleniumLibrary')
    return selib.driver.get_log('browser')
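Note that get_log('browser') drains the buffer: each call returns only the entries collected since the previous call, so one way to cover a whole session is to call this keyword periodically (for example in each test's teardown) and accumulate the results. A small sketch of post-processing the returned entries; the dict keys shown are the ones Chrome's driver emits, and the filtering helper itself is illustrative:

```python
from datetime import datetime

def format_browser_log(entries, min_level="WARNING"):
    """Illustrative helper: filter and format entries from
    driver.get_log('browser'). Each entry is a dict with 'timestamp'
    (ms since epoch), 'level' ('INFO'/'WARNING'/'SEVERE'), and 'message'.
    """
    order = {"INFO": 0, "WARNING": 1, "SEVERE": 2}
    keep = order.get(min_level, 0)
    lines = []
    for e in entries:
        if order.get(e["level"], 0) >= keep:
            ts = datetime.fromtimestamp(e["timestamp"] / 1000)
            lines.append(f"{ts:%H:%M:%S} {e['level']}: {e['message']}")
    return "\n".join(lines)

sample = [
    {"timestamp": 1600000000000, "level": "INFO", "message": "page loaded"},
    {"timestamp": 1600000001000, "level": "SEVERE", "message": "404 on app.js"},
]
print(format_browser_log(sample))  # only the SEVERE line survives
```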

Python/R ctypes library error: "OSError: lib/libRrefblas.so: undefined symbol: xerbla_"

Firstly, I'm a newbie at R, AWS, and Python. I'm trying to get a Python script with embedded R code running in AWS Lambda using rpy2. I created a Lambda package on an EC2 instance following the instructions here (modified to use Python 3.4). It seems something funky is happening when loading the R libs using ctypes, per the following error in the console:
OSError: lib/libRrefblas.so: undefined symbol: xerbla_
The test file (py_test.py) looks like this:
import os
import ctypes

for file in os.listdir('lib'):
    if os.path.isfile(os.path.join('lib', file)):
        ctypes.cdll.LoadLibrary(os.path.join('lib', file))

os.environ["R_HOME"] = os.getcwd()
os.environ["R_USER"] = os.path.join(os.getcwd(), 'rpy2')
os.environ["R_LIBS"] = os.path.join(os.getcwd(), 'library')
os.environ["LD_LIBRARY_PATH"] = os.path.join(os.getcwd(), 'lib')

import sys
sys.path.append(os.path.join(os.getcwd(), 'rpy2'))

import rpy2
from rpy2 import robjects
def test_handler(event, context):
    robjects.r('''
        f <- function(r, verbose=FALSE) {
            if (verbose) {
                cat("I am calling f().\n")
            }
            2 * pi * r
        }
        print(f(3))
    ''')

test_handler(None, None)
I have lib/libRrefblas.so in my virtual environment. I have scoured google looking for answers but have come up empty. Any suggestions would be greatly appreciated!
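One thing worth checking (an aside, not a confirmed fix for this exact error): on Linux, ctypes.cdll.LoadLibrary loads each library with RTLD_LOCAL by default, so symbols from an already-loaded .so (such as a BLAS providing xerbla_) are not visible while the dynamic linker resolves the next one. Loading with RTLD_GLOBAL exports each library's symbols to subsequent loads; a sketch of the loading loop rewritten that way:

```python
import ctypes
import os

def load_dir_global(libdir):
    """Load every shared library in `libdir` with RTLD_GLOBAL so that
    symbols defined in earlier libraries are visible to later ones."""
    for name in sorted(os.listdir(libdir)):
        path = os.path.join(libdir, name)
        if os.path.isfile(path) and '.so' in name:
            ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
```

The sort gives a deterministic load order; if one library must precede another (e.g. the BLAS before libRrefblas.so), load it explicitly first.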
If you can get the traceback, that could help, but I suspect the problem is that it's looking for xerbla_ in the wrong place. Is xerbla_ defined in the path to RLIBS? Maybe in libR.so?
Turns out the BLAS that ships with R is corrupt. Your best bet is to make sure that BLAS and Lapack are installed on the machine you are building R on and see if you can get it to build with those libraries instead.
So the steps would be to uninstall R, then run
yum -y install lapack-devel.x86_64 lapack.x86_64
yum -y install blas-devel
yum -y install R.x86_64
Check whether it has still installed libRrefblas.so. If it has, try deleting that file and see if R will default to the system BLAS. If you get an error because it is still looking for libRrefblas.so:
rm lib/libRrefblas.so
cp /usr/lib64/libblas.so.3 lib/
mv lib/libblas.so.3 lib/libRrefblas.so

knitr won't run Python commands

I have used knitr for a long time, usually in the RStudio environment. Recently I installed Python (version 3.4.1) on my Windows machine, put it on the path, and tried out Yihui Xie's sample document for Python. But the Python code chunks won't run. From a chunk like this:
{r test-python, engine='python'}
x = 'hello, python world!'
print x
print x.split(' ')
I get an error message like this:
Warning: running command '"python" -c "print '**Write** _something_ in `Markdown` from `Python`!'"' had status 1
running: "python" -c "x = 'hello, python world!'
print x
print x.split(' ')"
File "<string>", line 2
print x
^
SyntaxError: invalid syntax
I'm in Windows 7, running R 3.1.0, with RStudio Version 0.98.847 (beta preview version). Interactive Python opens just fine from the command line.
Any ideas?
Your issue is that you've installed Python 3, but the syntax you're using is Python 2. The py2 -> py3 transition involved changes to the language itself -- in your example, print has changed from a statement to a function. (So print(x) would work in your code above.)
The easiest option is to uninstall Python 3 and install the most recent Python 2.7 (currently Python 2.7.6). Alternatively, onward and upward -- use py3, which just involves tweaking any existing examples you run into in knitr.
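For reference, the chunk body from the question rewritten in Python 3 syntax, with print as a function:

```python
x = 'hello, python world!'
print(x)            # → hello, python world!
print(x.split(' ')) # → ['hello,', 'python', 'world!']
```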

In Ipython Qt Console sp.info doesn't print inside the console

I have installed IPython 1.1.0 on Ubuntu 12.04 from source.
Similarly, I have installed NumPy 1.8.0, SciPy 0.13.1, and Matplotlib 1.3.1 from source.
When I use the IPython Qt console, the command sp.info(optimize.fmin) doesn't print the output in the console; it prints it in the terminal (pylab) instead. Is there any way to make it print in the console too?
import numpy as np
import scipy as sp
from scipy import optimize
sp.info(optimize.fmin)
The output is like this in pylab
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
Parameters
----------
func : callable func(x,*args)
You can use IPython's ? syntax to get information about any object:
optimize.fmin?
That will work in all IPython environments.
However, scipy.info() and numpy.info() both work in the Qt console when I try them, whether or not I start it in pylab mode. I'm not sure why they don't for you.
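As an aside, np.info() (and sp.info(), which is essentially the same function) accepts an output= file argument, so the text can be routed wherever you like; more generally, anything that writes to stdout can be captured with the standard library's contextlib.redirect_stdout. A stdlib-only sketch:

```python
import io
from contextlib import redirect_stdout

def capture_stdout(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return whatever it printed to stdout."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        fn(*args, **kwargs)
    return buf.getvalue()

# e.g. text = capture_stdout(sp.info, optimize.fmin); print(text)
text = capture_stdout(print, "captured!")  # → "captured!\n"
```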