Call Python from Julia - julia

I am new to Julia and I have a Python function that I want to use in Julia. Basically, the function accepts a dataframe (passed as a numpy ndarray), a filter value, and a list of column indices (into the array), and runs a logistic regression using the statsmodels package in Python. So far I have tried this:
using PyCall
py"""
import pandas as pd
import numpy as np
import random
import statsmodels.api as sm
import itertools
def reg_frac(state, ind_vars):
    rows = 2000
    total_rows = rows*13
    data = pd.DataFrame({
        'state': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm']*rows,
        'y_var': [random.uniform(0, 1) for i in range(total_rows)],
        'school': [random.uniform(0, 10) for i in range(total_rows)],
        'church': [random.uniform(11, 20) for i in range(total_rows)]}).to_numpy()
    try:
        X = sm.add_constant(np.array(data[data[:, 0] == state][:, ind_vars], dtype=float))
        y = np.array(data[data[:, 0] == state, 1], dtype=float)
        model = sm.Logit(y, X).fit(cov_type='HC0', disp=False)
        rmse = np.sqrt(np.square(np.subtract(y, model.predict(X))).mean())
    except:
        rmse = np.nan
    return [state, ind_vars, rmse]
"""
reg_frac(state, ind_vars) = (py"reg_frac"(state::Char, ind_vars::Array{Any}))
However, when I run this, the result is NaN, which I don't expect. The call itself seems to work, but I am missing something.
reg_frac('b', Any[i for i in 2:3])
0.000244 seconds (249 allocations: 7.953 KiB)
3-element Array{Any,1}:
'b'
[2, 3]
NaN
Any help is appreciated.

In the Python code you have strs, while in the Julia code you have Chars; they are not the same type.
Python:
>>> type('a')
<class 'str'>
Julia:
julia> typeof('a')
Char
Hence your comparisons do not work.
Your function could look like this:
reg_frac(state, ind_vars) = (py"reg_frac"(state::String, ind_vars::Array{Any}))
And now:
julia> reg_frac("b", Any[i for i in 2:3])
3-element Array{Any,1}:
"b"
[2, 3]
0.2853707270515166
However, I recommend using Vector{Float64}, which PyCall converts in-flight into a NumPy vector, rather than Vector{Any}, so your code could still be improved (depending on what you actually plan to do).
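To see why the comparison fails silently rather than raising, look at the numpy boolean mask inside reg_frac: comparing the string column against anything that is not an equal Python str yields an all-False mask, every row is filtered out, the Logit fit raises, and the bare except turns that into NaN. A minimal Python sketch of that failure mode (the array below is a hypothetical stand-in for the frame's first column):
import numpy as np

col = np.array(['a', 'b', 'c'], dtype=object)

print(col == 'b')  # [False True False] -> matching rows are selected
print(col == 98)   # [False False False] -> nothing matches; downstream the
                   # regression gets zero rows, raises, and rmse becomes NaN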

Related

Cannot convert Cython memoryviewslice to ndarray

I am trying to write an explicit Successive Overrelaxation Function over a 2D matrix. In this case for an electrostatic potential.
When trying to optimize this in Cython I seem to get an error that I am not quite sure I understand.
%%cython
cimport cython
import numpy as np
cimport numpy as np
from libc.math cimport pi

# SOR function
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
@cython.nonecheck(False)
def SOR_potential(np.float64_t[:, :] potential, mask, int max_iter, float error_threshold, float alpha):
    # the ints
    cdef int height = potential.shape[0]
    cdef int width = potential.shape[1]  # more general, non-quadratic
    cdef int it = 0
    # the floats
    cdef float error = 0.0
    cdef float sor_adjustment
    # the copy array we will iterate over and return
    cdef np.ndarray[np.float64_t, ndim=2] input_matrix = potential.copy()
    # set the ideal alpha if user input is 0.0
    if alpha == 0.0:
        alpha = 2/(1+(pi/((height+width)*0.5)))
    # start the SOR loop. The for loops omit the 0 and -1 indices
    # because they are *shadow points* used for Neumann boundary conditions.
    cdef int row, col
    # iteration loop
    while True:
        # 2-stencil loop
        for row in range(1, height-1):
            for col in range(1, width-1):
                if not(mask[row][col]):
                    potential[row][col] = 0.25*(input_matrix[row-1][col] +
                                                input_matrix[row+1][col] +
                                                input_matrix[row][col-1] +
                                                input_matrix[row][col+1])
                    sor_adjustment = alpha * (potential[row][col] - input_matrix[row][col])
                    input_matrix[row][col] = sor_adjustment + input_matrix[row][col]
                    error += np.abs(input_matrix[row][col] - potential[row][col])
        # by the end of this loop input_matrix and potential have different values
        if error < error_threshold:
            break
        elif it > max_iter:
            break
        else:
            error = 0
            it = it + 1
    return input_matrix, error, it
I used a very simple example array to see whether it would give an error output.
test = [[True, False], [True, False]]
pot = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float64)
SOR_potential(pot, test, 50, 0.1, 0.0)
Gives out this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [30], line 1
----> 1 SOR_potential(pot, test, 50, 0.1, 0.0)
File _cython_magic_6c09a5060df996862b8e35adacc0e25c.pyx:21, in _cython_magic_6c09a5060df996862b8e35adacc0e25c.SOR_potential()
TypeError: Cannot convert _cython_magic_6c09a5060df996862b8e35adacc0e25c._memoryviewslice to numpy.ndarray
But when I delete the np.float64_t[:, :] part from
def SOR_potential(np.float64_t[:, :] potential,...)
the code works. Of course, the simple 2x2 matrix will not converge but it gives no errors. Where is the mistake here?
I also tried importing the modules differently as suggested here
Cython: how to resolve TypeError: Cannot convert memoryviewslice to numpy.ndarray?
but then I got two type-mismatch errors instead of one.
Note: I would also like to ask, how would I define a numpy array of booleans to put in front of the "mask" input in the function?
A minimal reproducible example of your error message would look like this:
def foo(np.float64_t[:, :] A):
    cdef np.ndarray[np.float64_t, ndim=2] B = A.copy()
    # ... do something with B ...
    return B
The problem is that A is a memoryview while B is an np.ndarray. If both A and B are memoryviews, i.e.
def foo(np.float64_t[:, :] A):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B ...
    return np.asarray(B)
your example will compile without errors. Note that you then need to call np.asarray if you want to return a np.ndarray.
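For instance, a quick check from the Python side (a sketch, assuming the fixed foo above was compiled in a %%cython cell) confirms the round trip back to an ndarray:
import numpy as np

A = np.ones((3, 3), dtype=np.float64)
B = foo(A)      # the memoryview parameter accepts any float64 buffer
print(type(B))  # <class 'numpy.ndarray'>, because foo returns np.asarray(B)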
Regarding your second question: You could use a memoryview with dtype np.uint8_t
def foo(np.float64_t[:, :] A, np.uint8_t[:, :] mask):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)
and call it like this from Python:
mask = np.array([[True, True], [False, False]], dtype=bool)
A = np.ones((2,2), dtype=np.float64)
foo(A, mask)
PS: If your array's buffers are guaranteed to be C-contiguous, you can use contiguous memoryviews for better performance:
def foo(np.float64_t[:, ::1] A, np.uint8_t[:, ::1] mask):
    cdef np.float64_t[:, ::1] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)
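If you are unsure whether a given buffer is C-contiguous, numpy can check and, if necessary, repair that before the call; a minimal sketch:
import numpy as np

A = np.ones((4, 4), dtype=np.float64)
view = A[:, ::2]                   # strided view: not C-contiguous
print(view.flags['C_CONTIGUOUS'])  # False -> a [:, ::1] parameter would reject it
C = np.ascontiguousarray(view)     # copies into a C-contiguous buffer
print(C.flags['C_CONTIGUOUS'])     # True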

How do we represent the Python/numpy function "zeros_like" in Julia language

I am a bit new to Julia, but I have some knowledge of Python. I want to know how to express NumPy's zeros_like function in Julia.
The python code is below:
import numpy as np
a = [3] #vector of one number
b = np.zeros_like(a)
The Base.zero function returns the zero element (the "additive identity element", per the docs) for the type of its input argument:
julia> a = [3]
1-element Array{Int64,1}:
3
julia> zero(a)
1-element Array{Int64,1}:
0
or
julia> zeros(Int, size(a))
1-element Array{Int64,1}:
0
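For reference, NumPy's zeros_like preserves both the shape and the dtype of its argument, which is what zero(a) mirrors on the Julia side; a minimal Python sketch:
import numpy as np

a = np.array([3])                  # shape (1,), integer dtype
b = np.zeros_like(a)               # array([0]): same shape and dtype as a
c = np.zeros_like(a, dtype=float)  # the dtype can also be overridden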

What does assignment like x\y = 1 mean in Julia?

While experimenting with Julia 1.0, I've noticed that I can do something like this:
x\y = 1
The REPL then shows:
\ (generic function with 1 method)
which means it is accepted as a valid definition (the interpreter doesn't complain). However, x, y, and x\y all remain undefined.
What is the meaning of such expression?
It is a new function definition that (kind of) shadows the left division operator \ from Base, which is already defined for some types in Julia. The definition is equivalent to \(x, y) = 1 (the names of the parameters do not matter) and accepts arguments of any type. It prevents Julia from loading Base.\ because of the name conflict, so no matter what the input is, your new \ returns the same value.
julia> x\y = 5
julia> a = 3; b = 4;
julia> a\b
5
julia> c = "Lorem ipsum"; d = "dolor";
julia> c\d
5
If you have already used the \ defined in Base, your redefinition will throw an error saying that extending Base.\ requires an explicit import with import Base.\. Defining \ after import Base.\, however, behaves differently: it extends the operator Base.\ with a new method.
julia> 1\[1,3]
2-element Array{Float64,1}:
1.0
3.0
julia> import Base.\
julia> x\y=3
\ (generic function with 152 methods)
julia> 1\[1,3]
2-element Array{Int64,1}:
3
3

rpy2 to subset RS4 object (expressionSet)

I'm building an ExpressionSet class using rpy2, following the relevant tutorial as a guide. One of the most common things I do with the Eset object is subsetting, which in native R is as straightforward as
eset2<-eset1[1:10,1:5] # first ten features, first five samples
which returns a new ExpressionSet object with subsets of both the expression and phenotype data, using the given indices. Rpy2's RS4 object doesn't seem to allow direct subsetting, or have rx/rx2 attributes unlike e.g. RS3 vectors. I tried, with ~50% success, adding a '_subset' function (below) that creates subsets of these two datasets separately and assigns them back to Eset, but is there a more straightforward way that I'm missing?
from rpy2 import (robjects, rinterface)
from rpy2.robjects import (r, pandas2ri, Formula)
from rpy2.robjects.packages import (importr,)
from rpy2.robjects.methods import (RS4,)
class ExpressionSet(RS4):
    # funcs to get the attributes
    def _assay_get(self):  # returns an environment, use ['exprs'] key to access
        return self.slots["assayData"]
    def _pdata_get(self):  # returns an RS4 object, use .slots("data") to access
        return self.slots["phenoData"]
    def _feats_get(self):  # returns an RS4 object, use .slots("data") to access
        return self.slots["featureData"]
    def _annot_get(self):  # slots returns a tuple, just pick 1st (only) element
        return self.slots["annotation"][0]
    def _class_get(self):  # slots returns a tuple, just pick 1st (only) element
        return self.slots["class"][0]

    # funcs to set the attributes
    def _assay_set(self, value):
        self.slots["assayData"] = value
    def _pdata_set(self, value):
        self.slots["phenoData"] = value
    def _feats_set(self, value):
        self.slots["featureData"] = value
    def _annot_set(self, value):
        self.slots["annotation"] = value
    def _class_set(self, value):
        self.slots["class"] = value

    # funcs to work with the above to get/set the data
    def _exprs_get(self):
        return self.assay["exprs"]
    def _pheno_get(self):
        pdata = self.pData
        return pdata.slots["data"]
    def _exprs_set(self, value):
        assay = self.assay
        assay["exprs"] = value
    def _pheno_set(self, value):
        pdata = self.pData
        pdata.slots["data"] = value

    assay = property(_assay_get, _assay_set, None, "R attribute 'assayData'")
    pData = property(_pdata_get, _pdata_set, None, "R attribute 'phenoData'")
    fData = property(_feats_get, _feats_set, None, "R attribute 'featureData'")
    annot = property(_annot_get, _annot_set, None, "R attribute 'annotation'")
    exprs = property(_exprs_get, _exprs_set, None, "R attribute 'exprs'")
    pheno = property(_pheno_get, _pheno_set, None, "R attribute 'pheno'")

    def _subset(self, features=None, samples=None):
        features = features if features else self.exprs.rownames
        samples = samples if samples else self.exprs.colnames
        fx = robjects.BoolVector([f in features for f in self.exprs.rownames])
        sx = robjects.BoolVector([s in samples for s in self.exprs.colnames])
        self.pheno = self.pheno.rx(sx, self.pheno.colnames)
        self.exprs = self.exprs.rx(fx, sx)  # can't assign back to exprs this way
When doing
eset2<-eset1[1:10,1:5]
in R, the R S4 method "[" with the signature ("ExpressionSet") is fetched and run using the parameter values you provided.
The documentation suggests using getmethod (see http://rpy2.readthedocs.org/en/version_2.7.x/generated_rst/s4class.html#methods) to facilitate fetching the relevant S4 method, but its behaviour seems to have changed after the documentation was written (resolution of the dispatch through inheritance is no longer done).
The following should do it though:
from rpy2.robjects.packages import importr
methods = importr('methods')
r_subset_expressionset = methods.selectMethod("[", "ExpressionSet")
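Once fetched, the method is an ordinary R function object and can be called directly. A hedged sketch, where eset1 is assumed to be an existing ExpressionSet and the index vectors mirror the eset1[1:10, 1:5] example above:
from rpy2 import robjects

features = robjects.IntVector(range(1, 11))  # R's 1:10 (1-based, inclusive)
samples = robjects.IntVector(range(1, 6))    # R's 1:5
eset2 = r_subset_expressionset(eset1, features, samples)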
With thanks to @lgautier's answer, here's a snippet of my above code, modified to allow subsetting of the RS4 object:
from multipledispatch import dispatch

@dispatch(RS4)
def eset_subset(eset, features=None, samples=None):
    """
    subset an RS4 eset object
    """
    features = features if features else eset.exprs.rownames
    samples = samples if samples else eset.exprs.colnames
    fx = robjects.BoolVector([f in features for f in eset.exprs.rownames])
    sx = robjects.BoolVector([s in samples for s in eset.exprs.colnames])
    esub = methods.selectMethod("[", signature="ExpressionSet")(eset, fx, sx)
    return esub

How can I get a flat result from a list comprehension instead of a nested list?

I have a list A, and a function f which takes an item of A and returns a list. I can use a list comprehension to convert everything in A like [f(a) for a in A], but this returns a list of lists. Suppose my input is [a1,a2,a3], resulting in [[b11,b12],[b21,b22],[b31,b32]].
How can I get the flattened list [b11,b12,b21,b22,b31,b32] instead? In other words, in Python, how can I get what is traditionally called flatmap in functional programming languages, or SelectMany in .NET?
(In the actual code, A is a list of directories, and f is os.listdir. I want to build a flat list of subdirectories.)
See also: How do I make a flat list out of a list of lists? for the more general problem of flattening a list of lists after it's been created.
You can have nested iterations in a single list comprehension:
[filename for path in dirs for filename in os.listdir(path)]
which is equivalent (at least functionally) to:
filenames = []
for path in dirs:
    for filename in os.listdir(path):
        filenames.append(filename)
>>> from functools import reduce # not needed on Python 2
>>> list_of_lists = [[1, 2],[3, 4, 5], [6]]
>>> reduce(list.__add__, list_of_lists)
[1, 2, 3, 4, 5, 6]
The itertools solution is more efficient, but this feels very pythonic.
You can find a good answer in the itertools recipes:
import itertools
def flatten(list_of_lists):
    return list(itertools.chain.from_iterable(list_of_lists))
The question asked for flatmap. Some implementations were proposed above, but they may unnecessarily create intermediate lists. Here is one implementation based on iterators.
def flatmap(func, *iterable):
    return itertools.chain.from_iterable(map(func, *iterable))
In [148]: list(flatmap(os.listdir, ['c:/mfg','c:/Intel']))
Out[148]: ['SPEC.pdf', 'W7ADD64EN006.cdr', 'W7ADD64EN006.pdf', 'ExtremeGraphics', 'Logs']
In Python 2.x, use itertools.imap in place of map.
You could just do the straightforward:
subs = []
for d in dirs:
    subs.extend(os.listdir(d))
You can concatenate lists using the normal addition operator:
>>> [1, 2] + [3, 4]
[1, 2, 3, 4]
The built-in function sum will add the numbers in a sequence and can optionally start from a specific value:
>>> sum(xrange(10), 100)
145
Combine the above to flatten a list of lists:
>>> sum([[1, 2], [3, 4]], [])
[1, 2, 3, 4]
You can now define your flatmap:
>>> def flatmap(f, seq):
... return sum([f(s) for s in seq], [])
...
>>> flatmap(range, [1,2,3])
[0, 0, 1, 0, 1, 2]
Edit: I just saw the critique in the comments for another answer and I guess it is correct that Python will needlessly build and garbage collect lots of smaller lists with this solution. So the best thing that can be said about it is that it is very simple and concise if you're used to functional programming :-)
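To make that critique concrete: each + in the running sum copies everything accumulated so far, so flattening n one-element lists this way does O(n^2) work, while a nested comprehension is linear. A quick, illustrative way to observe the gap yourself (timings vary by machine, so none are quoted here):
import timeit

many = [[i] for i in range(10_000)]
print(timeit.timeit(lambda: sum(many, []), number=10))                      # quadratic
print(timeit.timeit(lambda: [x for sub in many for x in sub], number=10))  # linear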
subs = []
map(subs.extend, (os.listdir(d) for d in dirs))
(but Ants's answer is better; +1 for him). Note that this relies on Python 2's eager map; in Python 3, map is lazy, so nothing would be appended unless the map object were consumed.
import itertools
x=[['b11','b12'],['b21','b22'],['b31']]
y=list(itertools.chain(*x))
print(y)
itertools works in Python 2.3 and later.
You could try itertools.chain(), like this:
import itertools
import os
dirs = ["c:\\usr", "c:\\temp"]
subs = list(itertools.chain(*[os.listdir(d) for d in dirs]))
print(subs)
itertools.chain() returns an iterator, hence the passing to list().
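One caveat worth knowing: chain(*iterables) unpacks its argument eagerly into a tuple, while chain.from_iterable consumes the outer iterable lazily, which matters when that outer sequence is itself a generator; a minimal sketch:
import itertools

def gen():
    yield [1, 2]
    yield [3, 4]

flat = list(itertools.chain.from_iterable(gen()))  # pulls inner lists on demand
print(flat)  # [1, 2, 3, 4]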
This is the simplest way to do it:
from functools import reduce  # needed on Python 3

def flatMap(array):
    return reduce(lambda a, b: a + b, array)
The 'a + b' refers to the concatenation of two lists.
You can use pyxtension:
from pyxtension.streams import stream
stream([[1, 2, 3], [4, 5], [], [6]]).flatMap() == range(1, 7)
Google brought me this solution:
def flatten(l):
    if isinstance(l, list):
        return sum(map(flatten, l), [])
    else:
        return [l]
If listA = [list1, list2, list3], then
flattened_list = reduce(lambda x, y: x + y, listA)
will do it (on Python 3, import reduce from functools first).
