Unable to get mypy to accept code which works correctly; generics appear to be the problem - mypy

The following program is self-documenting. It gives the correct results, but it fails typechecking with mypy --strict.
Maybe there's a bug in mypy. More likely, my type annotations are incorrect. Any ideas?
"""
This code shows a technique for creating user-defined infix operators.
The technique is not my invention. However, I would like to have an implementation that:
a) I understand
b) type-checks with mypy --strict.
This code allows any two-argument function to be called as an operator, using the syntax:
leftArgument <<functionName>> rightArgument
... for the avoidance of doubt, << and >> are literally present in the code.
"""
from typing import Callable, Generic, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")

class WithLeft(Generic[T, U, V]):
    def __init__(self, func, left: T) -> None:
        self.func = func
        self.left = left

    def __rshift__(self, right: U) -> V:
        return self.func(self.left, right)

class WithRight(Generic[T, U, V]):
    def __init__(self, func, right: U) -> None:
        self.func = func
        self.right = right

    def __rlshift__(self, left: T) -> V:
        return self.func(left, self.right)

class Op(Generic[T, U, V]):
    def __init__(self, func) -> None:
        self.func = func

    def __rlshift__(self, left: T) -> WithLeft[T, U, V]:
        return WithLeft(self.func, left)

    def __rshift__(self, right: U) -> WithRight[T, U, V]:
        return WithRight(self.func, right)

if __name__ == "__main__":
    print("Define the operator <<add>>, which adds two ints.")

    @Op
    def add(a: int, b: int) -> int:
        return a + b

    print("Demonstrate that it works.")
    print(f"{12 <<add>> 44} correctly prints {12 + 44}")
    print('Previous line gives error from mypy:')
    print('\tUnsupported operand types for << ("int" and "Op[<nothing>, <nothing>, <nothing>]")')
    print()

    print("Define the operator <<compose>>, which composes two functions.")
    P = TypeVar("P")
    Q = TypeVar("Q")
    R = TypeVar("R")

    @Op
    def compose(left: Callable[[Q], P], right: Callable[[R], Q]) -> Callable[[R], P]:
        return lambda x: left(right(x))

    print("Demonstrate that it works.")
    from math import (
        cos,  # angle in radians, not degrees
        log,  # base e, not base 10
    )
    logcos: Callable[[float], float] = log <<compose>> cos
    print('Previous line gives two errors from mypy:')
    print('\tUnsupported operand types for << ("Callable[[SupportsFloat, SupportsFloat], float]" and "Op[<nothing>, <nothing>, <nothing>]")')
    print('\tUnsupported operand types for >> ("WithLeft[<nothing>, <nothing>, <nothing>]" and "Callable[[SupportsFloat], float]")')
    print()
    print(f"{logcos(1)} correctly prints {log(cos(1))}")
Stack Overflow doesn't like this post because it is mostly code.
So I'll repeat the explanation from the code. No need to read this again!
This code shows a technique for creating user-defined infix operators.
The technique is not my invention. However, I would like to have an implementation that:
a) I understand
b) type-checks with mypy --strict.
This code allows any two-argument function to be called as an operator, using the syntax:
leftArgument <<functionName>> rightArgument
... for the avoidance of doubt, << and >> are literally present in the code.
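One likely culprit: func is untyped in every __init__, so mypy has nothing from which to solve the classes' type variables and infers <nothing> everywhere. A sketch of the left-argument half with func annotated as Callable[[T, U], V] (an assumption about the fix, not tested against every mypy version; how mypy handles reflected operators here may still vary):

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")

class WithLeft(Generic[T, U, V]):
    def __init__(self, func: Callable[[T, U], V], left: T) -> None:
        self.func = func
        self.left = left

    def __rshift__(self, right: U) -> V:
        # Apply the stored function to the captured left argument and `right`.
        return self.func(self.left, right)

class Op(Generic[T, U, V]):
    def __init__(self, func: Callable[[T, U], V]) -> None:
        self.func = func

    def __rlshift__(self, left: T) -> "WithLeft[T, U, V]":
        # `left << op`: int.__lshift__ returns NotImplemented, so Python
        # falls back to this reflected method on Op.
        return WithLeft(self.func, left)

@Op
def add(a: int, b: int) -> int:
    return a + b

print(12 <<add>> 44)  # prints 56 at runtime
```

With the annotation in place, applying the Op decorator to add should let mypy infer Op[int, int, int] instead of Op[<nothing>, <nothing>, <nothing>].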

Related

Python dictionary setdefault() method used as a return value of a recursive function

Say you want to get the nth Fibonacci Number. Then, one possibility is to use the recursive function
def Fib(n, d):
    """Assumes n is an int >= 0, d dictionary
    Returns nth Fibonacci number"""
    if n in d:
        return d[n]
    else:
        d[n] = Fib(n-1, d) + Fib(n-2, d)
        return d[n]
This works quite well. I tried to shorten this to
def Fib(n, d):
    return d.setdefault(n, Fib(n-1, d) + Fib(n-2, d))
But when I call it with
d = {0: 1, 1: 1}
print(Fib(2, d))
or even with Fib(1, d), it goes into an infinite loop and restarts the kernel. In fact, any function of this form, say
def f(n, d):
    return d.setdefault(n, f(n-1, d))
has the same problem. When I tried to debug this, I saw that n keeps decreasing past the value 1. I guess I don't understand the implementation of this method. I presumed that the setdefault method first checks whether the key is in the dictionary and returns the value, and if not, then assigns the default value to the key and returns that default. What am I missing here? (I am using Python 3.9.1 with Spyder 4.2.0.)
You still need a base case, otherwise there's nothing to stop it from calculating fib(-1), fib(-2), fib(-99), ...
def fib(n, d):
    return n if n < 2 else d.setdefault(n, fib(n-1, d) + fib(n-2, d))

print(fib(10, {0: 0, 1: 1}))
55
The problem you are experiencing with setdefault is that Python is an applicative-order language. That means a function's arguments are evaluated before the function is called. In the case of setdefault, we evaluate fib(n-1, d) + fib(n-2, d) before we ever attempt to look up n in d.
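The eager evaluation is easy to observe directly; a minimal sketch (record is a hypothetical helper used only to log when the default expression runs):

```python
calls = []

def record(value):
    # Side effect lets us observe when the default expression is evaluated.
    calls.append(value)
    return value

d = {1: "cached"}

# Even though key 1 is already present, the default argument is
# evaluated BEFORE setdefault ever looks at the dictionary:
result = d.setdefault(1, record("expensive"))

print(result)  # "cached" -- the stored value wins...
print(calls)   # ...but record() ran anyway: ['expensive']
```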
A better interface might be dict.setdefault(key, lambda: somevalue) where the lambda is executed only if the default needs to be set. We could write this as lazydefault below -
def lazydefault(d, key, lazyvalue):
    if key not in d:
        d[key] = lazyvalue()
    return d[key]

def fib(n, d):
    return lazydefault(d, n, lambda: fib(n-1, d) + fib(n-2, d))

print(fib(10, {0: 0, 1: 1}))
55

Higher order function on lists Ocaml

I created a function p that checks if the square of a given value is lower than 30.
Then this function is called in an other function (as argument) to return the first value inside a list with its square less then 30 ( if p is true, basically I have to check if the function p is true or false ).
This is the code:
let p numb =
  let return = (numb * numb) < 30 in return

let find p listT =
  let rec support p listT =
    match listT with
    | [] -> raise (Failure "No element in list for p")
    | hd :: tl -> if p hd then hd
                  else support p tl in
  let ret = support (p listT) in ret

let () =
  let a = [5; 6; 7] in
  let b = find p a in print_int b
But it said on the last line :
Error: This expression (p) has type int -> bool
but an expression was expected of type int -> 'a -> bool
Type bool is not compatible with type 'a -> bool
However, I don't think I'm using higher order functions in the right way, I think it should be more automatic I guess, or not?
First, note that
let return = x in return
can be replaced by
x
Second, your original error is on line 10:
support (p listT)
This line makes the typechecker deduce that the p argument of find is a function that takes one argument (here listT) and returns another function of type int -> bool.
Here's another way to look at your problem, which is as @octachron says.
If you assume that p is a function of type int -> bool, then this recursive call:
support (p listT)
is passing a boolean as the first parameter of support. That doesn't make a lot of sense since the first parameter of support is supposed to be a function.
Another problem with this same expression is that it requires that listT be a value of type int (since this is what p expects as a parameter). But listT is a list of ints, not an int.
A third problem with this expression is that it only passes one parameter to support. But support is expecting two parameters.
Luckily the fix for all these problems is extremely simple: call support p listT, passing p and the list as two separate arguments.

Pyparsing: ParseAction not called

On a simple grammar I am in the bad situation that one of my ParseActions is not called.
For me this is strange as parseActions of a base symbol ("logic_oper") and a derived symbol ("cmd_line") are called correctly. Just "pa_logic_cmd" is not called. You can see this on the output which is included at the end of the code.
As there is no exception on parsing the input string, I am assuming that the grammar is (basically) correct.
import io, sys
import pyparsing as pp

def diag(msg, t):
    print("%s: %s" % (msg, str(t)))

def pa_logic_oper(t): diag('logic_oper', t)
def pa_operand(t): diag('operand', t)
def pa_ident(t): diag('ident', t)
def pa_logic_cmd(t): diag('>>>>>> logic_cmd', t)
def pa_cmd_line(t): diag('cmd_line', t)

def make_grammar():
    semi = pp.Literal(';')
    ident = pp.Word(pp.alphas, pp.alphanums).setParseAction(pa_ident)
    operand = (ident).setParseAction(pa_operand)
    op_and = pp.Keyword('A')
    op_or = pp.Keyword('O')
    logic_oper = ((op_and | op_or) + pp.Optional(operand))
    logic_oper.setParseAction(pa_logic_oper)
    logic_cmd = logic_oper + pp.Suppress(semi)
    logic_cmd.setParseAction(pa_logic_cmd)
    cmd_line = (logic_cmd)
    cmd_line.setParseAction(pa_cmd_line)
    grammar = pp.OneOrMore(cmd_line) + pp.StringEnd()
    return grammar

if __name__ == "__main__":
    inp_str = '''
        A param1;
        O param2;
        A ;
        '''
    grammar = make_grammar()
    print("pp-version:" + pp.__version__)
    parse_res = grammar.parseString(inp_str)

'''USAGE/Output: python test_4.py
pp-version:2.0.3
operand: ['param1']
logic_oper: ['A', 'param1']
cmd_line: ['A', 'param1']
operand: ['param2']
logic_oper: ['O', 'param2']
cmd_line: ['O', 'param2']
logic_oper: ['A']
cmd_line: ['A']
'''
Can anybody give me a hint on this parseAction problem?
Thanks,
The problem is here:
cmd_line = (logic_cmd)
cmd_line.setParseAction(pa_cmd_line)
The first line assigns cmd_line to be the same expression as logic_cmd. You can verify by adding this line:
print("???", cmd_line is logic_cmd)
Then the second line calls setParseAction, which overwrites the parse action of logic_cmd, so the pa_logic_cmd will never get called.
Remove the second line, since you are already testing the calling of the parse action with pa_logic_cmd. You could change to using the addParseAction method instead, but to my mind that is an invalid test (adding 2 parse actions to the same pyparsing expression object).
Or, change the definition of cmd_line to:
cmd_line = pp.Group(logic_cmd)
Now you will have wrapped logic_cmd inside another expression, and you can then independently set and test the running of parse actions on the two different expressions.
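The aliasing itself is ordinary Python object semantics, independent of pyparsing; a minimal sketch with a hypothetical Expr class standing in for a pyparsing expression:

```python
class Expr:
    """Stand-in for a pyparsing expression that holds one parse action."""
    def __init__(self):
        self.parse_action = None

    def set_parse_action(self, fn):
        # Like pyparsing's setParseAction: REPLACES any earlier action.
        self.parse_action = fn

logic_cmd = Expr()
logic_cmd.set_parse_action(lambda: "pa_logic_cmd")

cmd_line = logic_cmd            # plain assignment: same object, not a copy
print(cmd_line is logic_cmd)    # True

cmd_line.set_parse_action(lambda: "pa_cmd_line")
# The first action is gone; both names see only the replacement:
print(logic_cmd.parse_action())  # pa_cmd_line
```

Wrapping with pp.Group creates a new expression object, which is why it lets the two parse actions coexist.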

OCaml: applying second argument first (higher-order functions)

I defined a higher-order function like this:
val func : int -> string -> unit
I would like to use this function in two ways:
other_func (func 5)
some_other_func (fun x -> func x "abc")
i.e., by making functions with one of the arguments already defined. However, the second usage is less concise and readable than the first one. Is there a more readable way to pass the second argument to make another function?
In Haskell, there's a function flip for this. You can define it yourself:
let flip f x y = f y x
Then you can say:
other_func (func 5)
third_func (flip func "abc")
Flip is defined in Jane Street Core as Fn.flip. It's defined in OCaml Batteries Included as BatPervasives.flip. (In other words, everybody agrees this is a useful function.)
The question posed in the headline, "change order of parameters", is already answered. But I am reading your description as "how do I write a new function with the second parameter fixed". So I will answer this simple question with an OCaml toplevel session:
# let func i s = if i < 1 then print_endline "Counter error."
                 else for ix = 1 to i do print_endline s done;;
val func : int -> string -> unit = <fun>
# func 3 "hi";;
hi
hi
hi
- : unit = ()
# let f1 n = func n "curried second param";;
val f1 : int -> unit = <fun>
# f1 4;;
curried second param
curried second param
curried second param
curried second param
- : unit = ()
#

How can I use functools.partial on multiple methods on an object, and freeze parameters out of order?

I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameters being frozen (think of it as generalizing partial to apply to classes). And I'd prefer to do this without editing the original object, just like partial doesn't change its original function.
I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works:
>>> def foo(x, y):
...     print x, y
...
>>> bar = bind(foo, y=3)
>>> bar(2)
2 3
But my proxy class does not work, and I'm not sure why:
>>> class Foo(object):
...     def bar(self, x, y):
...         print x, y
...
>>> a = Foo()
>>> b = PureProxy(a, bar=bind(Foo.bar, y=3))
>>> b.bar(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bar() takes exactly 3 arguments (2 given)
I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows.
import inspect
from copy import copy
from types import MethodType

class PureProxy(object):
    def __init__(self, underlying, **substitutions):
        self.underlying = underlying
        for name in substitutions:
            subst_attr = substitutions[name]
            if hasattr(subst_attr, "underlying"):
                setattr(self, name, MethodType(subst_attr, self, PureProxy))

    def __getattribute__(self, name):
        return getattr(object.__getattribute__(self, "underlying"), name)

def bind(f, *args, **kwargs):
    """ Lets you freeze arguments of a function to certain values. Unlike
    functools.partial, you can freeze arguments by name, which has the bonus
    of letting you freeze them out of order. args will be treated just like
    partial, but kwargs will properly take into account if you are specifying
    a regular argument by name. """
    argspec = inspect.getargspec(f)
    argdict = copy(kwargs)
    if hasattr(f, "im_func"):
        f = f.im_func
    args_idx = 0
    for arg in argspec.args:
        if args_idx >= len(args):
            break
        argdict[arg] = args[args_idx]
        args_idx += 1
    num_plugged = args_idx
    def new_func(*inner_args, **inner_kwargs):
        args_idx = 0
        for arg in argspec.args[num_plugged:]:
            if arg in argdict:
                continue
            if args_idx >= len(inner_args):
                # We can't raise an error here because some remaining arguments
                # may have been passed in by keyword.
                break
            argdict[arg] = inner_args[args_idx]
            args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
    new_func.underlying = f
    return new_func
Update: In case anyone can benefit, here's the final implementation I went with:
import inspect
from copy import copy
from types import MethodType

class PureProxy(object):
    """ Intended usage:
    >>> class Foo(object):
    ...     def bar(self, x, y):
    ...         print x, y
    ...
    >>> a = Foo()
    >>> b = PureProxy(a, bar=FreezeArgs(y=3))
    >>> b.bar(1)
    1 3
    """
    def __init__(self, underlying, **substitutions):
        self.underlying = underlying
        for name in substitutions:
            subst_attr = substitutions[name]
            if isinstance(subst_attr, FreezeArgs):
                underlying_func = getattr(underlying, name)
                new_method_func = bind(underlying_func, *subst_attr.args, **subst_attr.kwargs)
                setattr(self, name, MethodType(new_method_func, self, PureProxy))

    def __getattr__(self, name):
        return getattr(self.underlying, name)

class FreezeArgs(object):
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

def bind(f, *args, **kwargs):
    """ Lets you freeze arguments of a function to certain values. Unlike
    functools.partial, you can freeze arguments by name, which has the bonus
    of letting you freeze them out of order. args will be treated just like
    partial, but kwargs will properly take into account if you are specifying
    a regular argument by name. """
    argspec = inspect.getargspec(f)
    argdict = copy(kwargs)
    if hasattr(f, "im_func"):
        f = f.im_func
    args_idx = 0
    for arg in argspec.args:
        if args_idx >= len(args):
            break
        argdict[arg] = args[args_idx]
        args_idx += 1
    num_plugged = args_idx
    def new_func(*inner_args, **inner_kwargs):
        args_idx = 0
        for arg in argspec.args[num_plugged:]:
            if arg in argdict:
                continue
            if args_idx >= len(inner_args):
                # We can't raise an error here because some remaining arguments
                # may have been passed in by keyword.
                break
            argdict[arg] = inner_args[args_idx]
            args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
    return new_func
You're "binding too deep": change def __getattribute__(self, name): to def __getattr__(self, name): in class PureProxy. __getattribute__ intercepts every attribute access and so bypasses everything you've set with setattr(self, name, ...), leaving those setattr calls with no effect, which is obviously not what you want; __getattr__ is called only for attributes not otherwise defined, so those setattr calls become operative and useful.
In the body of that override, you can and should also change object.__getattribute__(self, "underlying") to self.underlying (since you're no longer overriding __getattribute__). There are other changes I'd suggest (enumerate in lieu of the low-level logic you're using for counters, etc.), but they wouldn't change the semantics.
With the change I suggest, your sample code works (you'll have to keep testing with more subtle cases, of course). BTW, the way I debugged this was simply to stick print statements in the appropriate places (a jurassic-era approach, but still my favorite ;-).
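For anyone landing here later: functools.partial already freezes arguments out of order when you pass them by keyword, and functools.partialmethod (Python 3.4+) covers the bound-method case, so the hand-rolled bind is mainly needed on very old Pythons:

```python
from functools import partial, partialmethod

def foo(x, y):
    return (x, y)

bar = partial(foo, y=3)   # freeze the SECOND argument, by name
print(bar(2))             # (2, 3)

class Foo:
    def pair(self, x, y):
        return (x, y)
    # Same idea for methods: y is frozen, self still binds normally.
    pair3 = partialmethod(pair, y=3)

print(Foo().pair3(1))     # (1, 3)
```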