Debug coin change Dynamic Programming - recursion

My coin change dynamic programming implementation is failing for some of the test cases, and I am having a hard time figuring out why:
Problem Statement: Given an amount and a list of coins, find the minimum number of coins required to make that amount.
Example:
Target Amount: 63
Coin List: [1, 5, 10, 21, 25]
Output: [21, 21, 21]
def coin_change(change_list, amount, tried):
    if amount <= 0:
        return []
    if amount in change_list:
        return [amount]
    if amount in tried:
        return tried[amount]
    coin_count = []
    for change in change_list:
        if change < amount:
            changes = coin_change(change_list, amount - change, tried)
            changes.append(change)
            coin_count.append(changes)
    min_changes = coin_count[0][:]
    for x in coin_count[1:]:
        if len(min_changes) >= len(x):
            min_changes = x[:]
    tried[amount] = min_changes[:]
    return min_changes

def main():
    for amount in range(64):
        changes = coin_change([1, 5, 10, 21, 25], amount, {})
        if sum(changes) != amount:
            print "WRONG: Change for %d is: %r" % (amount, changes)
        else:
            # print "Change for %d is: %r" % (amount, changes)
            pass

if __name__ == "__main__":
    main()
Trinket: https://trinket.io/python/43fcff035e

You're corrupting the list changes by appending to it inside the loop: coin_change can return the memoised list stored in tried directly, so the append mutates the cached result. Try this:
Replace these two lines:
changes.append(change)
coin_count.append(changes)
With:
_changes = changes[:] + [change]
coin_count.append(_changes)
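
For reference, this is how the loop reads with the fix applied (a sketch of just the changed loop; the copy keeps the memoised lists in tried from being mutated):

coin_count = []
for change in change_list:
    if change < amount:
        changes = coin_change(change_list, amount - change, tried)
        # Copy before appending: `changes` may be the very list cached in
        # `tried`, and mutating it would corrupt later lookups.
        coin_count.append(changes[:] + [change])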

Related

What is the best way to implement asynchronous "priority" chunking?

The closest implementation I can find is aiostream's chunks, which allows the generation of "chunks of size n from an asynchronous sequence. The chunks are lists, and the last chunk might contain less than n elements".
I have implemented something similar but the key difference is that it prioritises fulfilling one batch at a time as opposed to multiple batches at once.
import asyncio
import itertools
import time

import aiostream
from collections import deque

class IterableAsyncQueue:
    def __init__(self):
        self.queue = asyncio.Queue()

    async def put(self, value):
        await self.queue.put(value)

    def __aiter__(self):
        return self

    async def __anext__(self):
        return await self.queue.get()

class Batch:
    def __init__(self, n):
        self.batch_size = n

    def __call__(self, iterable, *args):
        self.iterable = iterable
        self.calls = deque()
        self.pending = set()
        self.pending_return = asyncio.Queue()
        self.initialised = False
        return self

    def __iter__(self):
        iterable = iter(self.iterable)
        return iter(lambda: tuple(itertools.islice(iterable, self.batch_size)), ())

    def __aiter__(self):
        return self

    async def __anext__(self):
        self.pending |= {asyncio.create_task(self.iterable.__anext__())
                         for _ in range(self.batch_size)}
        if self.initialised:
            # Queue up behind whichever consumer is currently filling a batch.
            future = asyncio.get_running_loop().create_future()
            self.calls.append(future)
            await future
        else:
            self.initialised = True
        batch = []
        while len(batch) < self.batch_size:
            done, _ = await asyncio.wait(self.pending, return_when=asyncio.FIRST_COMPLETED)
            done = list(done)[0]
            batch.append(await done)
            self.pending.discard(done)
        # Hand off to the next waiting consumer.
        next_call = self.calls.popleft()
        next_call.set_result(None)
        return batch

async def consumer(n, a):
    start = time.time()
    async for x in a:
        print(n, x, time.time() - start)

async def producer(q):
    for x in range(50):
        await asyncio.sleep(0.5)
        await q.put(x)

q = IterableAsyncQueue()
# a = Batch(5)(q)
a = aiostream.stream.chunks(q, 5)

loop = asyncio.get_event_loop()
loop.create_task(producer(q))
loop.create_task(consumer(1, a))
loop.create_task(consumer(2, a))
loop.run_forever()
The output using aiostream.stream.chunks:
1 [0, 2, 4, 6, 8] 4.542179107666016
2 [1, 3, 5, 7, 9] 5.04422402381897
1 [10, 12, 14, 16, 18] 9.575451850891113
2 [11, 13, 15, 17, 19] 10.077155828475952
The output using my implementation of priority batch:
1 [0, 1, 2, 3, 4] 2.519313097000122
2 [5, 6, 7, 8, 9] 5.031418323516846
1 [10, 11, 12, 13, 14] 7.543889045715332
2 [15, 16, 17, 18, 19] 10.052537202835083
It seems to me that the priority batch is fundamentally more useful, since it yields results sooner than chunks, allowing the calling code to await another batch. This means that if there are m consumers, each awaiting a batch of size n, then there are always between (m-1)×n and m×n results being waited upon. With the chunks implementation the number of results being waited upon varies between m and m×n.
What I would like to know is: why haven't I been able to find an existing implementation of this, and is this the best way of implementing it?
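
For comparison, here is a minimal sketch of the same one-batch-at-a-time idea built on a shared asyncio.Lock instead of a deque of futures (the names are illustrative, not from the code above; it also pulls items sequentially rather than pre-spawning tasks, so it reproduces the ordering behaviour but not the exact concurrency profile):

import asyncio

async def priority_chunks(source, n, lock):
    # Fill one batch completely while holding the shared lock, so another
    # consumer waiting on the same lock cannot interleave its own batch.
    while True:
        async with lock:
            batch = [await source.__anext__() for _ in range(n)]
        yield batch

# Hypothetical usage with the IterableAsyncQueue q from above:
#   lock = asyncio.Lock()
#   loop.create_task(consumer(1, priority_chunks(q, 5, lock)))
#   loop.create_task(consumer(2, priority_chunks(q, 5, lock)))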

How can I tell mypy that a condition is guaranteed?

My actual case is more complicated, but the MVCE is
from typing import List


def find_largest(numbers: List[int]) -> List[int]:
    """
    >>> find_largest([3, 4, 5, 5, 3, 1, -2, 4, 3, 3])
    [5, 5]
    """
    assert len(numbers) > 0  # guaranteed by caller
    largest_numbers = None
    value = None
    for number in numbers:
        if value is None or number > value:
            largest_numbers = [number]
            value = number
        elif number == value:
            largest_numbers.append(number)
    return largest_numbers


if __name__ == '__main__':
    import doctest
    doctest.testmod()
When I run mypy on this, I get:
mytest.py:18: error: Incompatible return value type (got "Optional[List[int]]", expected "List[int]")
Found 1 error in 1 file (checked 1 source file)
But constraints that mypy cannot see guarantee that None is never returned. How can I hint that to mypy? (Initializing largest_numbers with something else is NOT possible.)
As far as mypy can tell, your code can still return None, so it considers the error correct. Assuming you can't restructure the code, you can force the return value to be non-optional with:
assert largest_numbers
return largest_numbers
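A slightly more precise variant: an empty list is also falsy, so asserting against None narrows exactly the Optional part of the type:

assert largest_numbers is not None  # mypy narrows Optional[List[int]] to List[int]
return largest_numbers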
Alternatively, use typing.cast:
To the type checker this signals that the return value has the designated type, but at runtime we intentionally don’t check anything (we want this to be as fast as possible).
from typing import List, cast

...

def find_largest(numbers: List[int]) -> List[int]:
    """
    >>> find_largest([3, 4, 5, 5, 3, 1, -2, 4, 3, 3])
    [5, 5]
    """
    assert len(numbers) > 0  # guaranteed by caller
    largest_numbers = None
    value = None
    for number in numbers:
        if value is None or number > value:
            largest_numbers = [number]
            value = number
        elif number == value:
            largest_numbers.append(number)
    return cast(List[int], largest_numbers)
...
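
Note that cast is purely a signal to the type checker; for instance, this passes type checking even though the value is plainly wrong:

from typing import List, cast

broken = cast(List[int], None)  # no runtime check is performed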

Go time comparison

I'm trying to create a simple function to change the time zone of a time value to another (let's say UTC to +0700 WIB). Here is the source code. I have two functions: GenerateWIB, which changes a time's zone to +0700 WIB while keeping the same wall-clock datetime, and GenerateUTC, which changes a given time's zone to UTC. GenerateUTC works perfectly, while GenerateWIB does not.
expect := time.Date(2016, 12, 12, 1, 2, 3, 4, wib)
t1 := time.Date(2016, 12, 12, 1, 2, 3, 4, time.UTC)

res := GenerateWIB(t1)
if res != expect {
    fmt.Printf("WIB Expect %+v, but get %+v", expect, res)
}
The res != expect condition is always true, with this result:
WIB Expect 2016-12-12 01:02:03.000000004 +0700 WIB, but get 2016-12-12 01:02:03.000000004 +0700 WIB
But it is the same time, right? Did I miss something?
There is an .Equal() method to compare times:
if !res.Equal(expect) {
...
Quoting the doc:
Note that the Go == operator compares not just the time instant but also the Location and the monotonic clock reading. Therefore, Time values should not be used as map or database keys without first guaranteeing that the identical Location has been set for all values, which can be achieved through use of the UTC or Local method, and that the monotonic clock reading has been stripped by setting t = t.Round(0). In general, prefer t.Equal(u) to t == u, since t.Equal uses the most accurate comparison available and correctly handles the case when only one of its arguments has a monotonic clock reading.
If you look at the code for the time.Time(*) struct, you can see that this struct has three private fields :
type Time struct {
    ...
    wall uint64
    ext  int64
    ...
    loc *Location
}
and the comments about those fields clearly indicate that, depending on how the Time struct was built, two Time describing the same point in time may have different values for these fields.
Running res == expect compares the values of these inner fields; running res.Equal(expect) does the comparison you actually expect.
(*) time/time.go source code on the master branch as of Oct 27th, 2020
Time values in Go should be compared with the Equal method. time.Date returns a value of type Time:
func Date(year int, month Month, day, hour, min, sec, nsec int, loc *Location) Time
and the Time type has an Equal method:
func (t Time) Equal(u Time) bool
Equal reports whether t and u represent the same time instant. Two times can be equal even if they are in different locations. For example, 6:00 +0200 CEST and 4:00 UTC are Equal. See the documentation on the Time type for the pitfalls of using == with Time values; most code should use Equal instead.
Example
package main

import (
    "fmt"
    "time"
)

func main() {
    secondsEastOfUTC := int((8 * time.Hour).Seconds())
    beijing := time.FixedZone("Beijing Time", secondsEastOfUTC)

    // Unlike the equal operator, Equal is aware that d1 and d2 are the
    // same instant but in different time zones.
    d1 := time.Date(2000, 2, 1, 12, 30, 0, 0, time.UTC)
    d2 := time.Date(2000, 2, 1, 20, 30, 0, 0, beijing)

    datesEqualUsingEqualOperator := d1 == d2
    datesEqualUsingFunction := d1.Equal(d2)

    fmt.Printf("datesEqualUsingEqualOperator = %v\n", datesEqualUsingEqualOperator)
    fmt.Printf("datesEqualUsingFunction = %v\n", datesEqualUsingFunction)
}
datesEqualUsingEqualOperator = false
datesEqualUsingFunction = true
Resources
Time type documentation
Equal method documentation
time.Date

Push dictionary? How to achieve this in Lua?

Say I have this dictionary in Lua
places = {dest1 = 10, dest2 = 20, dest3 = 30}
In my program I check whether the dictionary has reached my size limit, in this case 3. How do I push the oldest key/value pair out of the dictionary and add a new one?
places["newdest"] = 50
--places should now look like this: dest3 pushed off, newdest added, and the dictionary has kept its size
places = {newdest = 50, dest1 = 10, dest2 = 20}
It's not too difficult to do this if you really need it, and it's easily reusable as well.
local function ld_next(t, i) -- This is an ordered iterator, oldest first.
    if i <= #t then
        return i + 1, t[i], t[t[i]]
    end
end

local limited_dict = {__newindex = function(t, k, v)
    if #t == t[0] then -- Pop the oldest entry.
        t[table.remove(t, 1)] = nil
    end
    table.insert(t, k)
    rawset(t, k, v)
end, __pairs = function(t)
    return ld_next, t, 1
end}

local t = setmetatable({[0] = 3}, limited_dict)

t['dest1'] = 10
t['dest2'] = 20
t['dest3'] = 30
t['dest4'] = 50

for i, k, v in pairs(t) do print(k, v) end
dest2 20
dest3 30
dest4 50
The order is stored in the numeric indices, with the 0th index indicating the limit of unique keys that the table can have.
Given that dictionary keys do not record their insertion order, I wrote something that should help you accomplish what you want regardless.
function push_old(t, k, v)
    local z = fifo[1] -- fifo holds the keys, oldest first
    t[z] = nil
    t[k] = v
    table.insert(fifo, k)
    table.remove(fifo, 1)
end
You would need to create the fifo table first, based on the order the keys were entered (for instance, fifo = {"dest3", "dest2", "dest1"}, based on your post, from first entered to last entered), then use:
push_old(places, "newdest", 50)
and the function will do the work. Happy holidays!

Which of these is pythonic? and Pythonic vs. Speed

I'm new to Python and just wrote this module-level function:
def _interval(patt):
    """ Converts a string pattern of the form '1y 42d 14h56m'
    to a timedelta object.
    y - years (365 days), M - months (30 days), w - weeks, d - days,
    h - hours, m - minutes, s - seconds"""
    m = _re.findall(r'([+-]?\d*(?:\.\d+)?)([yMwdhms])', patt)
    args = {'weeks': 0.0,
            'days': 0.0,
            'hours': 0.0,
            'minutes': 0.0,
            'seconds': 0.0}
    for (n, q) in m:
        if q == 'y':
            args['days'] += float(n)*365
        elif q == 'M':
            args['days'] += float(n)*30
        elif q == 'w':
            args['weeks'] += float(n)
        elif q == 'd':
            args['days'] += float(n)
        elif q == 'h':
            args['hours'] += float(n)
        elif q == 'm':
            args['minutes'] += float(n)
        elif q == 's':
            args['seconds'] += float(n)
    return _dt.timedelta(**args)
My issue is with the for loop here, i.e. the long if/elif chain, and I was wondering if there is a more pythonic way of writing it.
So I re-wrote the function as:
def _interval2(patt):
    m = _re.findall(r'([+-]?\d*(?:\.\d+)?)([yMwdhms])', patt)
    args = {'weeks': 0.0,
            'days': 0.0,
            'hours': 0.0,
            'minutes': 0.0,
            'seconds': 0.0}
    argsmap = {'y': ('days', lambda x: float(x)*365),
               'M': ('days', lambda x: float(x)*30),
               'w': ('weeks', lambda x: float(x)),
               'd': ('days', lambda x: float(x)),
               'h': ('hours', lambda x: float(x)),
               'm': ('minutes', lambda x: float(x)),
               's': ('seconds', lambda x: float(x))}
    for (n, q) in m:
        args[argsmap[q][0]] += argsmap[q][1](n)
    return _dt.timedelta(**args)
I timed both versions using the timeit module and found that the second one took about 5-6 seconds longer (for the default number of repeats).
So my question is:
1. Which code is considered more pythonic?
2. Is there still a more pythonic way of writing this function?
3. What about the trade-offs between pythonicity and other aspects (like speed in this case) of programming?
p.s. I kinda have an OCD for elegant code.
EDITED _interval2 after seeing this answer:
argsmap = {'y': ('days', 365),
           'M': ('days', 30),
           'w': ('weeks', 1),
           'd': ('days', 1),
           'h': ('hours', 1),
           'm': ('minutes', 1),
           's': ('seconds', 1)}

for (n, q) in m:
    args[argsmap[q][0]] += float(n)*argsmap[q][1]
You seem to create a lot of lambdas every time you parse. You really don't need a lambda, just a multiplier. Try this:
def _factor_for(what):
    if what == 'y': return 365
    elif what == 'M': return 30
    elif what in ('w', 'd', 'h', 'm', 's'): return 1
    else: raise ValueError("Invalid specifier %r" % what)

for (n, q) in m:
    args[argsmap[q][0]] += _factor_for(q) * float(n)

To keep it fast, though, define _factor_for at module level rather than as a nested function or method, so it isn't re-created on every call.
(I have not timed this, but) if you're going to use this function often, it might be worth precompiling the regular expression.
Here's my take on your function:
re_timestr = re.compile("""
    ((?P<years>\d+)y)?\s*
    ((?P<months>\d+)M)?\s*
    ((?P<weeks>\d+)w)?\s*
    ((?P<days>\d+)d)?\s*
    ((?P<hours>\d+)h)?\s*
    ((?P<minutes>\d+)m)?\s*
    ((?P<seconds>\d+)s)?
    """, re.VERBOSE)

def interval3(patt):
    p = {}
    match = re_timestr.match(patt)
    if not match:
        raise ValueError("invalid pattern : %s" % (patt))
    for k, v in match.groupdict("0").iteritems():
        p[k] = int(v)  # cast string to int
    p["days"] += p.pop("years") * 365   # convert years to days
    p["days"] += p.pop("months") * 30   # convert months to days
    return datetime.timedelta(**p)
Update
From this question, it looks like precompiling regex patterns does not bring a noticeable performance improvement, since Python caches and reuses them anyway. You only save the time it takes to check the cache, which, unless you are repeating it numerous times, is negligible.
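A small illustration of that point (the re module keeps recently compiled patterns in an internal cache):

import re

pat = re.compile(r"\d+")
pat.findall("10a20")           # uses the precompiled pattern object
re.findall(r"\d+", "10a20")    # compiled on first use, then served from re's cache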
Update 2
As you quite rightly pointed out, this solution does not support interval3("1h 30s" + "2h 10m"). However, timedelta supports arithmetic operations, which means you can still express it as interval3("1h 30s") + interval3("2h 10m").
Also, as mentioned by some of the comments on the question, you may want to avoid supporting "years" and "months" in the inputs. There's a reason why timedelta does not support those arguments; they cannot be handled correctly (and incorrect code is almost never elegant).
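For instance, timedelta rejects such arguments outright:

import datetime

datetime.timedelta(years=1)  # raises TypeError: 'years' is an invalid keyword argument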
Here's another version, this time with support for float, negative values, and some error checking.
re_timestr = re.compile("""
    ^\s*
    ((?P<weeks>[+-]?\d+(\.\d*)?)w)?\s*
    ((?P<days>[+-]?\d+(\.\d*)?)d)?\s*
    ((?P<hours>[+-]?\d+(\.\d*)?)h)?\s*
    ((?P<minutes>[+-]?\d+(\.\d*)?)m)?\s*
    ((?P<seconds>[+-]?\d+(\.\d*)?)s)?\s*
    $
    """, re.VERBOSE)

def interval4(patt):
    p = {}
    match = re_timestr.match(patt)
    if not match:
        raise ValueError("invalid pattern : %s" % (patt))
    for k, v in match.groupdict("0").iteritems():
        p[k] = float(v)  # cast string to float
    return datetime.timedelta(**p)
Example use cases:
>>> print interval4("1w 2d 3h4m") # basic use
9 days, 3:04:00
>>> print interval4("1w") - interval4("2d 3h 4m") # timedelta arithmetic
4 days, 20:56:00
>>> print interval4("0.3w -2.d +1.01h") # +ve and -ve floats
3:24:36
>>> print interval4("0.3x") # reject invalid input
Traceback (most recent call last):
File "date.py", line 19, in interval4
raise ValueError("invalid pattern : %s" % (patt))
ValueError: invalid pattern : 0.3x
>>> print interval4("1h 2w") # order matters
Traceback (most recent call last):
File "date.py", line 19, in interval4
raise ValueError("invalid pattern : %s" % (patt))
ValueError: invalid pattern : 1h 2w
Yes, there is. Use time.strptime instead:
Parse a string representing a time according to a format. The return value is a struct_time as returned by gmtime() or localtime().
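For example, strptime matches literal characters between directives, so a fixed-format fragment can be parsed directly (though, unlike the regex versions above, it cannot handle free-form input with optional fields):

import time

st = time.strptime("14h56m", "%Hh%Mm")
print("%d %d" % (st.tm_hour, st.tm_min))  # 14 56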
