Can anyone write a program that changes sum(1 to n) to n*(n+1)/2 automatically? - recursion

With the recursive sum:
let rec sum a = if a == 0 then 0 else a + sum (a - 1)
If the compiler uses tail-call optimization, it may introduce a variable for "sum" and turn the recursion into iteration (when I use "ocamlc -dlambda", the recursion is still there; when I use "ocamlc -dinstr" I get the instruction listing, which I can't read yet).
But the book "Design Concepts of Programming Languages", page 287, says it can change the function to this (the key line): n*(n+1)/2
"You should convince yourself that the least fixed point of this
function is the computation csum that returns a summation procedure that,returns n*(n+1)/2 if its argument is a nonnegative integer in"
I can't understand this. The program is not Gauss! I think it can't change the "rec sum" into n*(n+1)/2 automatically; only a human can do that, right?
So what does the book mean here? Does anyone know? Thanks!

I believe your book is merely making a small point about equivalence of pure functions. Nevertheless, optimising away a loop that only contains affine operations is relatively easy.
Equivalence of pure functions
I haven't read that book, but from the paragraph you quote, I think the book merely makes a point about pure functions. Since sum is a pure function, i.e. a function without side effects, in a sense,
let rec sum n =
  if n = 0 then 0
  else n + sum (n - 1)
is equivalent to
let sum n =
  n * (n + 1) / 2
But of course "equivalent" here ignores the time and space complexity, and unless the compiler has some sort of hardcoding for common functions to optimise, I'd be extremely surprised if it optimised sum like that.
Also note that the two above functions are only equivalent so far as they are only called on a nonnegative argument. The recursive version will loop infinitely (and provoke a stack overflow) if n is negative; the direct formula version will always return a result, although that result will be nonsensical if n is negative.
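A quick empirical check of that equivalence on nonnegative arguments (a throwaway sketch, transliterated into Python just so it is easy to run; sum_rec and gauss are purely illustrative names):
def sum_rec(n):
    # direct transcription of the recursive definition
    return 0 if n == 0 else n + sum_rec(n - 1)

def gauss(n):
    # closed-form version
    return n * (n + 1) // 2

assert all(sum_rec(n) == gauss(n) for n in range(500))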
Optimising loops that only contain affine operations
Nevertheless, writing a compiler that would perform such optimisations is not complete science-fiction. At the end of this answer you will find links to two blog posts which you might be interested in. In this answer I will summarise how the method described in those posts can be applied to your problem.
First let's rewrite function sum as a loop in pseudo-code:
function sum(n):
    s := 0
    i := 1
    repeat n:
        s += i
        i += 1
    return s
This kind of rewriting is similar to what happens when sum is transformed into a tail-recursive function.
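For comparison, the accumulator-passing (tail-recursive) shape would look roughly like this (sketched in Python for readability; Python itself does not perform tail-call elimination, so this is only to show the shape a compiler could turn into a loop):
def sum_tail(n, acc=0):
    # the accumulator plays the role of the running total s in the loop above
    return acc if n == 0 else sum_tail(n - 1, acc + n)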
Now if you consider the vector v = [s, i, 1], then the affine operations s += i and i += 1 can be described as multiplying v by a matrix:
s += i
[[ 1, 0, 0 ],   # matrix Msi
 [ 1, 1, 0 ],
 [ 0, 0, 1 ]]
i += 1
[[ 1, 0, 0 ],   # matrix Mi1
 [ 0, 1, 0 ],
 [ 0, 1, 1 ]]
s += i, i += 1
[[ 1, 0, 0 ],   # M = Msi * Mi1
 [ 1, 1, 0 ],
 [ 0, 1, 1 ]]
This affine operation is wrapped in a "repeat n" loop. So we have to multiply v by this matrix M, n times. But matrix multiplication is associative; so instead of doing n multiplications by matrix M, we can raise matrix M to its nth power, and then multiply v by the resulting matrix M**n.
As it turns out:
[[ 1, 0, 0 ],                  [[ 1,           0, 0 ],
 [ 1, 1, 0 ],   to the nth  =   [ n,           1, 0 ],
 [ 0, 1, 1 ]]                   [ n*(n - 1)/2, n, 1 ]]
which represents the affine operation:
s = s + n * i + n * (n - 1) / 2
i = i + n
Starting from s, i = 0, 1, this gives us s = n * (n+1) / 2 as expected.
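To make the optimisation concrete, here is a small Python sketch of the technique (not compiler output; the function names are mine and purely illustrative). It computes sum(1..n) with O(log n) matrix multiplications by raising M to the nth power and applying it to v = [s, i, 1]:
def mat_mul(a, b):
    # product of two 3x3 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(m, n):
    # square-and-multiply: O(log n) matrix products
    result = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def sum_by_matrix(n):
    # v = [s, i, 1] starts at [0, 1, 1]; one loop body corresponds to v -> v * M
    M = [[1, 0, 0],
         [1, 1, 0],
         [0, 1, 1]]
    Mn = mat_pow(M, n)
    v = [0, 1, 1]
    return sum(v[k] * Mn[k][0] for k in range(3))  # first component of v * M**n

print(sum_by_matrix(10))  # 55, i.e. 10 * 11 / 2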
More reading:
Using the Quick Raise of Matrices to a Power to Write a Very Fast Interpreter of a Simple Programming Language;
Automatic Algorithms Optimization via Fast Matrix Exponentiation.

How to find the average number of trials before criteria are met, with differing probabilities per outcome?

I've spent a few days trying to figure this out and looking up tutorials, but everything I've found so far seems close to what I need yet doesn't give the results I need.
I have a device that produces a single letter, A-F. For simplicity's sake, you can think of it like a die with letters. It will always produce one and only one letter each time it is used. However it has one major difference: each letter can have a differing known probability of being picked:
A: 25%
B: 5%
C: 20%
D: 15%
E: 20%
F: 15%
These probabilities remain constant throughout all attempts.
Additionally, I have a specific combination I must accrue before I am "successful":
As needed: 1
Bs needed: 3
Cs needed: 0
Ds needed: 1
Es needed: 2
Fs needed: 3
I need to find the average number of letter picks (i.e. rolls/trials/attempts) that have to happen for this combination of letters to be accrued. It's completely fine for any individual outcome to have more than the required number of letters, but success is only counted once each letter has been chosen at least its minimum number of times.
I've looked at plenty of tutorials on the multinomial probability distribution and similar things, but I haven't found anything that explains how to find the average number of trials for a scenario like this. Please explain answers clearly, as I'm not a whiz with statistics.
In addition to Severin's answer, which looks logically sound to me but might be costly to evaluate (an infinite sum involving factorials):
Let me provide some intuition that should give a good approximation.
Consider each category one at a time. Referring to this math.stackexchange question/answer, the expected number of tosses needed to get the required k successes in category i can be calculated as k(i)/P(i):
Given,
p(A): 25% ; Expected number of tosses to get 1 A = 1/ 0.25 = 4
p(B): 5% ; Expected number of tosses to get 3 B's = 3/ 0.05 = 60
p(C): 20% ; Expected number of tosses to get 0 C = 0/ 0.20 = 0
p(D): 15% ; Expected number of tosses to get 1 D = 1/ 0.15 = 6.67 ~ 7
p(E): 20% ; Expected number of tosses to get 2 E's = 2/ 0.20 = 10
p(F): 15% ; Expected number of tosses to get 3 F's = 3/ 0.15 = 20
This gives you an idea that getting 3 B's is your bottleneck; you can expect on average about 60 tosses for your scenario to play out.
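To reproduce that per-category calculation quickly (a trivial Python sketch; the letter keys are just labels):
needed = {'A': 1, 'B': 3, 'C': 0, 'D': 1, 'E': 2, 'F': 3}
probs  = {'A': 0.25, 'B': 0.05, 'C': 0.20, 'D': 0.15, 'E': 0.20, 'F': 0.15}

expected_tosses = {k: needed[k] / probs[k] for k in needed}
print(expected_tosses)                # A: 4, B: 60, C: 0, D: ~6.67, E: 10, F: 20
print(max(expected_tosses.values()))  # 60.0 -> the B requirement dominates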
Well, the minimum number of throws is 10. The average would be the infinite sum
A=10•P(done in 10)+11•P(done in 11)+12•P(done in 12) + ...
For P(done in 10) we could use multinomial
P(10)=Pm(1,3,0,1,2,3|probs), where probs=[.25, .05, .20, .15, .20, .15]
For P(11) you have one more throw which you could distribute like this
P(11)=Pm(2,3,0,1,2,3|probs)+Pm(1,4,0,1,2,3|probs)+Pm(1,3,0,2,2,3|probs)+
Pm(1,3,0,1,3,3|probs)+Pm(1,3,0,1,2,4|probs)
For P(12) you have to distribute 2 more throws. Note that there are combinations of throws which are impossible to get, like Pm(2,3,0,2,2,3|probs), because you would have had to stop earlier.
And so on and so forth
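For reference, the first term can be computed directly; here is a small Python sketch of the multinomial probability Pm (the names are illustrative):
from math import factorial, prod

probs  = [0.25, 0.05, 0.20, 0.15, 0.20, 0.15]
counts = [1, 3, 0, 1, 2, 3]           # the required combination

def pm(counts, probs):
    # multinomial probability of seeing exactly these counts in sum(counts) throws
    n = sum(counts)
    coefficient = factorial(n) // prod(factorial(c) for c in counts)
    return coefficient * prod(p ** c for c, p in zip(counts, probs))

print(pm(counts, probs))   # P(done in exactly 10 throws)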
Your process can be described as a Markov chain with a finite number of states, and an absorbing state.
The number of steps before reaching the absorbing state is called the hitting time. The expected hitting time can be calculated easily from the transition matrix of the Markov chain.
Enumerate all possible states (a, b, c, d, e, f). Consider only a finite number of states, because "b >= 3" is effectively the same as "b = 3", etc. The total number of states is (1+1)*(3+1)*(0+1)*(1+1)*(2+1)*(3+1) = 192.
Make sure that in your enumeration, starting state (0, 0, 0, 0, 0, 0) comes first, with index 0, and absorbing state (1, 3, 0, 1, 2, 3) comes last.
Build the transition matrix P. It's a square matrix with one row and column per state. Entry P[i, j] in the matrix gives the probability of going from state i to state j when rolling a die. There should be at most 6 non-zero entries per row.
For example, if i is the index of state (1, 0, 0, 1, 2, 2) and j the index of state (1, 1, 0, 1, 2, 2), then P[i, j] = probability of rolling face B = 0.05. Another example: if i is the index of state (1,3,0,0,0,0), then P[i,i] = probability of rolling A, B or C = 0.25+0.05+0.2 = 0.5.
Call Q the square matrix obtained by removing the last row and last column of P.
Call I the identity matrix of the same dimensions as Q.
Compute matrix M = (I - Q)^-1, where ^-1 is matrix inversion.
In matrix M, the entry M[i, j] is the expected number of times that state j will be reached before the absorbing state, when starting from state i.
Since our experiment starts in state 0, we're particularly interested in row 0 of matrix M.
The sum of row 0 of matrix M is the expected total number of visits to non-absorbing states. That is exactly the answer we seek: the expected number of steps to reach the absorbing state.
To understand why this works, you should read a course on Markov chains! Perhaps this one: James Norris' course notes on Markov chains. The chapter about "hitting times" (which is the name for the number of steps before reaching target state) is chapter 1.3.
Below is an implementation in Python.
from itertools import product, accumulate
from operator import mul
from math import prod
import numpy as np

dice_weights = [0.25, 0.05, 0.2, 0.15, 0.2, 0.15]
targets = [1, 3, 0, 1, 2, 3]

def get_expected_n_trials(targets, dice_weights):
    states = list(product(*(range(n+1) for n in targets)))
    base = list(accumulate([n+1 for n in targets[:0:-1]], mul, initial=1))[::-1]
    lookup = dict(map(reversed, enumerate(states)))
    P = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        a, b, c, d, e, f = s
        for f, p in enumerate(dice_weights):
            # j = index of state reached from state i when rolling face f
            j = i + base[f] * (s[f] < targets[f])
            j1 = lookup[s[:f] + (min(s[f]+1, targets[f]),) + s[f+1:]]
            if (j != j1):
                print(i, s, f, ' --> ', j, j1)
            assert(j == j1)
            P[i, j] += p
    Q = P[:-1, :-1]
    I = np.identity(len(states)-1)
    M = np.linalg.inv(I - Q)
    return M[0, :].sum()

print(get_expected_n_trials(targets, dice_weights))
# 61.28361802372382
Explanations of code:
First we build the list of states using Cartesian product itertools.product
For a given state i and die face f, we need to calculate j = state reached from i when adding f. I have two ways of calculating that, either as j = i + base[f] * (s[f] < targets[f]) or as j = lookup[s[:f] + (min(s[f]+1, targets[f]),) + s[f+1:]]. Because I'm paranoid, I calculated it both ways and checked that the two ways gave the same result. But you only need one way. You can remove lines j1 = ... to assert(j == j1) if you want.
Matrix P begins filled with zeroes, and we fill up to six cells per row with P[i, j] += p where p is probability of rolling face f.
Then we compute matrices Q and M as I indicated above.
We return the sum of all the cells on the first row of M.
To help you better understand what is going on, I encourage you to examine the values of all variables. For instance you could replace return M[0, :].sum() with return states, base, lookup, P, Q, I, M and then write states, base, lookup, P, Q, I, M = get_expected_n_trials(targets, dice_weights) in the python interactive shell, so that you can look at the variables individually.
A Monte-Carlo simulation:
Actually roll the die until we hit the requirements;
Count how many rolls we did;
Repeat experiment 1000 times to get the empirical average value.
Implementation in Python:
from collections import Counter
from random import choices
from itertools import accumulate
from statistics import mean, stdev

dice_weights = [0.25, 0.05, 0.2, 0.15, 0.2, 0.15]
targets = [1, 3, 0, 1, 2, 3]

def avg_n_trials(targets, dice_weights, n_experiments=1000):
    dice_faces = range(len(targets))
    target_state = Counter(dict(enumerate(targets)))
    cum_weights = list(accumulate(dice_weights))
    results = []
    for _ in range(n_experiments):
        state = Counter()
        while not state >= target_state:
            f = choices(dice_faces, cum_weights=cum_weights)[0]
            state[f] += 1
        results.append(state.total())  # python < 3.10: sum(state.values())
    m = mean(results)
    s = stdev(results, xbar=m)
    return m, s

m, s = avg_n_trials(targets, dice_weights, n_experiments=10000)
print(m)
# 61.4044

Concatenation of the binary representations of the first n positive integers in O(log n) time complexity

I came across this question in a coding competition. Given a number n, concatenate the binary representations of the first n positive integers and return the decimal value of the resulting number. Since the answer can be large, return the answer modulo 10^9+7.
N can be as large as 10^9.
E.g. n=4. Number formed = 11011100 (1=1, 10=2, 11=3, 100=4). Decimal value of 11011100 = 220.
I found a Stack Overflow answer to this question, but the problem is that it only contains an O(n) solution.
Link:- concatenate binary of first N integers and return decimal value
Since n can be up to 10^9, we need to come up with a solution that is better than O(n).
Here's some Python code that provides a fast solution; it uses the same ideas as in Abhinav Mathur's post. It requires Python >= 3.8, but it doesn't use anything particularly fancy from Python, and could easily be translated into another language. You'd need to write algorithms for modular exponentiation and modular inverse if they're not already available in the target language.
First, for testing purposes, let's define the slow and obvious version:
# Modulus that results are reduced by.
M = 10 ** 9 + 7

def slow_binary_concat(n):
    """
    Concatenate binary representations of 1 through n (inclusive).
    Reinterpret the resulting binary string as an integer.
    """
    concatenation = "".join(format(k, "b") for k in range(1, n + 1))
    return int(concatenation, 2) % M
Checking that we get the expected result:
>>> slow_binary_concat(4)
220
>>> slow_binary_concat(10)
462911642
Now we'll write a faster version. First, we split the range [1, n) into subintervals such that within each subinterval, all numbers have the same length in binary. For example, the range [1, 10) would be split into four subintervals: [1, 2), [2, 4), [4, 8) and [8, 10). Here's a function to do that splitting:
def split_by_bit_length(n):
    """
    Split the numbers in [1, n) by bit-length.

    Produces triples (a, b, 2**k). Each triple represents a subinterval
    [a, b) of [1, n), with a < b, all of whose elements have bit-length k.
    """
    a = 1
    while n > a:
        b = 2 * a
        yield (a, min(n, b), b)
        a = b
Example output:
>>> list(split_by_bit_length(10))
[(1, 2, 2), (2, 4, 4), (4, 8, 8), (8, 10, 16)]
Now for each subinterval, the value of the concatenation of all numbers in that subinterval is represented by a fairly simple mathematical sum, which can be computed in exact form. Here's a function to compute that sum modulo M:
def subinterval_concat(a, b, l):
    """
    Concatenation of values in [a, b), all of which have the same bit-length k.
    l is 2**k.

    Equivalently, sum(i * l**(b - 1 - i) for i in range(a, b)), modulo M.
    """
    n = b - a
    inv = pow(l - 1, -1, M)
    q = (pow(l, n, M) - 1) * inv
    return (a * q + (q - n) * inv) % M
I won't go into the evaluation of the sum here: it's a bit off-topic for this site, and it's hard to express without a good way to render formulas. If you want the details, that's a topic for https://math.stackexchange.com, or a page of fairly simple algebra.
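For the curious, here is one way the closed form can be derived (a sketch of the algebra in the same pseudo-notation as the docstrings, with n = b - a and q = (l**n - 1) / (l - 1); the actual code does everything modulo M, using the modular inverse of l - 1):
sum(i * l**(b - 1 - i) for i in range(a, b))
  = sum((a + (n - 1 - j)) * l**j for j in range(n))      # substitute j = b - 1 - i
  = a * q + sum((n - 1 - j) * l**j for j in range(n))
  = a * q + sum((l**j - 1) / (l - 1) for j in range(n))  # each term is 1 + l + ... + l**(j-1); across all j, l**m is counted n - 1 - m times
  = a * q + (q - n) / (l - 1)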
Finally, we want to put all the intervals together. Here's a function to do that.
def fast_binary_concat(n):
    """
    Fast version of slow_binary_concat.
    """
    acc = 0
    for a, b, l in split_by_bit_length(n + 1):
        acc = (acc * pow(l, b - a, M) + subinterval_concat(a, b, l)) % M
    return acc
A comparison with the slow version shows that we get the same results:
>>> fast_binary_concat(4)
220
>>> fast_binary_concat(10)
462911642
But the fast version can easily be evaluated for much larger inputs, where using the slow version would be infeasible:
>>> fast_binary_concat(10**9)
827129560
>>> fast_binary_concat(10**18)
945204784
You just have to note a simple pattern. Taking up your example for n=4, let's gradually build the solution starting from n=1.
1 -> 1 #1
2 -> 2^2(1) + 2 #6
3 -> 2^2[2^2(1)+2] + 3 #27
4 -> 2^3{2^2[2^2(1)+2]+3} + 4 #220
If you expand the coefficients of each term for n=4, you'll get the coefficients as:
1 -> (2^3)*(2^2)*(2^2)
2 -> (2^3)*(2^2)
3 -> (2^3)
4 -> (2^0)
Let N be the total number of bits in the string representation of our required number, and let D(x) be the number of bits in x. The coefficients can then be written as
1 -> 2^(N-D(1))
2 -> 2^(N-D(1)-D(2))
3 -> 2^(N-D(1)-D(2)-D(3))
... and so on
Since the value of D(x) is the same for all x in the range [2^t, 2^(t+1)-1] for a given t, you can break the problem into such ranges and solve each range using mathematics (not iteration). Since the number of such ranges is about log2(N), this should work within the given time limit.
As an example, the various ranges become:
1. 1 (D(x) = 1)
2. 2-3 (D(x) = 2)
3. 4-7 (D(x) = 3)
4. 8-15 (D(x) = 4)
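As a quick sanity check of the coefficient pattern described above (a brute-force O(n) sketch, not the O(log n) solution):
n = 4
lengths = [x.bit_length() for x in range(1, n + 1)]   # D(x) for each x
N = sum(lengths)                                      # total number of bits
value = sum(x * 2 ** (N - sum(lengths[:x])) for x in range(1, n + 1))
print(value)   # 220, matching the worked example above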

Is there a function f(n) that returns the n-th combination in an ordered list of combinations without repetition?

Combinations without repetitions look like this, when the number of elements to choose from (n) is 5 and elements chosen (r) is 3:
0 1 2
0 1 3
0 1 4
0 2 3
0 2 4
0 3 4
1 2 3
1 2 4
1 3 4
2 3 4
As n and r grow, the number of combinations gets large pretty quickly. For (n, r) = (200, 4) the number of combinations is 64684950.
It is easy to iterate the list with r nested for-loops, where the initial iterating value of each for loop is greater than the current iterating value of the for loop in which it is nested, as in this dotnetfiddle example:
https://dotnetfiddle.net/wHWK5o
What I would like is a function that calculates only one combination based on its index. Something like this:
tuple combination(i, n, r) {
    return [combination with index i, when the number of elements to choose from is n and elements chosen is r]
}
Does anyone know if this is doable?
You would first need to impose some sort of ordering on the set of all combinations available for a given n and r, such that a linear index makes sense. I suggest we agree to keep our combinations in increasing order (or, at least, the indices of the individual elements), as in your example. How then can we go from a linear index to a combination?
Let us first build some intuition for the problem. Suppose we have n = 5 (e.g. the set {0, 1, 2, 3, 4}) and r = 3. How many unique combinations are there in this case? The answer is of course 5-choose-3, which evaluates to 10. Since we will sort our combinations in increasing order, consider for a minute how many combinations remain once we have exhausted all those starting with 0. This must be 4-choose-3, or 4 in total. In such a case, if we are looking for the combination at index 7 initially, this implies we must subtract 10 - 4 = 6 and search for the combination at index 1 in the set {1, 2, 3, 4}. This process continues until we find a new index that is smaller than this offset.
Once this process concludes, we know the first digit. Then we only need to determine the remaining r - 1 digits! The algorithm thus takes shape as follows (in Python, but this should not be too difficult to translate),
from math import factorial

def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

def combination_at_idx(idx, elems, r):
    if len(elems) == r:
        # We are looking for r elements in a list of size r - thus, we need
        # each element.
        return elems
    if len(elems) == 0 or len(elems) < r:
        return []

    combinations = choose(len(elems), r)   # total number of combinations
    remains = choose(len(elems) - 1, r)    # combinations after selection

    offset = combinations - remains

    if idx >= offset:   # combination does not start with first element
        return combination_at_idx(idx - offset, elems[1:], r)

    # We now know the first element of the combination, but *not* yet the next
    # r - 1 elements. These need to be computed as well, again recursively.
    return [elems[0]] + combination_at_idx(idx, elems[1:], r - 1)
Test-driving this with your initial input,
N = 5
R = 3

for idx in range(choose(N, R)):
    print(idx, combination_at_idx(idx, list(range(N)), R))
I find,
0 [0, 1, 2]
1 [0, 1, 3]
2 [0, 1, 4]
3 [0, 2, 3]
4 [0, 2, 4]
5 [0, 3, 4]
6 [1, 2, 3]
7 [1, 2, 4]
8 [1, 3, 4]
9 [2, 3, 4]
Where the linear index is zero-based.
Start with the first element of the result. The value of that element depends on the number of combinations you can get with smaller elements. For each such smaller first element, the number of combinations with first element k is n − k − 1 choose r − 1, with potentially some off-by-one corrections. So you would sum over a bunch of binomial coefficients. Wolfram Alpha can help you compute such a sum, but the result still has a binomial coefficient in it. Solving for the largest k such that the sum doesn't exceed your given index i is a computation you can't do with something as simple as e.g. a square root. You need a loop to test possible values, e.g. like this:
def first_naive(i, n, r):
    """Find first element and index of first combination with that first element.

    Returns a tuple of value and index.

    Example: first_naive(8, 5, 3) returns (1, 6) because the combination with
    index 8 is [1, 3, 4] so it starts with 1, and because the first combination
    that starts with 1 is [1, 2, 3] which has index 6.
    """
    s1 = 0
    for k in range(n):
        s2 = s1 + choose(n - k - 1, r - 1)
        if i < s2:
            return k, s1
        s1 = s2
You can reduce the O(n) loop iterations to O(log n) steps using bisection, which is particularly relevant for large n. In that case I find it easier to think about numbering items from the end of your list. In the case of n = 5 and r = 3 you get choose(2, 2)=1 combinations starting with 2, choose(3,2)=3 combinations starting with 1 and choose(4,2)=6 combinations starting with 0. So in the general choose(n,r) binomial coefficient you increase the n with each step, and keep the r. Taking into account that sum(choose(k,r) for k in range(r,n+1)) can be simplified to choose(n+1,r+1), you can eventually come up with bisection conditions like the following:
def first_bisect(i, n, r):
    nCr = choose(n, r)
    k1 = r - 1
    s1 = nCr
    k2 = n
    s2 = 0
    while k2 - k1 > 1:
        k3 = (k1 + k2) // 2
        s3 = nCr - choose(k3, r)
        if s3 <= i:
            k2, s2 = k3, s3
        else:
            k1, s1 = k3, s3
    return n - k2, s2
Once you know the first element to be k, you also know the index of the first combination with that same first element (also returned from my function above). You can use the difference between that first index and your actual index as input to a recursive call. The recursive call would be for r − 1 elements chosen from n − k − 1. And you'd add k + 1 to each element from the recursive call, since the top level returns values starting at 0 while the next element has to be greater than k in order to avoid duplication.
def combination(i, n, r):
    """Compute combination with a given index.

    Equivalent to list(itertools.combinations(range(n), r))[i].

    Each combination is represented as a tuple of ascending elements, and
    combinations are ordered lexicographically.

    Args:
        i: zero-based index of the combination
        n: number of possible values, will be taken from range(n)
        r: number of elements in result list
    """
    if r == 0:
        return []
    k, ik = first_bisect(i, n, r)
    return tuple([k] + [j + k + 1 for j in combination(i - ik, n - k - 1, r - 1)])
I've got a complete working example, including an implementation of choose, more detailed doc strings and tests for some basic assumptions.
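If you want to try the functions above without writing choose yourself, one option (assuming Python >= 3.8) is to use math.comb, and then spot-check the result against itertools (a small sketch; N and R here are arbitrary test values):
from itertools import combinations
from math import comb as choose   # math.comb(n, k) returns 0 when k > n, which first_naive relies on

N, R = 7, 3
expected = list(combinations(range(N), R))
assert all(combination(i, N, R) == expected[i] for i in range(choose(N, R)))
print(combination(10, N, R))   # (0, 3, 5), the same as expected[10]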

How to improve the performance of this recursive code in Mathematica

Clear[r, re, p, pmax, delta, imagesize]
ClearSystemCache[]
re[0, r_] := Sqrt[8/Pi]*((1 - r)/r)^(1/4)*1;
re[1, r_] := Sqrt[8/Pi]*((1 - r)/r)^(1/4)*-1*2*(1 - 2*r);
re[p_, r_] := re[p, r] = Sqrt[8/Pi]*((1 - r)/r)^(1/4)*(-1)^p*(re[1, r]*re[p - 1, r] - re[p - 2, r]);
imagesize = 32;
pmax = 10;
delta = 2/imagesize;
Table[r = Sqrt[x^2 + y^2]; re[pmax, r], {x, -1 + delta/2, 1 - delta/2, delta}, {y, 1 - delta/2, -1 + delta/2, -delta}];
This code calculates the distance r from each pixel to the point (0, 0), then evaluates the radial polynomial re[p, r] defined above; for accuracy, I use the recursion version.
When imagesize and pmax increase, the time becomes unacceptable. So I would like to ask whether we can use Compile or other methods to speed this up; for example, for imagesize 256 and pmax 120, the time would be about 10 seconds.
In my code, I also use memoization to store values computed during the evaluation, which I will reuse later.

Mathematica: How do I plot a vector field with 1 variable?

I cannot figure out how to plot a vector field with only one variable. Maybe Mathematica doesn't support this. For example:
r(t) = cos t j + sin t i
same as
<cos t, sin t>
This doesn't work:
VectorPlot[{Cos[t], Sin[t]}, {t, 0, 2 Pi}]
As a bonus, how do I take the derivative of a vector?
An easy workaround would be to use a 2D-VectorPlot with a dummy variable like this:
VectorPlot[
  {Cos[t], Sin[t]}, {t, 0, 2 \[Pi]}, {s, -1/2, 1/2},
  AspectRatio -> Automatic,
  VectorPoints -> {15, 3},
  FrameLabel -> {"t", None}
]
Or what probably makes more sense is to discretize the curve that you get when you follow the vector while increasing t. This is e.g. useful for Feynman-style Action-integrals in quantum mechanics.
Module[
  {t, dt = 0.1, vectors, startpoints, startpoint, vector, spv, spvs},
  vectors = Table[dt {Cos[t], Sin[t]}, {t, 0, 2 \[Pi], dt}];
  startpoints = Accumulate[vectors];
  spvs = Transpose[{startpoints, vectors}];
  Graphics[Table[Arrow[{spv[[1]], spv[[1]] + spv[[2]]}], {spv, spvs}]]
]
