Continued fractions and Pell's equation - numerical issues - math

Mathematical background
Continued fractions are a way to represent numbers (rational or not), with a basic recursion formula to calculate them. Given a number r, we define r[0]=r and have:
for n in range(N):
    a[n] = floor(r[n])
    if r[n] == a[n]: break
    r[n+1] = 1 / (r[n] - a[n])
where a is the final representation. We can also define a series of convergents by
h[-2,-1] = [0, 1]
k[-2, -1] = [1, 0]
h[n] = a[n]*h[n-1]+h[n-2]
k[n] = a[n]*k[n-1]+k[n-2]
where h[n]/k[n] converge to r.
Pell's equation is a problem of the form x^2 - D*y^2 = 1 where all numbers are integers and, in our case, D is not a perfect square. The solution for a given D that minimizes x (the fundamental solution) is given by continued fractions: it is x = h[n] and y = k[n] for the lowest n in the continued fraction expansion of sqrt(D) for which the equation is satisfied.
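For reference, when r = sqrt(D) these recurrences can be run entirely in integer arithmetic, because every remainder has the form (sqrt(D) + P)/Q with integers P and Q, so nothing ever has to be rounded. A minimal sketch (my own addition, not part of the original post; D must not be a perfect square):
from math import isqrt

def pell_fundamental(D):
    # Integer-only continued fraction of sqrt(D):
    #   a[n] = floor((a0 + P[n]) / Q[n]),  P[n+1] = a[n]*Q[n] - P[n],
    #   Q[n+1] = (D - P[n+1]**2) // Q[n],  with P[0] = 0, Q[0] = 1.
    # The convergents h/k follow the same recurrence as above; stop at the
    # first one that satisfies Pell's equation h^2 - D*k^2 = 1.
    a0 = isqrt(D)
    P, Q, a = 0, 1, a0
    h, h_prev = a0, 1   # h[0], h[-1]
    k, k_prev = 1, 0    # k[0], k[-1]
    while h*h - D*k*k != 1:
        P = a*Q - P
        Q = (D - P*P) // Q
        a = (a0 + P) // Q
        h, h_prev = a*h + h_prev, h
        k, k_prev = a*k + k_prev, k
    return h, k

print(pell_fundamental(61))   # (1766319049, 226153980)
Because only integers are involved, the expansion cannot drift, which is relevant to the problem below.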
Problem
I am failing to get this simple algorithm to work for D=61. I first noticed that it did not solve Pell's equation within 100 coefficients, so I compared it against Wolfram Alpha's convergents and continued fraction representation and noticed that the 20th element differs - the representation there is 3 compared to the 4 that I get, yielding different convergents - h[20]=335159612 on Wolfram compared to 425680601 for me.
I tested the code below in two languages (though to be fair, Python is C under the hood, I guess) on two systems and got the same result - a difference at loop 20. I'll note that the convergents are still accurate and converge! Why am I getting different results compared to Wolfram Alpha, and is it possible to fix it?
For testing, here's a Python program to solve Pell's equation for D=61, printing the first 20 convergents and the continued fraction representation cf (and some extra unneeded fluff):
from math import floor, sqrt # Can use mpmath here as well.

def continued_fraction(D, count=100, thresh=1E-12, verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = sqrt(D)
    initial_count = count
    x = (1+thresh+start)*start
    y = start
    while abs(x/y - start) > thresh and count:
        i = int(floor(r))
        cf.append(i)
        f = r - i
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose is True or verbose == initial_count-count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=20, verbose=True, thresh=-1) # We don't want to stop on account of thresh in this example
A C program doing the same:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

int main() {
    long D = 61;
    double start = sqrt(D);
    long h[] = {0, 1};
    long k[] = {1, 0};
    int count = 20;
    float thresh = 1E-12;
    double r = start;
    long x = (1+thresh+start)*start;
    long y = start;
    while (fabs(x/(double)y - start) > -1 && count) {
        long i = floor(r);
        double f = r - i;
        x = i * h[1] + h[0];
        y = i * k[1] + k[0];
        printf("%ld\u00B2-%ldx%ld\u00B2 = %ld\n", x, D, y, x*x-D*y*y);
        r = 1/f;
        --count;
        h[0] = h[1];
        h[1] = x;
        k[0] = k[1];
        k[1] = y;
    }
    return 0;
}

mpmath, Python's multiple-precision library, can be used. Just be careful that all the important numbers are in mp format.
In the code below, x, y and i are standard multi-precision integers. r and f are multi-precision real numbers. Note that the initial count is set higher than 20.
from mpmath import mp, mpf

mp.dps = 50 # precision in number of decimal digits

def continued_fraction(D, count=22, thresh=mpf(1E-12), verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = mp.sqrt(D)
    initial_count = count
    x = 0 # some dummy starting values, they will be overwritten early in the while loop
    y = 1
    while abs(x/y - start) > thresh and count > 0:
        i = int(mp.floor(r))
        cf.append(i)
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose or initial_count == count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        f = r - i
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=22, verbose=True, thresh=mpf(1e-100))
The output is similar to Wolfram's:
...
335159612²-61x42912791² = 3
1431159437²-61x183241189² = -12
1766319049²-61x226153980² = 1
[7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1]
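As for why plain double precision goes wrong at exactly that point: every step r -> 1/(r - a) amplifies whatever rounding error r already carries, and by the time the convergent denominators reach a few times 10^7 the initial ~10^-16 error in sqrt(61) has grown to order one, enough to flip a floor(). A small check (my addition, reusing the mpmath setup above) makes the drift visible:
from mpmath import mp
mp.dps = 50

# Track the double-precision remainder against a 50-digit reference while
# expanding sqrt(61).  The error grows every iteration; around loop 19-20
# (on a typical IEEE-754 setup, as on the asker's systems) the two floors
# disagree, giving a 4 where the exact expansion has a 3.
r_f = 61 ** 0.5       # ordinary float
r_m = mp.sqrt(61)     # high-precision reference
for n in range(21):
    a_f, a_m = int(r_f), int(mp.floor(r_m))
    print(n, a_f, a_m, float(abs(r_f - r_m)))
    r_f = 1 / (r_f - a_f)
    r_m = 1 / (r_m - a_m)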

Related

Double integration with a differentiation inside in R

I need to integrate the following function where there is a differentiation term inside. Unfortunately, that term is not easily differentiable.
Is it possible to do something like numerical integration to evaluate this in R?
You can assume 30,50,0.5,1,50,30 for l, tau, a, b, F and P respectively.
UPDATE: What I tried
InnerFunc4 <- function(t,x){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(x-t)}
InnerIntegral4 <- Vectorize(function(x) { integrate(InnerFunc4, 1, x, x = x)$value})
integrate(InnerIntegral4, 30, 80)$value
It shows the following error:
Error in integrate(InnerFunc4, 1, x, x = x) : non-finite function value
UPDATE2:
InnerFunc4 <- function(t,L){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(L-t)}
t_lower_bound = 0
t_upper_bound = 30
L_lower_bound = 30
L_upper_bound = 80
step_size = 0.5
integral = 0
t <- t_lower_bound + 0.5*step_size
while (t < t_upper_bound){
  L = L_lower_bound + 0.5*step_size
  while (L < L_upper_bound){
    volume = InnerFunc4(t,L)*step_size**2
    integral = integral + volume
    L = L + step_size
  }
  t = t + step_size
}
Since it seems that your problem is only the derivative, you can get rid of it by means of integration by parts (partial integration):
Edit
This solution is not applicable for a lower integration bound of 0.

Dependent Arrays in Constraints JuMP

I want to code this constraint.
d and a in the below code are the subsets of set S with the size of N. For example: (N=5, T=3, S=6), d=[1,2,2,3,1] (the elements of d are the first three digits of S and the size of d is N) and a=[6,4,5,6,4] (the elements of a are the three last digits of set S and the size of a is N).
In the constraint, s should run from d[j] to a[j].
It should be like s[j=1]=1:6, s[j=2]=2:4, s[j=3]=2:5, s[j=4]=3:6, s[j=5]=1:4.
I do not know how to deal with this set that depends on the other sets. Can you please help me to code my constraint correctly? The below code is not working correctly.
N = 5
T=3
S=6
Cap=15
Q=rand(1:5,N)
d=[1,2,2,3,1]
a=[6,4,5,6,4]
@variable(model, x[j=1:N, t=1:T, s=1:S], Bin)
@constraint(model, [j = 1:N, t = 1:T, s = d[j]:a[j]], sum(x[j,t,s] * Q[j] for j=1:N) <= Cap)
N, T, S = 5, 3, 6
Q = rand(1:5,N)
d = [1, 2, 2, 3, 1]
a = [6, 4, 5, 6, 4]
using JuMP
model = Model()
@variable(model, x[1:N, 1:T, 1:S], Bin)
@constraint(
    model,
    [t = 1:T, s = 1:S],
    sum(x[j, t, s] * Q[j] for j in 1:N if d[j] <= s < a[j]) <= 15,
)
p.s. There's no need to post multiple comments and questions:
Coding arrays in constraint JuMP
You should also consider posting on the Julia discourse instead: https://discourse.julialang.org/c/domain/opt/13. It's easier to have a conversation there.

Renewal Function for Weibull Distribution

The renewal function for the Weibull distribution, m(t), with t = 10 is given as below.
I want to find the value of m(t). I wrote the following R code to compute m(t):
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
  gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
  prev = gamma_k[n-j]
  last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
  for(i in 2:n){
    final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
  }
  return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
  for(k in 1:n){
    af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
  }
  return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Thus, A_k will behave roughly as 1/k!.
In the numerator of the m(t) terms there is t^(2k), so roughly speaking you're computing a sum with terms
100^k / k!
From Stirling's formula
k! ~ k^k, making the terms
(100/k)^k
so yes, they will start to decrease and converge to something, but only after the 100th term.
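As a quick numerical check of those magnitudes (my snippet, not part of the original answer):
from math import factorial

# For t = 10 the k-th term of the sum behaves like 100**k / k!.  The terms
# keep growing until k is roughly 100, and the largest one is astronomically
# big compared to the final value of the series.
terms = [100**k / factorial(k) for k in range(1, 200)]
biggest = max(terms)
print(f"largest term ~ {biggest:.2e} at k = {terms.index(biggest) + 1}")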
Anyway, here is the code; you could try to improve it, but it breaks at k ~ 70:
N <- 20
A <- rep(0, N)
# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
  ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
  A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)
t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
  r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
OK, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) from m(t); I believe they will be pretty much dominated by the 1/k! term. To do that I made another array, k!*A_k/Γ(2k+1), and it should be close to one.
Code
N <- 20
A <- rep(0.0, N)
psum <- function( pA, k ) {
  ps <- 0.0
  if (k >= 2) {
    jmax <- k - 1
    for(j in 1:jmax) {
      ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
    }
  }
  ps
}
# compute A_k/gamma(2k+1) terms
A[1] = gamma(3)
for(k in 2:N) {
  A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)
B <- rep(0.0, N)
for(k in 1:N) {
  B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
shows that
I got the same A_k values as you did.
B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) could be replaced by 1/k! to get a quick estimate of what we might get (with that replacement):
m(t) ~= -Sum(k=1..Infinity) (-1)^k (t^2)^k / k! = 1 - Sum(k=0..Infinity) (-t^2)^k / k!
This is actually a well-known sum and it is equal to exp() with a negative argument (well, you have to add the term for k=0):
m(t) ~= 1 - exp(-t^2)
Conclusions
The approximate value is positive. It will probably stay positive; after all, A_k/Γ(2k+1) is a bit different from 1/k!.
We're talking about 1 - exp(-100), which is 1 - 3.72*10^-44! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible.
Another approach is needed
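To see the cancellation concretely, here is a small illustration (mine, not from the answer above): evaluating exp(-100), which is essentially what the series for m(t) at t = 10 boils down to, directly from its power series in double precision:
from math import exp

# Sum the power series of exp(-100) in plain double precision.  The terms are
# built recursively (term *= -100/k) to avoid overflow; the largest ones are
# around 1e42, so with ~16 significant digits the alternating sum has no hope
# of resolving the true value of about 3.7e-44.
s, term = 0.0, 1.0
for k in range(400):
    s += term
    term *= -100.0 / (k + 1)
print(s)            # a large, meaningless number: pure rounding noise
print(exp(-100))    # 3.72e-44, what the series should have produced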
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
  if equal(k, 0)
    then m(0) = F(0)
    else m(k*delta_t) = F(k*delta_t)
           + m(k*delta_t)*f(0)*(delta_t / 2)
           + sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
           + m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
[make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
  (augcoefmatrix (eqs, vars),
   [submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Subsitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
  (map (rhs, eqs),
   augcoefmatrix (%%, vars),
   [submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
  [delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
                                        elseif j > i then 0
                                        elseif j = i then f(0)/2
                                        elseif j = 1 then f(delta_t*(i - 1))/2
                                        else f(delta_t*(i - j))), n + 1, n + 1),
   transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
  block ([f, F, numer: true], local (f, F),
    f: lambda ([x], pdf_weibull (x, shape, scale)),
    F: lambda ([x], cdf_weibull (x, shape, scale)),
    [genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
                                          elseif j > i then 0
                                          elseif j = i then f(0)/2
                                          elseif j = 1 then f(delta_t*(i - 1))/2
                                          else f(delta_t*(i - j))), n + 1, n + 1),
     transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
* Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);
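Since porting the final, numerical part to another language is mentioned above as straightforward, here is a rough Python sketch of the same discretization (my port, with scipy.stats.weibull_min standing in for Maxima's pdf_weibull/cdf_weibull):
import numpy as np
from scipy.stats import weibull_min

def renewal_discretized(shape, t, n, scale=1.0):
    # Rectangle-rule discretization of
    #   m(t) = F(t) + integrate(m(t - s)*f(s), s, 0, t)
    # on the grid t_i = i*dt, with half weights at the end points,
    # then solve the linear system (I - A) m = b.
    dt = t / n
    grid = dt * np.arange(n + 1)
    f = weibull_min.pdf(grid, c=shape, scale=scale)
    b = weibull_min.cdf(grid, c=shape, scale=scale)
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        A[i, i] = f[0] / 2        # half weight at s = 0
        A[i, 0] = f[i] / 2        # half weight at s = t_i
        for j in range(1, i):
            A[i, j] = f[i - j]    # full weight at interior points
    A *= dt
    return grid, np.linalg.solve(np.eye(n + 1) - A, b)

grid, m = renewal_discretized(shape=2, t=10, n=100)
print(m[-1])   # should land near the large-t estimate t/mu + sigma^2/(2*mu^2) - 1/2 ~ 10.9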

How to approach this type of problem in permutation and combination?

Altitudes
Alice and Bob took a journey to the mountains. They have been climbing
up and down for N days and came home extremely tired.
Alice only remembers that they started their journey at an altitude of
H1 meters and they finished their wandering at an altitude of H2
meters. Bob only remembers that every day they changed their altitude
by A, B, or C meters. If their altitude on the ith day was x,
then their altitude on day i + 1 can be x + A, x + B, or x + C.
Now, Bob wonders in how many ways they could complete their journey.
Two journeys are considered different if and only if there exists a day
when the altitude that Alice and Bob covered that day during the first
journey differs from the altitude Alice and Bob covered that day during
the second journey.
Bob asks Alice to tell him the number of ways to complete the journey.
Bob needs your help to solve this problem.
Input format
The first and only line contains 6 integers N, H1, H2, A, B, C that
represents the number of days Alice and Bob have been wandering,
altitude on which they started their journey, altitude on which they
finished their journey, and three possible altitude changes,
respectively.
Output format
Print the answer modulo 10**9 + 7.
Constraints
1 <= N <= 10**5
-10**9 <= H1, H2 <= 10**9
-10**9 <= A, B, C <= 10**9
Sample Input
2 0 0 1 0 -1
Sample Output
3
Explanation
There are only 3 possible journeys-- (0, 0), (1, -1), (-1, 1).
Note
This problem comes originally from a hackerearth competition, now closed. The explanation for the sample input and output has been corrected.
Here is my solution in Python 3.
The question can be simplified from its 6 input parameters to only 4 parameters. There is no need for the beginning and ending altitudes--the difference of the two is enough. Also, we can change the daily altitude changes A, B, and C and get the same answer if we make a corresponding change to the total altitude change. For example, if we add 1 to each of A, B, and C, we could add N to the altitude change: 1 additional meter each day over N days means N additional meters total. We can "normalize" our daily altitude changes by sorting them so A is the smallest, then subtract A from each of the altitude changes and subtract N * A from the total altitude change. This means we now need to add a bunch of 0's and two other values (let's call them D and E). D is not larger than E.
We now have an easier problem: take N values, each of which is 0, D, or E, so that they sum to a particular total (let's say H). This is the same as using up to N numbers equaling D or E, with the rest zeros.
We can use mathematics, in particular Bezout's identity, to see if this is possible. Some more mathematics can find all the ways of doing this. Once we know how many 0's, D's, and E's, we can use multinomial coefficients to find how many ways these values can be rearranged. Total all these up and we have the answer.
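For example, the sample input 2 0 0 1 0 -1 reduces like this: the net altitude change is 0 - 0 = 0; sorting the daily changes gives (-1, 0, 1), so A = -1; subtracting A from each change leaves (0, 1, 2), i.e. D = 1 and E = 2, and subtracting N * A = -2 from the net change gives H = 2. The reduced question is how many 2-tuples over {0, 1, 2} sum to 2, and there are exactly three: (0, 2), (2, 0) and (1, 1), matching the sample output.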
This code finds the total number of ways to complete the journey, and takes it modulo 10**9 + 7 only at the very end. This is possible since Python uses large integers. The largest result I found in my testing is for the input values 100000 0 100000 0 1 2 which results in a number with 47,710 digits before taking the modulus. This takes a little over 8 seconds on my machine.
This code is a little longer than necessary, since I made some of the routines more general than necessary for this problem. I did this so I can use them in other problems. I used many comments for clarity.
# Combinatorial routines -----------------------------------------------
def comb(n, k):
    """Compute the number of ways to choose k elements out of a pile of
    n, ignoring the order of the elements. This is also called
    combinations, or the binomial coefficient of n over k.
    """
    if k < 0 or k > n:
        return 0
    result = 1
    for i in range(min(k, n - k)):
        result = result * (n - i) // (i + 1)
    return result

def multcoeff(*args):
    """Return the multinomial coefficient
    (n1 + n2 + ...)! / n1! / n2! / ..."""
    if not args: # no parameters
        return 1
    # Find and store the index of the largest parameter so we can skip
    # it (for efficiency)
    skipndx = args.index(max(args))
    newargs = args[:skipndx] + args[skipndx + 1:]
    result = 1
    num = args[skipndx] + 1 # a factor in the numerator
    for n in newargs:
        for den in range(1, n + 1): # a factor in the denominator
            result = result * num // den
            num += 1
    return result

def new_multcoeff(prev_multcoeff, x, y, z, ag, bg):
    """Given a multinomial coefficient prev_multcoeff =
    multcoeff(x-bg, y+ag, z+(bg-ag)), calculate multcoeff(x, y, z)).
    NOTES:  1. This uses bg multiplications and bg divisions,
               faster than doing multcoeff from scratch.
    """
    result = prev_multcoeff
    for d in range(1, ag + 1):
        result *= y + d
    for d in range(1, bg - ag + 1):
        result *= z + d
    for d in range(bg):
        result //= x - d
    return result
# Number theory routines -----------------------------------------------
def bezout(a, b):
    """For integers a and b, find an integral solution to
    a*x + b*y = gcd(a, b).
    RETURNS:    (x, y, gcd)
    NOTES:  1. This routine uses the convergents of the continued
               fraction expansion of b / a, so it will be slightly
               faster if a <= b, i.e. the parameters are sorted.
            2. This routine ensures the gcd is nonnegative.
            3. If a and/or b is zero, the corresponding x or y
               will also be zero.
            4. This routine is named after Bezout's identity, which
               guarantees the existences of the solution x, y.
    """
    if not a:
        return (0, (b > 0) - (b < 0), abs(b)) # 2nd is sign(b)
    p1, p = 0, 1 # numerators of the two previous convergents
    q1, q = 1, 0 # denominators of the two previous convergents
    negate_y = True # flag if negate y=q (True) or x=p (False)
    quotient, remainder = divmod(b, a)
    while remainder:
        b, a = a, remainder
        p, p1 = p * quotient + p1, p
        q, q1 = q * quotient + q1, q
        negate_y = not negate_y
        quotient, remainder = divmod(b, a)
    if a < 0:
        p, q, a = -p, -q, -a # ensure the gcd is nonnegative
    return (p, -q, a) if negate_y else (-p, q, a)

def byzantine_bball(a, b, s):
    """For nonnegative integers a, b, s, return information about
    integer solutions x, y to a*x + b*y = s. This is
    equivalent to finding a multiset containing only a and b that
    sums to s. The name comes from getting a given basketball score
    given scores for shots and free throws in a hypothetical game of
    "byzantine basketball."
    RETURNS:    None if there is no solution, or an 8-tuple containing
                    x   the smallest possible nonnegative integer value of
                        x.
                    y   the value of y corresponding to the smallest
                        possible integral value of x. If this is negative,
                        there is no solution for nonnegative x, y.
                    g   the greatest common divisor (gcd) of a, b.
                    u   the found solution to a*u + b*v = g
                    v   " "
                    ag  a // g, or zero if g=0
                    bg  b // g, or zero if g=0
                    sg  s // g, or zero if g=0
    NOTES:  1. If a and b are not both zero and one solution x, y is
               returned, then all integer solutions are given by
               x + t * bg, y - t * ag for any integer t.
            2. This routine is slightly optimized for a <= b. In that
               case, the solution returned also has the smallest sum
               x + y among positive integer solutions.
    """
    # Handle edge cases of zero parameter(s).
    if 0 == a == b: # the only score possible from 0, 0 is 0
        return (0, 0, 0, 0, 0, 0, 0, 0) if s == 0 else None
    if a == 0:
        sb = s // b
        return (0, sb, b, 0, 1, 0, 1, sb) if s % b == 0 else None
    if b == 0:
        sa = s // a
        return (sa, 0, a, 1, 0, 1, 0, sa) if s % a == 0 else None
    # Find if the score is possible, ignoring the signs of x and y.
    u, v, g = bezout(a, b)
    if s % g:
        return None # only multiples of the gcd are possible scores
    # Find one way to get the score, ignoring the signs of x and y.
    ag, bg, sg = a // g, b // g, s // g # we now have ag*u + bg*v = 1
    x, y = sg * u, sg * v # we now have a*x + b*y = s
    # Find the solution where x is nonnegative and as small as possible.
    t = x // bg # Python rounds toward minus infinity--what we want
    x, y = x - t * bg, y + t * ag
    # Return the information
    return (x, y, g, u, v, ag, bg, sg)
# Routines for this puzzle ---------------------------------------------
def altitude_reduced(n, h, d, e):
    """Return the number of distinct n-tuples containing only the
    values 0, d, and e that sum to h. Assume that all these
    numbers are integers and that 0 <= d <= e.
    """
    # Handle some impossible special cases
    if n < 0 or h < 0:
        return 0
    # Handle some other simple cases with zero values
    if n == 0:
        return 0 if h else 1
    if 0 == d == e: # all step values are zero
        return 0 if h else 1
    if 0 == d or d == e: # e is the only non-zero step value
        # If possible, return # of tuples with proper # of e's, the rest 0's
        return 0 if h % e else comb(n, h // e)
    # Handle the main case 0 < d < e
    # --Try to get the solution with the fewest possible non-zero days:
    #   x d's and y e's and the rest zeros: all solutions are given by
    #   x + t * bg, y - t * ag
    solutions_info = byzantine_bball(d, e, h)
    if not solutions_info:
        return 0 # no way at all to get h from d, e
    x, y, _, _, _, ag, bg, _ = solutions_info
    # --Loop over all solutions with nonnegative x, y, small enough x + y
    result = 0
    while y >= 0 and x + y <= n: # at most n non-zero days
        # Find multcoeff(x, y, n - x - y), in a faster way
        if result == 0: # 1st time through loop: no prev coeff available
            amultcoeff = multcoeff(x, y, n - x - y)
        else: # use previous multinomial coefficient
            amultcoeff = new_multcoeff(amultcoeff, x, y, n - x - y, ag, bg)
        result += amultcoeff
        x, y = x + bg, y - ag # x+y increases by bg-ag >= 0
    return result

def altitudes(input_str=None):
    # Get the input
    if input_str is None:
        input_str = input('Numbers N H1 H2 A B C? ')
        # input_str = '100000 0 100000 0 1 2' # replace with prev line for input
    n, h1, h2, a, b, c = map(int, input_str.strip().split())
    # Reduce the number of parameters by normalizing the values
    h_diff = h2 - h1 # net altitude change
    a, b, c = sorted((a, b, c)) # a is now the smallest
    h, d, e = h_diff - n * a, b - a, c - a # reduce a to zero
    # Solve the reduced problem
    print(altitude_reduced(n, h, d, e) % (10**9 + 7))

if __name__ == '__main__':
    altitudes()
Here are some of my test routines for the main problem. These are suitable for pytest.
# Testing, some with pytest ---------------------------------------------------
import itertools # for testing
import collections # for testing

def brute(n, h, d, e):
    """Do alt_reduced with brute force."""
    return sum(1 for v in itertools.product({0, d, e}, repeat=n)
               if sum(v) == h)

def brute_count(n, d, e):
    """Count achieved heights with brute force."""
    if n < 0:
        return collections.Counter()
    return collections.Counter(
        sum(v) for v in itertools.product({0, d, e}, repeat=n)
    )

def test_impossible():
    assert altitude_reduced(0, 6, 1, 2) == 0
    assert altitude_reduced(-1, 6, 1, 2) == 0
    assert altitude_reduced(3, -1, 1, 2) == 0

def test_simple():
    assert altitude_reduced(1, 0, 0, 0) == 1
    assert altitude_reduced(1, 1, 0, 0) == 0
    assert altitude_reduced(1, -1, 0, 0) == 0
    assert altitude_reduced(1, 1, 0, 1) == 1
    assert altitude_reduced(1, 1, 1, 1) == 1
    assert altitude_reduced(1, 2, 0, 1) == 0
    assert altitude_reduced(1, 2, 1, 1) == 0
    assert altitude_reduced(2, 4, 0, 3) == 0
    assert altitude_reduced(2, 4, 3, 3) == 0
    assert altitude_reduced(2, 4, 0, 2) == 1
    assert altitude_reduced(2, 4, 2, 2) == 1
    assert altitude_reduced(3, 4, 0, 2) == 3
    assert altitude_reduced(3, 4, 2, 2) == 3
    assert altitude_reduced(4, 4, 0, 2) == 6
    assert altitude_reduced(4, 4, 2, 2) == 6
    assert altitude_reduced(2, 6, 0, 2) == 0
    assert altitude_reduced(2, 6, 2, 2) == 0

def test_main():
    N = 12
    maxcnt = 0
    for n in range(-1, N):
        for d in range(N): # must have 0 <= d
            for e in range(d, N): # must have d <= e
                counts = brute_count(n, d, e)
                for h, cnt in counts.items():
                    if cnt == 25653:
                        print(n, h, d, e, cnt)
                    maxcnt = max(maxcnt, cnt)
                    assert cnt == altitude_reduced(n, h, d, e)
    print(maxcnt) # got 25653 for N = 12, (n, h, d, e) = (11, 11, 1, 2) etc.

Super-ellipse Point Picking

https://en.wikipedia.org/wiki/Superellipse
I have read the SO questions on how to point-pick from a circle and an ellipse.
How would one uniformly select random points from the interior of a super-ellipse?
More generally, how would one uniformly select random points from the interior of the curve described by an arbitrary super-formula?
https://en.wikipedia.org/wiki/Superformula
The discarding method is not considered a solution, as it is mathematically unenlightening.
In order to sample the superellipse, let's assume without loss of generality that a = b = 1. The general case can be then obtained by rescaling the corresponding axis.
The points in the first quadrant (positive x-coordinate and positive y-coordinate) can be then parametrized as:
x = r * ( cos(t) )^(2/n)
y = r * ( sin(t) )^(2/n)
with 0 <= r <= 1 and 0 <= t <= pi/2:
Now, we need to sample in r, t so that the sampling transformed into x, y is uniform. To this end, let's calculate the Jacobian of this transform:
dx*dy = (2/n) * r * (sin(2*t)/2)^(2/n - 1) dr*dt
= (1/n) * d(r^2) * d(f(t))
Here, we see that as for the variable r, it is sufficient to sample uniformly the value of r^2 and then transform back with a square root. The dependency on t is a bit more complicated. However, with some effort, one gets
f(t) = -(n/2) * 2F1(1/n, (n-1)/n, 1 + 1/n, cos(t)^2) * cos(t)^(2/n)
where 2F1 is the hypergeometric function.
In order to obtain uniform sampling in x,y, we need now to sample uniformly the range of f(t) for t in [0, pi/2] and then find the t which corresponds to this sampled value, i.e., to solve for t the equation u = f(t) where u is a uniform random variable sampled from [f(0), f(pi/2)]. This is essentially the same method as for r, nevertheless in that case one can calculate the inverse directly.
One small issue with this approach is that the function f is not that well-behaved near zero - the infinite slope makes it quite challenging to find a root of u = f(t). To circumvent this, we can sample only the "upper part" of the first quadrant (i.e., area between lines x=y and x=0) and then obtain all the other points by symmetry (not only in the first quadrant but also for all the other ones).
An implementation of this method in Python could look like:
import numpy as np
from numpy.random import uniform, randint, seed
from scipy.optimize import brenth, ridder, bisect, newton
from scipy.special import gamma, hyp2f1
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

seed(100)

def superellipse_area(n):
    #https://en.wikipedia.org/wiki/Superellipse#Mathematical_properties
    inv_n = 1. / n
    return 4 * ( gamma(1 + inv_n)**2 ) / gamma(1 + 2*inv_n)

def sample_superellipse(n, num_of_points = 2000):
    def f(n, x):
        inv_n = 1. / n
        return -(n/2)*hyp2f1(inv_n, 1 - inv_n, 1 + inv_n, x)*(x**inv_n)

    lb = f(n, 0.5)
    ub = f(n, 0.0)

    points = [None for idx in range(num_of_points)]
    for idx in range(num_of_points):
        r = np.sqrt(uniform())
        v = uniform(lb, ub)
        w = bisect(lambda w: f(n, w**n) - v, 0.0, 0.5**(1/n))
        z = w**n
        x = r * z**(1/n)
        y = r * (1 - z)**(1/n)
        if uniform(-1, 1) < 0:
            y, x = x, y
        x = (2*randint(0, 2) - 1)*x
        y = (2*randint(0, 2) - 1)*y
        points[idx] = [x, y]
    return points

def plot_superellipse(ax, n, points):
    coords_x = [p[0] for p in points]
    coords_y = [p[1] for p in points]

    ax.set_xlim(-1.25, 1.25)
    ax.set_ylim(-1.25, 1.25)
    ax.text(-1.1, 1, '{n:.1f}'.format(n = n), fontsize = 12)
    ax.scatter(coords_x, coords_y, s = 0.6)

params = np.array([[0.5, 1], [2, 4]])
fig = plt.figure(figsize = (6, 6))
gs = gridspec.GridSpec(*params.shape, wspace = 1/32., hspace = 1/32.)

n_rows, n_cols = params.shape
for i in range(n_rows):
    for j in range(n_cols):
        n = params[i, j]
        ax = plt.subplot(gs[i, j])

        if i == n_rows-1:
            ax.set_xticks([-1, 0, 1])
        else:
            ax.set_xticks([])
        if j == 0:
            ax.set_yticks([-1, 0, 1])
        else:
            ax.set_yticks([])

        #ensure that the ellipses have similar point density
        num_of_points = int(superellipse_area(n) / superellipse_area(2) * 4000)
        points = sample_superellipse(n, num_of_points)
        plot_superellipse(ax, n, points)

fig.savefig('fig.png')
This produces:
