Consider this input to WolframAlpha,
solve [ 0 = x^4 - 6*x^2 - 8*x*cos( (2*pi )/5 ) - 2*cos( (4*pi)/5) - 1 ]
The solutions it gives are,
{x == (1 - Sqrt[5])/2 || x == (3 + Sqrt[5])/2 || x == (-2 - Sqrt[2 (5 - Sqrt[5])])/2 || x == (-2 + Sqrt[2 (5 - Sqrt[5])])/2}
But the same equation in Sage gives the roots,
h(x) = x^4 - 6*x^2 - 8*x*cos( (2*pi )/5 ) - 2*cos( (4*pi)/5) - 1
h(x).solve(x)
[x == -1/2*sqrt(-2*sqrt(5) + 10) - 1, x == 1/2*sqrt(-2*sqrt(5) + 10) - 1, x == -1/2*sqrt(2*sqrt(5) + 6) + 1, x == 1/2*sqrt(2*sqrt(5) + 6) + 1]
It seems that the first two roots given by WolframAlpha differ from the last two roots given by Sage.
Why?
They're not different; they're exactly the same, simply listed in a different order.
sage: h(x) = x^4 - 6*x^2 - 8*x*cos( (2*pi )/5 ) - 2*cos( (4*pi)/5) - 1
sage: sols = h(x).solve(x, solution_dict=True)
sage: [CC(d[x]) for d in sols]
[-2.17557050458495, 0.175570504584946, -0.618033988749895, 2.61803398874989]
sage: wa = [ (1 - sqrt(5))/2 , (3 + sqrt(5))/2 , (-2 - sqrt(2* (5 - sqrt(5))))/2 , (-2 + sqrt(2* (5 - sqrt(5))))/2 ]
sage: [CC(v) for v in wa]
[-0.618033988749895, 2.61803398874989, -2.17557050458495, 0.175570504584946]
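If you want an exact check rather than a floating-point one, you can also compare the two forms as algebraic numbers, for example:
sage: QQbar((1 - sqrt(5))/2) == QQbar(-1/2*sqrt(2*sqrt(5) + 6) + 1)
True
sage: QQbar((3 + sqrt(5))/2) == QQbar(1/2*sqrt(2*sqrt(5) + 6) + 1)
True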
I am trying to reproduce this model - the code in the tutorial is for an old version of JuMP/Julia and does not run.
However, when I try to add the constraint:
@constraint(model, con, c[i = 1:N] .== ( ((1 - τ) * (1 - l[i]) .* w[i]) + e[i]))
I get the error Unexpected assignment in expression 'c[i = 1:N]'.
Here is the reprex:
using Random
using Distributions
using JuMP
using Ipopt
Random.seed!(123)
N = 1000
γ = 0.5
τ = 0.2
ϵ = rand(Normal(0, 1), N)
wage = rand(Normal(10, 1), N)
consumption = (γ * (1 - τ) * wage) + (γ * ϵ)
leisure = (1 - γ) .+ (( 1 - γ) * ϵ) ./ (( 1 - τ ) * wage)
model = Model(Ipopt.Optimizer)
@variable(model, c[i = 1:N] >= 0)
@variable(model, 0 <= l[i = 1:N] <= 1)
@constraint(model, con, c[i = 1:N] .== ( ((1 - τ) * (1 - l[i]) .* w[i]) + e[i]))
@NLobjective(model, Max, sum(γ * log(c[i]) + (1 - γ) * log(l[i]) for i in 1:N))
Does anyone know why this error is being thrown and how to fix it?
Any and all help appreciated!
Running Julia 1.5.1
With c[i = 1:N] in JuMP you can only define variables.
For the constraints, one way you could do it is just:
w = wage # not in your code
e = ϵ # not in your code
@constraint(model, con[i = 1:N], c[i] == ((1 - τ) * (1 - l[i]) * w[i]) + e[i])
Przemyslaw's answer is a good one. If you want to stick with the vectorized syntax, you can write:
N = 1_000
e = rand(N)
w = rand(N)
τ = 0.2
model = Model()
@variable(model, c[i = 1:N] >= 0)
@variable(model, 0 <= l[i = 1:N] <= 1)
@constraint(model, c .== (1 - τ) .* (1 .- l) .* w .+ e)
Here is the JuMP documentation for constraints: https://jump.dev/JuMP.jl/stable/constraints
I failed to write the following expression in a way that gives accurate results; if anyone can help me I will be glad. I attached my expression in a picture ("want this") and my attempt as "my trial". The correct answer must equal 0.119 when a = 1, b = 10, m = 3, n = 6. Thanks a lot in advance.
a = 1
b = 10
m = 3
n = 6
a^1 b^n (Sum[
   Sum[
    Sum[
     Sum[
      (-1)^(k + v - n + m + 1) *
       If[k == 0, 1,
        SeriesCoefficient[Series[(-Log[1 - x])^k, {x, 0, 30}], p + k]] *
       If[n - k - 2 == 0, 1,
        SeriesCoefficient[
         Series[(-Log[1 - x])^(n - k - 2), {x, 0, 30}], q + (n - k - 2)]] *
       Binomial[n - m - 1, k] Binomial[b - 1, v] *
       (-PolyGamma[0, -1 + 1/a - k + n + q] +
         PolyGamma[0, 2/a + n + p + q + v])/(a (1 + k + p + v) + 1),
      {q, 0, 30 - (n - k - 2)}],
     {p, 0, 30 - k}],
    {v, 0, b - 1}],
   {k, 0, n - m - 1}])/((m - 1)! (n - m - 1)!)
I found the solution to the problem. The issue was that when k = 0 the coefficient does not simply equal 1; instead the whole expression has to be split into a separate term for the k = 0 case and a sum that starts from k = 1. I still failed to do this in Mathematica, but by doing it as described above I managed to get the correct result. Thank you all for your precious time and opinions.
I want to show that the best case of quicksort runs in n lg n time.
I got this recurrence:
M(0) = 1
M(1) = 1
M(n) = min (0 <= k <= n-1) {M(k) + M(n - k - 1)} + n
show that M(n) >= 1/2 (n + 1) lg(n + 1)
What I have got so far:
By the induction hypothesis,
M(n) <= min {M(k) + M(n - k - 1)} + n
Focusing on the inner expression, I got:
1/2 (k + 1) lg(k + 1) + 1/2 (n - k) lg(n - k)
= 1/2 lg((k + 1)^(k + 1)) + 1/2 lg((n - k)^(n - k))
= 1/2 (lg((k + 1)^(k + 1)) + lg((n - k)^(n - k)))
= 1/2 lg((k + 1)^(k + 1) * (n - k)^(n - k))
But I think I'm doing something wrong: I expected the k to be gone, and I can't see how this expression would cancel out all the k's.
You indeed want to get rid of k. To do this, you want to find a lower bound on the minimum of M(k) + M(n - k - 1). In general this can be arbitrarily tricky, but in this case the standard approach works: take the derivative with respect to k.
((k+1) ln(k+1) + (n-k) ln(n-k))' =
ln(k+1) + (k+1)/(k+1) - ln(n-k) - (n-k)/(n-k) =
ln((k+1) / (n-k))
We want the derivative to be 0, so
ln((k+1) / (n-k)) = 0 <=>
(k+1) / (n-k) = 1 <=>
k + 1 = n - k <=>
k = (n-1) / 2
You can check that it's indeed a local minimum (the second derivative, 1/(k + 1) + 1/(n - k), is positive).
Therefore, the best lower bound on M(k) + M(n - k - 1) (which we can get from the inductive hypothesis) is reached for k=(n-1)/2. Now you can just substitute this value instead of k, and n will be your only remaining variable.
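For example, substituting k = (n - 1)/2 (the continuous minimum is a lower bound on the minimum over integer k) gives k + 1 = n - k = (n + 1)/2, so
M(n) >= 1/2 ((n + 1)/2) lg((n + 1)/2) + 1/2 ((n + 1)/2) lg((n + 1)/2) + n
     = 1/2 (n + 1) (lg(n + 1) - 1) + n
     = 1/2 (n + 1) lg(n + 1) + (n - 1)/2
     >= 1/2 (n + 1) lg(n + 1)   for n >= 1,
which is the claimed bound.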
Let's say I had the equation T = sum(A**n) for n from 1 to M.
Now let's say I knew M and T, but wanted A. How would I solve for A?
I want to do an exponential backoff in the event of an error, but I don't want the total time spent backing off to be greater than T, nor the maximum number of retries to exceed M. So I'd need to find A.
The closed-form expression for sum(A**n) for n from 1 to M is (A^(M+1) - 1) / (A - 1) - 1. To see this work, let M = 3 and A = 2. Then 2^1 + 2^2 + 2^3 = 14, and (2^4 - 1) / (2 - 1) - 1 = 15 / 1 - 1 = 14.
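As a quick sanity check in Python:
M, A = 3, 2
assert sum(A**n for n in range(1, M + 1)) == (A**(M + 1) - 1) // (A - 1) - 1  # both sides are 14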
So, we have the closed-form expression T = (A ^ (M + 1) - 1) / (A - 1) - 1. For general M this is a polynomial equation of degree M + 1 in A, so there is no closed-form solution for A. However, because the RHS is monotonically increasing in A (bigger values of A always give bigger values of the expression), we can do what amounts to a binary search to find an answer to arbitrary precision:
L = 0
H = MAX(T, 2)
A = (L + H) / 2
while |(A ^ (M + 1) - 1) / (A - 1) - 1 - T| > precision
if (A ^ (M + 1) - 1) / (A - 1) - 1 > T then
H = A
else
L = A
end if
A = (L + H) / 2
loop
Example: T = 14, M = 3, precision = 0.25
L = 0
H = MAX(14, 2) = 14
A = (L + H) / 2 = 7
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 385 > 0.25
H = A = 7
A = (L + H) / 2 = 3.5
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 44.625 > 0.25
H = A = 3.5
A = (L + H) / 2 = 1.75
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 3.828125 > 0.25
L = A = 1.75
A = (L + H) / 2 = 2.625
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 13.603515625 > 0.25
H = A = 2.625
A = (L + H) / 2 = 2.1875
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 3.440185546875 > 0.25
H = A = 2.1875
A = (L + H) / 2 = 1.96875
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 0.524444580078125 > 0.25
L = A = 1.96875
A = (L + H) / 2 = 2.078125
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 1.371326446533203125 > 0.25
H = A = 2.078125
A = (L + H) / 2 = 2.0234375
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 0.402295589447021484375 > 0.25
H = A = 2.0234375
A = (L + H) / 2 = 1.99609375
|(A ^ (M + 1) - 1) / (A - 1) - 1 - T|
= 0.066299498081207275390625 < 0.25
Solution: 1.99609375
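For reference, here's a rough Python version of the same bisection (the function name and the tolerance default are just for illustration):
def solve_base(T, M, precision=0.25):
    # f(A) = A + A**2 + ... + A**M via the closed form; handle A == 1
    # separately to avoid dividing by zero.
    def f(A):
        return M if A == 1 else (A ** (M + 1) - 1) / (A - 1) - 1
    lo, hi = 0.0, max(T, 2)
    A = (lo + hi) / 2
    while abs(f(A) - T) > precision:
        if f(A) > T:
            hi = A
        else:
            lo = A
        A = (lo + hi) / 2
    return A

print(solve_base(14, 3))  # 1.99609375, matching the trace above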
Please verify my logic to see if what I'm attempting is valid or shady.
W(n) = W(n/2) + n lg(n)
W(1) = 1
n = 2^k
By trying the pattern
line 1 : W(2^k) = W(2^(k-1)) + n lg n
line 2 :        = W(2^(k-2)) + n lg n + n lg n
...
line i :        = W(2^(k-i)) + i * n lg n
and then solve the rest for i.
I just want to make sure it's cool if I substitute in 2^k in one place (on line 1) but not in the other, for the n lg n.
I find that subbing in 2^k everywhere, giving 2^k lg(2^k), gets really greasy.
Any thoughts are welcome (specifically whether I should be subbing in 2^k, and if so, how you would suggest finishing the solution).
It's fine to switch back and forth between n and 2^k as needed because you're assuming that n = 2^k. However, that doesn't mean that what you have above is correct. Remember that as n decreases, the value of n log n keeps decreasing as well, so it isn't the case that the statement
line i = W(2^(k-i)) + i * n lg n
is true.
Let's use the iteration method one more time:
W(n) = W(n / 2) + n log n
= (W(n / 4) + (n/2) log (n/2)) + n log n
= W(n / 4) + (n/2) (log n - 1) + n log n
= W(n / 4) + n log n / 2 - n / 2 + n log n
= W(n / 4) + (1 + 1/2) n log n - n / 2
= (W(n / 8) + (n / 4) log(n/4)) + (1 + 1/2) n log n - n / 2
= W(n / 8) + (n / 4) (log n - 2) + (1 + 1/2) n log n - n / 2
= W(n / 8) + n log n / 4 - n / 2 + (1 + 1/2) n log n - n / 2
= W(n / 8) + (1 + 1/2 + 1/4) n log n - n
= (W(n / 16) + (n / 8) log(n/8)) + (1 + 1/2 + 1/4) n log n - n
= W(n / 16) + (n / 8) (log n - 3) + (1 + 1/2 + 1/4) n log n - n
= W(n / 16) + n log n / 8 - 3n / 8 + (1 + 1/2 + 1/4) n log n - n
= W(n / 16) + (1 + 1/2 + 1/4 + 1/8) n log n - n - 3n/8
We can look at this to see if we spot a pattern. First, notice that the n log n term has a coefficient of (1 + 1/2 + 1/4 + 1/8 + ...) as we keep expanding outward. This series converges to 2, so when the iteration stops that term will be between n log n and 2n log n. Next, look at the coefficient of the -n term. If you look closely, you'll notice that this coefficient is -1 times
(1 / 2 + 2 / 4 + 3 / 8 + 4 / 16 + 5 / 32 + ... )
This is the sum of i / 2^i, which converges to 2. Therefore, if we iterate this process, we'll find at each step that the value is Θ(n log n) - Θ(n), so the overall recurrence solves to Θ(n log n).
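If you want to sanity-check this numerically, here's a quick Python sketch (assuming n is a power of two):
from math import log2

def W(n):
    # W(1) = 1, W(n) = W(n / 2) + n lg n
    return 1 if n == 1 else W(n // 2) + n * log2(n)

for k in (4, 8, 12, 16, 20):
    n = 2 ** k
    print(n, W(n) / (n * log2(n)))  # the ratio climbs toward 2, consistent with Θ(n log n)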
Hope this helps!