I'm fairly new to modular arithmetic; I've searched through several sources and have not figured out how to do this. I have the following equation:
s = (k * (h + x * r)) mod q
How would one solve for x? I am quite unsure of what to do with the mod q.
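If k and r are both invertible modulo q (i.e. gcd(k, q) = gcd(r, q) = 1), one way to isolate x is to multiply by modular inverses instead of dividing. Below is a minimal sketch in Julia (chosen just for illustration); invmod is Julia's built-in modular inverse, and solve_for_x is a made-up helper name mirroring the equation above:

function solve_for_x(s, k, h, r, q)
    # s = (k * (h + x * r)) mod q
    # => s * invmod(k, q) = h + x * r      (mod q)
    # => x = (s * invmod(k, q) - h) * invmod(r, q)   (mod q)
    mod((s * invmod(k, q) - h) * invmod(r, q), q)
end

# Quick round trip: pick an x, build s, and recover x again
q = 101; k = 7; h = 12; r = 5; x = 42
s = mod(k * (h + x * r), q)
solve_for_x(s, k, h, r, q)   # == 42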
I am trying to solve the following ODE using DifferentialEquations.jl:
Here P is a projection matrix. I am having a hard time seeing how to set this problem up. Is there a way to solve it directly in Julia, or should I try to rearrange the equation by hand (which I already tried) to fit the usual differential equation format?
I have already started writing some code, which can be found below, but I am not getting very far.
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    Proj_N(N) * sum
    nothing
end
prob = ODEProblem(ODE, u0, (0.0, 3.0), (g, N))
sol = solve(prob)
Any help is appreciated and thanks in advance.
If you want to use the out-of-place form you have to return the derivative, i.e.
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    return Proj_N(N) * sum
end
I think you were just mixing up the mutating and non-mutating derivative forms.
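For completeness, the mutating (in-place) counterpart would look roughly like the sketch below; the four-argument signature ODE!(du, u, p, t) writes the derivative into du and returns nothing, which is the form where the trailing nothing from the question would have been appropriate. This is only a sketch and assumes the same g, N, u0, and Proj_N from the question, plus the usual using DifferentialEquations, ForwardDiff, LinearAlgebra:

function ODE!(du, u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    s = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        s = s + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    du .= Proj_N(N) * s   # write the derivative into du in place
    return nothing
end

prob = ODEProblem(ODE!, u0, (0.0, 3.0), (g, N))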
I'm trying to solve this equation:
(b*(a*x + b) - c) % n = e
where everything is given except x.
I tried the approach of:
(A + x) % B = C
(B + C - A) % B = x
where A is (-c), and then manually solved for x given my other substitutions, but I am not getting the correct output. Would I need to use the extended Euclidean algorithm (EEA)? Any help would be appreciated! I understand this question has been asked before; I tried those solutions but they don't work for me.
(b*(a*x+b) - c) % n = e
can be rewritten as:
(b*a*x) % n = (e - b*b + c) % n
x = ((e - b*b + c) * modular_inverse(b*a, n)) % n
where the modular inverse of u, modular_inverse(u, n), is a number v such that u*v % n == 1. See this question for code to calculate the modular inverse.
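As a concrete illustration, here is a minimal sketch in Julia (chosen just for illustration); invmod is Julia's built-in modular inverse, solve_x is a made-up helper name, and it assumes gcd(b*a, n) == 1 so the inverse exists:

# Solve (b*(a*x + b) - c) % n == e for x,
# assuming b*a is invertible modulo n.
function solve_x(a, b, c, e, n)
    rhs = mod(e - b*b + c, n)          # (e - b*b + c) mod n
    mod(rhs * invmod(b*a, n), n)       # multiply by the modular inverse of b*a
end

# Quick check with made-up numbers
a, b, c, n = 3, 5, 7, 26
x = 11
e = mod(b*(a*x + b) - c, n)
solve_x(a, b, c, e, n)   # == 11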
Some caveats:
When simplifying modular equations, you can never simply divide; you need to multiply by the modular inverse.
There is no straightforward formula to calculate the modular inverse, but there is a simple, quick algorithm to calculate it (the extended Euclidean algorithm), similar to calculating the gcd.
The modular inverse doesn't always exist; it exists only when the two numbers are coprime.
Depending on the programming language, when one or both arguments are negative, the result of modulo can also be negative.
Since solutions are only determined modulo n, for small n only the numbers from 0 to n-1 need to be tested, so in many cases a simple loop is sufficient.
What language are you doing this in, and are the variables constant?
Here's a quick way to determine the possible values of x in Java:
for (int x = -1000; x < 1000; x++) {
    if ((b * ((a * x) + b) - c) % n == e) {
        System.out.println(x);
    }
}
Question: Is there a symbolic ODE solver in R? (ODE = ordinary differential equation)
I am afraid there is not, but let me confirm with the experts...
For example, solve:
(5x-6)^2 y' = 5(5x-6) y - 2
Here y is the unknown function and y' is its derivative.
(It is easy to solve by hand: y = 1/(5(5x-6)) + C*(5x-6), but I want to get that answer from R.)
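(A quick check of that hand solution, in case it helps: differentiating gives
y' = -1/(5x-6)^2 + 5C,
so (5x-6)^2 y' = -1 + 5C(5x-6)^2, and
5(5x-6) y - 2 = 1 + 5C(5x-6)^2 - 2 = -1 + 5C(5x-6)^2,
so both sides of the ODE agree.)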
What I know:
1) There are NUMERICAL (not symbolic) solvers:
I know there are numerical ODE solvers such as library(deSolve);
see the answer here:
Can R language find a generic solution of the first order differential equation?
2) There are symbolic packages (but they do not seem to contain ODE solvers):
There are symbolic packages in R such as Ryacas and rSymPy, and also some basic symbolic computation in base R; see:
https://stats.stackexchange.com/questions/4775/symbolic-computation-in-r/4778
3) A brief overview of various differential equation solvers for R:
https://cran.r-project.org/web/views/DifferentialEquations.html
However, I was unable to find a symbolic ODE solver.
I've had a play around with Ryacas, and you can in fact get symbolic solutions to some simple ODEs without too much work. Unfortunately, YACAS fails to find a solution for your example ODE. However, depending on the ODEs you are exploring, this might still be of use. If not, I'm happy to remove this post.
As a simple initial example, let's consider the ODE y'' + y = 0:
Load the library
library(Ryacas);
Since Ryacas is just an interface to YACAS, we can use YACAS' OdeSolve to solve the ODE
yacas("OdeSolve( y\'\' + y == 0 )")
#expression(C70 * exp(x * complex_cartesian(0, -1)) + C74 * exp(x *
# complex_cartesian(0, 1)))
This gives the correct general solution const * exp(-ix) + const * exp(+ix), i.e. an arbitrary linear combination of cos(x) and sin(x).
Unfortunately, for your particular example, OdeSolve fails to find a solution:
yacas("OdeSolve( y\'\' == (5 * (5 * x - 6) * y - 2) / (5 * x - 6)^2 )")
#expression(y(2) - (5 * ((5 * x - 6) * y(0)) - 2)/(5 * x - 6)^2)
The same happens when we use the YACAS online demo.
Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
We are also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^...^n (a tower of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^...^log(n) (a tower of height n).
Three questions:
How are f(n) and g(n) related to each other asymptotically (as n --> infinity),
in terms of Theta, big O, little o, big Omega, and little omega?
Please describe the exact method of solution, not just the final result.
Does the base of the log (e.g. 0.5, 2, 10, log n, or n) affect the result?
If not, why?
If yes, how?
I'd like to know whether there is any real (even if hypothetical) application whose complexity looks similar to f(n) or g(n) above. Please describe such a case if one exists.
P.S.
I tried substituting log n = a, so that n = 2^a or 10^a,
and got confused counting the height of the resulting towers.
I won't provide a full solution, because you have to work on your homework yourself, but here are some hints that may also interest other people.
1) Mathematics:
log(a^x) = x*log(a)
this will make your problem much more manageable
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
of course, if x is constant, then log_2(x) and log_10(x) are constants
3) Recursion with a stop condition.
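For instance, applying hint 1 once to a tower of height 3 peels off one level:
log(n^(n^n)) = (n^n) * log(n)
Each further application of log removes one more exponentiation level. (This only illustrates the hint; it is not a solution to the exercise.)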
All,
I've just started playing around with the Julia language and am enjoying it quite a bit. At the end of the 3rd tutorial there's an interesting problem: generalize the quadratic formula so that it solves for the roots of any n-th order polynomial equation.
This struck me as (a) an interesting programming problem and (b) an interesting Julia problem. Has anyone out there solved this one? For reference, here is the Julia code with a couple of toy examples. Again, the idea is to make this generic for any n-th order polynomial.
Cheers,
Aaron
function derivative(f)
    return function(x)
        # pick a small value for h
        h = x == 0 ? sqrt(eps(Float64)) : sqrt(eps(Float64)) * x
        # floating point arithmetic gymnastics
        xph = x + h
        dx = xph - x
        # evaluate f at x + h
        f1 = f(xph)
        # evaluate f at x
        f0 = f(x)
        # divide the difference by h
        return (f1 - f0) / dx
    end
end

function quadratic(f)
    f1 = derivative(f)
    c = f(0.0)            # constant term: f(0)
    b = f1(0.0)           # linear coefficient: f'(0)
    a = f(1.0) - b - c    # quadratic coefficient: f(1) - b - c
    return (-b + sqrt(b^2 - 4a*c + 0im))/2a, (-b - sqrt(b^2 - 4a*c + 0im))/2a
end

quadratic((x) -> x^2 - x - 2)
quadratic((x) -> x^2 + 2)
The package PolynomialRoots.jl provides the function roots() to find all (real and complex) roots of polynomials of any order. The only mandatory argument is the array with coefficients of the polynomial in ascending order.
For example, in order to find the roots of
6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1
after loading the package (using PolynomialRoots) you can use
julia> roots([1, 2, 3, 4, 5, 6])
5-element Array{Complex{Float64},1}:
0.294195-0.668367im
-0.670332+2.77556e-17im
0.294195+0.668367im
-0.375695-0.570175im
-0.375695+0.570175im
The package is a Julia implementation of the root-finding algorithm described in this paper: http://arxiv.org/abs/1203.1034
PolynomialRoots.jl also has support for arbitrary-precision calculations. This is useful for solving equations that cannot be solved in double precision. For example,
julia> r = roots([94906268.375, -189812534, 94906265.625]);
julia> (r[1], r[2])
(1.0000000144879793 - 0.0im,1.0000000144879788 + 0.0im)
gives the wrong result for this polynomial; passing the input array in arbitrary precision instead forces arbitrary-precision calculations that provide the right answer (see https://en.wikipedia.org/wiki/Loss_of_significance):
julia> r = roots([BigFloat(94906268.375), BigFloat(-189812534), BigFloat(94906265.625)]);
julia> (Float64(r[1]), Float64(r[2]))
(1.0000000289759583,1.0)
There are no algebraic formulae for general polynomials of degree five and above (in fact, there cannot be; see here). So, in theory, you could proceed using the same methodology with the closed-form solutions for cubics and quartics, but even that would be a lot of hard work, given the very unwieldy formulae for the roots of quartics. You could also use a CAS like SymPy to find those formulae.
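To connect this back to the original function-based approach: if the polynomial is only available as a black-box function f of known degree n, one way to recover its coefficients is to sample it at n+1 points and solve a small linear system, then hand the coefficients to roots(). The sketch below is only an illustration (poly_roots is a made-up helper name, and the Vandermonde approach can be ill-conditioned for high degrees):

using PolynomialRoots, LinearAlgebra

function poly_roots(f, n)
    xs = collect(0.0:n)                  # n+1 sample points
    V = [x^k for x in xs, k in 0:n]      # Vandermonde matrix
    coeffs = V \ f.(xs)                  # coefficients in ascending order
    return roots(coeffs)
end

poly_roots(x -> x^2 - x - 2, 2)   # roots approximately 2 and -1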