I am trying to calculate a point on a line.
I have the two endpoints and the distance from one endpoint to the point I want to find (which is B).
A(2,4)
B(x,y)
C(4,32)
The distance from A to B is 5.
How can I calculate Bx and By using the following equations:
d = Math.Sqr((Bx-Ax)^2 + (By-Ay)^2)
d = Math.Sqr((Cx-Bx)^2 + (Cy-By)^2)
and then compare the equations above.
Here are the equations with the points plugged in:
5 = Math.Sqr((Bx-2)^2 + (By-4)^2)
23.0713366 = Math.Sqr((4-Bx)^2 + (32-By)^2)
or
Math.Sqr((Bx-2)^2 + (By-4)^2) - 5 = Math.Sqr((4-Bx)^2 + (32-By)^2) - 23.0713366
How can I solve this using VBA?
Thank you!
I won't solve your equations above, because they are an unnecessarily complex way to state the problem (and the existence of an exact solution is questionable in the presence of rounding). All the points on the line from A = (Ax,Ay) to C = (Cx,Cy) can be described as B = (Ax,Ay) + t*(Cx-Ax, Cy-Ay) with t between 0 and 1.
The distance between B and A is then d = t*Sqrt((Cx-Ax)^2 + (Cy-Ay)^2), which you can invert to get the proper t for a given d: t = d/Sqrt((Cx-Ax)^2 + (Cy-Ay)^2).
In your case, B(t) = (2,4) + t*(2,28) and t = 5/Sqrt(2^2 + 28^2) ~ 0.178, so B ~ (2,4) + 0.178*(2,28) ~ (2.356, 8.987).
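As a minimal sketch, the same computation in Python (the helper name point_on_line is mine; the arithmetic translates directly to VBA using Sqr):

import math

def point_on_line(ax, ay, cx, cy, dist):
    # t chosen so that B = A + t*(C - A) lies at distance `dist` from A
    t = dist / math.sqrt((cx - ax) ** 2 + (cy - ay) ** 2)
    return (ax + t * (cx - ax), ay + t * (cy - ay))

print(point_on_line(2, 4, 4, 32, 5))  # -> (2.356..., 8.987...)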
VBA has no symbolic math capability. To solve this problem, there are different approaches:
Transform the equations to isolate one of the unknowns, most likely using substitution, and compute it (I recommend this for your problem).
Transform your functions and take their derivatives to use Newton's method (don't do this, it's overkill).
Use a "brute force" convergence method: fix a min/max for each variable and use bisection to find what you want (I don't recommend this, because you'll most likely fall into a local minimum/maximum in your case).
So basically, I'd say go with the first approach. It requires 15 minutes of tinkering with the equations, and then you're set to go.
ModelingToolkit.jl is such a great package that I frequently expect too much of it. For example, I often find myself with a model which boils down to the following:
@variables t x(t) y(t)
@parameters a b C
d = Differential(t)
eqs = [
    d(x) ~ a * y - b * x,
    d(y) ~ b * x - a * y,
    0 ~ x + y - C
]
@named sys = ODESystem(eqs)
Now, I know that I could get this down to one equation by substitution of the 0 ~ x + y - C. But in reality my systems are much larger, less trivial and programmatically generated, so I would like ModelingToolkit.jl to do it for me.
I have tried using structural_simplify, but the extra equation gets in the way:
julia> structural_simplify(sys)
ERROR: ExtraEquationsSystemException: The system is unbalanced. There are 2 highest order derivative variables and 3 equations.
More equations than variables, here are the potential extra equation(s):
Then I found the tutorial on DAE index reduction, and thought that dae_index_lowering might work for me:
julia> dae_index_lowering(sys)
ERROR: maxiters=8000 reached! File a bug report if your system has a reasonable index (<100), and you are using the default `maxiters`. Try to increase the maxiters by `pantelides(sys::ODESystem; maxiters=1_000_000)` if your system has an incredibly high index and it is truly extremely large.
So the question is whether ModelingToolkit.jl currently has a feature which will do the transformation, or if a different approach is necessary?
The problem is that the system is unbalanced, i.e. there are more equations than there are states. In general it is impossible to prove that an overdetermined system of this sort is well-defined. Thus to solve it, you have to delete one of the equations. If you know the conservation law must hold true, then you can delete the second differential equation:
using ModelingToolkit
@variables t x(t) y(t)
@parameters a b C
d = Differential(t)
eqs = [
    d(x) ~ a * y - b * x,
    0 ~ x + y - C
]
@named sys = ODESystem(eqs)
simpsys = structural_simplify(sys)
And that will simplify down to a single equation. The problem is that, in general, it cannot prove that y(t) will still be the same if it deletes that differential equation. In this specific case, maybe it could one day prove that the conservation law must hold given the differential equation system. But even if it could, the format would then be for you to give only the differential equations and let it remove equations by substituting proved conservation laws: so you would still give only two equations for the two-state system.
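As a sanity check, the conservation law does follow in one line here: adding the two original differential equations gives d/dt (x + y) = (a*y - b*x) + (b*x - a*y) = 0, so x(t) + y(t) = x(0) + y(0) = C for all t.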
I'm trying to solve this in LPSolve IDE:
/* Objective function */
min: x + y;
/* Variable bounds */
r_1: 2x = 2y;
r_2: x + y = 1.11 x y;
r_3: x >= 1;
r_4: y >= 1;
but the response I get is:
Model name: 'LPSolver' - run #1
Objective: Minimize(R0)
SUBMITTED
Model size: 4 constraints, 2 variables, 5 non-zeros.
Sets: 0 GUB, 0 SOS.
Using DUAL simplex for phase 1 and PRIMAL simplex for phase 2.
The primal and dual simplex pricing strategy set to 'Devex'.
The model is INFEASIBLE
lp_solve unsuccessful after 2 iter and a last best value of 1e+030
How come this can happen when x=1.801801802 and y=1.801801802 are possible solutions here?
How To Find The Solution
Let's do some math.
Your problem is:
min x+y
s.t. 2x = 2y
x + y = 1.11 x y
x >= 1
y >= 1
The first constraint 2x = 2y can be simplified to x=y. We now substitute throughout the problem:
min 2*x
s.t. 2*x = 1.11 x^2
x >= 1
And rearrange:
min 2*x
s.t. 1.11 x^2-2*x=0
x >= 1
From geometry we know that 1.11 x^2 - 2x makes an upward-opening parabola with a minimum less than zero. Therefore, it has exactly two roots. Factoring x*(1.11x - 2) = 0 gives them directly: 200/111 and 0.
Only one of these satisfies the second constraint: 200/111.
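As a quick numeric check (a throwaway Python snippet of my own, not part of the LP model), this point satisfies every original constraint:

x = y = 200 / 111              # = 1.801801..., the value from the question
print(x + y, 1.11 * x * y)     # r_2: both sides equal 3.6036...
print(2 * x == 2 * y, x >= 1)  # r_1 holds, and so do r_3/r_4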
Why Can't I Find This Solution With My Solver
The easy way out is to say it's because the x^2 term (x*y before the substitution) is nonlinear. But it goes a little deeper than that. Nonlinear problems can be easy to solve as long as they are convex. A convex problem is one whose constraints form a single, contiguous space such that any line drawn between two points in the space stays within the boundaries of the space.
Your problem is not convex. The original constraint x + y = 1.11 x y defines an infinite number of points along a curve. No two of these points can be connected by a straight line which stays in the space defined by the constraint, because that space is curved. (After the substitution, 1.11 x^2 - 2x = 0 is even worse: its feasible set is just two isolated roots.) If the constraint were instead 1.11 x^2 - 2x <= 0, the space would be convex, because all points could be connected with straight lines that stay in its interior.
Nonconvex problems belong to a broader class of problems that are, in general, NP-hard. This means there is no easy way of solving them (and perhaps there cannot be). We have to be smart.
Solvers that can handle mixed-integer programming (MIP/MILP) can solve many nonconvex problems efficiently, as can other techniques such as genetic algorithms. But, under the hood, these techniques all rely on glorified guess-and-check.
So your solver fails because the problem is nonconvex, and your solver is neither smart enough to use MIP to guess-and-check its way to a solution nor smart enough to use the quadratic formula.
How Then Can I Solve The Problem?
In this particular instance, we are able to use mathematics to quickly find a solution because, although the problem is nonconvex, it is part of a class of special cases. Deep thinking by mathematicians has given us a simple way of handling this class.
But consider a few generalizations of the problem:
(a) a x^3+b x^2+c x+d=0
(b) a x^4+b x^3+c x^2+d x+e =0
(c) a x^5+b x^4+c x^3+d x^2+e x+f=0
(a) has three potential solutions which must be checked (exact solutions are tricky), (b) has four (trickier), and (c) has five. The formulas for (a) and (b) are much more complex than the quadratic formula, and mathematicians have shown (the Abel-Ruffini theorem) that there is no formula for (c) that can be expressed using "elementary operations". Instead, we have to resort to glorified guess-and-check.
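To make "glorified guess-and-check" concrete, here is a sketch in Python with an arbitrary quintic made up for illustration; since no elementary formula exists at degree five, the roots are found numerically:

import numpy as np

# x^5 - 3x^3 + x - 1 = 0: np.roots finds all (possibly complex) roots
# numerically, via the eigenvalues of the companion matrix.
print(np.roots([1, 0, -3, 0, 1, -1]))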
So the techniques we used to solve your problem don't generalize very well. This is what it means to live in the realm of the nonconvex and NP-hard, and it's a good reason to fund research in mathematics, computer science, and related fields.
I was going through a question which asks to calculate gcd(a-b, a^n + b^n) % (10^9 + 7), where a, b, n can be as large as 10^12.
I am able to solve this for very small a, b, and n, but Fermat's theorem didn't seem to work. I reached the conclusion that if a and b are coprime then this will always give me a gcd of 2, but for the rest I am not able to get it.
I just need a little hint about what I am doing wrong when computing the gcd for large numbers. I also tried computing x^y by taking the modulus at each step to find the gcd, but that didn't work either.
I need just a direction and I will make my way.
Thanks in advance.
You are correct that a^n + b^n is too large to compute and that working mod 10^9 + 7 at each step doesn't provide a way to compute the answer. But you can still use modular exponentiation by squaring with a different modulus, namely a-b.
Key observations:
1) gcd(a-b,a^n + b^n) = gcd(d,a^n + b^n) where d = abs(a-b)
2) gcd(d,a^n + b^n) = gcd(d,r) where r = (a^n + b^n) % d
3) r can be feasibly computed with modular exponentiation by squaring
The point of 1) is that different programming languages have different conventions for handling negative numbers in the mod operator. Taking the absolute value avoids such complications, though mathematically it doesn't make a difference. The key idea is that it is perfectly feasible to do the first step of the Euclidean algorithm for computing gcds. All you need is the remainder upon division of the larger by the smaller of the two numbers. After the first step is done, all of the numbers are in the feasible range.
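Putting the three observations together, here is a sketch in Python (the function name and test values are mine; pow(a, n, d) is Python's built-in modular exponentiation by squaring):

from math import gcd

def solve(a, b, n, mod=10**9 + 7):
    d = abs(a - b)
    if d == 0:
        # a == b, so gcd(0, a^n + b^n) = a^n + b^n = 2 * a^n
        return 2 * pow(a, n, mod) % mod
    # Observations 2) and 3): feasibly reduce a^n + b^n modulo d
    r = (pow(a, n, d) + pow(b, n, d)) % d
    # The first Euclidean step is done; the rest involves only small numbers
    return gcd(d, r) % mod

print(solve(10, 4, 3))  # gcd(6, 10^3 + 4^3) = gcd(6, 1064) = 2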
I'm trying to solve a second-order differential equation in MATLAB over x = 0 to x = 1, and I can't figure out how. Here's the equation:
y'' = 1 + 0.1 \sqrt{1+(y')^2}
with initial conditions at zero.
Normally you solve higher-order equations by converting them to a system of first-order equations. Here, you would define:
y' = v
v' = 1 + 0.1 \sqrt{1 + v^2}
Define a function computing the right-hand side, and use ode45.
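For illustration, here is the same recipe sketched in Python with SciPy's solve_ivp (the translation to a MATLAB right-hand-side function plus ode45 is one-to-one; the zero initial conditions come from the question):

import numpy as np
from scipy.integrate import solve_ivp

# State u = [y, v] with v = y', so u' = [v, 1 + 0.1*sqrt(1 + v^2)].
def rhs(x, u):
    y, v = u
    return [v, 1.0 + 0.1 * np.sqrt(1.0 + v**2)]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0])  # y(0) = y'(0) = 0
print(sol.y[0, -1])  # y at x = 1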
Note that this equation is also solvable in closed form without much trouble, so it should be a good test case while you learn how to do it.
I have a general function f(z,a), where z and a are both real, and the function f takes on real values for all z except in some interval (z1,z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica, if that is possible at all? What are the limitations?
For a test example, consider the function f[z_,a_]=Sqrt[(z-a)(z-2a)]. For real z and a, this takes on real values except in the interval (a,2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function with just two variables like this, it'd probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach that one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a,z) = 0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single-valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this follows from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
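Spelled out by hand for this example: D[grelation, g] = -2 g vanishes exactly when g = 0, and substituting g = 0 into grelation leaves z^2 - 3 a z + 2 a^2 = (z - a)(z - 2 a) = 0, i.e. z = a and z = 2a. The same result comes out of: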
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
Daniel Lichtblau
Wolfram Research
For polynomial systems (and some classes of other systems), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2a < z < a) || (a > 0 && a < z < 2a)
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is the functions.wolfram.com site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper about this, "Reasoning about the elementary functions of complex analysis", in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found on one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it. Even for polynomials, finding an exact answer takes time; I believe Mathematica uses some sort of quantifier elimination when it uses Reduce, which takes time. Without any restrictions on your functions (are they polynomials, continuous, smooth?), one can easily construct functions which Mathematica cannot simplify further:
f[x_,y_] := Abs[Zeta[y+0.5+x*I]]*I
If this function is real for some x and some -0.5 < y < 0 or 0 < y < 0.5, then you will have found a counterexample to the Riemann hypothesis, and I'm sure Mathematica cannot give a correct answer there.