I was taking a derivative in Mathematica and the result contained the term Sqrt', and I was wondering what the ' on the end meant? I believe it means 1/Sqrt from doing the derivative by hand, but if someone could confirm that this is how the result is displayed I would appreciate it. Here is my input and output.
In f[p_] := cSqrt[(m^2)*(c^2) + (p - eA/c)^2] + e*phi
In f'[p]
Out 2 (-(eA/c) + p) Derivative[1][cSqrt][c^2 m^2 + (-(eA/c) + p)^2]
Best,
Ben
This may help:
http://blog.wolfram.com/2011/05/20/mathematica-qa-three-functions-for-computing-derivatives/
Apparently ' is the standard shorthand notation for the derivative.
I would like to simplify symbolic expressions, which end users provide as arguments to a function, in such a way that the result is always an expression of the following form: c_1*A + c_2*B + c_3*C = c_4, where all c-terms are numerical constants, and capital letters are unknowns.
Thus, if a user provides the expression:
.2*A + 3*B + 2*B + 2 = C - 5,
this should simplify to:
.2*A + 5*B - 1*C = -7
The furthest I've come with this is by searching for symbolic simplification. The package Deriv can simplify some expressions by removing brackets etc., but it doesn't order terms as required.
Does anyone have a suggestion for how to go about this in R, without introducing any dependencies other than on other R-packages? Thank you sincerely!
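Not an R answer, but the underlying trick may help: if the user's expression is affine in the unknowns, each coefficient can be recovered by probing the expression at unit vectors, and the constant term by probing at zero. Here is a Python sketch of the idea (the helper name linear_coefficients is made up for this sketch); the same probing works in base R with a function of the unknowns:

```python
def linear_coefficients(f, nvars):
    # Works for affine f only: the constant term is f(0, ..., 0),
    # and the coefficient of variable i is f(e_i) - f(0), where e_i is a unit vector.
    zero = [0.0] * nvars
    const = f(*zero)
    coeffs = []
    for i in range(nvars):
        unit = zero[:]
        unit[i] = 1.0
        coeffs.append(f(*unit) - const)
    return coeffs, const

# The user's example, .2*A + 3*B + 2*B + 2 = C - 5, moved to one side:
f = lambda A, B, C: (0.2 * A + 3 * B + 2 * B + 2) - (C - 5)
coeffs, const = linear_coefficients(f, 3)
# Canonical form: coeffs . (A, B, C) = -const, i.e. .2*A + 5*B - 1*C = -7
```

This recovers the c-terms numerically rather than symbolically, so it sidesteps expression manipulation entirely, at the cost of only handling affine input.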
Just started a new course (data structures) and I'm having some trouble with a question:
For what F(n) function is this true?
My direction is that it should hold when the exponent of n is 1 or less, because then it matches the definition of Theta: the function will be bounded between C1*F(n) and C2*F(n). But I'm not sure about that. Thanks!
Notice that
0·n + 1·n + 2·n + ... + n·n
= n(0 + 1 + 2 + ... + n)
= n(n(n+1)/2)
with that last step following from Gauss's sum. Therefore, the summation is Θ(n³).
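The closed form above is easy to sanity-check with a few lines of Python:

```python
def summation(n):
    # 0*n + 1*n + 2*n + ... + n*n, computed directly
    return sum(i * n for i in range(n + 1))

def closed_form(n):
    # n * (n * (n + 1) / 2), by Gauss's sum; n*(n+1) is always even
    return n * (n * (n + 1) // 2)

assert all(summation(n) == closed_form(n) for n in (1, 2, 10, 100))
```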
I'm studying the same course right now, and I think the following rule might help: if
lim n->inf f(n) / g(n) = c
for some constant 0 < c < infinity, then f(n) = Θ(g(n)).
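Applied here with g(n) = n^3: the ratio of the sum to n^3 tends to the constant 1/2, which is exactly the Θ(n³) claim. A quick Python check:

```python
def partial_sum(n):
    # 0*n + 1*n + ... + n*n
    return sum(i * n for i in range(n + 1))

# The ratio partial_sum(n) / n^3 = (n+1)/(2n) approaches the constant 1/2
ratios = [partial_sum(n) / n**3 for n in (10, 100, 1000)]
```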
There is a commonly used Verlet-integration formula on the web by Johnathan Dummer, called Time-Corrected Verlet. However, I've read in several forum posts that people get weird or unexpected results with it under certain conditions.
Formula by Johnathan Dummer:
x1 = x + (x - x0) * dt / dt0 + a * dt^2
There is also a Stack Overflow answer which states that Dummer's time-corrected formula is broken, and which presents its own derivation as the correct one.
Suggested correct formula from a Stack Overflow answer:
x1 = x + (x - x0) * dt / dt0 + a * dt * (dt + dt0) / 2
Well, is Dummer's formula really broken? If yes, is the derivation of the poster better?
PS: It is also weird that Dummer uses the Verlet integration formula x1 = x - x0 + a * dt^2 on his website instead of the correct x1 = 2*x - x0 + a * dt^2.
The Wikipedia page Verlet integration - Non-constant time differences presents the two formulas, without references. I've not checked the derivation myself, but the reasoning for the second, improved formula looks sound.
I've downloaded Dummer's spreadsheet and modified one of the formulas to use the correction. The results are much better.
The exact results are in yellow; we see that just using the normal Verlet algorithm with a fluctuating frame rate is bad. Dummer's time-corrected variant in red is pretty good, but a little off. The dark green version with the improved correction is much better.
For projectiles under gravity, which have a quadratic solution, you may find that the improved version is exact. When the degree gets a bit higher it will deviate from the true path, and it might be worth testing whether we still get a better approximation.
Doing the same calculation for a sine curve shows the improved method is considerably better. Here time-corrected Verlet drifts quite a bit; the improved version is only a little off the exact answer.
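These observations are easy to reproduce numerically. Below is a small Python sketch (the constants, seed values, and step pattern are my own illustrative choices): it integrates constant-acceleration motion with a fluctuating step size, seeding both methods with two exact positions. The corrected formula reproduces the quadratic solution to rounding error, while the time-corrected formula drifts:

```python
def step_tcv(x, x0, a, dt, dt0):
    # Dummer's time-corrected Verlet
    return x + (x - x0) * dt / dt0 + a * dt * dt

def step_corrected(x, x0, a, dt, dt0):
    # corrected variable-step formula: a*dt*(dt + dt0)/2 instead of a*dt^2
    return x + (x - x0) * dt / dt0 + a * dt * (dt + dt0) / 2

a, v0 = -9.8, 20.0                      # constant gravity, initial velocity
exact = lambda t: v0 * t + 0.5 * a * t * t

def final_error(step, sizes):
    # seed with two exact positions, then iterate with varying dt
    t = sizes[0]
    x_prev, x = exact(0.0), exact(t)
    dt0 = sizes[0]
    for dt in sizes[1:]:
        x_prev, x = x, step(x, x_prev, a, dt, dt0)
        t += dt
        dt0 = dt
    return abs(x - exact(t))            # absolute error at the final time

sizes = [0.01, 0.02] * 50               # fluctuating "frame times"
err_tcv = final_error(step_tcv, sizes)
err_corr = final_error(step_corrected, sizes)
```

With a fixed step size the two formulas coincide; the difference only shows up when dt changes between frames.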
For the PS: note that if you set dt = dt0 in the TCV formula
x1 = x + (x - x0) * dt / dt0 + a * dt^2
you get
x1 = x + x - x0 + a * dt^2
   = 2x - x0 + a * dt^2
which is the original Verlet formula.
The true derivation is based on Taylor formulas
x(t-h0) = x(t) - x'(t)*h0 + 0.5*x''(t)*h0^2 + O(h0^3)
x(t+h1) = x(t) + x'(t)*h1 + 0.5*x''(t)*h1^2 + O(h1^3)
and now eliminate x'(t) from these two formulas to get a Verlet-like formula
h0*x(t+h1) + h1*x(t-h0) = (h0+h1)*x(t) + 0.5*a(t)*h0*h1*(h0+h1) + O(h^4)
which makes a propagation formula
x(t+h1) = (1+h1/h0)*x(t) - h1/h0*x(t-h0) + 0.5*a(t)*h1*(h0+h1)
= x(t) + (x(t)-x(t-h0))*h1/h0 + 0.5*a(t)*h1*(h0+h1)
so that indeed the corrected formula is the correct one.
Note that if you use velocity Verlet steps
Verlet(dt) {
    v += a * 0.5 * dt
    x += v * dt
    a = acceleration(x)
    v += a * 0.5 * dt
}
then each step is independently symplectic, so that changing the step size between steps is entirely unproblematic.
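The steps above can be written as runnable Python. Using a harmonic oscillator a(x) = -x as a stand-in test force (my choice, not from the answer), the energy stays near-constant even with a fluctuating step size:

```python
def velocity_verlet_step(x, v, a, dt, accel):
    # kick-drift-kick form; each step is symplectic on its own
    v += a * 0.5 * dt
    x += v * dt
    a = accel(x)
    v += a * 0.5 * dt
    return x, v, a

accel = lambda x: -x                    # test force: harmonic oscillator
x, v = 1.0, 0.0
a = accel(x)
e0 = 0.5 * (v * v + x * x)              # exact conserved energy of the oscillator
for i in range(1000):
    dt = 0.01 if i % 2 == 0 else 0.02   # deliberately fluctuating step size
    x, v, a = velocity_verlet_step(x, v, a, dt, accel)
drift = abs(0.5 * (v * v + x * x) - e0)
```

Note that caching the acceleration at the end of each step means acceleration(x) is evaluated only once per step.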
Notice that the main advantage of Verlet and other similar symplectic schemes over Runge-Kutta methods etc. crucially depends on using a fixed step. In detail, the modified energy function (and the other more-than-quadratic constants of motion) that is kept largely constant by the numerical method is a modification of the exact energy, where the difference scales with the step size. So when the step size changes, a different modification gives a constant energy. Frequent changes of the step size thus allow, possibly, arbitrary changes of the energy level.
I decided to quit being lazy and show some kind of derivation of how the original Verlet method looks with a variable step size, because it seems this faulty adaptation by Dummer is more pervasive than I thought, which is saddening. I also noticed that, as the answer above points out, the correct version is now on Wikipedia alongside Dummer's, though it was added after my "suggested correct answer".
When I look at the Verlet method, I see that it looks a lot like leapfrog, velocity Verlet, implicit Euler, etc., which look like second-order versions of the modified midpoint method, and some of them may be identical. In each of these, to some degree, there is a leapfrog idea: the integration of acceleration (into velocity) and the integration of velocity (into position) are staggered so that they overlap by half a step. This brings time-reversibility and stability, which are more important for the 'realism' of a simulation than accuracy is.

And that realism, the believability, is what matters for video games. We don't care if something moves to a slightly different position than its exact mass would have truly caused, so long as it looks and feels realistic. We're not calculating where to point high-powered satellite telescopes at features on distant objects, or predicting future celestial events. Here, stability and efficiency take priority over mathematical accuracy, so the leapfrog method seems appropriate. When you adapt leapfrog for a variable time step, it loses some of this advantage, and some of its appeal for game physics.

Stormer-Verlet is like leapfrog, except that it uses the average velocity of the previous step instead of a separately maintained velocity, and you can adapt it for a variable time step in the same way as leapfrog. To integrate velocity forward with a fixed acceleration, you use half the length of the previous step and half the length of the next step, because the updates are staggered. If the steps were fixed, as in true leapfrog, the two half-lengths would be equal and sum to one whole step.

I use h for the step size, a/v/p for acceleration/velocity/position, and hl/pl for 'last', as in the previous step. These aren't really equations, more like assignment operations.
Original leapfrog:
v = v + a*h
p = p + v*h
With variable time step:
v = v + a*hl/2 + a*h/2
p = p + v*h
Factor a/2:
v = v + a*(hl + h)/2
p = p + v*h
Use previous position (p - pl)/hl for initial velocity:
v = (p - pl)/hl + a*(hl + h)/2
p = p + v*h
Substitute, we don't need v:
p = p + ( (p - pl)/hl + a*(hl + h)/2)*h
Distribute h:
p = p + (p - pl)*h/hl + a*h*(h + hl)/2
The result is not as simple or fast as the original Stormer form of Verlet, 2p - pl + a*h^2. I hope this makes some sense. In actual code you would omit the last step (distributing h); there is no need to multiply by h twice.
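As a sanity check on the substitution above, here is a small Python comparison (the test force and step sizes are arbitrary choices of mine): it runs the velocity form and the final position-only form side by side and confirms they generate the same trajectory up to rounding:

```python
def accel(p):
    return -p                           # arbitrary test force (harmonic oscillator)

def run_velocity_form(pl, p, hl, steps):
    # leapfrog with a maintained velocity: v += a*(hl + h)/2 ; p += v*h
    v = (p - pl) / hl                   # seed v from the previous step's average velocity
    for h in steps:
        v += accel(p) * (hl + h) / 2
        p += v * h
        hl = h
    return p

def run_position_form(pl, p, hl, steps):
    # the derived position-only update: p += (p - pl)*h/hl + a*h*(h + hl)/2
    for h in steps:
        pl, p = p, p + (p - pl) * h / hl + accel(p) * h * (h + hl) / 2
        hl = h
    return p

steps = [0.01, 0.03, 0.02] * 20         # deliberately varying step sizes
diff = abs(run_velocity_form(1.0, 0.999, 0.01, steps)
           - run_position_form(1.0, 0.999, 0.01, steps))
```

The two runs agree because (p - pl)/hl recovers exactly the velocity that the velocity form maintains, which is the substitution step in the derivation.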
Using code FullSimplify[Abs[q + I*w], Element[{q, w}, Reals]] results in
Abs[q + I w]
and not
Sqrt[q^2 + w^2]
What am I missing?
P.S. Assuming[{q \[Element] Reals, w \[Element] Reals}, Abs[q + I*w]] does not work either.
Note: Simplify[Abs[w]^2, Element[{q, w}, Reals]] and Simplify[Abs[I*q]^2, Element[{q, w}, Reals]] work.
The problem is that what you assume to be "simple" and what MMA assumes to be simple are two different things. Taking a look at ComplexityFunction indicates that MMA primarily looks at LeafCount. Applying LeafCount gives:
In[3]:= Abs[q + I w] // LeafCount
Out[3]= 8
In[4]:= Sqrt[q^2 + w^2] // LeafCount
Out[4]= 11
So, MMA considers the Abs form to be better. (One can explore the structure visually using either TreeForm or FullForm.) What we need to do is tell MMA to treat Abs as more expensive. To do this, we take the example from ComplexityFunction and write:
In[7]:= f[e_] := 100 Count[e, _Abs, {0, Infinity}] + LeafCount[e]
FullSimplify[Abs[q + I w], Element[{q, w}, Reals],
ComplexityFunction -> f]
Out[8]= Sqrt[q^2 + w^2]
As requested. Basically, we are telling MMA through f[e] that every subexpression with head Abs should count as 100 leaves.
EDIT: As mentioned by Brett, you can also make it more general, and use _Complex as the rule to look for:
In[20]:= f[e_] := 100 Count[e, _Complex, {0, Infinity}] + LeafCount[e]
FullSimplify[Abs[q + I w], Element[{q, w}, Reals],
ComplexityFunction -> f]
Out[21]= Sqrt[q^2 + w^2]
I suggest using ComplexExpand, which tells the system that all variables are real.
In[28]:= Abs[q + I*w] // ComplexExpand
Out[28]= Sqrt[q^2 + w^2]
These comments are not helpful. Mathematica is failing to evaluate complex numbers; e.g., Abs[5 + I*20] is left unchanged. The I is coded correctly. Making abstract observations about 'what is or is not simple' is unrelated and wrong. A float should result, not some algebra. N and InputForm do not work, either.
Does anyone know how to minimize a function containing an integral in MATLAB? The function looks like this:
L = Int(t=0,t=T)[(A*R - x) dt], where A is a system parameter, and R and x are related through:
dR/dt = a*x*R*Y - b*R, where a and b are constants, and
dY/dt = -x*R*Y
I read somewhere that I can use fminbnd and quad in combination but I am not able to make it work. Any suggestions?
Perhaps you could give more details of your integral; e.g., where is the missing bracket in [AR-x)dt]? Is there any dependence of x on t, or can we integrate dR/dt = a*x*R - b*R to give R = C*exp((a*x-b)*t)? In any case, to answer your question on fminbnd and quad, you could set A, C, T, a, b, xmin and xmax (the last two being the range over which you want to look for the minimum) and use:
[x, fval] = fminbnd(@(x) quad(@(t) A*C*exp((a*x-b)*t) - x, 0, T), xmin, xmax)
This finds x that minimizes the integral.
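For readers without MATLAB, the same fminbnd-around-quad pattern can be sketched in plain Python with a golden-section search wrapped around a trapezoid quadrature. The parameter values are arbitrary illustrations (not from the question), chosen so that the true minimizer is x = 3:

```python
import math

def trapezoid(f, lo, hi, n=1000):
    # composite trapezoid rule, the stand-in for MATLAB's quad
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

def golden_min(f, lo, hi, tol=1e-6):
    # golden-section search, the stand-in for fminbnd (f must be unimodal)
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        c = hi - invphi * (hi - lo)
        d = lo + invphi * (hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return 0.5 * (lo + hi)

# Illustrative constants (assumed, not from the question). With these values the
# objective is Int_0^1 (exp((x-2)*t) - x) dt, whose minimum lands exactly at x = 3.
A, C, a, b, T = 1.0, 1.0, 1.0, 2.0, 1.0
objective = lambda x: trapezoid(lambda t: A * C * math.exp((a * x - b) * t) - x, 0.0, T)
x_min = golden_min(objective, 0.0, 5.0)
```

The outer minimizer treats the inner quadrature as a black-box function of x, exactly as fminbnd treats the quad call.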
If I didn't get it wrong, you are trying to minimize with respect to t:
\int_0^t{(AR-x) dt}
well then you just need to find the zeros of:
AR-x
This is just math, not MATLAB ;)
Here's some manipulation of your equations that might help.
Combining the second and third equations you gave gives
dR/dt = -a*(dY/dt)-bR
Now if we solve for R on the righthand side and plug it into the first equation you gave we get
L = Int(t=0,t=T)[(-A/b*(dR/dt + a*dY/dt) - x)dt]
Now we can integrate the first term to get:
L = -A/b*[R(T) - R(0) + a*(Y(T) - Y(0))] - Int(t=0,t=T)[x dt]
So now all that matters with regards to R and Y are the endpoints. In fact, you may as well define a new function Z = R + a*Y. Then you get
L = -A/b*[Z(T) - Z(0)] - Int(t=0,t=T)[x dt]
This next part I'm not as confident in. The integral of x with respect to t will give some function which is evaluated at t = 0 and t = T. Call this function X, to give:
L = -A/b*[Z(T) - Z(0)] - X(T) + X(0)
This equation holds true for all T, so we can set T to t if we want to:
L = -A/b*[Z(t) - Z(0)] - X(t) + X(0)
Also, we can group a lot of the constants together and call them C, to give
X(t) = -A/b*Z(t) + C
where
C = A/b*Z(0) + X(0) - L
So I'm not sure what else to do with this, but I've shown that the integral of x(t) is linearly related to Z(t) = R(t) + a*Y(t). It seems to me that there are many functions that solve this. Anyone else see where to go from here? Any problems with my math?
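The endpoint identity can be sanity-checked numerically. The Python sketch below (constants, the choice of a constant x, and initial conditions are all my own arbitrary picks) integrates the ODE system with Euler steps and compares the integral of (A*R - x) against -A/b*[Z(T) - Z(0)] - Int[x dt], with Z = R + a*Y (the factor a on Y comes from the dR/dt equation):

```python
# forward-Euler integration of dR/dt = a*x*R*Y - b*R, dY/dt = -x*R*Y
A, a, b, x = 2.0, 1.0, 1.0, 0.5         # arbitrary constants; x held constant in t
R, Y = 1.0, 1.0                         # arbitrary initial conditions
T, n = 1.0, 10000
h = T / n

lhs = 0.0                               # left Riemann sum of (A*R - x) dt
Z0 = R + a * Y                          # Z = R + a*Y, so dZ/dt = -b*R
for _ in range(n):
    lhs += (A * R - x) * h
    dR = a * x * R * Y - b * R
    dY = -x * R * Y
    R += h * dR
    Y += h * dY
rhs = -A / b * ((R + a * Y) - Z0) - x * T   # endpoint expression; Int[x dt] = x*T
```

Because d(R + a*Y)/dt = -b*R holds term by term, the Euler trajectory satisfies the identity exactly, not just approximately.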