Basically I am trying to solve the following definite integral in Maple, from theta = 0 to theta0 = 45°. I need an actual numerical value, but first I have to evaluate the integral, and I don't know how to ask Maple to handle an integral that contains two different symbols (theta and theta0). All I am trying to do is find the period of oscillation of a pendulum, but I have been instructed to use only this method and equation.
From the equation d²θ/dt² = -(g/L) sin(θ) we find:
P = 4 sqrt(L/(2g)) ∫ (0 to θ0) dθ/sqrt(cos(θ) - cos(θ0))
L= 1
g= 9.8
To simplify the value before the integral I did the following:
>L:=1;
>g:=9.8;
>evalf(4*sqrt(L/(2*g)));
>M:=%;
So the integral to solve simplifies to:
P = M ∫ (0 to θ0) dθ/sqrt(cos(θ) - cos(θ0))
When I try to evaluate the integral by itself I get the error:
"Error, index must evaluate to a name when indexing a module".
I am trying to figure out how Maple wants me to enter in the integral so it will solve it.
I have tried the following as well as similar combinations of variables:
int(1/sqrt[cos(t)-cos(45)],t=0..45);
I can't work out how to make Maple solve the definite integral for me, given that the denominator contains cos(theta) - cos(theta0) instead of just one variable. When I try different forms of the integral I also get the following error:
Error, index must evaluate to a name when indexing a module
I must be overlooking something considerable to continue getting this error. Thanks in advance for any help or direction! :)
As acer noted in his comment, Maple syntax doesn't use square brackets for function calls. The proper syntax for your task is:
int(1/sqrt(cos(t)-cos(Pi/4)),t=0..Pi/4);
Notice that Maple works in radians, so I replaced your 45 with Pi/4.
If you need a numerical value you can use evalf:
evalf(int(1/sqrt(cos(t)-cos(Pi/4)),t=0..Pi/4));
Maple's answer is 2.310196615.
If you need to evaluate with a generic variable theta0, you can define a function as:
myint:=theta0->int(1/sqrt(cos(t)-cos(theta0)),t=0..theta0);
Then just call it as, e.g.,
myint(Pi/4);
and for a numerical evaluation:
evalf(myint(Pi/4));
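As an independent cross-check (my own sketch in Python/SciPy, not part of the original Maple workflow), the integral can be evaluated numerically with quad, which copes with the integrable 1/sqrt singularity at theta0, and compared against the closed form sqrt(2)*K(sin²(theta0/2)) in terms of the complete elliptic integral; both agree with the Maple value above to about four decimal places:

```python
import numpy as np
from scipy import integrate, special

theta0 = np.pi / 4
# integrable 1/sqrt singularity at t = theta0; quad's extrapolation handles it
integrand = lambda t: 1.0 / np.sqrt(np.cos(t) - np.cos(theta0))
val, abserr = integrate.quad(integrand, 0, theta0)

# closed form: sqrt(2) * K(m) with m = sin(theta0/2)^2 (scipy's ellipk takes m)
closed = np.sqrt(2) * special.ellipk(np.sin(theta0 / 2) ** 2)

M = 4 * np.sqrt(1 / (2 * 9.8))  # the prefactor computed in Maple above
P = M * val                     # period for L = 1, g = 9.8 (roughly 2.087 s)
```

The same substitution sin(θ/2) = sin(θ0/2)·sin(φ) that yields the elliptic-integral form is also how the textbook pendulum-period formula P = 4 sqrt(L/g) K(sin²(θ0/2)) is derived.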
While doing certain computations involving the Rogers L-function, the following result was generated by Wolfram Alpha:
I wanted to verify this result in Pari/GP by means of the lindep function, so I calculated the integral to 20 digits in WA, yielding:
11.3879638800312828875
Then, I used the following code in Pari/GP:
lindep([zeta(2), zeta(3), 11.3879638800312828875])
As pi^2 = 6*zeta(2), one would expect the output to be a vector along the lines of:
[12,12,-3]
because that's the linear dependency suggested by WA's result. However, I got a very elaborate vector from Pari/GP:
[35237276454, -996904369, -4984618961]
I think the first vector should be the "right" output of the Pari code sample.
Questions:
Why is the lindep function in Pari/GP not yielding the output one would expect in this case?
What can I do to make it give the vector that would be more appropriate in this situation?
It comes down to Pari treating your rounded values as exact. Since you must round your values, lindep can find a relation that fits the rounded input better than the true relation does, so its output need not match the expected answer.
You can try changing the accuracy of lindep using the second argument. The manual states that you should choose this to be smaller than the number of correct decimal digits. I believe this should solve the issue.
lindep(v, {flag = 0}) finds a small nontrivial integral linear
combination between components of v. If none can be found return an
empty vector.
If v is a vector with real/complex entries we use a floating point
(variable precision) LLL algorithm. If flag = 0 the accuracy is chosen
internally using a crude heuristic. If flag > 0 the computation is
done with an accuracy of flag decimal digits. To get meaningful
results in the latter case, the parameter flag should be smaller than
the number of correct decimal digits in the input.
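For comparison (a sketch of mine in Python, not part of the original answer), mpmath's pslq performs the same integer-relation search and has the same requirement that the working precision and tolerance match the number of correct input digits; with the 20-digit value it recovers the primitive form of the expected relation, 4·zeta(2) + 4·zeta(3) - x = 0 (i.e. [12, 12, -3] divided by 3):

```python
from mpmath import mp, mpf, zeta, pslq

mp.dps = 20                       # match the 20 correct digits of the input
x = mpf('11.3879638800312828875')
# keep tol well above the input's rounding error, as the lindep manual advises
rel = pslq([zeta(2), zeta(3), x], tol=mpf(10) ** -15)
# rel is an integer vector [a, b, c] with a*zeta(2) + b*zeta(3) + c*x ~ 0
```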
I've got code that works with the data set, but I found that it doesn't work with the ln(x) function. The data set can be found here.
LY <- ln(Apple$Close - Apple$Open)
Warning in log(x) : NaNs produced
Could you please help me to fix this problem?
Since stocks can go down as well as up (unfortunately), Close can be less than Open and Close - Open can be negative. It just doesn't make sense to take the natural log of a negative number; it's like dividing by zero, or more precisely like taking the square root of a negative number.
Actually, you can take the logarithm of a complex number with a negative real part:
log(as.complex(-1))
## [1] 0+3.141593i
... but "i times pi" is probably not a very useful result for further data analysis ...
(in R, log() takes the natural logarithm. While the SciViews package provides ln() as a synonym, you might as well just get used to using log() - this is a convention across most programming languages ...)
Depending on what you're trying to do, the logarithm of the close/open ratio can be a useful value (log(Close/Open)): it is negative when Close < Open and positive when Close > Open. As @jpiversen points out, this is called the logarithmic return; as @KarelZe points out, log(Close/Open) is mathematically equivalent to log(Close) - log(Open) (which might be what your professor wanted ... ???)
Are you looking for logarithmic return? In that case the formula would be:
log(Apple$Close / Apple$Open)
Since A / B for two positive values is always positive, this will not create NaNs.
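A quick numeric illustration (my own sketch in Python/NumPy rather than R, with made-up prices) of why this works: log(Close/Open) equals log(Close) - log(Open), is negative on down days and positive on up days, but never NaN for positive prices:

```python
import numpy as np

open_ = np.array([100.0, 50.0, 20.0])
close = np.array([101.0, 49.0, 20.0])   # up day, down day, flat day

log_ret = np.log(close / open_)          # the logarithmic return

# identity: log(a/b) == log(a) - log(b) for positive a, b
assert np.allclose(log_ret, np.log(close) - np.log(open_))
```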
This is probably straightforward and elementary, but I can't manage to get it. I have two Nx1 vectors u and w, which contain both negative and positive values. I am trying to compute w'u u'w, which should be a quadratic form. I should be able to write this as
t(w)%*%u%*%t(u)%*%w
However sometimes I get a negative value, depending on the values in the two vectors. This is not possible, since that thing is a quadratic form. I tried with
crossprod(w, u)%*%crossprod(u, w)
and
crossprod(w, u)*crossprod(u, w)
which gives positive and equal results. However, since I am dealing with Nx1 vectors, I should also be able to write it as
`sum(w*u)^2`
which gives a positive value but different from the ones above.
So I guess I am doing something wrong somewhere. The question is: how can I express w'u u'w in a way that is valid for both vectors and matrices?
EDIT: here is a csv file with the original vectors to reproduce exactly the same issue.
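To see why all correct formulations must agree and be nonnegative, note that w'u u'w = (w·u)², the square of a scalar. A sketch in Python/NumPy (my own illustration with random data, not the posted csv):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(10)   # Nx1 vectors with mixed signs
w = rng.standard_normal(10)

q1 = (w @ u) * (u @ w)        # w'u u'w as a product of two scalar dot products
q2 = np.sum(w * u) ** 2       # the same quantity written as (w . u)^2

assert np.isclose(q1, q2)
assert q1 >= 0                # quadratic form in the rank-1 PSD matrix u u'
```

If the two forms disagree on the real data, the usual culprit is that the vectors being multiplied are not the same objects (different lengths, NAs, or a stale variable), not the algebra.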
I was trying to learn Scipy, using it for mixed integrations and differentiations, but at the very initial step I encountered the following problems.
For numerical differentiation, it seems that the only SciPy function that works for callable functions is scipy.derivative(), if I'm right!? However, I couldn't get it to work:
1st) When I don't specify the point at which the differentiation is to be taken, e.g. when the differentiation is under an integral, so that it is the integral that should assign the numerical values to its integrand's variable, not me. As a simple example I tried this code in Sage's notebook:
import scipy as sp
from scipy import integrate, derivative
var('y')
f=lambda x: 10^10*sin(x)
g=lambda x,y: f(x+y^2)
I=integrate.quad( sp.derivative(f(y),y, dx=0.00001, n=1, order=7) , 0, pi)[0]; show(I)
show( integral(diff(f(y),y),y,0,1).n() )
It also gives the warning "Warning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated." and I don't know what this warning stands for, as it persists even when increasing dx and decreasing the order.
2nd) When I want to find the derivative of a multivariable function like g(x,y) in the above example, something like sp.derivative(g(x,y),(x,0.5), dx=0.01, n=1, order=3) gives an error, as is easily expected.
Looking forward to hearing from you about how to resolve the above cited problems with numerical differentiation.
Best Regards
There are some strange problems with your code that suggest you need to brush up on some Python! I don't know how you even made these definitions in Python, since they are not legal syntax.
First, I think you are using an older version of scipy. In recent versions (at least from 0.12+) you need from scipy.misc import derivative. derivative is not in the scipy global namespace.
Second, var is not defined, although it is not necessary anyway (I think you meant to import sympy first and use sympy.var('y')). sin has also not been imported from math (or numpy, if you prefer). show is not a valid function in sympy or scipy.
^ is not the power operator in python. You meant **
You seem to be mixing up the idea of symbolic and numeric calculus operations here. scipy won't numerically differentiate an expression involving a symbolic object -- the second argument to derivative is supposed to be the point at which you wish to take the derivative (i.e. a number). As you say you are trying to do numeric differentiation, I'll resolve the issue for that purpose.
from scipy import integrate
from scipy.misc import derivative
from math import *
f = lambda x: 10**10*sin(x)
df = lambda x: derivative(f, x, dx=0.00001, n=1, order=7)
I = integrate.quad( df, 0, pi)[0]
Now, this last expression generates the warning you mentioned, and the value returned is not very close to zero at -0.0731642869874073 in absolute terms, although that's not bad relative to the scale of f. You have to appreciate the issues of roundoff error in finite differencing. Your function f varies on your interval between 0 and 10^10! It probably seems paradoxical, but making the dx value for differentiation too small can actually magnify roundoff error and cause numerical instability. See the second graph here ("Example showing the difficulty of choosing h due to both rounding error and formula error") for an explanation: http://en.wikipedia.org/wiki/Numerical_differentiation
In fact, in this case, you need to increase it, say to 0.001: df = lambda x: derivative(f, x, dx=0.001, n=1, order=7)
Then, you can integrate safely, with no terrible roundoff.
I=integrate.quad( df, 0, pi)[0]
I don't recommend throwing away the second return value from quad. It's an important verification of what happened, as it is "an estimate of the absolute error in the result". In this case, I == 0.0012846582250212652 and the abs error is ~ 0.00022, which is not bad (the interval that implies still does not include zero). Maybe some more fiddling with the dx and absolute tolerances for quad will get you an even better solution, but hopefully you get the idea.
For your second problem, you simply need to create a proper scalar function (call it gx) that represents g(x,y) along y=0.5 (this is partial application, closely related to currying in computer science).
g = lambda x, y: f(x+y**2)
gx = lambda x: g(x, 0.5)
derivative(gx, 0.2, dx=0.01, n=1, order=3)
gives you a value of the derivative at x=0.2. Naturally, the value is huge given the scale of f. You can integrate using quad like I showed you above.
If you want to be able to differentiate g itself, you need a different numerical differentiation function. I don't think scipy or numpy support this, although you could hack together a central difference calculation by making a 2D fine mesh (size dx) and using numpy.gradient. There are probably other library solutions that I'm not aware of, but I know my PyDSTool software contains a function diff that will do that (if you rewrite g to take one array argument instead). It uses Ridder's method and is inspired by the Numerical Recipes pseudocode.
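The numpy.gradient idea can be sketched as follows (my own illustration, not PyDSTool's diff): build a fine mesh, evaluate g on it, take central differences along each axis, and compare against the analytic partials of g(x,y) = 1e10·sin(x + y²):

```python
import numpy as np

f = lambda x: 1e10 * np.sin(x)
g = lambda x, y: f(x + y ** 2)

xs = np.linspace(0.0, 1.0, 1001)   # mesh spacing dx = 0.001
ys = np.linspace(0.0, 1.0, 1001)
X, Y = np.meshgrid(xs, ys, indexing='ij')  # axis 0 is x, axis 1 is y

# central differences of g on the mesh, one array per direction
dgdx, dgdy = np.gradient(g(X, Y), xs, ys)
```

Interior points are second-order accurate (truncation error on the order of g'''·dx²/6), so with dx = 0.001 the numerical partials match the analytic ones to a few parts in 10⁶ relative to the 10¹⁰ scale of f.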
Is there a way to calculate the determinant of a complex matrix?
F4<-matrix(c(1,1,1,1,1,1i,-1,-1i,1,-1,1,-1,1,-1i,-1,1i),nrow=4)
det(F4)
Error in determinant.matrix(x, logarithm = TRUE, ...) :
determinant not currently defined for complex matrices
library(Matrix)
determinant(Matrix(F4))
Error in Matrix(F4) :
complex matrices not yet implemented in Matrix package
Error in determinant(Matrix(F4)) :
error in evaluating the argument 'x' in selecting a method for function 'determinant'
If you use prod(eigen(F4)$values), I'd recommend
prod(eigen(F4, only.values=TRUE)$values)
instead, since the eigenvectors are not needed.
Note that qr() is advocated iff you are only interested in the absolute value, or rather Mod():
prod(abs(Re(diag(qr(x)$qr))))
gives the Mod(determinant(x))
{In X = QR, |det(Q)|=1 and the diagonal of R is real (in R at least).}
BTW: Did you note the caveat "Often, computing the determinant is not what you should be doing to solve a given problem." on the help(determinant) page?
If you know that the characteristic polynomial of a matrix A splits into linear factors, then det(A) is the product of the eigenvalues of A, and you can use eigenvalue functions like this to work around your problem. I suspect you'll still want something better, but this might be a start.
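As a cross-check from outside R (a sketch of mine in Python/NumPy, which does support complex determinants directly), both the direct determinant and the product-of-eigenvalues route give det(F4) = -16i for this DFT-style matrix:

```python
import numpy as np

# the same 4x4 matrix as F4 above: rows are (1, x, x^2, x^3) for x = 1, i, -1, -i
F4 = np.array([[1,   1,  1,   1],
               [1,  1j, -1, -1j],
               [1,  -1,  1,  -1],
               [1, -1j, -1,  1j]])

d1 = np.linalg.det(F4)               # complex det works out of the box in NumPy
d2 = np.prod(np.linalg.eigvals(F4))  # product-of-eigenvalues route
```

The value -16i also follows by hand from the Vandermonde formula, since F4 is the Vandermonde matrix of the fourth roots of unity; in particular |det(F4)| = 4² = 16, the generic n^(n/2) for an unnormalized n-point DFT matrix.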