I want to integrate a 3D function along x, y and z. For example
f(x,y,z) = (x^2+y^2+z^2)^(3/2) * exp(x+y+z) * some extra terms....
where 0 < x < inf, 0 < y < inf, 0 < z < inf.
I will be thankful if someone can help me here.
Take care: Scilab performs a numerical approximation of the integrals, with methods that more or less consist of splitting the domain into small intervals, so it cannot integrate all the way up to infinity.
You can try symbolic computation tools like Maple, Mathematica, or Macsyma.
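If a symbolic tool is not an option, numerical integration is still feasible when the integrand decays at infinity. Here is a minimal sketch in R (base R's integrate() accepts infinite limits); note that the growing exp(x+y+z) factor is replaced by a decaying exp(-(x+y+z)) purely so the example converges:
# assumed integrand: the original exp(x+y+z) would diverge on [0, Inf)^3,
# so a decaying exponential is substituted for the sake of the example
f <- function(x, y, z) (x^2 + y^2 + z^2)^(3/2) * exp(-(x + y + z))
inner <- function(y, z) sapply(y, function(yy)
    integrate(function(x) f(x, yy, z), 0, Inf)$value)
middle <- function(z) sapply(z, function(zz)
    integrate(function(y) inner(y, zz), 0, Inf)$value)
integrate(middle, 0, Inf)$value   # iterated 1-D quadratures give the triple integral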
I am trying to solve a 5x5 Cholesky decomposition (for a variance-covariance matrix) all in terms of unknowns (no constants).
A simplified version, for the sake of giving an example, would be a 2x2 decomposition:
[[a,0],[b,c]]*[[a,b],[0,c]]=[[U1,U2],[U2,U3]]
Is there software (I'm proficient in R, so if R can do it that would be great) that could solve the above to yield an answer for the left-hand variables in terms of the right-hand variables? I.e., this would be the final answer:
a = sqrt(U1)
b = U2/sqrt(U1)
c = sqrt(U3 - U2^2/U1)
Take a look at this Wikipedia section.
The symbolic definition of the (i,j)th entry of the decomposition is defined recursively in terms of the entries above and to the left. You could implement these recursions using Matlab's Symbolic Math Toolbox and then apply them (symbolically) to obtain your formulas for the 5x5 case. Be warned that you'll probably end up with extremely complicated formulas for some of the unknowns; except in unusual circumstances, it will be fine to implement the decomposition iteratively, even for a fixed-size 5x5 matrix.
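Here is a minimal sketch of those recursions as plain R code. It runs numerically as written; to obtain your closed-form answers, transcribe the same loops into a symbolic system (Matlab's Symbolic Math Toolbox, yacas, etc.):
chol_lower <- function(U) {
    # Cholesky-Crout recursions: find lower-triangular L with L %*% t(L) == U
    n <- nrow(U)
    L <- matrix(0, n, n)
    for (j in 1:n) {
        for (i in j:n) {
            s <- if (j > 1) sum(L[i, 1:(j - 1)] * L[j, 1:(j - 1)]) else 0
            if (i == j) L[i, j] <- sqrt(U[j, j] - s)        # diagonal entry
            else        L[i, j] <- (U[i, j] - s) / L[j, j]  # below-diagonal entry
        }
    }
    L
}
For the 2x2 example these loops reproduce a = sqrt(U1), b = U2/sqrt(U1), and c = sqrt(U3 - U2^2/U1).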
I want to know if there is any fast way to draw the graph of a "non-function" curve. For example
x^2 + 3x = y^3 - 4y + 1
I know that for a normal function, like y = x^2, we can iterate over x, calculate y, and then draw the points. But for a non-function curve it takes a lot of time to iterate over x and then solve the equation for y (using Newton's method or the like). So please suggest the correct way to draw them.
Thanks & Regards.
I am afraid there is no "generic" way except for the method you describe yourself: iterate over one variable and solve for the other.
Complications
Note that you have to be careful to find all solutions, not just a solution. This is a major stumbling block in creating a working general algorithm.
Another stumbling block is the singular points: writing the curve as f(x) = g(y), when f'(x) = 0 you will want to solve for y and, vice versa, when g'(y) = 0 you will want to solve for x. What if both are 0 at the same time? You will need to do some paper-and-pencil analysis.
Special Cases
There are some problem-specific simplifications though.
In your specific case the equation is quadratic in x, so the well-known, simple closed formula applies. This means that iterating over y and solving for x is easier; see the sketch below. (The equation is cubic in y, so a less well-known and much more complicated formula exists too.)
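A minimal sketch of that approach in R, iterating over a y grid and keeping both quadratic roots for x (the grid range and resolution are arbitrary choices):
y <- seq(-4, 4, length.out = 2000)
disc <- 9 + 4 * (y^3 - 4 * y + 1)   # discriminant of x^2 + 3x - (y^3 - 4y + 1) = 0
keep <- disc >= 0                   # y values with real solutions for x
x1 <- (-3 + sqrt(disc[keep])) / 2   # first branch
x2 <- (-3 - sqrt(disc[keep])) / 2   # second branch
plot(x1, y[keep], pch = ".", xlim = range(c(x1, x2)), xlab = "x", ylab = "y")
points(x2, y[keep], pch = ".")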
Another way is to find a parametric representation of your curve (e.g., x^2+y^2=1 is equivalent to x=cos(t); y=sin(t); 0<=t<2*pi).
I'm having a hard time understanding why it would be useful to use the Taylor series of a function to obtain an approximation, instead of just using the function itself when programming. If I can tell my computer to compute e^(0.1) and it will give me the exact value, why would I take an approximation instead?
Taylor series are generally not used to approximate functions. Usually, some form of minimax polynomial is used.
Taylor series converge slowly (it takes many terms to get the accuracy desired) and are inefficient (they are more accurate near the point around which they are centered and less accurate away from it). The largest use of Taylor series is likely in mathematics classes and papers, where they are useful for examining the properties of functions and for learning about calculus.
To approximate functions, minimax polynomials are often used. A minimax polynomial has the minimum possible maximum error for a particular situation (interval over which a function is to be approximated, degree available for the polynomial). There is usually no analytical solution to finding a minimax polynomial. They are found numerically, using the Remez algorithm. Minimax polynomials can be tailored to suit particular needs, such as minimizing relative error or absolute error, approximating a function over a particular interval, and so on. Minimax polynomials need fewer terms than Taylor series to get acceptable results, and they “spread” the error over the interval instead of being better in the center and worse at the ends.
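To illustrate, here is a small R sketch comparing a degree-4 Taylor polynomial for exp on [-1, 1] with a degree-4 polynomial fitted at Chebyshev nodes, used here as a cheap stand-in for a true Remez minimax fit (the node count and degree are arbitrary choices):
taylor <- function(x) 1 + x + x^2/2 + x^3/6 + x^4/24   # Taylor series around 0
nodes <- cos((2 * (1:40) - 1) / 80 * pi)               # Chebyshev nodes in [-1, 1]
fit <- lm(exp(nodes) ~ poly(nodes, 4, raw = TRUE))     # near-minimax polynomial fit
cheb <- function(x) drop(cbind(1, x, x^2, x^3, x^4) %*% coef(fit))
x <- seq(-1, 1, length.out = 1000)
max(abs(exp(x) - taylor(x)))   # larger error, concentrated at the endpoints
max(abs(exp(x) - cheb(x)))     # smaller error, spread across the interval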
When you call the exp function to compute e^x, you are likely using a minimax polynomial, because somebody has done the work for you and constructed a library routine that evaluates the polynomial. For the most part, the only arithmetic computer processors can do is addition, subtraction, multiplication, and division. So other functions have to be constructed from those operations. The first three give you polynomials, and polynomials are sufficient to approximate many functions, such as sine, cosine, logarithm, and exponentiation (with some additional operations of moving things into and out of the exponent field of floating-point values). Division adds rational functions, which is useful for functions like arctangent.
For two reasons. First and foremost, most processors do not have hardware implementations of complex operations like exponentials, logarithms, etc. In such cases the programming language may provide a library function for computing those - in other words, someone used a Taylor series or other approximation for you.
Second, you may have a function that not even the language supports.
I recently wanted to use lookup tables with interpolation to get an angle and then compute the sin() and cos() of that angle. The trouble is that it's a DSP with no floating point and no trigonometric functions, so those two functions are really slow (software implementations). So I stored sin(x) in the table rather than x, and then used the Taylor series for y = sqrt(1 - x*x) to compute cos(x) from it. This Taylor series is accurate over the range I needed with only 5 terms (the denominators are all powers of two!), can be implemented in fixed point using plain C, and generates code that is faster than any other approach I could think of.
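For reference, a sketch of that 5-term series in R (the exact range the author needed is not stated; the error grows as sin(x) approaches 1):
cos_from_sin <- function(s) {             # s = sin(x); approximates cos(x)
    u <- s * s
    1 - u/2 - u^2/8 - u^3/16 - 5*u^4/128  # 5 terms, power-of-two denominators
}
x <- seq(0, pi/4, length.out = 100)
max(abs(cos(x) - cos_from_sin(sin(x))))   # about 1.4e-3, worst at x = pi/4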
I am looking to calculate the indefinite integral of an equation.
I have data from an accelerometer fed into R through a visual C program, and from there it was simple enough to come up with an equation to represent the acceleration curve. That is all well and good; however, I need to calculate the impact velocity as well. From my understanding from the good ol' high school days, the indefinite integral of my acceleration curve will yield the equation for the velocity.
I know it is easy enough to perform numerical integration with the integrate() function, but is there anything comparable for an indefinite integral?
library(Ryacas)
x <- Sym("x")
Integrate(sin(x), x)
gives
expression(-cos(x))
An alternative way:
yacas("Integrate(x)Sin(x)")
You can find the function reference here
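Applied to the original problem: if the fitted acceleration curve were, say, the polynomial a(t) = 3t^2 + 2t (a made-up example), the same call gives the velocity up to an integration constant:
t <- Sym("t")
Integrate(3 * t^2 + 2 * t, t)   # should give something like expression(t^3 + t^2)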
If the NA's you mention are informative in the sense of indicating no acceleration input, then they should be replaced by zeros. Let's assume you have the data in acc.vec and the device recorded at a rate of recs_per_sec:
acc.vec[is.na(acc.vec)] <- 0
vel.vec <- cumsum(acc.vec)/recs_per_sec
I do not think constructing a best fit curve is going to improve your accuracy in this instance. To plot velocity versus time:
plot(1:length(acc.vec)/recs_per_sec, vel.vec,
xlab="Seconds", ylab="Integrated Acceleration = Velocity")
As Ben said, try the Ryacas package for calculating the antiderivative of a function. But you probably should ask yourself whether you really want to generate a continuous function which only approximates your data in the first place (fitting errors). I'd stick with numerical integration of your actual data. Keep in mind the uncertainty in each data point, of course.
Suppose I have a system of springs - not one, but for example a 3-degree-of-freedom system of springs connected to each other in some way. I can write a system of differential equations for it, but it is impossible to solve in a general way. The question is: are there any papers or methods for filtering such complex oscillations, in order to get rid of the oscillations and recover the real signal as much as possible? For example, if I connect 3 springs in some way and push them to start vibrating, or put some weight on them, and then record the vibrations of each spring, are there any filtering methods that make it easy to determine the weight placed on each mass? I am interested in filtering complex spring-like systems.
Three springs, six degrees of freedom? This is straightforward to solve with finite element methods and numerical integration: it is a system of six coupled ODEs, and you can apply any standard numerical integrator, such as 5th-order Runge-Kutta.
I'd recommend doing an eigenvalue analysis of the system first to find out something about its frequency characteristics and normal modes. I'd also do an FFT of the dynamic forces you apply to the system. You don't mention any damping, so if you happen to excite your system at a natural frequency that's close to a resonance you might have some interesting behavior.
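As a sketch of that eigenvalue analysis in R, for a hypothetical chain of three masses between fixed walls (all masses and stiffnesses here are made-up values):
m <- c(1, 1, 1)          # assumed masses
k <- c(10, 10, 10, 10)   # assumed stiffnesses of the four connecting springs
M <- diag(m)
K <- matrix(c(k[1] + k[2], -k[2],        0,
              -k[2],       k[2] + k[3], -k[3],
              0,           -k[3],       k[3] + k[4]), 3, 3, byrow = TRUE)
eig <- eigen(solve(M) %*% K)   # eigenvalues of M^-1 K are the squared natural frequencies
sqrt(eig$values) / (2 * pi)    # natural frequencies in Hz (assuming SI units); eig$vectors are the mode shapes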
The dynamic equation has this general form (sorry, I don't have LaTeX here to make it look nice):
Ma + Kx = F
where M is the mass matrix (diagonal), a is the acceleration (2nd derivative of the displacements w.r.t. time), K is the stiffness matrix, and F is the forcing function.
If you're saying you know the response, you'll have to pre-multiply by the transpose of the response function and try to solve for M. It's diagonal, so you have a shot at it.
Are you connecting the springs in such a way that the behavior of the system is approximately linear? (e.g. at least as close to linear as musical instrument springs/strings?) Is this behavior consistent over time? (e.g. the springs don't melt or break.) If so, LTI (linear time invariant) systems theory might be applicable. Given enough measurements versus the number of degrees of freedom in the LTI system, one might be able to estimate a pole-zero plot of the system response, and go from there. Or something like a linear predictor might be useful; see the sketch below.
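A sketch of the linear-predictor idea in R, assuming a vector signal holds equally spaced measurements from one sensor (the AR order is an arbitrary choice):
fit <- ar(signal, order.max = 12)   # fit an autoregressive (linear predictor) model
roots <- polyroot(c(1, -fit$ar))    # roots of 1 - a1*z - a2*z^2 - ...
poles <- 1 / roots                  # system poles in the z-plane
Mod(poles)                          # pole magnitudes indicate damping; Arg(poles) relates to frequencies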
Actually it is possible to solve the resulting system of differential equations as long as you know the masses, etc.
The standard approach is to use a Laplace Transform. In particular you start with a set of linear differential equations. Add variables until you have a set of first order linear differential equations. (So if you have y'' in your equation, you'd add the equation z = y' and replace y'' with z'.) Rewrite this in the form:
v' = Av + w
where v is a vector of variables, A is a matrix, and w is a constant vector. (An example of something that winds up in w is gravity.)
Now apply a Laplace transform (the transform of the constant vector w is w/s) to get
s L(v) - v(0) = A L(v) + w/s
Solve it to get
L(v) = inv(s I - A)(v(0) + w/s)
where inv inverts a matrix and I is the identity matrix. Apply the inverse Laplace transform (if you read up on Laplace transforms you can find tables of inverses of common types of functions - getting a complete list of the functions you actually encounter shouldn't be that hard), and you have your solution. (Be warned, these computations quickly get very complex.)
Now you have the ability to take a particular setup and solve for the future behavior. You also have the ability to (if you do things really carefully) figure out how the model responds to a small perturbation in parameters. But your problem is that you don't know the parameters to use. However you do have the ability to measure the positions in the system at repeated times.
If you put this together, what you can do is this. Measure your positions at a number of points in time. First make an estimate of all of the parameters and initial values, then solve for the predicted values a second later, and adjust your parameters (using Newton's method) until the prediction comes close enough to the measured values a second later. Take the measurements from 5 seconds later and use that estimate as your starting point to refine your calculations for what is happening at 5 seconds. Repeat with longer intervals to get all of your answers.
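A hedged illustration of that fitting loop in R, using the deSolve package for the numerical integration; the single unknown stiffness k, the initial conditions, and the synthetic "measurements" are all made up for the example:
library(deSolve)
rhs <- function(t, v, p) {   # first-order form: x' = u, u' = -(k/m) x
    with(as.list(p), list(c(v[2], -(k / m) * v[1])))
}
obs_t <- seq(0, 5, by = 0.1)
obs_x <- ode(y = c(x = 1, u = 0), times = obs_t, func = rhs,
             parms = c(k = 4, m = 1))[, "x"]   # synthetic "measured" positions
loss <- function(k) {        # squared mismatch between model and measurements
    sim <- ode(y = c(x = 1, u = 0), times = obs_t, func = rhs,
               parms = c(k = k, m = 1))[, "x"]
    sum((sim - obs_x)^2)
}
optimize(loss, c(0.1, 20))$minimum   # recovers k close to the true value 4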
Writing and debugging this should take you some time. :-) I would strongly recommend investigating how much of this Mathematica knows how to do for you already...