Could anybody please tell me whether I can perform this integration with an FFT in MATLAB, and how?
Suppose there exist two rectangular planes: an input plane accessed by the variables x1 and y1, and a resulting output plane accessed by the variables tetax and tetay.
This is the integral in pseudo-code:
output(tetax, tetay) = double integral of [ input(x1, y1) * exp(-j*k*(tetax*x1 + tetay*y1)) ] dx1 dy1
where -1 <= x1 <= 1 and -1 <= y1 <= 1.
tetax and tetay should change so they can span the final rectangular plane.
I would really appreciate a prompt and detailed answer.
Since this looks like homework, I'll just give some hints. The trick is to rewrite the integral to look like a normal 2D Fourier integral of a function.
There are two issues:
1) You need to combine k and your tetax, tetay to look like a normal wavenumber (and compensate for this in the appropriate way).
2) You need to deal with the limits being in the range (-1,1) whereas the Fourier integral needs them in the range (-inf, +inf). To do this, pick a function to go inside the Fourier integral that will make this work.
Then it will be obvious how to do this in Matlab. It's a cute problem and I hope this doesn't ruin it (and if people think it does, let me know and I'll delete this answer, or delete it for me if you can).
Your problem looks like a Fourier transform, not a discrete Fourier transform (DFT); an FFT calculates the latter type of transform.
Briefly, a Fourier transform involves an integral, while a DFT involves a sum.
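Putting the two answers together: if you approximate the integral by a Riemann sum on a regular grid over [-1, 1] x [-1, 1], the sum becomes exactly a 2D DFT, which fft2 evaluates, plus a scale factor and a phase correction for the grid starting at -1 rather than 0. Here is a minimal numpy sketch of the idea (the MATLAB translation via fft2/fftshift is direct; the grid size, the value of k, and the placeholder input are my own assumptions):

import numpy as np

N = 512                              # samples per axis (assumed)
k = 2 * np.pi                        # wavenumber; substitute your own value
dx = 2.0 / N                         # grid spacing on [-1, 1)
x1 = -1.0 + dx * np.arange(N)
X1, Y1 = np.meshgrid(x1, x1, indexing='ij')

f = np.ones_like(X1)                 # placeholder for input(x1, y1)

# Matching the Riemann-sum exponent to the DFT exponent forces
# tetax = 2*pi*p / (N*dx*k) for DFT bin p, plus a phase factor
# exp(j*k*tetax) per axis because the grid starts at x1 = -1.
F = np.fft.fft2(f) * dx * dx
tetax = 2.0 * np.pi * np.fft.fftfreq(N, d=dx) / k
phase = np.exp(1j * k * (tetax[:, None] + tetax[None, :]))
output = np.fft.fftshift(F * phase)  # output(tetax, tetay) on the DFT's grid
tetax = np.fft.fftshift(tetax)       # the tetax (and tetay) values it lands on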
I have been working for a couple of months with OpenMDAO, and I find myself struggling with my code when I want to impose conditions to replicate a physical/engineering behaviour.
I have tried using sigmoid functions, but I am still not convinced by them, because of the difficulty of trading off sensitivity against numerical stability. Most of the time I get overflows in exp, so I end up adding other conditionals (like np.where) and thus losing linearity.
outputs['sigmoid'] = 1 / (1 + np.exp(-x))
I was looking for another kind of step function, or something like it, that keeps linearity and differentiability to ease the optimization. I don't know if something like that exists or if there is any strategy that can help me. If it helps, I am working with an OpenConcept benchmark, which uses vectorized computations and Simpson's-rule numerical integration.
Thank you very much.
PS: This is my first ever question on Stack Overflow, so I would like to apologize in advance for any error or bad practice committed. I hope to eventually collaborate and become active in the community.
Update after Justin's answer:
I will take the opportunity to define my problem a little further and describe the strategy I tried. I am trying to monitor and control the thermodynamic conditions inside a tank. One task is to take action when the pressure P1 reaches a certain threshold P2. To define this:
eval= (inputs['P1'] - inputs['P2']) / (inputs['P1'] + inputs['P2'])
# P2 = threshold [Pa]
# P1 = calculated pressure [Pa]
k=100 #steepness control
outputs['sigmoid'] = (1 / (1 + np.exp(-eval * k)))
eval was defined to avoid overflows by normalizing the values, so that corrections are taken when the threshold is reached. In a very similar way, I defined a function to check whether there is still mass (so flow can continue between systems):
eval= inputs['mass']/inputs['max']
k=50
outputs['sigmoid'] = (1 / (1 + np.exp(-eval*k)))**3
max is also used to normalize the value, and the exponent is added so the output reaches zero before entering the negative domain.
(Plot omitted; sorry, it seems I cannot post images yet because of my reputation.)
It may be important to highlight that both mass and pressure are calculated from coupled ODE integration, in which these activation functions take part. I guess the solver by nature 'explores' a lot of possible values before arriving at the solution, often producing negative, infeasible values for mass and pressure and creating overflows. For that reason I sometimes include:
eval[np.where(eval > 1.5)] = 1.5
eval[np.where(eval < -1.5)] = -1.5
That is not a beautiful solution, but it is sometimes effective. I try to avoid it, since I suspect such hard bounds make the solver's and optimizer's work more difficult.
I could give you a more complete answer if you distilled your question down to a specific code example of the function you're wrestling with and its expected input range. If you provide that code sample, I'll update my answer.
Broadly, this is a common challenge when using gradient-based optimization. You want some kind of behavior like an if-condition to turn something on/off, and in many cases that's a fundamentally discontinuous function.
To work around that we often use sigmoid functions, but these do have some of the numerical challenges you pointed out. You could try a hyperbolic tangent as an alternative, though it may suffer the same kinds of problems.
I will give you two broad options:
Option 1
Sometimes it's OK (even if not ideal) to leave the purely discrete conditional in the code. Let's say you wanted to represent a simple piecewise function:
y = 2x;  x >= 0
y = 0;   x < 0
There is a sharp corner in that function right at 0. That corner is not differentiable, but the function is fine everywhere else. This is very much like the absolute value function in practice, though you might not draw the analogy looking at the piecewise definition of the function because the piecewise nature of abs is often hidden from you.
If you know (or at least can check after the fact) that your final answer will not lie right on or very near that C1 discontinuity, then it's probably fine to leave the code the way it is. Your derivatives will be well defined everywhere except right at 0, and you can simply pick the left or the right value at 0.
It's not strictly mathematically correct, but it works fine as long as you're not ending up stuck right there.
Option 2
Apply a smoothing function. This can be a sigmoid, or a simple polynomial. The exact nature of the smoothing function is highly specific to the kind of discontinuity you are trying to approximate.
In the case of the piecewise function above, you might be tempted to define that function as:
2x*sig(x)
That would give you roughly the correct behavior and would be differentiable everywhere, but Wolfram Alpha shows that it actually undershoots a little. That's probably undesirable, so you can increase the steepness of the sigmoid to mitigate it. This, however, is where you start to get underflow and overflow problems.
So to work around that, and to get a better-behaved function all around, you could instead define a three-part piecewise polynomial:
y = 2x;  x >= a
y = c0 + c1*x + c2*x**2;  -a <= x < a
y = 0;  x < -a
You can solve for the coefficients as a function of a by requiring value and slope continuity at x = -a and x = a (please double check my algebra before using this!):
c0 = a/2
c1 = 1
c2 = 1/(2a)
The nice thing about this approach is that it will never overshoot and go negative. You can also make a reasonably small and still get decent numerics; but if you try to make it too small, c2 will obviously blow up.
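A minimal numpy sketch of that fillet, using the coefficients above (the function name and the default a = 0.1 are mine; double check the algebra for your own case):

import numpy as np

def fillet(x, a=0.1):
    # C1-continuous blend of y = 0 (x < -a) into y = 2x (x > a)
    quad = a / 2.0 + x + x**2 / (2.0 * a)
    return np.where(x >= a, 2.0 * x, np.where(x <= -a, 0.0, quad))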
In general, I consider the sigmoid function a bit of a blunt instrument. It works fine in many cases, but if you try to make it approximate a step function too closely, it's a nightmare. If you want to represent physical processes, I find that polynomial fillet functions work more nicely.
It takes a little effort to derive such a polynomial, because you want it to be C1 continuous on both sides of the transition. So you have to construct the system of equations to solve for it as a function of the polynomial order and the specific relaxation you want (0.1 here).
My goto has generally been to consult the table of activation functions on wikipedia: https://en.wikipedia.org/wiki/Activation_function
I've had good luck with the sigmoid and the hyperbolic tangent, scaling them so that you can choose the lower and upper values as well as the location of the activation on the x-axis and its steepness.
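For instance, a scaled hyperbolic tangent along those lines (the function and parameter names here are my own illustration):

import numpy as np

def scaled_tanh(x, lower=0.0, upper=1.0, x0=0.0, steepness=1.0):
    # maps (-inf, inf) smoothly onto (lower, upper), centered at x0
    return lower + 0.5 * (upper - lower) * (1.0 + np.tanh(steepness * (x - x0)))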
Dymos uses a vectorization approach that I think is similar to OpenConcept's, and I've had success with numpy.where there as well, providing derivatives for each possible "branch" taken. It is true that you may have issues with derivative mismatches if you have an analysis point right on the transition, but I've often had success despite that. If the derivative at the transition becomes a hindrance, then implementing a sigmoid or ReLU is more appropriate.
If x is of a magnitude such that it can cause overflows, consider applying units or using scaling to put it within reasonable limits if you cannot bound it directly.
I'm looking for something that I guess is rather sophisticated and might not exist publicly, but hopefully it does.
I basically have a database with lots of items which all have values (y) that correspond to other values (x). Eg. one of these items might look like:
x | 1 | 2 | 3 | 4 | 5
y | 12 | 14 | 16 | 8 | 6
This is just a random example. Now, there are thousands of these items, each with its own set of x and y values. The spacing between one x and the next is not fixed and may differ for every item.
What I'm looking for is a library where I can plug in all these sets of Xs and Ys and have it return things like the most common item (sets of x and y that follow a comparable curve/progression), and the ability to check whether a certain set is at least x% comparable with another set.
By comparable I mean the slope of the curve if you were to draw a graph of the data. So, not the actual static values, but rather the detection of events, such as a sharp increase followed by a slow decrease, etc.
Due to my limited experience in mathematics, I'm not quite sure what the thing I'm looking for is called, and thus I have trouble explaining what I need. Hopefully I've given enough pointers for someone to point me in the right direction.
I'm mostly interested in a library for JavaScript, but if no such thing exists, any library would help; maybe I can try to port what I need.
About Markov Cluster(ing) again, of which I happen to be the author, and your application. You mention you are interested in trend similarity between objects. This is typically computed using Pearson correlation. If you use the mcl implementation from http://micans.org/mcl/, you'll also obtain the program 'mcxarray'. This can be used to compute Pearson correlations between e.g. rows in a table, so it might be useful to you. It is able to handle missing data: in a simplistic approach, it just computes correlations on those indices for which values are available in both rows. If you have further questions I am happy to answer them, with the caveat that I usually like to cc replies to the mcl mailing list so that they are archived and available for future reference.
What you're looking for is an implementation of Markov clustering. It is often used for finding groups of similar sequences. Porting it to JavaScript, well... if you're really serious about this analysis, drop JavaScript as soon as possible and move to R. JavaScript is not meant for this kind of calculation, and it is far too slow for it. R is a statistical package with a great deal already implemented. It is also designed specifically for very fast matrix calculations, and most of the language is vectorized (meaning you don't need for-loops to apply a function over a vector of values; it happens automatically).
For the Markov clustering, check http://www.micans.org/mcl/
An example of an implementation : http://www.orthomcl.org/cgi-bin/OrthoMclWeb.cgi
Now you also need to define a "distance" between your sets. As you are interested in the events and not the values, you could give every item an extra attribute: a vector of the differences y[i] - y[i-1] (in R: diff(y)). The distance between two items can then be calculated as the sum of squared differences between y1[i] and y2[i].
This allows you to construct a distance matrix of your items, on which you can run the mcl algorithm. Unless you work on Linux, you'll have to port that one.
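A minimal sketch of that difference-based distance (Python/numpy for brevity, since the thread already ranges over languages; this assumes two items sampled at the same x positions):

import numpy as np

def event_distance(y1, y2):
    # compare slopes (successive differences), not the raw values
    d1 = np.diff(np.asarray(y1, dtype=float))
    d2 = np.diff(np.asarray(y2, dtype=float))
    return float(np.sum((d1 - d2) ** 2))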
What you want to do is ANOVA, or ANalysis Of VAriance. If you run the numbers through an ANOVA test, it will give you information about the dataset that helps you compare one item to another. I was unable to locate a JavaScript library that performs ANOVA, but there are plenty of programs capable of it. Excel can perform ANOVA with a plugin, and R, a free stats package, can also perform it.
Hope this helps.
Something simple (assuming all the graphs have 5 points, and x = 1, 2, 3, 4, 5 always):
Take u1 = the first point of y, i.e. y1
Take u2 = y2 - y1
...
Take u5 = y5 - y4
Now consider the vector u as a point in 5-dimensional space. You can use simple clustering algorithms, like k-means.
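A sketch of that construction with an off-the-shelf k-means (scikit-learn here; the toy data and the cluster count are my own):

import numpy as np
from sklearn.cluster import KMeans

# one row per item: y values at x = 1..5
Y = np.array([[12, 14, 16, 8, 6],
              [11, 13, 15, 9, 5],
              [3, 6, 9, 12, 15]], dtype=float)
# u = [y1, y2-y1, y3-y2, y4-y3, y5-y4], as described above
U = np.column_stack([Y[:, 0], np.diff(Y, axis=1)])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(U)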
EDIT: You should not aim for something too complicated as long as you stay with JavaScript. If you want to go with Java, I can suggest something based on PCA (requiring singular value decomposition, which is too complicated to implement efficiently in JS).
Basically, it goes like this: take, as previously, a (possibly large) linear representation of the data, perhaps differences of components of x and of y, or absolute values. For instance, you could take
u = (x1, x2 - x1, ..., x5 - x4, y1, y2 - y1, ..., y5 - y4)
You compute the vector u for each sample. Call ui the vector u for the ith sample. Now, form the matrix
M_{ij} = dot product of ui and uj
and compute its SVD. Now, the N most significant singular values (i.e. those above some "similarity threshold") give you N clusters.
The corresponding columns of the matrix U in the SVD give you an orthonormal family B_k, k = 1..N. The squared ith component of B_k gives you the probability that the ith sample belongs to cluster K.
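A sketch of that procedure (the random feature matrix, the threshold, and the variable names are my own):

import numpy as np

rng = np.random.default_rng(0)
U_feat = rng.random((100, 9))       # one row per sample: the vector u above
M = U_feat @ U_feat.T               # M[i, j] = dot product of u_i and u_j
B, s, _ = np.linalg.svd(M)          # columns of B are the orthonormal B_k
N = int(np.sum(s > 0.1 * s[0]))     # singular values above a similarity threshold
membership = B[:, :N] ** 2          # squared ith component of B_k per cluster k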
If it is OK to use Java, you really should have a look at Weka. All of its features can be accessed via Java code. Maybe you'll find a Markov clustering there, but if not, they have a lot of other clustering algorithms, and it's really easy to use.
Someone somewhere has had to solve this problem. I can find many a great website explaining this problem and how to solve it. While I'm sure they are well written and make sense to math whizzes, that isn't me. And while I might understand in a vague sort of way, I do not understand how to turn that math into a function that I can use.
So I beg of you, if you have a function that can do this, in any language (sure, even Fortran, or heck, 6502 assembler), please help me out.
I'd prefer an analytical solution to an iterative one.
EDIT: Meant to specify that it's a cubic Bezier I'm trying to work with.
What you're asking for is the inverse of the arc length function. So, given a curve B, you want a function Linv(len) that returns a t between 0 and 1 such that the arc length of the curve between 0 and t is len.
If you had this function your problem is really easy to solve. Let B(0) be the first point. To find the next point, you'd simply compute B(Linv(w)) where w is the "equal arclength" that you refer to. To get the next point, just evaluate B(Linv(2*w)) and so on, until Linv(n*w) becomes greater than 1.
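A minimal numerical sketch of such an Linv (tabulate the cumulative chord length on a fine parameter grid and invert it by interpolation; the function names and the sample count are mine):

import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    # evaluate a cubic Bezier with 2D control points at parameter(s) t
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

def make_linv(p0, p1, p2, p3, samples=1000):
    # cumulative polyline length over a fine grid approximates arc length
    t = np.linspace(0.0, 1.0, samples)
    pts = cubic_bezier(p0, p1, p2, p3, t)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    return (lambda s: np.interp(s, arc, t)), arc[-1]   # Linv and total length

Then, after linv, total = make_linv(p0, p1, p2, p3), the equally spaced points are cubic_bezier(p0, p1, p2, p3, linv(np.arange(0.0, total, w))).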
I've had to deal with this problem recently. I've come up with, or come across a few solutions, none of which are satisfactory to me (but maybe they will be for you).
Now, this is a bit complicated, so let me just give you the link to the source code first:
http://icedtea.classpath.org/~dlila/webrevs/perfWebrev/webrev/raw_files/new/src/share/classes/sun/java2d/pisces/Dasher.java. What you want is in the LengthIterator class. You shouldn't have to look at any other parts of the file. There are a bunch of methods that are defined in another file; to get to them, just cut out everything from /raw_files/ to the end of the URL.
This is how you use it. Initialize the object on a curve. Then, to get the parameter of a point with arc length L from the beginning of the curve, just call next(L) (to get the actual point, evaluate your curve at this parameter using de Casteljau's algorithm, or zneak's suggestion). Every subsequent call of next(x) moves you a distance of x along the curve compared to your last position. next returns a negative number when you run out of curve.
Explanation of the code: I needed a t value such that the curve from B(0) to B(t) would have length LEN (where LEN is known). I simply flattened the curve: subdivide it recursively until each piece is close enough to a line (you can test for this by comparing the length of the control polygon to the length of the line joining the end points). You can compute the length of such a sub-curve as (controlPolyLength + endPointsSegmentLen)/2. Add all these lengths to an accumulator, and stop the recursion when the accumulator value is >= LEN.
Now, call the last sub-curve C and let [t0, t1] be its domain. You know that the t you want satisfies t0 <= t < t1, and you know the length from B(0) to B(t0); call this value L0t0. So now you need to find a t such that C(0) to C(t) has length LEN - L0t0. This is exactly the problem we started with, but on a smaller scale. We could use recursion, but that would be horribly slow, so instead we just use the fact that C is a very flat curve. We pretend C is a line and compute the point at t using P = C(0) + ((LEN - L0t0)/length(C))*(C(1) - C(0)). This point doesn't actually lie on the curve, because it is on the line C(0)->C(1), but it's very close to the point we want. So we just solve Bx(t) = Px and By(t) = Py. This is just finding cubic roots, which has a closed-form solution, but I just used Newton's method. Now we have the t we want, and we can just compute C(t), which is the actual point.
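A sketch of the flattening recursion in Python (only the length accumulation; the t bookkeeping and the final Newton polish are as described above, and the names are mine):

import numpy as np

def split_half(P):
    # de Casteljau subdivision of cubic control points P (4x2) at t = 0.5
    A = 0.5 * (P[:-1] + P[1:])
    B = 0.5 * (A[:-1] + A[1:])
    C = 0.5 * (B[:-1] + B[1:])
    return (np.array([P[0], A[0], B[0], C[0]]),
            np.array([C[0], B[1], A[1], P[3]]))

def flat_length(P, tol=1e-4):
    chord = np.linalg.norm(P[3] - P[0])
    poly = sum(np.linalg.norm(P[i + 1] - P[i]) for i in range(3))
    if poly - chord < tol:              # flat enough: use the estimate from the text
        return 0.5 * (poly + chord)
    L, R = split_half(P)
    return flat_length(L, tol) + flat_length(R, tol)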
I should mention that a few months ago I skimmed through a paper that had another solution to this that found an approximation to the natural parameterization of the curve. The author has posted a link to it here: Equidistant points across Bezier curves
What is the usual method or algorithm used to plot implicit equations of 2 variables?
I am talking about equations such as,
sin(x*y)*y = 20
x*x - y*y = 1
Etc.
Does anyone know how Maple or Matlab do this? My target language is C#.
Many thanks!
One way to do this is to sample the function on a regular, 2D grid. Then you can run an algorithm like marching squares on the resulting 2D grid to draw iso-contours.
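A sketch of that grid-sampling approach (Python for brevity; matplotlib's contour() runs a marching-squares-style algorithm internally, and the same idea ports to C#), shown for the hyperbola from the question:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3.0, 3.0, 400)
y = np.linspace(-3.0, 3.0, 400)
X, Y = np.meshgrid(x, y)
F = X**2 - Y**2 - 1.0                 # x*x - y*y = 1 rewritten as f(x, y) = 0
plt.contour(X, Y, F, levels=[0.0])    # draw the zero iso-contour
plt.show()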
In a related question, someone also linked to the gnuplot source code. It's fairly complex, but might be worth going through. You can find it here: http://www.gnuplot.info/
Iterate the value of x across the range you want to plot. For each fixed value of x, solve the equation numerically using a method such as interval bisection or the Newton-Raphson method (for which you can calculate the derivative using implicit differentiation, or perhaps differentiate numerically). This will give you the corresponding value of y for a given x. In most cases, you won't need too many iterations to get a very precise result, and it's very efficient anyway.
Note that you will need to transform the equation into the form f(x) = 0, though this is always trivial. The nice thing about this method is that it works just as well the other way round (i.e. taking a fixed range of y and computing x per value).
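A sketch of that per-x solve (Python with SciPy's bracketing root finder in place of hand-rolled bisection; the bracket [0, x] is chosen so that f changes sign on it for the upper branch of the hyperbola):

import numpy as np
from scipy.optimize import brentq

f = lambda x, y: x * x - y * y - 1.0   # x*x - y*y = 1 in f = 0 form
xs = np.linspace(1.01, 5.0, 200)       # x range where real solutions exist
ys = [brentq(lambda y: f(x, y), 0.0, x) for x in xs]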
There are multiple methods. The easiest algorithm I could find is described here:
https://homepages.warwick.ac.uk/staff/David.Tall/pdfs/dot1986b-implicit-fns.pdf — it describes the approach Noldorin outlined above.
The most complex one, which seems to be able to handle many more special cases, is described here:
https://academic.oup.com/comjnl/article/33/5/402/480353
I think that in MATLAB you give an array as input for x. Then, for every x, it calculates y, and it draws a line from (x0, y0) to (x1, y1), then from (x1, y1) to (x2, y2), and so on.
What is the best way to approximate a cubic Bezier curve? Ideally I would want a function y(x) which would give the exact y value for any given x, but this would involve solving a cubic equation for every x value, which is too slow for my needs, and there may be numerical stability issues as well with this approach.
Would this be a good solution?
Just solve the cubic.
If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x = 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinite solutions for x == 1.0.
A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
You should have no trouble finding code that solves a cubic and that has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.
Finally, yes, there may be numerical stability problems, like when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.
EDIT: responding to your comment, but I need more than 300 characters.
I'm only dealing with bezier curves where y(x) has only one (real) root. Regarding numerical stability, using the formula from http://en.wikipedia.org/wiki/Cubic_equation#Summary, it would appear that there might be problems if u is very small. – jtxx000
The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready to use somewhere, perhaps Numerical Recipes or the ACM Collected Algorithms.
To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:
u^6 + q*u^3 = p^3/27
Near zero, you can use the approximation:
q*u^3 = p^3/27
or, equivalently, p/(3u) = cube root of q.
So the computation of x from u should contain something like:
(fabs(u) >= somesmallvalue) ? (p / u / 3.0) : cuberoot(q)
How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.
The article gives 3 formulas for u, for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!) However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there were some sanity check from your application that you could apply there.
If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
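For instance, a couple of Newton-Raphson steps on the cubic (a sketch; the coefficient names and iteration count are placeholders):

def newton_polish(r, a, b, c, d, iters=3):
    # refine an approximate real root r of a*t**3 + b*t**2 + c*t + d = 0
    for _ in range(iters):
        f = ((a * r + b) * r + c) * r + d      # Horner evaluation
        df = (3.0 * a * r + 2.0 * b) * r + c   # derivative 3a*t^2 + 2b*t + c
        r -= f / df
    return r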
Yes, the de Casteljau algorithm would work for you. However, I don't know whether it will be faster than solving the cubic equation by Cardano's method.