Formula to create simple MIDI velocity curves - math

I'm trying to figure out a simple math formula that will allow me to apply various velocity curves to an incoming MIDI value. In the picture below, the starting x,y is (0,0) and ending x,y is (127,127). I'm trying to get a formula with a single variable that will allow me to produce simple expanded or contracted curves always bumping in the middle (by a degree of the variable). My input will be a value between 0 and 127, and my output will always be between 0 and 127. This seems like it should be easy, but my college calculus is escaping me at the moment.

Let's start by marking the four corners as:
S = (0,0)
E = (127,127)
U = (0,127)
V = (127,0)
You are looking for the equation of a circle that passes through S, E and a third point Z on the line between U and V:
Z(t) = t*U + (1-t)*V
I will use the fact that the perpendicular bisectors of two chords meet at the center.
The midpoints of the chords S-Z and Z-E are:
B1 = (Xz/2 , Yz/2)
B2 = ((127+Xz)/2 , (127+Yz)/2)
The slope of each perpendicular bisector is the negative reciprocal of the slope of its chord, so:
Slope1 = - Xz/Yz
Slope2 = - (127-Xz)/(127-Yz)
Using the straight line equation for a point and a slope
L1 = y = Slope1*(x - B1x) + B1y
L2 = y = Slope2*(x - B2x) + B2y
The center of the circle C is their intersection:
Cx = [B2y-B1y + Slope1*B1x - Slope2*B2x]/[Slope1 - Slope2]
Cy = [Slope2*(B2y-B1y + Slope1*B1x - Slope2*B2x)]/[Slope1-Slope2] - Slope2*B2x +B2y
So we have the left hand side of the circle equation:
(x-Cx)^2 + (y-Cy)^2 = R^2
What's missing is the radius. But it is just the distance between C and any of our initial points. Computing from S is the easiest because it is (0,0):
R = square-root( [Cx-Sx]^2 + [Cy-Sy]^2 ) = square-root( Cx^2 + Cy^2)
So finally if you substitute all the definitions (probably easier to do in a program than to type it here) then you will obtain a function of a single variable t:
(x-Cx)^2 + (y-Cy)^2 = Cx^2 + Cy^2
Note: at t = 0.5 the three points are collinear, so the construction degenerates to a straight line; you can substitute t' = t - 0.5 and treat t' as the amount of bend.
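A minimal Python sketch of this construction (function and variable names are mine): it follows the steps above and picks the arc branch that runs from S to E. It behaves as a single-valued curve for t roughly between 0.3 and 0.7; outside that range the arc folds back on itself near one endpoint and the clamp only crudely patches that.

import math

def circle_velocity(x, t):
    """Map a MIDI value x (0..127) through the circular curve; t roughly in (0.3, 0.7), t != 0.5."""
    zx, zy = 127 * (1 - t), 127 * t                # Z = t*U + (1-t)*V
    b1x, b1y = zx / 2, zy / 2                      # midpoint of chord S-Z
    b2x, b2y = (127 + zx) / 2, (127 + zy) / 2      # midpoint of chord Z-E
    s1 = -zx / zy                                  # perpendicular to chord S-Z
    s2 = -(127 - zx) / (127 - zy)                  # perpendicular to chord Z-E
    cx = (b2y - b1y + s1 * b1x - s2 * b2x) / (s1 - s2)
    cy = s1 * (cx - b1x) + b1y                     # centre C = (cx, cy)
    r2 = cx * cx + cy * cy                         # R^2, measured from S = (0, 0)
    # centre above the square -> the curve is the lower arc; centre below -> the upper arc
    sign = -1.0 if cy > 63.5 else 1.0
    y = cy + sign * math.sqrt(max(r2 - (x - cx) ** 2, 0.0))
    return max(0, min(127, round(y)))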

I wanted a simple quadratic bezier curve from p0 (0,0) to p2 (127,127) based on a movable control point p1 that would range between (0,127) and (127,0). I wanted the position of the control point to be determined by a deviation variable. This is how I solved it based on my deviation variable being a number between -100 and 100 (0 being a linear line, the negative side representing pic 2 and positive side representing pic 3).
/// <summary>
/// Converts a MIDI value based on a velocity curve
/// </summary>
/// <param name="value">The value to convert</param>
/// <param name="deviation">The amount of deviation from a linear line from -100 to 100</param>
private int ConvertMidiValue(int value, double deviation)
{
    if (deviation < -100 || deviation > 100)
        throw new ArgumentException("Value must be between -100 and 100", "deviation");

    var minMidiValue = 0d;
    var maxMidiValue = 127d;
    var midMidiValue = 63.5d;

    // This is our control point for the quadratic bezier curve
    // It ranges from 0 (deviation -100) through 63.5 (deviation 0) to 127 (deviation 100)
    var controlPointX = midMidiValue + ((deviation / 100) * midMidiValue);

    // Get the percent position of the incoming value in relation to the max
    var t = (double)value / maxMidiValue;

    // The quadratic bezier curve formula
    // B(t) = ((1 - t) * (1 - t) * p0) + (2 * (1 - t) * t * p1) + (t * t * p2)
    // t = the position on the curve between (0 and 1)
    // p0 = minMidiValue (0)
    // p1 = controlPointX (the bezier control point)
    // p2 = maxMidiValue (127)
    // Formula can now be simplified as:
    // B(t) = ((1 - t) * (1 - t) * minMidiValue) + (2 * (1 - t) * t * controlPointX) + (t * t * maxMidiValue)

    // Evaluate the curve at t...
    var delta = (int)Math.Round((2 * (1 - t) * t * controlPointX) + (t * t * maxMidiValue));
    // ...and mirror it about the linear line (value is the linear result at t)
    return (value - delta) + value;
}
This results in value curves ranging between the three shown below (blue is -100, gray is 0 and red is 100):
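For quick experimentation outside C#, here is a rough Python port of the same Bezier mapping (the function name is mine); deviation 0 leaves values unchanged, while the extremes of the deviation range give the outermost curves:

def convert_midi_value(value, deviation):
    if not -100 <= deviation <= 100:
        raise ValueError("deviation must be between -100 and 100")
    max_midi, mid_midi = 127.0, 63.5
    control_x = mid_midi + (deviation / 100.0) * mid_midi
    t = value / max_midi
    # quadratic Bezier with p0 = 0, p1 = control_x, p2 = 127 ...
    bezier = 2 * (1 - t) * t * control_x + t * t * max_midi
    # ... mirrored about the linear line, exactly as in the C# version
    return (value - round(bezier)) + value

print([convert_midi_value(v, 50) for v in (0, 32, 64, 96, 127)])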

Related

Plot a point in 3D space perpendicular to a line vector

I'm attempting to calculate a point in 3D space which is orthogonal/perpendicular to a line vector.
So I have P1 and P2 which form the line. I’m then trying to find a point with a radius centred at P1, which is orthogonal to the line.
I'd like to do this using trigonometry, without any programming language specific functions.
At the moment I'm testing how accurate this function is by plotting a circle around the line vector.
I also rotate the line vector in 3D space to see what happens to the plotted circle, this is where my results vary.
Some of the unwanted effects include:
The circle rotating and passing through the line vector.
The circle's radius appearing to reduce to zero before growing again as the line vector changes direction.
I used this question to get me started, and have since made some adjustments to the code.
Any help with this would be much appreciated - I've spent 3 days drawing circles now. Here's my code so far
//Define points which form the line vector
dx = p2x - p1x;
dy = p2y - p1y;
dz = p2z - p1z;
// Normalize line vector
d = sqrt(dx*dx + dy*dy + dz*dz);
// Line vector
v3x = dx/d;
v3y = dy/d;
v3z = dz/d;
// Angle and distance to plot point around line vector
angle = 123 * pi/180 //convert to radians
radius = 4;
// Begin calculating point
s = sqrt(v3x*v3x + v3y*v3y + v3z*v3z);
// Calculate v1.
// I have been playing with these variables (v1x, v1y, v1z) to try out different configurations.
v1x = s * v3x;
v1y = s * v3y;
v1z = s * -v3z;
// Calculate v2 as cross product of v3 and v1.
v2x = v3y*v1z - v3z*v1y;
v2y = v3z*v1x - v3x*v1z;
v2z = v3x*v1y - v3y*v1x;
// Point in space around the line vector
px = p1x + (radius * (v1x * cos(angle)) + (v2x * sin(angle)));
py = p1y + (radius * (v1y * cos(angle)) + (v2y * sin(angle)));
pz = p1z + (radius * (v1z * cos(angle)) + (v2z * sin(angle)));
EDIT
After wrestling with this for days while in lockdown, I've finally managed to get this working. I'd like to thank MBo and Futurologist for their invaluable input.
Although I wasn't able to get their examples working (more likely due to me being at fault), their answers pointed me in the right direction and to that eureka moment!
The key was in swapping the correct vectors.
So thank you to you both, you really helped me along with this. This is my final (working) code:
//Set some vars
angle = 123 * pi/180;
radius = 4;
//P1 & P2 represent vectors that form the line.
dx = p2x - p1x;
dy = p2y - p1y;
dz = p2z - p1z;
d = sqrt(dx*dx + dy*dy + dz*dz)
//Normalized vector
v3x = dx/d;
v3y = dy/d;
v3z = dz/d;
//Store vector elements in an array
p = [v3x, v3y, v3z];
//Store vector elements in second array, this time with absolute value
p_abs = [abs(v3x), abs(v3y), abs(v3z)];
//Find elements with MAX and MIN magnitudes
maxval = max(p_abs[0], p_abs[1], p_abs[2]);
minval = min(p_abs[0], p_abs[1], p_abs[2]);
//Initialise 3 variables to store which array indexes contain the (max, medium, min) vector magnitudes.
maxindex = 0;
medindex = 0;
minindex = 0;
//Loop through p_abs array to find which magnitudes are equal to maxval & minval. Store their indexes for use later.
for(i = 0; i < 3; i++) {
if (p_abs[i] == maxval) maxindex = i;
else if (p_abs[i] == minval) minindex = i;
}
//Find the remaining index which has the medium magnitude
for(i = 0; i < 3; i++) {
if (i!=maxindex && i!=minindex) {
medindex = i;
break;
}
}
//Store the maximum magnitude for now.
storemax = (p[maxindex]);
//Swap the 2 indexes that contain the maximum & medium magnitudes, negating maximum. Set minimum magnitude to zero.
p[maxindex] = (p[medindex]);
p[medindex] = -storemax;
p[minindex] = 0;
//Calculate v1. Perpendicular to v3.
s = sqrt(v3x*v3x + v3z*v3z + v3y*v3y);
v1x = s * p[0];
v1y = s * p[1];
v1z = s * p[2];
//Calculate v2 as cross product of v3 and v1.
v2x = v3y*v1z - v3z*v1y;
v2y = v3z*v1x - v3x*v1z;
v2z = v3x*v1y - v3y*v1x;
//For each circle point.
circlepointx = p2x + radius * (v1x * cos(angle) + v2x * sin(angle))
circlepointy = p2y + radius * (v1y * cos(angle) + v2y * sin(angle))
circlepointz = p2z + radius * (v1z * cos(angle) + v2z * sin(angle))
Your question is too vague, but here is what I suppose you really want.
You have a line through two points p1 and p2. You want to build a circle of radius r centered at p1 and perpendicular to the line.
At first find the direction vector of this line - you already know how - the normalized vector v3.
Now you need an arbitrary vector perpendicular to v3: find the components of v3 with the largest and second-largest magnitudes. For example, abs(v3y) is the largest and abs(v3x) is the second largest. Exchange them, negate the largest, and make the third component zero:
p = (-v3y, v3x, 0)
This vector is normal to v3 (their dot product is zero)
Now normalize it
pp = p / length(p)
Now get the binormal vector as the cross product of v3 and pp (it has unit length, no need to normalize); it is perpendicular to both v3 and pp
b = v3 x pp
Now build needed circle
circlepoint(theta) = p1 + radius * pp * Cos(theta) + radius * b * Sin(theta)
Also note that the angle in radians is
angle = degrees * pi / 180
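A small NumPy sketch of this recipe (the helper name is mine), using argsort to pick which components to swap:

import numpy as np

def circle_point(p1, p2, radius, theta):
    """Point at angle theta on the circle of given radius, centred at p1, perpendicular to the line p1-p2."""
    v3 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v3 /= np.linalg.norm(v3)
    # arbitrary vector normal to v3: swap the two largest-magnitude components,
    # negate the largest, zero the smallest
    idx = np.argsort(np.abs(v3))              # idx[2] largest, idx[1] second, idx[0] smallest
    p = np.zeros(3)
    p[idx[2]], p[idx[1]] = v3[idx[1]], -v3[idx[2]]
    pp = p / np.linalg.norm(p)
    b = np.cross(v3, pp)                      # binormal, already unit length
    return np.asarray(p1, dtype=float) + radius * (np.cos(theta) * pp + np.sin(theta) * b)

# e.g. the point 123 degrees around a circle of radius 4:
# pt = circle_point([1, 1, 1], [3, 2, 3], 4, np.radians(123))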
#Input:
# Pair of points which determine line L:
P1 = [x_P1, y_P1, z_P1]
P2 = [x_P2, y_P2, z_P2]
# Radius:
Radius = R
# unit vector aligned with the line passing through the points P1 and P2:
V3 = P1 - P2
V3 = V3 / norm(V3)
# from the three basis vectors, e1 = [1,0,0], e2 = [0,1,0], e3 = [0,0,1]
# pick the one that is the most transverse to vector V3
# this means, look at the entries of V3 = [x_V3, y_V3, z_V3] and check which
# one has the smallest absolute value and record its index. Take the coordinate
# vector that has 1 at that selected index. In other words,
# if min( abs(x_V3), abs(y_V3), abs(z_V3) ) = abs(y_V3),
# then argmin( abs(x_V3), abs(y_V3), abs(z_V3)) = 2 and so take e = [0,1,0]:
e = [0,0,0]
i = argmin( abs(V3[1]), abs(V3[2]), abs(V3[3]) )
e[i] = 1
# a unit vector perpendicular to both e and V3:
V1 = cross(e, V3)
V1 = V1 / norm(V1)
# third unit vector perpendicular to both V3 and V1:
V2 = cross(V3, V1)
# an arbitrary point on the circle (i.e. equation of the circle with parameter s):
P = P1 + Radius*( np.cos(s)*V1 + np.sin(s)*V2 )
# E.g. say you want to find point P on the circle, 60 degrees relative to vector V1:
s = pi/3
P = P1 + Radius*( cos(s)*V1 + sin(s)*V2 )
Test example in Python:
import numpy as np
#Input:
# Pair of points which determine line L:
P1 = np.array([1, 1, 1])
P2 = np.array([3, 2, 3])
Radius = 3
V3 = P1 - P2
V3 = V3 / np.linalg.norm(V3)
e = np.array([0,0,0])
e[np.argmin(np.abs(V3))] = 1
V1 = np.cross(e, V3)
V1 = V1 / np.linalg.norm(V1)
V2 = np.cross(V3, V1)
# E.g., say you want to rotate point P along the circle, 60 degrees along rel to V1:
s = np.pi/3
P = P1 + Radius*( np.cos(s)*V1 + np.sin(s)*V2 )

Deform a triangle along vector to get a specific angle

I am trying to create a binary tree from a lot of segments in 3d space sharing the same origin.
When merging two segments I want to have a specific angle between the lines to the child nodes.
The following image illustrates my problem. C shows the position of the parent node and A and B the child positions. N is the average vector of the vectors from C to A and C to B.
With a given angle, how can I determine point P?
Thanks for any help
P = C + t * ((A + B)/2 - C), where t is an unknown parameter
PA = A - P   (the PA vector)
PB = B - P   (the PB vector)
Tan(Fi) = (PA x PB) / (PA * PB)   (cross product in the numerator, scalar product in the denominator)
Tan(Fi) * (PA.x*PB.x + PA.y*PB.y) = (PA.x*PB.y - PA.y*PB.x)
this is quadratic equation for t, after solving we will get two (for non-degenerate cases) possible positions of P point (the second one lies at other side of AB line)
Addition:
Let ax = A.x (the X-coordinate of point A) and so on, and define
abcx = (ax+bx)/2-cx, abcy = (ay+by)/2-cy
pax = ax-cx - t*abcx, pay = ay-cy - t*abcy
pbx = bx-cx - t*abcx, pby = by-cy - t*abcy
ff = Tan(Fi) , then
ff*(pax*pbx+pay*pby)-pax*pby+pay*pbx=0
ff*((ax-cx - t*abcx)*(bx-cx - t*abcx)+(ay-cy - t*abcy)*(by-cy - t*abcy)) -
- (ax-cx - t*abcx)*(by-cy - t*abcy) + (ay-cy - t*abcy)*(bx-cx - t*abcx) =
t^2 *(ff*(abcx^2+abcy^2)) +
t * (-2*ff*(abcx^2+abcy^2) + abcx*(by-ay) + abcy*(ax-bx) ) +
(ff*((ax-cx)*(bx-cx)+(ay-cy)*(by-cy)) - (ax-cx)*(by-cy)+(bx-cx)*(ay-cy)) =0
This is quadratic equation AA*t^2 + BB*t + CC = 0 with coefficients
AA = ff*(abcx^2+abcy^2)
BB = -2*ff*(abcx^2+abcy^2) + abcx*(by-ay) + abcy*(ax-bx)
CC = ff*((ax-cx)*(bx-cx)+(ay-cy)*(by-cy)) - (ax-cx)*(by-cy)+(bx-cx)*(ay-cy)
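A compact Python sketch of the 2D case (the function name is mine); it plugs in the AA, BB, CC coefficients above and returns both candidate positions of P, assuming the target angle is reachable (non-negative discriminant) and AA is non-zero:

import math

def find_p(A, B, C, fi):
    ax, ay = A; bx, by = B; cx, cy = C
    ff = math.tan(fi)
    abcx, abcy = (ax + bx) / 2 - cx, (ay + by) / 2 - cy
    AA = ff * (abcx ** 2 + abcy ** 2)
    BB = -2 * ff * (abcx ** 2 + abcy ** 2) + abcx * (by - ay) + abcy * (ax - bx)
    CC = (ff * ((ax - cx) * (bx - cx) + (ay - cy) * (by - cy))
          - (ax - cx) * (by - cy) + (bx - cx) * (ay - cy))
    disc = BB * BB - 4 * AA * CC              # assumed non-negative here
    roots = [(-BB + s * math.sqrt(disc)) / (2 * AA) for s in (1, -1)]
    return [(cx + t * abcx, cy + t * abcy) for t in roots]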
P.S. My answer is for 2d-case!
For 3d: It is probably simpler to use scalar product only (with vector lengths)
Cos(Fi) = (PA * PB) / (|PA| * |PB|)
Another solution could be a binary search along the vector N: when P is close to C the angle is smaller, and when P is far from C the angle is bigger, which makes the problem suitable for a binary search.
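A hedged sketch of that binary-search idea (all names are mine), assuming the angle does grow monotonically as P moves from C towards the midpoint of A and B:

import math

def angle_at(P, A, B):
    pa = [a - p for a, p in zip(A, P)]
    pb = [b - p for b, p in zip(B, P)]
    dot = sum(x * y for x, y in zip(pa, pb))
    norm = math.sqrt(sum(x * x for x in pa)) * math.sqrt(sum(x * x for x in pb))
    return math.acos(dot / norm)

def find_p_bisect(C, A, B, target, iters=60):
    mid = [(a + b) / 2 - c for a, b, c in zip(A, B, C)]   # direction N towards the midpoint of AB
    lo, hi = 0.0, 1.0                                     # t = 1 puts P on the midpoint (angle -> pi)
    for _ in range(iters):
        t = (lo + hi) / 2
        P = [c + t * m for c, m in zip(C, mid)]
        if angle_at(P, A, B) < target:
            lo = t
        else:
            hi = t
    return P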

Computing the 3D coordinates on a unit sphere from a 2D point

I have a square bitmap of a circle and I want to compute the normals of all the pixels in that circle as if it were a sphere of radius 1:
The sphere/circle is centered in the bitmap.
What is the equation for this?
Don't know much about how people program 3D stuff, so I'll just give the pure math and hope it's useful.
Sphere of radius 1, centered on origin, is the set of points satisfying:
x^2 + y^2 + z^2 = 1
We want the 3D coordinates of a point on the sphere where x and y are known. So, just solve for z:
z = ±sqrt(1 - x^2 - y^2).
Now, let us consider a unit vector pointing outward from the sphere. It's a unit sphere, so we can just use the vector from the origin to (x, y, z), which is, of course, <x, y, z>.
Now we want the equation of a plane tangent to the sphere at (x, y, z), but this will be using its own x, y, and z variables, so instead I'll make it tangent to the sphere at (x0, y0, z0). This is simply:
x0*x + y0*y + z0*z = 1
Hope this helps.
(OP):
you mean something like:
const int R = 31, SZ = power_of_two(R*2);
std::vector<vec4_t> p;
for(int y=0; y<SZ; y++) {
for(int x=0; x<SZ; x++) {
const float rx = (float)(x-R)/R, ry = (float)(y-R)/R;
if(rx*rx+ry*ry > 1) { // outside sphere
p.push_back(vec4_t(0,0,0,0));
} else {
vec3_t normal(rx,sqrt(1.-rx*rx-ry*ry),ry);
p.push_back(vec4_t(normal,1));
}
}
}
It does make a nice sphere-like shading if I treat the normals as colours and blit it; is it right?
(TZ)
Sorry, I'm not familiar with those aspects of C++. Haven't used the language very much, nor recently.
This formula is often used for a "fake envmapping" effect.
double x = 2.0 * pixel_x / bitmap_size - 1.0;
double y = 2.0 * pixel_y / bitmap_size - 1.0;
double r2 = x*x + y*y;
if (r2 < 1)
{
// Inside the circle
double z = sqrt(1 - r2);
.. here the normal is (x, y, z) ...
}
Obviously you're limited to assuming all the points are on one half of the sphere or similar, because of the missing dimension. Past that, it's pretty simple.
The middle of the circle has a normal facing precisely in or out, perpendicular to the plane the circle is drawn on.
Each point on the edge of the circle is facing away from the middle, and thus you can calculate the normal for that.
For any point between the middle and the edge, you use the distance from the middle, and some simple trig (which eludes me at the moment). A lerp is roughly accurate at some points, but not quite what you need, since it's a curve. Simple curve though, and you know the beginning and end values, so figuring them out should only take a simple equation.
I think I get what you're trying to do: generate a grid of depth data for an image. Sort of like ray-tracing a sphere.
In that case, you want a Ray-Sphere Intersection test:
http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter1.htm
Your rays will be simple perpendicular rays, based off your U/V coordinates (times two, since your sphere has a diameter of 2). This will give you the front-facing points on the sphere.
From there, calculate normals as below (point - origin, the radius is already 1 unit).
Ripped off from the link above:
You have to combine two equations:
Ray: R(t) = R0 + t * Rd , t > 0, with R0 = [X0, Y0, Z0] and Rd = [Xd, Yd, Zd]
Sphere: S = the set of points [xs, ys, zs] where (xs - xc)^2 + (ys - yc)^2 + (zs - zc)^2 = Sr^2
To do this, calculate your ray (x * pixel / width, y * pixel / width, z: 1), then:
A = Xd^2 + Yd^2 + Zd^2
B = 2 * (Xd * (X0 - Xc) + Yd * (Y0 - Yc) + Zd * (Z0 - Zc))
C = (X0 - Xc)^2 + (Y0 - Yc)^2 + (Z0 - Zc)^2 - Sr^2
Plug into quadratic equation:
t0, t1 = (-B ± (B^2 - 4*C)^(1/2)) / 2 (taking A = 1, since the ray direction is normalized)
Check discriminant (B^2 - 4*C), and if real root, the intersection is:
Ri = [xi, yi, zi] = [x0 + xd * ti , y0 + yd * ti, z0 + zd * ti]
And the surface normal is:
SN = [(xi - xc)/Sr, (yi - yc)/Sr, (zi - zc)/Sr]
Boiling it all down:
So, since we're talking unit values, and rays that point straight at Z (no x or y component), we can boil down these equations greatly:
Ray:
X0 = 2 * pixelX / width
Y0 = 2 * pixelY / height
Z0 = 0
Xd = 0
Yd = 0
Zd = 1
Sphere:
Xc = 1
Yc = 1
Zc = 1
Factors:
A = 1 (unit ray)
B
= 2 * (0 + 0 + (0 - 1))
= -2 (no x/y component)
C
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2 + (0 - 1) ^ 2 - 1
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2
Discriminant
= (-2) ^ 2 - 4 * 1 * C
= 4 - 4 * C
From here:
If discriminant < 0:
Z = ?, Normal = ?
Else:
t = (2 + discriminant^(1/2)) / 2
If t < 0 (hopefully never or always the case)
t = -t
Then:
Z: t
Nx: Xi - 1
Ny: Yi - 1
Nz: t - 1
Boiled farther still:
Intuitively it looks like C (X^2 + Y^2) and the square-root are the most prominent figures here. If I had a better recollection of my math (in particular, transformations on exponents of sums), then I'd bet I could derive this down to what Tom Zych gave you. Since I can't, I'll just leave it as above.
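A quick numerical check (my own script) that the boiled-down ray/sphere route and Tom Zych's closed form give the same normal:

import math, random

for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y >= 1:
        continue                                   # outside the circle
    # ray/sphere route: pixel mapped to X0, Y0 in [0, 2], sphere centred at (1, 1, 1)
    X0, Y0 = x + 1, y + 1
    C = (X0 - 1) ** 2 + (Y0 - 1) ** 2
    t = (2 + math.sqrt(4 - 4 * C)) / 2
    nx, ny, nz = X0 - 1, Y0 - 1, t - 1
    # direct route: z = sqrt(1 - x^2 - y^2)
    z = math.sqrt(1 - x * x - y * y)
    assert abs(nx - x) < 1e-12 and abs(ny - y) < 1e-12 and abs(nz - z) < 1e-12
print("both routes agree")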

correcting fisheye distortion programmatically

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) : SOURCE Original photo link
Input : Original image with fish-eye distortion to fix.
(2) : DESTINATION Original photo link
Output : Corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p,const Size& img) {
// to polar
const Point centre = {img.width/2,img.height/2};
const Point rel = {p.x-centre.x,p.y-centre.y};
const double theta = atan2(rel.y,rel.x);
double R = sqrt((rel.x*rel.x)+(rel.y*rel.y));
// fisheye undistortion in here please
//... change R ...
// back to rectangular
const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta));
fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y);
return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
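A minimal Python sketch (helper names are mine) of dropping that formula into the stub above; f is an estimated focal length in pixels, and R_d has to stay below sqrt(2)*f for the tangent to remain finite:

import math

def undistort_radius(r_d, f):
    """Map a fisheye radial distance R_d to the rectilinear radius R_u."""
    return f * math.tan(2 * math.asin(r_d / (2 * f)))

def correct_point(x, y, cx, cy, f):
    theta = math.atan2(y - cy, x - cx)
    r_d = math.hypot(x - cx, y - cy)
    r_u = undistort_radius(r_d, f)
    return cx + r_u * math.cos(theta), cy + r_u * math.sin(theta)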
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python
# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056
import sys
import cv
def main(argv):
    if len(argv) < 11:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)
    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]
    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)
    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)
    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial-and-error: I don't fundamentally grasp why this code is working, explanations and improved accuracy appreciated!
def dist(x,y):
    return sqrt(x*x+y*y)

def correct_fisheye(src_size,dest_size,dx,dy,factor):
    """ returns a tuple of source coordinates (sx,sy)
        (note: values can be out of range)"""
    # convert dx,dy to relative coordinates
    rx, ry = dx-(dest_size[0]/2), dy-(dest_size[1]/2)
    # calc theta
    r = dist(rx,ry)/(dist(src_size[0],src_size[1])/factor)
    if 0==r:
        theta = 1.0
    else:
        theta = atan(r)/r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2)+theta*rx, (src_size[1]/2)+theta*ry
    # done
    return (int(round(sx)),int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
Dead link
(And this is from the blog post, for comparison:)
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as #jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
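A quick round-trip check (my own, with f = 1) of the forward mapping above and this inverse:

import math

def forward(u):                       # Rout/f as a function of u = Rin/f
    return u * math.sqrt(1 - u * u / 4) / (1 - u * u / 2)

def inverse(v):                       # Rin/f as a function of v = Rout/f
    s = math.sqrt(v * v + 1)
    return v / math.sqrt(s * (s + 1) / 2)

for u in (0.1, 0.5, 1.0, 1.3):
    assert abs(inverse(forward(u)) - u) < 1e-12
print("inverse matches forward")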
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
def dist(x,y):
    return sqrt(x*x+y*y)

def fisheye_to_rectilinear(src_size,dest_size,sx,sy,crop_factor,zoom):
    """ returns a tuple of dest coordinates (dx,dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image"""
    # convert sx,sy to relative coordinates
    rx, ry = sx-(src_size[0]/2), sy-(src_size[1]/2)
    r = dist(rx,ry)
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0],src_size[1])*crop_factor/pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2)+rx/r*nr, (dest_size[1]/2)+ry/r*nr
    # done
    return (int(round(dx)),int(round(dy)))

def fisheye_auto_zoom(src_size,dest_size,crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
private ArrayList mFisheyeCorrect;
private int mFELimit = 1500;
private double mScaleFESize = 0.9;
public Analyzer()
{
//A lookup table so we don't have to calculate Rdistorted over and over
//The values will be multiplied by focal length in pixels to
//get the Rdistorted
mFisheyeCorrect = new ArrayList(mFELimit);
//i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
for (int i = 0; i < mFELimit; i++)
{
double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
mFisheyeCorrect.Add(result);
}
}
public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
{
Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
//The center points of the image
double xc = aImage.Width / 2.0;
double yc = aImage.Height / 2.0;
Boolean xpos, ypos;
//Move through the pixels in the corrected image;
//set to corresponding pixels in distorted image
for (int i = 0; i < correctedImage.Width; i++)
{
for (int j = 0; j < correctedImage.Height; j++)
{
//which quadrant are we in?
xpos = i > xc;
ypos = j > yc;
//Find the distance from the center
double xdif = i-xc;
double ydif = j-yc;
//The distance squared
double Rusquare = xdif * xdif + ydif * ydif;
//the angle from the center
double theta = Math.Atan2(ydif, xdif);
//find index for lookup table
int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
if (index >= mFELimit) index = mFELimit - 1;
//calculated Rdistorted
double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index]
/mScaleFESize;
//calculate x and y distances
double xdelta = Math.Abs(Rd*Math.Cos(theta));
double ydelta = Math.Abs(Rd * Math.Sin(theta));
//convert to pixel coordinates
int xd = (int)(xc + (xpos ? xdelta : -xdelta));
int yd = (int)(yc + (ypos ? ydelta : -ydelta));
xd = Math.Max(0, Math.Min(xd, aImage.Width-1));
yd = Math.Max(0, Math.Min(yd, aImage.Height-1));
//set the corrected pixel value from the distorted image
correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
}
}
return correctedImage;
}
}
I found this pdf file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available but I am sure that it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves over image features that are supposed to be straight lines and store the contour points for each of those curves. Now the fisheye parameters can be found as a minimization problem: minimize the curvature of those contours and that gives you the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars; here is a fisheye GUI code using OpenCV for manual calibration.
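A rough sketch of that optimization idea (assuming NumPy and SciPy, and using a single radial coefficient rather than the full fisheye matrix): undistort the traced contour points with a candidate k1 and penalize how far each contour strays from its best-fit straight line.

import numpy as np
from scipy.optimize import minimize_scalar

def straightness_error(k1, curves, cx, cy):
    # curves: list of (N, 2) arrays of pixel coordinates traced along edges
    # that should come out straight after undistortion
    err = 0.0
    for pts in curves:
        x = pts[:, 0] - cx
        y = pts[:, 1] - cy
        scale = 1.0 + k1 * (x * x + y * y)       # radial model r_u = r_d * (1 + k1 * r_d^2)
        u = np.column_stack((x * scale, y * scale))
        u -= u.mean(axis=0)
        s = np.linalg.svd(u, compute_uv=False)   # s[1]^2 = squared residual off the best-fit line
        err += s[1] ** 2
    return err

# e.g. res = minimize_scalar(lambda k: straightness_error(k, curves, cx, cy),
#                            bounds=(-1e-6, 1e-6), method='bounded')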

How to calculate center of an ellipse by two points and radius sizes

While working on an SVG implementation for Internet Explorer, based on its own VML format, I came to the problem of translating an SVG elliptical arc into a VML elliptical arc.
In VML an arc is given by: two angles for two points on the ellipse and the lengths of its radii.
In SVG an arc is given by: two pairs of coordinates for two points on the ellipse and the size of the ellipse bounding box.
So, the question is: how to express the angles of two points on an ellipse through the two pairs of their coordinates.
An intermediate question could be: how to find the center of an ellipse by the coordinates of a pair of points on its curve.
Update: Let's add a precondition saying that the ellipse is normally placed (its radii are parallel to the coordinate system axes), thus no rotation is applied.
Update: This question is not related to svg:ellipse element, rather to "a" elliptical arc command in svg:path element (SVG Paths: The elliptical arc curve commands)
So the solution is here:
The parametrized formula of an ellipse:
x = x0 + a * cos(t)
y = y0 + b * sin(t)
Let's put known coordinates of two points to it:
x1 = x0 + a * cos(t1)
x2 = x0 + a * cos(t2)
y1 = y0 + b * sin(t1)
y2 = y0 + b * sin(t2)
Now we have a system of equations with 4 variables: center of ellipse (x0/y0) and two angles t1, t2
Let's subtract equations in order to get rid of center coordinates:
x1 - x2 = a * (cos(t1) - cos(t2))
y1 - y2 = b * (sin(t1) - sin(t2))
This can be rewritten (using the sum-to-product identities) as:
(x2 - x1) / (2 * a) = sin((t1 + t2) / 2) * sin((t1 - t2) / 2)
(y1 - y2) / (2 * b) = cos((t1 + t2) / 2) * sin((t1 - t2) / 2)
Let's introduce the shorthand:
r1: (x2 - x1) / (2 * a)
r2: (y1 - y2) / (2 * b)
a1: (t1 + t2) / 2
a2: (t1 - t2) / 2
Then we get simple equations system:
r1 = sin(a1) * sin(a2)
r2 = cos(a1) * sin(a2)
Dividing the first equation by the second produces:
a1 = arctan(r1/r2)
Substituting this result into the second equation gives:
a2 = arcsin(r2 / cos(arctan(r1/r2)))
Or, more simply (using the identity cos(arctan(x)) = 1 / sqrt(1 + x^2)):
a2 = arcsin(r2 / (1 / sqrt(1 + (r1/r2)^2)))
or even more simply:
a2 = arcsin(sqrt(r1^2 + r2^2))
Now the initial four-equation system can be solved easily, and all the angles as well as the ellipse center coordinates can be found.
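To carry the derivation through to the centre itself, here is a small Python sketch (the function name is mine); it returns one of the (generally two) valid centres and ignores degenerate configurations:

import math

def ellipse_center(x1, y1, x2, y2, a, b):
    # r1 and r2 as defined above; atan2 avoids dividing by r2
    r1 = (x2 - x1) / (2 * a)
    r2 = (y1 - y2) / (2 * b)
    if math.hypot(r1, r2) > 1:
        raise ValueError("radii too small to reach both points")
    a1 = math.atan2(r1, r2)              # (t1 + t2) / 2
    a2 = math.asin(math.hypot(r1, r2))   # (t1 - t2) / 2
    t1 = a1 + a2
    # plug t1 back into the parametric form to recover the centre
    return x1 - a * math.cos(t1), y1 - b * math.sin(t1)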
The elliptical curve arc link you posted includes a link to elliptical arc implementation notes.
In there, you will find the equations for conversion from endpoint to centre parameterisation.
Here is my JavaScript implementation of those equations, taken from an interactive demo of elliptical arc paths, using Sylvester.js to perform the matrix and vector calculations.
// Calculate the centre of the ellipse
// Based on http://www.w3.org/TR/SVG/implnote.html#ArcConversionEndpointToCenter
var x1 = 150; // Starting x-point of the arc
var y1 = 150; // Starting y-point of the arc
var x2 = 400; // End x-point of the arc
var y2 = 300; // End y-point of the arc
var fA = 1; // Large arc flag
var fS = 1; // Sweep flag
var rx = 100; // Horizontal radius of ellipse
var ry = 50; // Vertical radius of ellipse
var phi = 0; // Angle between co-ord system and ellipse x-axes
var Cx, Cy;
// Step 1: Compute (x1′, y1′)
var M = $M([
[ Math.cos(phi), Math.sin(phi)],
[-Math.sin(phi), Math.cos(phi)]
]);
var V = $V( [ (x1-x2)/2, (y1-y2)/2 ] );
var P = M.multiply(V);
var x1p = P.e(1); // x1 prime
var y1p = P.e(2); // y1 prime
// Ensure radii are large enough
// Based on http://www.w3.org/TR/SVG/implnote.html#ArcOutOfRangeParameters
// Step (a): Ensure radii are non-zero
// Step (b): Ensure radii are positive
rx = Math.abs(rx);
ry = Math.abs(ry);
// Step (c): Ensure radii are large enough
var lambda = ( (x1p * x1p) / (rx * rx) ) + ( (y1p * y1p) / (ry * ry) );
if(lambda > 1)
{
rx = Math.sqrt(lambda) * rx;
ry = Math.sqrt(lambda) * ry;
}
// Step 2: Compute (cx′, cy′)
var sign = (fA == fS)? -1 : 1;
// Bit of a hack, as presumably rounding errors were making this negative inside the square root!
if((( (rx*rx*ry*ry) - (rx*rx*y1p*y1p) - (ry*ry*x1p*x1p) ) / ( (rx*rx*y1p*y1p) + (ry*ry*x1p*x1p) )) < 1e-7)
var co = 0;
else
var co = sign * Math.sqrt( ( (rx*rx*ry*ry) - (rx*rx*y1p*y1p) - (ry*ry*x1p*x1p) ) / ( (rx*rx*y1p*y1p) + (ry*ry*x1p*x1p) ) );
var V = $V( [rx*y1p/ry, -ry*x1p/rx] );
var Cp = V.multiply(co);
// Step 3: Compute (cx, cy) from (cx′, cy′)
var M = $M([
[ Math.cos(phi), -Math.sin(phi)],
[ Math.sin(phi), Math.cos(phi)]
]);
var V = $V( [ (x1+x2)/2, (y1+y2)/2 ] );
var C = M.multiply(Cp).add(V);
Cx = C.e(1);
Cy = C.e(2);
An ellipse cannot be defined by only two points. Even a circle (a special case of an ellipse) is defined by three points.
Even with three points, you would have infinitely many ellipses passing through those three points (think: rotation).
Note that a bounding box suggests a center for the ellipse, and most probably assumes that its major and minor axes are parallel to the x,y (or y,x) axes.
The intermediate question is fairly easy... you don't. You work out the centre of an ellipse from the bounding box (namely, the centre of the box is the centre of the ellipse, as long as the ellipse is centred in the box).
For your first question, I'd look at the polar form of the ellipse equation, which is available on Wikipedia. You would need to work out the eccentricity of the ellipse as well.
Or you could brute force the values from the bounding box... work out if a point lies on the ellipse and matches the angle, and iterate through every point in the bounding box.
TypeScript implementation based on the answer from Rikki.
The default DOMMatrix and DOMPoint are used for the calculations instead of the external library (tested in the latest Chrome v80).
ellipseCenter(
x1: number,
y1: number,
rx: number,
ry: number,
rotateDeg: number,
fa: number,
fs: number,
x2: number,
y2: number
): DOMPoint {
const phi = ((rotateDeg % 360) * Math.PI) / 180;
const m = new DOMMatrix([
Math.cos(phi),
-Math.sin(phi),
Math.sin(phi),
Math.cos(phi),
0,
0,
]);
let v = new DOMPoint((x1 - x2) / 2, (y1 - y2) / 2).matrixTransform(m);
const x1p = v.x;
const y1p = v.y;
rx = Math.abs(rx);
ry = Math.abs(ry);
const lambda = (x1p * x1p) / (rx * rx) + (y1p * y1p) / (ry * ry);
if (lambda > 1) {
rx = Math.sqrt(lambda) * rx;
ry = Math.sqrt(lambda) * ry;
}
const sign = fa === fs ? -1 : 1;
const div =
(rx * rx * ry * ry - rx * rx * y1p * y1p - ry * ry * x1p * x1p) /
(rx * rx * y1p * y1p + ry * ry * x1p * x1p);
const co = sign * Math.sqrt(Math.abs(div));
// inverse matrix b and c
m.b *= -1;
m.c *= -1;
v = new DOMPoint(
((rx * y1p) / ry) * co,
((-ry * x1p) / rx) * co
).matrixTransform(m);
v.x += (x1 + x2) / 2;
v.y += (y1 + y2) / 2;
return v;
}
Answering part of the question with code
How to find the center of an ellipse by coordinates of a pair of points on its curve.
This is a TypeScript function which is based on the excellent accepted answer by Sergey Illinsky above (which ends somewhat halfway through, IMHO). It calculates the center of an ellipse with given radii, given the condition that both provided points a and b must lie on the circumference of the ellipse. Since there are (almost) always two solutions to this problem, the code chooses the solution that places the ellipse "above" the two points:
(Note that the ellipse must have major and minor axis parallel to the horizontal/vertical)
/**
* We're in 2D, so that's what our vertors look like
*/
export type Point = [number, number];
/**
* Calculates the vector that connects the two points
*/
function deltaXY (from: Point, to: Point): Point {
return [to[0]-from[0], to[1]-from[1]];
}
/**
* Calculates the sum of an arbitrary number of vectors
*/
function vecAdd (...vectors: Point[]): Point {
return vectors.reduce((acc, curr) => [acc[0]+curr[0], acc[1]+curr[1]], [0, 0]);
}
/**
* Given two points a and b, as well as ellipsis radii rX and rY, this
* function calculates the center-point of the ellipse, so that it
* is "above" the two points and has them on the circumference
*/
function topLeftOfPointsCenter (a: Point, b: Point, rX: number, rY: number): Point {
const delta = deltaXY(a, b);
// Sergey's work leads up to a simple system of linear equations.
// Here, we calculate its general solution for the first of the two angles (t1)
const A = Math.asin(Math.sqrt((delta[0]/(2*rX))**2+(delta[1]/(2*rY))**2));
const B = Math.atan(-delta[0]/delta[1] * rY/rX);
const alpha = A + B;
// This may be the new center, but we don't know to which of the two
// solutions it belongs, yet
let newCenter = vecAdd(a, [
rX * Math.cos(alpha),
rY * Math.sin(alpha)
]);
// Figure out if it is the correct solution, and adjusting if not
const mean = vecAdd(a, [delta[0] * 0.5, delta[1] * 0.5]);
const factor = mean[1] > newCenter[1] ? 1 : -1;
const offMean = deltaXY(mean, newCenter);
newCenter = vecAdd(mean, [offMean[0] * factor, offMean[1] * factor]);
return newCenter;
}
This function does not check if a solution is possible, meaning whether the radii provided are large enough to even connect the two points!
