How to adjust player sprite speed correctly? (Basically a math question?)

Background: I have a bird's-eye-view JavaScript game where the player controls a spaceship by touching a circle -- e.g. touch to the left of the circle center and the ship will move left, touch the top right and it will move to the top right, and so on. The further the touch is from the center of this pseudo-joystick, the more speed in that direction. However, I'm not directly adjusting the ship's speed; rather, I set targetSpeed.x and targetSpeed.y values, and the ship then adjusts its speed using something like:
if (this.speed.x < this.targetSpeed.x) {
    this.speed.x += this.speedStep;
} else if (this.speed.x > this.targetSpeed.x) {
    this.speed.x -= this.speedStep;
}
... and the same for the y speed. speedStep is a small value that keeps the change smooth and not too abrupt (a ship shouldn't go from a fast leftwards direction to an immediate fast rightwards direction).
My question: using the above code, I believe the speed will be adjusted more quickly in diagonal directions and more slowly along the horizontal/vertical axes. How do I correct this so the target speed is approached at the same rate in every direction?
Thanks so much for any help!

// Step along the straight line from the current speed to the target speed,
// so the adjustment rate is the same in every direction:
var xdiff = targetSpeed.x - speed.x;
var ydiff = targetSpeed.y - speed.y;
var angle = Math.atan2(ydiff, xdiff);
speed.x += speedStep * Math.cos(angle);
speed.y += speedStep * Math.sin(angle);

Assuming you have already checked that the touch is inside the circle, that the edge of the circle represents max speed, and that the center of the circle corresponds to circleTouch == [0, 0], in some C++-like pseudocode:
Scalar circleRadius = ...;
Scalar maxSpeed = ...;
Scalar acceleration = ...;

Vector calculateTargetSpeed( Vector circleTouch ) {
    Vector targetSpeed = maxSpeed * circleTouch / circleRadius;
    return targetSpeed;
}

Vector calculateNewSpeed( Vector currentSpeed, Vector targetSpeed ) {
    Vector speedDiff = targetSpeed - currentSpeed;
    if (length(speedDiff) <= acceleration) {
        return targetSpeed; // close enough: snap to the target so we don't oscillate around it
    }
    Vector newSpeed = currentSpeed + acceleration * normalized(speedDiff);
    return newSpeed;
}

// Divide v by its length to get a normalized vector (length 1) with the same x/y ratio
Vector normalized( Vector v ) {
    return v / length(v);
}

// Pythagoras for the length of v
Scalar length( Vector v ) {
    Scalar length = sqrt(v.x * v.x + v.y * v.y); // or preferably hypot(v.x, v.y)
    return length;
}
This is just off the top of my head, and I haven't tested it. The other answer is fine; I just wanted to give an answer without trigonometry functions. :)
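For concreteness, here is an untested Python sketch of the same normalized-difference idea; the names and the speed-step value are mine, not from either answer:

from math import hypot

SPEED_STEP = 0.5  # acceleration per tick (assumed value)

def adjust_speed(speed, target):
    """Move speed toward target at the same rate in every direction.
    Both are (x, y) tuples; returns the new speed."""
    dx = target[0] - speed[0]
    dy = target[1] - speed[1]
    d = hypot(dx, dy)
    if d <= SPEED_STEP:        # close enough: snap to the target to avoid jitter
        return target
    k = SPEED_STEP / d         # scales the difference vector to length SPEED_STEP
    return (speed[0] + dx * k, speed[1] + dy * k)

speed = (0.0, 0.0)
for _ in range(5):
    speed = adjust_speed(speed, (3.0, 4.0))
print(speed)  # (1.5, 2.0): a straight line toward (3, 4), 0.5 units per tick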

What is a fast way to constrain a float angle to a range?

For example, I have an angle with value 350 degrees, and I want to constrain it to a range with a max positive offset of 30 and a max negative offset of 40.
As a result, the angle value should be in the range (310, 360) or (0, 20). If the computed angle value is 304, it should be constrained to 310, and if the computed angle value is 30, it should be constrained to 20.
I have already implemented a method, but it's not efficient enough (most of the effort goes into handling the case where the angle value is near the 360/0 boundary). What is a fast way to achieve this?
Function:
// All values are in the range [0.0f, 360.0f]
// Output: the angle value after constraint.
float _KeepAngleValueBetween(float originalAngle, float currentAngle, float MaxPositiveOffset, float MaxNegativeOffset)
For example:
KeepAngleValueBetween(350.0f, 302.0f, 30.0f, 40.0f)
result: 310.0f
KeepAngleValueBetween(350.0f, 40.0f, 30.0f, 40.0f)
result: 20.0f
KeepAngleValueBetween(140.0f, 190.0f, 45.0f, 40.0f)
result: 185.0f
I couldn't come up with a solution that doesn't use if. Anyway, I handle the problem around 0/360 by translating the values before checking if currentAngle is in the desired range.
Pseudo code (Ok, it's C. It is also valid Java. And C++.):
float _KeepAngleValueBetween(float originalAngle, float currentAngle, float MaxPositiveOffset, float MaxNegativeOffset) {
    // Translate so that the undesirable range starts at 0.
    float translateBy = originalAngle + MaxPositiveOffset;
    float result = currentAngle - translateBy + 720.0f;
    result -= ((int)result / 360) * 360;
    float undesiredRange = 360.0f - MaxNegativeOffset - MaxPositiveOffset;
    if (result >= undesiredRange) {
        // No adjustment needed
        return currentAngle;
    }
    // Perform adjustment
    if (result * 2 < undesiredRange) {
        // Return the upper limit because it is closer.
        result = originalAngle + MaxPositiveOffset;
    } else {
        // Return the lower limit.
        result = originalAngle - MaxNegativeOffset + 360.0f;
    }
    // Translate to the range 0-360.
    result -= ((int)result / 360) * 360;
    return result;
}
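For reference, a minimal Python port of the same logic (using % rather than the int-cast trick), checked against the three examples from the question:

def keep_angle_value_between(original, current, max_pos, max_neg):
    # Translate so that the undesirable range starts at 0.
    translate_by = original + max_pos
    result = (current - translate_by + 720.0) % 360.0
    undesired_range = 360.0 - max_neg - max_pos
    if result >= undesired_range:
        return current                           # already inside the allowed range
    if result * 2 < undesired_range:
        return (original + max_pos) % 360.0      # upper limit is closer
    return (original - max_neg + 360.0) % 360.0  # lower limit is closer

assert keep_angle_value_between(350.0, 302.0, 30.0, 40.0) == 310.0
assert keep_angle_value_between(350.0, 40.0, 30.0, 40.0) == 20.0
assert keep_angle_value_between(140.0, 190.0, 45.0, 40.0) == 185.0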

d3js Cluster Force Layout IV block by Mike

I am new to d3js and I'm just starting out.
I am trying the cluster layout example written by Mike in one of his blocks.
https://bl.ocks.org/mbostock/7882658
I got it to work on my machine with my code, but I really don't like blindly copying code without understanding it.
However, I'm having a tough time understanding the math behind the cluster() and collide() functions and how they work.
Could anyone please explain? Thanks for your help!
Let's look at each method and I'll comment it as best I can.
Cluster
First the caller:
function tick(e) {
  node
      .each(cluster(10 * e.alpha * e.alpha)) // for each node on each tick, call the function
                                             // returned by cluster(), passing in the alpha
                                             // cooling parameter
  ...
I won't rehash an explanation here about how the tick event works. The documentation is clear.
The function:
// returns a closure wrapping the cooling
// alpha (so it can be used for every node on the tick)
function cluster(alpha) {
  return function(d) { // d here is the datum on the node
    var cluster = clusters[d.cluster]; // clusters is a hash-map: the key is the index of one of the 10 clusters,
                                       // the value is the largest node in that cluster, which acts as its center
    if (cluster === d) return; // if we are on the center node, do nothing
    var x = d.x - cluster.x, // distance on x from node to center node
        y = d.y - cluster.y, // distance on y from node to center node
        l = Math.sqrt(x * x + y * y), // distance from node to center node (Pythagorean theorem)
        r = d.radius + cluster.radius; // radius of node plus radius of center node (the center node is always the largest one in the cluster)
    if (l != r) { // if the node is not exactly touching the center node
      l = (l - r) / l * alpha; // the fraction of the gap to close this tick; this provides the illusion of moving towards the center on each tick
      d.x -= x *= l; // move node closer to center node
      d.y -= y *= l;
      cluster.x += x; // move center node closer to node
      cluster.y += y;
    }
  };
}
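To see what the update inside cluster() does numerically, here is the same math as a standalone Python sketch (the names and the dict data structure are mine, not d3's):

from math import sqrt

def cluster_step(node, centre, alpha):
    """One relaxation step: pull node and centre toward each other until
    they just touch (distance == sum of radii), a fraction alpha at a time."""
    x = node["x"] - centre["x"]
    y = node["y"] - centre["y"]
    l = sqrt(x * x + y * y)                # current distance between centres
    r = node["radius"] + centre["radius"]  # desired distance (circles just touching)
    if l != r and l > 0:
        k = (l - r) / l * alpha            # signed fraction of the gap closed this step
        x *= k
        y *= k
        node["x"] -= x                     # node moves toward the centre node...
        node["y"] -= y
        centre["x"] += x                   # ...and the centre node moves toward it
        centre["y"] += y

node = {"x": 100.0, "y": 0.0, "radius": 5.0}
centre = {"x": 0.0, "y": 0.0, "radius": 20.0}
cluster_step(node, centre, 0.1)
print(node["x"], centre["x"])  # 92.5 7.5 -- the 75px gap closed by 7.5px at each end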
Collide
The collide function is a bit more complicated. Before we dive into it, you need to understand what a QuadTree is and why Bostock is using it. If you want to determine whether two elements are colliding, the naive algorithm compares each element against every other element in a nested loop. This is, of course, computationally expensive, especially when done on every tick. This is the problem QuadTrees are trying to solve:
A quadtree recursively partitions two-dimensional space into squares, dividing each square into four equally-sized squares. Each distinct point exists in a unique leaf node; coincident points are represented by a linked list. Quadtrees can accelerate various spatial operations, such as the Barnes–Hut approximation for computing many-body forces, collision detection, and searching for nearby points.
What does that mean? First, take a look at this excellent explanation. In my own simplified words it means this: take a 2-D space and divide it into four quadrants. If a quadrant contains four or fewer nodes, stop. If it contains more than four nodes, divide it again into four quadrants. Repeat until each quadrant/sub-quadrant contains four or fewer nodes. Now when we look for collisions, our inner loop no longer iterates over nodes but over quadrants: if a quadrant doesn't collide with our node's bounds, we skip it, and with it every node inside. This is a big optimization. A toy version of this subdivision is sketched below.
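Here is a toy Python sketch of that bucketed subdivision. Note it mirrors my simplified description above, not d3's implementation: per the quoted docs, d3's quadtree actually keeps each distinct point in its own leaf.

def build_quadtree(points, x0, y0, x1, y1, capacity=4):
    """Subdivide the half-open square [x0,x1) x [y0,y1) until each
    leaf holds at most `capacity` points."""
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    # the small-region guard stops coincident points recursing forever
    if len(inside) <= capacity or (x1 - x0) < 1e-9:
        return {"bounds": (x0, y0, x1, y1), "points": inside, "children": None}
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    children = [build_quadtree(inside, x0, y0, mx, my, capacity),  # one child per quadrant
                build_quadtree(inside, mx, y0, x1, my, capacity),
                build_quadtree(inside, x0, my, mx, y1, capacity),
                build_quadtree(inside, mx, my, x1, y1, capacity)]
    return {"bounds": (x0, y0, x1, y1), "points": None, "children": children}

# Example: 50 random points in the unit square
import random
tree = build_quadtree([(random.random(), random.random()) for _ in range(50)],
                      0.0, 0.0, 1.0, 1.0)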
Now onto the code:
// returns a closure wrapping the cooling
// alpha (so it can be used for every node on the tick)
// and the quadtree
function collide(alpha) {
  // create quadtree from our nodes
  var quadtree = d3.geom.quadtree(nodes);
  return function(d) { // d is the datum on the node
    var r = d.radius + maxRadius + Math.max(padding, clusterPadding), // r is the radius of the node circle plus the largest possible padding
        nx1 = d.x - r, // nx1, nx2, ny1, ny2 are the bounds of collision detection on the node
        nx2 = d.x + r,
        ny1 = d.y - r,
        ny2 = d.y + r;
    quadtree.visit(function(quad, x1, y1, x2, y2) { // visit each quadrant
      if (quad.point && (quad.point !== d)) { // if the quadrant holds a point (a node, not just sub-quadrants) and that point is not our current node
        var x = d.x - quad.point.x, // distance on x of node to quad node
            y = d.y - quad.point.y, // distance on y of node to quad node
            l = Math.sqrt(x * x + y * y), // distance of node to quad node (Pythagorean theorem)
            r = d.radius + quad.point.radius + (d.cluster === quad.point.cluster ? padding : clusterPadding); // minimum allowed distance: both radii plus padding
        if (l < r) { // if there is a collision
          l = (l - r) / l * alpha; // re-position nodes
          d.x -= x *= l;
          d.y -= y *= l;
          quad.point.x += x;
          quad.point.y += y;
        }
      }
      // This is important: it checks whether the quadrant intersects the
      // node's bounds. If it does NOT intersect, the expression is true,
      // and returning true tells visit() to skip this quadrant's
      // sub-quadrants and nodes; if it does intersect, false is returned
      // and visit() descends into it.
      return x1 > nx2 || x2 < nx1 || y1 > ny2 || y2 < ny1;
    });
  };
}

Offset Clock Hands Angle Calculation

I have an interesting mathematical problem that I just can't figure out.
I am building a watch face for Android Wear and need to work out the angle of rotation for the hands based on the time.
Ordinarily this would be simple, but here's the kicker: the hands are not central on the clock face.
Let's say I have a clock face that measures 10x10.
My minute hand's pivot point resides at 6,6 (bottom left being 0,0) and my hour hand's pivot resides at 4,4.
How would I work out the angle at any given minute such that the hand always points at the correct minute?
Thanks
Ok, with the help of Nico's answer I've managed to make tweaks and get a working example.
The main changes that needed to be incorporated were changing the order of inputs to the atan2 calculation, as well as tweaks to account for Android's flipped (y-down) coordinate system.
Please see my code below.
//minutes hand rotation calculation
int minute = mCalendar.get(Calendar.MINUTE);
float minutePivotX = mCenterX + minuteOffsetX;
//because of flipped coord system we take the y remainder of the full width instead
float minutePivotY = mWidth - mCenterY - minuteOffsetY;

//calculate target position
double minuteTargetX = mCenterX + mRadius * Math.cos(ConvertToRadians(minute * 6));
double minuteTargetY = mCenterY + mRadius * Math.sin(ConvertToRadians(minute * 6));

//calculate the direction vector from the hand's pivot to the target
double minuteDirectionX = minuteTargetX - minutePivotX;
double minuteDirectionY = minuteTargetY - minutePivotY;

//calculate the angle
float minutesRotation = (float) Math.atan2(minuteDirectionY, minuteDirectionX);
minutesRotation = (float) (minutesRotation * 360 / (2 * Math.PI));
//do this because of flipped coord system
minutesRotation = minutesRotation - 180;
//if less than 0 add 360 so the rotation is clockwise
if (minutesRotation < 0) {
    minutesRotation = minutesRotation + 360;
}

//hours rotation calculations
float hour = mCalendar.get(Calendar.HOUR);
float minutePercentOfHour = (minute / 60.0f);
hour = hour + minutePercentOfHour;
float hourPivotX = mCenterX + hourOffsetX;
//because of flipped coord system we take the y remainder of the full width instead
float hourPivotY = mWidth - mCenterY - hourOffsetY;

//calculate target position
double hourTargetX = mCenterX + mRadius * Math.cos(ConvertToRadians(hour * 30));
double hourTargetY = mCenterY + mRadius * Math.sin(ConvertToRadians(hour * 30));

//calculate the direction vector from the hand's pivot to the target
double hourDirectionX = hourTargetX - hourPivotX;
double hourDirectionY = hourTargetY - hourPivotY;

//calculate the angle
float hoursRotation = (float) Math.atan2(hourDirectionY, hourDirectionX);
hoursRotation = (float) (hoursRotation * 360 / (2 * Math.PI));
//do this because of flipped coord system
hoursRotation = hoursRotation - 180;
//if less than 0 add 360 so the rotation is clockwise
if (hoursRotation < 0) {
    hoursRotation = hoursRotation + 360;
}
This also included a small helper function:
public double ConvertToRadians(double angle)
{
    return (Math.PI / 180) * angle;
}
Thanks for your help all
Just calculate the angle based on the direction vector.
First, calculate the target position. For the minute hand, this could be:
targetX = radius * sin(2 * Pi / 60 * minutes)
targetY = radius * cos(2 * Pi / 60 * minutes)
Then calculate the direction vector from the hand's pivot to the target:
directionX = targetX - pivotX
directionY = targetY - pivotY
And calculate the angle:
angle = atan2(directionX, directionY)
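Putting that recipe together, a minimal Python sketch (in standard math coordinates with y pointing up, so without the Android-specific flipping; the names are mine) might look like:

from math import sin, cos, atan2, pi, degrees

def hand_angle(minutes, pivot_x, pivot_y, radius=1.0):
    """Angle in degrees (0 at 12 o'clock, increasing clockwise) that an
    offset minute hand must point so it tracks the correct minute mark
    on a dial of the given radius centred at the origin."""
    # target position on the dial rim
    target_x = radius * sin(2 * pi / 60 * minutes)
    target_y = radius * cos(2 * pi / 60 * minutes)
    # direction vector from the hand's pivot to the target
    direction_x = target_x - pivot_x
    direction_y = target_y - pivot_y
    # atan2(x, y) rather than atan2(y, x): measures from the y axis (12 o'clock)
    return degrees(atan2(direction_x, direction_y))

# a minute hand pivoted slightly up and right of centre, at quarter past
print(hand_angle(15, 0.1, 0.1))  # a bit over 90 degrees, not exactly 90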

How to calculate both positive and negative angle between two lines?

There is a very handy set of 2d geometry utilities here.
The angleBetweenLines function has a problem, though. The result is always positive. I need to detect both positive and negative angles, so if one line is 15 degrees "above" or "below" the other line, the shape obviously looks different.
The configuration I have is that one line remains stationary, while the other line rotates, and I need to understand what direction it is rotating in, by comparing it with the stationary line.
EDIT: in response to swestrup's comment below, the situation is actually that I have a single line, and I record its starting position. The line then rotates from its starting position, and I need to calculate the angle from its starting position to current position. E.g if it has rotated clockwise, it is positive rotation; if counterclockwise, then negative. (Or vice versa.)
How can I improve the algorithm so that it returns the angle as positive or negative depending on how the lines are positioned?
Here's the implementation of brainjam's suggestion. (It works with my constraints that the difference between the lines is guaranteed to be small enough that there's no need to normalize anything.)
CGFloat angleBetweenLinesInRad(CGPoint line1Start, CGPoint line1End, CGPoint line2Start, CGPoint line2End) {
    CGFloat a = line1End.x - line1Start.x;
    CGFloat b = line1End.y - line1Start.y;
    CGFloat c = line2End.x - line2Start.x;
    CGFloat d = line2End.y - line2Start.y;
    CGFloat atanA = atan2(a, b);
    CGFloat atanB = atan2(c, d);
    return atanA - atanB;
}
I like that it's concise. Would the vector version be more concise?
duffymo's answer is correct, but if you don't want to implement the cross-product, you can use the atan2 function. This returns an angle between -π and π, and you can use it on each of the lines (or more precisely on the vectors representing the lines).
If you get an angle θ for the first (stationary line), you'll have to normalize the angle φ for the second line to be between θ-π and θ+π (by adding ±2π). The angle between the two lines will then be φ-θ.
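A rough Python sketch of that normalization (my own phrasing of the suggestion, independent of the CGPoint version above):

from math import atan2, pi

def signed_angle_between(stationary, rotating):
    """Signed angle in radians from stationary to rotating, each line given
    as ((x1, y1), (x2, y2)), normalized into (-pi, pi]."""
    (sx1, sy1), (sx2, sy2) = stationary
    (rx1, ry1), (rx2, ry2) = rotating
    theta = atan2(sy2 - sy1, sx2 - sx1)  # angle of the stationary line
    phi = atan2(ry2 - ry1, rx2 - rx1)    # angle of the rotating line
    diff = phi - theta
    while diff > pi:                     # bring the difference into (-pi, pi]
        diff -= 2 * pi
    while diff <= -pi:
        diff += 2 * pi
    return diff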
This is an easy problem involving 2D vectors. The sine of the angle between two vectors is related to the cross-product between them. And "above" or "below" is determined by the sign of the z-component of the vector produced by the cross-product: if you cross two vectors A and B and the result is positive, then A is "below" B; if it's negative, A is "above" B. See Mathworld for details.
Here's how I might code it in Java:
package cruft;

import java.text.DecimalFormat;
import java.text.NumberFormat;

/**
 * VectorUtils
 * User: Michael
 * Date: Apr 18, 2010
 * Time: 4:12:45 PM
 */
public class VectorUtils
{
    private static final int DEFAULT_DIMENSIONS = 3;
    private static final NumberFormat DEFAULT_FORMAT = new DecimalFormat("0.###");

    public static void main(String[] args)
    {
        double[] a = { 1.0, 0.0, 0.0 };
        double[] b = { 0.0, 1.0, 0.0 };
        double[] c = VectorUtils.crossProduct(a, b);
        System.out.println(VectorUtils.toString(c));
    }

    public static double[] crossProduct(double[] a, double[] b)
    {
        assert ((a != null) && (a.length >= DEFAULT_DIMENSIONS) && (b != null) && (b.length >= DEFAULT_DIMENSIONS));
        double[] c = new double[DEFAULT_DIMENSIONS];
        c[0] = +a[1] * b[2] - a[2] * b[1];
        c[1] = +a[2] * b[0] - a[0] * b[2];
        c[2] = +a[0] * b[1] - a[1] * b[0];
        return c;
    }

    public static String toString(double[] a)
    {
        StringBuilder builder = new StringBuilder(128);
        builder.append("{ ");
        for (double c : a)
        {
            builder.append(DEFAULT_FORMAT.format(c)).append(' ');
        }
        builder.append("}");
        return builder.toString();
    }
}
Check the sign of the 3rd component. If it's positive, A is "below" B; if it's negative, A is "above" B - as long as the two vectors are in the two quadrants to the right of the y-axis. Obviously, if they're both in the two quadrants to the left of the y-axis the reverse is true.
You need to think about your intuitive notions of "above" and "below". What if A is in the first quadrant (0 <= θ <= 90) and B is in the second quadrant (90 <= θ <= 180)? "Above" and "below" lose their meaning.
The line then rotates from its starting position, and I need to calculate the angle from its starting position to current position. E.g if it has rotated clockwise, it is positive rotation; if counterclockwise, then negative. (Or vice versa.)
This is exactly what the cross-product is for. The sign of the 3rd component is positive for counter-clockwise and negative for clockwise (as you look down at the plane of rotation).
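In 2D only the z-component matters, so a minimal sketch (my own condensation, assuming standard y-up coordinates; the sign flips in y-down screen coordinates) is:

def cross_z(ax, ay, bx, by):
    """z-component of the 3D cross product of 2D vectors A and B."""
    return ax * by - ay * bx

print(cross_z(1, 0, 0, 1))  #  1: B is counter-clockwise from A
print(cross_z(0, 1, 1, 0))  # -1: B is clockwise from A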
One 'quick and dirty' method you can use is to introduce a third reference line R. So, given two lines A and B, calculate the angles between A and R and then B and R, and subtract them.
This does about twice as much calculation as is actually necessary, but is easy to explain and debug.
// Considering two vectors CA and BA,
// computing the angle from CA to BA.
// Thanks to code shared by Jaanus, but atan2(y,x) was used wrongly there.
float getAngleBetweenVectorsWithSignInDeg(Point2f C, Point2f A, Point2f B)
{
    float a = A.x - C.x;
    float b = A.y - C.y;
    float c = B.x - C.x;
    float d = B.y - C.y;
    float angleA = atan2(b, a);
    float angleB = atan2(d, c);
    cout << "angleA: " << angleA << " rad, " << angleA * 180 / M_PI << " deg" << endl;
    cout << "angleB: " << angleB << " rad, " << angleB * 180 / M_PI << " deg" << endl;
    float rotationAngleRad = angleB - angleA;
    float thetaDeg = rotationAngleRad * 180.0f / M_PI;
    return thetaDeg;
}
That function works in radians.
There are 2π radians in a full circle (360 degrees).
Thus I believe the answer you are looking for is simply the returned value minus 2π.
If you are asking to have that one function return both values at the same time, then you are asking to break the language: a function can only return a single value. You could pass it two pointers that it can use to set the values, so that the change persists after the function ends and your program can continue to work. But that's not really a sensible way of solving this problem.
Edit
Just noticed that the function actually converts the radians to degrees as it returns the value, but the same principle will work.

correcting fisheye distortion programmatically

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1) I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2) I also see that my undistortion is imperfect on some lenses -- presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae, please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link): the original image, with fisheye distortion to fix.
(2) DESTINATION (original photo link): the corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
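As an illustration, here is a minimal Python sketch of that correction; f is an assumed, pre-estimated focal length, and the input must satisfy R_d < 2f (with the recovered theta below π/2) for the formulas to apply:

from math import tan, asin

def undistort_radius(r_d, f):
    """Map a distorted (fisheye) radius R_d to the rectilinear radius R_u,
    using R_d = 2*f*sin(theta/2) and R_u = f*tan(theta)."""
    theta = 2 * asin(r_d / (2.0 * f))  # angle of the ray from the optical axis
    return f * tan(theta)

# e.g. a point 100 px from the centre, with an assumed focal length of 417 px
print(undistort_radius(100.0, 417.0))  # a slightly larger radius, ~102 px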
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the errata). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python
# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) < 11:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial and error: I don't fundamentally grasp why this code works. Explanations and improved accuracy are appreciated!
from math import sqrt, atan

def dist(x, y):
    return sqrt(x*x + y*y)

def correct_fisheye(src_size, dest_size, dx, dy, factor):
    """ returns a tuple of source coordinates (sx, sy)
        (note: values can be out of range) """
    # convert dx,dy to relative coordinates
    rx, ry = dx - (dest_size[0]/2), dy - (dest_size[1]/2)
    # calc theta
    r = dist(rx, ry) / (dist(src_size[0], src_size[1]) / factor)
    if 0 == r:
        theta = 1.0
    else:
        theta = atan(r) / r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2) + theta*rx, (src_size[1]/2) + theta*ry
    # done
    return (int(round(sx)), int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation). (The before/after example images were links, now dead; see the blog post linked above for a comparison.)
If you think your formulas are exact, you can compute an exact conversion with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2 + 1) * (sqrt([Rout/f]^2 + 1) + 1) / 2 )
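A quick numeric round-trip check of the forward formula and this corrected inverse, in Python:

from math import sqrt

def forward(u):
    """u = Rin/f -> Rout/f, per the formula derived above."""
    return u * sqrt(1 - u*u/4) / (1 - u*u/2)

def inverse(v):
    """v = Rout/f -> Rin/f, per the corrected inverse."""
    s = sqrt(v*v + 1)
    return v / sqrt(s * (s + 1) / 2)

print(inverse(forward(0.5)))  # ~0.5 again, confirming the two are inverses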
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin

def dist(x, y):
    return sqrt(x*x + y*y)

def fisheye_to_rectilinear(src_size, dest_size, sx, sy, crop_factor, zoom):
    """ returns a tuple of dest coordinates (dx, dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image """
    # convert sx,sy to relative coordinates
    rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
    r = dist(rx, ry)
    if r == 0:
        # the centre pixel maps straight to the centre of the output
        return (int(dest_size[0]/2), int(dest_size[1]/2))
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0], src_size[1]) * crop_factor / pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin(r / (2 * f)) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2) + rx/r*nr, (dest_size[1]/2) + ry/r*nr
    # done
    return (int(round(dx)), int(round(dy)))
def fisheye_auto_zoom(src_size, dest_size, crop_factor):
    """ calculate zoom such that the left edge of the source image
        matches the left edge of the dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
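A hypothetical call might look like this (the sizes and crop factor are made-up values):

src_size = (1600, 1200)    # fisheye source, pixels
dest_size = (1600, 1200)   # rectilinear output, pixels
crop_factor = 1.0          # assume the image circle's diameter equals the source diagonal

zoom = fisheye_auto_zoom(src_size, dest_size, crop_factor)
# map one source pixel forward to its rectilinear destination
dx, dy = fisheye_to_rectilinear(src_size, dest_size, 200, 600, crop_factor, zoom)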
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
I found this pdf file and I have verified that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest co-efficients that OpenCV has available but I am sure that it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves over image features that are supposed to be straight lines and store the contour points for each of those curves. Now we can solve for the fisheye matrix as a minimization problem: minimize the curvature of those point sets, and the result is the fisheye matrix. It works.
It can also be done manually by adjusting the fisheye matrix using trackbars! Here is a fisheye GUI code using OpenCV for manual calibration.
