I want to plot a simple 3D object in Scilab. To understand the way Scilab works in that regard, I wrote the following example:
xx = [[2;2;1;3;],[2;2;3;3],[2;2;3;1],[2;2;1;1],[1;3;3;1],[3;3;3;3],[3;3;1;1],[1;1;1;1],[1;2;2;3],[1;1;2;2],[3;2;2;3],[3;2;2;1]]
yy = [[2;2;1;1;],[2;2;1;3],[2;2;3;3],[2;2;3;1],[1;1;1;1],[1;3;3;1],[3;3;3;3],[3;1;1;3],[1;2;2;1],[1;3;2;2],[1;2;2;3],[3;2;2;3]]
zz = [[0;0;1;1;],[0;0;1;1],[0;0;1;1],[0;0;1;1],[1;1;2;2],[1;1;2;2],[1;2;2;1],[1;1;2;2],[2;3;3;2],[2;2;3;3],[2;3;3;2],[2;3;3;2]]
col = ones(12,1)*3
plot3d(xx,yy,list(zz,col))
//h = get("hdl")
//h.hiddencolor = -1 // backside and frontside same color
with the following result:
While the structure is absolutely fine, the coloring on 2 faces is inside out. I tried to draw the points of the affected faces in different ways (counterclockwise/clockwise, different starting points, etc.), but the faces seem to stay oriented inwards, towards the structure. I found a workaround by setting the backside of the faces equal to the frontside (the 2 commented lines in the code), but I want to understand how Scilab determines the orientation of the faces for later work. Any clues?
EDIT:
So I tried PTRK's suggestions. While his provided matrices are definitely different:
The result is still the same. Even the output of the provided test script is different:
Perhaps that's some kind of version/system issue? I'm using Scilab 6.0.0 on Windows 10.
Let a surface be defined by 3 nodes [P1, P2, P3]. Then you must cycle clockwise through these nodes to have the right orientation of inside and outside. Here is a drawing explaining it:
3 of your polygons are defined counterclockwise: those with y = 1, y = 3 and x = 1. When drawing 4-point polygons, to switch the rotation from clockwise to counterclockwise, just swap the 2nd and 4th nodes, or the 1st and 3rd.
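If you want to check a face's winding programmatically, the cross product of two edge vectors gives the normal implied by the vertex order, and swapping the 2nd and 4th vertices flips its sign. Here is a small illustrative check (in Python/numpy rather than Scilab, purely to show the idea; face_normal is just a throwaway helper), using the y = 1 face from the question's original matrices:
import numpy as np

def face_normal(p1, p2, p3):
    # normal implied by the vertex order: cross product of two edge vectors
    return np.cross(np.subtract(p2, p1), np.subtract(p3, p1))

# the y = 1 face from the question, as (x, y, z) vertices
face = [(1, 1, 1), (3, 1, 1), (3, 1, 2), (1, 1, 2)]
print(face_normal(*face[:3]))        # normal implied by the original order

flipped = [face[0], face[3], face[2], face[1]]   # swap the 2nd and 4th vertices
print(face_normal(*flipped[:3]))     # same normal, opposite sign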
Thus you must set your points as:
xx = [[2;2;1;3;],[2;2;3;3],[2;2;3;1],[2;2;1;1],[1;1;3;3],[3;3;3;3],[3;3;1;1],[1;1;1;1],[1;2;2;3],[1;1;2;2],[3;2;2;3],[3;2;2;1]]
yy = [[2;2;1;1;],[2;2;1;3],[2;2;3;3],[2;2;3;1],[1;1;1;1],[1;1;3;3],[3;3;3;3],[3;3;1;1],[1;2;2;1],[1;3;2;2],[1;2;2;3],[3;2;2;3]]
zz = [[0;0;1;1;],[0;0;1;1],[0;0;1;1],[0;0;1;1],[1;2;2;1],[1;2;2;1],[1;2;2;1],[1;2;2;1],[2;3;3;2],[2;2;3;3],[2;3;3;2],[2;3;3;2]]
This will give the desired output:
Scilab 6.0.0 bug
In this version, if your surfaces are parallel to the Cartesian axes, then Scilab will orient them along the axis, no matter how you defined them; hence your problem. A workaround could be to offset one of the coordinates by a small delta, which must not be too small, as shown in the example below.
Regarding your problem, if we want to keep the geometry of your object, we could tilt it by a tiny angle using a rotation matrix, provided the computational cost induced by rotating all the coordinates doesn't bother you. Here's your script with the tilted object:
clc
clear
xdel(winsid())
xx = [[2;2;1;3;],[2;2;3;3],[2;2;3;1],[2;2;1;1],[1;1;3;3],[3;3;3;3],[3;3;1;1],[1;1;1;1],[1;2;2;3],[1;1;2;2],[3;2;2;3],[3;2;2;1]]
yy = [[2;2;1;1;],[2;2;1;3],[2;2;3;3],[2;2;3;1],[1;1;1;1],[1;1;3;3],[3;3;3;3],[3;3;1;1],[1;2;2;1],[1;3;2;2],[1;2;2;3],[3;2;2;3]]
zz = [[0;0;1;1;],[0;0;1;1],[0;0;1;1],[0;0;1;1],[1;2;2;1],[1;2;2;1],[1;2;2;1],[1;2;2;1],[2;3;3;2],[2;2;3;3],[2;3;3;2],[2;3;3;2]]
col = ones(12,1)*3
figure(1)
set(gcf(),'background',-2)
subplot(2,1,1)
plot3d(xx,yy,list(zz,col))
title('Object with surfaces orthogonal to cartesian axis')
subplot(2,1,2)
// t is angle in radian showing the tilt
t = %pi/10000
c = cos(t)
s = sin(t)
rot = [1,0,0;0,c,-s;0,s,c]*[c,0,s;0,1,0;-s,0,c]*[c,-s,0;s,c,0;0,0,1]
for i=1:size(xx,1)
    for j = 1:size(xx,2)
        xyz = rot*[xx(i,j); yy(i,j); zz(i,j)]
        x(i,j) = xyz(1)
        y(i,j) = xyz(2)
        z(i,j) = xyz(3)
    end
end
plot3d(x,y,list(z,col))
title('Object with surfaces tilted by an angle of '+string(t)+' rad')
Script showing 2 surfaces defined by the same nodes but in opposite order.
clc
clear
xdel(winsid())
figure(1)
set(gcf(),'background',-2)
cr=color('red') // color of the outside surface
P1 = [0,0,0] //
P2 = [0,1,0]
P3 = [1,0,0]
F1 = [P1;P2;P3] // defining surface clockwise
F2 = [P1;P3;P2] // counterclockwise
subplot(2,2,1)
plot3d(F1(:,1),F1(:,2),list(F1(:,3),cr*ones(F1(:,3))))
xstring(F1(:,1),F1(:,2),['P1','P2','P3'])
title('surface is [P1,P2,P3] with z_P3=0')
set(gca(),'data_bounds',[0,1,0,1,-1,1])
subplot(2,2,2)
plot3d(F2(:,1),F2(:,2),list(F2(:,3),cr*ones(F2(:,3))))
xstring(F2(:,1),F2(:,2),['P1','P3','P2'])
title('surface is [P1,P3,P2] with z_P3=0, broken with Scilab 6.0.0')
set(gca(),'data_bounds',[0,1,0,1,-1,1])
subplot(2,2,3)
plot3d(F2(:,1),F2(:,2),list(F2(:,3)+[0;0;10^-7],cr*ones(F2(:,3))))
xstring(F2(:,1),F2(:,2),['P1','P3','P2'])
title('surface is [P1,P3,P2] with |z_P3| < 10^-8')
set(gca(),'data_bounds',[0,1,0,1,-1,1])
subplot(2,2,4)
plot3d(F2(:,1),F2(:,2),list(F2(:,3)+[0;0;10^-8],cr*ones(F2(:,3))))
xstring(F2(:,1),F2(:,2),['P1','P3','P2'])
title('surface is [P1,P3,P2] with |z_P3| = 10^-8, broken in 6.0.0')
set(gca(),'data_bounds',[0,1,0,1,-1,1])
Scilab 5.5.1
Scilab 6.0.0
Related
I am very sorry for asking a question that is probably very easy if you know how to solve it, and where many versions of the same question have been asked before. However, I am creating a new post since I have not found an answer to this specific question.
Basically, I have a 200cm x 200cm square that I am recording with a camera above it. However, the camera distorts the square slightly, see example here. I am wondering how I go from the x,y coordinates in the camera to real-life x,y coordinates (e.g., between 0-200 cm for each side). I understand that I probably need to apply some kind of transformation matrix, but I do not know which one, nor how to determine it. I haven't done any serious linear algebra in a long time, so I appreciate any pointers on what to read up on, or how to get it done. I am working in Python, so if there is some ready-made code for doing the transformation, that would also be useful to know.
Thanks a lot!
I will show this using python and numpy.
import numpy as np
First, you have to understand the projection model: a point p1 = (x, y, 1) on the world plane is mapped to image coordinates p2 by p2 ~ H·p1 (equality up to scale), where H is a 3x3 homography matrix. Applying it looks like this:
def apply_homography(H, p1):
    p = H @ p1.T
    return (p[:2] / p[2]).T
With some algebraic manipulation you can determine the points at the plane z=1 that produced the given points.
def revert_homography(H, p2):
    Hb = np.linalg.inv(H)
    # figure out which z coordinate should be added to p2 in
    # order to get z=1 for p1
    z = 1/(Hb[2,2] + (Hb[2,0] * p2[:,0] + Hb[2,1]*p2[:,1]))
    p2 = np.hstack([p2[:,:2] * z[:,None], z[:, None]])
    return p2 @ Hb.T
The projection is not invertible, but under the coplanarity assumption it can be inverted successfully.
Now, let's see how to determine the H matrix from the given points (assuming they are coplanar).
If you have the four corners in order, you can simply specify the (x,y) coordinates of each corner, and then you can use the projection equations to determine the homography matrix, like here, or here.
This would require at least 5 points, as there are 9 coefficients, but we can fix one element of the matrix (H[2,2] = 1) and make the system inhomogeneous, so the 8 remaining coefficients are determined by 4 point correspondences.
def find_homography(p1, p2):
    A = np.zeros((8, 2*len(p1)))
    # x2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,0::2] = p1[:,0] * p2[:,0]
    A[7,0::2] = p1[:,1] * p2[:,0]
    # - (H[0,0]*x1 + H[0,1]*y1 + H[0,2])
    A[0,0::2] = -p1[:,0]
    A[1,0::2] = -p1[:,1]
    A[2,0::2] = -1
    # y2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,1::2] = p1[:,0] * p2[:,1]
    A[7,1::2] = p1[:,1] * p2[:,1]
    # - (H[1,0]*x1 + H[1,1]*y1 + H[1,2])
    A[3,1::2] = -p1[:,0]
    A[4,1::2] = -p1[:,1]
    A[5,1::2] = -1
    # assuming H[2,2] = 1 we can pass its coefficient
    # to the independent term, making the system inhomogeneous
    b = np.zeros(2*len(p2))
    b[0::2] = -p2[:,0]
    b[1::2] = -p2[:,1]
    h = np.ones(9)
    h[:8] = np.linalg.lstsq(A.T, b, rcond=None)[0]
    return h.reshape(3,3)
Here is a complete usage example. I pick a random H and transform four random points; this is what you would have. I show how to find the transformation matrix H_. Next I create a test set of points, and I show how to find the world coordinates from the image coordinates.
# Pick a random Homography
H = np.random.rand(3,3)
H[2,2] = 1
# Pick a set of random points
p1 = np.random.randn(4, 3);
p1[:,2] = 1;
# The coordinates of the points in the image
p2 = apply_homography(H, p1)
# testing
# Create a set of random points
p_test = np.random.randn(20, 3)
p_test[:,2] = 1;
p_test2 = apply_homography(H, p_test)
# Now using only the corners find the homography
# Find a homography transform
H_ = find_homography(p1, p2)
assert np.allclose(H, H_)
# Predict the plane points for the test points
p_test_predicted = revert_homography(H_, p_test2)
assert np.allclose(p_test_predicted, p_test)
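To connect this back to the original question: with the functions above, the four measured corner pixels of the 200 cm x 200 cm square give you the homography, and any further pixel measurement can then be mapped to centimetres. The pixel values below are made-up placeholders; your real measured corners would go there instead.
# world corners of the square in cm, plus the homogeneous 1
corners_world = np.array([[0, 0, 1],
                          [200, 0, 1],
                          [200, 200, 1],
                          [0, 200, 1]], dtype=float)
# the same corners as seen by the camera, in pixels (placeholder values)
corners_image = np.array([[102.0, 88.0],
                          [512.0, 95.0],
                          [498.0, 470.0],
                          [115.0, 460.0]])
H_cam = find_homography(corners_world, corners_image)
# map any other pixel measurement back to cm on the square
pixels = np.array([[300.0, 250.0]])          # placeholder pixel point
world_cm = revert_homography(H_cam, pixels)[:, :2]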
I've been trying to figure out the 2D rotation value as seen from orthographic "top" view for a 3D object with XYZ rotation values in Maya. Maybe another way to ask this could be: I want to figure out the 2D rotation of a 3D obj's direction.
Here is a simple image to illustrate my question:
I've tried methods like getting the twist value of an object using quaternions (script pasted below), from this post I found: Component of a quaternion rotation around an axis.
If I set the quaternion's X and Z values to zero, this method works halfway: I can get the correct 2D rotation even when the object is rotated in both the X and Y axes, but when it is rotated in all 3 axes, the result is wrong.
I am pretty new to all the quaternion and vector calculations, so I've been having difficulty trying to wrap my head around it. ;)
import math
import maya.api.OpenMaya as om2

# findOrthonormals(), rotateByQuaternion() and getMQuaternion() are helpers defined
# elsewhere in the script (see the linked post)
def quaternionTwist(q, axisVec):
    axisVec.normalize()
    # Get the plane the axisVec is a normal of
    orthonormal1, orthonormal2 = findOrthonormals(axisVec)
    transformed = rotateByQuaternion(orthonormal1, q)
    # Project transformed vector onto plane
    flattened = transformed - ((transformed * axisVec) * axisVec)
    flattened.normalize()
    # Get angle between original vector and projected transform to get angle around normal
    angle = math.acos(orthonormal1 * flattened)
    return math.degrees(angle)

q = getMQuaternion(obj)
# Zero out X and Z since we are only interested in the Y axis.
q.x = 0
q.z = 0
up = om2.MVector.kYaxisVector
angle = quaternionTwist(q, up)
Can you get the (x,y,z) coordinates of the rotated vector? Once you have them use the (x,y) values to find the angle with atan2(y,x).
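A minimal numpy sketch of this idea (outside Maya), assuming the quaternion is stored as a normalized (x, y, z, w); the rotate_by_quat helper and the example quaternion value are just for illustration:
import numpy as np

def rotate_by_quat(q, v):
    # v' = v + 2*u x (u x v + w*v) for a unit quaternion q = (x, y, z, w)
    u, w = np.asarray(q[:3]), q[3]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

q = np.array([0.0, 0.38268, 0.0, 0.92388])   # placeholder: roughly 45 degrees about Y
forward = np.array([0.0, 0.0, 1.0])          # the object's local "forward" axis
rotated = rotate_by_quat(q, forward)
# generic 2D case: atan2(y, x); for Maya's Y-up top view use the X and Z components,
# as in the working snippet further down
angle = np.degrees(np.arctan2(rotated[0], rotated[2]))
print(angle)                                 # ~45 for this placeholder quaternion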
I'm not familiar with the framework you're using, but if it does what it seems, I think you're almost there. Just don't zero out the X and Z components of the quaternion before calling quaternionTwist().
The quaternions q1 = (x,y,z,w) and q2 = (0, y, 0, w) don't represent the same rotation about the y-axis, especially since q2 written this way becomes unnormalized, so what you're really comparing is (x,y,z,w) with (0, y/|q2|, 0, w/|q2|) where |q2| = sqrt(y^2 + w^2).
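A quick numerical check of that point (the component values below are made up):
import math

# an arbitrary rotation with components about all three axes, normalized
x, y, z, w = 0.3, 0.5, 0.2, 0.79
n = math.sqrt(x*x + y*y + z*z + w*w)
x, y, z, w = x/n, y/n, z/n, w/n

# zeroing x and z leaves a quaternion of length sqrt(y^2 + w^2), which is not 1
n2 = math.sqrt(y*y + w*w)
print(n2)

# and even after renormalizing, the rotation it encodes differs from q1's
print(math.degrees(2 * math.acos(w)))        # rotation angle of q1
print(math.degrees(2 * math.acos(w / n2)))   # rotation angle of (0, y, 0, w) / |q2|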
Here is working code for Maya, using John Alexiou's answer:
matrix = dagPath.inclusiveMatrix() #OpenMaya dagPath for an object
axis = om2.MVector.kZaxisVector
v = (axis * matrix).normal()
angle = math.atan2(v.x, v.z) #2D angle on XZ plane
My task is to produce the plot of a 2-dimensional function in real time using nothing but linear algebra and color (imagine having to compute an image buffer in plain C++ from a function definition, for example f(x,y) = x^2 + y^2). The output should be something like this 3d plot.
So far I have tried 3 approaches:
1: Ray tracing:
Divide the (x,y) plane into triangles, find the z-value of each vertex, and thus divide the plot surface into triangles. Intersect each ray with these triangles.
2: Sphere tracing:
a method for rendering implicit surfaces described here.
3: Rasterization:
The inverse of (1). Split the plot into triangles, project them onto the camera plane, loop over the pixels of the canvas and for each one choose the "closest" projected pixel.
All of these are way too slow. Part of my assignment is moving the camera around, so the plot has to be re-rendered in each frame. Please point me towards another source of information, another algorithm, or any kind of help. Thank you.
EDIT
As pointed out, here is the pseudocode for my very basic rasterizer. I am aware that this code might not be flawless, but it should resemble the general idea. However, when splitting my plot into 200 triangles (which I do not expect to be enough) it already runs very slowly, even without rendering anything. I am not even using a depth buffer for visibility. I just wanted to test the speed by setting up a frame buffer as follows:
NOTE: In the JavaScript framework I am using, _ denotes array indexing and a..b composes a list from a to b.
/*
* Raster setup.
* The raster is a pxH x pxW array.
* Raster coordinates might be negative or larger than the array dimensions.
* When rendering (i.e. filling the array) positions outside the visible raster will not be filled (i.e. colored).
*/
pxW := Width of the screen in pixels.
pxH := Height of the screen in pixels.
T := Transformation matrix of homogeneous world points to raster space.
// Buffer setup.
colBuffer = apply(1..pxW, apply(1..pxH, 0)); // pxH x pxW array of black pixels.
// Positive/0 if the point is on the right side of the line (V1,V2)/exactly on the line.
// p2D := point to test.
// V1, V2 := two vertices of the triangle.
edgeFunction(p2D, V1, V2) := (
    det([p2D-V1, V2-V1]);
);
fillBuffer(V0, V1, V2) := (
    // Dehomogenize.
    hV0 = V0/(V0_3);
    hV1 = V1/(V1_3);
    hV2 = V2/(V2_3);
    // Find boundaries of the triangle in raster space.
    xMin = min(hV0.x, hV1.x, hV2.x);
    xMax = max(hV0.x, hV1.x, hV2.x);
    yMin = min(hV0.y, hV1.y, hV2.y);
    yMax = max(hV0.y, hV1.y, hV2.y);
    xMin = floor(if(xMin >= 0, xMin, 0));
    xMax = ceil(if(xMax < pxW, xMax, pxW));
    yMin = floor(if(yMin >= 0, yMin, 0));
    yMax = ceil(if(yMax < pxH, yMax, pxH));
    // Check for all points "close to" the triangle in raster space whether they lie inside it.
    forall(xMin..xMax, x, forall(yMin..yMax, y, (
        p2D = (x,y);
        i = edgeFunction(p2D, hV0.xy, hV1.xy) * edgeFunction(p2D, hV1.xy, hV2.xy) * edgeFunction(p2D, hV2.xy, hV0.xy);
        if (i > 0, colBuffer_y_x = 1); // Fill all points inside the triangle with some placeholder.
    )));
);
mapTrianglesToScreen() := (
    tvRaster = homogVerts * T; // Triangle vertices in raster space.
    forall(1..(length(tvRaster)/3), i, (
        actualI = (i - 1) * 3 + 1; // index of the first vertex of the i-th triangle
        fillBuffer(tvRaster_actualI, tvRaster_(actualI + 1), tvRaster_(actualI + 2));
    ));
);
// After all this, render the colBuffer.
What is wrong with this approach? Why is it so slow?
Thank you.
I would go with #3. It is really not that complex, so you should obtain > 20 fps on a standard machine with a pure SW rasterizer (without any libs) if coded properly. My bet is you are using some slow API like PutPixel or SetPixel, or doing some crazy thing; without seeing code or a better description of how you do it, it is hard to elaborate (but see the sketch after the links below). All the info you need to do this is in here:
Algorithm to fill triangle
HSV histogram
Understanding 4x4 homogenous transform matrices
Do look also in the sub-links in each ...
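To illustrate how much the per-pixel inner loop matters, here is a sketch of the same edge-function fill as in the question, but written with numpy so the per-pixel work runs in compiled array operations rather than an interpreted loop; the buffer size and the test triangle are placeholders, and this is just one way to do it, not the only one:
import numpy as np

pxW, pxH = 640, 480
col_buffer = np.zeros((pxH, pxW), dtype=np.uint8)

def fill_triangle(buf, v0, v1, v2, color=255):
    # fill one 2D triangle into buf using edge functions, vectorized over its bounding box
    xs = np.array([v0[0], v1[0], v2[0]])
    ys = np.array([v0[1], v1[1], v2[1]])
    x_min = int(max(np.floor(xs.min()), 0))
    x_max = int(min(np.ceil(xs.max()), buf.shape[1] - 1))
    y_min = int(max(np.floor(ys.min()), 0))
    y_max = int(min(np.ceil(ys.max()), buf.shape[0] - 1))
    if x_min > x_max or y_min > y_max:
        return
    X, Y = np.meshgrid(np.arange(x_min, x_max + 1), np.arange(y_min, y_max + 1))
    def edge(a, b):
        return (X - a[0]) * (b[1] - a[1]) - (Y - a[1]) * (b[0] - a[0])
    e0, e1, e2 = edge(v0, v1), edge(v1, v2), edge(v2, v0)
    # inside if all three edge functions have the same sign (handles both windings)
    inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
    buf[Y[inside], X[inside]] = color

fill_triangle(col_buffer, (50.0, 40.0), (300.0, 80.0), (120.0, 400.0))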
Ok, I know this sounds really daft to be asking here, but it is programming related.
I'm working on a game, and I'm thinking of implementing a system that allows users to triangulate their 3D coordinates to locate something (eg for a task).
I also want to be able to let the user make the coordinates of the points they are using for triangulation have user-determined coordinates (so the location's coordinate is relative, probably by setting up a beacon or something).
I have a method in place for calculating the distance between the points, so essentially I can calculate the lengths of the sides of the triangle/pyramid as well as all but the coordinate I am after.
It has been a long time since I have done any trigonometry and I am rusty with the sin, cos and tan functions; I have a feeling they are required but have no clue how to implement them.
Can anyone give me a demonstration of how I would go about doing this in a mathematical/programmatic way?
extra info:
My function returns the exact distance between the two points, so say you set two points to 0,0,0 and 4,4,0 respectively, and those points are set to scale (the game world is divided into a very large 3D grid, with each 'block' area being represented by a 3D coordinate), then it would give back a value of around 5.6.
The key point about it varying is that the user can set the points, so say they set a point to read 0,0,0, the actual location could be something like 52, 85, 93. However, providing they then count the blocks and set their other points correctly (eg, set a point 4,4,0 at the real point 56, 89, 93) then the final result will return the relative position (eg the object they are trying to locate is at real point 152, 185, 93, it will return the relative value 100,100,0). I need to be able to calculate it knowing every point but the one it's trying to locate, as well as the distances between all points.
Also, please don't ask why I can't just calculate it by using the real coordinates; I'm hoping to show the equation up on screen as it calculates the result.
Example:
Here is a diagram
Imagine these are points in my game on a flat plain.
I want to know the point f.
I know the values of points d and e, and the sides A,B and C.
Using only the data I know, I need to find out how to do this.
Answered Edit:
After many days of working on this, Sean Kenny has provided me with his time, patience and intellect, and thus I have now got a working implementation of a triangulation method.
I hope to place the different language equivalents of the code as I test them so that future coders may use this code and not have the same problem I have had.
I spent a bit of time working on a solution, but I think the implementer, i.e. you, should know what it's doing, so any errors encountered can be tackled later on. As such, I'll give my answer in the form of strong hints.
First off, we have a vector from d to e which we can work out: if we consider the coordinates as position vectors rather than absolute coordinates, how can we determine the vector pointing from d to e? Think about how you would determine the displacement you had moved if you only knew where you started and where you ended up. Displacement is a straight line, point A to B, no deviation; not "I had to walk around that house, so I walked further." A straight line. If you started at the point (0,0) it would be easy.
Secondly, the cosine rule. Do you know what it is? If not, read up on it. How can we rearrange the form given in the link to find the angle d between vectors DE and DF? Remember you need the angle, not a function of the angle (cos is a function remember).
Next we can use a vector 'trick' called the scalar product. Notice there is a cos function in there. Now, you may be thinking, we've just found the angle, why are we doing it again?
Define DQ = [1,0]. DQ is a vector of length 1, a unit vector, along the x-axis. Which other vector do we know? Do we know of two position vectors?
Once we have two vectors (I hope you worked out the other one) we can use the scalar product to find the angle; again, just the angle, not a function of it.
Now, hopefully, we have 2 angles. Could we take one from the other to get yet another angle to our desired coordinate DF? The choice of using a unit vector earlier was not arbitrary.
The scalar product, after some cancelling, gives us this: cos(theta) = x / r
Where x is the x ordinate for F and r is the length of side A.
The end result being:
theta = arccos( xe / B ) - arccos( ( (A^2) + (B^2) - (C^2) ) / ( 2*A*B ) )
Where theta is the angle formed between a unit vector along the line y = 0 where the origin is at point d.
With this information we can find the x and y coordinates of point f relative to d. How?
Again, with the scalar product. The rest is fairly easy, so I'll give it to you.
x = r.cos(theta)
y = r.sin(theta)
From basic trigonometry.
I wouldn't advise trying to code this into one value.
Instead, try this:
//pseudo code
dx = 0
dy = 0 //initialise coordinates somehow
ex = ex
ey = ey
A = A
B = B
C = C
cosd = ex / B
cosfi = ((A^2) + (B^2) - (C^2)) / ( 2*A*B)
d = acos(cosd) //acos is a method in java.math
fi = acos(cosfi) //you will have to find an equivalent in your chosen language
//look for a method of inverse cos
theta = fi - d
x = A cos(theta)
y = A sin(theta)
Initialise all variables as types which can take decimals, e.g. float or double in Java.
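For reference, here is a runnable Python version of the pseudocode above. One caveat: since cos is even, the sign of theta (fi - d versus d - fi) only decides which of the two mirror-image positions of f, on either side of the line d-e, you get, so you may need to flip the sign of y for your geometry. The example side lengths in the call are placeholders.
import math

def locate_f(ex, A, B, C):
    # f relative to d, with d moved to the origin:
    # B = |d-e|, A = |d-f|, C = |e-f|, ex = x ordinate of e
    cos_d = ex / B                                   # angle of d->e from the x-axis
    cos_fi = (A**2 + B**2 - C**2) / (2.0 * A * B)    # cosine rule: angle e-d-f
    d = math.acos(cos_d)
    fi = math.acos(cos_fi)
    theta = fi - d                                   # or d - fi, see the note above
    return A * math.cos(theta), A * math.sin(theta)

x, y = locate_f(ex=4.0, A=3.0, B=5.0, C=4.0)         # one of the two candidate positions of f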
The green along the x-axis represents the x ordinate of f, and the purple the y ordinate.
The blue angle is the one we are trying to find because, hopefully you can see, we can then use simple trig to work out x and y, given that we know the length of the hypotenuse.
This yellow line up to 1 is the unit vector for which scalar products are taken, this runs along the x-axis.
We need to find the black and red angles so we can deduce the blue angle by simple subtraction.
Hope this helps. Extensions can be made to 3D, all the vector functions work basically the same for 3D.
If you have the displacements from an origin, regardless of whether this is another user-defined coordinate or not, the coordinates for that 3D point are simply (x, y, z).
If you are defining these lengths from a point, which also has a coordinate to take into account, you can simply write (x, y, z) + (x1, y1, z1) = (x2, y2, z2) where x2, y2 and z2 are the displacements from the (0, 0, 0) origin.
If you wish to find the length of this vector, i.e. if you defined the line from A to B to be the x-axis, what would the x displacement be, you can use Pythagoras for 3D vectors; it works just the same as with 2D:
Length l = sqrt((x^2) + (y^2) + (z^2))
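For example, using the two points from the question, (0,0,0) and (4,4,0):
import math

x, y, z = 4 - 0, 4 - 0, 0 - 0                 # displacement between the two example points
print(math.sqrt(x**2 + y**2 + z**2))          # about 5.66, the ~5.6 value quoted in the question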
EDIT:
Say you have a user defined point A (x1, y1, z1) and you want to define this as the origin (0,0,0). You have another user chosen point B (x2, y2, z2) and you know the distance from A to B in the x, y and z plane. If you want to work out what this point is, in relation to the new origin, you can simply do
B relative to A = (x2, y2, z2) - (x1, y1, z1) = (x2-x1, y2-y1, z2-z1) = C
C is the vector A>B; a vector is a quantity which has a magnitude (the length of the line) and a direction (the angle from A pointing to B).
If you want to work out the position of B relative to the origin O, you can do the opposite:
B relative to O = (x2, y2, z2) + (x1, y1, z1) = (x1+x2, y1+y2, z1+z2) = D
D is the vector O>B.
Edit 2:
//pseudo code
userx = x;
usery = y;
userz = z;
//move origin
for (every block i){
    xi = xi - x;
    yi = yi - y;
    zi = zi - z;
}
Although the context of this question is about making a 2D/3D game, the problem I have boils down to some math.
Although it's a 2.5D world, let's pretend it's just 2D for this question.
// xa: x-accent, the x coordinate of the projection
// mapP: a coordinate on a map which need to be projected
// _Dist_ values are constants for the projection; choosing them correctly will result in e.g. an isometric projection
xa = mapP.x * xDistX + mapP.y * xDistY;
ya = mapP.x * yDistX + mapP.y * yDistY;
xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity).
x-axis-angle = atan(yDistX/xDistX)
y-axis-angle = atan(yDistY/xDistY)
a "normal" coordinate system like this
--------------- x
|
|
|
|
|
y
has values like this:
xDistX = 1;
yDistX = 0;
xDistY = 0;
yDistY = 1;
So every step in the x direction will result, on the projection, in 1 pixel to the right and 0 pixels down. Every step in the y direction of the projection will result in 0 pixels to the right and 1 pixel down.
When choosing the correct xDistX, yDistX, xDistY, yDistY, you can project any trimetric or dimetric system (which is why I chose this).
So far so good: when this is drawn, everything turns out okay. If "my system" and mindset are clear, let's move on to perspective.
I wanted to add some perspective to this grid, so I added some extras like this:
camera = new MapPoint(60, 60);
dx = mapP.x - camera.x; // delta x
dy = mapP.y - camera.y; // delta y
dist = Math.sqrt(dx * dx + dy * dy); // dist is the distance to the camera, Pythagoras etc.. all objects must be in front of the camera
fac = 1 - dist / 100; // this formula determines the amount of perspective
xa = fac * (mapP.x * xDistX + mapP.y * xDistY) ;
ya = fac * (mapP.x * yDistX + mapP.y * yDistY );
Now the really hard part: what if you have an (xa, ya) point on the projection and want to calculate the original point (x, y)?
For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with the perspective? My math skills are not quite up to the challenge of solving this.
(I vaguely remember from a long time ago that Mathematica could create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
The function you've defined doesn't have an inverse. Just as an example, as user207422 already pointed out anything that's 100 units away from the camera will get mapped to (xa,ya)=(0,0), so the inverse isn't uniquely defined.
More importantly, that's not how you calculate perspective. Generally the perspective scaling factor is defined to be viewdist/zdist where zdist is the perpendicular distance from the camera to the object and viewdist is a constant which is the distance from the camera to the hypothetical screen onto which everything is being projected. (See the diagram here, but feel free to ignore everything else on that page.) The scaling factor you're using in your example doesn't have the same behaviour.
Here's a stab at trying to convert your code into a correct perspective calculation (note I'm not simplifying to 2D; perspective is about projecting three dimensions to two, trying to simplify the problem to 2D is kind of pointless):
camera = new MapPoint(60, 60, 10);
camera_z = camera.x*zDistX + camera.y*zDistY + camera.z*zDistZ;
// viewdist is the distance from the viewer's eye to the screen in
// "world units". You'll have to fiddle with this, probably.
viewdist = 10.0;
xa = mapP.x*xDistX + mapP.y*xDistY + mapP.z*xDistZ;
ya = mapP.x*yDistX + mapP.y*yDistY + mapP.z*yDistZ;
za = mapP.x*zDistX + mapP.y*zDistY + mapP.z*zDistZ;
zdist = camera_z - za;
scaling_factor = viewdist / zdist;
xa *= scaling_factor;
ya *= scaling_factor;
You're only going to return xa and ya from this function; za is just for the perspective calculation. I'm assuming the "za-direction" points out of the screen, so if the pre-projection x-axis points towards the viewer then zDistX should be positive and vice-versa, and similarly for zDistY. For a trimetric projection you would probably have xDistZ==0, yDistZ<0, and zDistZ==0. This would make the pre-projection z-axis point straight up post-projection.
Now the bad news: this function doesn't have an inverse either. Any point (xa,ya) is the image of an infinite number of points (x,y,z). But! If you assume that z=0, then you can solve for x and y, which is possibly good enough.
To do that you'll have to do some linear algebra. Compute camera_x and camera_y similar to camera_z. Those are the post-transformation coordinates of the camera. The point on the screen has post-transformation coordinates (xa, ya, camera_z - viewdist). Draw a line through those two points, and calculate where it intersects the plane spanned by the vectors (xDistX, yDistX, zDistX) and (xDistY, yDistY, zDistY). In other words, you need to solve the equations:
x*xDistX + y*xDistY == s*camera_x + (1-s)*xa
x*yDistX + y*yDistY == s*camera_y + (1-s)*ya
x*zDistX + y*zDistY == s*camera_z + (1-s)*(camera_z - viewdist)
It's not pretty, but it will work.
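A sketch of what solving those three equations could look like with numpy, treating x, y and s as the unknowns. All the Dist constants, camera values, viewdist and the screen point (xa, ya) below are placeholder values purely so the snippet runs; your real projection constants would go there instead.
import numpy as np

# placeholder projection constants, camera position and screen point
xDistX, yDistX, zDistX = 1.0, 0.5, 0.0
xDistY, yDistY, zDistY = -1.0, 0.5, 0.0
viewdist = 10.0
camera_x, camera_y, camera_z = 30.0, 60.0, 10.0
xa, ya = 12.0, 7.0

# the three equations above, rearranged with (x, y, s) as the unknowns
M = np.array([
    [xDistX, xDistY, -(camera_x - xa)],
    [yDistX, yDistY, -(camera_y - ya)],
    [zDistX, zDistY, -viewdist],
])
rhs = np.array([xa, ya, camera_z - viewdist])
x, y, s = np.linalg.solve(M, rhs)   # (x, y) is the pre-projection map point, assuming z = 0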
I think that with your post I can solve the problem. Still, to clarify some questions:
Solving the problem in 2D is useless indeed, but this was only done to make the problem easier to grasp (for me and for the readers here). My program actually gives a perfect 3D projection (I checked it with 3D images rendered with Blender). I did leave something out about the inverse function though: the inverse function is only for coordinates between 0..camera.x * 0.5 and 0..camera.y * 0.5. So in my example between 0 and 30. But even then I have doubts about my function.
In my projection the z-axis is always straight up, so to calculate the height of an object I only used the viewing angle. But since you can't actually fly or jump into the sky, everything has only a 2D point. This also means that when you try to solve for x and y, z really is 0.
I know not every function has an inverse, and some functions do, but only for a particular domain. My basic thought in all this was: if I can draw a grid using a function, then every point on that grid maps to exactly one map point. I can read the x and y coordinates, so if I just had the correct function I would be able to calculate the inverse.
But there is no better replacement than some good solid math, and I'm very glad you took the time to give a very helpful response :).