Compose a parametric equation (C# GDI+)

Compose a parametric equation and construct the path traced by a marked point on the rim of a disk as it rolls along:
· the Ox axis;
· the Oy axis;
· the outside of the unit circle;
· the inside of the unit circle.
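These four cases are the classic cycloid (rolling along an axis), epicycloid (rolling on the outside of a circle) and hypocycloid (rolling on the inside). A minimal Python sketch of the parametric equations, with t the rolling angle (function names are my own; for GDI+ you would sample these and pass the points to something like Graphics.DrawLines):

```python
import math

def cycloid(r, t):
    # Disk of radius r rolling along the Ox axis; the rim point traces
    # x = r(t - sin t), y = r(1 - cos t). For the Oy axis, swap x and y.
    return r * (t - math.sin(t)), r * (1 - math.cos(t))

def epicycloid(r, t, R=1.0):
    # Disk rolling on the OUTSIDE of a circle of radius R (unit circle by default)
    k = (R + r) / r
    return ((R + r) * math.cos(t) - r * math.cos(k * t),
            (R + r) * math.sin(t) - r * math.sin(k * t))

def hypocycloid(r, t, R=1.0):
    # Disk rolling on the INSIDE of a circle of radius R
    k = (R - r) / r
    return ((R - r) * math.cos(t) + r * math.cos(k * t),
            (R - r) * math.sin(t) - r * math.sin(k * t))
```

Sampling t over, say, 0..4π at small steps yields the polyline to draw.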


Making a circle using Math nodes

I tried to make a circle using Math nodes in Blender's Shader Editor on a default Plane, which has dimensions of 2 m × 2 m. I used the standard equation
(x-g)^2 + (y-h)^2 - r^2 = 0
but the circle formed exceeds the Plane when I use the value (1,1) for (g,h), whereas (0.5,0.5) for (g,h) gives the desired result.
Mathematically, shouldn't the top-right corner of the Plane be (2,2) and the centre of the Plane be (1,1)?
Please help me.
With the shown setup your center is at 0.5,0.5 and the diameter is 1.
That does get you a circle spanning the 0..1/0..1 coordinate range of a texture.
Try using a 4x4 plane. It will probably give you some insight:
The texture coordinates and the object/vertex coordinates are different.
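The answer's point can be checked numerically. A small Python sketch (a stand-in for the node graph, not Blender API code) evaluating the implicit equation over the 0..1 texture-coordinate range:

```python
def circle_value(x, y, g, h, r):
    # Implicit circle: negative inside, zero on the rim, positive outside
    return (x - g)**2 + (y - h)**2 - r**2

# Texture coordinates of a face run 0..1 regardless of the plane's size in
# metres, so a circle centred at (0.5, 0.5) with r = 0.5 exactly fits the face:
assert circle_value(0.5, 0.5, 0.5, 0.5, 0.5) < 0   # centre of the face is inside
assert circle_value(1.0, 0.5, 0.5, 0.5, 0.5) == 0  # rim touches the face's edge
# A centre of (1, 1) sits at the CORNER of the 0..1 range, so most of the
# circle falls outside the visible face:
assert circle_value(0.5, 0.5, 1.0, 1.0, 0.5) > 0
```

The object/vertex coordinates of the 2 m plane do span -1..1 (or 0..2 depending on origin), but the Texture Coordinate input the Math nodes see is the normalized 0..1 range.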

projecting points onto the sky

Suppose I have a sphere of unit radius, some points defined as latitudes and longitudes on that sphere, and a camera at the centre of the sphere with given vertical and horizontal field-of-view angles. How can I project these points onto that camera?
A point in direction (x,y,z) at infinity has homogeneous coordinates (x,y,z,0). So, assuming you use typical view and projection matrices to describe your camera model, it is as simple as calculating
P * V * ( cos(lon)*cos(lat), sin(lon)*cos(lat), sin(lat), 0 )'
and then proceeding with a perspective divide and rasterization.
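A minimal Python sketch of this pipeline (plain lists standing in for a real matrix library; the 4x4 matrix pv is assumed to be the combined P*V):

```python
import math

def direction_from_lat_lon(lat, lon):
    # Unit direction to a point on the sphere (angles in radians)
    return (math.cos(lon) * math.cos(lat),
            math.sin(lon) * math.cos(lat),
            math.sin(lat))

def project(pv, lat, lon):
    # pv: 4x4 combined projection*view matrix, row-major list of rows.
    # A direction at infinity has homogeneous coordinates (x, y, z, 0).
    x, y, z = direction_from_lat_lon(lat, lon)
    v = (x, y, z, 0.0)
    clip = [sum(pv[i][j] * v[j] for j in range(4)) for i in range(4)]
    # Perspective divide -> normalized device coordinates.
    # Note: w is zero or negative for directions at or behind the camera
    # plane; a real renderer clips those before dividing.
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)
```

Points with NDC x and y inside the range implied by the camera's fields of view are then rasterized as usual.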

Point project onto plane

Given a 3D point p and a plane (defined by a base point and a normal vector), how should I project the point onto the plane?
Details:
Point p is a 3D vector (x1,y1,z1), and the plane is represented by a point q on the plane and a 3D normal vector l = (x3,y3,z3). All data is given and no camera projection is involved. The task is to project p onto the plane and return its new position (x1',y1',z1').
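No answer is recorded in this thread, but the standard construction subtracts from p its component along the plane normal: p' = p - ((p - q)·l / (l·l)) l. A minimal Python sketch (function names are my own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_onto_plane(p, q, n):
    # p: point to project; q: any point on the plane;
    # n: plane normal (need not be unit length, since we divide by n.n)
    t = dot((p[0] - q[0], p[1] - q[1], p[2] - q[2]), n) / dot(n, n)
    # Move p back along n by its signed distance from the plane
    return (p[0] - t * n[0], p[1] - t * n[1], p[2] - t * n[2])
```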

Rotating a line defined by two points in 3D

I have edited this question for additional clarity and because of some of the answers below.
I have an electromagnetic motion tracker which tracks a sensor and gives me a point in global space (X, Y, Z). It also tracks the rotation of the sensor and gives Euler angles (Yaw, Pitch, Roll).
The sensor is attached to a rigid body on a baseball cap which sits on the head of a person. However, I wish to track the position of a specific facial feature (nose for example) which I infer from the motion tracker sensor's position and orientation.
I have estimated the spatial offset between the motion tracker and the facial features I want to track. I have done this by simply measuring the offset along the X, Y and Z axis.
Based on a previous answer to this question, I have composed a rotation matrix from the euler angles given to me by the motion tracker. However, I am stuck with how I should use this rotation matrix, the position of the sensor in global space and the spatial offset between that sensor and the nose to give me the position of the nose in global space.
The sensor will give you a rotation matrix (via the Euler angles) and a position (which should be that of the center of rotation).
Whatever item is rigidly fastened to the sensor, such as the nose, will undergo the same motion. Then knowing the relative coordinates of the nose and the sensor, you get the relation
Q = R.q + P
where R is the rotation matrix, P the position vector of the sensor and q the relative coordinates of the nose.
Note that the relation between the rotation matrix and the angles can be computed using one of these formulas: https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix. (You will need to read the article carefully to determine which of the 12 possibilities your case is.)
In principle, you determine R and P from the readings of the sensor, but you are missing the coordinates q. There are several approaches:
· you determine those coordinates explicitly by measuring the distances along virtual axes located at the rotation center and properly aligned;
· you determine the absolute coordinates Q of the nose corresponding to known R and P; then q is given by R'(Q - P), where R' denotes the transpose of R (which is also its inverse). To obtain Q, you can just move the sensor center to the nose without moving the head.
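The relation Q = R.q + P and its inverse q = R'(Q - P) can be sketched in Python. For brevity the rotation here is yaw about the Z axis only; a real tracker composes yaw, pitch and roll in whatever order its convention specifies:

```python
import math

def rotation_z(yaw):
    # Illustration only: rotation about Z; compose three of these
    # (in the tracker's specified order) for full yaw/pitch/roll
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    # Matrix-vector product R.v for a 3x3 row-major matrix
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def nose_position(R, P, q):
    # Q = R.q + P : rotate the sensor-relative offset, then translate
    Rq = apply(R, q)
    return tuple(Rq[i] + P[i] for i in range(3))

def relative_offset(R, P, Q):
    # q = R'(Q - P), using the transpose as the inverse of a rotation
    d = tuple(Q[i] - P[i] for i in range(3))
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return apply(Rt, d)
```

With P and R read from the sensor each frame and q calibrated once, nose_position gives the nose in global space.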

Projection matrix point to sphere surface

I need to project a 3D object onto a sphere's surface (uhm.. like casting a shadow).
AFAIR this should be possible with a projection matrix.
If the "shadow receiver" was a plane, then my projection matrix would be a 3D to 2D-plane projection, but my receiver in this case is a 3D spherical surface.
So given sphere1(centerpoint, radius), sphere2(othercenter, otherradius) and an eyepoint, how can I compute a matrix that projects all points from sphere2 onto sphere1 (like casting a shadow)?
Do you mean that given a vertex v you want the following projection:
v'= centerpoint + (v - centerpoint) * (radius / |v - centerpoint|)
This is not possible with a projection matrix. You could easily do it in a shader though.
Matrices are commonly used to represent linear operations, like projection onto a plane.
In your case, the resulting vertices aren't deduced from input using a linear function, so this projection is not possible using a matrix.
If the sphere1 is sphere((0,0,0),1), that is, the sphere of radius 1 centered at the origin, then you're in effect asking for a way to convert any location (x,y,z) in 3D to a corresponding location (x', y', z') on the unit sphere. This is equivalent to vector renormalization: (x',y',z') = (x,y,z)/sqrt(x^2+y^2+z^2).
If sphere1 is not the unit sphere, but is say sphere((a,b,c),R) you can do mostly the same thing:
(x',y',z') = R*(x-a,y-b,z-c) / sqrt((x-a)^2+(y-b)^2+(z-c)^2) + (a,b,c). This is equivalent to changing coordinates so the first sphere is the unit sphere, solving the problem, then changing coordinates back.
As people have pointed out, these functions are nonlinear, so the projection cannot be called a "matrix." But if you prefer for some reason to start with a projection matrix, you could project first from 3D to a plane, then from a plane to the sphere. I'm not sure if that would be any better though.
Finally, let me point out that linear maps don't produce division-by-zero errors, but if you look closely at the formulas above, you'll see that this map can. Geometrically, that's because it's hard to project the center point of a sphere to its boundary.
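The renormalization formulas above fit in one small Python function (a sketch, with sphere1 = sphere((a,b,c),R)), including the division-by-zero case just mentioned:

```python
import math

def project_to_sphere(p, center=(0.0, 0.0, 0.0), R=1.0):
    # Radial projection of p onto the sphere of radius R about center:
    # p' = center + R*(p - center)/|p - center|
    d = tuple(p[i] - center[i] for i in range(3))
    norm = math.sqrt(sum(x * x for x in d))
    if norm == 0.0:
        # The center itself has no well-defined projection on the boundary
        raise ValueError("cannot project the sphere's center onto its surface")
    return tuple(center[i] + R * d[i] / norm for i in range(3))
```

With the default arguments this is exactly the unit-sphere renormalization (x,y,z)/sqrt(x^2+y^2+z^2).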
