Flip a 2D character, using inverse kinematics, in Unity

I have a rigged 2D character with Sprites And Bones in Unity and I use inverse kinematics to animate it.
But when I flip it along the X axis, my character goes berserk.
I have a script attached to "Karateka", containing a simple Flip() function:
public void Flip() {
    facingRight = !facingRight;
    Vector3 karatekaScale = transform.localScale;
    karatekaScale.x *= -1;
    transform.localScale = karatekaScale;
}
"Karateka" is just a container for bones, sprites and IK targets
"Skeleton" contains the Skeleton script from Sprite And Bones
"Right leg bone", "Right lower leg bone", etc. have bone and IK scripts
"Right leg", "Right lower leg", etc. are the sprites
"IK" contains all the IK targets
I get the same kind of effect with another IK script, Simple CCD, from Unite 2014 - 2D Best Practices In Unity, so I may just be doing something stupid.
What can I do to properly flip my character?
EDIT (for Mark):
I got it to work using this:
public void Flip() {
    facingRight = !facingRight;
    /*
    // Old scale-based flip that broke the IK:
    Vector3 karatekaScale = transform.localScale;
    karatekaScale.x *= -1;
    transform.localScale = karatekaScale;
    */
    GameObject IK = GameObject.Find("IK");
    GameObject skeleton = GameObject.Find("Skeleton");
    // Mirror the IK targets by negating their container's X scale...
    Vector3 ikScale = IK.transform.localScale;
    ikScale.x *= -1;
    IK.transform.localScale = ikScale;
    // ...and mirror the skeleton by rotating it 180° around Y.
    if (facingRight) {
        skeleton.transform.localEulerAngles = new Vector3(skeleton.transform.localEulerAngles.x, 180.0f, skeleton.transform.localEulerAngles.z);
    } else {
        skeleton.transform.localEulerAngles = new Vector3(skeleton.transform.localEulerAngles.x, 0.0f, skeleton.transform.localEulerAngles.z);
    }
}

Okay, so as we discussed in the comments, just to sum it up:
1) You don't flip everything and/or don't reset the bones correctly; that's why the animation "falls apart" on flipping.
2) One can use SpriteRenderer.flipX (or flipY), but this has to be done on every sprite element of the character.
3) The Skeleton script has an "Edit mode" checkbox. When it's checked and one flips the character using scale.x *= -1, one can see that everything except the bones is flipped.
And last but not least, the question got updated with a code snippet providing the code-level solution to the issue. Thanks for sharing all the details, Cyrille; I'm glad I could help.
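A minimal sketch of the per-sprite approach from point 2 (assuming the script sits on the character root; the helper name is mine):
void FlipAllSprites()
{
    foreach (var sr in GetComponentsInChildren<SpriteRenderer>())
    {
        // flipX mirrors each sprite's rendering around its own pivot;
        // bone and IK target positions still need separate handling.
        sr.flipX = !sr.flipX;
    }
}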

It is also possible to just rotate the root of the prefab (here "Karateka") 180° on the Y axis.
2D sprites are actually two-sided 3D meshes, so rotating them 180° on the Y axis has the same effect as flipping the render, and if the IK targets are parented to the root of the object they will follow the rotation and all the animations will keep working.
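If you take that rotation route, a minimal sketch (assuming the script sits on the "Karateka" root and the character starts out facing right at 0°):
public void Flip()
{
    facingRight = !facingRight;
    // Rotating the whole hierarchy 180° around Y mirrors the render;
    // bones and IK targets parented under the root follow along.
    transform.localEulerAngles = new Vector3(0f, facingRight ? 0f : 180f, 0f);
}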

Related

JavaFX 3D set rotation to "absolute position"

I am playing around, trying to make a little JavaFX application to visualize data received via the serial port from an Arduino-based board and some sensors.
After adding some live-updating line graphs, I am currently trying to display the roll, pitch and yaw values received from the microcontroller by rotating a simple Box element.
I have one thread calling a function every x ms, which stores the incoming data in an ObservableList with a ChangeListener and calls the controller's function to update/rotate the orientation of the box.
Since the calculation of the angles is already done on the microcontroller, I would like to rotate the box to the received absolute orientation.
From what I understand so far, I can't simply rotate from any previous orientation to a new absolute one, but only change the orientation relative to the previous one.
I came up with the following idea: just subtract the penultimate roll/pitch/yaw values in the observableList from the latest ones.
Data dataTmp = observableList.get(observableList.size() - 2);
Data dataTmp2 = observableList.get(observableList.size() - 1);
newRoll = dataTmp2.getRoll() - dataTmp.getRoll();
newPitch = dataTmp2.getPitch() - dataTmp.getPitch();
newYaw = dataTmp2.getYaw() - dataTmp.getYaw();
Platform.runLater(new Runnable() {
    @Override
    public void run() {
        controller.setToPosition(newRoll, newPitch, newYaw);
    }
});
//...
This only works to a certain extent; I still want to rotate to the absolute orientation received from the microcontroller.
So my question is this: Is there a way to reset the orientation of the box to e.g. 0, 0, 0 from where I could rotate to my new absolute orientation? Simply removing the box and adding a new one did not work out at all.
group.getChildren().remove(box);
box = new Box(300,50,300);
group.getChildren().add(box);
Thank you in advance for any ideas or even solutions. If you need more information or code snippets let me know.
Referring to this example, an onMouseMoved handler rotates the red Box around the x and y axes as the mouse moves. The following onKeyPressed handler restores the red Box to its original position when the Z key is pressed. Each handler uses the setAngle() method of the Rotate class.
scene.setOnKeyPressed(e -> {
    if (e.getCode() == KeyCode.Z) {
        content.rx.setAngle(0);
        content.ry.setAngle(0);
        content.rz.setAngle(0);
    }
});
Similarly, your setToPosition() implementation can invoke setAngle() to establish the new roll, pitch and yaw values.
More subtly, verify that you synchronize access to any data shared between your data acquisition thread and the JavaFX application thread. This example illustrates a Task<Canvas>, while your application might instead implement a Task<Point3D>, where a Point3D holds the roll, pitch and yaw values.

How to set Raycast direction to 90 degrees instead of through camera

I have a problem with my 3D project. It is quite complicated to describe its purpose, so I'll abstract it to the minimum.
I have a live video stream of the Unity program, which I bring up to fullscreen (1920 x 1200). A user clicks on the screen to send the coordinates to the Unity app.
sending coords:
// relative coord
float x = mouse_x / 1920.0f;
float y = mouse_y / 1200.0f;
The receiver is the Unity app, which tries to turn this into a 3D coordinate and find a wall or an obstacle on which to place a mark.
Attempt 1:
// 1268 x 720 receiver viewport size
Ray ray = Camera.main.ScreenPointToRay(new Vector3(Position.x * 1268.0f, Position.y * 720.0f, 0));
Attempt 2:
// * 1268 not necessary
Vector3 far = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 1));
Vector3 near = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 0));
Vector3 dir = far - near;
dir.Normalize();
Ray ray = new Ray(near, dir);
RaycastHit hitInfo;
if (Physics.Raycast(ray, out hitInfo))
{
    // place mark
}
Both attempts give the same result. If the coordinate is near the center, the mark lands in the center on the receiver as well, but the closer the click is to the edge, the farther the mark lands from where it should be. The picture shows what I think happens: the red circle is the current behaviour and the green is what I expected. I'd rather have a ray that leaves the screen at 90 degrees and hits the wall than one that goes right through the camera.
I really do not know what to do. Thank you very much for your help in advance.
You're right about your drawing; this is indeed what's happening.
Here is a test I did using Debug.DrawRay.
The red ray is the output of this code:
Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * 100f, Color.red);
And here is the drawing of the blue one, like you did:
var viewportPointRay = Camera.main.ViewportPointToRay(viewportTouchPos);
Debug.DrawRay(viewportPointRay.origin, viewportPointRay.direction * 3f, Color.blue);
I was expecting a truly simple answer, but I wasn't able to find one. I did find a trick to do what you want, though:
var ray = new Ray(viewportPointRay.origin, Camera.main.transform.forward);
Debug.DrawRay(ray.origin, ray.direction, Color.green);
Result
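Putting the trick together with the sender's relative coordinates, a minimal sketch (method and variable names are mine, not from the question):
// x and y are the relative coords received from the sender (0..1).
void PlaceMark(float x, float y)
{
    // Start from the viewport ray to get an origin on the near plane...
    Ray viewportRay = Camera.main.ViewportPointToRay(new Vector3(x, y, 0f));
    // ...but force the direction to the camera's forward axis, so the
    // ray leaves the screen at 90° instead of fanning out from the camera.
    Ray ray = new Ray(viewportRay.origin, Camera.main.transform.forward);
    RaycastHit hitInfo;
    if (Physics.Raycast(ray, out hitInfo))
    {
        // place mark at hitInfo.point
    }
}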

How to color a point cloud from image pixels?

I am using a Google Tango tablet to acquire point cloud data and RGB camera images, and I want to create a 3D scan of the room. For that I need to map 2D image pixels to point cloud points. I will be doing this with a lot of point clouds and corresponding images, so I need to write a script that takes two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, and outputs a colored point cloud. How should I approach this, and which platforms would be simplest to use?
Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming that v already incorporates the extrinsic camera position and orientation, see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy/v.z;
// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;
// Map to pixel space.
vec3 pixelCoords = cameraTransform*vec3(imageCoords, 1);
Where cameraTransform is the 3x3 matrix:
[ fx 0 cx ]
[ 0 fy cy ]
[ 0 0 1 ]
with fx, fy, cx, cy, k1, k2, k3 from TangoCameraIntrinsics.
pixelCoords is declared vec3 but is actually 2D in homogeneous coordinates. The third coordinate is always 1 and so can be ignored for practical purposes.
Note that if you want texture coordinates instead of pixel coordinates, that is just another linear transform that can be premultiplied onto cameraTransform ahead of time (as is any top-to-bottom vs. bottom-to-top scanline addressing).
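For example (a sketch, assuming a w x h image addressed top to bottom), premultiplying
[ 1/w  0   0 ]
[ 0   1/h  0 ]
[ 0    0   1 ]
onto cameraTransform maps straight to texture coordinates; for bottom-to-top addressing (v = 1 - py/h), the middle row becomes [ 0 -1/h 1 ].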
As for what "platform" (which I loosely interpreted as "language") is simplest, the native API seems to be the most straightforward way to get your hands on camera pixels, though it appears people have also succeeded with Unity and Java.
* Points delivered by TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches unless both your device and your scene are stationary. Fortunately in practice, most applications can probably assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect color lookup.
This answer is not original; it is simply meant as a convenience for Unity users who would like the correct answer, as provided by @rhashimoto, worked out for them. My contribution (hopefully) is providing code that reduces the normal 16 multiplies and 12 adds (given Unity only does 4x4 matrices) to 2 multiplies and 2 adds by dropping out all of the zero results. I ran a little under a million points through the test, checking each time that my calculations agreed with the basic matrix calculations (defined as the absolute difference between the two results being less than machine epsilon), and I'm as comfortable with this as I can be, knowing that @rhashimoto may show up and poke a giant hole in it :-)
If you want to switch back and forth, remember this is C#, so the USEMATRIXMATH define must appear at the beginning of the file.
Given there's only one Tango device right now, and I'm assuming the intrinsics are constant across all of the devices, I just dumped them in as constants, such that
fx = 1042.73999023438
fy = 1042.96997070313
cx = 637.273986816406
cy = 352.928985595703
k1 = 0.228532999753952
k2 = -0.663019001483917
k3 = 0.642908990383148
Yes, they could be declared as named constants, which would make things more readable, and C# is probably smart enough to optimize them out - however, I spent too much of my life in Agner Fog's stuff, and will always be paranoid.
The commented-out code at the bottom is for testing the difference, should you desire. You'll have to uncomment some other stuff, and comment out the returns, if you want to test the results.
My thanks again to @rhashimoto; this is far, far better than what I had.
I have stayed true to his logic. Remember these are pixel coordinates, not UV coordinates - he is correct that you can premultiply the transform to get normalized UV values, but since he schooled me on this once already, I will stick with exactly the math he presented before I fiddle with it too much :-)
static public Vector2 PictureUV(Vector3 tangoDepthPoint)
{
    Vector2 imageCoords = new Vector2(tangoDepthPoint.x / tangoDepthPoint.z, tangoDepthPoint.y / tangoDepthPoint.z);
    float r2 = Vector2.Dot(imageCoords, imageCoords);
    float r4 = r2 * r2;
    float r6 = r2 * r4;
    imageCoords *= 1.0f + 0.228532999753952f * r2 + -0.663019001483917f * r4 + 0.642908990383148f * r6;
    Vector3 ic3 = new Vector3(imageCoords.x, imageCoords.y, 1);
#if USEMATRIXMATH
    Matrix4x4 cameraTransform = new Matrix4x4();
    cameraTransform.SetRow(0, new Vector4(1042.73999023438f, 0, 637.273986816406f, 0));
    cameraTransform.SetRow(1, new Vector4(0, 1042.96997070313f, 352.928985595703f, 0));
    cameraTransform.SetRow(2, new Vector4(0, 0, 1, 0));
    cameraTransform.SetRow(3, new Vector4(0, 0, 0, 1));
    Vector3 pixelCoords = cameraTransform * ic3;
    return new Vector2(pixelCoords.x, pixelCoords.y);
#else
    //float v1 = 1042.73999023438f * imageCoords.x + 637.273986816406f;
    //float v2 = 1042.96997070313f * imageCoords.y + 352.928985595703f;
    //float v3 = 1;
    return new Vector2(1042.73999023438f * imageCoords.x + 637.273986816406f, 1042.96997070313f * imageCoords.y + 352.928985595703f);
#endif
    //float dx = Math.Abs(v1 - pixelCoords.x);
    //float dy = Math.Abs(v2 - pixelCoords.y);
    //float dz = Math.Abs(v3 - pixelCoords.z);
    //if (dx > float.Epsilon || dy > float.Epsilon || dz > float.Epsilon)
    //    UnityEngine.Debug.Log("Well, that didn't work");
    //return new Vector2(v1, v2);
}
As one final note, the code he provided is GLSL - if you're just using this for pretty pictures, use that; this version is for those who actually need to perform additional processing in C#.

How do I take a 2D point and project it into a 3D vector with a perspective camera?

I have a 2D point (x, y) and I want to project it to a vector so that I can perform a ray trace to check if the user clicked on a 3D object. I have written all the other code, except that when I got back to my function that gets the vector from the xy coords of the mouse, I realized I was not accounting for field of view, and I don't want to guess what the factor would be, as "voodoo" fixes are not a good idea for a library. Any math-magicians want to help? :-)
Here's my current code, which needs the FOV of the camera applied:
sf::Vector3<float> Camera::Get3DVector(int Posx, int Posy, sf::Vector2<int> ScreenSize) {
    // not using a "wide lens", and will maintain the aspect ratio of the viewport
    int window_x = Posx - ScreenSize.x / 2;
    int window_y = (ScreenSize.y - Posy) - ScreenSize.y / 2;
    float Ray_x = float(window_x) / float(ScreenSize.x / 2);
    float Ray_y = float(window_y) / float(ScreenSize.y / 2);
    sf::Vector3<float> Vector(Ray_x, Ray_y, -_zNear);
    // to global coords
    return MultiplyByMatrix((Vector / LengthOfVector(Vector)), _XMatrix, _YMatrix, _ZMatrix);
}
You're not too far off. One thing is to make sure your mouse coordinates are in -1 to 1 space (not 0 to 1).
Then you create two vectors:
Vector3 orig = Vector3(mouse.X,mouse.Y,0.0f);
Vector3 far = Vector3(mouse.X,mouse.Y,1.0f);
You also need the inverse of your perspective transform (or of the view-projection if you want world space):
Matrix ivp = Matrix::Invert(Projection);
Then you do:
Vector3 rayorigin = Vector3::TransformCoordinate(orig,ivp);
Vector3 rayfar = Vector3::TransformCoordinate(far,ivp);
If you want a ray, you also need direction, which is simply:
Vector3 raydir = Normalize(rayfar-rayorigin);
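For reference, here are the same steps in a self-contained C# sketch using System.Numerics (names are mine; the TransformCoordinate step is replicated by the explicit perspective divide):
using System.Numerics;

static class Unprojector
{
    // Unproject a clip-space point (x, y in -1..1; z = 0 near, 1 far)
    // through the inverse projection, including the perspective divide.
    public static Vector3 Unproject(Vector2 mouse, float z, Matrix4x4 invProjection)
    {
        // System.Numerics uses row vectors: v' = v * M.
        Vector4 v = Vector4.Transform(new Vector4(mouse.X, mouse.Y, z, 1f), invProjection);
        return new Vector3(v.X, v.Y, v.Z) / v.W; // perspective divide
    }
}

// Usage:
// Matrix4x4.Invert(projection, out Matrix4x4 ivp);
// Vector3 rayOrigin = Unprojector.Unproject(mouse, 0f, ivp);
// Vector3 rayFar    = Unprojector.Unproject(mouse, 1f, ivp);
// Vector3 rayDir    = Vector3.Normalize(rayFar - rayOrigin);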

Method to combine multiple affine transforms as if each was specified in un-transformed space

I'm looking for a way to combine affine transforms such that the effect is equivalent to using each transform to manipulate a shape in succession. The problem is that if I simply concatenate the transforms, then each successive transform's effect is interpreted in the existing transform's coordinate space.
For example, consider a square around the origin (-50, -50, 100, 100). I want to rotate it, and then translate it down 100px. If I take a transform and rotate and then translate, the translation gets interpreted in the rotated coordinates. Instead, if I transform the shape itself to rotate it, and then transform that shape again to translate it, both transforms are interpreted in the "normal" un-transformed plane, and it gives me what I want.
The problem is that for what I'm doing many transforms may take place, each of which needs to be interpreted in the normal coordinate plane, but I don't want to store a stack of transforms, nor can I simply keep manipulating a shape, because I need to be able at any time to create the final transformed shape from the original starting shape.
I'm aware that for this simple example, if I did the translate before the rotate I'd get the same result, but that's missing the point. I'm dealing with an arbitrary set of successive scale, translate, and rotate transforms, so simply putting them in a certain order doesn't cut it.
I have an inkling that there should be a way to concatenate transforms in such a way that you modify the new transform before you concatenate it, correcting for the existing transform so that the effect is that the new transform appears to have been applied as if it were referencing the un-transformed coordinate plane. For example, if you translate by (70.7, 70.7) in the above example instead of (0, 100), the result becomes equivalent. I just can't seem to figure out the math for how to alter the new transform in general so it works out correctly.
Thanks for reading - hope this made sense. Here's the source of the example that created the screenshot:
public class TransformExample extends JPanel {
    @Override
    protected void paintComponent(Graphics _g) {
        super.paintComponent(_g);
        Graphics2D g = (Graphics2D) _g;
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        g.translate(150, 100); // translate so we can see method 1 clearly
        paintConcatenate(g);
        g.translate(200, 0); // translate again so we can see method 2 to the right of method 1
        paintSuccessive(g);
    }

    private void paintConcatenate(Graphics2D g) {
        AffineTransform tx = new AffineTransform();
        Shape shape = new Rectangle(-50, -50, 100, 100);
        // Draw the 3 steps, altering the transform each time
        draw(g, shape, tx, Color.GRAY);
        tx.rotate(Math.PI / 4);
        draw(g, shape, tx, Color.GREEN);
        tx.translate(70.7, 70.7);
        draw(g, shape, tx, Color.PINK);
    }

    private void paintSuccessive(Graphics2D g) {
        Shape shape = new Rectangle(-50, -50, 100, 100);
        // Draw the 3 steps, altering the shape each time with a new transform
        draw(g, shape, null, Color.GRAY);
        shape = AffineTransform.getRotateInstance(Math.PI / 4).createTransformedShape(shape);
        draw(g, shape, null, Color.GREEN);
        shape = AffineTransform.getTranslateInstance(0, 100).createTransformedShape(shape);
        draw(g, shape, null, Color.PINK);
    }

    private void draw(Graphics2D g, Shape shape, AffineTransform tx, Color color) {
        if (tx != null) {
            shape = tx.createTransformedShape(shape);
        }
        g.setColor(color);
        g.fill(shape);
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("Transform Example");
        f.setSize(500, 350);
        f.setContentPane(new TransformExample());
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
(I'm working with Java2D, although I don't think the language or 2D library is all that pertinent here.)
I suggest you keep track of some absolute values and then do as few transformations as you can.
For example, store the translation vector and the rotation angle around the origin:
int translate[2];
int rotate;
Now, suppose that you want to rotate an object around its center, then translate it somewhere, and then rotate it again around its center.
With affine transformations, rotations and translations aren't commutative, so if you apply a rotation, a translation and another rotation, you'll get a wrong result.
But you can simply sum the rotation angles of the first and third rotations, and apply a single rotation and then the translation.
I hope this is clear.
When you rotate an object, you normally rotate around a specific point. It looks like you are just rotating around (0,0), which is usually not what you want.
To rotate around a specific point (x, y):
translate the point to the origin (-x, -y),
then rotate,
then translate back (x, y).
Java2D provides this directly:
public static AffineTransform getRotateInstance(double theta,
                                                double anchorx,
                                                double anchory)
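As a footnote on the original question: the effect described (each new transform interpreted in the un-transformed plane) is what pre-multiplication gives you, which in Java2D is AffineTransform.preConcatenate rather than concatenate. A minimal C# sketch of the same idea using System.Numerics (a sketch only; System.Numerics uses row vectors, so "applied last, in world space" means appending on the right):
using System;
using System.Numerics;

class PreMultiplyDemo
{
    static void Main()
    {
        Matrix4x4 total = Matrix4x4.Identity;

        // Row-vector convention: p' = p * M, so the rightmost factor is
        // applied last, i.e. in the un-transformed (world) plane.
        total = total * Matrix4x4.CreateRotationZ((float)(Math.PI / 4));
        total = total * Matrix4x4.CreateTranslation(0f, 100f, 0f);

        // A corner of the square from the example: rotated 45° about the
        // origin, then translated by (0, 100) in the un-rotated plane,
        // matching the "successive shape" result from paintSuccessive().
        Vector3 p = Vector3.Transform(new Vector3(-50f, -50f, 0f), total);
        Console.WriteLine(p);
    }
}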
