JavaFX 3D set rotation to "absolute position"

I am playing around, trying to make a little JavaFX application to visualize data received via the serial port from an Arduino-based board and some sensors.
After adding some live-updating line charts, I am currently trying to display the roll, pitch and yaw values received from the micro-controller by rotating a simple Box element.
I have one thread calling a function every x ms, which stores the incoming data in an ObservableList with a ChangeListener and calls the controller's function to update/rotate the orientation of the box.
Since the calculation of the angles is already done on the micro-controller, I would like to rotate the box to the received absolute orientation.
From what I've understood so far, I can't simply rotate from any previous orientation to a new absolute one, but only change the orientation relative to the previous one.
So I came up with the idea of subtracting the penultimate roll/pitch/yaw values in the ObservableList from the most recent ones:
Data dataTmp = observableList.get(observableList.size() - 2);
Data dataTmp2 = observableList.get(observableList.size() - 1);
newRoll = dataTmp2.getRoll() - dataTmp.getRoll();
newPitch = dataTmp2.getPitch() - dataTmp.getPitch();
newYaw = dataTmp2.getYaw() - dataTmp.getYaw();
Platform.runLater(new Runnable() {
    @Override
    public void run() {
        controller.setToPosition(newRoll, newPitch, newYaw);
    }
});
//...
This only works out to a certain extent. I still want to rotate to the absolute position received from the micro-controller.
So my question is this: Is there a way to reset the orientation of the box to e.g. 0, 0, 0 from where I could rotate to my new absolute orientation? Simply removing the box and adding a new one did not work out at all.
group.getChildren().remove(box);
box = new Box(300,50,300);
group.getChildren().add(box);
Thank you in advance for any ideas or even solutions. If you need more information or code snippets let me know.

Referring to this example, an onMouseMoved handler rotates the red Box around the x and y axes as the mouse moves. The following onKeyPressed handler restores the red Box to its original position when the Z key is pressed. Each handler uses the setAngle() method of the Rotate class.
scene.setOnKeyPressed(e -> {
    if (e.getCode() == KeyCode.Z) {
        content.rx.setAngle(0);
        content.ry.setAngle(0);
        content.rz.setAngle(0);
    }
});
Similarly, your setToPosition() implementation can invoke setAngle() to establish the new roll, pitch and yaw values.
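For instance, here is a minimal sketch of such a controller (not your actual code; the class name, the attachTo() helper and the axis-to-angle mapping are assumptions for illustration):
import javafx.scene.shape.Box;
import javafx.scene.transform.Rotate;

public class OrientationController {
    // One Rotate per axis; which axis corresponds to roll/pitch/yaw depends on
    // how your sensor frame maps onto the scene, so treat this mapping as an assumption.
    private final Rotate rx = new Rotate(0, Rotate.X_AXIS);
    private final Rotate ry = new Rotate(0, Rotate.Y_AXIS);
    private final Rotate rz = new Rotate(0, Rotate.Z_AXIS);

    public void attachTo(Box box) {
        box.getTransforms().setAll(rx, ry, rz);
    }

    // Absolute orientation: setAngle() replaces the previous angle,
    // so there is no need to accumulate relative deltas.
    public void setToPosition(double roll, double pitch, double yaw) {
        rx.setAngle(roll);
        ry.setAngle(pitch);
        rz.setAngle(yaw);
    }
}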
More subtly, verify that you synchronize access to any data shared between your data acquisition thread and the JavaFX application thread. This example illustrates a Task<Canvas>, while your application might instead implement a Task<Point3D>, where a Point3D holds the roll, pitch and yaw values.
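A rough sketch of that idea, assuming a blocking serialReader.readAngles() call (a hypothetical stand-in for however you poll the board) that returns roll, pitch and yaw in degrees:
import javafx.concurrent.Task;
import javafx.geometry.Point3D;

// Runs on a background thread; updateValue() publishes the latest roll/pitch/yaw
// to the JavaFX application thread and coalesces rapid updates.
Task<Point3D> orientationTask = new Task<Point3D>() {
    @Override
    protected Point3D call() throws Exception {
        while (!isCancelled()) {
            double[] rpy = serialReader.readAngles(); // hypothetical blocking read
            updateValue(new Point3D(rpy[0], rpy[1], rpy[2]));
        }
        return null;
    }
};
orientationTask.valueProperty().addListener((obs, oldValue, value) -> {
    if (value != null) {
        controller.setToPosition(value.getX(), value.getY(), value.getZ()); // already on the FX thread
    }
});
Thread reader = new Thread(orientationTask, "serial-reader");
reader.setDaemon(true);
reader.start();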

Related

How to set Raycast direction to 90 degrees instead of through camera

I have a problem with my 3D project. It is quite complicated to describe the purpose, so I'll try to abstract it to the minimum.
I have a live video stream of the Unity program, which I bring up to fullscreen (1920 x 1200). A user clicks on the screen to send the coordinates to the Unity app.
sending coords:
// relative coord
float x = mouse_x / 1920.0f;
float y = mouse_y / 1200.0f;
The receiver is the Unity app, which tries to turn this into a 3D coordinate and find a wall or an obstacle on which to place a mark.
Attempt 1:
// 1268 x 720 receiver viewport size
Ray ray = Camera.main.ScreenPointToRay(new Vector3(Position.x * 1268.0f, Position.y * 720.0f, 0));
Attempt 2:
// * 1268 not necessary
Vector3 far = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 1));
Vector3 near = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 0));
Vector3 dir = far - near;
dir.Normalize();
Ray ray = new Ray(near, dir);
RaycastHit hitInfo;
if(Physics.Raycast(ray, out hitInfo))
{
// place mark
}
Both attempts give the same result. If the coordinate is near the center, it ends up in the center on the receiver as well, but the closer it gets to the edge, the farther it lands from where it should be. The picture shows what I think happens: the red circle is the current behaviour and the green one is what I expected. I'd rather have a ray cast at 90 degrees from the screen to the wall than one going right through the camera.
I really do not know what to do. Thank you very much for your help in advance.
You're right about your drawing, this is indeed what's happening.
Here is a test I did using Debug.DrawRay. This first ray is drawn straight along the camera's forward direction:
Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * 100f, Color.red);
And here is the ray drawn from the viewport point, the way you did it:
var viewportPointRay = Camera.main.ViewportPointToRay(viewportTouchPos);
Debug.DrawRay(viewportPointRay.origin, viewportPointRay.direction * 3f, Color.blue);
I was expecting a truly simple answer, but I wasn't able to find one. I did find a trick to do what you want, though.
var ray = new Ray(viewportPointRay.origin, Camera.main.transform.forward);
Debug.DrawRay(ray.origin, ray.direction, Color.green);
Result
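To tie this back to placing the mark, here is a sketch of how the trick could be combined with the raycast from your second attempt (the MarkPlacer class, markPrefab field and PlaceMark() method are made-up names for illustration):
using UnityEngine;

public class MarkPlacer : MonoBehaviour
{
    public GameObject markPrefab; // assumed: whatever prefab you use as the mark

    // Call this with the normalized (0..1) coordinates received from the sender.
    public void PlaceMark(float viewportX, float viewportY)
    {
        // The origin sits on the near plane at the clicked viewport point...
        Ray viewportPointRay = Camera.main.ViewportPointToRay(new Vector3(viewportX, viewportY, 0f));
        // ...but the direction is the camera's forward axis, not a ray through the camera.
        Ray ray = new Ray(viewportPointRay.origin, Camera.main.transform.forward);

        RaycastHit hitInfo;
        if (Physics.Raycast(ray, out hitInfo))
        {
            // Place the mark where the perpendicular ray meets the wall or obstacle.
            Instantiate(markPrefab, hitInfo.point, Quaternion.identity);
        }
    }
}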

Point Cloud Library viewer coordinate system changes every run

I'm trying to capture some images from the 3D viewer in the Point Cloud Library. My issue is that I'll open the viewer, manipulate the camera to the proper position, record the camera position and view angles, and then in my code set the camera position to the recorded values. However, the view I get the next time I run my program doesn't match the view I had when I recorded the camera position. Is this something within PCL that I'm missing, or something in my code?
Another issue I'm having is that the 3D viewer doesn't actually display anything unless I interact with it using the mouse (either dragging it around in the window or zooming in/out). I'd like to automate this process so this is a bit inconvenient. Relevant code:
boost::shared_ptr<pcl::visualization::PCLVisualizer> simpleVis (pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud)
{
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer (new pcl::visualization::PCLVisualizer ("3D Viewer"));
    viewer->setBackgroundColor (0, 0, 0);
    viewer->addPointCloud<pcl::PointXYZ> (cloud, "sample cloud");
    viewer->setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "sample cloud");
    viewer->addCoordinateSystem (1.0);
    viewer->initCameraParameters ();
    return (viewer);
}

int main() {
    outputCloud->width = (int) outputCloud->points.size();
    outputCloud->height = 1;

    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer = simpleVis(outputCloud);
    std::string outName = generateOutputName("cross");
    viewer->setCameraPosition(3283.64, 4997.91, 2367.14, 0, -.3, 1);

    while (!viewer->wasStopped ())
    {
        viewer->spinOnce (100);
        boost::this_thread::sleep (boost::posix_time::microseconds (100000));
    }
    viewer->saveScreenshot(outName);
}
This code is pretty much taken directly from the PCL tutorials except for the setCameraPosition line which I added in, so I'm not entirely sure what could be going wrong.

Unity: Stop RigidBody2D from pushing each other

I have a player and a few NPCs.
The NPCs have random movement, and I control my player's movement. Both have a Rigidbody2D to deal with physics and a BoxCollider2D to deal with collisions.
However, when I walk into an NPC, my player pushes it. The same thing happens if an NPC moves into my player while the player stands still.
I can't set the mass of either object to some extreme number since that will interfere with how they behave with other objects in my game.
What I want:
When an NPC collides with the player, the NPC stops (I get this effect if I set the player mass to e.g. 1000, but then the player can push the NPC, which I don't want), and the NPC acts as a "wall", i.e. it doesn't move, but the player can't push it around either. How can I do this?
EDIT: So I created my own method for it:
void OnCollisionEnter2D(Collision2D other){
if (other.gameObject.name == "Player") {
collidedWithPlayer = true; //we only move if !collidedWithPlayer
isMoving = false; //stop moving
myRigidBody.mass = 1000; //turn NPC into "wall"
}
}
void OnCollisionExit2D(Collision2D other){
if (other.gameObject.name == "Player") {
collidedWithPlayer = false;
waitCounter = waitTime; //stop NPC from starting to move right after we exit
myRigidBody.mass = 1;
}
}
I mean this works, but is there no native method to do this?
What you are trying to do is essentially use a "realistic" physics engine to create rather unrealistic physics. That's why it's not supported by Unity's built-in functions. Furthermore, you are correct in assuming that messing with the object masses is not a good idea.
Here's one suggestion that avoids playing with mass. It's a bit kludgey, but give it a try and see if it works for you. (I assume your player rigidbody is not Kinematic?)
Step 1: Create 2 new layers; call them NPCWall and PlayerWall. Set up the 2D physics so that the player collides with NPCWall and NPCs collide with PlayerWall, but the player does not collide with NPCs. (If your NPCs and player are on the same layer, then of course put them on two separate layers first.)
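If you prefer doing this step from code instead of the Layer Collision Matrix in the Physics 2D settings, here is a sketch; the layer names are assumptions and must already exist in Tags & Layers:
using UnityEngine;

public class LayerCollisionSetup : MonoBehaviour
{
    void Awake()
    {
        int player     = LayerMask.NameToLayer("Player");
        int npc        = LayerMask.NameToLayer("NPC");
        int playerWall = LayerMask.NameToLayer("PlayerWall");
        int npcWall    = LayerMask.NameToLayer("NPCWall");

        // The player and NPCs never collide with each other directly...
        Physics2D.IgnoreLayerCollision(player, npc, true);
        // ...and each character ignores its own wall, so a wall only blocks the other side.
        Physics2D.IgnoreLayerCollision(player, playerWall, true);
        Physics2D.IgnoreLayerCollision(npc, npcWall, true);
    }
}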
Step 2: Create an NPCWall prefab that uses the same kind of collider as the NPCs. I assume you only have one size of NPC. Likewise, create a PlayerWall prefab that uses the same kind of collider as the player. Set the NPCWall prefab to NPCWall layer, and PlayerWall prefab to PlayerWall layer.
Step 3: We can't parent the NPCWall to the NPC, because it would end up as part of the rigidbody. Therefore add a simple script to the NPCWall and PlayerWall:
using UnityEngine;

public class TrackingWall : MonoBehaviour
{
    // This invisible wall follows an NPC around to block the player.
    // It also follows the player around to block NPCs.
    Transform followTransform;

    public void Init(Transform targetTrans)
    {
        followTransform = targetTrans;
        transform.position = followTransform.position;
        transform.rotation = followTransform.rotation;
    }

    void Update()
    {
        if (followTransform == null)
        {
            // The followed character is gone, so the wall cleans itself up.
            Destroy(gameObject);
            return;
        }
        transform.position = followTransform.position;
        transform.rotation = followTransform.rotation;
    }
}
Step 4: In the NPC and player scripts:
public TrackingWall myWallPrefab; // assign the NPCWall or PlayerWall prefab in the Inspector

void Start()
{
    // ...whatever else you are doing in Start()...
    TrackingWall myWall = Instantiate<TrackingWall>(myWallPrefab);
    myWall.Init(transform);
}
Obviously, for NPCs, myWallPrefab should be set to the NPCWall prefab, and for players, myWallPrefab should be set to the PlayerWall prefab.
In theory this should give each character an impenetrable, immovable wall that only moves when they do, prevents other characters from pushing them, and cleans itself up when they are destroyed. I can't guarantee it will work though!

Calculating bank angle between two objects

I have a drone following a path for movement. That is, it doesn't use a rigidbody so I don't have access to velocity or magnitude and such. It follows the path just fine, but I would like to add banking to it when it turns left or right. I use a dummy object in front of the drone, thinking I could calculate the bank/tilt amount using the transform vectors from the two objects.
I've been working on this for days as I don't have a lot of math skills. Basically I've been copying pieces of code trying to get things to work. Nothing I do works to make the drone bank. The following code manages to spin (not bank).
// Update is called once per frame
void Update () {
    Quaternion rotation = Quaternion.identity;
    Vector3 dir = (dummyObject.transform.position - this.transform.position).normalized;
    float angle = Vector3.Angle(dir, transform.up);
    float rollAngle = CalculateRollAngle(angle);
    rotation.SetLookRotation(dir, transform.right); // + rollIntensity * smoothRoll * right);
    rotation *= Quaternion.Euler(new Vector3(0, 0, rollAngle));
    transform.rotation = rotation;
}
/// <summary>
/// Calculates the roll and smooths it (to compensate for the non-C2-continuous control points algorithm).
/// </summary>
/// <returns>The roll angle.</returns>
/// <param name="rollFactor">Roll factor.</param>
float CalculateRollAngle(float rollFactor)
{
    smoothRoll = Mathf.Lerp(smoothRoll, rollFactor, rollSmoothing * Time.deltaTime);
    float angle = Mathf.Atan2(1, smoothRoll * rollIntensity);
    angle *= Mathf.Rad2Deg;
    angle -= 90;
    TurnRollAngle = angle;
    angle += RollOffset;
    return angle;
}
Assuming you have waypoints the drone is following, you should figure out the angle between the last two (i.e. your "now-facing" and "will be facing" directions). The easy way is to use Vector2.Angle.
I would use this angle to determine the amount I'll tilt the drone's body: the sharper the turn, the harder the banking. I would use a ratio value (public initially so I can manipulate it from the editor).
Next, instead of doing any math I would rely on the engine to do the rotation for me, so I would go for the Transform.Rotate function. In case banking can go too high and look silly, I would set a maximum for it and Clamp my calculated banking angle between zero and that max.
Without knowing exactly what you do and how, it's not easy to give perfect code, but for a better understanding of the above, here's some (untested, i.e. pseudo) code for the solution I visualize:
public float turnSpeed = 7.0f; //the drone will "rotate toward the new waypoint" by this speed
//bankSpeed + turnBankRatio must be roughly two times "faster" (and/or a smaller angle) than the turning; see the EDIT below for why:
public float bankSpeed = 14.0f; //banking speed
public float turnBankRatio = .5f; //90 degree turn == 45 degree banking
private float turnAngle = 0.0f; //this is the 'x' degree turning angle we'll "Lerp"
private float turnAngleABS = 0.0f; //same as turnAngle but it's an absolute value. Storing to avoid Mathf.Abs() in Update()!
private float bankAngle = 0.0f; //banking degree
private bool isTurning = false; //are we turning right now?
//when the action is fired for the drone it should go for the next waypoint, call this guy
private void TurningTrigger() {
//remove this line after testing, it's some extra safety
if (isTurning) { Debug.LogError("oups! must not be possible!"); return; }
Vector2 droneOLD2DAngle = GetGO2DPos(transform.position);
//do the code you do for the turning/rotation of drone here!
//or use the next waypoint's .position as the new angle if you are OK
//with the snippet doing the turning for you along with banking. then:
Vector2 droneNEW2DAngle = GetGO2DPos(transform.position);
turnAngle = Vector2.Angle(droneOLD2DAngle, droneNEW2DAngle); //turn degree
turnAngleABS = Mathf.Abs(turnAngle); //avoiding Mathf.Abs() in Update()
bankAngle = turnAngle * turnBankRatio; //bank angle
//you can remove this after testing. This is to make sure banking can
//do a full run before the drone hits the next waypoint!
if ((turnAngle * turnSpeed) < (bankAngle * bankSpeed)) {
Debug.LogError("Banking degree too high, or banking speed too low to complete maneuver!");
}
//you can clamp or set turnAngle based on a min/max here
isTurning = true; //all values were set, turning and banking can start!
}
//get 2D position of a GO (simplified)
private Vector2 GetGO2DPos(Vector3 worldPos) {
return new Vector2(worldPos.x, worldPos.z);
}
private void Update() {
if (isTurning) {
//assuming the drone is banking to the "side" and "side" only
transform.Rotate(0, 0, bankAngle * Time.deltaTime * bankSpeed, Space.Self); //banking
//if the drone is facing the next waypoint already, set
//isTurning to false
} else if (turnAngleABS > 0.0f) {
//reset back to original position (with same speed as above)
//at least "normal speed" is a must, otherwise drone might hit the
//next waypoint before the banking reset can finish!
float bankAngle_delta = bankAngle * Time.deltaTime * bankSpeed;
transform.Rotate(0, 0, -1 * bankAngle_delta, Space.Self);
turnAngleABS -= (bankAngle_delta > 0.0f) ? bankAngle_delta : -1 * bankAngle_delta;
}
//the banking was probably not set back to exactly 0, as Time.deltaTime
//is not a fixed value. If this happened and it looks ugly, reset the
//drone's "z" to Quaternion.identity.z. If it still looks ugly,
//you need to test that you don't "over-bank" in the above code,
//by comparing bankAngle_delta + 'calculated banking angle' against
//the identity.z value, and reset bankAngle_delta if it's too high/low.
//when you are done, your turning animation is over, so:
}
Again, this code might not perfectly fit your needs (or even compile :P), so focus on the idea and the approach, not the code itself. Sorry for not being able to put something together and test it myself right now, but I hope I helped. Cheers!
EDIT: Instead of a wall of text I tried to answer your question in code (still not perfect, but the goal is not to do the job for you, but to help with some snippets and ideas :)
So. Basically, what you have is a distance and an "angle" between two waypoints. This distance and your drone's flight/walk/whatever speed (which I don't know) give you the maximum amount of time available for:
1. Turning, so the drone will face in the new direction
2. Banking to the side, and back to zero/"normal"
As there is twice as much action on the banking side, it either has to be done faster (bankSpeed), at a smaller angle (turnBankRatio), or both, depending on what looks nice and feels real, what your preference is, etc. So it's 100% subjective. It's also your call whether the drone turns and banks quickly as it approaches the next waypoint, or takes it slow, turning just a little when it has a lot of time/distance and only doing things fast when it has to.
As for isTurning:
You set it to true when the drone has reached a waypoint and heads out to the next one AND the variables to (turn and) bank were set properly. When do you set it to false? It's up to you, but the goal is to do so when the maneuver is finished (this was buggy in the snippet the first time, as this "optimal status" could never be reached) so the drone can "reset banking". For further details on what's going on, see the code comments. Again, this is just a snippet to support you with a possible solution to your problem. Give it some time and understand what's going on. It really is easy, you just need some time to cope ;) Hope this helps! Enjoy and cheers! :)

XNA tile based movement [closed]

I am trying to make a 2D tile-based top-down game in XNA. It is 16 x 16 tiles, and each tile is 25 pixels.
I have a character sprite starting at (0, 0), the first tile, and I'm trying to make it movable from tile to tile using the keyboard. So in the Update method, when an arrow key is pressed, I tried adding or subtracting 25 to/from the x or y of the position vector. It stays aligned with the tiles when moving, but it moves about 4-5 tiles instead of just 1 tile at a time. I have tried multiplying by gameTime.TotalGameTime.TotalSeconds, but it doesn't seem to help.
I'm kind of new to using XNA. Does anyone have any tutorials, or can anyone help with how to calculate the movement? Thanks in advance.
If you just check IsKeyDown every frame, it will report the key as down on every frame that it is held. At 60 frames per second, pressing a key will leave it in the down state for several frames, so you move your character on each of those frames! By the time you let go of the key, he'll have moved several squares.
If you want to detect each key press (the key entering the "down" state), you need something like this:
KeyboardState keyboardState, lastKeyboardState;

bool KeyPressed(Keys key)
{
    return keyboardState.IsKeyDown(key) && lastKeyboardState.IsKeyUp(key);
}

protected override void Update(GameTime gameTime)
{
    lastKeyboardState = keyboardState;
    keyboardState = Keyboard.GetState();

    if (KeyPressed(Keys.Right)) { /* do stuff... */ }
}
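For the tile movement itself, the skeletal Update above could expand like this, moving the sprite by exactly one tile (25 px in your case) per key press; position and tileSize are names assumed for illustration:
const int tileSize = 25;   // one tile = 25 pixels
Vector2 position;          // the sprite's position on the board

protected override void Update(GameTime gameTime)
{
    lastKeyboardState = keyboardState;
    keyboardState = Keyboard.GetState();

    // One key press == exactly one tile of movement.
    if (KeyPressed(Keys.Right)) position.X += tileSize;
    if (KeyPressed(Keys.Left))  position.X -= tileSize;
    if (KeyPressed(Keys.Down))  position.Y += tileSize;
    if (KeyPressed(Keys.Up))    position.Y -= tileSize;

    // Keep the sprite on the 16 x 16 board (tiles 0..15 in each direction).
    position.X = MathHelper.Clamp(position.X, 0, 15 * tileSize);
    position.Y = MathHelper.Clamp(position.Y, 0, 15 * tileSize);
}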
However if you want to add a "repeat" effect when holding down the key (like what happens in typing), you need to count the time - something like this:
float keyRepeatTime;
const float keyRepeatDelay = 0.5f; // repeat rate

protected override void Update(GameTime gameTime)
{
    lastKeyboardState = keyboardState;
    keyboardState = Keyboard.GetState();
    float seconds = (float)gameTime.ElapsedGameTime.TotalSeconds;

    if (keyboardState.IsKeyDown(Keys.Right))
    {
        if (lastKeyboardState.IsKeyUp(Keys.Right) || keyRepeatTime < 0)
        {
            keyRepeatTime = keyRepeatDelay;
            // do stuff...
        }
        else
        {
            keyRepeatTime -= seconds;
        }
    }
}
When you use IsKeyDown, you don't have to use timers.
public void HandleInput(KeyboardState keyState)
{
    if (keyState.IsKeyDown(Keys.Left))
    {
        // go left...
    }
}
