I need to map a screen coordinate (retrieved from an eyetracker) to a Node coordinate (ImageView). Is there a way to do this with JavaFX?
How do you get the coordinates of an element if you have its locator? I can use appium-desktop to get the locator and then find the coordinates by script. Once I have A's coordinates, I can guess the coordinates of many elements around A whose locators or coordinates can't be found directly.
Appium has a method that allows you to retrieve the screen location of an element using its locator.
http://appium.io/docs/en/commands/element/attributes/location/
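To illustrate the idea with the Appium Python client (a sketch, not a drop-in solution: the locator, the `driver` session, and the offsets below are hypothetical, and the commented lines assume a running Appium session):

```python
# Hypothetical usage with the Appium Python client:
# from appium.webdriver.common.appiumby import AppiumBy
# element = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "A")
# anchor = element.location  # e.g. {'x': 120, 'y': 340}

def neighbor_coordinate(anchor_location, dx, dy):
    """Estimate a nearby element's screen point by offsetting A's location.

    anchor_location -- dict with 'x' and 'y' keys, as returned by
                       element.location in the Appium client
    dx, dy          -- pixel offsets from A to the target element
    """
    return (anchor_location["x"] + dx, anchor_location["y"] + dy)
```

You can then tap the estimated point with a coordinate-based gesture, accepting that fixed pixel offsets are fragile across screen sizes.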
I have a calibrated stereo camera. So I have the camera intrinsic matrix for the left camera. I have built an absolute 3D model of the scanned region which I am able to load as a mesh file.
I have recorded a video from the stereo camera's left camera as I scanned the region. I also know the position and orientation of the stereo camera at every point during the scanning process, so I recreated this motion of the stereo camera using PCL. If the position and orientation of the stereo camera in the real world match those of the PCL visualizer's camera, will the left camera's photo and the PCL-rendered view match?
I tried doing this, but it looks like the perspective projection done in the PCL visualizer is different from that of the camera. If so, is there a way to change the projection used by the PCL visualizer so that the rendered view matches the camera's image exactly?
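One common source of mismatch is the field of view: the visualizer renders a symmetric frustum with a default view angle, while your real camera's FOV is fixed by its intrinsics. If that is the issue here (an assumption, since I can't see your setup), you can derive the vertical FOV from the left camera's focal length f_y and the image height and pass it to the visualizer (PCLVisualizer has setCameraFieldOfView, in radians; recent versions also have setCameraParameters, which accepts the intrinsic matrix directly). A minimal sketch of the FOV computation:

```python
import math

def vertical_fov(fy, image_height):
    """Vertical field of view implied by pinhole intrinsics.

    fy           -- focal length in pixels (from the intrinsic matrix)
    image_height -- image height in pixels
    Returns (radians, degrees).
    """
    fov_rad = 2.0 * math.atan(image_height / (2.0 * fy))
    return fov_rad, math.degrees(fov_rad)
```

Note this only matches the FOV: if your camera's principal point is noticeably off-center, a symmetric frustum still won't line up exactly, and you would need an off-axis projection for a pixel-accurate match.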
I am working on 360 degree videos, where I need to render an equirectangular video in a normal 2D field of view at any given latitude and longitude on the sphere, just like a 360 degree player does, but controlled by code rather than mouse interaction.
So my questions are:
How can I orient the virtual camera in that sphere with code (Python preferred), given an equirectangular video as input, so that I can get the image at that particular angle on the sphere?
In order to do the above, what transformation operations need to be performed on the input video (equirectangular format) so that I can get the cropped normal field of view in the direction the virtual camera is oriented?
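The usual approach is inverse mapping: for every pixel of the desired perspective view, cast a ray through a virtual pinhole camera, rotate that ray by the chosen yaw/pitch, convert the direction to longitude/latitude, and sample the equirectangular frame there. A minimal NumPy sketch of the coordinate computation (the function name and the y-down/z-forward camera convention are my own choices, not from any particular library):

```python
import numpy as np

def equirect_to_perspective_coords(out_w, out_h, fov_deg,
                                   yaw_deg, pitch_deg, src_w, src_h):
    """Return (map_x, map_y): the equirectangular source coordinates to
    sample for each pixel of a virtual pinhole view at (yaw, pitch)."""
    # Focal length in pixels from the desired horizontal field of view.
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
    # Camera-space ray per output pixel (x right, y down, z forward).
    xs = np.arange(out_w) - (out_w - 1) / 2.0
    ys = np.arange(out_h) - (out_h - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    z = np.full_like(x, f)
    # Rotate rays: pitch about the x-axis, then yaw about the y-axis.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    y2 = y * np.cos(pitch) - z * np.sin(pitch)
    z2 = y * np.sin(pitch) + z * np.cos(pitch)
    x3 = x * np.cos(yaw) + z2 * np.sin(yaw)
    z3 = -x * np.sin(yaw) + z2 * np.cos(yaw)
    # Ray direction -> longitude/latitude -> equirectangular pixel.
    lon = np.arctan2(x3, z3)                                  # [-pi, pi]
    lat = np.arcsin(y2 / np.sqrt(x3**2 + y2**2 + z3**2))      # [-pi/2, pi/2]
    map_x = (lon / np.pi + 1.0) * 0.5 * (src_w - 1)
    map_y = (lat / (np.pi / 2) + 1.0) * 0.5 * (src_h - 1)
    return map_x.astype(np.float32), map_y.astype(np.float32)
```

For video you would compute the maps once per camera orientation and apply them to each frame with a bilinear remap (e.g. OpenCV's cv2.remap); handling of the longitude wrap-around at the +/-180 degree seam is omitted in this sketch.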
Here is the situation:
A camera (Raspberry Pi) is looking at a scene containing an object. I know the real width and height of the object. Is there any way to calculate the distance between the camera and the object from the camera's photo? The object in the picture is not always in the middle. I also know the height of the camera and its angle.
Yes, but it's gonna be difficult. You need the intrinsic camera matrix and a depth map. If you don't have them, then forget it: if you only possess an RGB image with no calibration data, it won't work.
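One caveat: since the question says the object's real width and height are known, a single calibrated RGB image is actually enough for a rough estimate via similar triangles, assuming the object is roughly fronto-parallel and the focal length in pixels is available from calibration. A minimal sketch:

```python
def distance_from_width(focal_px, real_width, pixel_width):
    """Pinhole similar-triangles estimate: Z = f * W / w.

    focal_px    -- focal length in pixels (from the intrinsic matrix)
    real_width  -- true object width, e.g. in metres
    pixel_width -- object's measured width in the image, in pixels
    """
    return focal_px * real_width / pixel_width

# e.g. f = 800 px, a 0.5 m wide object spanning 100 px is about 4 m away
```

The estimate degrades as the object rotates away from the image plane or sits far off-center with strong lens distortion, so undistort the image first if you have the distortion coefficients.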
This is a first person shooter game where I am trying to launch a projectile from a moving player. The launch direction depends on where I click on the screen. The "launcher" has a fixed fire strength, which means that a projectile fired more horizontally will travel further before hitting the ground; likewise, a projectile fired in a more upwards direction will go higher but will have travelled less horizontally when it hits the ground due to gravity. The firing vector is determined by where the finger touches the screen, and then multiplied by a public parameter "firingstrength". Make sense so far?
What I am confused about is how to get the position of the finger on the screen, which I use to calculate the vector to apply to the projectile.
I have imagined doing this (I am new to Unity) by the following:
Empty object (Player), which contains:
movement scripts
Camera
An invisible inverted sphere which surrounds the camera; I use this sphere to pick up the mouse clicks (i.e. when I click on the screen, the game should detect where on the inside of the sphere I clicked, in order to calculate a vector between the camera position and the point on the sphere wall that I clicked)
Once I have the vector, I just multiply it by a "firingstrength" variable and apply it to a projectile that originates from the camera position.
Does this make sense or is there a better way to do this?
Kevin
You do not need a sphere; just use Input.mousePosition and construct a ray from it using ScreenPointToRay(). Then you just use ray.direction to get a Vector3 pointing away from your camera position.
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
Vector3 shootVector = ray.direction;
This should do the trick.
For touch interface:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Vector3 shootVector = ray.direction;