I need some guidance on how to implement an Arm model for the Daydream controller as required by: https://developers.google.com/vr/distribute/daydream/design-requirements#UX-C1.
Background / Problem:
I'm trying to develop a Daydream application with the Android NDK, with controller interactions for the Pixel, using ControllerApi from references like [1]. However, there doesn't seem to be any GVR arm model helper class in the Android Daydream SDK/NDK, nor is there any guidance on the Daydream SDK documentation site.
As such, my questions are:
1) Is there a GVR Arm Model helper class, or is this something that developers would implement themselves individually? If the latter, is there documentation on how this can be done?
2) If we do implement it ourselves, could we simplify the problem by assuming:
A fixed position for the elbow joint in absolute space (an assumed Vector3f position)
Fixed forearm and hand lengths
so as to then calculate the controller's location and rotation from the rotations around the wrist and elbow? Or is there a separate recommended approach?
[1] - https://developers.google.com/vr/android/ndk/reference/group/controller#gvr_controller_state_create
1) Is there a GVR Arm Model helper class, or is this something that developers would implement themselves individually? If the latter, is there documentation on how this can be done?
There is no GVR Arm Model helper class included in the Android NDK. However, both the Unreal and Unity Daydream integrations have arm model code built in that you can utilize. I would recommend using the C++ version of the arm model that is included as part of Unreal. The arm model has no dependencies on Unreal code, so you should be able to integrate it into your app without too much refactoring.
The C++ arm model in Unreal behaves just like the arm model in Daydream Home. It exposes a wrist position and wrist rotation that are relative to the user's head, as well as a recommended alpha to render the controller at so that it doesn't clip uncomfortably into the user's head. It also provides an angle that the laser should be tilted downward from the wrist for ergonomic comfort when pointing at objects. More information about the arm model can be found here: https://developers.google.com/vr/unity/controller-support
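For orientation, this is roughly the shape of the per-frame data such an arm model exposes. The field names below are descriptive placeholders, not the actual Unreal API:

```cpp
// Illustrative only: the kind of per-frame output an arm model produces.
// These names are descriptive placeholders, not the Unreal arm model's API.
struct ArmModelOutput {
  float wrist_position[3];   // relative to the user's head, in meters
  float wrist_rotation[4];   // quaternion (x, y, z, w), head-relative
  float controller_alpha;    // recommended opacity; fades near the head
  float laser_tilt_degrees;  // downward tilt of the laser from the wrist
};
```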
2) If we do implement it ourselves, could we simplify the problem by assuming:
I would strongly recommend you use the Unreal C++ Arm Model as a starting point; however, you should definitely feel free to modify it and tune it so that it suits the needs of your application.
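If you do want to start from the simplified assumptions in your question, a minimal single-joint sketch could look like the following. To be clear, this is an illustration of the idea, not the Unreal arm model: Vec3 and Quat are local helper types, the elbow offset and forearm length are made-up constants to tune, and the only GVR dependency assumed is that the controller's orientation quaternion comes from gvr_controller_state_get_orientation().

```cpp
// Minimal single-joint arm model sketch: a fixed elbow pivot relative to
// the head and a rigid forearm swung by the controller's orientation.
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };  // unit quaternion

// Rotate vector v by unit quaternion q (v' = q * v * q^-1).
Vec3 Rotate(const Quat& q, const Vec3& v) {
  const Vec3 t = {2.0f * (q.y * v.z - q.z * v.y),
                  2.0f * (q.z * v.x - q.x * v.z),
                  2.0f * (q.x * v.y - q.y * v.x)};
  return {v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
          v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
          v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// Assumed constants, in meters, relative to the head; tune per application.
const Vec3  kElbowOffset   = {0.19f, -0.26f, -0.01f};  // right-handed user
const float kForearmLength = 0.30f;                    // elbow to controller

// Controller position relative to the head, given the orientation reported
// by the controller API: swing the forearm around the fixed elbow pivot.
Vec3 ControllerPosition(const Quat& controller_orientation) {
  const Vec3 forearm = {0.0f, 0.0f, -kForearmLength};  // -Z is forward
  const Vec3 swung = Rotate(controller_orientation, forearm);
  return {kElbowOffset.x + swung.x,
          kElbowOffset.y + swung.y,
          kElbowOffset.z + swung.z};
}
```

A real arm model does more than this (for example, estimating torso direction from head yaw), which is why starting from the Unreal code is still the better route.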
I am trying to build a Xamarin.Forms app that enables indoor positioning using iBeacons. For now, I can only do testing on Android.
Before I jump into trying to adapt existing native packages, I wanted to know if there were some existing libraries.
Thanks in advance.
Indoor positioning using iBeacon is a complex feature that requires both hardware (the iBeacon devices) and software: components to set up the location map and the beacon positions, all the logic for position calculation, and so on. Building all of this from scratch is quite a complex task, so I suggest trying something that already exists.
For example, Estimote has an indoor positioning feature in their SDK, but from what I know they use a fingerprinting method to calculate position, which is a bit inaccurate; Leantegra (Leantegra GitHub), in turn, possesses this functionality as well, and uses a multilateration method, which is more accurate. So feel free to try it out.
Trilateration is only the first step; to get appropriate accuracy, you need to use a multilateration method: calculating position based on the signal to multiple (more than 3) beacon devices. If you represent each iBeacon as a circle whose radius corresponds to the distance estimated from signal strength, the circles rarely intersect in a single point, so calculating a position becomes quite a complex task (a minimal least-squares sketch follows below)...
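To make that concrete, here is a hedged sketch (not Estimote's or Leantegra's code) of 2-D multilateration by linear least squares. It assumes each beacon's position is known and that a distance estimate has already been derived from signal strength via a path-loss model; real systems add filtering (e.g. a Kalman filter) on top:

```cpp
// Linearize the circle equations (x - xi)^2 + (y - yi)^2 = di^2 against the
// first beacon and solve the resulting 2x2 normal equations.
#include <cmath>
#include <cstdio>
#include <vector>

struct Beacon { double x, y, dist; };

bool Multilaterate(const std::vector<Beacon>& b, double* px, double* py) {
  if (b.size() < 3) return false;  // need at least 3 beacons for a 2-D fix
  double a11 = 0, a12 = 0, a22 = 0, r1 = 0, r2 = 0;
  for (size_t i = 1; i < b.size(); ++i) {
    const double ax = 2.0 * (b[i].x - b[0].x);
    const double ay = 2.0 * (b[i].y - b[0].y);
    const double rhs = b[0].dist * b[0].dist - b[i].dist * b[i].dist
                     + b[i].x * b[i].x - b[0].x * b[0].x
                     + b[i].y * b[i].y - b[0].y * b[0].y;
    a11 += ax * ax; a12 += ax * ay; a22 += ay * ay;
    r1 += ax * rhs; r2 += ay * rhs;
  }
  const double det = a11 * a22 - a12 * a12;
  if (std::fabs(det) < 1e-9) return false;  // beacons are collinear
  *px = (a22 * r1 - a12 * r2) / det;
  *py = (a11 * r2 - a12 * r1) / det;
  return true;
}

int main() {
  // Beacons at known positions; distances as estimated from signal strength.
  const std::vector<Beacon> beacons = {
      {0, 0, 5.0}, {10, 0, 8.06}, {0, 10, 6.71}};
  double x = 0, y = 0;
  if (Multilaterate(beacons, &x, &y))
    std::printf("estimated position: (%.2f, %.2f)\n", x, y);  // ~(3, 4)
}
```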
I have an application written in Java 3D. As Java 3D is now virtually dead, I am thinking about converting the code to JavaFX (JavaFX 8 supports 3D objects).
The question is whether it is relatively simple to convert Java 3D code to JavaFX code.
Are there straightforward counterparts of Java 3D methods in JavaFX or would it be more like a total redesign of the code?
Here is a little list of packages used in the Java 3D code:
javax.media.j3d.Alpha;
javax.media.j3d.Appearance;
javax.media.j3d.Behavior;
javax.media.j3d.BoundingSphere;
javax.media.j3d.BranchGroup;
javax.media.j3d.Canvas3D;
javax.media.j3d.GeometryArray;
javax.media.j3d.LineArray;
javax.media.j3d.PointLight;
javax.media.j3d.Shape3D;
javax.media.j3d.Switch;
javax.media.j3d.Transform3D;
javax.media.j3d.TransformGroup;
javax.media.j3d.WakeupOnElapsedFrames;
javax.media.j3d.WakeupOnElapsedTime;
javax.vecmath.Matrix4f;
javax.vecmath.Vector3d;
javax.vecmath.Vector3f;
Java 3D isn't dead; you're completely wrong, as you can see here. There is a wide choice of scenegraph APIs more capable than the JavaFX 3D API, which is particularly poor in my humble opinion.
I don't know why gouessej says Java 3D isn't dead; there will not be any feature development for Java 3D going forward.
However, he/she is correct that the base JavaFX 3D API is very lacking in features.
If you want to port your application to JavaFX 3D, you will have to rewrite the rendering portions to match the new JavaFX API. From the list that you provided, only PointLight and Shape3D have DIRECT counterparts. Alpha transparency is an undocumented, unsupported feature as of 8u40 that will get compiled into the official build for Java 9. The F(X)yz team has a demo of it working just fine, but we had to recompile the platform from sources ;-).
You are not alone, though: there is now free, open-source, third-party support via F(X)yz (shameless plug...):
http://www.fxyz3d.org
I wonder what the bounds of Qt's perimeter are. I know, for example, that it can specify types (such as qint or QString), and I know it cannot get system information such as CPU usage or memory usage.
My question is about the limits of Qt.
Is it correct that Qt can only interact with what is inside the project, but not with what is outside (I mean system-related things)?
You can get information about the operating system with the QSysInfo class, if that is what you are looking for. That is just one example; I am sure there are other helper classes. For information like CPU usage, though, I think you should use other libraries; see here and also this question.
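For illustration, here is a minimal sketch (Qt 5.4+) of what QSysInfo reports. Note that it describes what the platform is, not live metrics such as CPU or memory usage:

```cpp
// QSysInfo exposes static platform information; no object construction needed.
#include <QDebug>
#include <QSysInfo>

int main() {
  qDebug() << "OS:" << QSysInfo::prettyProductName();    // e.g. "Windows 10"
  qDebug() << "Kernel:" << QSysInfo::kernelType()        // e.g. "linux"
           << QSysInfo::kernelVersion();                 // e.g. "4.4.0-21"
  qDebug() << "CPU arch:" << QSysInfo::currentCpuArchitecture();  // "x86_64"
  return 0;
}
```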
Qt is nothing more and nothing less than a cross-platform C++ GUI framework. It doesn't really have a perimeter; it has certain cross-platform functionality implemented (widgets, frames, controls, and a lot of other things). Within its own functionality it provides (as mentioned above) the QSysInfo class, but you are free to add any OS-dependent (if you target your application at a particular platform) or cross-platform solutions for whatever tasks you need: hardware info, OS monitoring, etc.
I want to use PCL (Point Cloud Library) to implement cube or rectangle detection, at any size, in a scene.
Can anyone give me some direction?
You may want to take a look at this PCL tutorial or, in general, at all the techniques implemented in the pcl::recognition module.
On the PCL users mailing list archive (here), there is an older but still useful discussion about simple object recognition. For simple objects, as in your case, you may consider using Sample Consensus for segmenting the model inside your scene point cloud.
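As a concrete starting point, here is a minimal sketch of RANSAC plane segmentation with pcl::SACSegmentation ("scene.pcd" is a placeholder for your own capture). Repeatedly extracting planes, removing their inliers, and testing the plane normals for orthogonality is one simple route to finding the faces of a cube:

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("scene.pcd", *cloud);  // placeholder input file

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // meters; tune to your sensor's noise
  seg.setInputCloud(cloud);

  pcl::ModelCoefficients coefficients;  // plane: ax + by + cz + d = 0
  pcl::PointIndices inliers;            // indices of points on the plane
  seg.segment(inliers, coefficients);
  return inliers.indices.empty() ? 1 : 0;
}
```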
Hi guys,
I am having trouble adding object picking to a JOGL project.
I know that this could be done with the pick buffer, but I can't find examples.
Anyone?
In general, as you are probably aware, JOGL code translates directly from any other OpenGL examples you might see on the web.
GL_SELECT-based picking seems to be very much out of favour these days; it is deprecated in the spec and poorly implemented by drivers.
Alternatives you can use are:
Rendering each object with a unique color (and all lighting, fog, etc. disabled) so you can determine which object the mouse is over via glReadPixels, then clearing the buffers before rendering your normal graphics. This approach is explained by the top-rated answer to OpenGL GL_SELECT or manual collision detection?, for example, and is sketched right after this list.
Ray-casting into your geometry (see the selection FAQ link below). This also means that you don't have to have an active GL context in the thread you call the code from, FWIW.
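Here is a minimal sketch of the first approach in legacy OpenGL; since JOGL mirrors the C API, the calls map one-to-one. drawObjectGeometry is a placeholder for your own draw code:

```cpp
#include <GL/gl.h>

extern void drawObjectGeometry(int id);  // assumed: your own rendering

int PickObject(int mouseX, int mouseY, int viewportHeight, int objectCount) {
  // Flat colors only: disable anything that would alter the written color.
  glDisable(GL_LIGHTING);
  glDisable(GL_FOG);
  glDisable(GL_TEXTURE_2D);
  glDisable(GL_BLEND);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  for (int id = 0; id < objectCount; ++id) {
    const int key = id + 1;  // reserve 0 (black) for "background"
    glColor3ub(static_cast<GLubyte>(key & 0xFF),
               static_cast<GLubyte>((key >> 8) & 0xFF),
               static_cast<GLubyte>((key >> 16) & 0xFF));
    drawObjectGeometry(id);
  }

  unsigned char pixel[3];
  // Window y grows downward; glReadPixels expects bottom-up coordinates.
  glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
               GL_RGB, GL_UNSIGNED_BYTE, pixel);
  const int key = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);

  // Clear again so the normal render pass starts from a clean buffer.
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  return key - 1;  // -1 means nothing under the cursor
}
```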
I've used both of these methods in the same application, and am currently having good results with the latter; since most of the objects in that application are spheres, it is a lot cheaper than it would be with arbitrary models (a minimal ray-sphere test is sketched below).
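For the sphere case, the per-object test in the ray-casting approach reduces to a ray-sphere intersection. A self-contained sketch, with plain structs and no GL types:

```cpp
// Intersect a picking ray (unprojected from the mouse position) with a
// sphere; call this per object and keep the nearest non-negative hit.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the first hit, or negative if the ray misses.
// `dir` must be normalized.
double RaySphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
  const Vec3 oc = Sub(origin, center);
  const double b = Dot(oc, dir);                 // half the linear term
  const double c = Dot(oc, oc) - radius * radius;
  const double disc = b * b - c;                 // quadratic discriminant
  if (disc < 0.0) return -1.0;                   // no intersection
  const double t = -b - std::sqrt(disc);         // nearer root first
  return (t >= 0.0) ? t : -b + std::sqrt(disc);  // ray may start inside
}
```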
http://www.opengl.org/resources/faq/technical/selection.htm