How to load keyframe animation in DirectX 10

I am a beginner in game development. I want to write a simple game with DirectX 10 and need to load an animation, such as a man playing cards. I don't know how to start. As far as I know, I must create my own file format and convert to it from some other format, but I don't know how. So please help me, and if possible give a code example.

The DirectX SDK features a sample application with source code called Skinning10 that implements character animation.
The NVIDIA Direct3D SDK shows off Skinning with Dual Quaternions.
Also, you may want to have a look at the CAL3D Character Animation Library.
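If you just want to see the moving parts of keyframe animation before settling on a file format, a tiny sketch can help. The structures below are illustrative only - they are not taken from the Skinning10 sample or any existing format - and the math uses DirectXMath; in the Direct3D 10 SDK era you would use the equivalent D3DX calls such as D3DXVec3Lerp and D3DXQuaternionSlerp.

// Illustrative sketch only: a made-up in-memory keyframe layout and a simple
// interpolation routine, not any SDK sample's format.
#include <vector>
#include <DirectXMath.h>
using namespace DirectX;

struct Keyframe
{
    float    time;         // seconds from the start of the clip
    XMFLOAT3 translation;
    XMFLOAT4 rotation;     // quaternion
    XMFLOAT3 scale;
};

struct BoneAnimation
{
    std::vector<Keyframe> keys;   // sorted by time

    static XMMATRIX ToMatrix(const Keyframe& k)
    {
        return XMMatrixAffineTransformation(XMLoadFloat3(&k.scale),
                                            XMVectorZero(),
                                            XMLoadFloat4(&k.rotation),
                                            XMLoadFloat3(&k.translation));
    }

    // Bone transform at time t: lerp position/scale, slerp rotation between
    // the two surrounding keyframes.
    XMMATRIX Interpolate(float t) const
    {
        if (t <= keys.front().time) return ToMatrix(keys.front());
        if (t >= keys.back().time)  return ToMatrix(keys.back());

        for (size_t i = 0; i + 1 < keys.size(); ++i)
        {
            const Keyframe& a = keys[i];
            const Keyframe& b = keys[i + 1];
            if (t < a.time || t > b.time) continue;

            float s = (t - a.time) / (b.time - a.time);
            XMVECTOR T = XMVectorLerp(XMLoadFloat3(&a.translation),
                                      XMLoadFloat3(&b.translation), s);
            XMVECTOR R = XMQuaternionSlerp(XMLoadFloat4(&a.rotation),
                                           XMLoadFloat4(&b.rotation), s);
            XMVECTOR S = XMVectorLerp(XMLoadFloat3(&a.scale),
                                      XMLoadFloat3(&b.scale), s);
            return XMMatrixAffineTransformation(S, XMVectorZero(), R, T);
        }
        return XMMatrixIdentity();
    }
};

Each frame you evaluate Interpolate for every bone at the current clip time and upload the resulting matrices to your skinning shader. The "file format" is then just a serialization of these per-bone keyframe arrays, which you can export from a modelling package or convert from an existing format such as .X or COLLADA.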

Related

Qt6 circular gauge issue

I am developing a QML app in Qt 5 that uses CircularGauge. I want to port the application to Qt 6, but in Qt 6 Qt Quick Extras is missing, so CircularGauge is not available. Is Qt planning to make it available in an upcoming Qt 6 release? What can I use instead of CircularGauge in Qt 6?
Does anyone know about this?
As far as I know there are no plans yet to port it to Qt 6. You could create a suggestion on Jira to express your interest in having it.
The code is here and here if you want to try to port it yourself or just use parts of it (keeping in mind the license, which is LGPL).
As you stated in the question, CircularGauge is not available in Qt6. So what can you do?
As a minimal effort, you can substitute CircularGauge with a functionally similar component, for instance RangeSlider. Of course, a RangeSlider looks nothing like a CircularGauge, but it will at least allow you to compile and run your application. It will give you something to test whilst you decide your options.
Then, as others have stated, you need to allocate more effort to porting. If you refer to the source code of CircularGauge, you will see that it uses Canvas with a custom onPaint implementation. You could do the same in your port, or you could find an alternative, e.g. Shape with ShapePath, etc. These efforts are non-trivial, and it boils down to the level of effort you wish to invest.

qt3d and the oculus sdk

Given qt3d's structure, is it possible to integrate the oculus sdk with a qt3d application?
I have tried but my two main obstacles are:
I can't use the textures from the texture swap chain created by the oculus sdk as a render target attachment.
I am not able to call ovr_SubmitFrame at the end of each frame, since qt3d doesn't have a signal that would allow me to do so.
Has anyone successfully gotten the oculus sdk to work with qt3d? If so, how did you overcome these issues?
Are there any plans for allowing the integration of VR SDKs (not just oculus') in qt3d in further releases?
You could probably do it with some sort of custom framegraph that encapsulates the stereo rendering functionality and includes a custom component that takes the currently rendered content and submits it to the SDK prior to the swapbuffer call.
Alternatively you could dive into the code that processes the framegraph itself and see how hard it would be to customize it to work against a VR API. I've done significant work with integrating Qt apps with VR, but not specifically with Qt3D.
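To make the "submit to the SDK" half of that idea concrete, here is a rough, untested sketch of the Oculus side, assuming the Oculus PC SDK 1.x C API and OpenGL. How you find out that Qt 3D has finished the frame is exactly the open question from the post; this sketch deliberately leaves that part out.

// Hedged sketch: hand a rendered swap-chain texture to the Oculus compositor.
// Assumes Oculus PC SDK 1.x and a single side-by-side texture for both eyes.
#include <OVR_CAPI_GL.h>

void submitToOculus(ovrSession session,
                    ovrTextureSwapChain swapChain,
                    const ovrEyeRenderDesc eyeDesc[2],
                    const ovrPosef eyePoses[2],
                    int texWidth, int texHeight,
                    long long frameIndex)
{
    // Tell the compositor we are done writing into the current chain texture.
    ovr_CommitTextureSwapChain(session, swapChain);

    ovrLayerEyeFov layer = {};
    layer.Header.Type  = ovrLayerType_EyeFov;
    layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft; // OpenGL convention

    for (int eye = 0; eye < 2; ++eye)
    {
        layer.ColorTexture[eye]    = swapChain;      // shared side-by-side texture
        layer.Fov[eye]             = eyeDesc[eye].Fov;
        layer.RenderPose[eye]      = eyePoses[eye];
        layer.Viewport[eye].Pos.x  = eye * texWidth / 2;
        layer.Viewport[eye].Pos.y  = 0;
        layer.Viewport[eye].Size.w = texWidth / 2;
        layer.Viewport[eye].Size.h = texHeight;
    }

    const ovrLayerHeader* layers[] = { &layer.Header };
    ovr_SubmitFrame(session, frameIndex, nullptr, layers, 1);
}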
The frame graph will indeed provide one part of the solution for the stereoscopic rendering setup. There is already an anaglyphic stereo example showing most of what you need that ships with Qt 3D.
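For reference, the frame graph part of a stereo setup is fairly small. Below is a rough C++ sketch of a two-viewport (side-by-side) frame graph using Qt 3D classes available around Qt 5.7; treat the exact wiring as illustrative rather than copied from the anaglyph example.

// Rough sketch: side-by-side stereo frame graph, one branch per eye camera.
// See the anaglyph stereo example shipped with Qt 3D for a complete setup.
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QRenderSurfaceSelector>
#include <Qt3DRender/QViewport>
#include <Qt3DRender/QClearBuffers>
#include <Qt3DRender/QCameraSelector>
#include <Qt3DRender/QCamera>
#include <QRectF>

Qt3DRender::QFrameGraphNode *buildStereoFrameGraph(Qt3DRender::QCamera *leftEye,
                                                   Qt3DRender::QCamera *rightEye)
{
    auto *surfaceSelector = new Qt3DRender::QRenderSurfaceSelector;

    // Clear colour and depth once for the whole surface.
    auto *clear = new Qt3DRender::QClearBuffers(surfaceSelector);
    clear->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);

    // Left half of the surface, rendered with the left-eye camera.
    auto *leftViewport = new Qt3DRender::QViewport(surfaceSelector);
    leftViewport->setNormalizedRect(QRectF(0.0, 0.0, 0.5, 1.0));
    auto *leftSelector = new Qt3DRender::QCameraSelector(leftViewport);
    leftSelector->setCamera(leftEye);

    // Right half, rendered with the right-eye camera.
    auto *rightViewport = new Qt3DRender::QViewport(surfaceSelector);
    rightViewport->setNormalizedRect(QRectF(0.5, 0.0, 0.5, 1.0));
    auto *rightSelector = new Qt3DRender::QCameraSelector(rightViewport);
    rightSelector->setCamera(rightEye);

    return surfaceSelector;
}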
Integrating the swap chain of the Oculus SDK will require deeper integration. I do not know the details of the Oculus SDK as yet, but we can take a look.
From what I can see, you should be able to do something analogous to the Scene3D custom Qt Quick 2 item to render to the textures provided by the Oculus SDK and to tell Qt 3D which OpenGL context to use. See
http://code.qt.io/cgit/qt/qt3d.git/tree/src/quick3d/imports/scene3d?h=5.7
Nicolas, I also do not appreciate you publicly saying that KDAB are not much help. I only received an email from Karsten on Friday, which I responded to despite being on vacation, saying that we can help but it will be on a best-efforts basis, since you are not paying and I have a very full workload preparing Qt 3D for release at the end of the month along with Qt 5.7. Today is a public holiday in the UK, as you are aware, yet you are already saying detrimental things about us.
You were also directed to post to the interest#qt-project.org mailing list or the qt-forums, as I do not tend to monitor SO or the qt-forums on a regular basis. You could have also emailed us directly or via the development#qt-interest mailing list.
We would be more than happy to set up a support agreement with you.

Sketchup API for navigating around a model (eventually to integrate with Leap Motion)

I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step would be to figure out how to control the basic navigation gestures via the Sketchup API. After a bit of research, I see that there are the 'Camera' and the 'Animation' interfaces, but I think they would be more suited to 'hardcoded' paths and motions within the script.
Therefore I was wondering - does anyone know how I can write a plugin that is able to accept inputs from another program (my eventual Leap Motion app in this case) and translate them into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and the 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera and the Animation class are what you are looking for. Maybe you don't even need the Animation class; you might be OK with using a timer in the UI class. It depends on the details of what you will be doing.
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
For bridging input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks in getting C Extensions to work with SketchUp Ruby as opposed to regular Ruby though, depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from Ruby Installers: http://rubyinstaller.org/
This answer is related to my comment above regarding the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not if I use camera.set.
I figured out this was happening because the FOV of the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves this problem:
new_camera = Sketchup::Camera.new new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov
Hope people find this useful!

How do I build a console app for Xbox Kinect

I'm fairly new to programming and I've been presented with a fairly daunting assignment at work. I need to build a program from scratch in order to take advantage of the Kinect's motion tracking capabilities to interface with another application.
Some context:
Someone else I work with has built the test program - a console app using OpenGL. The test program consists of a cube inside of a skymap. The camera looks at the small cube, and can be rotated around the cube to view it from different perspectives.
Someone else was able to use the sample codes in the developer toolkit to control the test program. The test program now works with motion tracking (swiping your hand to the side rotates the cube; moving your head side to side changes the camera angle so it looks like you are looking around a floating 3D object; walking forward or backward zooms the camera). It works as it is, but...
The problem is this: Now that we know it all works, it's time to simplify everything so that we can run the test program on a tablet. So the code needs to be stripped down to the bare bones. We need to remove everything from the SkeletalViewer code except for the elements that gather and process the data, so that it can be used in another program.
I've been asked to build a console app from scratch (rather than tearing apart the sample code-as this is extremely messy) that allows us to use the Kinect with our test program.
I've spent the last few weeks trying to figure out the code and I'm feeling overwhelmed! I don't know where to start.
Here's my question: what are the absolute bare essential building blocks in the Kinect program? I do not need it to draw ANYTHING. I just need a console app that, when running, gathers the motion tracking data and sends it to the other program.
I would greatly appreciate any guidance you can provide.
Thank you in advance!!!
-JD
You don't need to draw anything, but you do need to create an event for caching the frames in order to work on them.
Here's a pretty cool description of skeletal joints: MSDN - Kinect
Another thing I can give you is the main page of the Kinect project. It has the libraries, guides, code samples, etc. You can download and install the Kinect Toolkit; it contains several programs (binaries + code samples) - everything you need to learn the Kinect API: Kinect For Windows - Downloads and Kinect For Windows - Learn
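To make the "event for caching the frames" idea concrete, here is a bare-bones, untested sketch of a console program built on the Kinect for Windows SDK 1.x C++ API (the same native API the SkeletalViewer sample uses). It only gathers skeleton data; printf stands in for whatever IPC you choose to hand the joints to the OpenGL test program, and error handling is minimal.

// Minimal sketch: console app that waits for skeleton frames and reads joints.
// No drawing, no window - just the data-gathering building blocks.
#include <windows.h>
#include <NuiApi.h>
#include <cstdio>

int main()
{
    // Initialise the runtime for skeleton tracking only.
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;

    // Event the runtime signals whenever a new skeleton frame is ready.
    HANDLE frameEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    if (FAILED(NuiSkeletonTrackingEnable(frameEvent, 0)))
        return 1;

    for (;;)   // runs until the process is killed; add a real exit condition
    {
        WaitForSingleObject(frameEvent, INFINITE);
        ResetEvent(frameEvent);

        NUI_SKELETON_FRAME frame = {0};
        if (FAILED(NuiSkeletonGetNextFrame(0, &frame)))
            continue;
        NuiTransformSmooth(&frame, nullptr);   // optional jitter filtering

        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED)
                continue;

            const Vector4& head = s.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
            const Vector4& hand = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
            // Replace with IPC (pipe, socket, shared memory) to the test program.
            printf("head (%.2f %.2f %.2f)  right hand (%.2f %.2f %.2f)\n",
                   head.x, head.y, head.z, hand.x, hand.y, hand.z);
        }
    }

    NuiShutdown();   // unreached in this sketch
    return 0;
}

Waiting on the event rather than polling in a tight loop is the "event for caching the frames" mentioned above, and it keeps the console app from burning CPU between frames.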

GeoTools and JOGL

I am developing a desktop application for Windows that will constantly show the map moving as a vehicle moves, getting the location from a GPS unit. I have written a quick little demo that accomplishes this; however, it constantly flickers. I need to implement a solution that has nice smooth movement. I have several things in mind that will hopefully resolve this.
First I think it would be better to create the map at around 2 to 3 times the size of the visible window and move the window over the map. When the window gets close to a side a new map is built in the background which replaces the current map. I'm still working on how to do that. (Hints welcome!! lol)
I think it would be a big step forward and the ultimate solution to turn on hardware acceleration for the application. Based on what I have read in the forums JOGL is the only practical way to add hardware acceleration to a Java desktop application. So here I am.
Has anyone toyed around with adding JOGL to GeoTools? I'm not talking about 3D at the moment, just making the display more efficient. It looks like it's going to be a big job implementing this change, as it appears that it will affect almost all of the code from JMapPane on down. Any suggestions, code examples and hints will be greatly appreciated!!!
You may use the GLG2D project to render a Swing JComponent such as JMapPane using JogAmp JOGL 2. This will allow you to use hardware acceleration with minimal changes to the GeoTools source.
http://brandonborkholder.github.io/glg2d/
JComponent renderPane = new JMapPane(...
JFrame frame = new JFrame("My GeoTools application");
frame.setContentPane(new GLG2DCanvas(renderPane));
...
