How do I build a console app for the Xbox Kinect?

I'm fairly new to programming and I've been presented with a fairly daunting assignment at work: I need to build a program from scratch that uses the Kinect's motion-tracking capabilities to interface with another application.
Some context:
Someone else I work with has built the test program - a console app using OpenGL. The test program consists of a cube inside a skymap. The camera looks at the small cube and can be rotated around it to view it from different perspectives.
Someone else was able to use the sample code in the developer toolkit to control the test program. The test program now works with motion tracking (swiping your hand to the side rotates the cube; moving your head side to side changes the camera angle so it looks like you are looking around a floating 3D object; walking forward or backward zooms the camera). It works as it is, but...
The problem is this: now that we know it all works, it's time to simplify everything so that we can run the test program on a tablet. The code needs to be stripped down to the bare bones. We need to remove everything from the SkeletalViewer code except the elements that gather and process the data, so that it can be used in another program.
I've been asked to build a console app from scratch (rather than tearing apart the sample code, as that is extremely messy) that allows us to use the Kinect with our test program.
I've spent the last few weeks trying to figure out the code and I'm feeling overwhelmed! I don't know where to start.
Here's my question: what are the absolute bare essential building blocks in the Kinect program? I do not need it to draw ANYTHING. I just need a console app that, when running, gathers the motion tracking data and sends it to the other program.
I would greatly appreciate any guidance you can provide.
Thank you in advance!!!
-JD

You don't need to draw anything, but you do need to create an event for caching the frames so that you can work on them.
Here's a pretty cool description of the skeletal joints: MSDN - Kinect
Another thing I can give you is the main page of the Kinect project, where you'll find the libraries, guides, code samples, etc. You can download and install the Kinect Toolkit; there are several programs inside (binaries + code samples) - everything you need to learn the Kinect API: Kinect For Windows - Downloads and Kinect For Windows - Learn
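To make that concrete, here is a minimal sketch of such a console app against the native C++ API of the Kinect for Windows SDK 1.x (the managed C# API has direct equivalents). Error handling is trimmed, and the printf stands in for whatever IPC mechanism (socket, pipe, shared memory) you use to reach the test program:

#include <Windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK 1.x
#include <cstdio>

int main()
{
    // Initialise the runtime for skeleton tracking only - no colour or
    // depth streams, since we never draw anything.
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;

    // The event the runtime signals whenever a new skeleton frame is
    // ready - this is the "event for caching the frames" mentioned above.
    HANDLE nextFrame = CreateEvent(NULL, TRUE, FALSE, NULL);
    NuiSkeletonTrackingEnable(nextFrame, 0);

    while (true)
    {
        WaitForSingleObject(nextFrame, INFINITE);

        NUI_SKELETON_FRAME frame = {0};
        if (FAILED(NuiSkeletonGetNextFrame(0, &frame)))
            continue;
        NuiTransformSmooth(&frame, NULL);  // default jitter filtering

        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED)
                continue;
            Vector4 head = s.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
            Vector4 hand = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
            // Replace the printf with your IPC of choice to feed the
            // joint positions (metres, sensor-relative) to the test program.
            std::printf("head %.2f %.2f %.2f  hand %.2f %.2f %.2f\n",
                        head.x, head.y, head.z, hand.x, hand.y, hand.z);
        }
    }

    NuiShutdown();  // unreachable in this sketch; call it on your real exit path
    return 0;
}

That really is the whole pipeline: initialise, enable tracking with an event, wait-read-process in a loop, shut down. Everything else in SkeletalViewer is drawing.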

Related

Sketchup API for navigating around a model (eventually to integrate with Leap Motion)

I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step should be to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the 'Camera' and 'Animation' interfaces, but they seem more suited to hardcoded paths and motions within a script.
Therefore I was wondering - does anyone know how I can write a plugin that accepts input from another program (my eventual Leap Motion app in this case) and translates it into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera and the Animation class are what you are looking for. Maybe you don't even need the Animation class; you might be OK with using a timer in the UI module. It depends on the details of what you will be doing.
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
For bridging input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks to getting C Extensions working for SketchUp Ruby as opposed to regular Ruby, though - depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit (DevKit) from RubyInstaller: http://rubyinstaller.org/
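To make the bridging idea concrete, here is a minimal sketch of the C side of such an extension (the LeapBridge module name, the function, and the "read a gesture from the external app" stub are all hypothetical). Built as leap_bridge.so/.bundle/.dll, it can be require'd from a SketchUp plugin, which would poll it from a UI timer and map the results onto camera moves:

// leap_bridge.cpp - compile against the same Ruby that SketchUp embeds.
#include <ruby.h>

// Stub: a real bridge would read from a socket or pipe that the external
// (e.g. Leap Motion) application writes gesture events to.
static VALUE next_gesture(VALUE self)
{
    return rb_str_new_cstr("pan_left");  // hypothetical placeholder result
}

extern "C" void Init_leap_bridge(void)
{
    VALUE mod = rb_define_module("LeapBridge");
    rb_define_module_function(mod, "next_gesture",
                              RUBY_METHOD_FUNC(next_gesture), 0);
}

On the Ruby side, the plugin would then call LeapBridge.next_gesture inside a repeating UI.start_timer block and translate each result into Camera calls like the set example above.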
This answer is related to my comment above regarding the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not if I use camera.set.
I figured out this was happening because the FOV of the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves the problem:
new_camera = Sketchup::Camera.new(new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov)
Hope people find this useful!

PIC Programming - The basic flow of things

Can someone please explain to me the basic flow of how this is done?
So currently I have a USB PIC programmer and also a multi-PIC adapter. I understand that I can use this to write my program to the PIC. But I'm not sure what happens before that - for example, how do I actually test it with an LED, or some input sensor that gives out analog data?
This is what I have now: http://www.piccircuit.com/shop/pic-programmer/26-ica01-usb-pic-programmer-set.html
So I need to connect this to a breadboard? And if so, how? I'm completely lost!! This is the first time I've attempted to do this. What I have done before is use my Synapse RF Engine EK2100 to build what I want.
Now what...?
I'm not entirely sure what you're trying to accomplish, but what you purchased is a programmer for PIC microcontrollers. After you have written some code, whether in assembly or C, and compiled it to a hex file, this device will put that code onto a PIC microcontroller that you buy separately. Have you purchased a PIC device to program, or do you just have the programmer and the EK2100 kit? If you provide some more detail we can point you in the right direction.
Write a basic 'flash LED' program and then wire up the PIC to see if it works.
Hot tip - use the internal oscillator to minimise the external component count (it makes things simpler). Browse around a PIC-savvy site like http://digital-diy.com/ to get lots of interesting ideas and code samples.
The community there mostly uses PIC Basic-type languages (such as Swordfish) that will land you code looking something like this (header/setup removed for ease of explanation):
While True
  High(LED)
  DelaymS(500)
  Low(LED)
  DelaymS(500)
Wend

Get the next direction given a GPS location

I am working on a proof-of-concept project: an augmented-reality-based navigation system. Basically, it is supposed to show users video of the real scene with additional navigational signs that are generated using the GPS location.
What I need to build is not a complete, useful system but rather a proof of concept. There are mainly two tasks. First, the real image of the scene is processed to detect the road in it; then, using the GPS location, I place a navigational sign (turn right, go straight) on the road. The challenge I have is how to fetch the required navigation information. I considered the Google Maps API, but it requires you to have your own Google host. I also did some research on getting directions from it, but the results don't seem well structured for my purposes. I need a fixed, rigorously defined set of outputs (e.g. turn right, turn left, etc.), but its output is very unstructured. How do you think I can achieve this task? Thanks for any help.
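One note on the Google Maps concern: the Directions web service (a plain HTTPS/JSON API, separate from the JavaScript embed) is more structured than it first appears - each entry in routes[].legs[].steps[] usually carries a machine-readable maneuver field with a small fixed vocabulary ("turn-left", "turn-right", "straight", ...), which is close to the rigorously defined outputs described above. A minimal fetch sketch, assuming libcurl; the coordinates and YOUR_KEY are placeholders, and JSON parsing is left to whatever library you prefer:

#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append each received chunk to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const std::string url =
        "https://maps.googleapis.com/maps/api/directions/json"
        "?origin=52.52,13.40&destination=52.50,13.42&mode=walking"
        "&key=YOUR_KEY";                    // placeholder API key

    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    if (rc != CURLE_OK) return 1;

    // body now holds the JSON; map each step's "maneuver" value onto
    // one of your AR signs (turn right, go straight, ...).
    std::cout << body << std::endl;
    return 0;
}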
This link might help for Google Maps Street View extraction:
https://developers.google.com/maps/documentation/javascript/streetview
You can use Layar, or other available AR SDKs (Metaio and Qualcomm also provide them), for AR-related overlays.
Good luck!

GeoTools and JOGL

I am developing a desktop application for Windows that will constantly show the map moving as a vehicle moves, getting the location from a GPS unit. I have written a quick little demo that accomplishes this; however, it constantly flickers. I need a solution that gives nice smooth movement. I have several things in mind that will hopefully resolve this.
First, I think it would be better to create the map at around 2 to 3 times the size of the visible window and move the window over the map. When the window gets close to an edge, a new map is built in the background to replace the current one. I'm still working out how to do that. (Hints welcome!! lol - see the sketch below.)
I think it would be a big step forward, and the ultimate solution, to turn on hardware acceleration for the application. Based on what I have read in the forums, JOGL is the only practical way to add hardware acceleration to a Java desktop application. So here I am.
Has anyone toyed around with adding JOGL to GeoTools? I'm not talking about 3D at the moment, just making the display more efficient. It looks like implementing this change will be a big job, as it appears it will affect almost all of the code from JMapPane on down. Any suggestions, code examples and hints will be greatly appreciated!!!
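On the first idea above (an oversized off-screen map with the window sliding over it), the bookkeeping is independent of GeoTools, so here is a rough sketch of the recentre-when-near-the-edge logic; renderMapAt is a hypothetical stand-in for whatever rebuilds the buffer in the background:

#include <cstdio>

// The off-screen buffer is rendered BUFFER_SCALE times larger than the
// window. Per GPS update only the viewport offset changes, which is
// cheap; the expensive map render happens only when the viewport drifts
// within MARGIN of a buffer edge.
const double BUFFER_SCALE = 3.0;
const double MARGIN = 0.15;  // fraction of the buffer size

struct Viewport { double x, y, w, h; };  // window rect in buffer coordinates
struct Buffer   { double w, h; double centerLon, centerLat; };

// Hypothetical stand-in for the expensive GeoTools render of a new
// oversized buffer - ideally done on a background thread and swapped in.
void renderMapAt(Buffer& buf, double lon, double lat)
{
    buf.centerLon = lon;
    buf.centerLat = lat;
    std::printf("rebuilding buffer around %.5f, %.5f\n", lon, lat);
}

bool needsRebuild(const Viewport& vp, const Buffer& buf)
{
    const double m = MARGIN * buf.w;
    return vp.x < m || vp.y < m ||
           vp.x + vp.w > buf.w - m ||
           vp.y + vp.h > buf.h - m;
}

Painting then becomes a plain blit of a sub-rectangle, which also removes one common source of flicker (rendering the map directly in the paint handler).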
You may use the GLG2D project to render a Swing JComponent such as the JMapPane with hardware acceleration using JogAmp JOGL 2. This will allow you to use hardware acceleration with minimal changes to the GeoTools source.
http://brandonborkholder.github.io/glg2d/
JComponent renderPane = new JMapPane(...);  // your existing GeoTools map pane
JFrame frame = new JFrame("My GeoTools application");
// GLG2DCanvas routes the pane's Java2D painting through JOGL.
frame.setContentPane(new GLG2DCanvas(renderPane));
...

Flex: Testing UI components at the click level?

I've been working on a Flex component and I'd like to write some automated tests for it. The trouble is, the UI testing tools I've looked at (FlexMonkey and Selenium Flex API) don't simulate "enough":
Most of the bugs which have come up so far relate to the way Flex deals with dragging and dropping, which these libraries can't simulate accurately enough. For example, I need to test a case where there is a "drop" event which occurs in the bottom half of a component – neither FlexMonkey nor Selenium Flex API can do that (they may simulate a mouse event, but they won't include coordinates).
So, is there any "good" way to automate that sort of testing?
Edit: After much research, it looks like the only piece of software that can do this is iMacros, which is Windows-only and whose interface is... lacking. So I'm going to write my own. Basically, it will put an HTTP interface on java.awt.Robot so that code (in any language) can simulate mouse/keyboard events. If you're interested, PM me and I'll keep you updated.
Edit 2: I have published the first version of the framework I wrote, Blunderbuss, over at BitBucket: http://bitbucket.org/wolever/blunderbuss/ . You'll need Jython to run it (http://www.jython.org/), but after that the flex-client example should work.
Videos of Blunderbuss live over at Vimeo:
Automating Flex testing with Blunderbuss
Blunderbuss test suite running
At the moment this remains a proof of concept, as I haven't had the cycles to clean it up and make it more usable... but maybe enough people bothering me would give me that time :)
I've used Eggplant to test Flash and AIR apps without having to add any hooks into the code. It's a great tool, but it's quite expensive. It simulates a real user by VNC-ing into a system and using image recognition - among other things - to interact with the app.
I am definitely interested in your custom Java class, and - though I am not the best at Java (yet...) - I would be willing to help out if you're thinking of making this collaborative.
As to Flash MouseEvents: unfortunately, there really isn't an accurate way to simulate the drag-and-drop experience in Flash. MouseEvents, when generated by the mouse, are handled very differently from regular events, and while you could simulate actions by passing events into the handling functions, or by making the dispatcher fire a new DragEvent( DragEvent.DRAG_DROP..., it will not be the same as having the user interact with it. And for some functionality (like gaining access to the clipboard), nothing inside Flash will accomplish your goals.
To be honest, you're probably headed in the right direction - using something that is not written in Flash to drive faked mouse events is probably your best bet.
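For reference, java.awt.Robot is one way to do that; the same trick at the OS level looks like this on Windows, where SendInput posts events the Flash player cannot distinguish from a real user (a sketch; for a drag you would interpolate several absolute moves between the button-down and button-up):

#include <windows.h>

// Move the real cursor to pixel (x, y) and left-click. Absolute mouse
// coordinates for SendInput are normalised to the 0..65535 range.
void clickAt(int x, int y)
{
    const int sw = GetSystemMetrics(SM_CXSCREEN);
    const int sh = GetSystemMetrics(SM_CYSCREEN);

    INPUT in[3] = {};
    in[0].type = INPUT_MOUSE;
    in[0].mi.dx = (x * 65535) / sw;
    in[0].mi.dy = (y * 65535) / sh;
    in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
    in[1].type = INPUT_MOUSE;
    in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    in[2].type = INPUT_MOUSE;
    in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(3, in, sizeof(INPUT));
}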
I've never had to use it in Flex, but I recently stumbled across some info on the automation packages in the MS Surface SDK. After looking into it, those classes automate user behaviour for testing, i.e. move a fake mouse to this point, perform this action. Since you're using Flex, look at the mx.automation packages and classes. My guess (and hope) is that you'd be able to achieve what you want using these classes.
You could also try AutoHotkey - it is similarly a macro-editing program, but it has proven to be very efficient, and you can write scripts and set it up very easily.
