Cocos3d: Animation is not working in the output - cocos3d

I am developing a cocos3d app for iOS.
I added a walk animation to a "Man" .blend file, and the man performs the walk animation fine in Blender. I used the following settings to convert to COLLADA and then to POD, but I am getting the wrong output, described below.
Blender .dae export options:
Export Data Options: Apply Modifiers, Selection Only, Include Armatures enabled
Texture Options: Include UV Textures, Include Material Textures, Copy enabled
Armature Options: Deform Bones Only enabled
Collada Options: Use Object Instances enabled
Then, in PVRGeoPOD 2.13:
Export Geometry: Primitive Type - Indexed Triangle List
Use custom optimisation settings: PVRTGeometrySort sorting method; Vertex data optimisations: Interleave vertex data, align vertex data to 32 bits
Vectors: Position - float, Normal - float, Color - RGBA
Export Skinning Data: Bone indices - unsigned byte; Bone weights - unsigned byte
Matrix palette size: 11
Export Mapping Channels: uvw0 - float only
Flip V coordinate enabled
Material: 'Export Materials' only enabled
Transformations: Export animations and Index animation data enabled; Coordinate system: OpenGL model space
OUTPUT:
The walk animation plays, more or less, but the man is rendered completely black and the bones are stretched oddly, so the output looks wrong.
Please note: if I export the same man without the armature (bones and walk animation) in Blender, the exported POD shows the man correctly on the device, just without any animation.
Output 1: with the walk animation added via armature bones, the output is black-shaded with an incorrect walk animation. Please refer to this link to see the output:
https://www.yousendit.com/download/UVJpWUh0bThiV3dsYzhUQw
Output 2: the same Man POD model without any animation. Please refer to this link:
https://www.yousendit.com/download/UVJpWUh0bThoMlhyZHNUQw
I have uploaded the .blend and .pod files at this link -> https://www.yousendit.com/download/UVJpWUhndWNsUjk3czlVag
How do I solve the animation issue and get a smooth walk animation that renders correctly? I need to fix this urgently, so any suggestions would be much appreciated.
Thank you.

I have solved it. Here is the solution-> http://www.cocos2d-iphone.org/forum/topic/345818

Related

Converting DICOM images into nrrd images preserving pixel spacing

I am trying to convert a series of MRI DICOM images (.dcm) into the .nrrd format. I found a guide for doing it in 3D Slicer and managed to do it. The problem is that the new .nrrd image that is created has lost the pixel spacing of the original DICOM image.
In the additional settings, while converting the image, I also unticked the "Compress" box, but the problem is still there. For instance, checking the two images (original .dcm and new .nrrd) in ImageJ I get this:
(Screenshot: the two images, .nrrd on the left and .dcm on the right, with the old and new pixel spacing highlighted.)
Does anyone know how to solve this? Any other alternative (that preserves the pixel spacing) is also welcome.
Thanks a lot in advance,
Tommaso
Your DICOM file is corrupted. Some mandatory tags are missing (e.g. (0020,0037), Image Orientation (Patient)), so Slicer cannot properly compute the spacing. It even shows you the following warning in the DICOM Browser window (after clicking the Examine button): "Reference image in series does not contain geometry information. Please use caution".
If you cannot fix the original images, you can apply all the required spacing elements manually. Either do it in Slicer before exporting (Volumes module -> Volume Information), or fix the .nrrd files themselves. Open one in your favorite text editor:
NRRD0004
# Complete NRRD file format specification at:
# http://teem.sourceforge.net/nrrd/format.html
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 1
space directions: (1,0,0) (0,1,0) (0,0,1)
kinds: domain domain domain
endian: little
encoding: gzip
space origin: (0,0,0)
You have to update this line, for example:
space directions: (0.507812,0,0) (0,0.507812,0) (0,0,4)
The true spacing values are under tags 0028,0030 (X and Y pixel spacing) and 0018,0050 (slice thickness, Z).
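A minimal sketch (Python with pydicom; the file name below is a placeholder, not from the question): read the true spacing straight from one of the original DICOM slices and print the corrected "space directions" line, ready to paste into the .nrrd header.

import pydicom

ds = pydicom.dcmread("original_slice.dcm")
row_spacing, col_spacing = (float(v) for v in ds.PixelSpacing)  # tag (0028,0030)
slice_thickness = float(ds.SliceThickness)                      # tag (0018,0050)

# The in-plane column (X) spacing comes first, then row (Y), then slice (Z).
print(f"space directions: ({col_spacing},0,0) (0,{row_spacing},0) (0,0,{slice_thickness})")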

Project Tango strange rotation visualisation

I am working on 3D reconstruction with Tango. Our system is quite similar to KinectFusion, which uses a voxel representation, but we use Tango as the tracker. The left image (in the video linked below) is rendered by raycasting at the current pose (given by Tango) in real time. The raw pose is converted by GetOC2OWMat() as in the code examples; in addition, the signs of tx and rx are flipped to fit our system. Everything works fine except rotation about the Z axis, which changes the angle in the rendered image. I suspect the coordinate system conversion is not done properly, but depth integration works as long as no Z rotation is involved. I have also checked that det(R) is always 1.
Video
It sounds like you are not factoring in the intrinsics - have you accounted for the camera and device IMU frames? You need these to fully re-establish the original viewpoint, i.e. both the camera and device IMU frame matrices need to be multiplied into your stack.
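As a rough illustration only (Python/NumPy with identity placeholders, not the Tango API): the device pose from the pose callback is chained with the device-to-IMU and IMU-to-camera frame transforms, so the raycast viewpoint ends up expressed in the camera frame rather than the device frame.

import numpy as np

def compose(*transforms):
    """Multiply 4x4 homogeneous transforms left to right."""
    result = np.eye(4)
    for t in transforms:
        result = result @ t
    return result

start_T_device = np.eye(4)  # from the pose callback (placeholder)
device_T_imu = np.eye(4)    # fixed extrinsic frame, queried once (placeholder)
imu_T_camera = np.eye(4)    # fixed extrinsic frame, queried once (placeholder)

start_T_camera = compose(start_T_device, device_T_imu, imu_T_camera)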
Sorry, I have just found the place where things go wrong: when the image is displayed with OpenGL, the rendered GL surface does not have the same aspect ratio as the raycasting image.
Do you program with Java/C/Unity? I'm curious because my device has problems with the camera data and you seem to capture it without problems. I am quite sure it's a bug but I would like to make sure it really is one.

How to avoid strange structure artifacts in scaled images?

I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artefacts such as askew lines (not the rectangles; those are due to imperfect stitching).
If I open any individual tile at full size, it is not blurry and the artefacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, since most of the overview images are fine.
The question is: how can I prevent such barely observable artefacts from becoming clearly visible after scaling? Is there some means to do this in OpenCV or QImage?
Is there an algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you add up image frames of photos of a uniform material (or a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly (a code sketch follows after the steps):
1. ground_output: composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persistent in a file (important: ensure no lossy compression! (jpeg))
4. work_input: the images to work on
5. work_output = work_input *(per px) calib_image: images calibrated for systematic noise.
If you can't create a perfect ground_input target, such as by having a uniform material on hand, do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun you can always fit the calibration function to something more complex than a zero-intercept line (steps 3. and 5.).
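A minimal sketch of steps 1-5 in Python/NumPy (assuming the frames are already float arrays of identical size; camera I/O and persisting calib_image to disk are left out):

import numpy as np

def build_calib_image(uniform_frames):
    # Steps 1-3: average many frames of a uniform target, take its overall mean,
    # and derive the per-pixel correction factor (assumes no zero-valued pixels).
    ground_output = np.mean(uniform_frames, axis=0)
    ground_input = ground_output.mean()
    return ground_input / ground_output

def calibrate(work_input, calib_image):
    # Steps 4-5: multiply per pixel to remove the systematic (fixed-pattern) noise.
    return work_input * calib_image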
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).

Moving a spinning 3D object across the screen, making it face the correct way when it stops

The best example of what I am trying to achieve is in this YouTube video:
http://www.youtube.com/watch?v=53Tk-oGL2Uo
The letters that make up the word 'Atari' fly in from the edges of the screen spinning and then line up to make the word at the end.
I know how to make an object move across the screen, but how do I calculate the spinning so that when the object gets to its end position it's facing the correct direction?
The trick is to actually have the object(s) in the right position at a specific time (say t = 5.0 seconds) and then calculate backwards for the previous frames.
i.e. before 5.0 seconds you rotate the object(s) by [angular velocity] * (5.0 - t) and translate by [velocity] * (5.0 - t)
If you do this, then it will look like the objects fly together and line up perfectly. But what you've actually done is blown them apart in random directions and played the animation backwards in time :-)
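A small sketch of this in Python (the velocities would normally be chosen randomly per letter at start-up; the numbers below are illustrative only):

import numpy as np

END_TIME = 5.0  # the instant at which every letter must sit in its final pose

def letter_pose(final_pos, final_angle, velocity, angular_velocity, t):
    # At t == END_TIME the letter is exactly at its final pose; for earlier t it
    # is offset backwards along its own velocity, so playing forwards looks like
    # the letters flying in, spinning, and lining up perfectly.
    remaining = max(END_TIME - t, 0.0)
    position = final_pos - velocity * remaining
    angle = final_angle - angular_velocity * remaining
    return position, angle

# Example: where one letter is two seconds before it lands.
pos, ang = letter_pose(np.array([3.0, 1.0, 0.0]), 0.0,
                       np.array([4.0, -2.0, 0.0]), 6.0, t=3.0)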
The CORRECT way of doing this is to use keyframes. You can create the keyframes in any 3D editor (I use MAX, but you could use Blender). You don't necessarily need to use the actual characters; even a cuboid would suffice. You will then need to export those animation frames (again, in MAX I would use ASE; COLLADA would work with Blender) and either load them up at runtime or convert them to code.
Then it's a simple matter of running that animation based on the current time.
Here's a sample from my own library that illustrates this technique. Doing this once will last you far longer and give you more benefits in the long run than figuring out how to do this procedurally.

Air in synthetic DICOM

I'm generating a synthetic DICOM image with the Insight Toolkit (using itk::GDCMImageIO) and I've found two problems:
VolView fails to load my DICOM (with the message: "Sorry, the file cannot be read"). ITK-SNAP opens and shows it OK.
I'm trying to use this image in a Stryker surgical navigator. The problem is that the image loads OK, but the padding pixels are shown at a certain grey level, revealing a box (actually the bounding box) around the image. If I load non-synthetic DICOMs this doesn't happen.
This is what gdcminfo is showing:
MediaStorage is 1.2.840.10008.5.1.4.1.1.7 [Secondary Capture Image Storage]
TransferSyntax is 1.2.840.10008.1.2.1 [Explicit VR Little Endian]
NumberOfDimensions: 2
Dimensions: (33,159,1)
Origin: (0,0,0)
Spacing: (1,1,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :16
HighBit :15
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.1
Orientation Label: AXIAL
I'm using unsigned short as the pixel type of the itk::Image object and I'm setting all the padding pixels to 0 (zero), as suggested by the DICOM standard for unsigned scalar images. gdcminfo does not show it, but I'm also setting the Pixel Padding Value (0028,0120) field to zero.
I would really appreciate any hint about this problem.
Thanks in advance,
Federico
After a lot of experimentation, I'll answer my own question. I've found that some DICOM readers simply assume you are using the Hounsfield scale if the modality of the DICOM file is CT. In that case you have to use (signed) short as the pixel type and -1024 for air (anything below -1000 is air on the Hounsfield scale), and the image will render correctly. The readers I've been experimenting with use neither the Pixel Padding field nor the Rescale Intercept/Slope. But if you use ITK-SNAP/VolView/3D Slicer you won't have any problem as long as you specify those fields.
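A hedged sketch with SimpleITK (the slice size and object region below are made-up placeholders, not the poster's data): store the slice as signed 16-bit and put the padding pixels at -1024, so readers that assume a CT/Hounsfield scale render the background as air rather than a grey box.

import numpy as np
import SimpleITK as sitk

slice_hu = np.full((159, 33), -1024, dtype=np.int16)  # padding pixels = air (-1024 HU)
slice_hu[40:120, 8:25] = 40                           # hypothetical soft-tissue region

img = sitk.GetImageFromArray(slice_hu)
img.SetSpacing((1.0, 1.0))
sitk.WriteImage(img, "synthetic.dcm")                 # written via GDCM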
DICOM is a VERY tricky file format. You will need to carefully read and understand the conventions for the visualization platform, the storage platform, and the type of medical image you are trying to synthesize.
This is very likely NOT an error in the toolkit, but an error in what is being defined in the file itself.
