I would like to convert a 3D image (a hyperspectral cube) into a 2D image with 3 channels.
I have all the source and target images, and they are paired.
What should be changed in the code in order to support this?
(See good code with an explanation by Jason Brownlee here:
https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation)
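My guess is that mainly the image shapes need to change so that the source can have more than 3 channels, roughly like this (an untested sketch against the tutorial's code; the band count 31 is just a placeholder for my cube depth):

    # hypothetical shapes: the source cube has N spectral bands, the target is RGB
    src_shape = (256, 256, 31)   # hyperspectral source, 31 bands as a placeholder
    tar_shape = (256, 256, 3)    # 3-channel target

    # the tutorial passes a single image_shape to define_discriminator() and
    # define_generator(); presumably they would need separate source/target shapes:
    # d_model = define_discriminator(src_shape, tar_shape)  # Input(shape=src_shape) / Input(shape=tar_shape)
    # g_model = define_generator(src_shape)                 # input now has 31 channels; final Conv2D still outputs 3
    # gan_model = define_gan(g_model, d_model, src_shape)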
Thanks,
Eli
I'm looking to auto-generate a UML class model in virtual reality using A-Frame.io (or another technology) by passing in values. Has anyone ever done something similar in the past? Not sure where to start.
Thanks
You might want to look into PlantUML, which is a nice UML generator. Most of its diagrams are generated as input to Graphviz's dot. Dot is a layout engine: it takes a list of nodes and connections, places them in 2D space, and then renders them to one of its output formats, or just returns the graph annotated with coordinates for where to draw what. You could meddle with this data and render the elements with volume, but on a 2D plane using the dot-generated coordinates. Perhaps you could even modify it to place them in 3D space instead of a plane.
Or you could just render the plantuml output on a 2D plane, place it in 3D space and it would probably be good enough. There are even online generators for plantuml.
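If you go that route, a rough sketch of the idea (file names and sizes are just placeholders): render a PNG with the plantuml command-line tool, then show it on a plane floating in an A-Frame scene.

    import subprocess

    # render diagram.puml to diagram.png (assumes the plantuml CLI is installed)
    subprocess.run(["plantuml", "-tpng", "diagram.puml"], check=True)

    # a tiny A-Frame page that shows the rendered diagram on a plane in 3D space
    html = """<html><head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
    </head><body>
    <a-scene>
      <a-assets><img id="uml" src="diagram.png"></a-assets>
      <a-plane src="#uml" position="0 1.6 -2" width="3" height="2"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
    </body></html>"""

    with open("index.html", "w") as f:
        f.write(html)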
I am trying to build my own custom photosphere viewer using SDL2 and a custom IMU I purchased. So far, I have managed to read the IMU values, open the .jpg and display it using SDL2.
My issue is how to make sense of the IMU data in order to read the appropriate parts of the jpg. Basically, I do not want to display the whole jpg, but just parts of it based on the IMU data (I receive Euler angles or quaternions). Right now, I am just using a single mono photosphere (I am not concerned with stereo yet), which is stored as an equirectangular projection, and I need to use the IMU to get it to a polar projection (I believe?).
I am not sure how to index the jpg based on IMU data to create a working photosphere viewer, and I cannot seem to find a good explanation of how to address the jpg. Below is a rough sketch of the kind of mapping I think I need. Can anyone point me in the right direction? Thanks!
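The sketch (untested; it assumes the IMU orientation eventually gives me a forward-looking unit view vector, and the usual equirectangular conventions of +z forward and +y up):

    import math

    def equirect_pixel(direction, width, height):
        """Map a unit view direction (x, y, z) to (u, v) pixel coordinates in an
        equirectangular image, assuming +z is forward and +y is up."""
        x, y, z = direction
        lon = math.atan2(x, z)                      # yaw, -pi..pi
        lat = math.asin(max(-1.0, min(1.0, y)))     # pitch, -pi/2..pi/2
        u = (lon / (2.0 * math.pi) + 0.5) * (width - 1)
        v = (0.5 - lat / math.pi) * (height - 1)
        return int(u), int(v)

    # looking straight ahead lands in the middle of the image
    print(equirect_pixel((0.0, 0.0, 1.0), 4096, 2048))   # -> (2047, 1023)

I assume the viewer would do something like this per output pixel after rotating each ray by the IMU orientation, but I am not sure this is the right way to think about it.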
I was able to find a really great, simple OpenGL-based Python photosphere viewer here. I then just needed to create a rotation matrix from the IMU sensor data. There are good tutorials on converting from a quaternion to a rotation matrix, like this one.
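For anyone else doing this, the conversion itself is small. A minimal sketch, assuming a unit quaternion in (w, x, y, z) order (which is what my IMU happens to output):

    import numpy as np

    def quat_to_matrix(w, x, y, z):
        """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

Padded out to 4x4, that matrix can then be used as the view rotation in the viewer.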
I'm generating a synthetic DICOM image with the Insight Toolkit (using itk::GDCMImageIO) and I've found two problems:
VolView fails loading my DICOM (with the message: Sorry, the file cannot be read). ITK-Snap opens and shows it OK.
I'm trying to use this image in a Stryker surgical navigator. The problem is that the image loads OK, but the padding pixels are shown at a certain gray level, so a box (actually the bounding box of the image) is visible. If I load non-synthetic DICOMs this doesn't happen.
This is what gdcminfo is showing:
MediaStorage is 1.2.840.10008.5.1.4.1.1.7 [Secondary Capture Image Storage]
TransferSyntax is 1.2.840.10008.1.2.1 [Explicit VR Little Endian]
NumberOfDimensions: 2
Dimensions: (33,159,1)
Origin: (0,0,0)
Spacing: (1,1,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :16
HighBit :15
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.1
Orientation Label: AXIAL
I'm using unsigned short as the pixel type in the itk::Image object, and I'm setting all the padding pixels to 0 (zero), as suggested by the DICOM standard for unsigned scalar images. gdcminfo does not show it, but I'm also setting the Pixel Padding (0028,0120) field to zero.
I would really appreciate any hint about this problem.
Thanks in advance,
Federico
After a lot of experimentation, I'll answer my own question. I've found that some DICOM readers directly assume that you're using the Hounsfield scale if the type of the DICOM file is CT. In this case you have to use short as the pixel type and use -1024 for air (anything less than -1000 is air on the Hounsfield scale), and it will render the image OK. The readers I've been experimenting with use neither the Pixel Padding field nor the Rescale Intercept/Slope. But if you use ITK-Snap, VolView or 3DSlicer, you won't have any problem as long as you specify those fields.
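For illustration, here is a rough sketch of that workaround using SimpleITK in Python (not the C++ itk::GDCMImageIO pipeline I actually used; the values are made up, and depending on the writer you may need to set more tags explicitly):

    import numpy as np
    import SimpleITK as sitk

    # signed 16-bit pixels, padding filled with -1024 (air on the Hounsfield scale)
    arr = np.full((1, 159, 33), -1024, dtype=np.int16)
    arr[0, 40:120, 5:28] = 300              # some synthetic tissue-like values

    img = sitk.GetImageFromArray(arr)
    img.SetSpacing((1.0, 1.0, 1.0))

    img.SetMetaData("0008|0060", "CT")      # Modality
    img.SetMetaData("0028|1052", "0")       # Rescale Intercept
    img.SetMetaData("0028|1053", "1")       # Rescale Slope

    sitk.WriteImage(img, "synthetic_ct.dcm")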
DICOM is a VERY tricky file format. You will need to carefully read and understand the conventions for the visualization platform, the storage platform, and the type of medical image you are trying to synthesize.
This is very likely NOT an error with the toolkit, but an error with what is being defined in the file format itself.
Do you know what would be the best approach to generate 3D output from software for one of these new "3D ready" televisions? Our application has some nice 3D visualizations, and we want these to look good.
Also, how feasible is it to generate this output from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for images shot in 3D. This means "intended for simulated 3D," not just a two-dimensional projection of a 3D image.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye), in which the foreground objects are offset to simulate depth. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
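In practice this means rendering the scene twice per frame, with the camera shifted sideways by half the interpupillary distance for each eye. A rough sketch of the idea (the numbers and the simple look-at helper are only illustrative; real stereo rendering typically also uses asymmetric projection frustums):

    import numpy as np

    def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
        """Build a simple 4x4 view matrix looking from eye toward target."""
        f = (target - eye) / np.linalg.norm(target - eye)
        r = np.cross(f, up)
        r = r / np.linalg.norm(r)
        u = np.cross(r, f)
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = r, u, -f
        view[:3, 3] = -view[:3, :3] @ eye
        return view

    eye = np.array([0.0, 1.7, 5.0])       # monoscopic camera position
    target = np.array([0.0, 1.0, 0.0])
    half_ipd = 0.032                      # half the eye separation, ~6.4 cm total

    # shift the camera along its right axis to get the left/right eye views
    forward = (target - eye) / np.linalg.norm(target - eye)
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    left_view  = look_at(eye - half_ipd * right, target)
    right_view = look_at(eye + half_ipd * right, target)

    # render the scene once with each view matrix, then present the two frames
    # in whatever packing the display expects (side-by-side, frame-sequential, ...)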
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
I am an IT student and I have to make a project in VB6. I was thinking of making a 3D software renderer, but I don't really know where to start. I have found a few tutorials, but I want something that goes in depth into the maths and algorithms; I would like something that shows how to do 3D transformations, cameras, lights, shading...
The programming language used does not matter; I just need some resources that show me exactly how to do this.
So I just want to know where to find some resources, or you can show me some source code and tell me where to start.
Or maybe some of you have a better idea for a VB6 project.
Thanks.
I disagree with the previous posts; a 3D renderer is actually pretty simple. A high-quality 3D renderer is hard, however.
Get a bunch of 3D data, triangles are simplest.
Learn about homogeneous coordinates and the great 4x4 matrix for transforms.
Define a camera by a position and a rotation (expressed in the 4x4 matrix).
Transform your 3D geometry by this camera.
Perform the perspective divide and scale to your window. This converts your 3D data to 2D.
Render the data as 2D.
Now you're going to lose out on a depth buffer, so stick to wireframes in the beginning (see the sketch at the end of this answer). :-)
Don't listen to these nay-sayers, go out and have some fun!
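Just to make the steps concrete, here is a minimal wireframe-only sketch (in Python/numpy for brevity; the cube data, camera values and screen size are made up, and the same maths carries over to VB6):

    import numpy as np

    # 1. a bunch of 3D data: the 8 corners of a cube and its 12 edges
    verts = np.array([[x, y, z, 1.0] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    edges = [(0, 1), (0, 2), (0, 4), (1, 3), (1, 5), (2, 3),
             (2, 6), (3, 7), (4, 5), (4, 6), (5, 7), (6, 7)]

    # 2./3. camera as a 4x4 homogeneous transform; here just a translation that
    # pushes the scene 5 units down -z, in front of the camera
    camera = np.eye(4)
    camera[2, 3] = -5.0

    # 4. transform the geometry by the camera
    cam_space = verts @ camera.T

    # 5. perspective divide and scale to a 640x480 window
    focal = 300.0
    sx = cam_space[:, 0] / -cam_space[:, 2] * focal + 320
    sy = cam_space[:, 1] / -cam_space[:, 2] * focal + 240

    # 6. "render" the 2D data: print the line segments (swap in your drawing calls)
    for a, b in edges:
        print(f"line ({sx[a]:.1f},{sy[a]:.1f}) -> ({sx[b]:.1f},{sy[b]:.1f})")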
Many years ago I made a shaded triangle renderer that used library calls to draw the triangles. It's a rather naive approach, but you would be able to achieve the same result using VB6. I got all the maths & techniques from "Computer Graphics: Principles and Practice" by Foley et al. Some parts are out of date now, but I think you'd find it very helpful for this project, and it can be bought second-hand at reasonable prices, from Amazon for example.
One simple approach could be:
Read model file as triangles
Transform each triangle using matrices to account for camera position
Project triangle points onto 2D
Draw 2D triangle (probably using GDI)
This covers wireframe viewing. To extend this to hidden surface removal you need to work out which triangles are in front. Two possible ways:
Z-order sorting the triangles and drawing the ones furthest from the camera first (a small sketch of this follows the list). This is simple but inefficient if there are a lot of triangles, and it can give overlapping-triangle artifacts when the order is not quite correct. You also have to decide how to sort the triangles, e.g. by centroid or by extents...
Using a software depth buffer. This will give better results but is more work to implement. You will have to write your own triangle drawing code, so you cannot rely on GDI. See Bresenham's line algorithm and the related filled-triangle algorithms for how to do this.
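A tiny sketch of the first option (painter's algorithm: sort back-to-front by centroid depth; it assumes the triangles are already in camera space with the camera looking down -z, and that they are drawn by some hypothetical fill routine):

    def painters_sort(triangles):
        """Sort triangles back-to-front: each triangle is three (x, y, z) points
        in camera space; the camera looks down -z, so the most negative z is furthest."""
        def centroid_depth(tri):
            return sum(p[2] for p in tri) / 3.0
        return sorted(triangles, key=centroid_depth)

    # draw furthest first so nearer triangles overwrite them
    # for tri in painters_sort(triangles):
    #     draw_filled_triangle(tri)   # hypothetical drawing call (e.g. via GDI)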
After this you'd also need some kind of shading based on lighting. The calculations are covered in Computer Graphics: Principles and Practice. For simple shading you can stick with drawing triangles using GDI, but if you want to do Gouraud or Phong shading the colour values vary across a triangle. One way around this is to sub-divide the triangle into smaller triangles, but this is inefficient and won't give very nice-looking results. Better would be to draw the triangles yourself, as required above for the software depth buffer.
A good extension would be to support primitives other than triangles. Basic approach would be to split primitives into triangles as you read them.
Good luck - could be an interesting project.
VB6 is not the best-suited language for maths and 3D graphics, and given that you have no previous knowledge of the subject either, I would recommend that you choose something different (and easier).
As it's Visual Basic, you could try something more form-oriented, which is the original intent of the language.
There is the 3D engine list, which lists three engines in pure Basic (an oxymoron) with source code, and one of them is in Visual Basic (Dex3D).
DeX3D is an open source 3D engine coded entirely in Visual Basic by Jerry Chen ( -onlyuser#hotmail.com ).
Gouraud shading
Transparency
Fogging
Omni and spot lights
Hierarchical meshes
Support for 3D Studio files
Particle systems
Bezier curve segments
2.5 D text
Visual Basic source
More information, screenshots and the source can be found on the Dex3D Homepage. (<= Dead Link)
EGL25 by Erkan Sanli is a fast open source VB 6 renderer that can render, rotate, animate, etc. complex solid shapes made of thousands of polygons. Just Windows API calls – no DirectX, no OpenGL.
VBMigration.com chose EGL25 as a high-quality open-source VB6 project to demonstrate their VB6 to VB.Net upgrade tool.
A 3D software renderer as a whole project is fairly complex if you've never done it before. I would suggest something smaller, like just doing the 3D portion and using lines to do the rendering, or just writing a shaded triangle renderer (which is the underpinning of 3D renderers anyway).
In other words, something a little simpler than trying to write a full-blown 3D software renderer on the first go, especially in VB.
A software renderer is a very difficult project, and VB6 is not suited to the task at all (for something like this, C++ is the way to go). Anyway, I can suggest some great books and resources I used:
Shaders: http://wiki.gamedev.net/index.php/D3DBook:Introduction_%28Volume%29
Math: 3D Math Primer for Graphics and Game Development
There are 2 other books. Even though they are for VB.NET, you can find some useful code in them:
.NET Game Programming with DirectX 9.0
Beginning .NET Game Programming in VB .NET
I think you can take one of two ways. Either go the DirectX way and use DirectX 8, which has VB 5-6 support; I found a page on this: http://www.gamedev.net/reference/articles/article1308.asp
Or you can always write an engine from the ground up, but by doing so you will need some basic linear algebra, as Frank Krueger suggests.