Converting a vector to digital ink

Is it possible to convert a vector image to digital ink XML using any known method?
Digital ink is an XML format that stores the coordinates of sampled points, whereas SVG is structured quite differently. Can you therefore go from an SVG to digital ink?
Furthermore, is it possible to go from raster to digital ink?
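There is no single off-the-shelf converter I know of, but the core move is to flatten the SVG geometry into sampled point sequences and serialize those as InkML traces. A minimal sketch, assuming the svgpathtools package; the file name and the fixed sample count are placeholders you would tune per drawing:

```python
from svgpathtools import svg2paths

paths, _ = svg2paths("drawing.svg")  # hypothetical input file

traces = []
for path in paths:
    n = 64  # samples per path; choose based on path length/curvature
    # path.point(t) evaluates the path at t in [0, 1] as a complex x + yj
    pts = [path.point(i / (n - 1)) for i in range(n)]
    coords = ", ".join(f"{p.real:.2f} {p.imag:.2f}" for p in pts)
    traces.append(f"<trace>{coords}</trace>")

inkml = ('<ink xmlns="http://www.w3.org/2003/InkML">\n  '
         + "\n  ".join(traces) + "\n</ink>")
print(inkml)
```

Raster to digital ink is harder: you would first need to vectorize the bitmap (e.g. skeletonize the strokes), then apply the same point-sampling step.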

Related

DICOM pixel data lossless rendering and representation

I quote:
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As a beginner in image processing, I'm used to processing color and monochrome images with 256 levels. For DICOM images, in which representation should I process the pixels without rendering them down to 256 levels, given the loss of information that would cause?
Note: if you can think of a better title for this question, please feel free to change it; I had a hard time and couldn't come up with a good one.
First you have to put the image's pixels through the Modality LUT transform (rescale slope and intercept) in order to convert modality-dependent stored values into known units (e.g. Hounsfield units or optical density).
Then, all your processing must be done over the entire range of values (do not convert 16-bit values to 8-bit).
Presentation (visualization) can be performed on scaled 8-bit values, usually by passing the data through the VOI LUT (window center/width or an explicit LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for Window/Width: Window width and center calculation of DICOM image
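A minimal pydicom sketch of this pipeline; the file name is a placeholder, and real files may carry multi-valued WindowCenter/WindowWidth, which would need an index before the float() cast:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")  # hypothetical input file

# 1. Modality transform: stored values -> known units (e.g. Hounsfield)
slope = float(getattr(ds, "RescaleSlope", 1))
intercept = float(getattr(ds, "RescaleIntercept", 0))
values = ds.pixel_array.astype(np.float64) * slope + intercept

# 2. Do all processing on `values` over its full range (no 8-bit cast here)

# 3. Presentation only: window center/width scaled to 8 bits for display
center = float(ds.WindowCenter)
width = float(ds.WindowWidth)
lo, hi = center - width / 2.0, center + width / 2.0
display = (np.clip(values, lo, hi) - lo) / (hi - lo) * 255.0
display = display.astype(np.uint8)
```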

QGIS: How to import SVG or raster images into Quantum GIS?

These vector or raster files are classic files without geocoordinates, drawn in a lat/long projection. I want to import them into QGIS, scale them up or down, place them in the right location, and turn them into reusable georeferenced SHP or raster layers.
Edit: I'm from the Wikipedia Graphic Lab > Map workshop, and we want to work more with GIS. We literally have hundreds of maps to migrate to GIS technologies...
File:Chinese_plain_5c._BC-en.svg
File:Vignobles_basse_loire.svg
Partial solution: load the SVG into Inkscape, save it as a DXF file, then load that into QGIS. This should at least get most of the linework into QGIS.
However, it won't yet be properly georeferenced or styled, and different layers may end up in different places, because the SVG applies scaling and translation operators to parts of the map data that QGIS or Inkscape ignores. You'll probably need to work one layer at a time. That probably isn't a problem, since you are presumably only interested in the added data on the map rather than the base map (country outlines etc.), which you will want to overlay onto a standard base layer (Natural Earth, OpenStreetMap tiles).
The only way I see to do the transformation at present is to work out the affine transformation parameters and use the QgsAffine plugin, but that requires you to work out the parameters beforehand by fitting known source coordinates to known target coordinates (see the sketch below).
But to do hundreds? You might be better off writing some custom SVG parsing code for each one...
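Working out those parameters is a small least-squares problem once you have a few control points. A numpy sketch; the coordinate pairs below are purely illustrative:

```python
import numpy as np

src = np.array([(10.0, 20.0), (310.0, 25.0), (160.0, 400.0)])   # SVG/pixel
dst = np.array([(-3.5, 47.2), (2.1, 47.1), (-0.8, 43.9)])       # map coords

# Solve x' = a*x + b*y + c and y' = d*x + e*y + f by least squares;
# three or more non-collinear control points determine the fit.
A = np.hstack([src, np.ones((len(src), 1))])
(a, d), (b, e), (c, f) = np.linalg.lstsq(A, dst, rcond=None)[0]
print(f"a={a:.6f} b={b:.6f} c={c:.6f} d={d:.6f} e={e:.6f} f={f:.6f}")
```

With more than three points the fit averages out digitizing error, which is useful when the control points are picked by eye from the map.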
If you only want to display it at the correct place, scale, and rotation, treat it as an SVG icon (see the PyQGIS sketch below):
1. create a point layer and put a single point at the georeferenced centre of the SVG you will load.
2. edit the symbology and load the SVG as an icon
3. set the size units to map units
4. supply the appropriate dimensions
5. rotate as necessary
The redraw is very slow and painful, but if you use Project > Import/Export > Export Map to Image you can produce a georeferenced raster.
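A hypothetical PyQGIS (QGIS 3 console) rendering of the steps above; the anchor coordinate, SVG path, and size are all placeholders to adapt:

```python
from qgis.core import (QgsVectorLayer, QgsFeature, QgsGeometry, QgsPointXY,
                       QgsMarkerSymbol, QgsSvgMarkerSymbolLayer,
                       QgsUnitTypes, QgsProject)

# 1. A memory point layer with one point at the georeferenced centre
layer = QgsVectorLayer("Point?crs=EPSG:4326", "svg_anchor", "memory")
feat = QgsFeature()
feat.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(2.35, 48.85)))
layer.dataProvider().addFeature(feat)

# 2-5. Load the SVG as the marker icon, size it in map units, rotate
svg = QgsSvgMarkerSymbolLayer("/path/to/map.svg")
svg.setSize(5.0)                                  # width in map units
svg.setSizeUnit(QgsUnitTypes.RenderMapUnits)
svg.setAngle(0.0)                                 # rotate as necessary

symbol = QgsMarkerSymbol()
symbol.changeSymbolLayer(0, svg)
layer.renderer().setSymbol(symbol)
QgsProject.instance().addMapLayer(layer)
```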

A planet in OpenGL: vector data or texture mapping?

I am completely new to 3D and started with Jeff LaMarche's tutorials as an introduction to OpenGL ES for iPhone. So far, I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render a planet Earth from 2D GIS vector data (polygons, lines, or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on and off, and maybe identify an object that is touched.
My questions are:
would it be easier to rasterize my vector data and use it as an image texture, or to apply the vector data directly onto the sphere (keeping in mind that I want to toggle the layers, with the touch-enabled objects being optional)?
would it be easier to use software like Blender to model the planet and add the layers, rather than starting with the procedural sphere I already have?
do the export tools from Blender to OpenGL work well?
This kind of question is difficult to answer in general. Technically, it sounds as if you would like to write a program like Google Earth or KDE Marble. Since you're referring to GIS data, you will need very high resolution; textures only make sense for data of limited resolution.
GIS applications usually take hybrid approaches: some vector data are rendered directly (roads, water, borders), while others are rendered to texture, with the textures, or more accurately texture tiles, used as caches, for example for building outlines in dense cities. Data as it comes from, say, OSM can be rendered directly as vector data, since it is not very dense.
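Either way, vector layers ultimately reduce to lat/lon coordinates projected onto the sphere. A minimal sketch of that mapping; the axis convention (+Y as the north pole, longitude 0 on +Z) is an assumption:

```python
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Map a geographic coordinate onto a sphere of the given radius."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)

# A polyline layer (e.g. a border) becomes a strip of 3D vertices that can
# be drawn as a line strip over the globe; the slightly larger radius
# avoids z-fighting with the sphere surface.
border = [(48.8, 2.3), (50.8, 4.3), (52.5, 13.4)]  # illustrative lat/lon
vertices = [latlon_to_xyz(lat, lon, radius=1.001) for lat, lon in border]
```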

Convert 4 corners into a matrix for OpenGL to Direct3D sprite conversion

I am working on code for Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and used Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was being output to the display. The current release (2.1) was a conversion to OpenGL and uses glTexCoord2 and glVertex2 calls to send the coordinates of the source and output rectangles for drawing sprites.
Now someone says that their video card worked great with DirectX, but their OpenGL drivers do not support the ARB extension necessary to use non-power-of-two (NPOT) textures (pretty basic). So I'm trying to go back to DirectX without reverting everything to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix information in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4 corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't? Are some video cards' drivers just that bad, such that DirectX supports NPOT textures but OpenGL doesn't?
It's probably worth reading up on how bump mapping is done; see e.g. this site. You end up with a tangent-space matrix, which maps from world space to tangent space (the space relative to the current face). Its purpose is to take a vector in world space, generally a vector from a light, and convert it into a vector in tangent space, that being the space in which your texture defines surface normals.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space, which I think is what you want. The mapping produced in that tutorial is purely for direction vectors, but expanding it out to a 4x4 matrix and anchoring the origin somewhere meaningful shouldn't be difficult.
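As an alternative route to the asked-for matrix: if the 4 corners form a parallelogram, an affine matrix suffices and embeds directly in the sprite's 4x4 Transform; a general quad needs a projective solve like the numpy sketch below (unit-square source corners assumed, and whether the sprite pipeline honors the projective row depends on the runtime):

```python
import numpy as np

def homography_from_corners(corners):
    """Solve the 3x3 projective matrix H mapping the unit square
    (0,0),(1,0),(1,1),(0,1) to four (x, y) corners in that order."""
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    A, b = [], []
    for (x, y), (u, v) in zip(src, corners):
        # u = (h00*x + h01*y + h02) / (h20*x + h21*y + 1), likewise for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Example: an axis-aligned 64x32 sprite translated to (100, 50)
H = homography_from_corners([(100, 50), (164, 50), (164, 82), (100, 82)])
print(H)  # bottom row [0, 0, 1] here, i.e. the transform is affine
```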

Generating 3D TV stereoscopic output programmatically

Do you know what the best approach would be to generate 3D output for one of these new "3D ready" televisions from software? Our application has some nice 3D visualizations, and we want these to look good.
Also, how feasible is it to generate this from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for images shot in 3D. This means "intended for simulated 3D," not just a two-dimensional projection of a 3D image.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye), in which the foreground objects are offset to simulate depth. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
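The per-frame mechanics are simple to sketch: render the scene twice with the camera shifted along its right axis, then hand the pair to the display in whatever packing it expects (side-by-side, frame-sequential). A minimal numpy sketch of the two eye view matrices; the eye separation and axis setup are assumptions to tune per scene:

```python
import numpy as np

def eye_view_matrix(cam, right, up, forward, offset):
    """Look-at style view matrix for one eye, shifted along the camera's
    right axis. Vectors are assumed normalized and mutually orthogonal."""
    eye = cam + offset * right
    rot = np.array([right, up, -forward])   # world -> camera rotation
    view = np.eye(4)
    view[:3, :3] = rot
    view[:3, 3] = -rot @ eye                # move the world opposite the eye
    return view

# Camera at z=5 looking down -Z; 6.5 cm is a common interocular default,
# but the value must be scaled to the scene's units.
cam = np.array([0.0, 0.0, 5.0])
right = np.array([1.0, 0.0, 0.0])
up = np.array([0.0, 1.0, 0.0])
forward = np.array([0.0, 0.0, -1.0])
separation = 0.065

for offset in (-separation / 2.0, separation / 2.0):
    view = eye_view_matrix(cam, right, up, forward, offset)
    # draw_scene(view)  # hypothetical draw call, once per eye per frame
```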
