How to rotate a raster image in MeshLab?

I'm testing a scanning process where I want to reproject the texture of the scanned object through raster layers, as shown in this great tutorial: https://www.youtube.com/watch?v=7yeSqH1ftT4.
However, I am running into a problem: as you can see in the picture, the image I'm using comes in rotated 90 degrees. I've tried rotating it in Windows Photo Viewer, but the change is not picked up in MeshLab.
Is there a simple way to just rotate the raster image so that I don't have to crane my neck when trying to align the mesh? ^^
I've searched through the camera parameters but can't seem to find a solution. Thank you, and sorry if it's obvious!
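A hedged guess at the cause (not confirmed by MeshLab's documentation): Windows Photo Viewer may only update the EXIF orientation flag rather than rewriting the pixel data, and MeshLab may ignore that flag. Re-saving the image with the pixels physically rotated should then load correctly; you can do that with ImageMagick's convert -rotate 90, or with a few lines of Qt as sketched below (file names here are hypothetical):

#include <QImage>
#include <QTransform>

int main() {
    QImage img("scan_0001.jpg");               // hypothetical input file
    // Physically rotate the pixels so the orientation no longer
    // depends on an EXIF flag that some programs ignore.
    QImage rotated = img.transformed(QTransform().rotate(90));
    rotated.save("scan_0001_rotated.jpg");     // load this one in MeshLab
    return 0;
}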

Related

Finding center coordinate of transformed image in terms of previous coordinate system

Apologies for asking this question, but I'm having trouble getting my head around 2D geometry and transforms (probably due to lack of sleep); I can't visualize things in my mind's eye. Could you please help?
I'm using Qt and QTransform, although this is largely irrelevant as this is a mathematical problem. I have an image that takes up the whole viewport, I zoom into the image at a point (zoomPos) that is clicked on. I accomplish this with the following transform:
zoomTransform.translate(zoomPos.x(), zoomPos.y());
zoomTransform.scale(zoomFactor, zoomFactor);
zoomTransform.translate(-zoomPos.x(), -zoomPos.y());
What I wish to calculate are the point coordinates of the center of the scaled (zoomed) image in terms of the original (unscaled) coordinate system. Another way of explaining this is: I wish to calculate the point coordinates of the original image that is the center of the scaled (zoomed) image. I hope that makes sense.
I tried using QTransform::map, which maps a point to the coordinate system defined by the transform. I think I have to use an inverted zoomTransform (not sure), and I'm also not sure which coordinates to map from.
Thanks for reading.
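One way to get this, assuming the zoomTransform built as in the question (viewportWidth and viewportHeight are illustrative names for your viewport size): since the forward transform maps original-image coordinates to zoomed view coordinates, map the viewport's center through the inverted transform.

// The forward transform maps original coordinates to view coordinates,
// so the inverse maps the visible center back to the original image.
// Closed form: original = zoomPos + (viewCenter - zoomPos) / zoomFactor.
bool invertible = false;
QTransform inverse = zoomTransform.inverted(&invertible);
if (invertible) {
    QPointF viewCenter(viewportWidth / 2.0, viewportHeight / 2.0);
    QPointF centerInOriginal = inverse.map(viewCenter);
    // centerInOriginal is the sought point in unscaled coordinates.
}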

LibGDX - Rectangle collision detection in 2d?

OK, I want to try this:
Make two cars (with sprites: e.g. a red rectangle for car 1's texture, a green rectangle for car 2's texture), each with width 32px and height 20px.
(Movement of the cars is not the problem.)
Then check collision detection as in the picture: the first is a front crash and the second is a side crash.
(Image: http://img802.imageshack.us/img802/2934/rectangles2.png)
Then delete the sprites and keep only the vectors (position and rotation) in the code.
I want it this way because I want to add 3D cars at these positions with their rotations.
In other words: collision detection in 2D, without sprites.
In the finished game there will be no sprites, only 3D objects.
Does anybody have some code for that?
I want to make it without Box2D, but if there is a good Box2D example, then I can make it with Box2D.
Thank you for any help.
Well, if you want to do collision detection I would just use the included Box2D. Have a look at Box2D Car Physics; this will give you a good starting point on how to build up the car. The code is for C++, but because LibGDX is a wrapper, all the methods demonstrated in the tutorial are available. If you need help setting up the Box2D physics in LibGDX, the wiki is very good. To get started building your engine, just use the Box2D debugger provided with LibGDX, which draws all shapes (box/circle/polygon); once you're happy with the behaviour of your engine, you can change the rendering code and use the X, Y positions and rotation of your car with your 3D models.
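If you do want to roll it yourself without Box2D, the standard technique for two rotated rectangles is the separating axis theorem (SAT): project both rectangles onto the edge normals of each rectangle, and they collide only if all four projections overlap. A self-contained sketch in C++ (the types and names are illustrative, not LibGDX API; the math carries over to Java directly):

#include <array>
#include <cmath>

struct Vec2 { float x, y; };

struct OrientedBox {
    Vec2 center;        // position of the car
    float halfW, halfH; // half extents: 16 and 10 for a 32x20 car
    float angle;        // rotation in radians

    std::array<Vec2, 4> corners() const {
        float c = std::cos(angle), s = std::sin(angle);
        Vec2 ax{ c * halfW, s * halfW };   // rotated half-width vector
        Vec2 ay{ -s * halfH, c * halfH };  // rotated half-height vector
        return {{ { center.x + ax.x + ay.x, center.y + ax.y + ay.y },
                  { center.x - ax.x + ay.x, center.y - ax.y + ay.y },
                  { center.x - ax.x - ay.x, center.y - ax.y - ay.y },
                  { center.x + ax.x - ay.x, center.y + ax.y - ay.y } }};
    }
};

// Project a box's corners onto an axis; return the [lo, hi] interval.
static void project(const OrientedBox& b, Vec2 axis, float& lo, float& hi) {
    std::array<Vec2, 4> cs = b.corners();
    lo = hi = cs[0].x * axis.x + cs[0].y * axis.y;
    for (int i = 1; i < 4; ++i) {
        float d = cs[i].x * axis.x + cs[i].y * axis.y;
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
}

bool overlaps(const OrientedBox& a, const OrientedBox& b) {
    // Candidate separating axes: the edge normals of both rectangles.
    Vec2 axes[4] = { {  std::cos(a.angle), std::sin(a.angle) },
                     { -std::sin(a.angle), std::cos(a.angle) },
                     {  std::cos(b.angle), std::sin(b.angle) },
                     { -std::sin(b.angle), std::cos(b.angle) } };
    for (const Vec2& axis : axes) {
        float aLo, aHi, bLo, bHi;
        project(a, axis, aLo, aHi);
        project(b, axis, bLo, bHi);
        if (aHi < bLo || bHi < aLo)
            return false; // a separating axis exists: no collision
    }
    return true; // projections overlap on every axis: collision
}

The same positions and rotations you feed into this test can later drive the 3D car models directly.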

2d point to 3d point on a sphere

I haven't been entirely sure what to Google or search for to help solve my problem, so I'm really hoping someone here can help a little.
Currently I have a 3D scene with a massive sphere that has a texture mapped to it and the camera at the center of the sphere, so it's much like a QTVR viewer.
I'd like a way to click on the polygons within the sphere and update the texture at that position with something, a dot, etc.
The only part of the process where I need help is converting the 2D mouse position to a point on the inside of the sphere.
I hope this makes sense.
FYI, I'm only looking for a pure math solution.
The first thing you need to do is convert the screen coordinate into a line in 3D space; this line passes through the point you click and your eye point.
Once you have this line, you can intersect it with your sphere to find the intersection point on the sphere.
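To make the intersection step concrete, here is a minimal sketch of the ray-sphere test, assuming you have already unprojected the mouse position into a world-space ray origin o and normalized direction d (names are illustrative). With the camera at the sphere's center it simplifies to hit = center + radius * d, but the general form also handles an off-center eye point:

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin o, normalized direction d) with a sphere
// (center c, radius r); writes the far hit point, which seen from
// inside the sphere is the visible wall.
bool raySphere(Vec3 o, Vec3 d, Vec3 c, float r, Vec3& hit) {
    Vec3 oc{ o.x - c.x, o.y - c.y, o.z - c.z };
    float b = dot(oc, d);             // half the linear term of the quadratic
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return false;    // ray misses the sphere entirely
    float t = -b + std::sqrt(disc);   // larger root = far intersection
    if (t < 0.0f) return false;       // sphere is behind the ray
    hit = { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z };
    return true;
}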
Alternatively, you could get the 2D screen coordinates of the polygons (triangles?) that make up the sphere and then find the one that contains the mouse pointer.

a planet in openGL: vector data or texture mapping?

I am completely new to 3D and started with Jeff Lamarche's tutorials as an introduction to OpenGL ES for iPhone. So far, I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render planet Earth from 2D GIS vector data (polygons, lines, or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on and off, and maybe identify an object that is touched.
My questions are:
Would it be easier to rasterize my vector data and use it as an image texture, or to apply the vector data onto the sphere directly (keeping in mind that I want to turn the layers on and off; the touch-enabled objects are optional)?
Would it be easier to use software like Blender to draw the planet and add the layers, rather than starting with the procedural sphere I already have?
Does the export tool from Blender to OpenGL work well?
This kind of question is difficult to answer in general. Technically, your intention sounds a lot like you would like to write a program such as Google Earth or KDE Marble. Since you're referring to GIS data, you will require very high resolution; textures only make sense for limited-resolution data.
GIS applications usually use hybrid approaches in which some vector data are rendered directly (roads, water, borders), while other data are rendered to texture, with the texture, or more accurately texture tiles, used as caches, for example for building outlines in dense cities. However, data as they come from, say, OSM can be rendered directly as vector data, since they are not very dense.
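Whichever route you choose, placing latitude/longitude vertices on your procedural sphere is the same spherical-to-Cartesian conversion; a minimal sketch (the axis convention here, with y pointing at the north pole, is an assumption to adapt to your scene):

#include <cmath>

struct Vec3 { float x, y, z; };

// Place a latitude/longitude pair (degrees) on a sphere of radius r
// centered at the origin. Y points at the north pole here; adapt the
// axis convention to your scene.
Vec3 latLonToSphere(float latDeg, float lonDeg, float r) {
    const float deg2rad = 3.14159265358979f / 180.0f;
    float lat = latDeg * deg2rad, lon = lonDeg * deg2rad;
    return { r * std::cos(lat) * std::cos(lon),
             r * std::sin(lat),
             r * std::cos(lat) * std::sin(lon) };
}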

Generating 3D TV stereoscopic output programmatically

Do you know what would be the best approach to generate 3D output from software for one of these new "3D ready" televisions? Our application has some nice 3D visualizations, and we want them to look good.
Also, how feasible is it to generate this output from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for images shot in 3D. This means "intended for simulated 3D," not just a two-dimensional projection of a 3D image.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye) in which the foreground objects are offset to simulate depth. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
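As a toy illustration of the two-images-per-frame idea (the struct and parameter names are illustrative, not from any particular SDK): offset the camera for each eye by half the interocular distance along its right vector, then render the scene once per eye.

struct Vec3 { float x, y, z; };

// Positions for the left and right eye cameras: shift the mono camera
// by half the interocular distance along its right vector, then render
// the scene once from each position.
void eyePositions(Vec3 cam, Vec3 right, float interocular,
                  Vec3& leftEye, Vec3& rightEye) {
    float h = interocular * 0.5f; // e.g. ~0.065 meters for human eyes
    leftEye  = { cam.x - right.x * h, cam.y - right.y * h, cam.z - right.z * h };
    rightEye = { cam.x + right.x * h, cam.y + right.y * h, cam.z + right.z * h };
}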
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
