I have a model of a shoehorn that is quite large. I would like to make a travel-size version by scaling the X and Y axes by 66.7% and the Z axis by 80%, so that it ends up proportionally wider than uniform scaling would give. Is there a way to do this in MeshLab? I am a brand-new user of MeshLab, operating on a Linux (Ubuntu) laptop with an AMD processor and video.
I looked through all the menus and the documentation I could find and did not see this addressed. I would expect a powerful, sophisticated tool like MeshLab to permit independent axis scaling.
I found it on the EduTech Wiki's MeshLab page, as follows:
Filters > Normals, Curvatures and Orientations > Transform: Scale
In the dialog box, independent values can be input for each axis.
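For reference, that filter just multiplies every vertex position by a diagonal scale matrix. A minimal numpy sketch of the same math (illustrative only, not MeshLab's API; the vertex values are made up):

import numpy as np

vertices = np.array([[10.0, 2.0, 5.0],      # made-up vertex positions
                     [12.0, 3.0, 7.0]])

scale = np.diag([0.667, 0.667, 0.8])        # X, Y scaled to 66.7%, Z to 80%
travel_size = vertices @ scale              # scale each coordinate independently

print(travel_size)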
I was wondering how graphing calculators were able to plot functions and relations so quickly.
For a function, I can see just testing all the x values numerically over a domain and plotting the results. But how does this work for relations (such as x^2 + y^2 = 1)? Numerically testing every possible x and y value isn't that fast, as it would be O(n^2), right? How is it possible?
Thank you.
It's based on the zoom: when you zoom in, you render the same number of values. The graph only lets you see a maximum of about 5 steps at a time, so it doesn't check all the x values, only the x values up to step*5. It also doesn't sample the decimals the way you might think. Instead of evaluating at x = x/100 to make the line look smooth, it evaluates at x = x/screenres. This means that, like 99% of graphics programs, it gets slower the higher your screen resolution is.
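As a rough sketch of that sampling strategy (the function, view range, and pixel count below are made up; the point is one sample per screen column, so the cost depends on screen width, not on zoom level):

def sample_function(f, view_xmin, view_xmax, screen_width_px):
    # One sample per pixel column of the current view.
    step = (view_xmax - view_xmin) / screen_width_px
    return [(view_xmin + i * step, f(view_xmin + i * step))
            for i in range(screen_width_px + 1)]

# Example: sample y = x^2 over the visible range [-2, 2] on an 800-px-wide plot.
points = sample_function(lambda x: x * x, -2.0, 2.0, 800)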
Vulkan uses a coordinate system where (-1, -1) is in the top-left quadrant, instead of the bottom-left quadrant as in the standard Cartesian coordinate system one typically learns about in school. So (-1, 1) is in the bottom-left quadrant in Vulkan's coordinate system.
(image from: http://vulkano.rs/guide/vertex-input)
What are the advantages of using Vulkan's coordinate system? One plain advantage I can see is pedagogical: it forces people to realize that coordinate systems are arbitrary, and one can easily map between them. However, I doubt that's the design reason.
So what are the design reasons for this choice?
Many coordinate systems in computer graphics put the origin at the top-left and point the y axis down.
This is because in early televisions and monitors, the electron beam that draws the picture starts at the top-left of the screen and progresses downward.
The pixels on the screen were generally made by reading memory in sequential addresses as the beam moved down the screen, and modulating that electron beam in accordance with each byte read in sequence. So the y axis corresponds to time, which corresponds to memory address.
Even today, virtually all representations of a bitmap in memory, or in a bitmapped file, start at the top-left.
It is natural when drawing bitmaps in such a medium to use a coordinate system that starts at the top-left too.
Things become a little more complicated when you use a bottom-left origin because finding the byte that corresponds to a pixel requires a little more math and needs to account for the height of the bitmap. There is usually just no reason to introduce the extra complexity.
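As a hedged illustration of that extra math, assuming a hypothetical 8-bit, one-byte-per-pixel bitmap stored row by row starting at the top-left:

def offset_top_left(x, y, width):
    # Top-left origin: the byte offset needs no knowledge of the bitmap height.
    return y * width + x

def offset_bottom_left(x, y, width, height):
    # Bottom-left origin: you must know the height to find the right row.
    return (height - 1 - y) * width + x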
When you start to introduce matrix transformations however, it becomes much more convenient to work with an upward-pointing y axis, because that lets you use all the vector algebra you learned in school without having to reverse the y axis and all the rotations in your thinking.
So what you'll usually find is that when you are working in a system that lets you do matrix operations, translations, rotations, etc., then you will have an upward-pointing y axis. At some point deep inside, however, the calculations will transform the coordinates into a downward-pointing y axis for the low-level operations.
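A tiny sketch of that low-level flip, assuming an integer pixel grid of a given height (the names here are illustrative, not from any particular API):

def world_to_screen(x_world, y_world, screen_height_px):
    # Do the math in a y-up space, then flip y only when addressing pixels.
    x_screen = x_world
    y_screen = (screen_height_px - 1) - y_world
    return x_screen, y_screen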
One of the common sources of confusion and bugs in OpenGL was that NDC and window coordinates had y increasing upwards, which is the opposite of the convention used in nearly all window systems and many (but not all) image formats, where y increases downwards. Developers ended up having to insert a y-flip in their transformation pipeline in many cases, and it wasn't always clear when they did and didn't need one.
So Vulkan decided to make it so you could load an image from a y-downwards image format directly into memory and draw it to the screen without any explicit y flips, to avoid this source of errors.
Other coordinate systems were then chosen to be consistent with that, in the sense that the y axis never flips direction in the standard Vulkan transformation pipeline. That meant that clip space vertex coordinates also had y increasing downwards.
This ended up meaning that Vulkan clip coordinates have a different orientation than D3D clip coordinates, which was an annoyance for developers supporting both APIs. So the VK_KHR_maintenance1 extension adds the ability to specify a negative viewport height, which essentially introduces a y-flip to the clip-space to framebuffer coordinate transform. (D3D has essentially always had an implicit y-flip here.)
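Roughly, the viewport transform maps NDC y to framebuffer y as y_fb = viewport.y + (ndc_y + 1) * viewport.height / 2, so a negative height paired with a y offset equal to the framebuffer height flips the axis. A small sketch of just that formula (not actual Vulkan calls; the 600-pixel height is made up):

def ndc_to_framebuffer_y(ndc_y, viewport_y, viewport_height):
    # Per-axis viewport transform for the y coordinate.
    return viewport_y + (ndc_y + 1.0) * viewport_height / 2.0

H = 600.0
# Default viewport (y=0, height=H): NDC y=-1 maps to the top of the framebuffer.
print(ndc_to_framebuffer_y(-1.0, 0.0, H))   # 0.0
# Flipped viewport (y=H, height=-H, VK_KHR_maintenance1 style): NDC y=-1 maps to the bottom.
print(ndc_to_framebuffer_y(-1.0, H, -H))    # 600.0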
This is how I remember the reasoning in the Vulkan Working Group, anyway. I don't think there's an authoritative public source anywhere.
Some of the folks on my team, including myself, find it pretty disorienting that in a Bokeh scatter plot (say, using the circle method), for an initial autoscale fit of the data on the figure, we can dial in a reasonable size for our data using, for example, something like plot.circle(x, y, size=3)
However, when we interactively zoom into our data, the glyph sizes as displayed are invariant to the zoom. Is there a way to have them scale proportionally to the zoom we've dialed into? Something akin to a vector-graphics interaction (e.g. SVG). If memory serves, MATLAB figures and matplotlib figures maintain this zoom-proportionality behavior. To demonstrate the behavior we're seeing, consider the first image and the red box I approximately zoom into in the second image.
Just as a quick demo using Powerpoint to illustrate the sort of desired behavior...
For circles, set the radius kwarg instead of the size value. (There are similar, glyph-specific values for the other glyph types.)
i.e.:
plot.circle(x=[1,2,3], y=[1,2,3], radius=0.5)
size is always rendered in screen coordinates (pixels), but radius and the related properties are computed in data coordinates and should change in magnitude with zooming.
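As a quick way to see the difference side by side, here's a minimal sketch (the data and colors are arbitrary; in recent Bokeh releases circle() with size is deprecated in favour of scatter(), so adjust for your version):

from bokeh.plotting import figure, show

p = figure(match_aspect=True)
# size is in screen pixels: these glyphs stay the same size when you zoom.
p.circle(x=[1, 2, 3], y=[1, 2, 3], size=10, color="navy")
# radius is in data units: these glyphs grow and shrink with the zoom.
p.circle(x=[1, 2, 3], y=[3, 2, 1], radius=0.2, color="firebrick")
show(p)

Zooming in should leave the navy circles at 10 px while the firebrick ones scale with the view.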
Here's a good demo by Bryan Van de Ven showing the difference between pixel coordinates (size) and data coordinates (radius) given in this conference talk:
Intro to Data Visualization with Bokeh - Part 2 - Strata Hadoop San Jose 2016
... the point is all of these attributes can be vectorized. We could for instance say size equals, you know, 2, 4, 6, 8, 10, and now the size is modulated, right. So we have one that has size 2 and one that has size 4. Size is usually in pixels, radius is usually in data dimension units. But all the other ones here as well, all the colors, all the visual attributes can be vectorized in this way. You can either give them a single value, as we've done for instance with the line fill color, or you can give them a vector of values, in which case all of the things are different.
So the next exercise here, you go to this notebook, this is that second notebook "02 - plotting", it is to try to create the same example but now set the radius instead of the size and sort of see what's the difference if you set radius instead of size.
I am trying to enlarge a point cloud data set. Suppose I have a point cloud data set consisting of 100 points & I want to enlarge it to say 5 times. Actually I am studying some specific structure which is very small, so I want to zoom in & do some computations. I want something like imresize() in Matlab.
Is there any function to do this? What does the resize() function do in PCL? Any idea how I can do it?
Why would you need this? Points are just numbers, regardless of whether their values are 1 or 100, as long as they are all on the same scale and in the same coordinate system. Their size on the screen is just a visual representation; you can zoom in and out as you wish.
You want them to be a thousandth of their original value (e.g. a millimeters -> meters change)? Divide them by 1000.
You want them spread out in a 5-times-larger space in that particular coordinate system? Multiply their coordinates by 5. But even so, their visual representation will look exactly the same on the screen. The data remains basically the same; the points will not be resized per se, their numeric representation will just change a bit. It is the simplest affine transform, just a single multiplication.
You want to have finer or coarser resolution of your numeric representation? Or have different range? Change your data type accordingly.
That is, if you deal with a single set.
If you deal with different sets, say, recorded with different kinds of sensors and the numeric representations differ a bit (there are angles between the coordinate systems, mm vs cm scale, etc.) you just have to find the transformation from one coordinate system to the other one and apply it to the first one.
Since you want to increase the number of points while preserving shape/structure of the cloud, I think you want to do something like 'upsampling'.
Here is another SO question on this.
PCL offers a class for bilateral upsampling.
And, as always, Google gives you a lot of hints on this topic.
Besides increasing the allocated memory (which Ziker mentioned; that's not what you want, right?) or zooming in in the visualization, you could just rescale your point cloud.
This can be done by multiplying each point's coordinates by a constant factor or by applying an affine transformation, so you can e.g. switch from mm to m.
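In PCL itself you would typically build the transform and run the cloud through pcl::transformPointCloud; the arithmetic is just a per-coordinate multiply, sketched here with numpy rather than PCL code (the point values are made up):

import numpy as np

points = np.array([[0.001, 0.002, 0.003],   # made-up points, e.g. in metres
                   [0.004, 0.005, 0.006]])

points_x5 = points * 5.0        # spread the cloud over a 5-times-larger space
points_mm = points * 1000.0     # e.g. switch from m to mm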
If I understand your question correctly:
If you have defined your cloud like this
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
you can in fact call resize:
cloud->points.resize (cloud->width * cloud->height);
Note that resize does nothing more than allocate more memory for the variable, so after resizing the original data remains in the cloud. If you want the resized cloud to be empty, don't forget to call cloud->clear();
If you just want to zoom some PCD for visual purposes (i.e. you can't see the shape of the cloud because it's too small), why don't you use PCL Visualization and zoom by scrolling up/down?
I'm making a side scroller similar to Castle Crashers and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the 'thickness' of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the max y value (for the ground) as the z position changes? Basically it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use that to check each dimension, and your answer to "are these colliding or not" is the logical AND of the collision detection along each of the dimensions.
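For axis-aligned boxes, that per-axis check ANDed across dimensions looks something like this sketch (the (min, max)-per-axis box representation is an assumption, not from the original post):

def overlap_1d(a_min, a_max, b_min, b_max):
    # Two intervals overlap if each starts before the other ends.
    return a_min <= b_max and b_min <= a_max

def colliding(box_a, box_b):
    # box_a, box_b: lists of (min, max) pairs, one per axis, any number of dimensions.
    return all(overlap_1d(a0, a1, b0, b1)
               for (a0, a1), (b0, b1) in zip(box_a, box_b))

# 3D example: x, y, and z (depth) intervals.
print(colliding([(0, 2), (0, 2), (5, 6)], [(1, 3), (1, 3), (5.5, 7)]))  # True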
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model, you have a full 3D world, with 3 axes. You can't go halvesies on this point without some level of pain. Your model must be proper 3D.
The view will read the location of all the game objects and project them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're talking about is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints; in particular, look for images of "The Tale of Genji".
The on screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
you can modify for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a 1-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections: use 1-point perspective for deep backgrounds, and the orthographic stuff for the foreground.
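Putting the two side by side, here's a hedged sketch: the oblique projection just restates the formulas above, and the parallax version adds a made-up focal length for a simple 1-point perspective divide (all the constants are tuning values, not from the original answer):

DEPTH_RATIO = 0.5
FOCAL = 300.0          # made-up focal length controlling parallax strength

def project_oblique(x, y, z, cam_x, cam_y):
    # Same oblique mapping as above: depth shifts the sprite diagonally.
    return (x - z * DEPTH_RATIO - cam_x,
            y + z * DEPTH_RATIO - cam_y)

def project_parallax(x, y, z, cam_x, cam_y):
    # 1-point perspective: farther layers move (and shrink) less with the camera.
    k = FOCAL / (FOCAL + z)
    return ((x - cam_x) * k, (y - cam_y) * k)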
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, simulate the Z axis using the Y axis:
screen_Y = Y + Z/some_fudge_factor;
Actually, this is how real 3D engines work. After all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but just a bit).
For example, to implement a pseudo-isometric view in your game you can even apply Z to the screen_X axis, so objects are displaced diagonally instead of vertically.
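A small sketch of that draw-time mapping, with made-up fudge factors (physics keeps using plain X/Y/Z; only the drawing code calls this):

FUDGE = 2.0

def to_screen(x, y, z, isometric_skew=0.0):
    # isometric_skew > 0 also pushes deeper objects sideways (pseudo-isometric);
    # leave it at 0 for a straight side view.
    screen_x = x + z * isometric_skew
    screen_y = y + z / FUDGE
    return screen_x, screen_y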