I have a problem: my 3D plot gets distorted when I rotate or pan it. I know this behaviour is intended to show the user as much as possible at all times, but it looks silly, so I want an orthographic view, or axes of equal length.
Earth distorted.
I have a WPF application where the ILN form/control is hosted via WindowsFormsHost.
I have tried every option for PlotCube.Projection, set PlotCube.Limits, changed the ILN_Panel autosize, etc. My next idea was that maybe I need to configure the WPF and/or WinForms window itself.
Thanks!
I think that is not possible, at least not easily, within PlotCube. If you do not need PlotCube, just put the object into a Camera node and set its Projection to Orthogonal, and you are done.
With PlotCube it is much more complicated. You must make sure that all containers have an equal aspect ratio (width / height): the WinformsHostControl, the PlotCube, PlotCube.ScreenRect, PlotCube.DataScreenRect, PlotCube.Plots (the data group), and its Limits. If you want to rotate everything freely, you must also make sure to have an equal aspect ratio on all 3 dimensions, where applicable.
There was a GIF on the internet where someone used some sort of CAD program and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at a different scale; then they proceed to zoom in further on another tiny dot, revealing another detailed picture, and so on several times. Here is the link to the gif
Or another, similar example: imagine you have a time series with a granularity of a millisecond per sample, and you zoom out to reveal years' worth of data.
My question is: how does such finely detailed data get rendered in the end, when a huge amount of data ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e., for the time series, go through a million records just to average them into one line; or for CAD, render the whole vector picture and blur it into a tiny dot)? Or are there level-of-detail optimizations that can be applied so that you don't have to do this?
If so, how do they work, and where can one learn about them?
This is a very well-known problem in game development. In the following I am assuming you are using a scene graph: a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between the camera and the object, but you could also include the scale of the object as a factor. Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks, depending on how you check. And you can utilise object parenting to get transitive bounding boxes.
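As a rough sketch of that switching logic (the node layout here, with a lods list sorted from most to least detailed plus a maxDist per entry, is invented for illustration, not taken from any particular engine):

// Pick the mesh to draw from the camera distance, normalized by the
// object's scale so large objects keep detail longer.
function pickLod(node, camera) {
  const dx = node.x - camera.x;
  const dy = node.y - camera.y;
  const dz = node.z - camera.z;
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz) / node.scale;
  // node.lods is sorted from most to least detailed, e.g.
  // [{ maxDist: 10, mesh: hiRes }, { maxDist: 100, mesh: loRes }]
  for (const lod of node.lods) {
    if (dist <= lod.maxDist) return lod.mesh;
  }
  return node.boundingBoxMesh; // coarsest fallback: just the bounding box
}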
Clipping: if a polygon is not visible in the viewport at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, all that is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, vertex coordinates become tiny floating-point numbers. Given that you want all coordinates as precise as possible, and that modern CPUs can only handle floats with 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean by that is: as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: As you zoom into something and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera space is moving and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
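A minimal sketch of that re-scaling step, assuming a simple zoom-based camera (all names here are illustrative):

// Once the camera has zoomed in past some factor, scale the world up
// and compensate in the camera, so coordinates stay float-friendly.
const RESCALE_FACTOR = 1000;

function maybeRescale(world, camera) {
  if (camera.zoom < RESCALE_FACTOR) return;
  for (const obj of world.objects) {
    obj.scale *= RESCALE_FACTOR;
    obj.x *= RESCALE_FACTOR;
    obj.y *= RESCALE_FACTOR;
    obj.z *= RESCALE_FACTOR;
  }
  camera.x *= RESCALE_FACTOR;
  camera.y *= RESCALE_FACTOR;
  camera.z *= RESCALE_FACTOR;
  camera.zoom /= RESCALE_FACTOR; // same image, fresh coordinate range
}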
First, you can use caching, with tiles, like it's done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
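For a time series, such a cache could be a pyramid of min/max buckets built in one pass over the data; a sketch, not tied to any particular library:

// Level 0 is the raw data; each level up halves the resolution.
// Rendering any zoom level then touches only about as many buckets
// as there are pixels, not the whole dataset.
function buildPyramid(samples) {
  let level = samples.map(v => ({ min: v, max: v }));
  const pyramid = [level];
  while (level.length > 1) {
    const next = [];
    for (let i = 0; i < level.length; i += 2) {
      const a = level[i];
      const b = level[i + 1] || a;
      next.push({ min: Math.min(a.min, b.min), max: Math.max(a.max, b.max) });
    }
    pyramid.push(next);
    level = next;
  }
  return pyramid; // memory cost: roughly one extra copy of the data
}

To draw, you'd pick the pyramid level whose bucket width is closest to one pixel.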
But if you don't have extra memory for a cache (it actually takes much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as picking only every other point (or every 10th point, or whatever suits you). That yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders: anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly pick some points, and if, for example, 100 data points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you were to keep picking points, pixel one would remain twice as likely to be hit as pixel two. So you can just stop there and draw pixel one with a twice-as-heavy color.
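A quick sketch of that sampling scheme, where toPixel stands in for whatever maps a data point to a pixel column:

// Draw N random samples and shade each pixel by how often it was hit.
function sampleHistogram(data, width, toPixel, nSamples) {
  const hits = new Float32Array(width);
  for (let i = 0; i < nSamples; i++) {
    const point = data[Math.floor(Math.random() * data.length)];
    hits[toPixel(point)] += 1;
  }
  // Normalize so the busiest pixel is fully opaque; a pixel hit half
  // as often is drawn half as heavy, as described above.
  const max = Math.max(...hits) || 1;
  return hits.map(h => h / max);
}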
Also consider how much data you can and want to put into a pixel. If you draw a pixel in grayscale, there are only 256 shades, and you don't need to be more precise than that. And if you're going to draw a pixel in full color, you still need to ask yourself: will anyone notice the difference between something like rgb(123, 12, 54) and rgb(123, 11, 54)?
I am not sure how to put this problem in a single sentence, sorry if the title is misleading.
I am currently developing a simple terrain editor with a circle-shaped brush size. The image below shows a few cases that represent my problem.
Additional info: the square size is fixed and uniform, and in the current version my concern is only to find which squares are hit and which are not (the amount of area covered is important for weighting the hit, but probably not right now).
My current solution (which is not even correct under certain conditions) is: given a hit at position (x, y) with radius r, loop through all squares from (x - r, y - r) to (x + r, y + r) and apply 2-D box-to-circle collision detection. But I don't think this is optimal (or even correct, IMO).
Can anyone help me with this one? Thank you
Since I can't add a simple comment due to bureaucracy on this website, I have to type it out here.
Anyway, you're in luck, since I was trying to do this recently as well! The way I did it was to iterate through the vertex array and check whether the current vertex falls inside the radius of the circle. But perhaps what you want is to check against each quad's center: if that center falls inside the radius, then add the whole quad as being collided with.
Of course, depending on the size of your grid the performance will vary, so it's good to iterate through as few quads as possible. Though how to access those quads from the array is something you'll have to figure out yourself.
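If you do want the exact test rather than the center check, clamp the circle's center to each square and only visit squares inside the circle's bounding box; a sketch, assuming axis-aligned cells of size cellSize (clamp i and j to your grid bounds as needed):

// Visit only cells inside the circle's bounding box, then do an exact
// circle-vs-square overlap test.
function hitCells(cx, cy, r, cellSize) {
  const hit = [];
  const minI = Math.floor((cx - r) / cellSize);
  const maxI = Math.floor((cx + r) / cellSize);
  const minJ = Math.floor((cy - r) / cellSize);
  const maxJ = Math.floor((cy + r) / cellSize);
  for (let i = minI; i <= maxI; i++) {
    for (let j = minJ; j <= maxJ; j++) {
      // Closest point of cell (i, j) to the circle's center:
      const nx = Math.max(i * cellSize, Math.min(cx, (i + 1) * cellSize));
      const ny = Math.max(j * cellSize, Math.min(cy, (j + 1) * cellSize));
      if ((nx - cx) ** 2 + (ny - cy) ** 2 <= r * r) hit.push([i, j]);
    }
  }
  return hit;
}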
Result of my code:
Basically, the issue is that the transparent parts of my image are not blending correctly with what is drawn before them. I know I can do a
if(alpha<=0){discard;}
in the fragment shader; the only issue is that I plan on having a ton of fragments, and I don't want the if statement running for every fragment on mobile devices.
Here is my code related to alpha and depth testing:
var gl = canvas.getContext("webgl2", {
  antialias: false,
  alpha: false,
  premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER); // inverted test: only passes if the depth buffer is cleared to 0 rather than the default 1
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't occur. But they will be rotating dynamically at runtime, so fixing the order is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one, then depending on how you have the depth test set up, the blue one will fail the depth test where the X area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of "sorting".
On the other hand, if all of your transparency is 100% draw-or-not-draw, then discard has its advantages and you can draw front to back. The reason is that in that case, the pixel drawn (not discarded) by the red quad will be rejected by the depth test when drawing the blue quad. The depth test is usually optimized to run before the fragment shader for a given pixel: if the depth test says the pixel will not be drawn, there is no reason to even run the fragment shader for that pixel, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article
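A sketch of that sorting step, with objects, viewZ and drawObject standing in for your own scene data; not a complete renderer:

// Draw transparent objects back-to-front along the view direction.
function drawTransparent(gl, objects, viewZ) {
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  gl.depthMask(false); // test against opaque depth, but don't write it
  objects
    .slice()
    .sort((a, b) => viewZ(b) - viewZ(a)) // farthest first
    .forEach(obj => drawObject(gl, obj));
  gl.depthMask(true);
}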
A few notes:
you mentioned mobile devices, and you mentioned WebGL2 in your code sample. There is no WebGL2 on iOS.
you said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to a point size of 60, but to be safe it's generally best to draw with triangles, as there are other issues with points
you might also be interested in sprites with depth
I have a graph like:
I would like to generate a set of (x,y) pairs that correspond to points of this graph.
Maybe one for each horizontal pixel.
How would I go about doing this?
If I had the image in uncompressed bitmap format, maybe cropped to the actual graph, I could examine each vertical strip for the blackest point...
I would prefer to work in Python, but I'm interested in any technique.
I answered a question like this a while back. It should be fairly easy to detect the grid, and from there you can get the pixel coordinates relative to the grid. However, it wasn't clear how to extract the numbers, which you need in order to get the scale of the grid. It might be possible fairly easily if you can match the font and font size (perhaps via scaling); otherwise, you'd have to enter the numbers manually.
To extract the grid, you'd start from the top right and move diagonally until you find the start of the grid. From there you can follow the vertical and horizontal grid lines until they end. This should let you say with fairly high probability where the outer rectangle of the grid is and what the x and y intervals of the grid are in terms of pixels. The blackest parts within the grid should do for finding the curve, but it may require some interpolation depending on how many data points you need/want.
It may also be useful to look into techniques for reversing anti-aliasing effects, although the uncompressed bitmap image may not need it.
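As a sketch of the blackest-pixel-per-column idea, here with a browser canvas (the same approach ports directly to Python with PIL or numpy), assuming the image is already cropped to the grid; axis scaling must be applied afterwards:

// For each pixel column, take the darkest pixel as the curve's y value.
function extractCurve(ctx, width, height) {
  const { data } = ctx.getImageData(0, 0, width, height);
  const points = [];
  for (let x = 0; x < width; x++) {
    let bestY = -1;
    let bestLum = 256 * 3;
    for (let y = 0; y < height; y++) {
      const i = (y * width + x) * 4;
      const lum = data[i] + data[i + 1] + data[i + 2]; // r + g + b
      if (lum < bestLum) {
        bestLum = lum;
        bestY = y;
      }
    }
    if (bestY >= 0) points.push([x, bestY]);
  }
  return points;
}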
I'm making a side scroller similar to Castle Crashers and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the "thickness" of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the max y value (for the ground) as the z position changes? Basically, it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use that to check each dimension, and your answer to "are these colliding or not" is the logical AND of the collision detection along each of the dimensions.
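A sketch of the idea with axis-aligned boxes (SAT applies the same per-axis interval test to projected intervals):

// A 1-D interval overlap test, applied per axis. Two axis-aligned
// boxes collide iff they overlap on every axis.
function overlap1D(minA, maxA, minB, maxB) {
  return maxA >= minB && maxB >= minA;
}

function boxesCollide(a, b) {
  return (
    overlap1D(a.minX, a.maxX, b.minX, b.maxX) &&
    overlap1D(a.minY, a.maxY, b.minY, b.maxY) &&
    overlap1D(a.minZ, a.maxZ, b.minZ, b.maxZ) // the new z "depth" axis
  );
}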
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model, you have a full 3D world with 3 axes. You can't go halvesies on this point without some level of pain: your model must be proper 3D.
The view will read the location of all the game objects and project them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're talking about is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints; in particular, look for images of "The Tale of Genji".
The on screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
You can modify this for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a 1-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections: use 1-point perspective for deep backgrounds, and the orthographic approach for the foreground.
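A sketch of mixing the two, with a 1-point perspective x and an oblique y; FOCAL_LENGTH and DEPTH_RATIO are arbitrary tuning constants, not from any specific engine:

const FOCAL_LENGTH = 300;
const DEPTH_RATIO = 0.5;

function project(model, camera) {
  const scale = FOCAL_LENGTH / (FOCAL_LENGTH + model.z); // shrinks with depth
  return {
    x: (model.x - camera.x) * scale, // deep layers scroll slower in x
    y: model.y + model.z * DEPTH_RATIO - camera.y,
  };
}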
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, simulate the Z axis using the Y axis:
screen_Y = Y + Z/some_fudge_factor;
Actually, this is how real 3D engines work. After all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but just a bit).
For example, to implement a pseudo-isometric view in your game you can even apply Z to the screen_X axis, so objects are displaced diagonally instead of vertically.
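For instance, with arbitrary fudge factors:

// Push Z into both screen axes for a diagonal, pseudo-isometric offset.
function toScreen(x, y, z) {
  return {
    screenX: x + z / 2,
    screenY: y + z / 2,
  };
}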