Can't project framebuffer for reflection

The water reflection, done just by scaling with glScalef(1, 1, -1), is being blocked by the terrain under the water.
So I think it would be better to render the reflection into a framebuffer first.
But how can I project the rendered texture onto the water?

Disabling the depth test keeps the terrain from blocking the reflection.
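One common way to do this (a sketch, not the only possible approach): render the mirrored scene into a framebuffer object, then, when drawing the water surface, reuse each water fragment's own clip-space position as a projective texture coordinate into that texture. The WebGL2/TypeScript snippet below is only an illustration of the idea; the helper name createReflectionTarget and the uniform names are mine, and in legacy desktop GL the same lookup can be set up through glTexGen and the texture matrix.

// Hypothetical helper: an FBO with colour + depth attachments to hold the mirrored scene.
function createReflectionTarget(gl: WebGL2RenderingContext, w: number, h: number) {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const depth = gl.createRenderbuffer()!;
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, w, h);

  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, tex };
}

// Pass 1: bind reflection.fbo, draw the scene mirrored about the water plane
// (the glScalef(1, 1, -1) equivalent), then bind the default framebuffer again.
// Pass 2: draw the water with shaders along these lines.

const waterVS = `#version 300 es
in vec3 position;
uniform mat4 uViewProj;   // the normal, un-mirrored camera matrix
out vec4 vClipPos;
void main() {
  gl_Position = uViewProj * vec4(position, 1.0);
  vClipPos = gl_Position; // hand the clip-space position to the fragment shader
}`;

const waterFS = `#version 300 es
precision highp float;
uniform sampler2D uReflection; // the colour texture rendered in pass 1
in vec4 vClipPos;
out vec4 outColor;
void main() {
  // Perspective divide + remap [-1,1] to [0,1]: the fragment samples the
  // reflection texture at its own screen position (projective texturing).
  vec2 uv = (vClipPos.xy / vClipPos.w) * 0.5 + 0.5;
  outColor = texture(uReflection, uv);
}`;

Because the mirrored pass renders into its own colour and depth attachments, the real underwater terrain no longer occludes it, so the depth test can stay on; it is also common to enable a user clip plane at the water level during that pass so mirrored geometry doesn't leak across the surface.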

Related

How to calculate three.js camera matrix given projection matrix

Problem context: I'm using the Google Maps WebGL API with its three.js wrapper to create an interactive browser game.
My understanding of the framework is that Google Maps takes control of the WebGL camera (e.g., to enable the usual map controls like drag-to-pan and scroll-to-zoom) and only allows client three.js code to query camera information via the following documented API:
this.camera.projectionMatrix.fromArray(
transformer.fromLatLngAltitude(this.anchor, this.rotation, this.scale)
);
I've attempted to click on three.js objects using the following method for calculating projection rays:
raycast(
  normalizedScreenPoint: three.Vector2,
): three.Intersection[] {
  // Unproject two points (NDC depths 0 and 0.5) through the inverse of the
  // projection matrix and use them as the pick ray's origin and direction.
  this.projectionMatrixInverse.copy(this.camera.projectionMatrix).invert();
  this.raycaster.ray.origin
    .set(normalizedScreenPoint.x, normalizedScreenPoint.y, 0)
    .applyMatrix4(this.projectionMatrixInverse);
  this.raycaster.ray.direction
    .set(normalizedScreenPoint.x, normalizedScreenPoint.y, .5)
    .applyMatrix4(this.projectionMatrixInverse)
    .sub(this.raycaster.ray.origin)
    .normalize();
  ...
}
where normalizedScreenPoint ranges from -1 to 1 and is just the X/Y coordinates within the map div.
This method generally seems to work correctly close to ground level. However, for objects at high altitudes (400m, or 400 three.js units) close to, but not occluded by, the camera (still entirely within the viewing frustum), my projection rays are not intersecting these objects as expected. The problem gets worse with altitude, with objects being nearly unselectable at 1000m. I do not have this issue when running in a pure three.js environment using three.js's native functions for generating projection rays, which require the camera's position in three.js space to be known.
I have to believe there's some kind of coordinate mismatch between three.js's Cartesian coordinates and the Google Maps azimuthal projection, or some comparable issue that's leading the API to return a "bad" projection matrix. The Google Maps hooks into WebGL are closed-source, so I'm unable to dig into how the camera projection is generated, but I believe it would help to manually move the camera position up a few meters, if only I could calculate and set it. How could I do this given its projection matrix?
The other alternative, integrating three.js myself with another map rendering engine like Tangram to get full control, would resolve these issues with a proprietary API, but would presumably be much more time-intensive.
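For what it's worth, if the matrix that fromLatLngAltitude() returns really is a full projection * view matrix with an ordinary perspective projection inside it (that part is an assumption, since the overlay is closed-source), the camera's world position can be recovered from its inverse: the camera centre is the one point whose image is the homogeneous direction (0, 0, k, 0). A hedged three.js sketch, with an illustrative function name:

import * as THREE from 'three';

// Assumption: projView is the combined projection * view matrix that was
// loaded into camera.projectionMatrix from transformer.fromLatLngAltitude().
// For a standard perspective projection, the camera centre maps to the
// homogeneous point (0, 0, k, 0); pulling (0, 0, 1, 0) back through the
// inverse matrix and dehomogenising therefore recovers the camera position.
function cameraPositionFromProjView(projView: THREE.Matrix4): THREE.Vector3 {
  const inv = projView.clone().invert();
  const h = new THREE.Vector4(0, 0, 1, 0).applyMatrix4(inv);
  return new THREE.Vector3(h.x / h.w, h.y / h.w, h.z / h.w);
}

// With an explicit origin, the pick ray can be rebuilt the usual way:
// origin = (possibly nudged) camera position, direction = unprojected
// screen point minus origin, normalized.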

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when those objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what colour it has; only the shape and the placement of the object are of interest (e.g. "circle").
So far I'm using a webcam connected to my computer and Processing's blob-detection functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because it distracts the user's attention. Actually, the surface should be completely dark.
Whenever I reach with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because its depth functionality does not work through glass/acrylic glass; correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert frames to the HSV and YCrCb colour spaces; these are much better for segmenting the required area.
I recommend checking out https://github.com/atduskgreg/opencv-processing. This interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
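The library above is the Java binding for Processing; purely to illustrate the same HSV-segmentation-plus-contours idea (this is a sketch with made-up threshold values, using opencv.js rather than opencv-processing), the calls look like this:

declare const cv: any; // opencv.js global (no official TypeScript typings)

// Segment blobs by colour in HSV space and extract their contours.
// The threshold values below are placeholders; tune them for your lighting.
function detectBlobs(srcRgba: any): { contours: any; hierarchy: any } {
  const hsv = new cv.Mat();
  cv.cvtColor(srcRgba, hsv, cv.COLOR_RGBA2RGB);
  cv.cvtColor(hsv, hsv, cv.COLOR_RGB2HSV);

  // Keep only pixels inside the hue/saturation/value range of your objects.
  const low = new cv.Mat(hsv.rows, hsv.cols, hsv.type(), [0, 60, 60, 0]);
  const high = new cv.Mat(hsv.rows, hsv.cols, hsv.type(), [30, 255, 255, 255]);
  const mask = new cv.Mat();
  cv.inRange(hsv, low, high, mask);

  // Contours give you each object's shape and position on the surface.
  const contours = new cv.MatVector();
  const hierarchy = new cv.Mat();
  cv.findContours(mask, contours, hierarchy, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);

  hsv.delete(); low.delete(); high.delete(); mask.delete();
  return { contours, hierarchy };
}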
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera setup (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, and you'll differentiate the objects by ID, but you'll also be able to get position/rotation/etc., and your hands will not be part of that.

How to render superimposed planar objects in OpenGL?

I am trying to render geographical data obtained at different times with different sensors. Currently, I manage (through OpenGL and a QOpenGLWidget) to render a single image (i.e. all vertices have a z=0 coordinate). However, I am wondering how to add new "images" (still with different vertices and textures) which can overlap the others in the same plane z=0.
Sample from each texture in your fragment shader, doing whatever compositing you need, such as additive blending, though for geospatial data it's probably more complex than that.
If you are using a library that does all that for you, then simply disable depth testing and render each layer, adjusting the transparency/blend function between passes.
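The question is about an OpenGL/QOpenGLWidget setup, but the state this answer describes is the same in any GL flavour; here is a minimal WebGL-style sketch of the "disable depth test, blend each co-planar layer" pass order (the layer objects and the setOpacity callback are made up for illustration):

// Draw several co-planar (z = 0) textured layers on top of each other.
// With the depth test off, the last layer drawn wins wherever it is opaque,
// and standard alpha blending composites it over the layers below.
function drawLayers(
  gl: WebGL2RenderingContext,
  layers: { draw(): void; opacity: number }[],
  setOpacity: (o: number) => void,
) {
  gl.disable(gl.DEPTH_TEST);                 // co-planar quads would z-fight otherwise
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

  for (const layer of layers) {
    setOpacity(layer.opacity);               // e.g. a uniform multiplied into alpha
    layer.draw();                            // one pass per image/texture
  }
}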

How to achieve realistic reflection with threejs

I am trying to render as realistically as possible a scene in which a point light hits an object and bounces off with the same angle wrt the normal of the face (angle of incidence = angle of reflection) and illuminates the scene elsewhere.
Now, I know reflection in three.js is normally handled with a CubeCamera-based material, as per the examples I found online, but it doesn't quite apply to my case, for I may be observing the scene from a point from which I cannot see the reflection of the object on the mirror-like surface of another one.
Consider this example prototype I'm working on: if the box protruding from the wall had a mirror-like material (accomplished with a CubeCamera), I wouldn't be able to see the green cube's reflection on its bottom face unless the camera were at a specific position. In real life, however, if an object illuminated by a light source passes in the vicinity of another one, it will partly light it as if it were a light source itself (depending on the object's reflectivity, of course), and that phenomenon should be visible from any point of view from which the surface receiving the indirect lighting is visible.
Hence I came up with the idea of adding a PointLight to the cube, but this of course produces undesirable effects on the surroundings.
I will try to illustrate my goal with the following sequence:
1) Here, the far side of what I will henceforth refer to as balcony is correctly dark, while the areas marked with a red 'x' are the consequence of the cube having a child PointLight which shines in all directions.
2) Here, the balcony's far face is still dark and the bottom one is receiving even more light as the cube passes by, which is desirable, but the wall behind the cube should actually be dark (I haven't added shadows yet, I first want to get the lighting right), as well as the ground beneath it and the lamp post.
3) Finally, when the cube has passed the balcony, it's just plain wrong for the balcony's side and bottom faces to be illuminated, for we all know that a reflected ray does not bounce back the way it came from. The same applies to the lamp post.
Now I realize that all the mistakes that occur are due to the fact that the cube emits light itself; what I'm hoping you can help me with is determining a way to produce physically accurate reflected rays.
I would like to avoid using ambient light or other hacks to simulate real-life scenarios and stick to physics as much as possible; I suspect what I want to achieve is very computationally heavy to render, let alone animate in a real-time use case, but that's not an issue for I'm merely trying to develop a proof-of-concept, not something that should necessarily perform fast.
From what I gather, I should probably be writing custom vertex and fragment shaders for the materials receiving indirect illumination, right? Unfortunately I wouldn't know where to begin, can anyone point me in the right direction? Cheers.
If you do not want to go to volumetric rendering, then you have 3 options (that I know of):
ray-tracing
You have to use ray-traced rendering (back ray tracing) to achieve this. It will also cover shadows, transparent materials, reflected illumination and much more if coded properly. Unless you also want to do precise atmospheric scattering, this is the way to go.
Back ray tracing casts one (or 3) ray(s) per screen pixel. It is much faster, but not as precise (still precise enough).
Forward ray tracing casts one ray per 3D angular unit (steradian) of space per light source. It is slow but precise (if the ray density is high enough).
If a cast ray hits an obstacle, its colour is changed (according to the obstacle's properties) and a new ray is cast as the reflected ray. If the material is transparent, a refracted ray is cast as well. Each hit or refraction reduces the light intensity, so you stop when the intensity falls below some threshold, or at some recursion depth (limiting the maximum number of bounces per ray), to avoid infinite loops; this also lets you trade quality against performance.
standard polygon rendering
With this approach (which I think you are using right now) you have to improvise. The reflection and illumination effects can be done similarly to shadowing techniques: for each reflective surface you render the scene in the reflected direction. The same is done for shadows, except there you render from the light's direction or use a shadow map instead. If you have an insane number of reflective surfaces, this approach is not the way to go; also, to get reflections of refractions you have to render recursively, which means multiple render passes per polygon, which is also insane.
cubemap
You can use a cube map per object. It is similar to option 2, but the expensive part is done only once, while generating the cube maps, instead of every frame. If you have too many objects, this is also not the way to go; you can use cube maps only for objects with reflective surfaces to keep it manageable. Also, if objects are moving, you have to regenerate the cube maps once in a while.
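For the cube-map option in three.js specifically, a minimal sketch (the render-target size and the decision to update every frame are arbitrary choices; a real scene would update the cube map less often):

import * as THREE from 'three';

// One CubeCamera per reflective object: it renders the scene into a cube
// render target, which is then used as that object's environment map.
const cubeTarget = new THREE.WebGLCubeRenderTarget(256);
const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeTarget);

const reflectiveMaterial = new THREE.MeshStandardMaterial({
  metalness: 1.0,
  roughness: 0.0,
  envMap: cubeTarget.texture,
});
const box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), reflectiveMaterial);
box.add(cubeCamera); // keep the cube camera at the reflective object's position

function render(renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.PerspectiveCamera) {
  // Regenerating the cube map every frame is the expensive part the answer
  // warns about; do it less often if the surroundings rarely change.
  box.visible = false;            // don't capture the object itself in its own cube map
  cubeCamera.update(renderer, scene);
  box.visible = true;

  renderer.render(scene, camera);
}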

Buffering Direct3D draw operations

The question is 2D specific.
I have a constantly updating texture, which is the render target for one of my layers. The update is a complete redraw of the texture, performed by drawing sprites and outputting text. The operation happens frequently and consumes quite a lot of CPU, and I have, of course, optimized the number of redraws to keep it down.
Is there a way to buffer these operations in Direct3D? Currently I have to repeatedly construct a chain of sprite/text operations. Take any game performing a world update: how does it overcome this tedious work? Maybe by creating more layers?
The best thing for me would be creating a modifiable draw chain object, but I haven't found anything like this in Direct3D.
There are a few general methods you might look into:
Batching: Order and combine draws to perform as few calls as possible, and draw as many objects between state changes as you can.
Cache: Keep as much geometry in vertex buffers as you can. With 2D, this gets more interesting, since most things are textured quads. In that case...
Shaders: It may be possible to write a vertex shader that takes a float4 giving the X/Y position and size of your quad, then uses that to place the 4 vertices. You won't need to perform full matrix state changes then, just update 4 floats in your shader (skipping all the view calculations: 75% less memory and math). To help make sure the right settings are being used with the shaders, ...
State Blocks: Save a state block for each type of sprite, with all the colors, modes, and shaders bound. Then simply apply the state block, bind your textures, set your coordinates, and draw. At best, you can get each sprite down to 4 calls. Even still...
Cull: It's best not to draw something at all. If you can, do simple screen bounds-checking (which can be faster than the per-poly culling that would be done otherwise), sorting, and basic occlusion (flag sprites with transparency). With 2D, most culling checks are very cheap. Sort and clip and cull wherever you can.
As far as actual buffering goes, the driver will handle that for you, when and where it's appropriate. State blocks can affect buffering by delivering all the modes in a single call (I forget whether that's good or bad, though I believe they can be beneficial). Cutting the calls down to:
if (sprite.Visible && Active(sprite) && OnScreen(sprite))
{
    states[sprite.Type]->Apply();
    device->BindTexture(sprite.Texture);
    device->SetVertexShaderF(sprite.PositionSize);
    device->Draw(quad);
}
is very likely to help with CPU use.
