Material properties databases (BRDF) - reflection

I am working on a new ray tracing project which has to provide a highly realistic simulation.
I have a problem with calculating the reflection off a target. I want to use a BRDF (bidirectional reflectance distribution function) model, which gives information about diffuse and specular reflection, to calculate how much energy is reflected from the target.
What I wanted to ask:
I was wondering if there are any databases with BRDF measurements for different materials (including incoming and outgoing light directions) and for different wavelengths of the light source? The most interesting range for me is a laser source at 800-1600 nm.
The second thing: do you know of any good databases with material properties such as ambient, diffuse, specular, shininess and surface roughness values?
What I've already found:
http://www.merl.com/brdf/
https://globe3d.sourceforge.io/g3d_html/gl-materials__ads.htm
http://www.cs.columbia.edu/CAVE/software/curet/
https://github.com/POV-Ray/povray/tree/master/distribution/include
But if I understand correctly, all of them used a lamp as the light source, and I don't see any references to the wavelength of the light source. Moreover, they don't contain good material property values.
I am quite new to this, so if you have any advice, or if I wrote something wrong, I will be grateful for any correction and any information connected with this topic.
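For what it's worth, here is a minimal sketch (Python/NumPy) of how ambient/diffuse/specular/shininess-style properties can be plugged into a simple normalized-Phong BRDF to estimate reflected radiance for a given incoming and outgoing direction. The material values below are made up for illustration; an analytic model like this is only a stand-in until real measured BRDF data (e.g. MERL-style tables) is available.
```
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_brdf(n, wi, wo, kd, ks, shininess):
    """Very simple normalized-Phong BRDF: diffuse term plus one specular lobe.

    n  : surface normal (unit vector)
    wi : direction towards the light (unit vector)
    wo : direction towards the viewer/sensor (unit vector)
    kd, ks : diffuse and specular reflectance (0..1)
    """
    diffuse = kd / np.pi
    r = 2.0 * np.dot(n, wi) * n - wi          # mirror direction of wi about n
    spec = ks * (shininess + 2) / (2 * np.pi) * max(np.dot(r, wo), 0.0) ** shininess
    return diffuse + spec

# Hypothetical material values (not taken from any real database).
kd, ks, shininess = 0.6, 0.3, 40.0

n  = np.array([0.0, 0.0, 1.0])
wi = normalize(np.array([0.3, 0.0, 1.0]))     # incoming light direction
wo = normalize(np.array([-0.3, 0.0, 1.0]))    # outgoing (sensor) direction

# Reflected radiance for incident irradiance E: L = brdf * E * cos(theta_i)
E = 1.0
L = phong_brdf(n, wi, wo, kd, ks, shininess) * E * np.dot(n, wi)
print(L)
```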

Related

Accelerometer using ADXL345 for Earthquake Detection

Well, I want to ask if the ADXL345 can be used to detect an earthquake occurrence based on its magnitude/intensity level. For more information: I want to use an accelerometer to create a device that can detect the intensity/magnitude level of an earthquake.
I have absolutely no experience in this field, but it looks useful and fascinating.
Questions are:
is this device able to detect medium scale earthquakes?
if yes, has anybody done it, and are they available to share their experience?
if no to the previous, is there any guide that explains the algorithms, calculations and mechanical plans?
That sensor is not suitable. It has 13-bit resolution over a ±16 g full range, which gives a sensitivity of about 4 mg per LSB. In order to detect an earthquake directly below you, you need to resolve approximately a few milli-g (e.g. see here), and even less for earthquakes with an epicentre elsewhere.
You want a sensor that is more sensitive by a factor of roughly 100, and probably with more resolution (a better ADC) too.
(And you should have been able to do this quick google-search analysis yourself ;) )
An accelerometer reading tells you nothing about the actual magnitude of the quake itself. It tells you the size of the quake at your location. Combining location and amplitude will give you a 'weighted' measurement, but that's still useless without a calibration curve. Without knowing what acceleration, at a certain distance, corresponds to what magnitude, you will be unable to tell what the magnitude is. You can certainly conclude that your measured earthquake has a median amplitude of, say, 2000% of a non-earthquake reading, but you won't be able to turn that into a Richter measurement. To do that you'd need to take some data during earthquakes of known magnitude and then work out how acceleration, distance and magnitude are related for your device. You could alternatively use a scale like the Shindo scale (just Google it).
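To put the milli-g numbers above into code, here is a minimal sketch (Python/NumPy, with a synthetic sample stream) of flagging candidate events by comparing the peak deviation in a short window against a threshold. As discussed above, turning such detections into a magnitude still requires calibration against quakes of known magnitude.
```
import numpy as np

def detect_event(samples_g, threshold_g=0.004, window=50):
    """Flag windows whose peak deviation from the running mean exceeds
    threshold_g (in g). 0.004 g is roughly one ADXL345 LSB, i.e. near the
    quantisation floor, which is why a more sensitive sensor is needed."""
    samples_g = np.asarray(samples_g)
    baseline = samples_g.mean()                  # crude gravity/offset removal
    hits = []
    for start in range(0, len(samples_g) - window, window):
        peak = np.abs(samples_g[start:start + window] - baseline).max()
        if peak > threshold_g:
            hits.append((start, peak))
    return hits

# Synthetic example: a quiet ~1 g signal with a small 10 milli-g burst injected.
rng = np.random.default_rng(0)
quiet = 1.0 + rng.normal(0, 0.0005, 1000)
quiet[400:450] += 0.010
print(detect_event(quiet))
```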

What device/instrument/technology should I use for detecting object’s lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it. The interface should detect if the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what colour it has – only the shape and the placement of the object are of interest (e.g. "circle").
So far I’m using a webcam connected to my computer and Processing’s blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages to what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because it distracts the user's attention. Ideally the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognises objects (my hand) that are not directly touching the canvas. This problem can hardly be tackled using a Kinect, because the depth functionality does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any ideas for an alternative device/technology to detect the objects? It would be nice if the device worked well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface looks dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV or YCrCb colour space. These are much better for segmenting the required area when doing colour-based detection.
I recommend you check out https://github.com/atduskgreg/opencv-processing. This interfaces OpenCV with Processing, so you get a lot of OpenCV functionality inside Processing.
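For illustration, here is a rough sketch of the colour-segmentation-plus-contours idea, written with Python/OpenCV rather than the opencv-processing wrapper mentioned above (OpenCV 4.x assumed); the HSV bounds and minimum blob area are placeholders you would have to tune for your objects and lighting.
```
import cv2
import numpy as np

def find_objects(frame_bgr, lower_hsv=(0, 80, 80), upper_hsv=(20, 255, 255), min_area=200):
    """Segment a colour range in HSV space and return the centres of the
    resulting blobs. The HSV bounds are placeholders and must be tuned."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    centres = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                              # ignore tiny blobs (noise, hands grazing the edge)
        m = cv2.moments(c)
        centres.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centres

cap = cv2.VideoCapture(0)                         # webcam
ok, frame = cap.read()
if ok:
    print(find_objects(frame))
cap.release()
```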
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter, but I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, and you'll differentiate the objects by ID, but you'll also be able to get position/rotation/etc., and your hands will not be part of that.

How to achieve realistic reflection with threejs

I am trying to render as realistically as possible a scene in which a point light hits an object and bounces off with the same angle wrt the normal of the face (angle of incidence = angle of reflection) and illuminates the scene elsewhere.
Now, I know reflection in three.js is normally dealt with using a CubeCamera material, as per the examples I found online, but it doesn't quite apply to my case, because I may be observing the scene from a point from which I cannot see the reflection of the object on the mirror-like surface of another one.
Consider this example prototype I'm working on: if the box that is protruding from the wall in the scene had a mirror-like material (accomplished with a CubeCamera), I wouldn't be able to see the green cube's reflection on the bottom face unless the camera was at a specific position. In real life, however, if an object illuminated by a light source passes in the vicinity of another one, it will partly light it as if it were a light source itself (depending on the object's reflectivity, of course), and this phenomenon should be visible from any point of view from which the object receiving indirect lighting is visible.
Hence I came up with the idea of adding a PointLight to the cube, but this of course produces undesirable effects on the surroundings.
I will try to illustrate my goal with the following sequence:
1) Here, the far side of what I will henceforth refer to as the balcony is correctly dark, while the areas marked with a red 'x' are the consequence of the cube having a child PointLight which shines in all directions.
2) Here, the balcony's far face is still dark and the bottom one is receiving even more light as the cube passes by, which is desirable, but the wall behind the cube should actually be dark (I haven't added shadows yet; I first want to get the lighting right), as should the ground beneath it and the lamp post.
3) Finally, when the cube has passed the balcony, it's just plain wrong for the balcony's side and bottom face to be illuminated, for we all know that a reflected ray does not bounce back the way it came from. The same applies to the lamp post.
Now I realize that all the mistakes that occur are due to the fact that the cube emits light itself; what I'm hoping you can help me with is determining a way to produce physically accurate reflected rays.
I would like to avoid using ambient light or other hacks to simulate real-life scenarios and stick to physics as much as possible; I suspect what I want to achieve is very computationally heavy to render, let alone animate in a real-time use case, but that's not an issue for I'm merely trying to develop a proof-of-concept, not something that should necessarily perform fast.
From what I gather, I should probably be writing custom vertex and fragment shaders for the materials receiving indirect illumination, right? Unfortunately I wouldn't know where to begin, can anyone point me in the right direction? Cheers.
If you do not want to go to volumetric rendering, then you have three options (that I know of):
ray-tracing
You have to use ray-trace rendering (back ray tracing) to achieve this. This will also cover shadows, transparent materials, reflected illumination and much more if coded properly. Unless you also want precise atmospheric scattering, this is the way to go.
Back ray tracing casts one (or three) rays per screen pixel. It is much faster but not as precise (still precise enough).
Forward ray tracing casts one ray per 3D angular unit (steradian) of space per light source. It is slow but precise (if the ray density is high enough).
If the cast ray hits any obstacle then its colour is changed (according to the obstacle's properties) and a new ray is cast as the reflected light ray. If the material is transparent then a refracted ray is cast too. Each hit or refraction affects the light intensity, so you stop when the intensity drops below some threshold or at some level of recursion (limit the maximum number of bounces per ray) to avoid infinite loops; this lets you trade performance against quality.
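Here is a minimal sketch (Python/NumPy) of that recursion, with a toy one-sphere scene standing in for a real intersection routine; the Hit/Sphere interface is made up purely for illustration.
```
import numpy as np

MAX_DEPTH = 5          # recursion limit (max number of bounces)
MIN_WEIGHT = 0.01      # stop when a ray contributes less than this

def reflect(d, n):
    """Mirror direction d about normal n (angle in = angle out)."""
    return d - 2.0 * np.dot(d, n) * n

def trace(scene, origin, direction, depth=0, weight=1.0):
    """Cast a ray into the scene and return its colour.

    scene.intersect(origin, direction) is assumed to return None or a hit
    record with .point, .normal, .colour (0..1) and .reflectivity (0..1)."""
    if depth >= MAX_DEPTH or weight < MIN_WEIGHT:
        return np.zeros(3)

    hit = scene.intersect(origin, direction)
    if hit is None:
        return np.zeros(3)                        # background

    local = hit.colour * (1.0 - hit.reflectivity)

    # Cast the reflected ray; its contribution is attenuated by reflectivity.
    r = reflect(direction, hit.normal)
    bounced = trace(scene, hit.point + 1e-4 * hit.normal, r,
                    depth + 1, weight * hit.reflectivity)
    return local + hit.reflectivity * bounced

class Hit:
    def __init__(self, point, normal, colour, reflectivity):
        self.point, self.normal = point, normal
        self.colour, self.reflectivity = colour, reflectivity

class Sphere:
    """Toy scene containing a single reflective sphere."""
    def __init__(self, centre, radius, colour, reflectivity):
        self.centre, self.radius = np.asarray(centre, float), radius
        self.colour, self.reflectivity = np.asarray(colour, float), reflectivity

    def intersect(self, origin, direction):
        oc = origin - self.centre
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        if t <= 0:
            return None
        p = origin + t * direction
        return Hit(p, (p - self.centre) / self.radius, self.colour, self.reflectivity)

scene = Sphere([0, 0, 5], 1.0, [1.0, 0.2, 0.2], 0.3)
print(trace(scene, np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
```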
standard polygon rendering
With this approach (I think you are using it right now) you have to improvise. The reflection and illumination effects can be done similarly to shadowing techniques: for each reflective surface you have to render the scene in the reflected direction. The same can be done with shadows, but then you just render from the light's direction or use a shadow map instead. If you have an insane number of reflective surfaces then this approach is not the way. Also, to achieve reflection of refraction you have to render recursively, making it multiple rendering passes per polygon, which is also insane.
cubemap
You can use a cube map per object. It is similar to option 2, but the insanity happens only once, while generating the cube maps, instead of every frame. If you have too many objects then this is also not the way. You can use cube maps only for objects with reflective surfaces to make it manageable. Also, if the objects are moving then you have to regenerate the cube maps once in a while.

N Game character Physics

For my 2D platform game I am following two articles from the creator of the N / N+ game. The two articles cover how collision is handled, and a broad-phase collision detection scheme that stores all the AABB shape info in a tile map along with some other information required for collision. Nowhere on the Internet is the handling of player character movement explained.
http://www.madgravityradio.com/ngame.html
I tried with a small rectangle in place of the player. The result I got is that the rectangle is very responsive as far as collision is concerned, but not realistic (I have no idea how to tilt the player rectangle to an angle when stepping down or up on slope edges).
What type of object is the player composed of? Is the player kept inside a box shape, with the box being translated? Please shed some light on how the character is controlled and the concept behind this virtual player. I read somewhere that a ragdoll is used.
I have a few more general questions:
In SAT, how do I handle/apply the minimum translation vector to make the movement more realistic?
I haven't started creating the tile map for this iPhone game, but I do have some experience in tile map creation for Flash games. I have no idea how to handle iPhone memory efficiently - any recommendations, please?
The term you are looking for is collision response - i.e. now that you have detected a collision and have the collision data, what do you do with it to produce a meaningful response? This is a pretty big topic, so maybe investigate it and come back with some more specific questions. Here's a basic primer, and Google/Wikipedia will take you much further.
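Regarding the SAT/minimum-translation-vector question, a common response (a sketch in Python/NumPy, not the N authors' exact method) is to push the player out along the MTV and then remove the velocity component along the contact normal, which also gives sliding along slopes:
```
import numpy as np

def resolve_collision(position, velocity, mtv):
    """Basic collision response from a SAT test.

    mtv is the minimum translation vector: the smallest push, along the
    axis of least overlap, that separates the player box from the tile."""
    position = position + mtv                    # 1) move the player out of penetration

    n = mtv / np.linalg.norm(mtv)                # 2) contact normal
    vn = np.dot(velocity, n)
    if vn < 0:                                   # only cancel motion *into* the surface
        velocity = velocity - vn * n             # project velocity onto the surface (slide)
    return position, velocity

# Example: player falling onto a slope whose MTV points up and to the left.
pos = np.array([10.0, 5.0])
vel = np.array([2.0, -3.0])
mtv = np.array([-0.1, 0.2])
print(resolve_collision(pos, vel, mtv))
```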

Wifi Triangulation

What would be the best way to triangulate a wireless network passively? Are there tools available? Algorithms? Libraries?
My goal would be to create a relative map of various objects that send or receive signals, using signal strength (dB), signal-to-noise ratio, signal phase, etc., measured from a few locations. With enough sampling, I'm guessing it would be possible to create a good 2D/3D map.
I'm searching for stuff in any language / platform.
Some keywords: wi-fi site survey, visualization, coverage, location, positioning
I'm thinking about using Kismet to gather the data and then processing it. Maybe use free-space path loss for RF in the 2.4 GHz range to calculate a relative distance, and optionally try to use RF obstacle attenuation estimates (based on some user input) to improve the results. Then use trilateration to generate possible relative coordinates.
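For illustration, here is a small sketch (Python) of the free-space path loss step mentioned above, converting an RSSI reading into a rough distance at 2.4 GHz. The transmit power is an assumption, and walls/obstacles will make the result only a relative figure.
```
import math

def distance_from_rssi(rssi_dbm, tx_power_dbm=20.0, freq_mhz=2437.0):
    """Estimate distance (metres) from received signal strength assuming
    free-space path loss: FSPL[dB] = 20*log10(d_m) + 20*log10(f_MHz) - 27.55.

    tx_power_dbm is the assumed transmit power including antenna gains;
    real access points and real obstacles make this only a rough figure."""
    path_loss_db = tx_power_dbm - rssi_dbm
    exponent = (path_loss_db - 20.0 * math.log10(freq_mhz) + 27.55) / 20.0
    return 10.0 ** exponent

for rssi in (-40, -60, -80):
    print(rssi, "dBm ->", round(distance_from_rssi(rssi), 1), "m")
```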
You can't use the GPS technique because the timing is nothing like accurate enough.
The best you can do is trilateration based on the signal strength from each base station, assuming that range is proportional to signal strength.
You will probably need to force a connection to each base station in turn in order to measure the signal strength.
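Once you have range estimates, the trilateration step itself can be done by least squares. A minimal sketch (Python/NumPy, with made-up access point positions):
```
import numpy as np

def trilaterate(stations, ranges):
    """Least-squares 2D position estimate from ranges to known stations.

    Linearise |x - p_i|^2 = r_i^2 by subtracting the first equation,
    giving a linear system A x = b that np.linalg.lstsq can solve."""
    p = np.asarray(stations, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical access-point positions (metres) and noisy range estimates.
aps    = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth  = np.array([3.0, 4.0])
ranges = [np.linalg.norm(truth - np.array(ap)) + np.random.normal(0, 0.2) for ap in aps]
print(trilaterate(aps, ranges))
```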
Interesting question. My initial thought was to use the output from something like the Wi-Spy spectrum analyzer. I like the idea of using a directional antenna. It looks like some research may be underway.
Instead of trilateration you could use bilinear interpolation. This is said to be better for non-linear distance vs. signal strength data, as Wi-Fi in an urban environment would be. http://courses.cit.cornell.edu/ee476/FinalProjects/s2007/ayl26_ym82/ayl26_ym82/index.htm has the background math and what I assume is AVR C for doing it with magnetic field sensors.
Using signal strength to judge distance could easily be thrown off by differences in materials blocking line-of-sight to each of the sampling points. It would probably be better to do the sampling with a directional antenna, and from each sampling point, find the bearing that maximizes signal strength to each device you want to locate. With this technique, you can use only two or three sampling locations, depending on the accuracy with which you can estimate the bearings.
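A minimal sketch (Python/NumPy, with hypothetical bearings) of that bearing-based approach: intersect the bearing lines from two sampling points to get a position estimate.
```
import math
import numpy as np

def locate_from_bearings(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing rays (compass-style, degrees clockwise from
    north/+y) taken from known sampling points p1 and p2."""
    def direction(bearing_deg):
        b = math.radians(bearing_deg)
        return np.array([math.sin(b), math.cos(b)])   # x = east, y = north

    d1, d2 = direction(bearing1_deg), direction(bearing2_deg)
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Example: two sampling points 20 m apart, both "pointing" at the same device.
print(locate_from_bearings((0, 0), 45.0, (20, 0), 315.0))
```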
Ars Technica has an article about this, citing the Fraunhofer Institute and Skyhook Wireless. This technology is built into every iPhone and iPad.
Actually, I think you should try using an algorithm like the GPS one (see Wikipedia). Of course you can simplify it according to your needs, for example:
you need to install, on every item that should broadcast its position (the navigation signal), an application that actually does it
you should use a different channel for every single item to make sure you don't generate collisions (this also depends on how often you broadcast the signal)
So if you place at least 4 broadcasters, every client can triangulate and calculate its own position. Naturally the broadcasters should be as similar as possible in their response.
By the way, these are just ideas.

Resources