I am trying to create a representation of an LED (i.e. light bulb) which emits light of varying colors in all directions. Additionally, it must do so independently of other LEDs on the canvas such that each diode can have its own color.
When I first found Babylon, I thought it was logical to simply use a PointLight -- an LED is just a point which emits light -- but it seems that a mesh must reflect the light in order for it to be visible. Working under that assumption, I have tried to light a sphere with a DirectionalLight and a HemisphericLight, but neither lights only a single sphere while lighting every surface of that sphere.
Is there an easy solution here or do I need to put multiple lights of some kind on each "bulb"?
The best solution is to use the emissiveColor material property to give a "bulb" object the illusion of reflecting a light which does not exist. This demo shows the effect.
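A minimal sketch of the idea (Babylon.js StandardMaterial; the makeLed helper, sizes and colors are just illustrative):
const makeLed = (scene, color, position) => {
    const bulb = BABYLON.MeshBuilder.CreateSphere("led", { diameter: 0.2 }, scene);
    bulb.position = position;
    const mat = new BABYLON.StandardMaterial("ledMat", scene);
    mat.emissiveColor = color;                  // e.g. new BABYLON.Color3(1, 0, 0) for a red LED
    mat.diffuseColor = BABYLON.Color3.Black();  // so scene lights do not wash the "glow" out
    bulb.material = mat;
    return bulb;
};
Each bulb gets its own material, so every diode can have its own color independently of the others.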
Thank you to #Temechon and #Wingnut for providing the tip on this forum post!
My use-case is to let a ball bounce and come towards the camera (I've been able to do that with a simple dynamic-body sphere on a static-body grid). However, rather than letting it roll to a position where it loses its velocity (or momentum), is there a way to stop it at a desired point? I tried placing an (invisible) hurdle object but it rolls back. I would like it to remain stationary once it reaches the desired point. Thanks
You can stop the dynamic-body by zeroing a few body attributes:
let body = el.body; // el = the A-Frame entity holding the dynamic-body component
body.velocity.set(0, 0, 0);        // linear velocity
body.angularVelocity.set(0, 0, 0); // rotation
body.vlambda.set(0, 0, 0);         // CANNON.js solver velocity correction
body.wlambda.set(0, 0, 0);         // CANNON.js solver angular velocity correction
Working example here.
However, if your ball is on a slope, the physics engine will, properly, slowly accelerate it.
If you want it to stop no matter what (defying the laws of CANNON.js physics), then either remove the dynamic-body component, or swap it with a static-body.
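A hedged sketch of that swap (assuming the aframe-physics-system components and that you detect the stop point yourself):
// once the ball reaches the desired point:
el.body.velocity.set(0, 0, 0);        // kill any remaining motion first
el.body.angularVelocity.set(0, 0, 0);
el.removeAttribute('dynamic-body');   // take the ball out of the simulation
el.setAttribute('static-body', '');   // keep it as an immovable collider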
First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I’d like the interface to detect several (up to 40) objects lying on it, and to detect when those objects are moved on its surface. It is not important what the actual object on the surface is (e.g. “bottle”) or what color it has – only the shape and the placement of the object are of interest (e.g. “circle”).
So far I’m using a webcam connected to my computer and Processing’s blob detection to find the objects on the surface of the interface (see picture 1). This has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because this distracts the user’s attention. Actually, the surface should be completely dark.
Whenever I reach with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because the depth sensing does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea of an alternative device/technology to detect the objects? It would be nice if it worked well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces; these are much better for segmenting the required area.
I do recommend you check out https://github.com/atduskgreg/opencv-processing. This interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
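In practice opencv-processing does the colour-space conversion for you; this plain-JS sketch (purely illustrative) just shows why HSV helps – hue and saturation stay roughly stable while brightness changes, so a threshold on them survives lighting variation:
function rgbToHsv(r, g, b) {           // r, g, b in 0..255
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  let h = 0;
  if (d !== 0) {
    if (max === r) h = ((g - b) / d) % 6;
    else if (max === g) h = (b - r) / d + 2;
    else h = (r - g) / d + 4;
    h *= 60; if (h < 0) h += 360;
  }
  const s = max === 0 ? 0 : d / max;
  return { h, s, v: max };             // threshold on h (and s) for colour-based detection
}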
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera setup (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers and you'll differentiate the objects by ID, but you'll also be able to get position/rotation/etc., and your hands will not be part of that.
I have successfully registered two point clouds of the same scene obtained from different camera positions. Color values are different due to changes in light conditions between the two positions. I would like to know how to perform a smart color blending between two aligned point clouds in order to obtain a uniform color across the global model. Any idea?
I enclose a capture where you can see how color is darker in the cloud on the right.
I was trying to adapt image blending approaches to 3D point clouds, but it's not straightforward at all, so I applied an easier solution that solved my problem for the moment.
Since texture changes are mainly caused by changes in scene lighting due to the different camera positions, theoretically just an exposure compensation between both clouds should provide good results. I've fixed my problem by extending a standard approach of 2D exposure compensation to the 3D scenario. Concretely, just a gain compensation (point 6 of the paper) is enough if the lighting difference is low enough.
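A rough sketch of what that gain compensation boils down to (plain JS, everything here is illustrative; the clouds are arrays of {r,g,b} points and the overlapping point pairs are assumed to be known from the registration):
function gainCompensate(cloudB, overlapA, overlapB) {
  const mean = pts => pts.reduce((s, p) => s + (p.r + p.g + p.b) / 3, 0) / pts.length;
  const gain = mean(overlapA) / mean(overlapB);   // brightness ratio over the shared region
  return cloudB.map(p => ({
    r: Math.min(255, p.r * gain),
    g: Math.min(255, p.g * gain),
    b: Math.min(255, p.b * gain),
  }));
}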
I am making a 2D (top-down) horror game in GameMaker. Each player has a flashlight which drains over time. The flashlight uses surfaces to draw light, and the cone gets smaller over time. I would like the flashlight to act like a real flashlight instead of shining through walls. Is there any way to do this? Picture of what I want it to look like
how are you currently drawing your flashlight?
I would recommend not drawing a flashlight sprite and instead filling a surface with black (to act as darkness) and cutting your lights out of that.
Then you can use the collision_line function to sweep in an arc from your player and get either the point where it hits an object or whether the line extends past your flashlight range. Then store all those vertices and draw a primitive with blending to act as the flashlight; a rough sketch of the idea follows.
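A rough sketch of that sweep (plain JS rather than GML; castRay stands in for collision_line and is assumed to return the distance to the first wall, or Infinity if nothing is hit):
function flashlightFan(px, py, facing, coneAngle, range, castRay) {
  const points = [[px, py]];                // fan centre = player position
  const steps = 32;                         // number of rays across the cone
  for (let i = 0; i <= steps; i++) {
    const a = facing - coneAngle / 2 + (coneAngle * i) / steps;
    const d = Math.min(castRay(px, py, a), range);  // clip the ray at the first wall
    points.push([px + Math.cos(a) * d, py + Math.sin(a) * d]);
  }
  return points;                            // draw as a triangle fan cut out of the darkness surface
}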
Hope that makes sense, otherwise I swear I've seen some posts on the gamemaker forums on this, good luck!
I am trying to render as realistically as possible a scene in which a point light hits an object and bounces off with the same angle wrt the normal of the face (angle of incidence = angle of reflection) and illuminates the scene elsewhere.
Now, I know reflection in threejs is normally handled with a CubeCamera material, as per the examples I found online, but it doesn't quite apply to my case, for I may be observing the scene from a point from which I cannot see the reflection of the object on the mirror-like surface of another one.
Consider this example prototype I'm working on: if the box that is protruding from the wall in the scene had a mirror-like material (accomplished with a CubeCamera), I wouldn't be able to see the green cube's reflection on the bottom face unless the camera was at a specific position. In real life, however, if an object illuminated by a light source passes in the vicinity of another one, it will partly light that object as if it were a light source itself (depending on the object's reflectivity, of course), and this phenomenon should be visible from any point of view from which the object receiving the indirect lighting is visible.
Hence I came up with the idea of adding a PointLight to the cube, but this of course produces undesirable effects on the surroundings.
I will try to illustrate my goal with the following sequence:
1) Here, the far side of what I will henceforth refer to as balcony is correctly dark, while the areas marked with a red 'x' are the consequence of the cube having a child PointLight which shines in all directions.
2) Here, the balcony's far face is still dark and the bottom one is receiving even more light as the cube passes by, which is desirable, but the wall behind the cube should actually be dark (I haven't added shadows yet, I first want to get the lighting right), as well as the ground beneath it and the lamp post.
3) Finally, when the cube has passed the balcony, it's just plain wrong for the balcony's side and bottom face to be illuminated, for we all know that a reflected ray does not bounce back the way it came from. The same applies to the lamp post.
Now I realize that all these mistakes occur because the cube emits light itself; what I'm hoping you can help me with is determining a way to produce physically accurate reflected rays.
I would like to avoid using ambient light or other hacks to simulate real-life scenarios and stick to physics as much as possible; I suspect what I want to achieve is very computationally heavy to render, let alone animate in a real-time use case, but that's not an issue for I'm merely trying to develop a proof-of-concept, not something that should necessarily perform fast.
From what I gather, I should probably be writing custom vertex and fragment shaders for the materials receiving indirect illumination, right? Unfortunately I wouldn't know where to begin, can anyone point me in the right direction? Cheers.
If you do not want to go all the way to volumetric rendering, then you have 3 options (that I know of):
ray-tracing
You have to use ray-trace rendering (back ray-tracing) to achieve this. This will also cover shadows, transparent materials, reflected illumination and much more if coded properly. Unless you also want precise atmospheric scattering, this is the way.
Back ray-tracing casts one (or 3) ray(s) per screen pixel. It is much faster but not as precise... (still precise enough).
Forward ray-tracing casts one ray per 3D angular unit (steradian) of space per light source. It is slow but precise (if the ray density is high enough).
If the cast ray hits any obstacle, its color is changed (according to the obstacle's properties) and a new ray is cast as the reflected light ray. If the material is transparent, a refracted ray is also cast... Each hit or refraction reduces the light intensity, so you stop when the intensity falls below some threshold or at some recursion depth (limit the max number of reflections/refractions per ray) to avoid infinite loops; this is also how you trade quality for performance.
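The reflected ray spawned at each hit is just the classic mirror formula r = d - 2(d·n)n; a tiny self-contained sketch (plain JS, vectors as {x,y,z} objects, names illustrative):
function reflect(d, n) {                    // d = incoming direction, n = unit surface normal
  const k = 2 * (d.x * n.x + d.y * n.y + d.z * n.z);   // 2 * dot(d, n)
  return { x: d.x - k * n.x, y: d.y - k * n.y, z: d.z - k * n.z };
}
// angle of incidence = angle of reflection falls out of this formula directly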
standard polygon rendering
With this approach (I think you are using it right now) you have to improvise. The reflection and illumination effects can be done similarly to shadowing techniques: for each reflective surface you have to render the scene in the reflected direction. The same is done with shadows, except that you just render from the light's direction or use a shadow map instead. If you have an insane number of reflective surfaces then this approach is not the way; also, to achieve reflections of reflections you have to render recursively, making it multiple rendering passes per polygon, which is also insane.
cubemap
You can use a cube map per object. It is similar to option 2, but the insanity is done just once while generating the cubemaps instead of per frame... If you have too many objects then this is also not the way; you can use cube maps only for objects with reflective surfaces to keep it manageable. Also, if the objects are moving then you have to re-generate the cubemaps once in a while...
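A hedged three.js sketch of the per-object cube map (assuming a reasonably recent three.js; names and resolution are illustrative, and on older versions the CubeCamera constructor takes a resolution instead of a render target):
const cubeRenderTarget = new THREE.WebGLCubeRenderTarget(256);
const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeRenderTarget);
scene.add(cubeCamera);

const mirrorMat = new THREE.MeshStandardMaterial({
  envMap: cubeRenderTarget.texture,   // the object reflects whatever the cube camera sees
  metalness: 1,
  roughness: 0,
});
reflectiveMesh.material = mirrorMat;

// re-generate once in a while (or per frame for moving objects):
cubeCamera.position.copy(reflectiveMesh.position);
reflectiveMesh.visible = false;       // don't capture the object itself
cubeCamera.update(renderer, scene);
reflectiveMesh.visible = true;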