My BH1750 I2C light sensor gives me a reading in lux, but I need a lumen value.
From what I read, I just multiply the lux reading by the surface area of the sensor to get the lumen value.
But from the datasheet (page 6) I get a very small surface area of 0.25 mm by 0.3 mm. That doesn't seem right. What am I doing wrong?
I'm getting a reading of about 8,000 lx on this cloudy afternoon, which should be somewhere around 600 lumens.
You seem to have a wrong understanding of photometric quantities. Let me try to straighten this out with an analogy: consider a fountain that emits water. This fountain will represent our light source.
The total amount of water that the fountain emits can be measured as m³/s (cubic meters per second). This is a characteristic of the fountain, which could be called the water power. Going back to photometry, this power is equivalent to the luminous flux, which is measured in lumen. Therefore, the luminous flux describes how much light a light source emits. You can restrict this definition to a given set of directions (e.g., measure the luminous flux of a light bulb only in a downward cone). This will give you the total amount of light that travels in that cone. For the fountain example, this can be done equivalently. Just measure the water emitted into a given cone. The unit is still m³/s or lumen.
Now, let us not only consider the fountain (as the light source) but also the surrounding surfaces. We can pick an arbitrary point on the surrounding surface (a point with no area) and measure how much water/light arrives at this point. This might be a bit hard to imagine because it is a differential quantity. You can approximate this value by measuring the amount of water that arrives in a small neighborhood of the point and dividing by the area of that neighborhood. This is actually what your sensor is doing. The resulting unit is m³/s/m² (cubic meters per second per square meter), or for the photometric case lm/m² (lumen per square meter), which is the definition of lux (the unit of illuminance). Therefore, different points can have different illuminance. In particular, points far away from the light source usually have a smaller illuminance. You can calculate the total luminous flux by integrating the illuminance over the entire surrounding surface. This is equivalent to measuring the amount of water at very many small surface pieces around the fountain (i.e. illuminance multiplied by area) and summing them up.
With this background knowledge, we see that it does not make sense to convert lux to lumens. They measure completely different things. Intuitively, illuminance tells you how much light shines at a given point, which is usually what you want. What you did (by multiplying the illuminance by the sensor area) is calculate the total luminous flux that arrives at the sensor (the total amount of water at a given surface patch). Naturally, this measure grows as your sensor gets bigger (there will be more light arriving at the surface; or equivalently, as you consider bigger and bigger patches around the fountain, you will collect more and more water). Therefore, it also does not make sense to state that 8 klx should be 600 lm.
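To see why multiplying by the sensor area cannot recover the 600 lm you expected, here is a quick numeric sketch using the 8,000 lx reading and the 0.25 mm × 0.3 mm active area from the question:

```python
# Luminous flux arriving at the sensor's tiny active area.
illuminance_lx = 8000          # measured illuminance, lm/m^2
area_m2 = 0.25e-3 * 0.3e-3     # 0.25 mm x 0.3 mm sensor area, in m^2

flux_lm = illuminance_lx * area_m2
print(f"{flux_lm:.4g} lm")     # prints "0.0006 lm" -- the flux through the
                               # sensor, not the flux emitted by the source
```

So the sensor only intercepts about 0.6 millilumens; the "600 lumens" intuition refers to the flux of a light source, which a point measurement of illuminance cannot give you.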
Maybe somebody has some links about determining distance when the scanners are more than 15 meters apart?
All the articles I found describe how to determine the distance, but for example in a 5 m by 5 m room with 4 scanners, or with the scanners even closer together. I want to know what accuracy I can expect when the scanners are further apart.
Unfortunately beacon distance estimates are wildly inaccurate at distances over a few meters. It is nearly impossible to discern between beacons 15 meters and 25 meters away.
Why? Because radio signals get weaker with distance, and at distances near the maximum range of Bluetooth beacons (typically 20-40 meters in home/school/office indoor environments with walls and obstructions), the signal level nears the noise floor. There is very little absolute difference between the signal level of a beacon at 25 meters and one at 15 meters. By looking for such tiny differences in signal you are mostly seeing error and non-distance influences on the signal measurement.
The best you can do with a distance calculation is determine that the beacon is likely far away (e.g. > 10 meters). Once you know it is far away, determining how far is beyond the limits of the technology.
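As a rough illustration of why 15 m and 25 m look nearly the same, here is a sketch using the standard log-distance path-loss model. The reference RSSI of -59 dBm at 1 m and the path-loss exponent n = 2 are assumed illustration values, not measurements from the question:

```python
import math

def rssi_at(distance_m, rssi_at_1m=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI falls off by 10*n*log10(d)."""
    return rssi_at_1m - 10.0 * n * math.log10(distance_m)

diff = rssi_at(15) - rssi_at(25)
print(f"{diff:.1f} dB")  # ~4.4 dB -- easily swamped by indoor multipath
                         # fading, which commonly varies by 10 dB or more
```

A difference of roughly 4 dB between 15 m and 25 m is smaller than the typical moment-to-moment noise on an indoor RSSI reading, which is why the two distances are practically indistinguishable.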
I need to make a simulation to see which areas would be affected if the sea level rises by X meters. Could anyone give me tips on where to start? I've searched for tools embedded in the Google Maps API but didn't find a workaround.
The idea is to create a function such as this:
isAffected <- function(coordinate, metersRised) {
  # return TRUE if the coordinate is affected, FALSE otherwise
}
Thanks in advance!
First reaction is I can't see there being any quick straightforward solution with off the shelf R libraries/data sets on top of which to build a function like that. Second is wondering if you'd like to model it or rely on already developed products, or something in the middle. The most rigorous would be applying a hydrodynamic model and the other bookend is sampling someone else's grid of anticipated results.
Just for context: for river levels affected by sea level rise near the coast, you may want to consider variable river stages if they vary quite a bit. If the rivers are running high due to recent storms or snowmelt, that will worsen the flooding caused by sea level rise alone. So maybe you could assume a limited number of river heights (say rainy season - high, dry season - low). Tides complicate things too, as do storms and storm surge - basically above-average ocean heights due to temporarily very low pressure. An example worst-case scenario combining those three components: how much of a city or regional coastline (say New Orleans or the Australian coast) would be flooded during a storm surge, at high tide, with the local river very full from spring snowmelt, and 5 feet of extra sea level added? That's a lot of data to consider - e.g., you may want some sort of x,y,z data for those river height assumptions. Lots of cities have inundation maps from which you can get those river stage elevations. The bigger the sea level rise assumption, the less the rivers might matter - e.g., a huge sea level rise scenario could easily inundate the whole city as it is today, no matter how high the river is, with the mouth of the river moving miles inland.
Simplifying things, I'd say the most important data will be the digital elevation model (DEM), probably a raster file of x,y,z coordinates, with z being the key piece - the elevation of a pixel at every x,y location above some datum. Higher-resolution DEMs will give much more detailed and realistic inundation. Processed LiDAR data is maybe ideal - very high resolution data that someone else has produced (raw LiDAR data is a burden). There's at least some here for New Zealand - http://opentopo.sdsc.edu/datasets - but I'm not sure of good warehouses for data outside the US.
A basic workflow might be: decide which hydraulic components you'll consider and how many scenarios. E.g., you'll ignore tides by using an average sea level, have just two sea level rise scenarios, and assume the river is always at __ feet, or maybe __ ft and __ ft. Download/build the DEM, then add your river heights to it (not trivial, but searching GIS Stack Exchange is a good start). That gives you a baseline elevation surface to combine the sea water with. With a sea level rise assumption of, say, 10 feet incorporated into another DEM, one raster-math-centric approach is to subtract one from the other; the result will show the newly inundated areas. Once you've done the raster math, you could have a binary x,y grid of flooded/not-flooded cells and apply that final search function: is (x, y) 1 or 0? By far the trickiest part is everything before that. There may be more straightforward or simplified approaches, but the system is so dynamic that the sky is the limit for how complicated your model can be. Here's more information on the river component, which might help visualize the river starting points to which you'll add your sea water scenario(s): https://www.usgs.gov/mission-areas/water-resources/science/flood-inundation-mapping-science?qt-science_center_objects=0#qt-science_center_objects
The raster package might be a good start: it will read in downloaded raster/grid files, like .tif, and also perform the raster math you'd need - adding/subtracting same-size rasters. Or, forgetting all this processing, maybe you could just read in pre-processed rasters of such scenarios done by others, then run your search on them. There are probably a good number available for certain sea level rises, but it gets much trickier if you want to assume both sea level and river elevation scenarios.
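The question asks about R, but as a language-agnostic sketch of the raster-math step (with a made-up 3×3 elevation grid standing in for a real DEM), the flooded/not-flooded mask is just an elementwise comparison:

```python
# Toy DEM: elevations in meters above the current sea-level datum.
dem = [[0.5, 2.0, 9.0],
       [1.5, 3.0, 8.0],
       [4.0, 6.0, 7.0]]

sea_level_rise = 3.0  # the "X meters" scenario

# Raster math: elementwise comparison gives a binary flooded/dry grid.
flooded = [[z <= sea_level_rise for z in row] for row in dem]

def is_affected(row, col):
    """Final lookup: is the cell at (row, col) inundated?"""
    return flooded[row][col]

print(is_affected(0, 0))  # True  (0.5 m is below the new sea level)
print(is_affected(0, 2))  # False (9.0 m stays dry)
```

Note that this naive "bathtub" comparison floods any low-lying cell even if it has no hydraulic connection to the sea; a connectivity filter (e.g. a flood fill outward from coastal cells) is one common refinement.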
I have an Ogre3D application and I would like to render a surface that represents the water with waves.
I think I am not the only one that has this purpose, so I was looking for an example to follow.
I imagine that if I want to create a water surface and move it like a wave, I have to create a surface with many vertices (depending on the precision I want) and then control the height of each vertex.
As the water surface will be quite big, I think it will take a long time to render, so I was wondering whether it is better to render it per-vertex or with NURBS. Or is there a better way?
There's an Ocean example included in the Ogre distribution that you can use as a starting point. I don't remember if it uses any LOD system, but it has quite nice random waves and a Fresnel shader.
NURBS won't help you much, as there's no easy way to push them to the GPU. They're good for some modelling tasks, but in the end you need to convert them to 'real' geometry.
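A minimal sketch of the per-vertex approach the question describes: displace a grid of heights each frame with a sum of sine waves. The wave amplitudes and frequencies here are arbitrary illustration values; in Ogre you would typically do this in a vertex shader, as the Ocean sample does:

```python
import math

def wave_height(x, z, t):
    """Sum of two sine waves travelling in different directions."""
    h = 0.3 * math.sin(0.8 * x + 1.2 * t)
    h += 0.15 * math.sin(0.5 * z - 0.9 * t)
    return h

def update_heights(n, spacing, t):
    """Recompute an n x n grid of vertex heights for frame time t."""
    return [[wave_height(i * spacing, j * spacing, t)
             for j in range(n)] for i in range(n)]

grid = update_heights(16, 0.5, t=1.0)  # 16x16 vertices, 0.5 units apart
```

Moving this evaluation onto the GPU (and adding LOD for distant water) is what keeps a large surface cheap, which is why the per-vertex mesh, not NURBS, is the practical route.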
So I am working on a simulation of the solar system and ran into a roadblock...
In reality, the speed of the Moon relative to the Earth is much slower than that of the Earth relative to the Sun. However, the Moon completes its orbit much more quickly because it has to travel a much shorter distance. The Moon orbits the Earth about 13 times in 1 year.
In my simulation however, the moon gets maybe 2 orbits in a year...
I've checked the speeds against Wikipedia and they are correct.
The only difference is that I scale everything, making me suspect that that's the cause.
All distances are divided by 100 000 and all speeds are divided by 1000.
this.angle += (speed * deltatime);
this.x = this.semi_major_axis * Math.cos(this.angle) + this.parent.x + this.focalX;
this.y = this.semi_minor_axis * Math.sin(this.angle) + this.parent.y + this.focalY;
Speed is the speed according to Wikipedia. (29.78 km/s for earth and 1.022 km/s for the moon)
Parent in this case means the object it is orbiting (in case of the earth, it's the sun. In case of the moon, it's the Earth)
focalX and focalY are the offset from the planet.
Speed and the 2 axis values are already scaled at this point.
Am I wrong in the manner of the scale? Am I completely missing something obvious? Am I just doing it completely the wrong way?
Since speed is distance/time (e.g. kilometres/second), when you scale speed by 1000 and distance by 100 000 you have, whether you know it or not, scaled time by 100. Are you sure that you have taken this into account in the rest of your calculations?
And yes, you are approaching this entirely the wrong way. If you were building a mechanical simulator you would want to scale distances quite early in the process, but in a numerical simulator, why scale them at all? Just work in the original units.
Since you don't have a computer screen which is several AU (astronomical units) across, you might have to scale the numbers for imaging but most graphics systems will do that for you at some point in the pipeline.
I'd say you should go through the exercise of non-dimensionalizing the original equation of motion, similar to what people do with Navier-Stokes equation for fluids (it's a good example to look for). You'll see that non-dimensional groupings should turn up, like the Prandtl and Reynolds numbers for fluids, that will give you meaningful insight into the problem AND make your numerical solution more tractable.
I don't think it's the scale: the simulation will cover those distances 100 times quicker than is accurate (big drop in distance, small drop in speed), but that should be uniform across the board, so the Earth and Moon will both speed up by the same amount. I'd look at the base speeds again and make sure they're correct, and also check your algorithm that calculates distance travelled.
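One concrete thing to check (an assumption about the rest of the code, since only a fragment is shown): `this.angle` is an angle, so it should be advanced by an angular velocity, not by the linear orbital speed from Wikipedia. Dividing each body's speed by its orbit radius (the mean radii below are standard values, not from the question) gives the right period ratio:

```python
# Linear orbital speeds (km/s) from the question, mean orbit radii (km).
earth_v, earth_r = 29.78, 1.496e8   # Earth around the Sun
moon_v, moon_r = 1.022, 3.844e5     # Moon around the Earth

# Angular velocity (rad/s) is linear speed divided by orbit radius.
earth_omega = earth_v / earth_r
moon_omega = moon_v / moon_r

print(moon_omega / earth_omega)  # ~13.4: the Moon sweeps angle ~13x
                                 # faster, i.e. ~13 orbits per year
```

Using the raw speeds directly as angle increments makes the Moon's angle advance about 29 times slower than the Earth's (29.78 vs 1.022), which matches the "maybe 2 orbits in a year" symptom.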
I would like to create a digital (square) signal on my sound card. It works great if I generate high frequencies. But, since I can't output DC on a sound card, for lower frequencies the resulting digital bits will all slowly fade to 0.
This is what the soundcard's high pass does to my square wave:
http://www.electronics-tutorials.ws/filter/fil39.gif
What's the mathematical function of a signal, that, when passed through a high pass will become square?
Ideally, the solution is demonstrated in gnuplot.
The sound card cuts out the low frequencies in the waveform, so you need to boost those by some amount in what you pass to it.
A square wave contains many frequencies (see the section on the Fourier series here). I suspect the easiest method of generating a corrected square wave is to sum a Fourier series, boosting the amplitudes of the low frequency components to compensate for the high-pass filter in the sound card.
In order to work out how much to boost each low frequency component, you will first need to measure the response of the high-pass filter in your soundcard, by outputting sine waves of various frequencies but constant amplitude, and measuring for each frequency the ratio r(f) of the amplitude of the output to the amplitude of the input. Then, an approximation to a square wave output can be generated by multiplying the amplitude of each frequency component f in the square-wave Fourier series by 1/r(f) (the 'inverse filter').
It's possible that the high-pass filter in the soundcard also adjusts the phase of the signal. In this case, one might be better off modelling the high pass as an RC filter, (which is probably how the soundcard is doing the filtering), and invert both the amplitude and phase response from that.
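A sketch of the inverse-filter idea under the RC assumption. The cutoff frequency fc = 20 Hz and fundamental f0 = 50 Hz are made-up illustration values; a first-order RC high-pass has the amplitude response r(f) = (f/fc) / sqrt(1 + (f/fc)²):

```python
import math

fc = 20.0   # assumed high-pass cutoff of the sound card (Hz)
f0 = 50.0   # fundamental of the desired square wave (Hz)

def r(f):
    """Amplitude response of a first-order RC high-pass filter."""
    x = f / fc
    return x / math.sqrt(1.0 + x * x)

def preemphasized_square(t, harmonics=15):
    """Square-wave Fourier series with each harmonic boosted by 1/r(f)."""
    s = 0.0
    for k in range(1, harmonics + 1, 2):   # square waves have odd harmonics
        f = k * f0
        amp = (4.0 / math.pi) / k          # ideal square-wave coefficient
        s += (amp / r(f)) * math.sin(2.0 * math.pi * f * t)
    return s
```

After the soundcard's filter scales each harmonic back down by r(f), what comes out is the ideal partial-sum square wave. As the answer notes, this ignores the filter's phase shift, which a full RC model would also invert.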
Some of the previous answers have correctly noted that the high-pass filter (the AC coupling capacitor on the soundcard's output) is what prevents the low-frequency square waves from "staying on", so they decay quickly.
There is no way to completely beat this filter from software or it wouldn't be there, now would it? If you can live with lower amplitude square waves at the lower frequencies, you can approximate them by sending out something like a triangle wave. From a transient analysis perspective, the theory of operation here is that as the coupling capacitor is discharging (blocking DC) you are increasing its bias voltage to counteract that discharge thus maintaining the square wave's plateau for a while. Of course you eventually run out of PCM headroom (you can't keep increasing the voltage indefinitely), so a 24-bit card is better in this respect than a 16-bit one as it will give you more resolution. Another, more abstract way to think of this is that the RC filter works as a differentiator, so in order to get the flat peaks of the square wave you need to give it the flat slopes of the triangle wave at the input. But this is an idealized behavior.
As a quick proof of concept, here's what a 60 Hz ±1 V triangle signal becomes when passed through a 1 µF coupling cap into a 1 kΩ load; it approximates a ±200 mV square wave.
Note that the impedance/resistance of the load matters quite a bit here; if you lower it to, say, 100 Ω, the output amplitude decreases dramatically. This is how coupling caps block DC on speakers/headphones, because those devices have much lower impedance than 1 kΩ.
If I can find a bit more time later today, I'll add a better simulation, with a better shaped stimulus instead of the simple triangle wave, but I can't get that from your average web-based circuit simulator software...
Well, if you're lucky you can get one of those $0.99 USB sound cards where the manufacturer has cut corners so much that they didn't install coupling caps. https://www.youtube.com/watch?v=4GNRzwfP7RE
Unfortunately, you cannot get a good approximation of a square wave. Sound hardware is intentionally slew-rate limited and would not be able to produce a falling or rising edge beyond its intended frequency range.
You can approximate a badly deformed square wave by alternating a high and low PCM code (+max, -max) every N samples.
You can't actually produce a true square wave, because it has infinite bandwidth. You can produce a reasonable approximation of a square wave though, at frequencies between say 10 Hz and 1 kHz (below 10 Hz you may have problems with the analogue part of your sound card etc, and above around 1 kHz the approximation will become increasingly inaccurate, since you can only reproduce a relatively small number of harmonics).
To generate the waveform, the sample values will just alternate between +/- some value, e.g. full scale, which would be -32767 and +32767 for a 16-bit PCM stream. The frequency is determined by the period of these samples. E.g. for a 44.1 kHz sample rate, if you have say 100 samples of -32767 and then 100 samples of +32767, i.e. a period of 200 samples, then the fundamental frequency of your square wave will be 44.1 kHz / 200 = 220.5 Hz.
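The alternating-samples scheme above can be sketched as follows (the sample rate and 100-sample half-period are the values from the answer's example):

```python
SAMPLE_RATE = 44100
HALF_PERIOD = 100          # samples per half-cycle
FULL_SCALE = 32767         # 16-bit PCM limit

def square_samples(n):
    """n PCM samples of a square wave, flipping every HALF_PERIOD samples."""
    return [FULL_SCALE if (i // HALF_PERIOD) % 2 == 0 else -FULL_SCALE
            for i in range(n)]

buf = square_samples(SAMPLE_RATE)       # one second of audio
print(SAMPLE_RATE / (2 * HALF_PERIOD))  # 220.5 -- fundamental frequency (Hz)
```

Writing `buf` out as 16-bit PCM (e.g. with the standard `wave` module) gives the approximated square wave; as the answer says, the soundcard's bandwidth limits how faithful the edges can be.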
I found an application that I built on:
http://www.blogger.com/blogger.g?blogID=999906212197085612#editor/target=post;postID=7722571737880350755
You can generate the format you want and even the pattern you need.
The code uses SlimDX.