VEML6075 UVA and UVB measuring ranges - Arduino

I use a VEML6075 sensor to read UVA, UVB, and the UV index. The UV index is calculated automatically by the chip and is also sent over I2C to my MCU.
According to the following datasheets:
Vishay Datasheet
Adafruit VEML6075
Vishay Application Note
It can present the raw data for UVA and UVB. These values are given in counts per µW/cm², and the scale depends on the measured channel.
My problem is that I don't know the measuring range of this raw data:
uint16_t uva = my_veml6075.getUVA();
uint16_t uvb = my_veml6075.getUVB();
Does anybody have the range for these two values?

There is no fixed upper limit; at some point your sensor will simply saturate.
As the datasheet states, this sensor is intended for solar irradiation. You should not use focusing optics, of course, so in the intended use case the sensor will probably not saturate.
It returns the UVA and UVB levels as 16-bit values.
The value is returned in counts per µW/cm².
To get the UV index from an irradiance value, divide it by 25 mW/m².
The highest UV index ever measured was 43.3.
Typically the values range between 0 and 12; that's a maximum of 300 mW/m².
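To make that arithmetic concrete, here is a rough Arduino-style sketch of the conversion, reusing the my_veml6075 object from the question. The two responsivity constants are placeholders for illustration only (take the real counts-per-µW/cm² figures for your integration time from the Vishay documents), and simply averaging the two channels is a crude stand-in for the compensated, erythemally weighted calculation in the application note:

const float UVA_COUNTS_PER_UW_CM2 = 0.93f;   // placeholder, check the datasheet
const float UVB_COUNTS_PER_UW_CM2 = 2.10f;   // placeholder, check the datasheet

void printUvLevels() {
  uint16_t uvaCounts = my_veml6075.getUVA();
  uint16_t uvbCounts = my_veml6075.getUVB();

  // counts -> µW/cm², then µW/cm² -> mW/m² (1 µW/cm² = 10 mW/m²)
  float uva_mW_m2 = (uvaCounts / UVA_COUNTS_PER_UW_CM2) * 10.0f;
  float uvb_mW_m2 = (uvbCounts / UVB_COUNTS_PER_UW_CM2) * 10.0f;

  // Crude index estimate: average the channels and divide by 25 mW/m²
  float uvIndex = 0.5f * (uva_mW_m2 + uvb_mW_m2) / 25.0f;

  Serial.print("UVA mW/m2: ");         Serial.println(uva_mW_m2);
  Serial.print("UVB mW/m2: ");         Serial.println(uvb_mW_m2);
  Serial.print("UV index (approx): "); Serial.println(uvIndex);
}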

Related

Tap Detection data conversion for ADXL345 sensor

I am interfacing an ADXL345 sensor. Using the datasheet as well as other libraries, I am able to understand the setup of the tap threshold.
I need to confirm something in this example code:
// Set values for what is considered a TAP and what is a DOUBLE TAP (0-255)
adxl.setTapThreshold(50); // 62.5 mg per increment
adxl.setTapDuration(15); // 625 μs per increment
adxl.setDoubleTapLatency(80); // 1.25 ms per increment
adxl.setDoubleTapWindow(200); // 1.25 ms per increment
where the user sets up values according to the scale factors mentioned in the datasheet.
I have a doubt here and need to clear it up:
are the values given for the tap setup decimal or hexadecimal?
I also need to know the conversion formula used to set up the threshold.
The ADXL345 sensor I am using has a maximum resolution of 13 bits, so I want to set the value according to 13 bits.
Any suggestion or advice regarding this will be very helpful for my work on interfacing the ADXL345 sensor with an Arduino.
The values are decimal values - you can see in the comments how they relate to actual physical values:
adxl.setTapThreshold(50); // 62.5 mg per increment -> 62.5mg * 50 = 3.125g
adxl.setTapDuration(15); // 625 μs per increment -> 625us * 15 = 9.375ms
adxl.setDoubleTapLatency(80); // 1.25 ms per increment -> 1.25ms * 80 = 100ms
adxl.setDoubleTapWindow(200); // 1.25 ms per increment -> 1.25ms * 200 = 250ms
So to work out the value you need for a threshold of X g, use the formula
v = X / 62.5 mg = X / 0.0625
For example, for a threshold of 5g:
adxl.setTapThreshold(80); // Because 5 / 0.0625 = 80
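If you prefer to keep the physical values in your code, a small pair of helpers can do that arithmetic for you. The function names below are made up for this sketch; only the scale factors come from the datasheet comments above:

uint8_t tapThresholdFromG(float g)    { return (uint8_t)(g  / 0.0625f + 0.5f); } // 62.5 mg per increment
uint8_t tapDurationFromMs(float ms)   { return (uint8_t)(ms / 0.625f  + 0.5f); } // 625 µs per increment
uint8_t doubleTapTimeFromMs(float ms) { return (uint8_t)(ms / 1.25f   + 0.5f); } // 1.25 ms per increment

// Usage, e.g. a 5 g threshold, a 9.375 ms tap duration and a 100 ms latency:
adxl.setTapThreshold(tapThresholdFromG(5.0f));           // 5 / 0.0625 = 80
adxl.setTapDuration(tapDurationFromMs(9.375f));          // 9.375 / 0.625 = 15
adxl.setDoubleTapLatency(doubleTapTimeFromMs(100.0f));   // 100 / 1.25 = 80

The + 0.5f rounds to the nearest increment before the value is truncated to the 8-bit register range.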

Understanding hex output of raw 8-bit 8000Hz PCM sine wave

Using Audacity, I generated a 1 Hz sine wave with a 1-second length and 1.0 amplitude. This resulted in the expected waveform.
With the Audacity sample rate set to 8000 Hz, I then exported the audio as RAW (header-less) signed 8-bit PCM, which resulted in an 8000-byte file (each byte is an 8-bit number between -128 and +127).
Opening the .raw file in HxD and setting 'Bytes per row' to 1 and the offset to decimal shows 8000 lines, each showing the 8-bit number in hex.
I can see that there are ten 0s, then ten 1s, then ten 2s and so on, but once it gets to 16 there are eleven 16s, then ten 17s and ten 18s. My question is: why are there ten of some numbers and eleven of others?
This is just the shape of the sine wave. Near zero the sine rises by about 127·2π/8000 ≈ 0.0997 quantization steps per sample, i.e. roughly 10.03 samples per step, so you mostly get runs of 10 equal values with an occasional run of 11. As you get closer to the maximum the curve is flatter, so you get even longer runs of equal sample values.
The left column can't be hex; it must be the sample time offset. The right column is the measured value. What are the values in the right column when they are greater than 9?
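To see why the runs are sometimes 10 and sometimes 11 samples long, here is a quick standalone C++ check that quantizes a 1 Hz sine sampled at 8000 Hz to signed 8-bit and counts the runs of equal values (it assumes a full-scale amplitude of 127, which may differ slightly from Audacity's exact export scaling):

#include <cmath>
#include <cstdio>

int main() {
    const int    rate = 8000;
    const double pi   = 3.14159265358979323846;
    int prev = 0;   // sin(0) quantizes to 0
    int run  = 0;
    for (int n = 0; n <= 200; ++n) {   // the first ~200 samples show the 10/11 pattern
        int v = (int)std::lround(127.0 * std::sin(2.0 * pi * n / rate));
        if (v == prev) {
            ++run;
        } else {
            std::printf("value %3d repeated %2d times\n", prev, run);
            prev = v;
            run  = 1;
        }
    }
    return 0;
}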

What format is used for the decimals in the DHT11 sensor?

I am using a DHT11 sensor and get the raw bits, where the first 8 bits are the integral part of the humidity, followed by 8 bits for the decimal part of the humidity. The next 8 bits are the integral part of the temperature, followed by 8 bits for the decimal part of the temperature. At the end there is an 8-bit checksum.
I read some datasheets but could not find any information about how the decimals have to be read.
Does anyone know if it is a simple fixed-point 8-bit decimal, or do I have to do it differently?
Any help is appreciated.
According to the DHT11 datasheet, only positive values for humidity and temperature can be returned, so no bit is reserved for the sign.
This is a Q8.8 fixed-point representation (see also https://en.wikipedia.org/wiki/Q_(number_format)).
To translate from this representation to the physical value, divide by 2^8, where 8 is the number of fractional bits.
So for example:
0000 0010 1000 0000 = 640 decimal
640/256 = 2.5 decimal
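A minimal conversion sketch, assuming the Q8.8 reading described above (integral byte followed by the fraction byte in 1/256 steps), could look like this:

float dht11ToFloat(uint8_t integral, uint8_t fraction) {
  uint16_t raw = ((uint16_t)integral << 8) | fraction;  // e.g. 0x02, 0x80 -> 640
  return raw / 256.0f;                                  // 640 / 256 = 2.5
}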

How to efficiently convert a few bytes into an integer between a range?

I'm writing something that reads bytes (just a List<int>) from a remote random number generation source that is extremely slow. Because of that, and because of my personal requirements, I want to retrieve as few bytes from the source as possible.
Now I am trying to implement a method which signature looks like:
int getRandomInteger(int min, int max)
I have two theories about how I can fetch bytes from my random source and convert them to an integer.
Approach #1 is naive. Fetch (max - min) / 256 bytes and add them up. It works, but it's going to fetch a lot of bytes from the slow random number generator source I have. For example, if I want to get a random integer between zero and a million, it's going to fetch almost 4000 bytes... that's unacceptable.
Approach #2 sounds ideal to me, but I'm unable to come up with the algorithm. It goes like this:
Let's take min: 0, max: 1000 as an example.
Calculate ceil(rangeSize / 256), which in this case is ceil(1000 / 256) = 4. Now fetch one (1) byte from the source.
Scale this one byte from the 0-255 range to the 0-3 range (or 1-4) and let it determine which group we use. E.g. if the byte was 250, we would choose the 4th group (which represents the last 250 numbers, 750-1000, of our range).
Now fetch another byte and scale it from 0-255 to 0-250 and let that determine the position within the group. So if this second byte is e.g. 120, then our final integer is 750 + 120 = 870.
In that scenario we only needed to fetch 2 bytes in total. However, it gets much more complex, because if our range is 0-1000000 we need several "groups".
How do I implement something like this? I'm okay with Java/C#/JavaScript code or pseudo code.
I'd also like to keep the result from losing entropy/randomness, so I'm slightly worried about scaling integers.
Unfortunately your Approach #1 is broken. For example, if min is 0 and max is 510, you'd add 2 bytes. There is only one way to get a result of 0: both bytes zero. The chance of this is (1/256)^2. However, there are many ways to get other values, say 100 = 100+0, 99+1, 98+2... so the chance of a 100 is much larger: 101·(1/256)^2.
The more-or-less standard way to do what you want is to:
Let R = max - min + 1 -- the number of possible random output values
Let N = 2^k >= mR, m>=1 -- a power of 2 at least as big as some multiple of R that you choose.
loop
b = a random integer in 0..N-1 formed from k random bits
while b >= mR -- reject b values that would bias the output
return min + floor(b/m)
This is called the rejection method. It throws away randomly selected binary numbers that would bias the output. If max - min + 1 happens to be a power of 2, you'll have zero rejections.
If you have m = 1 and max - min + 1 is just one more than a biggish power of 2, then rejections will be nearly half. In this case you'd definitely want a bigger m.
In general, bigger m values lead to fewer rejections, but of course they require slightly more bits per number. There is a probabilistically optimal algorithm to pick m.
Some of the other solutions presented here have problems, but I'm sorry right now I don't have time to comment. Maybe in a couple of days if there is interest.
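A sketch of the rejection method in C++ (with m = 1 for simplicity; RandomByte() stands in for the slow remote source and is an assumption, not a real API):

#include <cstdint>

uint8_t RandomByte();   // assumed: returns one byte from the remote source

int getRandomInteger(int min, int max) {
    const uint32_t R = (uint32_t)(max - min + 1);   // number of possible outputs (assumed to fit)

    // find k and N = 2^k >= R
    uint32_t k = 0, N = 1;
    while (N < R) { N <<= 1; ++k; }
    const uint32_t bytesNeeded = (k + 7) / 8;

    for (;;) {
        uint32_t b = 0;
        for (uint32_t i = 0; i < bytesNeeded; ++i)
            b = (b << 8) | RandomByte();
        b &= N - 1;                 // keep exactly k random bits
        if (b < R)                  // reject values that would bias the output
            return min + (int)b;
    }
}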
Three bytes together give you a random integer in the range 0..16777215. You can use 20 bits from this value to get the range 0..1048575 and throw away values > 1000000.
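The same idea in code, using the assumed RandomByte() source from the sketch above:

int randomUpToAMillion() {
    for (;;) {
        uint32_t v = ((uint32_t)RandomByte() << 16) |
                     ((uint32_t)RandomByte() << 8)  |
                      (uint32_t)RandomByte();        // 24 random bits, 0..16777215
        v &= 0xFFFFF;                                // keep 20 bits -> 0..1048575
        if (v <= 1000000)                            // throw away values > 1000000
            return (int)v;
    }
}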
range 1 to r
first find the smallest 'a' such that 256^a >= r
get 'a' bytes into array A[]
num = 0
for i = 0 to len(A)-1
    num += A[i] << (8*i)    // i.e. A[i] * 256^i
next
random number = (num mod r) + 1
Your random source gives you 8 random bits per call. For an integer in the range [min,max] you would need ceil(log2(max-min+1)) bits.
Assume that you can get random bytes from the source using some function:
bool RandomBuf(BYTE* pBuf , size_t nLen); // fill buffer with nLen random bytes
Now you can use the following function to generate a random value in a given range:
// needs <limits>, <cmath> and <algorithm>; BYTE/UINT/ULONGLONG are the usual Windows typedefs
#include <limits>
#include <cmath>
#include <algorithm>

// --------------------------------------------------------------------------
// produce a uniformly-distributed integral value in range [nMin, nMax]
// T is char/BYTE/short/WORD/int/UINT/LONGLONG/ULONGLONG
template <class T> T RandU(T nMin, T nMax)
{
    static_assert(std::numeric_limits<T>::is_integer, "RandU: integral type expected");
    if (nMin > nMax)
        std::swap(nMin, nMax);
    if (0 == (T)(nMax - nMin + 1))                    // all range of type T
    {
        T nR;
        return RandomBuf((BYTE*)&nR, sizeof(T)) ? *(T*)&nR : nMin;
    }
    ULONGLONG nRange = (ULONGLONG)nMax - (ULONGLONG)nMin + 1;      // number of discrete values
    UINT nRangeBits = (UINT)ceil(log((double)nRange) / log(2.));   // bits for storing nRange discrete values
    ULONGLONG nR;
    do
    {
        if (!RandomBuf((BYTE*)&nR, sizeof(nR)))
            return nMin;
        nR = nR >> ((sizeof(nR) << 3) - nRangeBits);  // keep nRangeBits random bits
    }
    while (nR >= nRange);                             // ensure value in range [0..nRange-1]
    return nMin + (T)nR;                              // [nMin..nMax]
}
Since you always get a multiple of 8 bits, you can save the extra bits between calls (for example, you may need only 9 bits out of the 16 you fetched). It requires some bit manipulation, and it is up to you to decide whether it is worth the effort.
You can save even more if you use 'half bits': let's assume you want to generate numbers in the range [1..5]. You need log2(5) = 2.32 bits for each random value. Using 32 random bits you can actually generate floor(32/2.32) = 13 random values in this range, though it requires some additional effort.
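One way to save the extra bits between calls is a small bit reservoir; this is only a sketch of the idea, again using an assumed RandomByte() helper for the slow source:

#include <cstdint>

uint8_t RandomByte();   // assumed slow remote source

class BitReservoir {
public:
    // Return the next 'count' random bits (count <= 32) packed into an integer.
    uint32_t getBits(unsigned count) {
        uint32_t out = 0;
        for (unsigned i = 0; i < count; ++i) {
            if (bitsLeft_ == 0) {     // fetch a new byte only when the reservoir is empty
                buffer_   = RandomByte();
                bitsLeft_ = 8;
            }
            out = (out << 1) | (buffer_ & 1u);
            buffer_ >>= 1;
            --bitsLeft_;
        }
        return out;
    }
private:
    uint8_t  buffer_   = 0;
    unsigned bitsLeft_ = 0;
};

Asking it for 9 bits twice consumes only 18 bits (3 bytes) instead of 4 bytes; the 'half bits' idea goes further still, but needs arithmetic-coding-style bookkeeping.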

How to determine the frequencies of FFT result indexes and plot the amplitude/frequency graph

I have a bit of a hypothetical question to understand this concept.
Let's say I captured a mono voice clip with an 8000 Hz sample rate that is 4096 bytes of data.
Feeding the first 512 bytes (16-bit encoding) through an FFT of size 256 will return 128 values, which I convert to amplitude.
So my frequencies for this output are
FFT BIN #1
0: 0*8000/256
1: 1*8000/256
.
.
127: 127*8000/256
So far so good, eh? Now I have 3584 bytes of unprocessed data left, so I perform another FFT of size 256 on the next 512 bytes of data and get the same number of results.
So for this, do I again have frequencies of:
FFT BIN #2:
Example1:
0: 0*8000/256
1: 1*8000/256
.
.
127: 127*8000/256
or
FFT BIN #2
Example2:
128: 128*8000/256
129: 129*8000/256
.
.
255: 255*8000/256
Because I would like to plot this amplitude/frequency graph, but I don't understand whether all these FFT bins should be overlapped onto the same frequencies as in Example 1, or spread out as in the second example.
Or am I trying to do something that is completely redundant? What I want to accomplish is to find the peak amplitude value of every 30-50 ms time frame, to use for comparison with other sound files.
If anyone can clear this up for me, I'd be very grateful.
Your FFT result bins represent the same set of frequencies in every FFT, as in your Example 1, but for different slices of time.
Each FFT will allow you to plot magnitude vs. frequency for about a 32 ms window of time (256 samples at 8000 Hz).
You could also vector-sum the FFT magnitudes together to get a Welch-method PSD (power spectral density) estimate for a longer time frame.
If you want to find the peak amplitude value of every 30-50 ms time frame, you just need to plot the amplitude spectra of the signal in each of those time frames.
Also, if you take an FFT of 256 samples for each frame, you should get 129, not 128, frequency components: the first one is the DC component, and the last one is the Nyquist frequency component.
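To make the framing concrete, here is a small standalone C++ sketch that uses a naive DFT (so it needs no FFT library; with a real FFT the bookkeeping is identical) to process 256-sample frames of a placeholder 440 Hz test signal and print the peak frequency of each ~32 ms frame, which is essentially what you would compare between sound files:

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int    sampleRate = 8000;
    const int    frameSize  = 256;                 // 256 samples = 32 ms per frame
    const double pi         = 3.14159265358979323846;

    // Placeholder input: 2048 16-bit samples (a 4096-byte clip); replace with your data
    std::vector<short> samples(2048);
    for (size_t n = 0; n < samples.size(); ++n)
        samples[n] = (short)(10000 * std::sin(2 * pi * 440.0 * n / sampleRate));

    for (size_t start = 0; start + frameSize <= samples.size(); start += frameSize) {
        int    peakBin = 0;
        double peakMag = 0.0;
        for (int k = 0; k <= frameSize / 2; ++k) {      // bins 0..128 (DC..Nyquist)
            double re = 0.0, im = 0.0;
            for (int n = 0; n < frameSize; ++n) {
                double angle = 2 * pi * k * n / frameSize;
                re += samples[start + n] * std::cos(angle);
                im -= samples[start + n] * std::sin(angle);
            }
            double mag = std::sqrt(re * re + im * im);
            if (mag > peakMag) { peakMag = mag; peakBin = k; }
        }
        std::printf("frame at %5.1f ms: peak near %g Hz\n",
                    1000.0 * start / sampleRate,
                    (double)peakBin * sampleRate / frameSize);
    }
    return 0;
}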
