Tap detection data conversion for ADXL345 sensor - Arduino

I am interfacing an ADXL345 sensor. Using the datasheet as well as other libraries, I am able to understand the setup of the TAP threshold.
I need to confirm something in this example code:
// Set values for what is considered a TAP and what is a DOUBLE TAP (0-255)
adxl.setTapThreshold(50); // 62.5 mg per increment
adxl.setTapDuration(15); // 625 μs per increment
adxl.setDoubleTapLatency(80); // 1.25 ms per increment
adxl.setDoubleTapWindow(200); // 1.25 ms per increment
Here the user sets values that are scaled according to the scale factors mentioned in the datasheet.
My doubt is this: are the values used for the tap setup decimal or hexadecimal?
I also need to know the conversion formula used to set up the threshold.
The ADXL345 sensor I am using has a maximum resolution of 13 bits, so I want to set the value accordingly.
Any suggestions or advice regarding this will be very helpful for interfacing the ADXL345 sensor with an Arduino.

The values are decimal values - you can see in the comments how they relate to actual physical values:
adxl.setTapThreshold(50); // 62.5 mg per increment -> 62.5mg * 50 = 3.125g
adxl.setTapDuration(15); // 625 μs per increment -> 625us * 15 = 9.375ms
adxl.setDoubleTapLatency(80); // 1.25 ms per increment -> 1.25ms * 80 = 100ms
adxl.setDoubleTapWindow(200); // 1.25 ms per increment -> 1.25ms * 200 = 250ms
So to work out the value you need for a threshold of Xg, use the formula
v = X / 62.5mg = X / 0.0625
For example, for a threshold of 5g:
adxl.setTapThreshold(80); // Because 5 / 0.0625 = 80
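Putting the scale factors together, here is a minimal Arduino sketch, assuming the SparkFun ADXL345 library that the snippet above appears to use; the small conversion helpers are only illustrative, not part of that library:
#include <SparkFun_ADXL345.h>   // assumption: the library used in the snippet above

ADXL345 adxl = ADXL345();       // I2C constructor

// Convert a physical value into the 8-bit register count the library expects.
uint8_t tapThresholdFromG(float g)  { return (uint8_t)(g / 0.0625f + 0.5f); }           // 62.5 mg per LSB
uint8_t tapDurationFromMs(float ms) { return (uint8_t)(ms * 1000.0f / 625.0f + 0.5f); } // 625 us per LSB
uint8_t tapLatencyFromMs(float ms)  { return (uint8_t)(ms / 1.25f + 0.5f); }            // 1.25 ms per LSB

void setup() {
  adxl.powerOn();
  adxl.setTapThreshold(tapThresholdFromG(3.125f));    // -> 50
  adxl.setTapDuration(tapDurationFromMs(9.375f));     // -> 15
  adxl.setDoubleTapLatency(tapLatencyFromMs(100.0f)); // -> 80
  adxl.setDoubleTapWindow(tapLatencyFromMs(250.0f));  // -> 200
}

void loop() {}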

Related

Power BI DAX - Throughput Board Time Intelligence Metrics with different time zones

Dealing with a bit of a head scratcher. This is more of a logic based issue rather than actual Power BI code. Hoping someone can help out! Here's the scenario:
Site | Shift Num | Start Time | End Time | Daily Output
A    | 1         | 8:00 AM    | 4:00 PM  | 10000
B    | 1         | 7:00 AM    | 3:00 PM  | 12000
B    | 2         | 4:00 PM    | 2:00 AM  | 7000
C    | 1         | 6:00 AM    | 2:00 PM  | 5000
This table contains the sites as well as their respective shift times. The master table above is part of an effort to capture throughput data from each of these sites. It is connected to tables with a running log of output for each site and shift, like so:
Site | Shift Number | Output | Timestamp
A    | 1            | 2500   | 9:45 AM
A    | 1            | 4200   | 11:15 AM
A    | 1            | 5600   | 12:37 PM
A    | 1            | 7500   | 2:15 PM
So there is a one-to-many relationship between the master table and these child throughput tables. The goal is to use a gauge chart with the following metrics:
Value: Latest Throughput Value (Latest Output in Child Table)
Maximum Value: Throughput Target for the Day (Shift Target in Master Table)
Target Value: Time-dependent project target
i.e. if we are halfway through Site A's shift 1, we should be at 5000 units: (time passed in shift / total shift time) * shift output
if the shift is currently in non-working hours, then the target value = maximum value
Easy enough, but the problem we are facing is that the target value goes wrong for shifts that cross into the next day (i.e. Site B's shift 2).
The shift times are stored as date-independent time values. Here's the code for the measure to get the target value:
VAR CurrentTime = HOUR(UTCNOW()) * 60 + MINUTE(UTCNOW())
VAR ShiftStart = HOUR(MAX('mtb MasterTableUTC'[ShiftStartTimeUTC])) * 60 + MINUTE(MAX('mtb MasterTableUTC'[ShiftStartTimeUTC]))
VAR ShiftEnd = HOUR(MAX('mtb MasterTableUTC'[ShiftEndTimeUTC])) * 60 + MINUTE(MAX('mtb MasterTableUTC'[ShiftEndTimeUTC]))
VAR ShiftDiff = ShiftEnd - ShiftStart
RETURN
IF(
    CurrentTime > ShiftEnd || CurrentTime < ShiftStart,
    MAX('mtb MasterTableUTC'[OutputTarget]),
    ((CurrentTime - ShiftStart) / ShiftDiff) * MAX('mtb MasterTableUTC'[OutputTarget])
)
Basically, if the current time is outside the range of the shift, the target value should equal the total shift target, but if it is within the shift time, the target is calculated as a ratio of the time passed within the shift. This does not work for shifts that cross midnight, because the shift end time value is technically earlier than the shift start time value. Any ideas on how to modify the measure to account for these shifts?
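Not an answer in DAX, but a minimal sketch of the wrap-around arithmetic (shown in C++ purely for illustration; the same condition can be expressed with an IF in the measure): when the end time is earlier than the start time, push the end, and if necessary the current time, forward by 24 hours (1440 minutes) before comparing.
#include <cstdio>

double targetFraction(int nowMin, int startMin, int endMin) {
    const int DAY = 24 * 60;
    if (endMin < startMin) {                      // shift crosses midnight
        endMin += DAY;
        if (nowMin < startMin) nowMin += DAY;     // e.g. 01:00 still belongs to yesterday's shift
    }
    if (nowMin < startMin || nowMin > endMin)     // outside the shift: target = full output
        return 1.0;
    return double(nowMin - startMin) / double(endMin - startMin);
}

int main() {
    // Site B, shift 2 (16:00 -> 02:00), current time 23:00 -> 70% of the shift target
    std::printf("%.2f\n", targetFraction(23 * 60, 16 * 60, 2 * 60));
}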

VEML6075 UVA and UVB measuring ranges

I use a VEML6075 sensor to read UVA, UVB, and the UV index. The UV index is calculated automatically by the chip and is also sent over I2C to my MCU.
According to the following datasheets:
Vishay Datasheet
Adafruit VEML6075
Vishay Application Note
It can present the raw data for UVA and UVB. These values are presented in counts/μW/cm² and depend on the measured channels.
My problem is that I don't know the measuring range for this raw data:
uint16_t uva = my_veml6075.getUVA();
uint16_t uvb = my_veml6075.getUVB();
Does anybody have the range for these two values?
There is no upper limit. At some point your sensor will be saturated.
As the datasheet states, this sensor is meant for solar irradiation. You should of course not use focusing optics. So in the intended use case the sensor will probably not saturate.
It returns the UVA and UVB levels as a 16-bit value.
The value is returned as counts/µW/cm².
To get the UV index from an irradiance value, divide it by 25 mW/m².
The highest UV index ever measured was 43.3.
Typically the values range between 0 and 12. That's a maximum of 300 mW/m².
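For reference, a minimal sketch of that index scaling (the conversion from the raw 16-bit counts to an erythemally weighted irradiance needs the calibration coefficients from the Vishay application note and is assumed to have happened already):
#include <cstdio>

double uvIndexFromIrradiance(double erythemal_mW_per_m2) {
    return erythemal_mW_per_m2 / 25.0;   // 1 UV index step = 25 mW/m² of erythemally weighted irradiance
}

int main() {
    std::printf("UVI = %.1f\n", uvIndexFromIrradiance(300.0));   // -> 12.0, the typical upper end
}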

Unit conversions in transmission delay

I'm currently learning about transmission delay and propagation. I'm really having a tough time with the conversions. I understand how it all works, but I can't get through the converting. For example:
8000 bits / 5 Mbps (megabits per second). I have no idea how to do this conversion; I've tried looking online, but no one explains how the conversion happens. I'm supposed to get 1.6 ms, but I cannot see how that happens. I tried doing it this way, 8000 b / 5x10^6 b/s, but that gives me 1600 s.
(because that would not fit in a comment):
8000 bits = 8000 / 1000 = 8 kbit, or 8000 / 1000 / 1000 = 0.008 Mbit
(or 8000 / 1024 = 7.8 Kibit, or 8000 / 1024 / 1024 = 0.0076 Mibit,
see here: https://en.wikipedia.org/wiki/Data_rate_units)
Say you have a throughput of 5 Mbps (megabits per second); to transmit your 8000 bits, that's:
(0.008 Mbit) / (5 Mbit/s) = 0.0016 s = 1.6 ms
That is, unit-wise:
bit / (bit/s)
bit divided by bit => the bit unit disappears,
and dividing by "per second" inverts it: you get not "something per second" but seconds,
so the result unit is seconds.
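A quick numeric check of the same calculation as a tiny program:
#include <cstdio>

int main() {
    double bits = 8000.0;           // packet size in bits
    double rate = 5e6;              // link rate: 5 Mbit/s = 5 * 10^6 bit/s
    double seconds = bits / rate;   // bit / (bit/s) = s
    std::printf("%.4f s = %.1f ms\n", seconds, seconds * 1e3);   // 0.0016 s = 1.6 ms
}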

Time Division Multiplexing: Frame rate and bit rate

Can someone explain to me how to get the answer? I am so lost.
Q1. A multiplexer combines four 100-kbps channels using a time slot of 2 bits.
What is the frame rate and the bit rate?
The answer is 50,000 frames per second and 400 kbps.
Bit rate = number of channels * input bit rate
So, bit rate = 4 * 100 kbps = 400 kbps
Frame rate = input bit rate (in bps) / bits per time slot
So, frame rate = 100000 bps / 2 b = 50000 frames per second (frames/s)
4 * 100kbps = 400kbps
100kbps / 2b = 100000bps / 2b = 50000fps
Frame rate is the number of frames per second.
If we have a 400000 bits per second transmission and each frame carries 8 bits, the frame rate would be:
400000 / 8 = 50000 frames per second.
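A small sketch that just encodes the two formulas above and cross-checks them:
#include <cstdio>

int main() {
    const int channels     = 4;
    const int inputBitRate = 100000;   // 100 kbps per channel
    const int slotBits     = 2;        // time slot of 2 bits

    int bitRate   = channels * inputBitRate;   // 400000 bps
    int frameRate = inputBitRate / slotBits;   // 50000 frames per second
    int frameSize = channels * slotBits;       // 8 bits per frame

    std::printf("bit rate = %d bps, frame rate = %d fps, frame size = %d bits\n",
                bitRate, frameRate, frameSize);
    // Cross-check: frameRate * frameSize = 50000 * 8 = 400000 bps = bitRate
}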

How to efficiently convert a few bytes into an integer within a range?

I'm writing something that reads bytes (just a List<int>) from a remote random number generation source that is extremely slow. For that and my personal requirements, I want to retrieve as few bytes from the source as possible.
Now I am trying to implement a method which signature looks like:
int getRandomInteger(int min, int max)
I have two theories how I can fetch bytes from my random source, and convert them to an integer.
Approach #1 is naive. Fetch (max - min) / 256 bytes and add them up. It works, but it's going to fetch a lot of bytes from the slow random number generator source I have. For example, if I want to get a random integer between zero and a million, it's going to fetch almost 4000 bytes... that's unacceptable.
Approach #2 sounds ideal to me, but I'm unable to come up with the algorithm. It goes like this:
Lets take min: 0, max: 1000 as an example.
Calculate ceil(rangeSize / 256) which in this case is ceil(1000 / 256) = 4. Now fetch one (1) byte from the source.
Scale this one byte from the 0-255 range to 0-3 range (or 1-4) and let it determine which group we use. E.g. if the byte was 250, we would choose the 4th group (which represents the last 250 numbers, 750-1000 in our range).
Now fetch another byte and scale from 0-255 to 0-250 and let that determine the position within the group we have. So if this second byte is e.g. 120, then our final integer is 750 + 120 = 870.
In that scenario we only needed to fetch 2 bytes in total. However, it's much more complex as if our range is 0-1000000 we need several "groups".
How do I implement something like this? I'm okay with Java/C#/JavaScript code or pseudo code.
I'd also like to keep the result from losing entropy/randomness, so I'm slightly worried about scaling integers.
Unfortunately your Approach #1 is broken. For example if min is 0 and max 510, you'd add 2 bytes. There is only one way to get a 0 result: both bytes zero. The chance of this is (1/256)^2. However there are many ways to get other values, say 100 = 100+0, 99+1, 98+2... So the chance of a 100 is much larger: 101(1/256)^2.
The more-or-less standard way to do what you want is to:
Let R = max - min + 1 -- the number of possible random output values
Let N = 2^k >= mR, m>=1 -- a power of 2 at least as big as some multiple of R that you choose.
loop
b = a random integer in 0..N-1 formed from k random bits
while b >= mR -- reject b values that would bias the output
return min + floor(b/m)
This is called the method of rejection. It throws away randomly selected binary numbers that would bias the output. If max-min+1 happens to be a power of 2, then you'll have zero rejections.
If you have m=1 and max-min+1 is just one more than a biggish power of 2, then rejections will be near half. In this case you'd definitely want a bigger m.
In general, bigger m values lead to fewer rejections, but of course they require slightly more bits per number. There is a probabilistically optimal algorithm to pick m.
Some of the other solutions presented here have problems, but I'm sorry right now I don't have time to comment. Maybe in a couple of days if there is interest.
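Here is a minimal sketch of the rejection loop described above, with m = 1 for simplicity; getRandomByte() stands in for the slow remote source and is stubbed with std::random_device only so the sketch runs on its own:
#include <cstdint>
#include <cstdio>
#include <random>

uint8_t getRandomByte() {                        // stand-in for the real (slow) byte source
    static std::random_device rd;
    return (uint8_t)(rd() & 0xFF);
}

// Uniform integer in [min, max]: draw k bits where 2^k >= R, reject b >= R.
int getRandomInteger(int min, int max) {
    const uint64_t R = (uint64_t)max - (uint64_t)min + 1;   // number of possible output values

    int k = 0;                                   // smallest k with 2^k >= R
    while (((uint64_t)1 << k) < R) ++k;

    const int bytesNeeded = (k + 7) / 8;
    for (;;) {
        uint64_t b = 0;
        for (int i = 0; i < bytesNeeded; ++i)    // assemble the bytes into one value
            b = (b << 8) | getRandomByte();
        b >>= (bytesNeeded * 8 - k);             // keep exactly k random bits

        if (b < R)                               // reject values that would bias the output
            return min + (int)b;
    }
}

int main() {
    std::printf("%d\n", getRandomInteger(0, 1000));   // the 0..1000 example from the question
}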
3 bytes (together) give you a random integer in the range 0..16777215. You can use 20 bits from this value to get the range 0..1048575 and throw away values > 1000000.
For a range of 1 to r:
first find 'a' such that 256^a >= r
get 'a' bytes from the source into array A[]
num = 0
for i = 0 to len(A)-1
    num += A[i] << (8*i)   // i.e. A[i] * 256^i
next
random number = num mod range
Your random source gives you 8 random bits per call. For an integer in the range [min,max] you would need ceil(log2(max-min+1)) bits.
Assume that you can get random bytes from the source using some function:
bool RandomBuf(BYTE* pBuf, size_t nLen); // fill buffer with nLen random bytes
Now you can use the following function to generate a random value in a given range:
#include <cmath>       // ceil, log
#include <algorithm>   // std::swap
#include <limits>      // std::numeric_limits
#include <windows.h>   // BYTE, UINT, ULONGLONG

// --------------------------------------------------------------------------
// produce a uniformly-distributed integral value in range [nMin, nMax]
// T is char/BYTE/short/WORD/int/UINT/LONGLONG/ULONGLONG
template <class T> T RandU(T nMin, T nMax)
{
    static_assert(std::numeric_limits<T>::is_integer, "RandU: integral type expected");
    if (nMin > nMax)
        std::swap(nMin, nMax);
    if (0 == (T)(nMax - nMin + 1))                 // all range of type T
    {
        T nR;
        return RandomBuf((BYTE*)&nR, sizeof(T)) ? *(T*)&nR : nMin;
    }
    ULONGLONG nRange = (ULONGLONG)nMax - (ULONGLONG)nMin + 1;     // number of discrete values
    UINT nRangeBits = (UINT)ceil(log((double)nRange) / log(2.));  // bits for storing nRange discrete values
    ULONGLONG nR;
    do
    {
        if (!RandomBuf((BYTE*)&nR, sizeof(nR)))
            return nMin;
        nR = nR >> ((sizeof(nR) << 3) - nRangeBits);              // keep nRangeBits random bits
    }
    while (nR >= nRange);                                         // ensure value in range [0..nRange-1]
    return nMin + (T)nR;                                          // [nMin..nMax]
}
Since you are always getting a multiple of 8 bits, you can save the extra bits between calls (for example, you may need only 9 bits out of 16). It requires some bit manipulation, and it is up to you to decide if it is worth the effort.
You can save even more if you use 'half bits': let's assume that you want to generate numbers in the range [1..5]. You'll need log2(5)=2.32 bits for each random value. Using 32 random bits you can actually generate floor(32/2.32) = 13 random values in this range, though it requires some additional effort.
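A minimal sketch of the "save the extra bits" idea; the byte source is again stubbed with std::random_device, and the BitReservoir class is purely illustrative:
#include <cstdint>
#include <cstdio>
#include <random>

static uint8_t randomByte() {            // stand-in for the real (slow) source
    static std::random_device rd;
    return (uint8_t)(rd() & 0xFF);
}

class BitReservoir {
    uint64_t bits_  = 0;                 // buffered random bits (the low count_ bits are valid)
    int      count_ = 0;
public:
    uint32_t take(int n) {               // return n (1..32) random bits
        while (count_ < n) {             // top the reservoir up one byte at a time
            bits_  = (bits_ << 8) | randomByte();
            count_ += 8;
        }
        count_ -= n;
        uint64_t mask = (1ull << n) - 1;
        return (uint32_t)((bits_ >> count_) & mask);
    }
};

int main() {
    BitReservoir res;
    // Nine bits per value: five values consume 45 bits, i.e. only 6 bytes instead of 10.
    for (int i = 0; i < 5; ++i)
        std::printf("%u\n", res.take(9));
}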
