I am working on a project where I need to use fixed-point math, and I cannot figure out why the numbers are "rolling over". I was only able to get a large enough range when I changed the shift amount from 16 to 8 and finally to 4. Here is the code I am using at present:
#define SHIFT_AMOUNT 8
#define SHIFT_MASK ((1 << SHIFT_AMOUNT) - 1)
#define FIXED_ONE (1 << SHIFT_AMOUNT)
#define INT2FIXED(x) ((x) << SHIFT_AMOUNT)
#define FLOAT2FIXED(x) ((int)((x) * (1 << SHIFT_AMOUNT)))
#define FIXED2INT(x) ((x) >> SHIFT_AMOUNT)
#define FIXED2FLOAT(x) (((float)(x)) / (1 << SHIFT_AMOUNT))
int32_t test = FLOAT2FIXED(1.00);
void setup()
{
Serial.begin(57600);
}
void loop(){
test += FLOAT2FIXED(1.00);
Serial.println(FIXED2FLOAT(test));
}
And the output:
1
2
3
...
127
-128
-127
-126
When SHIFT_AMOUNT = 8 I am only able to store values from -128 to 127, but since I am using a 32-bit variable, shouldn't a 16-bit shift move the decimal point to the "middle", leaving two 16-bit sections, one for the whole number and the other for the fraction? Shouldn't the whole range of the int32_t be −2,147,483,648 to 2,147,483,647 with the shift at 16? Is there a setting that I am missing, or am I just way off on how this works?
If SHIFT_AMOUNT = 4 I get the range that I need, but this doesn't seem right, since all the other examples that I have seen online use a 16-bit shift.
Here is a link showing what I am looking to do
EDIT
If I have this correctly, shifting by 8 bits when using a 16-bit-wide type leaves 8 bits for the whole part and 8 for the fractional part, giving a whole-number range of -128 to 127. Hence the need for the 4-bit shift, which increases the whole-part range to -2,048 to 2,047. Is this correct? If so, is the int32_t not truly 32 bits wide?
EDIT2
Patrick Trentin pointed out where I was going wrong. Everything was correct except for the part I copied from the linked question: I was casting to an int, not an int32_t. The int type is 16 bits wide on this board, hence having to use a shift of 4 to get the range I needed.
Change this:
#define FLOAT2FIXED(x) ((int)((x) * (1 << SHIFT_AMOUNT)))
into this:
#define FLOAT2FIXED(x) ((int32_t)((x) * (1 << SHIFT_AMOUNT)))
Rationale: int is 16 bits wide on an Arduino Uno (see the documentation), so the cast caps the values that you are storing in your int32_t variable to 16 bits.
EDIT:
The fact that int16_t is an alias for signed int (i.e. for int) can be corroborated either by looking at the online documentation or at the contents of the file
arduino-version/hardware/tools/avr/lib/avr/include/stdint.h
among the Arduino Uno sources:
/** \ingroup avr_stdint
16-bit signed type. */
typedef signed int int16_t;
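For completeness, here is a sketch of the original macros with the cast fixed. The 1L constants are my own addition, beyond the fix above: with a 16-bit int, 1 << 16 is already undefined before any cast happens, so the shift constants themselves have to be long once SHIFT_AMOUNT is 16.
#define SHIFT_AMOUNT 16
#define SHIFT_MASK ((1L << SHIFT_AMOUNT) - 1)
#define FIXED_ONE (1L << SHIFT_AMOUNT)
#define INT2FIXED(x) ((int32_t)(x) << SHIFT_AMOUNT)
#define FLOAT2FIXED(x) ((int32_t)((x) * (1L << SHIFT_AMOUNT)))
#define FIXED2INT(x) ((x) >> SHIFT_AMOUNT)
#define FIXED2FLOAT(x) (((float)(x)) / (1L << SHIFT_AMOUNT))

int32_t test = FLOAT2FIXED(1.00);

void setup() {
  Serial.begin(57600);
}

void loop() {
  test += FLOAT2FIXED(1.00);
  Serial.println(FIXED2FLOAT(test));  // now keeps counting past 127
}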
Related
I am writing code for an Arduino Nano and I am experiencing this weird behaviour:
#define GREAT (60 * 60000)
#define STRANGE (60 * 6000)
#define ZERO_X (60 * 1000)
void setup() {
Serial.begin(115200);
Serial.println(GREAT); // Prints 3600000, that's correct
Serial.println(STRANGE); // Prints 32320, that's wrong
long zerox = ZERO_X;
Serial.println(zerox); // Prints -5536, that's also wrong, obviously
}
void loop() {}
What is going on?
I am using MSVS2019 Community with vMicro.
You use integer literals to define your values, and as described in the documentation, the type of a literal depends on where the value can fit. According to the specs:
On the Arduino Uno (and other ATmega based boards) an int stores a 16-bit (2-byte) value.
(emphasis is mine) The Arduino Nano's CPU has a 2-byte int, so 60, 6000 and 1000 all fit in a signed int and that type is used. However, neither 60 * 6000 nor 60 * 1000 fits in a 2-byte int, so you get integer overflow, which is undefined behaviour, and unexpected values.
On the other hand, 60000 does not fit into a 2-byte signed int, so it gets the 4-byte type long, and 60000 * 60 fits there, so you get the expected result. To fix your problem you can just specify suffixes:
#define GREAT (60 * 60000L)
#define STRANGE (60 * 6000L)
#define ZERO_X (60 * 1000L)
and force them all to be of type long. It is not strictly necessary for 60000, but it is better to have it for consistency.
As for this change in your code:
long zerox = ZERO_X;
this line does not solve the issue, as after macro substitution it is equivalent to:
long zerox = (60 * 1000);
and that does not help: the calculation on the right-hand side of the initialization is done first, in type int, so the overflow happens before the result is promoted to long. To fix it you need to convert one of the operands to long:
long zerox = 60 * static_cast<long>(1000);
or use suffix as suggested before.
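Putting it together, a minimal corrected sketch (my own assembly of the pieces above, assuming the Arduino Nano target from the question) would be:
#define GREAT (60 * 60000L)
#define STRANGE (60 * 6000L)
#define ZERO_X (60 * 1000L)

void setup() {
  Serial.begin(115200);
  Serial.println(GREAT);   // 3600000
  Serial.println(STRANGE); // 360000
  long zerox = ZERO_X;
  Serial.println(zerox);   // 60000
}

void loop() {}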
From what I understand, the range of QModbusDataUnit::InputRegisters values is 0-65535, i.e. an unsigned short.
The way to read one unit of input registers is as follows:
QModbusDataUnit readUnit(QModbusDataUnit::InputRegisters, 40006, 1);
The value will then be in the reply, i.e.: int value = result.value(0);
My question is: what if I have to read an unsigned int value, which has the much larger range of 0 to 4,294,967,295?
How can I retrieve that value?
As you stated, Modbus input registers are 16 bit unsigned integers. So without some type of conversion they are limited to the range: 0 - 65535. For 32-bit unsigned values it is typical (in Modbus) to combine two registers.
For example, the high 16-bits could be stored at 40006 and the low 16-bits at 40007.
So, if you were reading the value 2271560481 (0x87654321 hex), you would read 34661 (0x8765 hex) from address 40006 and 17185 (0x4321 hex) from address 40007. You would then combine them to get the actual value.
I don't know the Qt Modbus code, but expanding on your example code you can probably read both values at the same time by doing something like this:
QModbusDataUnit readUnit(QModbusDataUnit::InputRegisters, 40006, 2);
and combine them
quint32 value = result.value(0);
value = (value << 16) | result.value(1);
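Expanding on that, a small helper might look like this; the name combineRegisters is purely illustrative and not part of the Qt API:
#include <QtGlobal>

// Combine two 16-bit Modbus registers (high word first) into a 32-bit value.
quint32 combineRegisters(quint16 high, quint16 low)
{
    return (quint32(high) << 16) | low;
}

// e.g. combineRegisters(0x8765, 0x4321) == 0x87654321 == 2271560481
Note that whether the high word sits at the lower or the higher register address depends on the device, so check its register map.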
The documentation of
QImage::QImage(uchar *data, int width, int height, Format format, QImageCleanupFunction cleanupFunction = Q_NULLPTR, void *cleanupInfo = Q_NULLPTR)
says that the data referred to by the parameter 'data' must be 32-bit aligned. http://doc.qt.io/qt-5/qimage.html#QImage-3 But it is at least unclear what exactly is meant. I assumed each pixel takes 32 bits, but that is not the case. Constructing an image like this works:
uint8_t* rgb = new uint8_t[3 * height * width];
QImage Img(rgb, width, height, QImage::Format_RGB888);
But this is confusing. When I want to get the pixel values from the image, I thought I needed to do this (since the data is 32-bit aligned and QRgb is 32 bits):
QRgb* rawPixelData = (QRgb*) Img.bits();
for(uint32_t i = 0; i < uint32_t(Img.width() * Img.height()); ++i)
{
qDebug() << "Red" << qRed(rawPixelData[i]);
qDebug() << "Green" << qGreen(rawPixelData[i]);
qDebug() << "Blue" << qBlue(rawPixelData[i]);
}
But this does not work (it leads to a crash). So I assume the data is not 32-bit aligned. Is the data not 32-bit aligned, or am I misunderstanding something?
I assume that by the "data" they mean the array of bytes used, and by alignment they mean that the first byte of the array is 32-bit aligned, i.e. that data % 4 is always 0. It is not the internal alignment of every pixel, just the alignment of the memory block that contains the pixel data.
Furthermore, bits() returns a pointer to an unsigned byte, not a pointer to a QRgb. A QRgb is essentially just an integer:
typedef unsigned int QRgb;
I suspect you are getting a crash because the raw data is "compacted": if your image has only RGB and no alpha, it uses only 24 bits, or 3 bytes, per pixel, because that avoids a 25% memory overhead. As a result, you are walking off the end of the actual data and getting a crash.
You should instead iterate over it as w * h * 3 unsigned chars, incrementing by 3 for each pixel; the R, G and B values are then the bytes at i, i+1 and i+2 respectively (see the sketch after the byteCount example below).
It could probably work if your image format was RGBA.
And indeed, if you check byteCount() you will see that the number of bytes used internally is the minimum required for a given format:
QImage img(100, 100, QImage::Format_RGB888);
qDebug() << img.byteCount();  // 30000, i.e. 3 bytes (24 bits) per pixel
QImage img2(100, 100, QImage::Format_RGB555);
qDebug() << img2.byteCount(); // 20000, i.e. 2 bytes (15 bits used) per pixel
QImage img3(100, 100, QImage::Format_RGBA8888);
qDebug() << img3.byteCount(); // 40000, i.e. 4 bytes (32 bits) per pixel
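For completeness, here is a sketch of the byte-wise iteration suggested above. Going row by row via constScanLine() is my own precaution against formats whose scanlines are padded; a flat w * h * 3 loop also works for RGB888 at this width.
#include <QImage>
#include <QDebug>

// Assumes QImage::Format_RGB888: 3 bytes per pixel, stored in R, G, B order.
void dumpRgb888(const QImage &img)
{
    for (int y = 0; y < img.height(); ++y) {
        const uchar *line = img.constScanLine(y);   // first byte of this row
        for (int x = 0; x < img.width(); ++x) {
            const uchar *px = line + 3 * x;
            qDebug() << "Red" << int(px[0]) << "Green" << int(px[1]) << "Blue" << int(px[2]);
        }
    }
}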
But it's at least unclear what is meant exactly.
The expression is part of the software engineering vernacular and has nothing to do with the specific situation at hand: it doesn't have anything to do with Qt nor images nor pixels.
On platforms where Qt is supported, it has the following strict meaning:
uchar *data = ...;
Q_ASSERT((reinterpret_cast<uintptr_t>(data) & 3) == 0);
Or, on an arbitrary C++17 platform, it has the following strict meaning:
size_t size = ...;
uchar *data = ...;
Q_ASSERT(std::align(4, size, reinterpret_cast<void*&>(data), size) ==
reinterpret_cast<void*>(data));
I have a simple circuit set up to read the light level via an LDR into an Arduino. I'm trying to apply a simple low-pass filter to the data read in. What is the best way to tackle this, given that analogRead() returns an unsigned int?
I have tried to implement a simple fixed point representation but am unsure if this is the correct approach.
Here's a code snippet:
#define WLPF 0.1
#define FIXED_SHIFT 4
int ldr_val = ((int)analogRead(A0)) << FIXED_SHIFT;
while (true) {
int newval = (int)analogRead(A0) << FIXED_SHIFT;
ldr_val += WLPF*(newval - ldr_val);
Serial.println(ldr_val >> FIXED_SHIFT, DEC);
}
Note the resolution of the ADC is 10 bits and I am working with an 8-bit Arduino Micro.
I'm paraphrasing from the book "Musical Applications of Microprocessors" by Hal Chamberlin, page 438:
If you allow large numbers in the accumulator, then you can make a first-order low-pass filter with one multiplication and some right-shifts.
out = accum >> k
accum = accum - out + in
Choose 'k' to change the cutoff frequency. The more shifts, the lower the low-pass cutoff, but the larger the value in the accumulator. With a 10-bit value from analogRead(), you can easily right-shift 4 places and still have 2 bits of headroom in the accumulator (as #datafiddler noted above).
Cypress has some app-notes for their PSOC chips with similar equations, and using shifts. I remember one had a nice table that related number of shifts to the cutoff frequency.
The approximate cutoff frequency is the sampling frequency divided by 2-pi times the gain factor:
f0 ~ fs / (2 pi a)
where 'a' is that power of two.
Keep smoothin' those signals!
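For illustration, a minimal Arduino sketch of that accumulator filter might look like this (k = 4 and the serial setup are example choices of mine, not from the book):
#define K 4                   // more shifts -> lower cutoff, larger accumulator value

long accum = 0;               // accumulator holds roughly the input scaled by 2^K

void setup() {
  Serial.begin(57600);
}

void loop() {
  int sample = analogRead(A0);   // 10-bit reading, 0..1023
  long out = accum >> K;         // filtered output, back at input scale
  accum = accum - out + sample;  // the two lines quoted from the book
  Serial.println(out, DEC);
}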
On a device with no FPU, rather than multiplying by 0.1 (which in any case makes this a floating-point, not fixed-point, implementation), you should divide by 10:
#define WLPF_DIV 10
...
ldr_val += (newval - ldr_val) / WLPF_DIV;
However, division on an 8-bit processor is often expensive (although probably dwarfed by the execution time of Serial.println() in the loop, but that is a different issue). It is more efficient to select a power of two so that the division can be performed with a right-shift.
#define WLPF_SHIFT 3 // divide by 8
...
ldr_val += (newval - ldr_val) >> WLPF_SHIFT ;
The use of signed int is problematic, since right-shifting a negative signed value is implementation-defined rather than portable. In this case that can be resolved by changing the code to:
#define WLPF_DIV 8
...
ldr_val += (newval - ldr_val) / WLPF_DIV ;
The compiler will most likely spot the power-of-two constant and generate the code using an arithmetic-shift-right in any case. However you would probably do better to reconsider the data type.
You still have a right-shift in the Serial.println() call, but that too can be replaced with a divide-by-16:
#define WLPF_DIV 8
#define FIXED_MUL 16
int ldr_val = (int)analogRead(A0) * FIXED_MUL ;
for(;;)
{
int newval = (int)analogRead(A0) * FIXED_MUL ;
ldr_val += (newval - ldr_val) / WLPF_DIV ;
Serial.println(ldr_val / FIXED_MUL, DEC);
}
Outputting the data over serial on every sample makes the timing non-deterministic and will dominate the loop in any case, so you have little control over the sampling rate; as a result the frequency response will be neither accurate nor stable. It also makes the previous performance optimisations rather pointless. You may want to think about that if it matters in your application, but that is a different question.
Stick with integer arithmetic:
#define WLPF 9
filtered = ((long)filtered * WLPF + newValue) / (WLPF + 1);
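As a usage sketch of that one-liner (my own framing; WLPF = 9 weights the old value by 9/10, roughly matching the 0.1 coefficient above):
#define WLPF 9                  // old value gets weight WLPF/(WLPF+1) = 0.9

int filtered = 0;

void setup() {
  Serial.begin(57600);
  filtered = analogRead(A0);    // seed the filter with the first reading
}

void loop() {
  int newValue = analogRead(A0);
  filtered = ((long)filtered * WLPF + newValue) / (WLPF + 1);
  Serial.println(filtered, DEC);
}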
I have a small 3.5-channel USeries helicopter controlled by an IR remote control. Using an Arduino I have decoded its 32-bit protocol, except for the last 3 bits, which appear to be some form of checksum. Since I have successfully decoded the channels from the remote, in that they track their corresponding controls, I can see that slight changes in the controls yield specific changes in the 3 bits that are very reproducible and deterministic. However, I have not yet found a common theme or formula that reproduces the supposed checksum. I have tried simple things like parity and an additive checksum. I can see the effects of changing specific bits on the checksum, but when I combine the changes they don't simply add up to the 3-bit value.
struct Useries // bit structure received from 32-bit IR command
{
unsigned cksum : 3; // 0..2
unsigned Rbutton : 1; // 3
unsigned Lbutton : 1; // 4
unsigned Turbo : 1; // 5
unsigned Channel : 2; // 6,7
unsigned Trim : 6; // 8..13
unsigned Yaw : 5; // 14..18
unsigned Pitch : 6; // 19..24
unsigned Throttle : 7; // 25..31
};
So the question is: "How can I determine the formula for the checksum?", or whatever it is, so as to program a recreation of it.
As it appears deterministic, one should be able to take the recorded output of cksum and the other 29 bits and derive a formula for it, much like PLD logic. While the stimulus space is 2^29 (about 537 million possibilities) versus an output of only 2^3 = 8 values, I suspect even a small sample of 1% or less would be enough to pin down the formula.
Another way is to look at it as a crypto problem in which the 3-bit cksum is a hash.
Either way, any methods or guidance toward determining the solution are greatly appreciated.
Here is sample data
FYI - the USeries is not the Syma. The Syma's decode does not have a cksum. Once I have the USeries checksum determined I will open-source them as a fork of Ken Shirriff's code.
Just FYI
struct SymaR5 // bit structure received from 32-bit IR command
{
unsigned Trim : 8; // 0..7 0x7F
unsigned Throttle : 7; // 8..15 0x7F
unsigned Channel : 1; // 16 0x01
unsigned Pitch : 8; // 17..24 0x7F
unsigned Yaw : 8; // 25..31 0x7F
};
A quick check on parity masks results in seven masks that always give parity zero on your data. (Two of your bits are always the same, so I made an assumption about regularity in the mask to eliminate some contenders.) The masks are:
0x2e5cb972
0x5cb972e5
0x72e5cb97
0x972e5cb9
0xb972e5cb
0xcb972e5c
0xe5cb972e
Any of these masks anded with any of your data values (all 32 bits) results in parity zero. Three can be considered special, since each of your identified parity bits occurs just once respectively in those three (the ones ending in 2, 9, and c). So those three masks without the last three bits can be used to get each of the parity bits.
The mask repeats these seven bits: 0010111. This C code uses shifts and exclusive-ors to apply the mask and parity calculation:
p = x;
while ((x >>= 7) != 0)
p ^= x;
p = (p ^ (p >> 1) ^ (p >> 2) ^ (p >> 4)) & 7;
where x and p are 32-bit unsigned types. x is the 32 bits received. If p is zero when done, then the received value is good.
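Wrapped up as a self-contained C function (the name useries_checksum_ok is just illustrative):
#include <stdint.h>

/* Returns nonzero if the received 32-bit word passes the repeating-0010111
   parity check described above, i.e. if p ends up zero. */
int useries_checksum_ok(uint32_t x)
{
    uint32_t p = x;
    while ((x >>= 7) != 0)    /* fold the word onto itself, 7 bits at a time */
        p ^= x;
    p = (p ^ (p >> 1) ^ (p >> 2) ^ (p >> 4)) & 7;   /* collapse the fold into 3 parity bits */
    return p == 0;
}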